Failure to replicate as an opportunity for learning


I am currently reading Kuhn’s “The Structure of Scientific Revolutions” and often find myself travelling back in time to a family dinner at home.

Many years ago, while still an undergrad in Argentina, I returned home from a tiring day in the lab complaining that the experiment ‘hadn’t worked’. My dad looked at me dismissively and said:

“The experiment worked, you just don’t know what variables you didn’t control”.

My dad is not a scientist, but he is an avid reader of popular science books. He had studied a couple of years of chemistry before leaving university to start his own business, so it was hard not to attack him with my mashed potatoes. But his remark was probably the most important lesson I learned in my entire science career:

Failure to replicate exposes those unknown/unthought-of variables that determine the result of a given experiment.

As a PhD student later on, I was expected to replicate previous findings before moving on with my own work that built on those findings. In many cases I replicated successfully; in others I didn’t. I even had to replicate work from within the lab. When I failed, we uncovered nuances in how each of us was actually ‘doing’ the work. In some cases, replicability came down to nuances that were not written down in the lab protocols and recipes. But in all cases we learned something from those failures.

I expect the same from myself and my students, though they (and many of my colleagues) find that re-doing what has been done is a waste of time. I don’t. And here is why:

Let’s say that someone described the expression pattern of a protein in the brain of species A and I want to see whether the expression pattern is the same in the brain of species B. I follow the same protocol and find a difference between the two species. Now, how do I decide whether that difference is a species-specific difference or the result of something I am doing differently from the original authors that I did not account for? Well, the only way of knowing is by trying to replicate the original findings in species A. If I can replicate them, then I can more confidently argue that it is a species-specific difference (at least with respect to that specific protocol). If I can’t, then I have further defined the boundaries within which those original findings are valid. Win-win.

[Image: “DO NOT DUPLICATE” by Sam UL, CC BY-NC-SA, on Flickr]

This brings up another reaction to the results of experiments: how hard do we work at trying to get an experiment to ‘work’ when we expect it won’t? For example: if I expect (based on the published literature) that a protein is not expressed in a particular brain region, I may accept a negative result quite quickly. But if I did not have this pre-knowledge, I might go through many different attempts before I am convinced it is not there. So the readiness with which we accept a negative or positive result is influenced by that pre-knowledge. But how deeply do we go into that published literature to examine how well justified that “pre-knowledge” is? As I go through the literature I often find manuscripts making claims where I would like to see how the inter-lab or inter-individual variability has been accounted for, or at least considered, or what it took to accept a positive or negative result.

Every now and then I find someone in my lab who can’t replicate my own results. I welcome that. In the majority of cases we can easily identify what the variable is; in others we uncover something we had not thought of that may be influencing our results. After all, one can only control the variables that one thinks of a priori. How is one to control for variables one does not think of? Well, those will become obvious when someone fails to replicate.

So why are scientists so reactive to these failures to replicate? After all, it is quite likely that the group failing to replicate also did not think of those variables until they got their results. A few months ago PLOS, FigShare and Science Exchange launched the Reproducibility Initiative which, as they say, will help correct the literature out there, but which I think will also better define the conditions that make an experiment work one way or another.

So, back to my dad. All experiments work, even those that give us an unexpected result. What I learned from my dad is that being a good scientist is not about dismissing “bad experiments” and discarding the results, but about looking deeper into what variables might have led to a different result. In many cases it might be a bad chemical batch; in others it might uncover a crucial variable that defines the boundaries of validity of a result.

I call that progress.

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: The University of Chicago Press.
*** Update 16/11/12: I noticed that I had mistakenly used an image with full copyright, despite having narrowed my original search to CC-licensed content. I apologise for this oversight. I have now removed the original image and replaced it with one with a suitable licence.

 
