
The XV Collection: Ethical Oversights in Ethical Oversight of Animal Research


By Jonathan Kimmelman


Sometimes the life sciences work fantastically, as when insights into fundamental processes are transformed into life-saving treatments. Other times the scientific process flops: false claims take on a life of their own [1][2], or ineffective treatments are advanced into drug development [3] or clinical care [4]. A key to improving the balance of successes versus failures is the systematic investigation of how science works, a line of inquiry known as meta-research. PLOS Biology is the only general life science journal to offer strong support for this project by devoting a dedicated section to meta-research reports.


Among the many excellent meta-research articles published by PLOS Biology, I’ve chosen to highlight Vogt et al. [5] for the XV Collection. It stands out for offering a glimpse of research evaluation processes that are all but inaccessible to systematic analysis because they typically occur behind the closed doors of institutional review panels.


First, some background: since roughly the 1970s, various government authorities have required that research proposing to use nonhuman animals undergo an independent review and approval process before it is conducted. Such review processes stem from the ethical sensitivities surrounding experiments that use nonhuman animals as their research reagents. Unlike rocks, chemicals in the Sigma catalogue, or subatomic particles, animals have a capacity for suffering, and their interests must be protected. Animal care committees are charged with making an independent judgement about whether nonhuman animal studies are morally justified. Yet the documentation submitted to these committees is deemed confidential, and little is known about precisely what factors drive those judgements.


In Vogt et al., the authors accessed almost 1,300 applications to conduct animal experiments in Switzerland, a jurisdiction where reviewers on animal care committees are instructed to weigh an experiment’s harms against its potential benefits. Outside Switzerland, access to such protocols is almost impossible; indeed, researchers often aren’t even required to provide a detailed research protocol.


The authors then examined the extent to which these application protocols described the implementation of practices aimed at addressing threats to the internal validity of experiments. These practices are important safeguards against faulty experimental design and researcher bias. Vogt et al. found that few applications stated an intention to use procedures like randomization or concealed allocation. Such practices were only slightly more common for studies involving “higher” nonhuman animal species like cats, dogs, and primates, and even there they were uncommon: fewer than 20% of such studies, for example, proposed a blinded study design. The authors conclude that animal care committees in Switzerland approve experiments based not on an appraisal of the scientific methods but rather on confidence, rooted perhaps in the scientific bona fides of researchers or sponsors.
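
For readers unfamiliar with these safeguards, the following is a minimal sketch, in Python, of what randomization with blinded (coded) outcome assessment might look like at the design stage. It is purely illustrative: the animal IDs, group sizes, seed, and labels are hypothetical and are not drawn from Vogt et al.

```python
import random

# Illustrative sketch only (hypothetical names and numbers): a
# randomized two-arm animal experiment with coded group labels.
animal_ids = [f"mouse-{i:02d}" for i in range(1, 21)]  # 20 animals
arms = ("treatment", "control")

# Randomization: shuffle the animals, then alternate arms so that
# group sizes stay balanced (10 per arm). A real protocol would
# record the seed for auditability.
rng = random.Random(2016)
shuffled = rng.sample(animal_ids, k=len(animal_ids))
allocation = {aid: arms[i % 2] for i, aid in enumerate(shuffled)}

# Blinding: outcome assessors see only opaque group codes; the
# code-to-arm key is held by a third party until analysis is locked.
code_for_arm = {"treatment": "group A", "control": "group B"}
blinded_labels = {aid: code_for_arm[arm] for aid, arm in allocation.items()}

# Assessors work from blinded_labels; code_for_arm stays sealed.
for aid in sorted(blinded_labels)[:3]:
    print(aid, blinded_labels[aid])
```

The details are unimportant; the point is that safeguards like these are simple enough to describe in an application protocol, and Vogt et al. found that few applications did so.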


Ethical review boards have to assess numerous highly technical scientific proposals, but their own judgements do not appear to be based on sound science or ethics.

Regrettably, the lack of rigorous evaluation of proposed research seems to extend to human research ethics committees as well. In 2018, PLOS Biology published an article by Wieschowski et al. examining the preclinical justification for 106 early-phase clinical trial protocols submitted to institutional review boards (IRBs) at German institutions [6] (disclosure: I am a middle author on this publication). The report found that 17% of protocols did not cite any preclinical efficacy studies. Those protocols that did cite preclinical efficacy studies offered scant information on the extent to which such studies had addressed various threats to clinical generalizability. One logical implication of this report is that, as with the nonhuman animal studies in Vogt et al., ethics review committees approve early-phase trials without a clear appraisal of their evidentiary grounding. Instead, they rely on confidence in the researchers, sponsors, or perhaps other regulatory processes.


Scores of studies have previously documented deficiencies in the methods described in preclinical study publications. What makes Vogt et al. stand out from these other studies, however, is that it documents such deficiencies farther upstream, at the point where studies are designed and reviewed.


Vogt et al. (and Wieschowski et al., too) has other, more profound implications. Nonhuman animal and human experiments may look as if they are conceived of and designed entirely by scientists and research sponsors. Yet IRBs and animal care committees, far from mere bureaucratic afterthoughts, play a critical role in shaping what questions are asked in research and how such questions are resolved. Among other things, such committees grant scientists the moral license to pursue research that might otherwise be deemed inhumane or unethical. In so doing, they signal to scientists and others which research practices are proper and which are not, and scientists who want to get their protocols approved soon learn to internalize these norms.


Yet such committees process large volumes of highly technical research protocols and must rely on heuristics for assessing the relationship between a study’s burden and its value. Whether that heuristic is “confidence” (as alleged by Vogt et al.) or precedent, it’s hard to avoid concluding that many aspects of ethical review in life science research contradict the spirit of independent, systematic, and unbiased risk/benefit analysis enshrined in various policy documents. If the life sciences suffer from an excess of unreproducible findings, ethical oversight (in both senses of the term) is partly to blame.


References

1. Greenberg SA (2009) How citation distortions create unfounded authority: analysis of a citation network. BMJ 339: b2680. https://doi.org/10.1136/bmj.b2680

2. Sipp D, Robey PG, Turner L (2018) Clear up this stem-cell mess. Nature 561: 455-457. https://doi.org/10.1038/d41586-018-06756-9

3. Fingleton B (2003) Matrix metalloproteinase inhibitors for cancer therapy: the current situation and future prospects. Expert Opinion on Therapeutic Targets 7(3): 385-397. https://doi.org/10.1517/14728222.7.3.385

4. Turner L, Knoepfler P (2016) Selling Stem Cells in the USA: Assessing the Direct-to-Consumer Industry. Cell Stem Cell 19(2): 154-157. https://doi.org/10.1016/j.stem.2016.06.007

5. Vogt L, Reichlin TS, Nathues C, Würbel H (2016) Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor. PLOS Biology 14(12): e2000598. https://doi.org/10.1371/journal.pbio.2000598

6. Wieschowski S, Chin WWL, Federico C, Sievers S, Kimmelman J, et al. (2018) Preclinical efficacy studies in investigator brochures: Do they enable risk–benefit assessment? PLOS Biology 16(4): e2004879. https://doi.org/10.1371/journal.pbio.2004879


Jonathan Kimmelman is the Director of the Biomedical Ethics Unit in the Department of Social Studies of Medicine at McGill University, and is a member of the PLOS Biology Editorial Board.


This blog post is the last in a series of twelve, forming PLOS Biology’s XV Collection, celebrating 15 years of outstanding open science; read Lauren Richardson’s blog for more information.


Featured image credit: Maggie Bartlett, NHGRI

Jonathan Kimmelman image credit: Robert Streiffer
