

Silence: Everyday Betrayals of Research Participants

 

Cancer patients at holistic centre "are more likely to die"
Headline, The Times, London, September 1990.

 

Many of them were watching the evening TV news on the BBC, with no idea of the blow that was about to hit. They were cancer patients at a center in Bristol, and participating in a study. The researchers had reported in the Lancet that the patients were 3 times as likely to have their cancer return, and twice as likely to die. There was a press release and a press conference – but no communication with the participants. Liz Hunt interviewed some of them:

Isla Bourke, one of the women in the study, remembers wandering around the BBC newsroom where she worked in a daze. “Everyone knew I had gone to Bristol, they were looking at me, wondering what I had done to myself. I felt so terrible”.

Vicky Harris, another of the study cases, was travelling home on the tube when she caught a glimpse of the headlines in someone’s evening newspaper. She learned that she was twice as likely to die of breast cancer just because she had been to Bristol. “I actually wondered whether I would make it home”, she recalled.

Another of the women, Heather Goodare, has written about the support and advocacy group some of them formed, protesting and pushing for answers. It turned out that the sensational conclusions of what was only an interim report didn’t even hold up under scrutiny.

That was in 1990. The callousness of the episode shocked me. That shock grew when I realized that most researchers weren’t communicating with people in their studies once data collection was over.

Recently, Kate Elzinga and her colleagues reported on their survey of people with cancer who had participated in trials at a cancer center in Canada. Although 9 out of 10 felt strongly they had a right to know what happened,

Unfortunately, very few patients (8%) recalled having received the results for a study they participated in, and of these patients, less than half fully understood the results they were given.

In a UK survey (2011), too, only 9% of people with cancer participating in trials had received results. We know that there are lots of problems in communication with study participants, but are things really that bad? That’s an enormous gulf between what people think is right and what’s happening in practice, in some places at least.

 

[Cartoon about sharing study results]

 

Even communication with research participants at the informed consent stage isn’t as effective as it should be, despite the amount of emphasis it gets. Natalie Armstrong and colleagues wrote that “decades of research” show that information leaflets for potential participants “usually disappoint in delivering on their aims”. An informed consent document functions as a contract more than as a method of informing people, they concluded.

One-on-one discussion can fare better, but best practice isn’t the everyday reality. Some studies even find that the majority of participants didn’t really understand what they had gotten into. Substantial numbers of participants don’t understand their rights – that they can withdraw from the study, for example, or even, sometimes, that they had a choice about participating or not in the first place.

What about the end of the study? Are findings like those – that less than 10% of people are informed of results – typical? It’s hard to know, but it’s clear this remains a major problem. David Shalowitz and Franklin Miller published a review in 2008. The proportion of people who wanted the opportunity to hear about results varied – from 20% to 100% – but:

Communicating research results seems to represent the exception to practice. For example, only five of 150 institutions surveyed by Fernandez et al. had a formal mechanism for returning research results to participants [12], and only 3% of 180 consent forms for leukemia clinical trials indicated that participants could receive study results [13]. Furthermore, only nine of 22 Canadian REBs surveyed had policies addressing communication of results or required investigators to address the issue themselves [29].

Communicating results is usual, though, in community participatory research – as you would expect, if it genuinely fits the genre [PDF]. I couldn’t find data on the practices of drug companies sponsoring research, although I did find some examples prepared by companies at the Center for Information and Study on Clinical Research Participation (CISCRP).

Some public funding agencies require or expect results to be shared with patients, in some contexts at least, but I couldn’t find data on adherence to these policies. Some focus on the issue of whether or not to inform people of their individual results, including in genetic research.

 

[Illustration: Research participants' information rights – before, during and after]

 

Back in the 1990s, we pushed hard for publishing results, and communicating them to participants, to be part of Australia’s human research ethics requirements. And both were included from the first national guidelines in 1999. The Australian position has been strengthened since, going from “Normally, research results should be made available to research participants”, to “Research outcomes should be made accessible to research participants in a way that is timely and clear” (2015).

Soon after, I chaired a joint working party on consumer and community participation in research, between the National Health and Medical Research Council and the Consumers’ Health Forum of Australia. That resulted in a national statement in 2001, with a few strongly worded pages on sharing research results (pages 21-23) [PDF]. It’s part of the routine information for grant applicants, and there are supporting documents too. The recent 2016 version, though, has watered this down to less than what the ethics guidelines require. That’s both disappointing and odd.

After the 5th revision of the Declaration of Helsinki in 2000 – the successor to the Nuremberg ethics requirements for human research – this issue was one of the continuing weaknesses of the code that I criticized. It still hadn’t been added in the 6th revision (2008 [PDF]). It took until the 2013 revision for this to be added:

All medical research subjects should be given the option of being informed about the general outcome and results of the study.

From Helsinki, it started to trickle through elsewhere. I’m not sure whether that’s what spurred others to start requiring that participants be informed, or whether it’s an idea whose time was starting to come. It’s still patchy, though. For example, the NHS Health Research Authority (HRA) in England requires informing clinical trial participants of results – but not in early phase trials.

And of course, clinical trials are not the only research with human participants. What’s happening more broadly is even less clear.

It’s worrying that we know so little about what’s going on. I started this post with an example of distressing research findings. There are enough studies out there now showing that sharing results directly with participants can cause distress too. Here are quotes from interviewees in a study by Carolyn Tarrant and colleagues (2015), from a trial in pregnant women that showed harm from the intervention being tested:

I assumed that they fully expected it to help and it would I suppose, otherwise I wouldn’t have said yes.

When we saw the potential that he could well have had the brain damage as a result of the trial that obviously brings up huge guilt emotion.

I really felt as if I’d been cheated really, and fooled into taking something that I wouldn’t have had I have known all the facts […] Angry for myself for […] trusting that everything was fine.

That’s not how everyone felt. But as Tarrant and colleagues point out, sharing results “is not a wholly benign practice”.

Many researchers are developing a lot of experience – not just in communicating with participants afterwards, but during the research process as well. Around the time I read Elzinga’s research about how few people were informed of cancer trial results, Erika Augustine and colleagues reported on their experience of holding teleconferences with participants during a trial of a drug for Parkinson’s disease: more than 600 participants dialed in for one set of calls.

Yet, reading through a pile of studies – including a few trials of ways to go about this process – left me with more questions than answers. This critical part of the research enterprise is suffering from too little attention. I think we need:

 

  • a strong systematic review of the research on ways to communicate research results to participants;
  • researchers to include the scripts and materials they used when they report on their experiences and studies of result sharing;
  • repositories of models of excellence, and resources people can tap into to develop their own processes well.

Then there are the basic building blocks of communication techniques. There is a lot of concern about how to translate research findings for potential users. Yet even with all that investment of resources, we know we’re still not good at explaining research results.

The most rigorous ways of studying phenomena are often nowhere near close to the way we understand them in everyday life. Converting research data back to something practical and meaningful is still a challenge.

 

[Cartoon about people in everyday life speaking in scientific jargon]

 

The FDA and NIH mandate basic public results reporting for all clinical trials, from the beginning of 2017. The requirement for that comes long after participants should already know. But it emphasizes the urgency of getting better at communicating results, as public reporting with lay summaries scales up.

Publishing results in the biomedical literature has been a professional and ethical requirement longer than reporting to participants has been. But a substantial proportion of trials still don’t get reported. The rate of unpublished trials might be as high as 50% (2008), although some studies show much higher rates being published (for example, here, here, and here).

Being more open with research participants requires a culture, and power, shift. That may be the real problem here. Just having policies isn’t enough: with all the “let-out” clauses for exceptions surrounding this issue, policy may not translate into practice as expected.

The press conference around the Bristol study results is an example of how much is stacked against sharing research results with participants. The incentives for researchers and research sponsors, funders, and institutions to get publicity for the publication of results are high.

There can be a bizarre combination of 3-ring circus and cloak-and-dagger secrecy around major health research results. But if all participants know the results as soon as they should, the cats will be getting out of their bags, won’t they?

Whatever the reasons, not properly closing the loop with research participants – and not making the results publicly available as well – exploits participants’ altruism and contributions. Even if the results don’t have critical personal relevance, as Tarrant wrote, sharing them “enabled a sense of closure and completeness, and signified to women that the researchers were acknowledging their contribution”:

It was like you’d not been forgotten about really, that all of us that had done this, we’d done it and it had been acknowledged by getting the results.

Just the thought that something that you’ve done might help somebody in your position in the future’s a really good feeling.

Research participants are entitled to this.

 

~~~~

 

If you’re interested in the public reporting of clinical trials results, check out the All Trials campaign.

 

The cartoon and illustrations are my own (CC-NC-ND-SA license). (More cartoons at Statistically Funny and on Tumblr.) 

 

* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

 

 
