Author: Emma Veitch

How to Write a Better Systematic Review Abstract: Guidance is Here

Given that many individuals who search the biomedical literature will only end up reading the abstract (if that!), it’s disappointing that biomedical research abstracts are generally so badly written. The problem of inaccurate and incomplete reporting in abstracts has been studied in depth for randomized trials – for example, the abstracts of such studies frequently report different figures for the numbers of individuals randomized and analysed. Abstracts can also feed the “spinning” of study results in press releases and media coverage, which makes it all the more important that these parts of a scientific paper are accurate. A previous study published in PLOS Medicine found that 40% of randomized trial abstract conclusions contained “spin” (specific reporting strategies, intentional or otherwise, that emphasize the beneficial effect of the experimental treatment), and that the only factor examined that was associated with “spin” in press releases was “spin” in the abstract conclusion section.

However, help is now at hand. Following on from a previous effort in which the CONSORT group developed guidance for improving the write-up of randomized trial abstracts, a group involving some of the same researchers has now developed similar guidance for the reporting of systematic review abstracts. The guidance, named “PRISMA for Abstracts”, is concise and easy to use. It is designed to be used in conjunction with the previous guidance for the reporting of an entire systematic review – the PRISMA statement. The guidance for abstracts comprises twelve specific recommendations outlining which key pieces of information should be included in the abstract, with the intention of avoiding accidental omission or misinterpretation and of helping readers decide whether a study is relevant to them and whether to go on and read further. Each piece of guidance comes with examples from the literature, a justification, and references. Although much of the guidance may seem simple and straightforward, readers clearly find such tools useful: at the time of writing, the PRISMA guidance had received over 90,000 views and over 750 citations. Hopefully the guidance on abstract reporting will prove similarly valuable: systematic reviewers should take note and use these recommendations when preparing their reports for publication.

Category: General | Comments Off

Estimates from the Global Burden of Disease can become a global public good only if the data are made public

As most of the global health community may be aware, today the long-awaited findings of the Global Burden of Disease 2010 (GBD2010) project were launched, to much fanfare and excitement, at a public event at the Royal Society, London. The launch coincides with the publication in the Lancet of seven original research articles describing the findings, along with additional commentary. Needless to say, the articles have been generously made free to access by the Lancet (though free access is not the same as true open access, under generally accepted definitions).

Amidst all of the enthusiasm, debate, and discussion of the findings of GBD2010 – for example, the reported reduction in under-5 mortality rates between 1990 and 2010 – questions remain over the way in which the findings have been generated and communicated. At the meeting, tweets excitedly announced that “insights from #GBD2010 are comparable to the human genome project”. But there are key differences between the two projects in their approach to transparency and data sharing, as many present at the meeting (such as Peter Piot, from the London School of Hygiene and Tropical Medicine, and Richard Peto, from the University of Oxford) publicly commented. Notably, the genomics community made the bold commitment to share data from its sequencing centres as soon as the data were generated. That degree of transparency is absent from GBD2010. The project has produced some amazing interactive visualisations, which can be used to chart estimates of causes and risk factors for death and disability by country, age, and sex. But these data displays, however fun and nifty, do not equate to the type of transparency that is needed for full external scrutiny and reanalysis.

As many participants commented from the floor, we know that estimates from this type of work are subject to huge uncertainty. Changes to analytical choices and assumptions will modify both the estimates and the attendant uncertainty. The global health research community is desperate to scrutinize the methods of the GBD2010 group, to reanalyse the underlying data with different assumptions or newer weightings, and to see how these changes affect the overall estimates. It is encouraging that the GBD group have acknowledged this issue and are giving assurances that data “will ultimately be made available”. However, it is critical that GBD moves towards full and open public sharing of the original datasets sooner rather than later; doing so would bolster the assertions of some in the GBD group that these estimates should be seen not as the definitive outputs of a global consensus on burden of disease, but rather as a starting point for scientific discussion and debate.


Category: General, Global Health, Policy, Public | 2 Comments

Silent takedown of the pharma trials database…and more

Those of you with more than a passing interest in publication bias and other threats to the integrity of the research literature may have noticed the publication of a study in this week’s PLoS Medicine which looks at the effects of such bias on the apparent efficacy of antipsychotic drugs. While the article was in press at PLoS Medicine, the lead author, Erick Turner, noticed that a database initially set up by PhRMA (Pharmaceutical Research and Manufacturers of America) to “serve the valuable function of making clinical trial results for many marketed pharmaceuticals more transparent” had mysteriously disappeared from the internet – thank you for the hat tip, Erick. (The authors had searched this database, amongst others, to try to identify all trials conducted on the antipsychotics they investigated in their analysis.) Cynical readers can view an example of what the database used to look like at this page through the Internet Archive – the database used to be present at http://www.clinicalstudyresults.org/.
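Incidentally, if you want to hunt for archived snapshots of vanished pages yourself, the Internet Archive exposes a simple public “availability” API. Here’s a minimal sketch in Python (the endpoint and JSON shape are as documented by archive.org; the example date is just an assumption about when the site was still live):

    # Query the Wayback Machine "availability" API for the archived snapshot
    # of a URL closest to a given date (timestamp format YYYYMMDD).
    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url, timestamp):
        api = ("https://archive.org/wayback/available?url="
               + urllib.parse.quote(url, safe="") + "&timestamp=" + timestamp)
        with urllib.request.urlopen(api) as response:
            data = json.load(response)
        closest = data.get("archived_snapshots", {}).get("closest")
        # The API returns an empty object if the page was never archived
        return closest["url"] if closest and closest.get("available") else None

    # Look for a snapshot from around the time the database was still live
    print(closest_snapshot("http://www.clinicalstudyresults.org/", "20110101"))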

At the time that Clinicalstudyresults.org was set up, PhRMA touted the initiative as an important venture for achieving transparency and access to the results of all phase III and IV studies (both positive and negative) conducted on PhRMA member companies’ approved drugs. The last version of the clinicalstudyresults.org website held in the Internet Archive seems to suggest that PhRMA now views this database as irrelevant, commenting that other databases have “expanded dramatically, including the National Library of Medicine’s www.ClinicalTrials.gov”. However, the clinicalstudyresults.org website is now entirely inaccessible, and previous versions of it available through internet archiving give no obvious information about plans for rehousing its data in public repositories. Merck has announced that it will be moving data for its trials from clinicalstudyresults.org to its own website (see the notice on Merck’s site). But it’s clear that patients, clinicians and investigators have no guarantee of permanent access to the trial results previously housed by clinicalstudyresults.org. In addition, there’s now no way to establish how many trials were previously reported at clinicalstudyresults.org, for how many the data may be available elsewhere, or, indeed, how many were unique and are now lost from the public domain. Pharma assures us of its ethical credentials but backtracks on its promises of transparency whenever it wants.

Why is this still important? A detailed study published earlier this year in the BMJ shows that although US legislation mandates that, for certain types of study, trial sponsors deposit results in ClinicalTrials.gov within a year of completion, compliance is still very poor: of the trials falling within the mandatory requirements, only 22% actually had their results deposited. This low level of deposition is hugely concerning, because timely deposition of data in ClinicalTrials.gov was intended to guarantee that the findings of all trials would be available in the public domain even where investigators had not been able to publish them in journal articles. It’s clear that voluntary mechanisms adopted by pharma – and even unenforced government mandates – are not currently solving the perennial problem of publication bias.
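If you want to check the status of an individual trial yourself, ClinicalTrials.gov exposes a public JSON API. Below is a minimal sketch; note that the v2 endpoint path and the top-level “hasResults” field are my assumptions about the current version of the API, so verify both against the official documentation before relying on this:

    # Check whether a registered trial has results posted on ClinicalTrials.gov.
    # ASSUMPTION: the v2 REST API serves GET /api/v2/studies/{nctId} and the
    # returned JSON record carries a top-level "hasResults" boolean.
    import json
    import urllib.request

    def has_results(nct_id):
        url = "https://clinicaltrials.gov/api/v2/studies/" + nct_id
        with urllib.request.urlopen(url) as response:
            record = json.load(response)
        return bool(record.get("hasResults"))

    # Usage: pass a trial's NCT registry number (this ID is illustrative only)
    print(has_results("NCT01234567"))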

Category: General, Public, Research Ethics | 2 Comments

Obama’s bioethics commission reports on international research protections

A US standing Presidential Commission, asked to assure “that current rules for research participants protect people from harm or unethical treatment, domestically as well as internationally”, has concluded that current regulations in the US provide adequate protections and that ethically dubious work of the kind investigated could not happen today. The concluding report is now published. The investigation was prompted by the acknowledgement that the US Public Health Service supported research in Guatemala in the 1940s in which individuals were infected, without their consent, with the bacteria that cause syphilis and gonorrhea.

Although the committee set up to investigate these transgressions did find that current protections were adequate, it sets out fourteen recommendations to strengthen the rights of human research participants in the US. These range from improving public access to information about research studies involving human participants, to finding better ways of compensating individuals who are harmed in the course of research, to expanding the teaching of bioethics in undergraduate and further training.

However, the remit of the investigation covered only federally funded research. Issues relating to research funded by the pharmaceutical or other industries, or by non-governmental organisations, were not considered at all. Given that the majority of trials (the type of research carrying the greatest risks for participants) are thought to be industry-funded, this is a serious omission. Although the Guatemala studies were funded by a public agency, industry-funded studies come with a substantial conflict of interest that may distort the way studies are set up and monitored – they may be more likely to use a placebo rather than an established therapy as the comparator, for example. Research done by industry may also be overseen by for-profit ethics boards, which are not as independent of the companies whose research they regulate as would ideally be the case.

In the press conference announcing the report, the commission’s chair, Amy Gutmann, explained that the committee was confident that research done in other countries did not raise any special ethical issues. This may be true for studies funded through a government agency, which will be protected by ethics committees set up by those agencies. However, the picture could be very different, and far more complicated, for studies done by a myriad of other companies and organisations, where the patchwork of protections may be less stringent or robust, or, in some countries, lacking entirely.

Category: General, Policy, Public, Research Ethics | 1 Comment

What should the ethical protections be in cluster trials?

This week I was delighted to listen in to a consensus conference of researchers, ethicists, editors and other interested parties, aiming to produce a set of guidelines for the ethical conduct and review of cluster randomized trials.

A cluster trial has been defined as a trial that “randomize[s] interventions to groups of patients (e.g., families, medical practices) rather than to individual patients”.

The group coordinating this consultation has developed a set of wiki pages, and has published a series of papers in the open-access journal Trials outlining what the issues are (each of the papers is accessible via the group’s wiki site above, as well as the journal website itself).

The consultation divides its efforts between six key ethical problems, each of which affects cluster trials in unique ways:

“Firstly, who is the research subject in a cluster trial? Secondly, when, and from whom, is informed consent needed in a cluster trial? Thirdly, does clinical equipoise apply in cluster trials? Fourthly, how should benefits and harms be assessed in cluster trials? Fifthly, do gatekeepers have a role to play in protecting the interests of those affected by cluster trials? Finally, what are the issues for cluster trials in vulnerable populations?”

These are important issues, as convincingly set out in the conference and in the introductory papers. Data presented in the webcast showed that a sizeable proportion (up to 30%) of published reports of cluster trials provide no information on the ethical oversight or consent procedures for the study. The chairs of research ethics boards consider the issues raised to be important, but do not apply protections consistently, and there’s a risk of both over- and under-regulating participants in cluster trials. For example, if ethics committees require consent from everyone affected in the randomized communities, for some types of cluster trials this may make the study unfeasible or the consent meaningless (e.g., where an intervention is applied at the community level, exposing everyone in a particular setting to it, it may be completely unfeasible to avoid exposing individuals who decline consent). The series of papers gives some great real-life examples highlighting what the problems and questions are.

So, what are the solutions on the table? The consultation sets out a possible definition of the research “subject” (or perhaps, more accurately, the research participant) in cluster trials. Specifically, it proposes:

“A research subject is an individual whose interests may be compromised as a result of interventions in a research study”

This definition would include individuals who are directly intervened upon by investigators for research purposes; individuals who are intervened upon by manipulation of their environment by investigators for research purposes; individuals who interact with investigators for the collection of data; and individuals from whom investigators obtain identifiable private information for the purpose of collecting data.

The guidelines under development also distinguish, as previous work has done, between different types of cluster trials, and this helps define at what level, and for which individuals, protections as research participants should apply. Three types of cluster trials are defined: “cluster-cluster” trials (interventions are applied to entire clusters and not to individuals); “physician-cluster” trials (interventions are intended to affect the process of care offered by physicians, and thus their patients are indirectly affected); and “individual-cluster” trials (groups of individuals are randomized, but interventions are applied to individuals within those groups). Defining these different types makes it easier to work out whose interests need to be protected in the trial, and how. It’s also clear that editors need to do a much better job of encouraging authors to explain how they viewed the ethical implications of their cluster study, and at what level they applied specific protections such as consent, and why.

The guidelines group is still keen to hear from interested parties and you can post your discussion questions via the wiki pages – so if these issues are important to you, read the papers and get involved with the group. A summary of the webcast will also be posted on the site.

Recent cluster trials published in PLoS Medicine include:

Roca A, Hill PC, Townend J, Egere U, Antonio M, et al. (2011) Effects of Community-Wide Vaccination with PCV-7 on Pneumococcal Nasopharyngeal Carriage in The Gambia: A Cluster-Randomized Trial. PLoS Med 8(10): e1001107.

Cuevas LE, Yassin MA, Al-Sonboli N, Lawson L, Arbide I, et al. (2011) A Multi-Country Non-Inferiority Cluster Randomized Trial of Frontloaded Smear Microscopy for the Diagnosis of Pulmonary Tuberculosis. PLoS Med 8(7): e1000443.

Kangwana BP, Kedenge SV, Noor AM, Alegana VA, Nyandigisi AJ, et al. (2011) The Impact of Retail-Sector Delivery of Artemether–Lumefantrine on Malaria Treatment of Children under Five in Kenya: A Cluster Randomized Controlled Trial. PLoS Med 8(5): e1000437.

Luoto R, Kinnunen TI, Aittasalo M, Kolu P, Raitanen J, et al. (2011) Primary Prevention of Gestational Diabetes Mellitus and Large-for-Gestational-Age Newborns by Lifestyle Counseling: A Cluster-Randomized Controlled Trial. PLoS Med 8(5): e1001036.

My 2010 competing interests are on the PLoS Medicine site.

Category: Authors, General, Policy, Research Ethics | 1 Comment

How can we improve peer review: the impact of reporting guidelines

There’s a lot of nay-saying about peer review out there – it’s messy, inadequate, time-consuming, boring, and nobody knows what it’s supposed to do anyway. Despite all that, peer review is widely regarded as indispensable – including by bodies such as the UK House of Commons Science and Technology Committee, which concluded:

“Peer review in scholarly publishing, in one form or another, is crucial to the reputation and reliability of scientific research… However, despite the many criticisms and the little solid evidence on its efficacy, editorial peer review is considered by many as important and not something that can be dispensed with”.

So I was pleased to see a recent study reported in the BMJ which takes a rigorous approach to evaluating the impact of one particular component of peer review. The study aimed to find out whether the use of reporting guidelines during peer review can improve the quality of published papers.

The researchers use this definition of a reporting guideline:

“statements that provide advice on how to report research methods and findings… they specify a minimum set of items required for a clear and transparent account of what was done and what was found in a research study, reflecting in particular issues that might have introduced bias into the research”

The study used a randomized design in which 92 papers under evaluation at the journal Medicina Clinica were randomized to intervention or control arms. For “intervention” papers, the authors received an additional, reporting-guideline-driven evaluation from a senior statistician (on top of the regular peer reviews). For “control” papers, authors received just the regular reviews. An interesting feature of the trial design was that all papers received the additional, reporting-guideline-led review, but this was only sent to authors in the intervention (and not the control) arm; randomization was actually done after the reporting-guideline review was completed. By doing this, the investigators were able to collect detailed baseline data on study quality and to ensure that the person doing the guideline-led review could not be biased with respect to group assignment. The instrument used to rate study quality was the scale developed by Goodman and colleagues.

What did the study show? Well, it’s a little hard to tell, and the evidence does not look desperately strong. During the study, four papers in the “control” arm had to go through the additional reporting-guideline-led review because the editors were worried about protocol deviations in the studies those papers reported; these four papers therefore crossed over from one arm to the other. The investigators then analysed their data both by intention to treat (keeping the four crossover papers in their randomized group) and as-treated (counting them in the reporting-guideline-led review group). The intention-to-treat analysis gives an effect estimate for improvement, comparing the intervention versus the control arm, of 0.25 points on the Goodman scale (95% CI -0.05 to 0.54). The “as-treated” comparison is better, at 0.33 (95% CI 0.03 to 0.63) – so if you are keen on per-protocol analyses you might conclude from this that reporting guidelines improve study quality (a bit) more than not using them… A cynic might say the study is just a bit underpowered, and that if there is an effect, it’s fairly small. There also seems to be evidence that papers in both study arms improved during peer review, although demonstrating this wasn’t an objective of the project.
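To see why the two analyses differ, here’s a minimal simulation in Python – entirely hypothetical data, not the study’s, with an assumed true effect of 0.3 points on the quality scale. Because the four crossover papers are counted as “control” under intention to treat despite actually receiving the guideline-led review, they pull the control group’s mean up and, on average across simulated runs, dilute the intention-to-treat estimate relative to the as-treated one:

    # Simulate 92 papers: 46 assigned to intervention, 46 to control,
    # with 4 control papers crossing over to the guideline-led review.
    import random

    random.seed(1)

    papers = []  # (arm assigned, arm actually received, quality-score change)
    for i in range(92):
        assigned = "intervention" if i < 46 else "control"
        # Papers 46-49 are the protocol-driven crossovers
        received = "intervention" if (assigned == "intervention" or i < 50) else "control"
        # Hypothetical effect: the guideline-led review adds ~0.3 points
        change = random.gauss(0.3 if received == "intervention" else 0.0, 0.8)
        papers.append((assigned, received, change))

    def effect_estimate(by_assignment):
        """Mean difference in score change between arms, grouping papers
        by arm assigned (intention to treat) or received (as-treated)."""
        arm = (lambda p: p[0]) if by_assignment else (lambda p: p[1])
        treated = [p[2] for p in papers if arm(p) == "intervention"]
        controls = [p[2] for p in papers if arm(p) == "control"]
        return sum(treated) / len(treated) - sum(controls) / len(controls)

    print("intention-to-treat estimate:", round(effect_estimate(True), 2))
    print("as-treated estimate:", round(effect_estimate(False), 2))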

So what can you conclude? Firstly, that it is tough to deliver properly designed studies such as this, which aim to concretely investigate the benefit (or otherwise) conferred by specific facets of peer review. The study did not definitively answer its question about the benefit of reporting guidelines, but it will help others design similar studies in the future. And finally, editors and journals still operate with a huge set of “reasonability” assumptions about what works and what doesn’t, yet we lack strong evidence informing much of what we do.

My competing interests are declared here. In addition, since that page was updated I have received reimbursement for local travel expenses to contribute to seminars organised by the EQUATOR group (an initiative aimed at promoting the quality and transparency of health research, which collects together reporting guidelines). I have also contributed to the development of a number of reporting guidelines, some of which involved authors of the BMJ paper discussed in this post.

Category: Authors, Peer review | 1 Comment

The science of outcomes: how far have we come?

UPDATE 1 AUGUST: The COMET website has now launched. Its database lists the outcomes standardisation efforts going on across clinical fields, and the site provides additional information about why outcomes standardisation is important and how you can get involved.

———————–

This week I was delighted to be able to attend the second COMET (Core Outcome Measures in Effectiveness Trials) meeting in Bristol, UK, at which the rapidly developing science of outcomes standardisation was discussed. As highlighted in numerous studies (some of which have appeared in PLoS journals, such as this one), effectiveness trials studying the same clinical condition very often examine many different, and sometimes irreconcilable, endpoints. This phenomenon has a number of effects, ranging from the annoying (different trials can’t easily be compared or combined in systematic reviews) to the pretty shameful (trials focus on measuring surrogate biomarkers rather than outcomes that really matter to patients or to clinical care).

It is encouraging to hear about the progress being made in understanding how outcomes standardisation needs to happen (with some non-trivial issues to be overcome!) as well as in developing core outcome sets in specific clinical areas. However, there are challenges. Trials don’t always measure outcomes that reflect patient priorities, and, as participants at the meeting almost unanimously agreed, patient voices must be heard as a critical part of the process of developing core outcome sets. The lessons learnt from OMERACT (Outcome Measures in Rheumatology) are key here. Early OMERACT consultations on rheumatoid arthritis outcomes showed that patients identified fatigue as a crucial outcome domain for them, but one which was rarely measured in trials. It took many years, and extensive research, to define what is meant by fatigue and to develop validated tools for collecting reliable data within this outcome domain. John Kirwan (University of Bristol) explained that, early in OMERACT’s efforts, it became clear that outcome sets would not be agreed for use unless patients were involved in their development. All the same, we don’t yet have a good handle on when something becomes a “standard”, ready for community-wide adherence – but the degree of consultation during the early stages of developing core outcome sets is no doubt critical to securing adherence later on.

Other issues up for debate, and extensively discussed over coffee and posters, included whether we take a top-down (regulators and funders imposing required outcomes) or bottom-up approach to adherence; where the money will come from for the development of outcome sets (nearly everyone seems to agree this is important, non-trivial, and requires ££ or $$); and how to deal with the conflict between wanting to include outcomes which can be precisely measured but may not be important (surrogate markers again) and outcomes which are known to be important but cannot easily be measured.

Clearly, what’s critical to success here is the widespread involvement of clinicians, patients, and researchers in agreeing measures which are valid, meaningful, and important across trials. For that to happen, people need to get involved. A new COMET website will soon launch with a searchable database to help identify which efforts are under way in different fields: check back to find out what initiatives exist in your area of research. I’ll update this post with the link when the COMET site launches.

Category: Conference news, Policy | Comments Off

“No Health Without Research” Collection: Minding the Gap Between Research Needs and Research Output

Research output is forever increasing: at the last count, an average of seventy-five trials and eleven systematic reviews were being published per day, which adds up to over twenty-seven thousand trials per year. But does this mammoth volume of studies deliver answers to the questions that really matter? Some have argued that research prioritisation is flawed because patients and clinicians are not sufficiently involved in setting research agendas, and because over half of studies are done without reference to systematic reviews of prior evidence.

Consequently, there is a huge and avoidable burden of waste in the research process. It’s not a question of needing less research rather than more (“more research is needed” being the banal conclusion of so many published studies), but of needing more of the right kind of research, done in the right way. So how does research funding correlate with the burden of disease? An analysis of US National Institutes of Health funding streams shows that certain areas, such as HIV/AIDS, breast cancer, and diabetes, are “overfunded” relative to their burden in the US, while others, largely conditions associated with substance use or mental health (such as lung cancer, chronic obstructive pulmonary disease, alcoholism, and depression), are badly underfunded. On a global scale, the same inequity holds, but to an even greater degree. There is no evidence that this situation has changed following more explicit recommendations regarding priority setting for research.

Earlier this year, PLoS Medicine issued a Call for Papers, in conjunction with the World Health Organization, on the theme of “No Health Without Research”. The aim of this Collection is to highlight case studies and policy articles demonstrating how key functions of national health research systems can be strengthened. A particular focus of the Call was on how countries can prioritise health research questions in order to generate the kind of evidence that is truly relevant to their health needs. We have had a fantastic response. Articles accepted for publication in PLoS journals are being selected by a joint WHO/PLoS panel for inclusion in the Collection, and may also be highlighted in the WHO’s World Health Report for 2012.

Some of the articles chosen for inclusion in the Collection show what can be done: a Viewpoint published in PLoS Neglected Tropical Diseases discusses the special issues faced by researchers conducting trials in resource-limited settings, and how these can be overcome through global collaboration. An Essay from researchers at the Wellcome Trust and Makerere University, Uganda, highlights networks which are strengthening research capacity for maternal, neonatal and child health in Africa. And a paper published in PLoS ONE describes the outcomes of a distance training model for building research ethics capacity in Peru. There is still time to have your paper included in this Collection.

We urge authors with studies describing initiatives aimed at strengthening key components of national health research systems to submit as soon as possible to be considered for inclusion in the Collection. Submit your paper or presubmission enquiry, or contact us at whr2012@plos.org with queries. The Collection is being constantly updated with new content throughout 2011 and 2012, in the run-up to the publication of the World Health Report 2012. We hope that by the time the World Health Report is published, the Collection will provide an abundance of compelling examples showing how the gap between research needs and relevant research output can be narrowed.

Category: Global Health, Policy | Comments Off

Bringing the Global into Global Health Ethics

Clinical ethics have become far too preoccupied with the individual, autonomy, and informed consent, argued Baroness Onora O’Neill at the Nuffield Council on Bioethics’ 20th anniversary lecture last night. This preoccupation risks marginalising important ethical issues of increasing relevance for global and public health, through its focus on the ethical implications of any particular action for the individual patient. Addressing these issues requires re-establishing the ethics of global health within political philosophy, so that we can use the framework of justice, accountability, and trust to evaluate the ethical ramifications of particular public health interventions. This broadening of the scope of public health ethics relates closely to the mission of the Nuffield Council on Bioethics – specifically its focus on the ways in which modern medicine and medical research raise new ethical problems. As Baroness O’Neill highlighted so clearly, modern medicine (and research) is a systems enterprise, in which established structures have major implications for a patient or clinician but cannot be directly chosen by them. Within these systems, many components are global public goods: their use by one individual (or group) does not prevent others from using them, and it is not easy to prevent others from using them without payment. Astonishingly, one example of a global public good in the global health arena proposed in the lecture by Baroness O’Neill was open access to health information. It takes effort (and convoluted structures) to prevent individuals from accessing and using knowledge, and wider access benefits all. Yet we all have a role to play in providing global public goods such as access to information. As Baroness O’Neill proposes, this new approach to thinking about global health ethics gives us tools to recognise our responsibilities with regard to public health interventions and systems – notably, even publishing systems.

Category: Global Health, Policy, Public | Comments Off

Encouraging best practice in systematic reviews

The PLoS Medicine editorial this month focuses on initiatives for ensuring best practice in the conduct and reporting of systematic reviews. The editorial announces key changes in journal policy towards this type of article: the PLoS Medicine editors will now ask authors whether their systematic review was registered during the planning stage (a practice we support, as it helps to reduce bias in the conduct and reporting of the review). We will also ask authors whether they had a protocol for the review and, if so, ask to see a copy to help in the editorial and peer-review assessment of the submitted article. Although such policies have existed for some time for clinical trials, it is only now that an international registry – the PROSPERO registry – exists that is available to all researchers carrying out systematic reviews. As this is still a comparatively recent initiative, we are very keen to hear about researchers’ experiences of registering systematic reviews and their reactions to the change in policy.

Category: Authors, Policy | Comments Off