UPDATE 1 AUGUST: The COMET website has now launched. Its database lists the outcomes standardisation efforts under way across clinical fields, along with information about why outcomes standardisation matters and how you can get involved.
This week I was delighted to be able to attend the second COMET (Core Outcome Measures in Effectiveness Trials) meeting in Bristol, UK, at which the rapidly developing science of outcomes standardisation was discussed. As highlighted in numerous studies (some of which have appeared in PLoS journals, such as this one), effectiveness trials studying the same clinical condition often examine many different, and sometimes irreconcilable, endpoints. This phenomenon has a range of effects, from the annoying (eg that different trials can’t easily be compared or combined in systematic reviews) to the pretty shameful (trials that focus on measuring surrogate biomarkers rather than outcomes that really matter to patients or to clinical care).
It is encouraging to hear about the progress that’s being made to understand how outcomes standardisation needs to happen (with some non-trivial issues to be overcome!) as well as to develop core outcome sets in specific clinical areas. However, there are challenges. Trials don’t always measure outcomes that reflect patient priorities; and as participants at the meeting almost unanimously agreed, patient voices must be heard as a critical part of the process of developing core outcome sets. The lessons learnt from OMERACT (Outcome Measures in Rheumatology) are key here. Early OMERACT consultations on rheumatoid arthritis outcomes showed that patients identified fatigue as a crucial outcome domain for them, but one which was rarely measured in trials. It took many years, and extensive research, to define what is meant by fatigue and to develop validated tools for collecting reliable data within this outcome domain. John Kirwan (University of Bristol) explained that, early on during OMERACT’s efforts, it became clear that outcome sets would not be agreed for use unless patients were involved in their development. All the same, we don’t yet have a good handle on when something becomes a “standard”, ready for community-wide adherence – but the degree of consultation during the early stages of developing core outcome sets is no doubt critical to getting adherence later on.
Other issues up for debate, and extensively discussed over coffee and posters, included whether to take a top-down (regulators and funders imposing required outcomes) or bottom-up approach to adherence; where the money will come from for the development of outcome sets (nearly everyone seems to agree this work is important, non-trivial, and requires ££ or $$); and how to deal with the tension between including outcomes which can be precisely measured but may not be important (surrogate markers again), and including outcomes which are known to be important but cannot easily be measured.
Clearly what’s critical to success here is the widespread involvement of clinicians, patients, and researchers to agree measures which are valid, meaningful, and important across trials. For that to happen, the trials community needs to engage with these initiatives. A new COMET website will soon launch with a searchable database to help identify which efforts are taking place in different fields: check back to find out what initiatives are under way in your area of research. I’ll update this post with the new link when the new COMET site launches.
The science of outcomes: how far have we come? by PLOS Blogs Network, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 4.0 International License.