

Introducing the TRIPOD Statement for Reporting Clinical Prediction Models


Gary Collins (@GSCollins) of the TRIPOD Steering Group introduces the TRIPOD Statement, which provides guidance for reporting clinical prediction models.

Clinical predictions are routinely made throughout medicine, at all stages in the pathways of health care; they are the basis for communicating risk or prognosis to patients and therefore inform clinical decision making. In the diagnostic setting, predictions are made, for example, as to whether a particular disease is present, informing decisions to refer for further testing, initiate treatment, or reassure patients that a serious cause for their symptoms is unlikely. In the prognostic setting, predictions can be used to plan lifestyle or therapeutic decisions based on the risk of developing a particular outcome over a given time period.

The multifactorial nature of clinical prediction makes it difficult for doctors to simultaneously combine and weight multiple risk factors to produce a reliable and accurate estimate of risk. It is therefore unsurprising that numerous studies have shown doctors to be generally poor prognosticators: they see relatively few cases and are prone to cognitive biases.

Increasingly, doctors are using multivariable prediction models, often on the basis of recommendations in national clinical guidelines, to support and guide clinical decision making. A clinical prediction model is a mathematical equation that relates multiple predictors for an individual to the probability (or risk) that a particular disease or condition is present or will occur in the future. Well-known prediction models include the Framingham Risk Score, the Apgar Score, the Ottawa Ankle Rules, the EuroSCORE, the Gail Model and the Simplified Acute Physiology Score (SAPS).
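
As a concrete illustration (a generic logistic regression form, not any specific published model), such an equation takes the shape

\[
\hat{p} = \frac{1}{1 + \exp\left(-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)\right)}
\]

where \(\hat{p}\) is the predicted probability for an individual, \(x_1, \ldots, x_k\) are that individual's predictor values, \(\beta_0\) is the model intercept and \(\beta_1, \ldots, \beta_k\) are the regression coefficients estimated from the development data.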

The last 10-15 years have seen an explosion in the number of published articles describing the development (and occasionally the validation) of clinical prediction models. Some clinical areas have seen considerable numbers of models developed for the same outcome (e.g. diabetes, cancer, chronic kidney disease). Developing a prediction model can be very easy; all that is needed is some data (often collected for a different purpose) and a statistical package. Clearly this is an oversimplification, but, cynically, many prediction models are developed with no compelling clinical need and thus with little intention of ever being used; they are merely an easy publication to add to one’s curriculum vitae. In areas where many models have been developed, deciding which one to use is difficult. The choice is made harder still because many models are presented as new discoveries and are rarely compared against existing models. Furthermore, after models are published, they are rarely evaluated (often referred to as external validation) by independent researchers. Unlike drug research, which is well regulated, prediction model research is not: there are no immediate side effects. Yet, worryingly, thousands of these (largely untested) models are freely available to the general public via the internet, in the hands of anyone with access to a computer.

To evaluate methodological conduct and reporting, we carried out a number of systematic reviews of studies describing the development or validation of multivariable prediction models across different medical areas and found the reporting to be particularly poor. What was surprising when we analysed the results from these and other systematic reviews was the clear lack of crucial information being presented. Without full and transparent reporting, evaluating, implementing, or synthesizing the results (see the CHARMS Checklist) of such studies is problematic.

At the most critical level, authors were developing models yet failing to report the actual model, so that other researchers were unable to evaluate it or use it on their patients. It is clearly ludicrous to go to the effort of developing a model yet fail to tell readers what the model is. For example, the FRAX model, a system of 62 models for 57 countries (FRAX v3.9) for predicting the 10-year risk of osteoporotic fracture (and recommended in numerous clinical guidelines), has been criticized on numerous occasions for failing to report the full model, despite requests from independent researchers wishing to evaluate it and compare it against competing models on different data. As specified in the TRIPOD guideline, it is essential that all regression coefficients and the model intercept (for logistic regression) or the baseline hazard at one or more clinically relevant time points are reported.
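
To make the point concrete, here is a minimal sketch (in Python, with entirely hypothetical predictor names and coefficient values, not any published model) of how an independent researcher could apply or validate a reported logistic regression model, provided the intercept and all coefficients are published:

```python
import math

# Hypothetical published logistic regression model (illustrative values only):
# the intercept and regression coefficients are all that is needed to
# apply the model to new patients or validate it on independent data.
intercept = -5.2
coefficients = {"age": 0.04, "smoker": 0.9, "sbp": 0.02}  # log-odds per unit

def predicted_risk(patient):
    """Return the predicted probability for one patient (dict of predictor values)."""
    linear_predictor = intercept + sum(coefficients[name] * value
                                       for name, value in patient.items())
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# Example: a 65-year-old smoker with systolic blood pressure 140 mmHg
print(round(predicted_risk({"age": 65, "smoker": 1, "sbp": 140}), 3))
```

Without the intercept, the linear predictor cannot be anchored and absolute risks cannot be calculated at all, which is exactly why TRIPOD asks for the full model to be reported.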

It was clear to us that authors, reviewers, editors and readers needed clear guidance on what information should be reported when describing the development and validation of a prediction model, and this led us to develop the TRIPOD Statement.

The TRIPOD Statement is an annotated checklist of items that were identified following discussions during a 3-day consensus meeting in 2011 with international experts in prediction modelling (statisticians, epidemiologists, clinicians and journal editors). The resulting checklist comprises 22 items regarded as essential for the good reporting of studies developing or validating multivariable prediction models. The recommendations within TRIPOD are guidelines for reporting research only and do not prescribe how to develop or validate a prediction model.

Whilst the TRIPOD Statement is foremost guidance for reporting, it is accompanied by an extensive 22,000-word Explanation & Elaboration article discussing not only the rationale for, and examples of, good reporting but also methodological aspects for investigators to consider when developing, validating or updating a prediction model.

To increase the visibility of the TRIPOD Statement, we are co-publishing the article simultaneously in 11 leading general medical and specialty journals, including the Annals of Internal Medicine, BJOG, BMC Medicine, British Journal of Cancer, British Journal of Surgery, British Medical Journal, Circulation, Diabetic Medicine, European Journal of Clinical Investigation, European Urology and the Journal of Clinical Epidemiology. We welcome other journals endorsing TRIPOD by including prediction model studies as a distinct study type, including TRIPOD in their instructions for authors, and requiring authors to complete and submit a TRIPOD checklist with their submission.

Gary Collins (@GSCollins) is an Associate Professor and Deputy Director of the Centre for Statistics in Medicine, University of Oxford, and a member of the TRIPOD Steering Group (@TRIPODStatement).

 
