
Double bill on citation metrics: A new alternative to the Impact Factor plus a handy guide to citation normalization

On Monday, PLOS Biology published two articles on evaluating published research with citation metrics.

 

In the first, a Meta-Research Article (a new article type we launched earlier this year), a group from the NIH’s Office of Portfolio Analysis develops a new article-level metric intended to replace the controversial Journal Impact Factor [1]. Known as the Relative Citation Ratio (RCR), the metric allows researchers and funders to quantify and compare the influence of individual scientific articles. While the RCR cannot replace expert review, it does overcome many of the issues plaguing previous metrics: as George Santangelo and colleagues describe, it measures the influence of a publication in a way that is both article-level and, importantly, field-independent.

 

Historically, bibliometrics were calculated at the journal level, and the influence of an article was assumed based on the journal in which it was published. These journal-level metrics are still widely used to evaluate individual articles and researcher performance, despite resting on the flawed assumptions that all articles published by a given journal are of equal quality and that high-quality science is never published in low-impact-factor journals.

 

A collaborative article by a number of publishers, including PLOS, eLife, Nature, Science and the EMBO Journal, makes the case for just how skewed and misleading the Journal Impact Factor can be, and shows how plotting each journal’s distribution of citations per article reveals the underlying complexity that is lost in journal-level metrics (see the related PLOS blog post, Measuring Up: Impact Factors Do Not Reflect Article Citation Rates). All of this has highlighted the need for article-level metrics.
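To make the skew concrete, here is a minimal Python sketch on simulated data (not real journal citation counts); in a long-tailed distribution like this, most articles fall well below the journal-level mean that the Impact Factor reports:

```python
# Minimal sketch, using simulated citation counts (not real journal data),
# of why a journal-level mean misleads: citation distributions are heavily
# skewed, so the mean (the basis of the Journal Impact Factor) sits far
# above what a typical article in the journal actually receives.
import numpy as np

rng = np.random.default_rng(0)
# Long-tailed (log-normal) counts as a rough stand-in for a real journal.
citations = rng.lognormal(mean=1.0, sigma=1.2, size=1000).astype(int)

jif_like_mean = citations.mean()
median = np.median(citations)
share_below_mean = (citations < jif_like_mean).mean()

print(f"mean (JIF-like): {jif_like_mean:.1f}")
print(f"median:          {median:.1f}")
print(f"share of articles below the mean: {share_below_mean:.0%}")
```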

 

Citation is the primary mechanism for scientists to recognize the importance of each other’s work, but citation practices vary widely between fields. In the article “Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level”, the authors describe a novel method for field normalization: the co-citation network. The co-citation network is formed from the reference lists of articles that cite the article in question. For example, if Article X is cited by Articles A, B, and C, then the co-citation network of Article X would contain all the articles from the reference lists of Articles A, B, and C. Comparing the citation rate of Article X to the citation rates within its co-citation network in effect gives each article its own individualized field.
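As an illustration of the mechanics, a short Python sketch (using two hypothetical lookup tables rather than the authors’ actual data pipeline) assembles such a network:

```python
# A sketch of the co-citation network described above, built from two
# hypothetical lookup tables (not the authors' actual data pipeline).

def co_citation_network(article, cited_by, references):
    """Union of the reference lists of every paper that cites `article`.

    cited_by:   maps an article ID to the IDs of the papers citing it
    references: maps a paper ID to the IDs in its reference list
    """
    network = set()
    for citing_paper in cited_by.get(article, []):
        network.update(references.get(citing_paper, []))
    network.discard(article)  # the article is not part of its own field
    return network

# Article X is cited by A, B and C; its "field" is everything A, B and C cite.
cited_by = {"X": ["A", "B", "C"]}
references = {
    "A": ["X", "P1", "P2"],
    "B": ["X", "P2", "P3"],
    "C": ["X", "P4"],
}
print(sorted(co_citation_network("X", cited_by, references)))
# ['P1', 'P2', 'P3', 'P4']
```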

 

In addition to using the co-citation network for normalization, the RCR is also benchmarked to a peer comparison group, so that it is easy to determine the relative impact of an article. The authors argue that this unique benchmarking step is particularly important because it allows ‘apples-to-apples’ comparisons between groups of papers, e.g. comparing research output between similar types of institutions or between developing nations.
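For readers who want the arithmetic spelled out, here is a hedged sketch of how normalization and benchmarking combine; the regression-to-a-benchmark step follows our reading of the paper (which uses NIH R01-funded publications as the benchmark corpus), and all numbers below are invented:

```python
# Hedged sketch of normalization plus benchmarking: divide an article's
# citation rate by the rate expected from its field citation rate, where
# "expected" comes from a regression fit over a benchmark corpus (the paper
# uses NIH R01-funded publications; the numbers below are invented).
import numpy as np

# Benchmark corpus: (field citation rate, article citation rate) pairs.
bench_fcr = np.array([2.0, 3.5, 5.0, 6.5, 8.0])
bench_acr = np.array([1.8, 3.2, 4.9, 6.8, 8.3])

# Fit the article citation rate as a linear function of the field rate.
slope, intercept = np.polyfit(bench_fcr, bench_acr, deg=1)

def rcr(article_citation_rate, field_citation_rate):
    """Observed citation rate over the benchmark-expected rate."""
    expected = slope * field_citation_rate + intercept
    return article_citation_rate / expected

# An article cited 6.0 times/year in a field running at 4.0 citations/year:
print(round(rcr(6.0, 4.0), 2))  # values above 1.0 beat the benchmark average
```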

 

The authors demonstrate that their quantitative RCR values correlate well with the qualitative opinions of subject experts, and many in the scientific community already support its use. Dr. Stefano Bertuzzi, Executive Director of the American Society for Microbiology, calls the RCR a “new and stunning metric” and says in a blog post that the RCR “evaluates science by putting discoveries into a meaningful context. I believe that RCR is a road out of the JIF [Journal Impact Factor] swamp.” Two studies comparing the RCR to other, simpler metrics have already been published, and the RCR outperformed those metrics in its correlation with expert opinion [3,4]. Importantly, the authors and the NIH provide full access to the algorithms and data used to calculate the RCR. They also provide a free, easy-to-use web tool for calculating the RCR of articles listed in PubMed at https://icite.od.nih.gov.
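For those who prefer to script lookups rather than use the web interface, iCite also offers a JSON API; note that the endpoint path and field name in this sketch are assumptions about the current public API rather than anything specified in the articles, so check the iCite documentation before relying on them:

```python
# Hedged sketch of scripting an RCR lookup. The endpoint (/api/pubs) and the
# `relative_citation_ratio` field are assumptions about the public iCite API,
# not something described in the articles; check the iCite docs before use.
import json
import urllib.request

def fetch_rcr(pmid):
    url = f"https://icite.od.nih.gov/api/pubs?pmids={pmid}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    records = payload.get("data", [])
    return records[0].get("relative_citation_ratio") if records else None

print(fetch_rcr("12345678"))  # replace with a real PubMed ID
```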

 

One of the primary criticisms raised against an initial version of the RCR (first posted to bioRxiv) is that, because of the field-normalization method, it could undervalue interdisciplinary work, especially for researchers in fields with low citation rates. The authors investigate this possibility in the PLOS Biology article, but find little evidence in their analyses that interdisciplinary work is penalized by the RCR calculation.

 

While the RCR represents a major advance, the authors acknowledge that it should not be used as a substitute for expert opinion: it measures an article’s influence, not its impact, importance, or intellectual rigor. It is also too early to apply the RCR to the assessment of individual researchers. Senior author George Santangelo explains that “no number can fully represent the impact of an individual work or investigator. Neither RCR nor any other metric can quantitate the underlying value of a study, nor measure the importance of making progress in solving a particular problem.” While the gold standard of assessment will continue to be qualitative review by experts, Santangelo says the RCR can assist in “the dissemination of a dynamic way to measure the influence of articles on their respective fields”.

 

Developing a perfect citation metric that satisfies all criteria, and all parties with an opinion in this contentious field, may be close to impossible. However, PLOS is a strong proponent of articles being judged on their own, article-level merits, and we at PLOS Biology are gratified to see new methods advancing in this direction; the RCR, with its approach to field normalization, is a very welcome venture.

 

 

In the second article published this week, a Perspective, John Ioannidis, Kevin Boyack and Paul Wouters provide a useful guide on normalizing citation metrics [2]. As Ioannidis et al. explain, the “use and misuse [of citation metrics] cause controversies not only for technical reasons but also for emotional reasons because these metrics judge scientific careers, rewards, and reputations…. Scientists, journals, or institutions scoring badly in specific metrics may hate them and those scoring well may love them.”

 

The article walks us through the nuances, and the pros and cons, of normalizing across scientific fields (as done in the RCR), the age of a paper, the type of publication (e.g. reviews versus primary literature) and other such factors. It’s an insightful and valuable read for anyone using metrics to make judgement calls about publication success. As the authors conclude, “some metrics may be better suited than others in particular applications. E.g., different metrics and normalizations may make more sense in trying to identify top researchers versus finding out whether an institution is above or below average. Judicious use of citation metrics can still be very useful, especially when they are robust, transparent, and their limitations are properly recognized.”
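To make one of those corrections concrete, here is a small sketch of age normalization, comparing raw citation counts to citations per year on two invented papers:

```python
# Small sketch of one normalization the Perspective discusses: correcting for
# a paper's age by ranking on citations per year instead of raw counts.
# Both papers below are invented for illustration.
CURRENT_YEAR = 2016

papers = [
    {"title": "older paper", "year": 2006, "citations": 100},
    {"title": "newer paper", "year": 2014, "citations": 40},
]

for p in papers:
    age = max(CURRENT_YEAR - p["year"], 1)  # avoid dividing by zero
    p["per_year"] = p["citations"] / age
    print(f"{p['title']}: {p['per_year']:.1f} citations/year")

# Raw counts favor the older paper (100 vs 40), but per-year rates reverse
# the ranking (10.0 vs 20.0) -- the point of age normalization.
```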

 

We at PLOS feel that all metrics, including article-level metrics, should be used with caution, and we welcome these two articles as a thoughtful reflection on the normalization of article citations.

 

References:

  1. Hutchins BI, Yuan X, Anderson JM, Santangelo GM (2016) Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level. PLoS Biol 14(9): e1002541. doi: 10.1371/journal.pbio.1002541
  2. Ioannidis JPA, Boyack K, Wouters PF (2016) Citation Metrics: A Primer on How (Not) to Normalize. PLoS Biol 14(9): e1002542. doi: 10.1371/journal.pbio.1002542
  3. Bornmann L, Haunschild R (2015) Relative Citation Ratio (RCR): A first empirical attempt to study a new field-normalized bibliometric indicator. J Assoc Inf Sci Technol, in press. doi: 10.1002/asi.23729
  4. Ribas S, Ueda A, Santos RLT, Ribeiro-Neto B, Ziviani N (2016) Simplified Relative Citation Ratio for Static Paper Ranking: UFMG/LATIN at WSDM Cup 2016. http://arxiv.org/pdf/1603.01336v1.pdf

Press thus far:

The Chronicle of Higher Education: Better Than Impact Factor? NIH Team Claims Key Advance in Ranking Journal Articles
