COMMENT

Reading between the lines on citation value

An audit reveals many papers on which cited work had little, if any, influence.

Adrian Barnett

13 June 2017

Citations are an important currency in research. Citation figures have traditionally been an indicator of the influence of a researcher's work on their peers, but these numbers can be artificially inflated.

The research community has for some time questioned the validity of citations as a measure of impact, yet they remain a significant tool. For instance, citations are the key input for two other research metrics: the h-index, which ranks researchers, and the impact factor, which ranks journals. Institutions have used both metrics in decisions about hiring and promotion.
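To make the journal metric concrete, here is a minimal sketch of the standard two-year impact factor calculation; the counts below are made up purely for illustration (the h-index is sketched later in this piece, alongside the hip-index).

```python
# A minimal sketch of the standard two-year journal impact factor.
# Both counts are hypothetical, for illustration only.
citations_this_year_to_prev_two_years = 600  # citations in the current year
                                             # to the journal's previous two years
citable_items_prev_two_years = 200           # articles and reviews published
                                             # in those two years

impact_factor = citations_this_year_to_prev_two_years / citable_items_prev_two_years
print(impact_factor)  # 3.0: on average, three recent citations per item
```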

But the more valuable citations become as a currency for research, the more they are targeted by fraudsters. Researchers have shown it can be easy to artificially inflate citations.

As a simple exercise, I randomly sampled 100 citations of my work on Google Scholar and read how that work had been cited. I assessed whether the authors had cited my paper correctly, and whether my work had influenced theirs, judging by how they engaged with specific results and discussed my ideas.

In one paper, the citation was unmerited: my publication was in the reference list but never mentioned in the text. For 12 other papers, I judged the citation to be inaccurate because the authors had misinterpreted my results (in one case because they had read only the abstract). This suggests that 13% of my citations were undeserved. However, with a sample of only 100 citations the estimate is imprecise: the 95% confidence interval for the percentage of undeserved citations is 6% to 20%.
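For readers who want to check that figure, here is a minimal sketch of one standard way to compute such an interval: a normal-approximation (Wald) interval for a binomial proportion. The article does not say which method was used; this one reproduces the quoted range.

```python
# A minimal sketch: 95% confidence interval for the proportion of
# undeserved citations, using the normal-approximation (Wald) method.
import math

k, n = 13, 100                   # undeserved citations, sample size
p = k / n                        # observed proportion, 0.13
z = 1.96                         # z-value for 95% confidence
se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion

lower, upper = p - z * se, p + z * se
print(f"95% CI: {lower:.0%} to {upper:.0%}")  # -> 95% CI: 6% to 20%
```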

In 43 of the citations I examined, my paper appeared at the end of a sentence among a long list of other references: it was citation filler rather than an influence on the authors' own work. In one study, a sentence about research from many regions showing that mortality increases at lower temperatures ended with eight cited papers, one of which was mine. I could not find a single paper for which my publication was the key motivation.

In general, my work had a greater influence where it had been cited repeatedly. For example, when an article cited my paper in both the introduction and the discussion, the authors had usually considered my work before and after completing their own study. I found this pattern in 36 of the 100 citations I examined.

This simple exercise confirms to me that not all citations are equal. An author citing a paper can reflect genuine interest or a simple error. Researchers are developing methods that account for these differences when citations are used to measure influence. For example, the influence-primed h-index, or hip-index, developed by researchers in Canada, weights citations by the number of times they are mentioned in a paper. These initiatives will benefit from efforts by the Initiative for Open Citations (I4OC) to make citation data more accessible. Some journals also ask authors to explain why they cited a paper.
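As an illustration of the idea (not the published hip-index algorithm, whose details this article does not give), here is a toy sketch contrasting the standard h-index with a hypothetical variant that weights each citation by how many times the citing paper mentions the work; all numbers are made up.

```python
# Toy contrast: standard h-index versus a mention-weighted variant.
# This is a hypothetical illustration of weighting citations by mention
# counts, not the published hip-index algorithm.

def h_index(counts):
    """Largest h such that h of the papers score at least h each."""
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, score in enumerate(ranked, start=1) if score >= rank)

# One list per paper of mine; each entry is the number of times a
# citing paper mentions it (made-up numbers).
mentions = [
    [3],     # paper A: one citing paper, mentioned three times
    [2],     # paper B: one citing paper, mentioned twice
    [4, 1],  # paper C: two citing papers, mentioned four times and once
]

plain = h_index([len(m) for m in mentions])     # count every citation once
weighted = h_index([sum(m) for m in mentions])  # weight by mention count
print(plain, weighted)  # -> 1 2
```

Under this toy weighting, a paper mentioned repeatedly by its citers counts for more than one cited once in a long reference list, which is exactly the distinction the audit above suggests matters.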

Citation counts could become a more useful measure of impact if we moved from treating every citation as equal to weighting citations by their frequency, uniqueness and location within a paper.

Disclaimer: I considered only journal papers that I could access and whose authors did not include my colleagues. To examine a broad range of my publications (67 in total), I looked at a maximum of two citing papers per cited publication.

Professor Adrian Barnett is a statistician who works in meta-research at the Queensland University of Technology, Brisbane. He tweets @aidybarnett.
