Conservation Biology is not a single field of science: how to judge citation impact properly
(Oral)
Janne-Tuomas Seppänen
Conservation Biology is not a single field of science. It is a multitude of very different fields of science. It ranges from continental-scale distribution change simulations using planetary-scale climate data, to understanding how a frog responds to calls, to economic optimization models for society's resource extraction, to teasing apart molecules or atomic isotope ratios to infer the past and the present, and much more.
Yet, items of research - and individual scientists - in all these fields are published on the same platforms, compete for the same scarce attention of peers and of society at large, for the same but even scarcer funding, and for the same but mythologically scarce tenures. Somehow, we need ways to compare the science of frog calls to the science of forestry offset policies, to the science of bear population genetics, and to everything else.
Citation metrics based on co-citation networks are a very promising way to achieve impact comparison between items and people in different fields of science. The first implementation of this idea [1] is fatally flawed, as it allows the diseased concept of "a journal" into the algorithm, but co-citation network comparison can be implemented purely on the basis of the research items themselves. I will briefly introduce my Co-citation Percentile Ratio (CPR) algorithm and give examples of how it ranks research items from different corners of Conservation Biology.
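The abstract itself does not describe how CPR is computed, so the sketch below is only one plausible, purely illustrative reading of the general idea stated above: define an article's comparison group through co-citation (the items that appear alongside it in reference lists) rather than through a journal, and express its citation count as a percentile within that group. The function names, the neighbourhood definition, and the percentile formula are all assumptions made for illustration, not the author's algorithm.

```python
from bisect import bisect_left

def cocitation_neighbourhood(focal, citing_map):
    """Collect articles that appear together with `focal` in at least one
    reference list. `citing_map` maps each citing paper to the set of
    articles it cites. (Illustrative neighbourhood definition only.)"""
    neighbours = set()
    for refs in citing_map.values():
        if focal in refs:
            neighbours.update(refs)
    neighbours.discard(focal)
    return neighbours

def citation_percentile(focal, neighbours, counts):
    """Percentage (0-100) of co-citation neighbours with strictly fewer
    citations than `focal`. `counts` maps article -> citation count."""
    if not neighbours:
        return None
    peer_counts = sorted(counts.get(a, 0) for a in neighbours)
    below = bisect_left(peer_counts, counts.get(focal, 0))
    return 100.0 * below / len(peer_counts)

# Toy example: two citing papers, four cited articles.
citing_map = {"paperX": {"A", "B", "C"}, "paperY": {"A", "C", "D"}}
counts = {"A": 42, "B": 5, "C": 17, "D": 3}
print(citation_percentile("A", cocitation_neighbourhood("A", citing_map), counts))  # 100.0
```

Whatever the actual mechanics of CPR turn out to be, the point this sketch tries to capture is that the comparison set is defined by what gets cited together, not by where something was published, which is what would allow items from very different corners of Conservation Biology to be placed on a common scale.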
Citation analysis at the article and individual level is slowly but surely delivering the welcome death of the journal impact factor, and promises to be a better way to judge academic importance. However, we need to be careful in adopting the new tools, and not let them grow into monsters worse than the one we just killed.
In particular, we need to recognize what "a citation" is, and ideally develop ways to give it more nuanced, richer, but still machine-readable meaning. At present, every citation is treated as a source that contributed to the science that was done, when in reality most citations are tools for building the author's narrative [2]. But that does not mean citation analysis is useless; it just means that the inferences we draw from it need to take into account what "a citation" really is.
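As one illustration of what "more nuanced, richer, but still machine-readable meaning" could look like, a citation record might carry a typed role, loosely in the spirit of existing typed-citation vocabularies such as the Citation Typing Ontology (CiTO). The record below is a hypothetical sketch, not a format used by any current metric, and the identifiers are placeholders.

```python
# Hypothetical typed-citation record (illustrative only; identifiers are placeholders).
typed_citation = {
    "citing_article": "doi:10.1234/example.citing",
    "cited_article": "doi:10.1234/example.cited",
    # Role the citation plays in the citing paper's narrative,
    # e.g. "uses_method_in", "supports", "disputes", "cites_as_evidence".
    "citation_role": "uses_method_in",
    "in_section": "Methods",
}
```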
[1] Hutchins BI, Yuan X, Anderson JM, Santangelo GM (2016) Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level. PLoS Biology. https://doi.org/10.1371/journal.pbio.1002541
[2] MacRoberts MH, MacRoberts BR (2017) The mismeasure of science: Citation analysis. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.23970
INTRO: Interesting discussion on the use, definition, and impacts of citations.
MERITS: Great topic for discussion.
CRITIQUE: I enjoyed your modern writing style and frankness.
DISCUSSION: Great topic for discussion.
- - -
INTRO: This position paper makes the case that existing measures of disciplinary impact (e.g. journal impact factor, relative citation ratio) cannot apply to a field as diverse as conservation biology. The author suggests that an approach called the 'Co-citation Percentile Ratio (CPR) algorithm' is a better way to understand impact in this field.
MERITS: The argument that citations need to be understood for what they are--primarily, a way to craft an evidence-based narrative--resonates with me. Yet while there have been a number of attempts over the years to better contextualize citations using methods like sentiment analysis, no approach or tool has quite taken root in practice the way the journal impact factor has. Much to the chagrin of metrics creators and tool builders alike, few end users want such nuance and context in their data--they often want a single-number indicator that allows them to compare apples to apples, however imperfectly. (Though few would admit to that publicly!) The CPR approach could, in theory, resuscitate (pardon the pun) attempts to 'do metrics better' in the field of conservation biology and beyond, dependent of course upon the validity of the algorithm (which is not discussed in the abstract, likely due to space limitations).
CRITIQUE: There are two weaknesses to this abstract that I would encourage the author to consider in preparing his full presentation: 1) arguments against existing measures like the journal impact factor and relative citation ratio seem to be grounded in the author's opinion rather than in empirical research; 2) the mechanics of the CPR algorithm are not explained, leaving one only to guess at the approach taken to calculate it. I would also encourage the author to briefly address how deeply entrenched research evaluation practices are in the academy, and how one might successfully encourage adoption of CPR both in conservation biology and in even more conservative disciplines.
DISCUSSION: Were the author to present a viable algorithmic approach that accurately reflects impact in a field as diverse as conservation biology, resulting in a single-number indicator that is easily understandable by metric validators (those who wish to test the algorithm for validity) and end users (e.g. department chairs or grant review committees) alike, it would undoubtedly change research evaluation practices in all fields for the better.
I am curious to see how the author's approach to developing this algorithm compares to that of the NIH in developing the Relative Citation Ratio, and to the Dimensions team's experience [1] in developing other field-level citation indicators, especially in terms of transparency and replicability; I understand that the creators of these indicators found the process to be less straightforward than one might guess.