
dc.contributor.author  Konkiel, Stacy
dc.identifier.citation  Konkiel, S. (2018). Towards greater context for scholarly metrics. 5th European Congress of Conservation Biology. doi: 10.17011/conference/eccb2018/108206
dc.description.abstract
This talk will discuss how most quantitative approaches to evaluating research do not yet fully capture the true disciplinary and public influence of science research. I suggest how machines can aid human judgement and improve context for decision-making, in a scalable manner.

Counting citations in the peer-reviewed literature has long been used to understand a paper’s influence on a field, usually as a complement to traditional peer review practices in evaluation scenarios such as promotion and tenure. Recently, field-normalized citation metrics [1] have been suggested as a more accurate way to understand disciplinary research performance. However, quantitative approaches to understanding research impact are not without their critics. At best, research metrics have been suggested as an evaluation tool that can only “support, not supplant, expert judgement [of research]” [2]. At worst, it has been proposed that altmetrics and “other bibliometric nightmares” are “for people who aren’t prepared to take the time (or lack the mental capacity) to evaluate research by reading about it” [3].

In this talk, I will briefly address some of the benefits and limitations of current attempts to improve quantitative scholarly impact indicators by providing numerical context. Though improvements over the status quo, these metrics by and large still do not:

  • Account for the content of discussions around research (e.g. citations that confirm or refute previous findings);
  • Reflect the stakeholder groups discussing research (e.g. citations from policymakers using research to develop conservation laws); nor
  • Reflect the implicit values that drive research practices (e.g. metrics that accurately capture the impact of open source research software development).

However, outright rejection of all impact indicators also misses the mark. With the ever-increasing volume of published research, it is impossible to rely only upon expert peer reviewers to identify influential research. Clearly, evaluation practices must scale using thoughtful, humane machine learning approaches to aid in impact assessment. I will conclude by sharing solutions to the above limitations from across academia and industry.

References
[1] Hutchins, B. I., Yuan, X., Anderson, J. M., & Santangelo, G. M. (2016). Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level. PLoS Biology, 14(9), e1002541.
[2] Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., … Johnson, B. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Higher Education Funding Council for England, London.
[3] Colquhoun, D. (2014). Why you should ignore altmetrics and other bibliometric nightmares. DC’s Improbable Science, 16 January 2014. Retrieved from
dc.publisher  Open Science Centre, University of Jyväskylä
dc.rights  CC BY 4.0
dc.title  Towards greater context for scholarly metrics
dc.type.coar  conference paper not in proceedings
dc.rights.copyright  © the Authors, 2018
dc.relation.conference  ECCB2018: 5th European Congress of Conservation Biology. 12th–15th of June 2018, Jyväskylä, Finland

This item appears in the following Collection(s)

  • ECCB 2018 [712]
    5th European Congress of Conservation Biology. 12th - 15th of June 2018, Jyväskylä, Finland

