This talk will discuss how most quantitative approaches to research evaluation do not yet fully capture the true disciplinary and public influence of scientific research. I suggest how machines can aid human judgement and improve context for decision-making, in a scalable manner.
Counting citations in the peer-reviewed literature has long been used to understand a paper’s influence on a field, usually as a complement to traditional peer review in evaluation scenarios such as promotion and tenure. Recently, field-normalized citation metrics have been suggested as a more accurate way to measure disciplinary research performance [1].
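To make the idea concrete: field normalization divides a paper’s raw citation count by the average citation count of papers in the same field and publication year, so a score above 1.0 means “cited more than the field average.” The sketch below is a minimal illustration with hypothetical data, not any particular published indicator such as the RCR:

```python
# Minimal sketch of a field-normalized citation score (hypothetical data).
# Each paper's citations are divided by the mean citations of its
# (field, year) peer group; 1.0 = exactly the field-year average.
from collections import defaultdict

def field_normalized_scores(papers):
    """papers: list of dicts with 'id', 'field', 'year', 'citations'."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, paper count]
    for p in papers:
        key = (p["field"], p["year"])
        totals[key][0] += p["citations"]
        totals[key][1] += 1
    scores = {}
    for p in papers:
        total, n = totals[(p["field"], p["year"])]
        mean = total / n
        scores[p["id"]] = p["citations"] / mean if mean else 0.0
    return scores

papers = [
    {"id": "a", "field": "ecology", "year": 2015, "citations": 30},
    {"id": "b", "field": "ecology", "year": 2015, "citations": 10},
    {"id": "c", "field": "math", "year": 2015, "citations": 5},
]
# The ecology-2015 mean is 20, so "a" scores 1.5 and "b" scores 0.5,
# even though "a" out-cites "c" in raw counts.
```

Real indicators differ mainly in how the peer group and expected citation rate are defined (for example, the RCR uses a paper’s co-citation network rather than a fixed field label).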
However, quantitative approaches to understanding research impact are not without their critics. At best, research metrics have been suggested as an evaluation tool that can only “support, not supplant, expert judgement [of research]” [2]. At worst, it has been proposed that altmetrics and “other bibliometric nightmares” are “for people who aren’t prepared to take the time (or lack the mental capacity) to evaluate research by reading about it” [3].
In this talk, I will briefly address some of the benefits and limitations of current attempts to improve quantitative scholarly impact indicators by providing numerical context. Though these metrics improve on the status quo, they by and large still do not:
Account for the content of discussions around research (e.g. citations that confirm or refute previous findings);
Reflect the stakeholder groups discussing research (e.g. citations from policymakers using research to develop conservation laws); nor
Reflect the implicit values that drive research practices (e.g. metrics that accurately capture the impact of open source research software development).
However, outright rejection of all impact indicators also misses the mark. Given the ever-increasing volume of published research, it is impossible to rely solely on expert peer reviewers to identify influential work. Clearly, evaluation practices must scale, using thoughtful, humane machine learning approaches to aid in impact assessment.
I will conclude by sharing solutions to the above limitations from across academia and industry.
[1] Hutchins BI, Yuan X, Anderson JM, Santangelo GM (2016). Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level. PLoS Biol 14(9): e1002541. https://doi.org/10.1371/journal.pbio.1002541
[2] Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., … Johnson, B. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Higher Education Funding Council for England, London.
[3] Colquhoun, D. (2014). Why you should ignore altmetrics and other bibliometric nightmares. DC’s Improbable Science, 16 January 2014. Retrieved from http://www.dcscience.net/2014/01/16/why-you-should-ignore-altmetrics-and-other-bibliometric-nightmares/