Scientists look to AI for help in peer review


22 February 2017

In a cover article in the March 2017 issue of Communications of the ACM, the Association for Computing Machinery's flagship magazine, computer scientists from the University of Bristol review how state-of-the-art tools from machine learning and artificial intelligence are making inroads into automating parts of the academic peer-review process, and discuss opportunities for streamlining and improving the process further. As inventors of the SubSift system for profiling and matching papers and potential reviewers, Dr Simon Price and Prof Peter Flach from Bristol's Intelligent Systems Laboratory were invited by the CACM Editor-in-Chief to discuss a range of computational tools supporting the assignment of papers to reviewers, the calibration of reviewers' scores, and the assembly of peer-review panels.

Peer review is a cornerstone of the scientific publishing process. Papers submitted to academic journals or conferences are reviewed by experts on the topic -- the authors' 'peers' -- to decide whether the paper makes a significant contribution to the scientific literature and whether its claims are verifiable. Submitted papers are often substantially improved in the process, particularly for journals, where reviewing can take many iterations before the paper is deemed publishable. Price and Flach suggest thinking of a reviewer's job as "profiling" the paper in terms of its strong and weak points, and advocate separating the reviewing job proper from the eventual accept/reject decision: more "expert witness" than "judge". Profiling, matching and expert finding are key tasks that can be addressed using feature-based representations commonly used in machine learning.
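To illustrate the kind of feature-based representation the article refers to, the sketch below scores reviewer-paper matches by building TF-IDF bag-of-words vectors and comparing them with cosine similarity. This is a minimal, self-contained illustration of the general technique, not SubSift's actual implementation; the function names and preprocessing are assumptions.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a stand-in for whatever preprocessing
    # a real profiling system would use (stemming, stop-word removal, ...)
    return re.findall(r"[a-z]+", text.lower())

def tf_idf_vectors(docs):
    # Build a TF-IDF weighted bag-of-words vector for each document.
    # A reviewer's "document" could be their concatenated publication
    # titles and abstracts; a paper's is its own title and abstract.
    tokenized = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()                       # document frequency per term
    for counts in tokenized:
        df.update(counts.keys())
    vectors = []
    for counts in tokenized:
        total = sum(counts.values())
        vec = {t: (c / total) * math.log(n / df[t])
               for t, c in counts.items()}
        vectors.append(vec)
    return vectors

def cosine(u, v):
    # Cosine similarity between two sparse term-weight vectors
    shared = set(u) & set(v)
    num = sum(u[t] * v[t] for t in shared)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0
```

Ranking reviewers for a submission then amounts to sorting them by the cosine similarity between their profile vector and the submission's vector, giving a program chair a shortlist of candidate experts per paper.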

The SubSift system, short for 'submission sifting', was originally developed to support paper assignment at the 2009 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, and was subsequently generalized into a family of Web services and re-usable Web tools with funding from Jisc. The submission sifting tool composes several SubSift Web services into a workflow driven by a wizard-like user interface that takes the Program Chair through the paper-reviewer profiling and matching process via a series of Web forms. An alternative user interface to SubSift, supporting paper assignment for journals, was also built. Known as MLj Matcher in its original incarnation, this tool has been used since 2010 to support paper assignment for the Machine Learning journal edited by Flach, as well as for other journals.

More information about SubSift, the CACM article, and an accompanying video are available online.