Abstract
In many scientific fields, post-publication surveys of the literature find that peer reviewers routinely overlook methodological flaws and statistical errors, avoid reporting suspected instances of fraud, and commonly reach a level of agreement barely exceeding what would be expected by chance. Other studies expose the extent of gender bias in peer review, and questionable editorial protocols that lack transparency. Anecdotally, editors also report increasing difficulty in recruiting reviewers. What can be done about these well-known problems? This talk proposes an alternative model of peer review, drawing on the expert elicitation, deliberation and decision-making literatures and our experience running the repliCATS project. There is perhaps a limited role for AI in an overhaul of peer review, but that is not the focus of the talk.
Now in its seventh year, the repliCATS project has evaluated over 4,000 published social science articles across 8 disciplines, including psychology, economics, and education, as well as hundreds of preprints on PsyArXiv. For each paper, a diverse group of experts uses a structured deliberation protocol to discuss and forecast the likely replicability of the research findings and to make a variety of other judgements about the credibility of the evidence presented. This talk will present our approach to evaluating research and, for cases where we have outcomes from actual replication studies, data on the accuracy of our forecasts.
Speaker
Fiona Fidler is Professor and Head of the History & Philosophy of Science (HPS) Program at the University of Melbourne. She is broadly interested in how experts, especially scientists, make decisions and change their minds. Her past research has examined how methodological change occurs in different disciplines, including psychology, medicine and ecology. She is also interested in methods for eliciting reliable expert judgements to improve decision making, including peer review decisions. She has been active in establishing the Metascience community in Australia, and was the founding president of the Association for Interdisciplinary Metaresearch and Open Science (AIMOS). She is co-director of the MetaMelb Research Initiative at the University of Melbourne, and lead PI of the repliCATS project (Collaborative Assessments for Trustworthy Science).