TY - JOUR
T1 - Reproducibility in machine-learning-based research
T2 - Overview, barriers, and drivers
AU - Semmelrock, Harald
AU - Ross-Hellauer, Tony
AU - Kopeinik, Simone
AU - Theiler, Dieter
AU - Haberl, Armin
AU - Thalmann, Stefan
AU - Kowald, Dominik
N1 - Publisher Copyright:
© 2025 The Author(s). AI Magazine published by John Wiley & Sons Ltd on behalf of Association for the Advancement of Artificial Intelligence.
PY - 2025
Y1 - 2025
N2 - Many research fields are currently reckoning with issues of poor levels of reproducibility. Some label it a “crisis,” and research employing or building machine learning (ML) models is no exception. Issues including lack of transparency, data or code, poor adherence to standards, and the sensitivity of ML training conditions mean that many papers are not even reproducible in principle. Where they are, though, reproducibility experiments have found worryingly low degrees of similarity with original results. Despite previous appeals from ML researchers on this topic and various initiatives from conference reproducibility tracks to the ACM's new Emerging Interest Group on Reproducibility and Replicability, we contend that the general community continues to take this issue too lightly. Poor reproducibility threatens trust in and integrity of research results. Therefore, in this article, we lay out a new perspective on the key barriers and drivers (both procedural and technical) to increased reproducibility at various levels (methods, code, data, and experiments). We then map the drivers to the barriers to give concrete advice for strategies for researchers to mitigate reproducibility issues in their own work, to lay out key areas where further research is needed in specific areas, and to further ignite discussion on the threat presented by these urgent issues.
UR - http://www.scopus.com/inward/record.url?scp=105002653591&partnerID=8YFLogxK
U2 - 10.1002/aaai.70002
DO - 10.1002/aaai.70002
M3 - Article
AN - SCOPUS:105002653591
SN - 0738-4602
VL - 46
JO - AI Magazine
JF - AI Magazine
IS - 2
M1 - e70002
ER -