Reproducibility in biomedical research is uneven and often lower than stakeholders expect. John Ioannidis at Stanford University argued in a widely cited 2005 essay that statistical biases, small sample sizes, and selective reporting make many published findings likely to be false, a claim that shifted how scientists evaluate evidence. Empirical signals from multiple sources reinforce that concern: surveys and targeted replication efforts show that failures to reproduce results are common across laboratory and clinical contexts, although the degree varies by field and methodology.
Estimates and evidence
A broad survey reported by Monya Baker in Nature found that a majority of researchers had experienced difficulty reproducing others’ experiments, and that more than half had failed to reproduce their own work, indicating systemic problems rather than isolated incidents. In preclinical oncology, C. Glenn Begley at Amgen reported that only a small fraction of landmark findings could be reproduced in company laboratories, highlighting a gap between academic discovery and industrial validation. The Reproducibility Project: Cancer Biology, coordinated by the Center for Open Science, documented practical and logistical barriers to replication even when original materials and protocols were available, underscoring that reproducibility is as much an operational challenge as a statistical one.
Causes and systemic incentives
Multiple interlocking causes explain current reproducibility gaps. Statistical issues such as low power and publication bias interact with researcher practices including selective reporting and analytical flexibility. Structural incentives favor novel, positive findings over careful confirmatory work, shaping career advancement and funding decisions. Resource and infrastructure differences create regional and cultural variation: laboratories in resource-limited settings may lack access to reagents, standardized equipment, or training that supports reproducible methods, while commercial laboratories face different pressures that can prioritize throughput over detailed methodological disclosure. Limitations in materials sharing, incomplete methodological reporting, and the absence of raw data or code further impede independent verification.
Consequences and responses
Consequences extend from inefficient use of research funding to delayed translation of treatments and erosion of public trust in biomedical science. Failures to reproduce preclinical findings can lead to costly downstream failures in drug development and clinical trials. Major institutions and funders have responded with concrete measures. The National Institutes of Health introduced policies and guidance aimed at improving rigor in experimental design and transparency in reporting. The Center for Open Science promotes open data, preregistration, and registered reports to reduce selective reporting and analytic flexibility. Journals and professional societies increasingly emphasize checklists, methodological standards, and data availability requirements.
Progress is incremental and uneven. Reproducibility is best understood as a spectrum influenced by study design, discipline, and the social and material context of research. Strengthening reproducibility will require coordinated cultural change in incentives, routine methodological transparency, and investment in training and infrastructure across academic, industrial, and global research environments.
How reproducible are current biomedical research findings?
February 25, 2026 · By Doubbit Editorial Team