Who is liable for AI-driven scientific errors?

Artificial intelligence systems increasingly assist or automate scientific inference, but when models err, the question of who is legally and ethically responsible remains complex and unsettled. Courts and regulators have so far largely treated AI like other tools, yet scholars argue that conventional doctrines do not map cleanly onto algorithmic systems. I. Glenn Cohen (Harvard Law School) has analyzed clinical AI through the lens of medical malpractice and regulatory premarket review, emphasizing that responsibility often turns on whether a human actor exercised reasonable judgment. Frank Pasquale (University of Maryland) has highlighted how opacity and corporate practices complicate accountability, calling for transparency obligations that assign fault more clearly.

Legal frameworks and accountable actors

Potentially liable parties include developers who design and train models, deployers such as research teams and institutions that apply models to data, manufacturers that package AI as devices or software, and publishers or funders when errors arise from dissemination. Sector-specific regulators shape liability outcomes: the U.S. Food and Drug Administration regulates software as a medical device and issues guidance for AI/ML-based tools, which can shift legal fault toward manufacturers when regulatory obligations are unmet. The European Commission's proposed AI Act assigns duties according to risk categories, anchoring legal responsibilities for high-risk scientific applications. Karen Yeung (University of Birmingham) argues that legal accountability must be paired with regulatory design to avoid gaps in which no single actor is clearly at fault.

Causes, consequences, and contextual nuance

Errors emerge from biased or unrepresentative training data, mis-specified objectives, overfitting, or researcher misuse. These technical failures interact with cultural pressures in science—publish-or-perish incentives, rushed peer review, and uneven reproducibility norms—amplifying the chance that flawed outputs affect downstream decision-making. Consequences vary: in biomedical research, incorrect model outputs can cause patient harm and undermine public trust; in environmental science, flawed projections can lead to misallocated conservation resources with territorial consequences for Indigenous and local communities. Such harms are not evenly distributed; marginalized groups often bear disproportionate risk when models reflect existing social biases.
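To make one of these technical failure modes concrete, the sketch below illustrates overfitting with a simple train-versus-held-out comparison. It assumes a scikit-learn environment and purely synthetic data as an illustrative stand-in for a research dataset; it is not drawn from any particular study or the frameworks discussed here.

```python
# Minimal, hypothetical sketch: an unconstrained decision tree memorizes its
# training data, so the gap between training and held-out accuracy is one
# concrete, auditable signal that reported performance may not generalize.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a research dataset (features X, labels y).
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)  # no depth limit: prone to memorization
model.fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"train accuracy: {train_acc:.2f}, held-out accuracy: {test_acc:.2f}, gap: {train_acc - test_acc:.2f}")
```

The point is not the particular library or model but that such routine validation checks produce documented evidence of care, the kind of record that liability analyses of who validated outputs would look for.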

Civil liability doctrines that often apply include negligence, when a party fails to meet a standard of care; product liability, for defective code released as a commercial product; and professional malpractice, for practitioners who rely unreasonably on AI. In practice, however, allocation of liability depends on the facts: who controlled inputs, who validated outputs, and whether warnings and safeguards were adequate. Cohen emphasizes that negligence assessments will hinge on evolving standards of care in the fields adopting AI.

Bridging these gaps requires multidisciplinary governance: regulators such as the U.S. Food and Drug Administration and lawmakers must pair rules with incentives for transparency, independent validation, and post-deployment monitoring. Scholars such as Pasquale and Yeung advocate legal and institutional reforms that align technical accountability with ethical and social responsibilities, ensuring that liability frameworks protect public welfare while encouraging responsible innovation.