Autonomous clinical systems shift traditional lines of responsibility from individual practitioners to networks of designers, deployers, institutions, and regulators. Accountability will not rest with a single actor but will take the form of a layered set of obligations: designers and manufacturers who create algorithms, clinicians who use and interpret outputs, health systems that deploy and monitor tools, and public bodies that certify safety. Luciano Floridi (University of Oxford) has argued for distributed responsibility in AI ethics, emphasizing that moral and legal duties must map to each actor’s capacity to prevent harm. David W. Bates (Brigham and Women's Hospital and Harvard Medical School) has highlighted the importance of system-level safety and incident reporting in health information technology, lessons that translate directly to autonomous AI.
Regulatory and legal frameworks
Regulatory agencies already shape responsibility through premarket review, postmarket surveillance, and labeling requirements. Where models make autonomous recommendations that affect diagnosis or therapy, regulation will determine whether liability rests primarily with manufacturers for design defects, with clinicians for misuse, or with health systems for inadequate integration and oversight. Courts and policymakers will weigh product liability, malpractice doctrines, and statutory standards. Legal outcomes will be nuanced, turning on the transparency of model development, the provenance of training data, and the foreseeability of harm.
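Because legal outcomes may hinge on documented provenance, a deploying institution benefits from recording model lineage in a structured, machine-readable form that can be produced during an incident investigation. The sketch below is a minimal, hypothetical illustration in Python: the field names (model_version, training_data_sources, independent_validation_sites, and so on) are assumptions for illustration, not a regulatory schema, though they echo the model-card style of reporting.

```python
# Hypothetical provenance record for a deployed clinical model.
# Field names are illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class ModelProvenanceRecord:
    model_name: str
    model_version: str
    manufacturer: str
    training_data_sources: list[str]          # datasets and their origins
    training_data_cutoff: date                # last date covered by training data
    known_population_gaps: list[str]          # under-represented groups, if any
    independent_validation_sites: list[str]   # external sites that validated the model
    intended_use: str                         # labeled clinical indication
    deployment_site: str

    def to_json(self) -> str:
        """Serialize for audit trails and incident investigations."""
        record = asdict(self)
        record["training_data_cutoff"] = self.training_data_cutoff.isoformat()
        return json.dumps(record, indent=2)


# Hypothetical example values.
record = ModelProvenanceRecord(
    model_name="sepsis-risk",
    model_version="2.3.1",
    manufacturer="ExampleVendor",
    training_data_sources=["Site A EHR 2015-2021", "Public ICU registry"],
    training_data_cutoff=date(2021, 12, 31),
    known_population_gaps=["patients under 18"],
    independent_validation_sites=["Site B", "Site C"],
    intended_use="Early warning for sepsis in adult inpatients",
    deployment_site="Site D",
)
print(record.to_json())
```

Keeping such a record alongside each deployment makes foreseeability arguments tractable: a court or regulator can see what the manufacturer and deployer knew about data gaps and validation coverage at the time of release.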
Causes of accountability gaps and practical consequences
Gaps arise from opaque algorithms, fragmented procurement, and commercial incentives that prioritize performance over interpretability. Data bias and unequal representation can shift harms toward marginalized communities, with territorial and cultural consequences when tools developed in high-income settings are exported without local validation. The environmental costs of training large models also create indirect harms that may factor into institutional responsibility for sustainability. The practical consequences include patient injury, erosion of trust in care, and increased regulatory burden. Health systems that adopt robust monitoring, explainability standards, and clear contractual risk allocation reduce these harms, as the health IT safety literature shows.
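To make "robust monitoring" concrete, the sketch below shows one way a deploying health system might track model performance by demographic subgroup and flag drift for review, directly addressing the risk that harms concentrate in under-represented groups. It is a minimal sketch under stated assumptions: the tolerance threshold, subgroup keys, and escalation path are hypothetical placeholders, not an established standard.

```python
# Minimal sketch of subgroup performance monitoring for a deployed model.
# Thresholds and subgroup definitions are illustrative assumptions.
from collections import defaultdict


def subgroup_accuracy(outcomes):
    """outcomes: iterable of (subgroup, prediction, ground_truth) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, prediction, truth in outcomes:
        total[subgroup] += 1
        correct[subgroup] += int(prediction == truth)
    return {g: correct[g] / total[g] for g in total}


def flag_drift(current, baseline, tolerance=0.05):
    """Return subgroups whose accuracy fell more than `tolerance`
    below the validated baseline -- candidates for incident review."""
    return [
        g for g, acc in current.items()
        if acc < baseline.get(g, 1.0) - tolerance
    ]


# Hypothetical weekly check: baselines come from independent validation.
baseline = {"group_a": 0.91, "group_b": 0.89}
this_week = subgroup_accuracy([
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
])
for group in flag_drift(this_week, baseline):
    print(f"Performance drift in {group}: escalate to safety committee.")
```

Tying such checks to a named escalation path (here, a safety committee) is what converts monitoring data into the incident-reporting discipline that the health IT safety literature recommends.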
Ultimately, a combination of clearer legal standards, stronger regulatory frameworks, contractual risk-sharing, and clinical governance will determine who is accountable in practice. Strengthening reporting systems, requiring independent validation, and embedding human oversight can preserve clinician judgment while holding designers and institutions liable for foreseeable risks. This multi-stakeholder approach aligns ethical theory with clinical safety principles to protect patients across diverse cultural and territorial contexts.
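One possible pattern for "embedding human oversight" is sketched below: an autonomous recommendation is never applied directly but is wrapped in a record that requires an identified clinician's sign-off, producing the audit trail that later accountability determinations need. The class, field, and decision names are hypothetical assumptions, not a standard interface.

```python
# Sketch of a human-oversight gate: autonomous outputs require an
# identified clinician's decision before they take clinical effect.
# Class and field names are hypothetical, not a standard interface.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class GatedRecommendation:
    patient_id: str
    model_output: str                  # the autonomous recommendation
    model_version: str                 # ties the decision to a provenance record
    created_at: datetime
    reviewed_by: Optional[str] = None  # clinician identifier after review
    decision: Optional[str] = None     # "accepted", "modified", or "rejected"

    def sign_off(self, clinician_id: str, decision: str) -> None:
        """Record the accountable human decision; nothing acts before this."""
        if decision not in {"accepted", "modified", "rejected"}:
            raise ValueError(f"unknown decision: {decision}")
        self.reviewed_by = clinician_id
        self.decision = decision

    @property
    def actionable(self) -> bool:
        return self.decision in {"accepted", "modified"}


# Hypothetical usage: downstream systems act only after sign-off.
rec = GatedRecommendation(
    patient_id="example-001",
    model_output="Start broad-spectrum antibiotics",
    model_version="2.3.1",
    created_at=datetime.now(timezone.utc),
)
rec.sign_off(clinician_id="dr-example", decision="modified")
assert rec.actionable
```

The design choice here is that the gate records who decided and what they decided, so responsibility for each action maps to an identifiable actor, in keeping with the distributed-responsibility view discussed above.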