Wearable devices with autonomous decision-making combine sensors, models, and actuators in close proximity to the human body. When those decisions cause harm, liability is rarely simple: responsibility can fall on multiple actors depending on control, foreseeability, and the applicable legal framework. Legal scholars and regulators emphasize three persistent fault lines: manufacturer liability, user responsibility, and software developer accountability.
Legal frameworks and implicated parties
Under traditional tort and product liability principles, designers and manufacturers who place defective products on the market can be liable for physical harm. Ryan Calo at the University of Washington has argued that existing law often treats robots and automated systems as products, making producers potentially responsible where defects or inadequate safeguards are present. Regulators in the European Union approach liability through instruments such as the Product Liability Directive and the General Data Protection Regulation, which shift burdens of proof and require transparency from firms, according to analyses by the European Commission. Where human operators retain meaningful control, courts frequently assess negligence or failure to follow usage instructions. Where software updates or third-party models drive behavior, developers and platform providers may bear responsibility under principles of failure to warn or defective design.
Causes, consequences, and contextual nuances
Causes of harm range from sensor failure and biased training data to unforeseen interactions with environments and incomplete specifications of acceptable behavior. Kate Crawford at New York University’s AI Now Institute has highlighted how biased data and opaque development practices escalate risks for marginalized groups, making harms not only physical but also cultural and social. Consequences include medical injury, privacy breaches, workplace discrimination, and erosion of trust in assistive technologies. Jurisdictional differences matter: common law systems emphasize case-by-case fault assessment, while civil law and EU regimes feature stronger statutory duties and strict liability approaches for certain products. In low-resource settings, limited regulatory oversight and informal distribution chains can leave harmed individuals without effective remedies.
Determinations of liability therefore hinge on evidence of control, foreseeability, and compliance with safety and disclosure obligations. Remedies may require legal reforms that combine updated product standards, clearer obligations for model explainability, and insurance mechanisms to compensate victims. Nuanced policy design must account for cultural impacts and unequal exposure to risk so that legal responsibility maps onto real-world power and capabilities rather than technological complexity alone.