Regulation of who “owns” data from wearable devices is not governed by a single global rule; it is a patchwork of legal regimes, consumer-protection enforcement, and contract law. Major regulators treat wearable data through existing frameworks that emphasize data control, processing responsibilities, and consumer rights rather than a simple property model of ownership. I. Glenn Cohen of Harvard Law School has written about gaps between health law and emerging digital health technologies that make questions of ownership complex and context-dependent.
Regulatory landscape
In the European Union, the General Data Protection Regulation embodies a rights-based approach: data subjects have rights to access, portability, correction, and deletion. Andrea Jelinek of the European Data Protection Board has emphasized enforcement focused on transparency and lawful bases for processing rather than declaring user ownership in property terms. National data protection authorities implement GDPR rules and can compel companies to change practices, impose fines, and require data portability when processing relies on consent or contract.
In the United States, regulation is sectoral and enforcement-driven. The Federal Trade Commission uses its consumer protection authority to police unfair or deceptive practices around privacy and security, requiring companies to honor their privacy promises. For health-related wearable data that enters the clinical domain, the U.S. Department of Health and Human Services Office for Civil Rights enforces HIPAA, which governs covered entities’ obligations but does not confer on consumers a general ownership right over all wearable-generated data. State statutes such as the California Consumer Privacy Act create consumer rights resembling control over personal data, including rights to access and deletion, but still stop short of declaring data to be property.
Causes and consequences
The current regulatory patchwork arises from the rapid diffusion of wearables, the diversity of data types they collect, and historical legal frameworks designed for different technologies. Wearable devices produce biometric, behavioral, and location data that can be both deeply personal and commercially valuable; regulators therefore focus on risk mitigation, informed consent, and accountability. Consequences of this approach include variability in user protections across jurisdictions and uncertainty for consumers about whether they can transfer, sell, or restrict secondary uses of their data.
Practically, companies often rely on terms of service and privacy notices to allocate rights; courts and regulators then interpret those contracts under consumer law. Where enforcement is strong, companies may change practices to offer greater transparency and portability. Where enforcement is weak, commercial actors may monetize aggregated datasets, raising privacy and equity concerns for populations whose data are overrepresented in the datasets used to train algorithms.
Cultural and territorial nuances shape outcomes: European frameworks prioritize individual dignity and data subject rights, while the U.S. emphasis on sectoral rules creates gaps for commercially held wearable data. Indigenous communities and marginalized groups face heightened risks when data about movement, health, or cultural practices are collected without meaningful consent or benefit-sharing, creating ethical and territorial sovereignty issues for regulators and policymakers.
Ultimately, regulators reframe the question of who “owns” wearable data into who may access, correct, delete, and decide uses of the data. That reframing is reflected in enforcement practice and academic commentary, which recommend stronger transparency, clearer contractual terms, and harmonized rights to reduce uncertainty and protect individuals’ interests.