Confidential computing prevents exposure of plaintext during processing by isolating code and data inside hardware-protected environments. Hardware vendors built trusted execution environments (TEEs) to address the blind spot that remains when data is encrypted at rest and in transit but must be decrypted for computation. Victor Costan and Srinivas Devadas at MIT analyzed the enclave model behind Intel Software Guard Extensions (SGX), documenting its design and threat model and helping practitioners understand both its guarantees and its limits.
Protecting data while in use
A core mechanism is memory isolation and encryption enforced by the processor. Enclaves or secure virtual machines keep sensitive pages encrypted outside the CPU and decrypt them only inside a protected boundary. That boundary relies on hardware-rooted keys and mechanisms such as attestation, which lets a remote party verify that specific code is running inside a genuine enclave before provisioning secrets. Major cloud providers now offer services that leverage these hardware features, so tenants can run confidential workloads without exposing data to the cloud operator. This reduces the trust customers must place in the provider's administrative controls and gives them a technical means of complying with policies that restrict who may view raw data.
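The attestation-then-provision flow described above can be sketched as follows. This is a deliberately simplified simulation: real schemes such as SGX attestation use asymmetric signatures chained to a vendor root of trust, whereas here an HMAC under a stand-in "hardware root key" plays that role, and all names (`HW_ROOT_KEY`, `enclave_quote`, `verify_and_provision`) are hypothetical.

```python
# Simplified simulation of remote attestation and secret provisioning.
# NOT a real protocol: an HMAC stands in for a hardware-backed signature.
import hashlib
import hmac
import os

HW_ROOT_KEY = b"simulated-hardware-root-key"  # in reality, fused into the CPU

def measure(code: bytes) -> str:
    """Measurement: a hash of the code loaded into the enclave."""
    return hashlib.sha256(code).hexdigest()

def enclave_quote(code: bytes, nonce: bytes):
    """Enclave side: bind the code measurement to the verifier's fresh nonce."""
    m = measure(code)
    sig = hmac.new(HW_ROOT_KEY, m.encode() + nonce, hashlib.sha256).digest()
    return m, sig

def verify_and_provision(m, sig, nonce, expected_measurement, secret):
    """Verifier side: release the secret only to the expected, genuine code."""
    good = hmac.new(HW_ROOT_KEY, m.encode() + nonce, hashlib.sha256).digest()
    if hmac.compare_digest(sig, good) and m == expected_measurement:
        return secret        # provision the key into the attested enclave
    return None              # refuse: unknown or tampered code

code = b"audited analytics code"
nonce = os.urandom(16)                      # freshness against replay
m, sig = enclave_quote(code, nonce)
key = verify_and_provision(m, sig, nonce, measure(code), b"db-decryption-key")
```

The nonce prevents replay of old quotes, and comparing the measurement against an expected value is what ties the secret to specific audited code rather than to the machine in general.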
Causes, consequences, and caveats
The push for confidential computing stems from regulatory, economic, and social drivers. Data protection regimes in many jurisdictions increase demand for technical controls that mitigate insider access risk, and enterprises in sectors such as healthcare and finance require stronger assurances before migrating sensitive analytics to public clouds. Culturally, organizations and communities with historical concerns about surveillance or cross-border data flows may find confidential computing helpful for preserving autonomy while participating in cloud-based research or services. Territory-specific data residency laws also make in-use protections attractive because they reduce the need to hold plaintext within particular physical borders.
One consequence is the ability to run multi-party analytics and collaborative research without exposing raw inputs: joint medical studies, for example, or federated machine learning in which each participant keeps its data confidential yet contributes to a shared model. These advances change trust relationships between institutions, vendors, and end users by shifting some trust to hardware vendors and their supply chains.
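A minimal sketch of one technique used in such settings is secure aggregation: each participant adds pairwise random masks that cancel in the sum, so the aggregator learns only the total update, never any individual contribution. The function name and the toy model updates below are illustrative, not from any particular framework.

```python
# Secure-aggregation sketch: pairwise masks cancel in the sum, so the
# server sees only the aggregate of the participants' model updates.
import random

def pairwise_masks(n_parties, dim, seed=0):
    """For each pair (i, j), party i adds a random mask and j subtracts it."""
    rng = random.Random(seed)   # stands in for pairwise-agreed shared seeds
    masks = [[0.0] * dim for _ in range(n_parties)]
    for i in range(n_parties):
        for j in range(i + 1, n_parties):
            m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += m[k]
                masks[j][k] -= m[k]
    return masks

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # hypothetical local updates
masks = pairwise_masks(len(updates), dim=2)
masked = [[u + m for u, m in zip(up, mk)] for up, mk in zip(updates, masks)]
total = [sum(col) for col in zip(*masked)]        # masks cancel: the true sum
```

Each `masked` vector looks random on its own, yet `total` equals the sum of the raw updates; a confidential-computing deployment can instead (or additionally) perform the aggregation inside an attested enclave.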
Important caveats remain. Academic and industry researchers have demonstrated side-channel and implementation attacks against enclaves, and rigorous analysis is required to understand residual risks; Costan and Devadas highlighted several architectural considerations that affect real-world guarantees. Hardware bugs, firmware updates, and the complexity of remote attestation introduce operational burdens. There are also performance and environmental trade-offs: the added cryptographic and isolation overheads can increase CPU utilization and energy consumption, which matters for both cost and sustainability.
In practice, confidential computing is most effective when combined with traditional controls: strong encryption at rest and in transit, audited code running inside the enclave, robust attestation workflows, and careful key management. Used judiciously, it materially improves protection of data in use but does not eliminate the need for comprehensive security practices and independent verification of hardware and software implementations.
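The layering described above can be sketched as attestation-gated key release combined with encryption at rest. Everything here is hypothetical (`ToyKMS`, the measurement check, the key names), and the XOR keystream is a toy stand-in for an authenticated cipher such as AES-GCM; it is for illustration only.

```python
# Illustrative layering of controls: data encrypted at rest with a data key,
# and a key-management service that releases that key only to code whose
# measurement matches an allow-list. The toy XOR "cipher" is NOT secure.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream (demo only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class ToyKMS:
    """Hypothetical KMS that gates key release on an attested measurement."""
    def __init__(self, allowed_measurement: str, data_key: bytes):
        self.allowed = allowed_measurement
        self.data_key = data_key

    def release_key(self, measurement: str):
        return self.data_key if measurement == self.allowed else None

enclave_code = b"audited analytics code"
measurement = hashlib.sha256(enclave_code).hexdigest()
kms = ToyKMS(measurement, data_key=b"data-encryption-key")

ciphertext = keystream_xor(b"data-encryption-key", b"patient records")  # at rest
key = kms.release_key(measurement)           # released only after attestation
plaintext = keystream_xor(key, ciphertext)   # decrypted only inside the enclave
```

The point is the composition: at-rest encryption protects the stored data, the measurement check ties key release to audited code, and the enclave boundary protects the plaintext while it is in use.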