The Agentic Web: Why Autonomous AI Demands Verifiable Infrastructure
40% of enterprise applications will ship with task-specific AI agents by 2026. That's up from less than 5% in 2025.1
75% of enterprises are adopting confidential computing. The market hits $350 billion by 2032.2 3
By 2029, Gartner projects 75% of operations on untrusted infrastructure will be secured by confidential computing.4
These numbers tell a story. Not about hype cycles or speculative trends. About a fundamental replatforming of how computation gets processed, protected, and verified.
The question isn't whether confidential computing becomes foundational infrastructure. That's already happening. The question is whether the agentic web, the emergent network of autonomous AI systems managing real assets and making consequential decisions, will be built on verifiable guarantees or on trust assumptions that collapse under adversarial conditions.
The Problem: AI Agents on Broken Infrastructure
AI agents are shipping. They manage portfolios, execute trades, coordinate workflows, and make decisions with real economic consequences. Gartner projects agentic AI could drive 30% of enterprise application software revenue by 2035, exceeding $450 billion.5
The infrastructure they run on was never designed for this.
Traditional cloud infrastructure requires trust. Trust in the provider. Trust in the hypervisor. Trust that your data isn't being observed, copied, or manipulated. For AI agents operating autonomously, especially those handling financial assets, personal data, or enterprise workflows, this trust model doesn't hold.
The security numbers are brutal:
- Only 6% of organizations have an advanced AI security strategy.6
- 40%+ of agentic AI projects will be canceled by the end of 2027, driven by escalating costs, unclear value, and inadequate risk controls.5
- 40% of AI data breaches by 2027 will trace to improper cross-border GenAI use.7
- Shadow agents, unsanctioned AI tools deployed without IT approval, now exceed 50% of enterprise AI usage.6
One security analyst framed it directly: "An agent is always on, never sleeps, never eats; but if improperly configured, it can access the keys to the kingdom: privileged access to critical APIs, data, and systems, and it's implicitly trusted. If enterprises aren't as intentional about securing these agents as they are about deploying them, they're building a catastrophic vulnerability."6
The agentic web demands verifiable guarantees. Not promises.
The Transparency-Confidentiality Paradox
Blockchains solved half the trust equation. Transparency. Immutability. Verifiable execution. Every transaction auditable. Every state change provable. Exactly what autonomous agents need for accountability.
But blockchains achieve this through radical openness. Every computational step executes across thousands of nodes. Every byte of state gets replicated, inspected, permanently recorded. For value transfers and deterministic logic, this works. For AI inference, it creates an impossible architecture.
Consider what on-chain inference would actually require.
A single forward pass through a large language model: billions of floating-point operations across matrices with hundreds of millions of parameters. Ethereum's block gas limit sits at 60 million gas units. The math doesn't work. Not by a small margin. By orders of magnitude.
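For a sense of scale, here is a back-of-envelope sketch. The 7B-parameter model, the rough 2-FLOPs-per-parameter-per-token rule, and the absurdly generous assumption of 1 gas per floating-point operation are all illustrative, not measurements:

```python
# Back-of-envelope: why on-chain inference misses by orders of magnitude.
# Assumptions (illustrative): a 7B-parameter model, ~2 FLOPs per parameter per
# generated token, and a generously low 1 gas per FLOP.

PARAMS = 7e9                       # 7B-parameter model
FLOPS_PER_TOKEN = 2 * PARAMS       # ~14 billion FLOPs for a single token
BLOCK_GAS_LIMIT = 60_000_000       # Ethereum block gas limit cited above

blocks_per_token = FLOPS_PER_TOKEN / BLOCK_GAS_LIMIT
print(f"Full blocks of gas per token: {blocks_per_token:,.0f}")            # ~233 blocks

# At roughly 12 seconds per block, one token would occupy the chain for:
print(f"Chain time per token: {blocks_per_token * 12 / 60:.0f} minutes")   # ~47 minutes
```

One token, hundreds of blocks. A full response, hours of exclusive chain capacity. And that is before considering cost.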
Even if compute weren't a constraint, the transparency model breaks AI at a fundamental level.
Model weights become public state. Deploy inference logic to a smart contract and your weights are readable by anyone querying the chain. Model IP, training investments, competitive differentiation: all exposed.
User inputs become transaction data. Every prompt, every query, every sensitive document you send for processing gets broadcast to the mempool, visible to validators during execution, immutably recorded for permanent retrieval.
Validators see everything. This isn't a bug. It's how consensus works. Nodes must execute identical computations to agree on state. There is no encryption scheme that lets validators verify execution without seeing what they're executing.
The result: AI agents get forced off-chain, posting only results back to the blockchain. But this reintroduces exactly the trust assumptions blockchain was supposed to eliminate. When inference happens in an opaque environment, "verified on-chain" means nothing more than "someone claims this is the output." The chain becomes an expensive database for unverifiable assertions.
This is the paradox. Blockchain's transparency enables trust but destroys confidentiality. Off-chain execution preserves confidentiality but abandons verifiability.
The agentic web requires both. Until now, no architecture delivered them together.
Confidential Computing: The Third Pillar
The solution: Trusted Execution Environments. Hardware-isolated enclaves that protect code and data during execution. A TEE functions as a cryptographic vault embedded directly into the CPU and GPU. Computation happens in isolation from the host system, the hypervisor, and even the infrastructure operator.
Gartner defines confidential computing as technology that "uses hardware-based trusted execution environments (TEEs) to keep data and workloads protected while in use... facilitating secure processing of sensitive data and AI, meeting strict privacy and compliance demands in any environment."4
For decades, we protected data at rest with disk encryption. Data in transit with TLS. TEEs complete the triad: protecting data in use.
| State | Protection | Maturity |
|---|---|---|
| Data at rest | Disk encryption | Mature |
| Data in transit | TLS/encryption | Mature |
| Data in use | TEE isolation | Emerging |
Before any sensitive operation begins, a process called attestation cryptographically proves the enclave is authentic and untampered. Only then is private data or a proprietary model unlocked for use inside the secure boundary. Trust that is verifiable, not assumed.
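A minimal sketch of that attestation-gated release flow, assuming a placeholder quote format and an HMAC standing in for the hardware vendor's signature chain (a real verifier would validate an Intel TDX or AMD SEV-SNP report against the vendor's PKI):

```python
import hashlib
import hmac

# Illustrative attestation-gated key release. The HMAC below merely stands in for
# the vendor's certificate-chain verification, to show the control flow.

EXPECTED_MEASUREMENT = "sha256-of-approved-enclave-image"  # placeholder value
VENDOR_SECRET = b"stand-in-for-vendor-pki"

def verify_quote(quote: dict) -> bool:
    """Is the quote genuinely signed, and does it report the approved code identity?"""
    expected_sig = hmac.new(VENDOR_SECRET, quote["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected_sig, quote["signature"])
            and quote["measurement"] == EXPECTED_MEASUREMENT)

def release_model_key(quote: dict) -> bytes | None:
    """Hand the model-decryption key only to an enclave that proves its identity."""
    if not verify_quote(quote):
        return None                 # untrusted or tampered enclave: nothing is unlocked
    return b"model-decryption-key"  # in practice: wrapped to the enclave's public key
```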
Performance matters here. Unlike fully homomorphic encryption or multi-party computation, TEEs execute at near-native speeds. Phala Network's 2025 benchmarks: 0.5-5% overhead in production conditions, processing over 1.34 billion LLM tokens in a single day.8
For real-time AI inference with tight SLAs, this is the difference between viable and theoretical.
Confidential Inference: AI That Proves Its Integrity
Confidential inference combines TEE isolation with cryptographic attestation. The result: AI systems where:
- Model integrity is verifiable: Attestation proves which model is running, that it hasn't been tampered with
- Inputs remain private: User data never leaves the encrypted enclave
- Outputs are authenticated: Signed, tamper-evident responses prove they originated from the attested model (a verification sketch follows this list)
- The infrastructure operator is untrusted: Even the cloud provider cannot observe computation
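What those guarantees can look like from the client side, as a minimal sketch. The response fields and the choice of Ed25519 signatures (via the `cryptography` package) are assumptions for illustration, not any specific provider's wire format:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_inference_response(response: dict,
                              enclave_pubkey: Ed25519PublicKey,
                              expected_measurement: str) -> bool:
    """Check that an inference result came from the attested model.

    Assumes the enclave signs a digest binding together its attested measurement,
    a hash of the (still private) input, and a hash of the output it produced.
    """
    digest = hashlib.sha256(
        response["measurement"].encode()
        + bytes.fromhex(response["input_hash"])
        + hashlib.sha256(response["output"].encode()).digest()
    ).digest()
    try:
        enclave_pubkey.verify(bytes.fromhex(response["signature"]), digest)
    except InvalidSignature:
        return False                                        # not signed by the enclave
    return response["measurement"] == expected_measurement  # right model, untampered
```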
This isn't theoretical.
NVIDIA's Confidential Computing extends TEE protections to GPU memory and execution flows, enabling secure training and inference where both model and data remain confidential.9
The NVIDIA H100 Tensor Core GPU was the first GPU to support confidential computing, with a hardware-based TEE anchored in an on-die hardware root of trust. The latest NVIDIA Vera Rubin NVL72 delivers rack-scale confidential computing across NVLink: a unified security domain spanning 72 GPUs, 36 CPUs, and interconnects.9
For AI agents on blockchain infrastructure, this changes everything. An agent can prove it executed a specific model, followed a specific policy, produced a specific output: without revealing the model weights, the input data, or any proprietary logic.
The Hard Truth: TEEs Are Not a Silver Bullet
TEE security has been broken before. It will be broken again.
In late 2025, security researchers disclosed WireTap: a physical attack that compromises Intel SGX on server processors using DRAM bus interposition. The attack exploits deterministic encryption in Intel's Total Memory Encryption. As documented: "a given plaintext and key will result in a fixed ciphertext over every execution," enabling attackers to map encrypted memory to unencrypted values.10
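A toy illustration of why determinism is the problem. The "cipher" below is a keyed hash standing in for any deterministic block encryption; it is not Intel TME's actual construction, but it shows how repeated ciphertexts let a bus-level observer build a reverse dictionary:

```python
from hashlib import sha256

# Toy model of the weakness WireTap exploits. When memory encryption is
# deterministic (same key + same plaintext block -> same ciphertext, every time),
# an interposer on the DRAM bus can map ciphertexts back to known or guessable
# plaintexts. This keyed hash is NOT Intel TME's real construction.

KEY = b"memory-encryption-key"

def deterministic_encrypt(block: bytes) -> bytes:
    return sha256(KEY + block).digest()   # identical input always yields identical output

# Attacker records ciphertexts for plaintexts it can influence or guess:
dictionary = {deterministic_encrypt(guess): guess
              for guess in (b"secret-bit=0", b"secret-bit=1", b"yes", b"no")}

# Later, a ciphertext sniffed off the bus is reversed instantly if it repeats:
sniffed = deterministic_encrypt(b"secret-bit=1")
print(dictionary.get(sniffed))            # b'secret-bit=1'
```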
The practical impact:
- Researchers extracted SGX attestation keys
- They forged valid SGX quotes that pass Intel's verification
- Working attacks demonstrated on live production infrastructure
- Hardware required: less than $1,000
Intel's response is that physical access attacks remain "out of scope" for SGX.
The consequences were real. Integritee Network, a Polkadot-based confidential computing project, shut down in late 2025 partly due to WireTap's implications. Their post-mortem was direct: "Today, only permissioned (data center-controlled) deployments are considered viable. That breaks the premise of trustless oracles, which relied on not having to trust operators or data centers at all."11
This is not an argument against TEEs. It is an argument against single-layer security assumptions.
Defense in Depth: Proof of Cloud
The emerging response: defense in depth. Requiring attackers to breach multiple independent security barriers rather than one.
Proof of Cloud represents this approach: a vendor-neutral alliance maintaining a signed, append-only registry that binds TEE hardware identities to verified physical locations.12
Three components:
- Hardware Identity Binding: TEE attestation generates quotes linking unique hardware identifiers (Intel's DCAP PPID, AMD's Chip ID) to workload measurements
- Independent Verification: Alliance members independently verify hardware locations through facility visits, extracting IDs via attestation and cross-validating findings
- Transparent Registry: Verified entries populate an append-only signed log resembling Certificate Transparency, requiring supermajority signatures for updates
The security model is explicit: attackers must breach both TEE security and physically compromise a facility verified by multiple independent organizations. Neither layer alone is sufficient. Both together materially raise the cost of attack.
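A minimal sketch of what checking such a registry entry could look like. The member set, field names, and two-thirds threshold are illustrative assumptions, not the alliance's published schema:

```python
from dataclasses import dataclass

# Sketch of a Proof-of-Cloud-style registry check: an entry binds a TEE hardware
# identifier to an independently verified facility and is accepted only with a
# supermajority of alliance signatures. Details here are assumptions.

ALLIANCE_MEMBERS = {"org-a", "org-b", "org-c", "org-d", "org-e"}

@dataclass
class RegistryEntry:
    hardware_id: str        # e.g. an Intel DCAP PPID or AMD Chip ID
    facility: str           # physical location verified by facility visits
    signatures: set         # alliance members vouching for this binding

def entry_is_valid(entry: RegistryEntry) -> bool:
    signers = entry.signatures & ALLIANCE_MEMBERS
    return len(signers) * 3 >= len(ALLIANCE_MEMBERS) * 2   # at least 2/3 must sign

entry = RegistryEntry("ppid-1234", "verified-dc-07", {"org-a", "org-b", "org-c", "org-d"})
print(entry_is_valid(entry))   # True: 4 of 5 members attest to this hardware/location
```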
This is how production infrastructure actually gets secured. Not through perfect components. Through layered defenses where each failure mode requires independent compromise.
Enterprise Adoption: Beyond Pilots
The shift to confidential computing is not speculative. Adoption has crossed from experimentation to production.
The Numbers
A global IDC survey of 600+ IT leaders across 15 industries: 75% of organizations are now adopting confidential computing.2
Not pilot-phase curiosity. Of these organizations:
- 18% already in production
- 57% actively piloting or testing
- 71% of public cloud users implementing
Gartner positioned confidential computing as one of its Top 10 Strategic Technology Trends for 2026, a foundational enabler alongside AI supercomputing, and places it among the three core "Architect" technologies shaping enterprise infrastructure over the next five years.4
Market projections:
| Source | 2025 Value | Projected Value | CAGR |
|---|---|---|---|
| Fortune Business Insights | $24.2B | $350B (2032) | 46.4% |
| Precedence Research | $14.8B | $1,281B (2034) | 64.1% |
| Mordor Intelligence | $9.3B | $115.5B (2030) | 65.5% |
These aren't incremental growth numbers. This is fundamental replatforming.
Adoption Drivers
| Driver | % of Adopters |
|---|---|
| Workload security / external threats | 56% |
| PII protection | 51% |
| Compliance requirements | 50% |
Measurable benefits:
| Reported Benefit | % of Adopters |
|---|---|
| Improved data integrity | 88% |
| Confidentiality with technical assurances | 73% |
| Better regulatory compliance | 68% |
Regulatory Acceleration
The regulatory environment is forcing this shift:
- EU DORA explicitly requires encryption for data at rest, in transit, and in use. 77% of organizations are more likely to consider confidential computing due to DORA requirements.2
- EU AI Act adds compliance pressure for AI systems processing sensitive information. Fines: up to 7% of global revenue.
- Colorado SB24-205 mandates risk management and impact assessments for high-risk AI, effective June 2026 (delayed from February 2026 via SB 25B-004).
- Singapore MAS guidelines recommend confidential computing for data-in-use protection.
Gartner predicts by 2027, at least one global company will see its AI deployment banned by a regulator for noncompliance with data protection or AI governance legislation.7
Vertical Adoption: Where Confidential Computing Ships Today
Finance and Banking
Market position: BFSI accounted for 46.8% of confidential computing revenue in 2024. Largest vertical.3
Financial institutions were early adopters: stringent regulatory mandates, high-value data. Key use cases:
Fraud Detection and AML: Banks pool encrypted transaction data for cross-institutional fraud detection. Multiple institutions train AI models on combined datasets that are never decrypted; no single bank sees another's raw data.13
Multi-Party Computation: Federated learning lets competing banks build money laundering detection models informed by each other's transaction data without exposing sensitive raw data to competitors, as sketched below.
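A minimal federated-averaging sketch of that pattern, using a toy logistic-regression update. This is a generic illustration of the approach, not any consortium's actual protocol; in production the aggregation itself would typically run inside a TEE:

```python
import numpy as np

# Each bank computes a model update on its own transactions and shares only the
# updated weights, never the raw data.

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One logistic-regression gradient step on a single bank's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights: np.ndarray, banks: list) -> np.ndarray:
    """The aggregator averages local updates; it never sees any bank's transactions."""
    updates = [local_update(global_weights, X, y) for X, y in banks]
    return np.mean(updates, axis=0)

# Two banks with private (toy) feature matrices and fraud labels:
rng = np.random.default_rng(0)
banks = [(rng.normal(size=(100, 4)), rng.integers(0, 2, 100)) for _ in range(2)]
weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, banks)
print(weights)   # a shared model, trained without pooling raw transaction data
```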
Secure Cloud Migration: Financial institutions can now move workloads previously deemed too risky for public cloud, with cryptographic assurance that data remains confidential.
Risk Analytics: Firms outsource complex risk calculations to third-party providers with proprietary models and portfolio data protected within enclaves.
Healthcare
Healthcare organizations use confidential computing to solve a fundamental tension: the need to analyze patient data for better outcomes versus the obligation to protect that data under HIPAA and GDPR.
Decentriq and Datavant partnered to enable privacy-preserving data collaboration between health researchers and European hospitals. Confidential computing provided a secure data clean room where patient-level data could be analyzed without ever being exposed. GDPR compliance maintained, clinical research advanced across borders.14
Super Protocol and Yma Health demonstrated that real-world patient information can be safely used for AI-driven innovation without compromising privacy or compliance. All operations, from EHR extraction to AI inference, run entirely inside TEEs. Sensitive data is never exposed, even to infrastructure providers.15
Government and Defense
Government agencies implement confidential computing for workloads where the traditional cloud trust model is unacceptable:16
- Classified data processing: Secure analysis of sensitive defense information in cloud environments
- Multi-level security: Operations requiring different classification levels within single environments
- Secure communications: Protecting military communications during processing and routing
- Joint operations: Secure information sharing between allied forces and agencies
This isn't about cost optimization. It's about capability: enabling cloud adoption for workloads that could never move off-premise under traditional security models.
Retail and E-Commerce
Growth rate: 66.9% CAGR. Fastest growing vertical.3
The driver: privacy-preserving analytics. Merchants segment customers, personalize recommendations, analyze purchasing patterns without exposing raw purchase histories.
Cloud Provider Positioning
The hyperscalers have made their positions clear.
Microsoft Azure
First to market in 2017. Broadest offering. Contributed the Open Enclave SDK to the Confidential Computing Consortium. Supports Intel SGX, AMD SEV-SNP, and Intel TDX.
Latest developments:
- Azure Intel TDX Confidential VMs (DCesv6, ECesv6 series): GA expected Q1 2026.17
- Azure Integrated HSM security chip rolled out across all Azure servers (August 2025)
- Confidential GPUs via NCCads_H100_v5 VM series extending TEE to attached GPUs
Azure CTO Mark Russinovich stated publicly: "What we ultimately believe and what we're pushing for is that confidential computing, at least as a basic capability, will eventually be ubiquitous."
Google Cloud
Confidential VMs, Confidential GKE, Confidential Dataproc, and Confidential Space. AMD SEV for full VM memory encryption with strict Zero Trust enforcement. Expanded to Confidential Accelerators with NVIDIA H100 GPUs for AI workloads.18
AWS
Proprietary approach with Nitro Enclaves: isolated execution environments within EC2 instances. Unlike Azure and Google, AWS doesn't build on Intel SGX or AMD SEV. Custom Nitro architecture instead.
The Verification Stack: Blockchain as Trust Anchor
Confidential computing solves execution privacy. But the agentic web requires more: a verification layer that establishes agent identity, enforces policies, creates an auditable record of autonomous actions.
Blockchain infrastructure becomes essential.
Identity and Attestation
Decentralized Identity provides agents with verifiable credentials not controlled by any single authority. Combined with TEE attestation, this creates agents that can prove both who they are and what they're running.19
Policy Enforcement
Smart contracts encode the rules agents must follow. Asset limits, approved actions, risk parameters: all enforced on-chain where they can't be silently modified. The agent operates within boundaries that are transparent and immutable.
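As a minimal sketch, the kind of policy object and pre-flight check this implies. The fields and limits are invented for illustration; in a real deployment the check runs inside the smart contract so the agent cannot bypass or silently modify it:

```python
from dataclasses import dataclass

# Illustrative shape of an on-chain agent policy and its pre-flight check.

@dataclass(frozen=True)
class AgentPolicy:
    allowed_actions: frozenset
    max_trade_value_usd: float
    max_daily_volume_usd: float

def action_permitted(policy: AgentPolicy, action: str,
                     trade_value: float, volume_today: float) -> bool:
    return (action in policy.allowed_actions
            and trade_value <= policy.max_trade_value_usd
            and volume_today + trade_value <= policy.max_daily_volume_usd)

policy = AgentPolicy(frozenset({"swap", "rebalance"}), 10_000.0, 50_000.0)
print(action_permitted(policy, "swap", 2_500.0, 48_000.0))   # False: daily cap exceeded
```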
By 2026, Gartner predicts most enterprises will rely on an AI gateway layer to centralize routing, policy enforcement, cost controls, and observability across LLMs, agents, and tools.6
Audit Trail
Every material action recorded on-chain. Not the sensitive computation itself, but the attestation proofs, the policy checks, the economic outcomes. Accountability without sacrificing confidentiality.
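A sketch of what such a record might contain, assuming illustrative field names rather than any defined standard. Only commitments and attestation references go on-chain, never the private payloads:

```python
import hashlib
import json
import time

# Audit record an agent might anchor on-chain after each material action:
# attestation references and hash commitments only, never the inputs or the model.

def make_audit_record(attestation_quote: bytes, policy_id: str,
                      input_data: bytes, output_data: bytes) -> dict:
    return {
        "timestamp": int(time.time()),
        "policy_id": policy_id,
        "attestation_hash": hashlib.sha256(attestation_quote).hexdigest(),
        "input_hash": hashlib.sha256(input_data).hexdigest(),   # commitment, not the data
        "output_hash": hashlib.sha256(output_data).hexdigest(),
    }

record = make_audit_record(b"<tdx-quote>", "policy-v3", b"client prompt", b"agent action")
print(json.dumps(record, indent=2))   # compact, non-sensitive, cheap to post on-chain
```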
Oasis Network articulates the standard: "If an agent is not open source, backed by a decentralized key management service, audited, with a reproducible build, and running in a TEE periodically attested on-chain, then it shouldn't be trusted."20
This is the standard the agentic web requires.
Governance Agents
Sophisticated approaches emerging: governance agents that monitor other AI systems for policy violations. Security agents that detect anomalous agent behavior. The shift in 2026 is from viewing governance as compliance overhead to recognizing it as an enabler.5
The Infrastructure Landscape
The decentralized AI compute stack is fragmenting into specialized layers: privacy-first infrastructure, compute aggregation, verification, and incentive coordination. The trend is toward composable architectures where projects integrate across layers.
Privacy-First Infrastructure
Oasis Network deployed Runtime Offchain Logic (ROFL), extending confidentiality and verifiability to off-chain computations. Intel TDX live on mainnet, TEE-enabled GPU support in development. Partnered with io.net to extend confidential computing across decentralized GPU infrastructure.20
Phala Network provides TEE-based confidential computing for AI inference. 2025 benchmarks: 0.5-5% overhead, 1.34 billion LLM tokens processed in a single day. Partnerships with Hyperbolic, OLLM, and 0G extend its TEE worker nodes across ecosystems.8
Super Protocol combines NVIDIA H100 Confidential Computing with blockchain orchestration: confidential AI training where models learn from sensitive datasets without the data ever leaving its owner's control.21
iExec has operated confidential computing infrastructure since 2019, now deploying fully confidential AI agent support with Intel TDX integration on its 2026 roadmap.22
Verified Inference and AI Oracles
Ritual Network supports TEEs via a dedicated precompile contract on its upcoming L1, combining hardware enclaves with zero-knowledge proofs for dual-layer computational integrity. Its Infernet oracle already connects on-chain contracts to off-chain AI models on existing chains.23
Hyperbolic AI pairs Proof of Sampling (probabilistic verification of inference correctness) with Phala's TEE infrastructure for a hybrid verification model: cheaper than full re-execution, stronger than either layer alone.24
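To see why sampling is cheaper than full re-execution yet still a deterrent, a generic spot-check calculation (not Hyperbolic's exact protocol): if a fraction p of responses is randomly re-verified and a dishonest node fakes a fraction f of them, the chance it survives n responses undetected is (1 - p*f)^n.

```python
# Generic spot-check math, illustrative only: audit probability p, cheat rate f.
p, f = 0.05, 0.10          # 5% of responses audited, 10% of responses faked
for n in (100, 1_000, 10_000):
    survive = (1 - p * f) ** n
    print(f"n={n:>6}: detection probability = {1 - survive:.3f}")
# n=   100: detection probability = 0.394
# n=  1000: detection probability = 0.993
# n= 10000: detection probability = 1.000
```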
Chutes (Bittensor Subnet 64) ships confidential inference via sek8s: Intel TDX confidential VMs with NVIDIA Protected PCIe creating encrypted CPU-to-GPU channels. Cosign-verified container images and strict egress controls. Enterprise-grade TEE on a permissionless subnet.25
Compute Aggregation Networks
Bittensor coordinates 129+ subnets through validator consensus and TAO incentives. No native protocol-level confidentiality, but subnet operators like Chutes and Targon Compute (Subnet 4) are independently layering TEE on top. The architecture enables confidential computing without mandating it.26
Akash Network runs a Kubernetes-based compute marketplace with reverse auction pricing. AEP-65 targets confidential computing via Kata Containers with Intel TDX, AMD SEV-SNP, and NVIDIA NVTrust support. Estimated completion: July 2026.27
io.net aggregates 327,000+ GPUs from data centers and individual contributors. Partnered with Oasis Network to integrate confidential computing, including TEE-enabled GPU attestation via ROFL. Not native yet, but actively building toward it.28
Training Verification
Gensyn takes a different approach: probabilistic proof of learning for ML training verification via its Verde arbitration protocol. Over 1 million models trained on testnet. No native TEE, but verification guarantees complement confidential compute layers in a composable stack.29
The pattern: raw compute is commoditizing. Verification and confidentiality are where defensibility lives. Projects that started as GPU marketplaces are racing to add TEE support. Projects that started privacy-first are scaling compute capacity. The stack is converging.
The Adoption Gap
Despite strong momentum, challenges remain. The IDC study identifies primary barriers:2
| Challenge | % Reporting |
|---|---|
| Attestation validation complexity | 84% |
| Misconception as niche technology | 77% |
| Skills gap | 75% |
Gartner warns: "Integrating TEEs across different chip types, cloud providers, and environments can be complex... specialized skills or third-party platforms may be needed to orchestrate and manage confidential computing as adoption grows."4
The technology is mature enough for production. Market demand is clear. Regulatory pressure is mounting. What's missing: execution capacity. Teams who understand both the cryptographic foundations and the operational realities of deploying these systems at scale.