Register your app. Publish measured versions. Deploy instantly to hardware-verified enclaves from warm capacity pools, at the clearing price.
Staging network is untrusted. It exists for CI and integration testing only. Do not use staging for secrets, private production data, or trust-sensitive workloads.
Why use staging anyway? It runs the newest features from main first, and gives developers low-cost access to strong CPU/GPU capacity for build validation, load testing, and model/app experimentation before production.
Production network is trusted. Production rollout is release-gated: strict attestation/auth policy plus pinned trusted measurements, signatures, and GCP image descriptor for an explicit release tag.
Staging endpoint URL is deployment-specific and configured via STAGING_CP_URL in CI/CD.
See details: CI/CD Networks
Every deployment flows through a secure, auditable pipeline
Register your app once with its name and source repository. This creates an entry in the app catalog.
Each push creates a new version (timestamp + git SHA). Your docker-compose and source commit are recorded.
The control plane resolves each version to immutable SHA256 digests, applies signature policy, and records per-size trusted values before deploy.
Attested versions deploy to verified TDX agents running dm-verity protected root filesystems. Hardware attestation proves your code runs in a secure enclave.
Every deployment is tied to a specific version with full audit trail back to source.
Code is automatically inspected before deployment. Rejected versions never run.
All deployments go through the catalog. No way to deploy untracked code — validated on every main rollout.
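The tag-to-digest resolution step above can be sketched in a few lines. This is a hypothetical illustration (the `REGISTRY` lookup and `pin_image` helper are invented names, not the EasyEnclave API): resolve a mutable tag to an immutable SHA256 digest, and refuse to deploy anything that cannot be pinned.

```python
import hashlib
import re

# Hypothetical registry lookup: mutable tag -> immutable sha256 digest.
REGISTRY = {
    "myorg/my-app:v1.2.0": "sha256:" + hashlib.sha256(b"layer-blob").hexdigest(),
}

DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def pin_image(image_ref: str) -> str:
    """Resolve a tag to its digest; refuse anything that stays mutable."""
    if "@sha256:" in image_ref:
        return image_ref  # already pinned to a digest
    digest = REGISTRY.get(image_ref)
    if digest is None or not DIGEST_RE.match(digest):
        raise ValueError(f"cannot pin {image_ref!r} to an immutable digest")
    repo = image_ref.split(":")[0]
    return f"{repo}@{digest}"

pinned = pin_image("myorg/my-app:v1.2.0")  # e.g. myorg/my-app@sha256:...
```

Pinning by digest means the bytes that were measured are the bytes that run: re-pushing the same tag with different content cannot change what deploys.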
No picking servers. No waiting for VMs. The control plane schedules deployments onto verified warm capacity.
Datacenters keep a buffer of attested, idle agents already running. Deploys are assigned immediately.
Capacity is priced by supply and demand. If multiple datacenters can fill your request, you get the lowest price that clears.
You request a size and constraints. The control plane chooses a verified agent. Users never “assign” to an agent except for upgrades.
If there is no warm verified capacity, the request fails immediately (or can be queued if you opt in). No silent retry loops.
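The scheduling rule above can be sketched as a simple auction over warm capacity. This is a minimal illustration with invented names (`Offer`, `schedule`), not the actual control plane code: among datacenters that have attested idle agents of the requested size, the cheapest one clears; with no warm capacity, the request fails immediately.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    datacenter: str
    size: str
    warm_agents: int   # attested, idle agents already running
    price: float       # per-hour ask for this size

class NoCapacity(Exception):
    pass

def schedule(offers: list[Offer], size: str) -> Offer:
    """Pick the cheapest datacenter that can fill the request from warm,
    verified capacity; fail fast if none can."""
    candidates = [o for o in offers if o.size == size and o.warm_agents > 0]
    if not candidates:
        raise NoCapacity(f"no warm verified capacity for size {size!r}")
    return min(candidates, key=lambda o: o.price)

offers = [
    Offer("dc-east", "small", warm_agents=3, price=0.42),
    Offer("dc-west", "small", warm_agents=1, price=0.35),
    Offer("dc-west", "large", warm_agents=2, price=1.10),
]
winner = schedule(offers, "small")  # dc-west clears at the lower price
```

The fail-fast branch is the point: no silent retry loops, just an immediate error the caller can react to (or opt into queueing).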
One-way trust through a proxy: clients trust the control plane, the control plane trusts agents
- Client: verifies the control plane's attestation once, then trusts it to verify agents.
- Cloudflare Tunnel: TLS termination, DDoS protection.
- Control plane (app.easyenclave.com): runs in TDX, verifies agents, routes traffic.
- Agents (agent-abc.easyenclave.com, agent-xyz.easyenclave.com): your services run here.

The client verifies the control plane's TDX attestation. Once verified, the client trusts the control plane to make decisions about agents.
Control plane verifies each agent's attestation on registration. A nonce challenge prevents replay attacks, TCB enforcement rejects vulnerable firmware, and only agents with MRTD in the trusted list receive deployments.
Clients don't need to verify every agent. They verify the control plane once, then trust its proxy to only route traffic to attested agents. This is one-way trust through a proxy.
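The routing side of that trust can be sketched as an allowlist check. This is a hypothetical illustration (the `AGENTS` table and `routable` helper are invented, and the MRTD values are placeholders): the proxy only forwards traffic to agents whose attestation passed and whose MRTD measurement is on the trusted list.

```python
# Placeholder measurement values; real MRTDs are 48-byte TDX measurements.
TRUSTED_MRTDS = {"a1b2c3", "d4e5f6"}

AGENTS = {
    "agent-abc": {"mrtd": "a1b2c3", "attested": True},
    "agent-xyz": {"mrtd": "ffffff", "attested": True},   # unknown measurement
    "agent-old": {"mrtd": "a1b2c3", "attested": False},  # attestation lapsed
}

def routable(agent_name: str) -> bool:
    """Only attested agents with a trusted measurement receive traffic."""
    agent = AGENTS.get(agent_name)
    return bool(agent and agent["attested"] and agent["mrtd"] in TRUSTED_MRTDS)

eligible = [name for name in AGENTS if routable(name)]
```

Because this check runs inside the attested control plane, a client that has verified the control plane once inherits it for every request.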
All nodes connect to the outside world through Cloudflare Tunnels. This creates a trust barrier:
Hardware-level isolation that protects your code and data from the cloud provider, OS, and hypervisor
CPU enforces memory encryption with per-VM keys. No software — not even the hypervisor — can read TEE memory.
Cryptographic proof that your exact code is running in a genuine TEE, verified by Intel hardware.
Run your existing Docker containers inside a TDX VM. No code changes, no special libraries required.
| Threat | Protection | Notes |
|---|---|---|
| Remote attacker | Full | Encrypted memory, network isolation |
| Malicious hypervisor | Full | Hardware-enforced memory encryption |
| Malicious OS | Full | TEE isolated from host OS |
| Physical memory dump | Partial | Memory encrypted, keys hardware-protected |
| Side channels | Limited | Add encrypted storage and app-level protocol hardening |
- Intel TDX (Production): Trust Domain Extensions with full VM isolation and remote attestation via Intel Trust Authority.
- AMD SEV-SNP (Coming Soon): Secure Encrypted Virtualization with Secure Nested Paging for memory integrity.
- Arm CCA (Planned): Confidential Compute Architecture for ARM-based confidential VMs.

Learn more about TEE technologies, threat analysis, and defense-in-depth strategies in the README.
Multiple security layers protect agents from boot to runtime
Read-only verified filesystem built with mkosi. The roothash is bound to boot measurements, so any tampering is detected before the VM starts.
Firmware security is checked via Intel Trust Authority's TCB status before agent registration. Only platforms with up-to-date firmware are accepted.
One-time nonces embedded in TDX quote REPORTDATA with a 5-minute TTL. Prevents replay of old attestation quotes during registration.
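The nonce lifecycle can be sketched as follows. This is a minimal illustration, not the registration code itself (helper names are invented, and SHA-512 is assumed here simply because its 64-byte digest matches the REPORTDATA width): the verifier issues a single-use nonce, the agent binds it into the quote's REPORTDATA, and the verifier rejects anything unknown, reused, or older than the TTL.

```python
import hashlib
import secrets
import time

NONCE_TTL_SECONDS = 300  # the 5-minute window described above

issued: dict[bytes, float] = {}  # nonce -> issue timestamp (single use)

def issue_nonce() -> bytes:
    nonce = secrets.token_bytes(32)
    issued[nonce] = time.time()
    return nonce

def reportdata_for(nonce: bytes) -> bytes:
    # TDX REPORTDATA is 64 bytes; bind the nonce by hashing it in.
    return hashlib.sha512(nonce).digest()

def verify_quote_nonce(nonce: bytes, reportdata: bytes, now: float) -> bool:
    ts = issued.pop(nonce, None)  # pop makes the nonce one-time
    if ts is None or now - ts > NONCE_TTL_SECONDS:
        return False              # unknown, replayed, or expired
    return reportdata == reportdata_for(nonce)

n = issue_nonce()
fresh = verify_quote_nonce(n, reportdata_for(n), time.time())
replayed = verify_quote_nonce(n, reportdata_for(n), time.time())
```

Popping the nonce on first use is what defeats replay: even a bit-perfect copy of an old quote fails verification the second time.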
Measurer apps run on verified tiny TDX capacity (bare metal or GCP), resolve tags to immutable digests, and submit node-size-specific measurements that deployment must match.
Every push runs two clean builds. If TDX measurements drift or artifact hashes differ, deployment is blocked.
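The two-build gate reduces to a hash comparison. A minimal sketch with invented names (`digest`, `gate`) and toy artifacts: deployment is allowed only when both clean builds produce byte-identical artifacts.

```python
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def gate(build_a: dict[str, bytes], build_b: dict[str, bytes]) -> bool:
    """Allow deploy only if both clean builds produced identical artifacts."""
    if build_a.keys() != build_b.keys():
        return False
    return all(digest(build_a[k]) == digest(build_b[k]) for k in build_a)

build_1 = {"rootfs.img": b"bits", "measurements.json": b'{"mrtd": "abc"}'}
build_2 = {"rootfs.img": b"bits", "measurements.json": b'{"mrtd": "abc"}'}
drifted = {"rootfs.img": b"bits!", "measurements.json": b'{"mrtd": "abc"}'}

allowed = gate(build_1, build_2)  # identical builds: deploy proceeds
blocked = gate(build_1, drifted)  # hash drift: deploy blocked
```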
Admin access uses GitHub OAuth with CSRF-protected state tokens; password authentication remains available as a fallback. Reproducibility mode details: docs/REPRODUCIBLE_BUILDS.md.
Join the network - provide compute or deploy workloads
You need an Intel TDX-capable machine. Check with cloud providers (Azure, GCP) or use bare metal with 4th/5th Gen Xeon.
git clone https://github.com/easyenclave/easyenclave
cd easyenclave
cp .env.example .env # Add your control plane URL
# Build and launch a TDX agent VM
bash crates/ee-ops/assets/gcp-nodectl.sh vm new --size tiny --cp-url "$CP_URL" --ita-api-key "$ITA_API_KEY" --wait
# Your agent will register automatically with the control plane
Once attestation passes, your agent appears in the control plane dashboard. You'll start receiving workloads automatically.
The key question: can your client do attestation?
For clients that cannot do TDX attestation: browsers, curl, REST clients. Traffic routes through the control plane, which verifies agents on your behalf.
# Your app is just a normal FastAPI/Flask/Express app
# No attestation code, no special libraries
curl https://app.easyenclave.com/proxy/my-app/api/data
# Or from a browser - same URL
fetch("https://app.easyenclave.com/proxy/my-app/api/data")
See it in action: The private-llm example runs Ollama (completely unmodified) inside a TDX enclave. Its test script queries the LLM through the proxy on every push — tested docs, not just words.
For clients that want to independently verify attestation: use the EasyEnclave SDK to verify the control plane's TDX quote and check service attestation status directly.
from easyenclave import EasyEnclaveClient
client = EasyEnclaveClient("https://app.easyenclave.com", verify=True)
# Control plane TDX attestation verified on connect.
# All services routed through the attested proxy.
my_service = client.service("my-app")
response = my_service.post("/api/data", json={"input": "secret"})
Key insight: The SDK verifies the control plane is running in a genuine TDX enclave, then trusts its proxy to only route to attested agents. No special code needed in your service.
| Client Type | Attestation Model | Trust Model |
|---|---|---|
| Browser / Web App | Proxy Trust | Trust control plane to verify agents |
| curl / REST client | Proxy Trust | Trust control plane to verify agents |
| SDK client | Client Verification | Independently verify CP attestation + proxy through verified CP |
Most users should use Proxy Trust - it requires no special code and works with any HTTP client. Use Client Verification when you want your application to independently verify the control plane's TDX attestation.
Use GitHub's built-in OIDC token to call /api/deploy directly.
This deploy path prefers GitHub Actions OIDC and falls back to API keys only when needed.
permissions:
id-token: write
contents: read
steps:
  - uses: actions/checkout@v4
  - name: Mint OIDC token
    id: oidc
    run: |
      TOKEN=$(curl -sS -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
        "${ACTIONS_ID_TOKEN_REQUEST_URL}&audience=easyenclave" | jq -r '.value')
      echo "token=$TOKEN" >> "$GITHUB_OUTPUT"
  - name: Deploy
    run: |
      curl -sS -X POST https://app.easyenclave.com/api/deploy \
        -H "Authorization: Bearer ${{ steps.oidc.outputs.token }}" \
        -H "Content-Type: application/json" \
        -d '{"compose":"services: {}","app_name":"hello-tdx","agent_name":"my-app-123","dry_run":true}'
Full copy/paste workflow: examples/deploy-with-github-oidc.yml
Your EasyEnclave account must be linked with github_org or github_login matching github.repository_owner.
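The server-side check this implies can be sketched as a claim comparison. This is a hypothetical illustration (the `authorize_deploy` helper and account shape are invented): the OIDC token's `repository_owner` claim must match the account's linked `github_org` or `github_login`.

```python
def authorize_deploy(oidc_claims: dict, account: dict) -> bool:
    """Hypothetical check: the token's repository_owner must match the
    account's linked github_org or github_login."""
    owner = oidc_claims.get("repository_owner")
    return owner is not None and owner in (
        account.get("github_org"),
        account.get("github_login"),
    )

claims = {"repository_owner": "myorg", "repository": "myorg/myrepo"}
ok = authorize_deploy(claims, {"github_org": "myorg", "github_login": None})
denied = authorize_deploy(claims, {"github_org": "other", "github_login": "someone"})
```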
Create an entry in the app catalog. This is a one-time setup.
curl -X POST https://app.easyenclave.com/api/apps \
-H "Content-Type: application/json" \
-d '{"name": "my-app", "source_repo": "myorg/myrepo"}'
Every push publishes a version and deploys it.
- uses: easyenclave/easyenclave/.github/actions/deploy@main
with:
app_name: my-app
compose_file: docker-compose.yml
service_name: my-app
Use any HTTP client, or the SDK for attestation verification.
from easyenclave import EasyEnclaveClient
client = EasyEnclaveClient("https://app.easyenclave.com")
response = client.service("my-app").get("/api/data")
print(response.json())
This pattern is tested end-to-end on every push. See examples/private-llm/test.py for a working example.
Browse existing apps, register your own, or check the API documentation.