Ray AI Compute Engine in a TEE

Powered by Lunal's Trusted Execution Environments

What is this?

We deployed a Ray cluster inside a Trusted Execution Environment (TEE). The entire cluster, including the head node and workers, runs within hardware-protected confidential VMs so that workloads stay private even from the cloud provider.

TEEs provide a tamper-resistant boundary around your compute workloads. Everything operates within hardware-encrypted memory, which keeps data in use confidential and integrity-protected even from the host, and remote attestation lets you verify exactly what code is running inside the environment.

Try it now (2 demos deployed)

Ray Dashboard

Test Ray's distributed computing and ML serving capabilities directly from your browser.

Distributed Computing Demo

Execute parallel computation across the Ray cluster via the in-browser demo, or use the API programmatically:

curl -X POST https://ray-demo.lunal.dev/api/ray/distributed-task \
  -H "Content-Type: application/json" \
  -d '{"numbers": [1, 2, 3, 4, 5], "operation": "square"}'
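For reference, the map-style parallelism behind this endpoint can be sketched locally. This is a stand-in, not the demo's actual implementation: it uses Python's `concurrent.futures` in place of a live Ray cluster, and only the `"square"` operation is taken from the request payload above.

```python
from concurrent.futures import ThreadPoolExecutor

# Supported operations, keyed by the "operation" field of the request
# payload above ("square" is the only one shown on this page).
OPERATIONS = {
    "square": lambda n: n * n,
}

def run_distributed_task(numbers, operation):
    # Fan each element out to a worker, the way the demo fans tasks out
    # across Ray workers. ThreadPoolExecutor stands in for Ray here.
    fn = OPERATIONS[operation]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fn, numbers))

print(run_distributed_task([1, 2, 3, 4, 5], "square"))  # [1, 4, 9, 16, 25]
```

On the cluster itself, each element would be dispatched as a Ray task and executed in parallel across the worker nodes inside the TEE.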

Ray Serve ML Inference

Query the ML model deployed with Ray Serve via the in-browser demo, or use the API programmatically:

curl -X POST https://ray-demo.lunal.dev/api/ray/serve \
  -H "Content-Type: application/json" \
  -d '{"features": [5.0, 3.2, 1.8]}'
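The same request can be issued from Python using only the standard library. A minimal sketch: the endpoint URL and payload come from the curl example above, but the helper names are ours and the response schema is an assumption, so the result is returned as raw parsed JSON.

```python
import json
import urllib.request

def build_request(url, payload):
    # Assemble the same POST request as the curl example above.
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def query_model(features):
    # Hypothetical helper; the response schema is not documented on
    # this page, so we just return whatever JSON the endpoint sends.
    req = build_request(
        "https://ray-demo.lunal.dev/api/ray/serve",
        {"features": features},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `query_model([5.0, 3.2, 1.8])` posts the same body as the curl command and returns the model's prediction as a Python dict.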

What is Lunal?

Lunal is the trusted compute company that makes TEEs simple, usable, and scalable. We provide unified software and infrastructure for deploying AI workloads in TEEs with zero configuration.

Learn more about Lunal and why secure AI needs TEEs.