KubeCon Atlanta wrapped up with a clear message: cloud native isn't just about containers anymore; it's about extending Kubernetes to the edge, where AI and security demands are reshaping architectures. In my quick interview with Stephen Rust, Principal Architect at Akamai, we dove into how Akamai Cloud is tackling these shifts, drawing on his two decades of experience in open source, storage, and distributed systems.
Stephen’s role puts him at the intersection of Akamai’s global network and cloud native tools, making this chat a practical look at what’s next for enterprises scaling K8s beyond central clouds.
Extending Cloud Native to the Edge
A key theme from our discussion was pushing compute closer to users without sacrificing manageability. Akamai’s 4,400+ edge PoPs enable this, allowing Kubernetes workloads to run distributed while integrating with CNCF standards like Istio for service mesh and Prometheus for monitoring. Stephen highlighted how this setup addresses latency in AI inference, where traditional hyperscalers often force data backhauls that kill performance.
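To make the distribution pattern concrete, here's a minimal sketch of a Kubernetes Deployment that spreads replicas across regions using a standard topology spread constraint. The image name, app label, and replica count are hypothetical placeholders; this is a generic pattern, not an Akamai manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-edge
spec:
  replicas: 6
  selector:
    matchLabels:
      app: inference-edge
  template:
    metadata:
      labels:
        app: inference-edge
    spec:
      # Spread replicas evenly across regions so requests can be served
      # by a nearby replica instead of backhauling to a central cluster.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/region
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: inference-edge
      containers:
        - name: model-server
          image: registry.example.com/models/classifier:v1  # hypothetical image
          ports:
            - containerPort: 8080
```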
We also talked at the event about runtime security, where tools like eBPF provide kernel-level visibility into container behavior, catching issues like unauthorized network calls early. For Akamai, this ties into the Linode platform, whose managed Kubernetes offering (LKE) scales edge-native workloads without the ops overhead of custom clusters.
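As an illustration of the kind of rule that eBPF-based tooling can enforce, here's a sketch in the style of Falco (a CNCF runtime security tool built on eBPF). It assumes Falco's default macros (`outbound`, `container`) are loaded, and the process allowlist is hypothetical:

```yaml
# Hypothetical allowlist of processes expected to make outbound connections.
- list: allowed_outbound_procs
  items: [model-server, envoy]

- rule: Unexpected Outbound Connection from Container
  desc: Detect outbound network calls from processes not on the allowlist
  condition: >
    outbound and container
    and not proc.name in (allowed_outbound_procs)
  output: >
    Unexpected outbound connection
    (proc=%proc.name dest=%fd.name container=%container.name)
  priority: WARNING
```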
Navigating AI and Security Challenges
With AI workloads exploding, securing the supply chain for model deployment has become urgent. Akamai leverages patterns like signed artifacts with Cosign and policy enforcement via Kyverno to ensure only verified models reach production edges. This mitigates risks from poisoned models or evasion attacks, which are common against exposed APIs.
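A minimal sketch of what that enforcement can look like: once an image is signed (for example with `cosign sign --key cosign.key <image>`), a Kyverno ClusterPolicy can block anything unsigned at admission. The registry path and key below are placeholders, not Akamai's actual configuration:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-model-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        # Only images from this (placeholder) registry path are admitted,
        # and only if they carry a valid Cosign signature for this key.
        - imageReferences:
            - "registry.example.com/models/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your cosign public key>
                      -----END PUBLIC KEY-----
```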
On the threat side, Akamai layers strong perimeter defenses: DDoS mitigation baked into the network, API gateways with rate limiting, and Wasm-based inspectors for real-time threat detection. These aren't add-ons; they're core to Akamai's stack, helping teams absorb volumetric attacks or adversarial inputs without slowing down user traffic.
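Akamai's inspectors are part of its own platform, but as a generic illustration of the Wasm-at-the-gateway pattern, here's how an Istio WasmPlugin might attach an inspection filter to an ingress gateway. The module URL and plugin config are hypothetical:

```yaml
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: request-inspector
  namespace: istio-system
spec:
  # Attach the filter to the ingress gateway pods.
  selector:
    matchLabels:
      istio: ingressgateway
  # Hypothetical OCI image containing the compiled Wasm module.
  url: oci://registry.example.com/wasm/request-inspector:v1
  phase: AUTHN  # run early, before authorization, so bad requests are dropped fast
  pluginConfig:
    max_body_bytes: 65536
    block_on_anomaly: true
```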
A simplified view of the challenges and approaches we discussed:
| Challenge | Edge Impact | Akamai Approach |
|---|---|---|
| Latency in AI Inference | Data routing spikes response times | Distributed PoPs for local execution |
| Supply Chain Risks | Tampered models in pipelines | Verifiable deployments with CNCF tools |
| API Security | Exposed endpoints to evasion/DDoS | Wasm inspectors and anycast absorption |
These draw from Akamai’s real-world telemetry, where edge placement turns reactive security into proactive containment.
From the Ops View
I’ve wrangled enough multi-region K8s setups to appreciate when a platform simplifies the hard parts, like federated observability or zero-trust at scale. Stephen’s perspective reinforces Akamai’s bet on open source to make edge cloud native accessible, without the lock-in of proprietary tweaks.
This is also why the Fermyon acquisition makes so much sense for both the short- and long-term goals of the Akamai Cloud ecosystem.
If you’re evaluating managed K8s for 2026, this interview spotlights why Akamai’s ecosystem play matters. It’s about building on community standards to deliver resilient, low-latency apps.
Watch the full conversation here, then check out Akamai’s LKE docs for a test drive. Your distributed architecture might just get a little less chaotic.