Kubernetes for Agentic AI: Best Practices for Security and Observability

Agentic AI workloads are shipping to production on Kubernetes faster than the standards to secure them. Many teams deploying autonomous, tool-calling agents as containerized microservices do so without a shared baseline for securing or monitoring those containers. The CNCF AI Technical Community Group recently published a comprehensive article on cloud-native agentic standards, marking the first attempt to define best practices for such deployments.

Why Affordable Web Hosting Providers Are Enhancing Built-In Security Features

Affordable web hosting used to mean basic service. The assumption was straightforward: paying less meant fewer protections and more responsibility for your own site's security. That assumption is increasingly wrong. Even budget hosting companies now recognize that small websites, startups, bloggers, and growing online retailers need protection.

eBPF for AI Agent Enforcement: What Kernel-Level Security Catches (and What It Misses)

Your team deployed Tetragon six months ago. TracingPolicies are humming along—you’re catching unauthorized binary executions, blocking suspicious network connections, and generating seccomp profiles from observed behavior. Runtime security for your traditional workloads is solid. Then engineering ships their first autonomous AI agent into production. A LangChain agent connected to internal databases, external APIs through MCP tool runtimes, and a vector database for RAG.
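For readers unfamiliar with how Tetragon enforcement works, here is a minimal sketch of a TracingPolicy in the style of the project's documented examples. The policy name, watched path, and signal action are illustrative assumptions, not taken from the article: it hooks the kernel's `fd_install` function and kills any process that opens a file under a sensitive path.

```yaml
# Illustrative TracingPolicy sketch (names and paths are hypothetical).
# Kills any process that opens a file under /etc/agent-credentials/.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-credential-reads
spec:
  kprobes:
  - call: "fd_install"        # kernel function invoked when a file descriptor is installed
    syscall: false
    args:
    - index: 0
      type: int
    - index: 1
      type: "file"            # lets selectors match on the file's path
    selectors:
    - matchArgs:
      - index: 1
        operator: "Prefix"
        values:
        - "/etc/agent-credentials/"
      matchActions:
      - action: Sigkill       # enforcement: terminate the offending process
```

Policies like this are precise for file and syscall activity, which is exactly why the article's question is interesting: an agent exfiltrating data through an allowed HTTPS API call never trips a rule like the one above.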

Observe-to-Enforce: How Progressive Security Policies Reduce Blast Radius

Last Tuesday, your security architect opened a pull request to add network policies to the payments namespace. The PR sat for six days. Three engineers commented with variations of “how do we know this won’t break checkout?” Nobody could answer. The PR got marked “needs discussion” and moved to a backlog column where it joined the fourteen other security hardening tickets nobody will touch.

Securing AI Agents on GKE: Where gVisor, Workload Identity, and VPC Service Controls Stop Working

You enable GKE Sandbox on a dedicated node pool, bind Workload Identity Federation to your AI agent pods, wrap your data services in a VPC Service Controls perimeter, and deploy your agents with the Agent Sandbox CRD using warm pools for sub-second startup. Your security posture dashboard shows every control configured and active. And then an attacker uses prompt injection to trick an agent into exfiltrating sensitive data through API calls that every single one of those layers explicitly allows.

Poisoned Axios: npm Account Takeover, 50 Million Downloads, and a RAT That Vanishes After Install

On March 30-31, 2026, threat actors published two malicious versions of the popular HTTP library axios (versions 1.14.1 and 0.30.4) to the npm registry. Both versions included a new dependency named plain-crypto-js which, in its 4.2.1 release, contained a fully featured cross-platform dropper that silently installed a Remote Access Trojan (RAT) on developer machines.

Let's Encrypt simulated revoking 3 million certificates. Most ACME clients didn't notice.

On March 19th, Richard Hicks, one of our customers, emailed us about a certificate that had renewed after only a week. It was a 90-day certificate and he had not initiated the renewal. That’s the kind of thing that sends you straight to the logs. We found the answer right away. The certificate’s ARI renewal window had been shortened dramatically.
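For context, ARI (ACME Renewal Information) lets a CA tell clients when to renew: the client polls a per-certificate endpoint and receives a suggested renewal window. A sketch of the response shape, with hypothetical timestamps showing a window pulled far earlier than the certificate's normal renewal point:

```json
{
  "suggestedWindow": {
    "start": "2026-03-18T00:00:00Z",
    "end": "2026-03-19T00:00:00Z"
  },
  "explanationURL": "https://example.com/docs/why-renew-early"
}
```

A compliant ACME client that polls this endpoint renews inside the window automatically, which is how a week-old 90-day certificate can renew without anyone initiating it, and how a CA can drain a mass revocation event gracefully instead of cutting millions of certificates off at once.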

Securing OpenClaw Access So It Can't Go Rogue

In this video, we demonstrate how to securely grant an AI agent (OpenClaw) access to Teleport-protected Kubernetes resources using Teleport Machine Identity and tbot, without exposing secrets, API keys, or long-lived tokens. You'll see how Teleport treats AI agents as first-class identities, enforcing strict RBAC controls so the agent can only do what it's allowed to do, such as reading logs, while being blocked from sensitive actions like deleting resources or accessing secrets.
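The "read logs, but never touch secrets" posture described above maps naturally onto a Teleport role. The following is a hedged sketch, not the role used in the video; the role name and label selectors are assumptions:

```yaml
# Hypothetical Teleport role granting an agent read-only Kubernetes access
# while explicitly denying access to secrets.
kind: role
version: v7
metadata:
  name: openclaw-readonly
spec:
  allow:
    kubernetes_labels:
      "*": "*"
    kubernetes_groups: ["view"]
    kubernetes_resources:
    - kind: pod
      namespace: "*"
      name: "*"
      verbs: ["get", "list", "watch"]   # enough to read logs and inspect workloads
  deny:
    kubernetes_resources:
    - kind: secret
      namespace: "*"
      name: "*"
      verbs: ["*"]                      # deny rules win over allow rules
```

Because tbot issues short-lived certificates bound to an identity holding only this role, a compromised agent cannot escalate by stealing a long-lived credential; the worst it can do is what the role already permits.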

CVE-2026-32922: Critical Privilege Escalation in OpenClaw - What Cloud Security Teams Need to Know

Personal AI assistants are being adopted everywhere. Developers, power users, and in some cases entire teams self-host them to connect messaging apps, automate tasks, and interact with AI models across their infrastructure. But when these self-hosted gateways are compromised, the blast radius can extend far beyond a single user's chat history.