Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

CVE-2026-32922: Critical Privilege Escalation in OpenClaw - What Cloud Security Teams Need to Know

The adoption of personal AI assistants is on the rise. everywhere. Developers, power users, and in a few cases, entire teams self-host them to connect messaging apps, automate tasks, and interact with AI models across their infrastructure. But when these self-hosted gateways become compromised, the blast radius can extend far beyond a single user’s chat history.

AI Workload Security on Azure: Evaluating Defender for Cloud Against Specialized Runtime Tools

Your SOC gets a Defender for Cloud alert: “Suspicious API call from AI workload pod.” You click through and find a LIST secrets call against the Kubernetes API server from a pod running your invoice-processing agent on AKS. The pod’s Workload Identity has Contributor access to your key vault. By the time your analyst opens the AKS Security Dashboard, the pod has been rescheduled.

AI Agent Security Framework on AWS EKS: Implementation Guide

You’ve enabled GuardDuty EKS Runtime Monitoring across your clusters. You’ve configured IRSA for your Bedrock-calling agents. CloudTrail is logging every bedrock:InvokeModel event. And last Tuesday, one of your AI agents exfiltrated 12,000 customer records through a sequence of API calls that every one of those tools recorded as completely normal—because at the control plane level, they were.

The Library That Holds All Your AI Keys Was Just Backdoored: The LiteLLM Supply Chain Compromise

We just published a deep breakdown of the Trivy supply chain attacks yesterday. Twenty-four hours later, we’re writing about the next one. Same threat actor. Different target. Worse implications. This time it’s LiteLLM, the Python library that acts as a universal API gateway for over 100 LLM providers. If you’re building anything with AI agents, MCP servers, or LLM orchestration, there’s a good chance LiteLLM is somewhere in your dependency tree.

When Your Friend's House Burns Down Twice: The Trivy Supply Chain Attacks Explained

We’ve been going back and forth on whether to publish this post. As the maintainers of Kubescape, a fellow CNCF open-source security project, we feel the weight of what happened to Trivy not as distant observers, but as people who see their successes and failures as our own. The Trivy maintainers are our friends. We share the same CNCF community, attend the same KubeCon-s, and fight the same fights (and share the same flights ).

AI Workload Security for Financial Services: What CISOs Need to Know

When your SOC alerts on “suspicious AI activity” in a production trading system, your response team faces a question that didn’t exist two years ago: can you explain to regulators exactly which function processed the malicious prompt, which internal tool it called, and how customer data ended up leaving your environment?

Why Generic Container Alerts Miss AI-Specific Threats

It’s 2:47 AM and your SOC dashboard lights up. Six alerts fire across three hours from a single Kubernetes cluster: an outbound HTTP fetch to an unfamiliar domain, a tool invocation inside a customer support agent, an API call to an internal service the agent has never contacted, a service account token read, a file write to a model artifact directory, and an outbound data transfer that looks like normal API usage.

AI Workload Security Tools: Runtime vs. Declarative Compared

You’re forty-five minutes into a vendor demo for AI workload security. The dashboard looks polished—posture scores, misconfiguration findings, vulnerability counts, all tagged with an “AI workload” label that wasn’t there last quarter. You ask the obvious question: “Show me how this detects a prompt injection attack on our production agent.” Long pause. The SE pulls up a generic process anomaly rule.

Cloud-Native Security for AI Workloads: Why It Matters and What's Changed

You’ve been securing Kubernetes workloads for years. Your CSPM is running, your CNAPP is configured, your team knows how to triage container alerts. Then an AI agent lands in your cluster — maybe from the data science team, maybe from a vendor integration, maybe from a tool you didn’t even know was running. Within a week, it’s making API calls nobody planned, accessing data stores that aren’t in the architecture diagram, and executing code it generated itself.

AI Workload Security on AWS: Evaluating Native Tools vs Third-Party Solutions

Your Bedrock agent running on EKS receives a prompt through your RAG pipeline. CloudTrail logs it as a normal bedrock:InvokeModel event—status 200, authorized IAM role, expected endpoint. But inside the container, the agent’s response triggers a tool call that spawns curl to an external IP, exfiltrating the context window. GuardDuty doesn’t flag it because the connection routes through a permitted VPC endpoint. You open your AWS console and see a healthy API call.