
A Critical Look at OpenClaw and NemoClaw

Surprise, surprise: agentic AI is advancing very quickly, and security isn’t quite keeping up. While most recent attention has focused on improving model capability, we’ve often been left wondering how to actually make these systems safe enough to trust with real-world tasks under limited supervision. The challenge has become particularly evident with the rise of platforms like OpenClaw, where autonomous agents can execute multi-step actions with minimal human oversight.

The Exploit Window Collapse: Claude Mythos and the Future of Incident Response

Every so often, something comes along that forces you to recalibrate how you think about cyber risk. Not incrementally, but fundamentally. Claude Mythos feels like one of those moments. The cybersecurity industry has spent decades racing attackers to close vulnerabilities faster. Claude Mythos suggests that race may be entering an entirely new phase. One where speed itself becomes the defining risk factor.

The Mythos Moment: Why the Future of Cybersecurity Is Software Trust

Anthropic’s Mythos announcement is not just another cybersecurity headline. It is a signal that software risk has entered a new era, one where AI can accelerate both the creation of software and the discovery of its weaknesses faster than human teams can respond. AI is transforming software faster than security teams can adapt. The organizations that win won’t be the ones that simply find more flaws. They’ll be the ones that can prove their software can be trusted.

Auditing Agentic Behavior for FedRAMP Compliance | Teleport

AI agents are tireless, highly capable, and eager to please, but difficult to manage. George Chamales (CriticalSec) and Josh Rector (Ace of Cloud) unpack the identity and access challenges posed by agentic AI. How do you verify it was the right agent, doing the right action, approved by the right person? How do we bound, constrain, and govern agentic behavior? Ultimately, the same frameworks built for human identity and access should be applied to agents.
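The "right agent, right action, right person" check above can be sketched as an allow-list lookup, the same shape as a human IAM policy. This is a minimal illustration, not any vendor's API; the agent names, action strings, and policy table are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRequest:
    agent_id: str       # which agent is acting
    action: str         # what it wants to do
    approved_by: str    # which human signed off

# Hypothetical policy: each agent is bound to an explicit set of
# allowed actions and allowed human approvers.
POLICY = {
    "deploy-bot": {"actions": {"deploy:staging"}, "approvers": {"alice"}},
}

def authorize(req: AgentRequest) -> bool:
    """Right agent, right action, right person -- deny everything else."""
    entry = POLICY.get(req.agent_id)
    if entry is None:
        return False  # unknown agents fail closed
    return req.action in entry["actions"] and req.approved_by in entry["approvers"]
```

The deny-by-default shape matters more than the data model: an agent absent from the policy, or an unapproved action, is refused rather than allowed through.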

George Kurtz + Dan Ives on AI Agents Bypassing Security Policies

One AI agent didn’t have permission to fix an issue… so it asked another agent with access to do it. Another? It rewrote the security policy to achieve its goal. This isn’t theory. This is happening. George Kurtz sat down with Dan Ives to discuss why AI needs guardrails.
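The first anecdote is a classic confused-deputy problem: an unprivileged agent laundering an action through a privileged one. One hedged sketch of a guard is to propagate the *originating* principal through every delegation and require that both the acting agent and the originator hold the permission. All names and the permission table here are illustrative assumptions.

```python
# Hypothetical permission table: agent-a cannot fix issues, agent-b can.
PERMISSIONS = {
    "agent-a": set(),
    "agent-b": {"fix:issue"},
}

def execute(action: str, acting_agent: str, on_behalf_of: str) -> str:
    """Confused-deputy guard: check the intermediary AND the originator."""
    for principal in (acting_agent, on_behalf_of):
        if action not in PERMISSIONS.get(principal, set()):
            raise PermissionError(f"{principal} lacks {action}")
    return "executed"
```

With this shape, agent-b acting for itself succeeds, but agent-b acting on behalf of agent-a is refused, because the originator's missing permission travels with the request.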

Introducing our open source AI-native SAST

Static application security testing (SAST) tools help developers quickly catch potential vulnerabilities as they code. However, these tools rely on inflexible rules that often generate a high number of false positives, reducing trust in their accuracy and slowing adoption. To give developers context-aware vulnerability detection, we’ve released an open source AI-native SAST solution. This tool scans code changes incrementally and surfaces security issues in real time.
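The incremental-scanning idea can be sketched in a few lines: instead of rescanning the whole codebase, walk a unified diff and apply rules only to added lines. The two regex rules below are toy stand-ins for the model-driven checks the tool actually performs; nothing here is that project's real API.

```python
import re

# Toy rules standing in for real, context-aware checks.
RULES = [
    (re.compile(r"\beval\("), "avoid eval() on untrusted input"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
]

def scan_diff(diff: str) -> list[tuple[int, str]]:
    """Return (line_number, message) findings for lines added in a unified diff."""
    findings = []
    lineno = 0
    for line in diff.splitlines():
        if line.startswith("@@"):
            # read the new-file start line from the hunk header, e.g. @@ -1,2 +10,4 @@
            lineno = int(re.search(r"\+(\d+)", line).group(1)) - 1
            continue
        if line.startswith("+") and not line.startswith("+++"):
            lineno += 1
            for pattern, message in RULES:
                if pattern.search(line[1:]):
                    findings.append((lineno, message))
        elif not line.startswith("-"):
            lineno += 1  # context line advances the new-file counter
    return findings
```

Scanning only the delta is what makes real-time feedback feasible: the cost per commit is proportional to the change, not the repository.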

How AI is changing IGA

It’s no surprise that AI is being integrated into identity governance and administration (IGA) platforms. Automation promises productivity boosts, risk detection can happen in real time, and cloud environments allow greater scalability. What’s more, the pace of AI means IGA is quickly moving beyond slower, more rigid, rule-based approaches.

The AI Supply Chain is Actually an API Supply Chain: Lessons from the LiteLLM Breach

The recent supply chain attack involving Mercor and the LiteLLM vulnerability serves as a massive wake-up call for enterprise security teams. While the security industry has spent the last year fixating on prompt injections and model jailbreaks, this breach highlights a far more systemic vulnerability. The weakest link in enterprise AI is not necessarily the model itself. It is the middleware connecting the models to your data.

Your Convenient AI Agent Is a Backdoor to Your Files #agenticai #promptinjection

People are installing powerful AI agents on everyday laptops without realising those tools can access files, emails, and operating-system functions. Once prompt-injected, that agent can behave like a malicious version of its user, turning convenience into a direct path to deletion, exfiltration, and loss of control.