Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

We let OpenClaw loose on an internal network. Here's what it found

Following our article on the challenges posed by agentic AI, we gave OpenClaw access to one of our legacy networks. In my previous article on OpenClaw I wrote: “Even the most ‘risk-on’ organizations, with deep AI and security experience, will likely find it challenging to configure OpenClaw in a way that effectively mitigates the risk of compromise or data loss while still retaining any productivity value.” The Red Team here at Sophos took that as ‘challenge accepted’.

Frontier AI Models Mark a Turning Point for Cybersecurity

This week Anthropic announced Project Glasswing, a cybersecurity initiative built around Claude Mythos Preview, an unreleased frontier AI model capable of autonomously discovering and developing exploits for zero-day vulnerabilities across major operating systems and web browsers. According to early details, the model has already identified thousands of critical vulnerabilities that traditional tools have missed for years.

AI Phishing Attack Prevention Strategies: How AI Identifies and Limits Human Risk

AI is making phishing attacks easier to create and scale. Tasks that once required manual effort can now be automated, allowing attackers to generate realistic messages, launch campaigns, and adapt tactics quickly to evade security controls. In fact, KnowBe4’s 2025 Phishing Threat Trends Report found that more than 73% of phishing emails analyzed in 2024 showed signs of AI involvement. As a result, phishing threats are becoming harder to detect using traditional methods alone.

Your AI SOC still needs a SIEM. Here's why that won't change.

Everyone is building sophisticated intelligence layers with improved models and smarter agents to automate threat detection, investigation, and response; that maturation is what an AI SOC requires. However, the organizations seeing the most value from AI in their SOC are not focusing solely on the intelligence layer. They are focusing on the data foundation first.

Enterprise AI Security Use Cases: What Security Teams Are Solving For

Enterprise AI adoption is no longer a future problem. The average organization uses 54 generative AI (genAI) applications, and endpoint AI agent adoption is accelerating, with Cyberhaven research tracking 276% growth in 2025. Security programs have struggled to keep pace with both trends. The AI security gap is technical, not philosophical. Most organizations have AI acceptable use policies.

Secret Scanning For AI Coding Tools With ggshield

Introducing ggshield AI hooks from GitGuardian to help stop AI coding assistants from leaking secrets. See how ggshield can scan prompts, tool calls, file reads, MCP calls, and tool output inside AI coding tools like Cursor, Claude Code, and VS Code with GitHub Copilot. When a secret is detected, ggshield can block the action before sensitive data is sent or exposed. You will also see how simple the setup is, with flexible install options for local or global use. This adds practical guardrails to AI-assisted development and helps teams move fast without increasing secret sprawl.
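To make the blocking behavior concrete, here is a minimal, hypothetical sketch of the idea behind such a guardrail: scan outbound text (a prompt, tool call, or file read) against known secret patterns and refuse to send it on a match. The patterns and function names below are illustrative assumptions, not ggshield's actual detectors or API; real scanners use hundreds of provider-specific rules plus entropy checks.

```python
import re

# Illustrative patterns only (assumed for this sketch); a production scanner
# such as ggshield ships far more detectors plus entropy-based checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns found in `text`."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

def guard_outbound(payload: str) -> str:
    """Hook point: block a prompt or tool output that appears to contain a secret.

    Raises PermissionError before the payload leaves the machine,
    mirroring the 'block the action' behavior described above.
    """
    hits = scan_text(payload)
    if hits:
        raise PermissionError(
            f"Blocked: possible secrets detected ({', '.join(hits)})"
        )
    return payload
```

In an AI coding tool, a hook like `guard_outbound` would sit between the editor and the model API, so a leaked credential is caught at the boundary rather than after it reaches a third-party service.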