
Phishing Campaigns Abuse AI Workflow Automation Platforms

Threat actors are abusing agentic AI automation platforms to deliver malware and send phishing emails, according to researchers at Cisco Talos. The researchers observed attackers using n8n, a legitimate platform that automates workflows in web apps and services like Slack, GitHub, Google Sheets, and others.

CrowdStrike Expands Real-Time Cloud Detection and Response to Google Cloud

Complexity has become a defining security challenge as organizations expand across hybrid and multi-cloud environments. In fact, 52% of surveyed organizations ranked multi/hybrid cloud complexity among their top three infrastructure concerns. This complexity creates fragmented visibility across cloud providers, workloads, and Kubernetes environments — gaps that adversaries increasingly exploit to move undetected.

Building a Governed AI Model Supply Chain: Integrating AWS SageMaker and the JFrog Platform

Amazon SageMaker accelerates the process of training and deploying machine learning models. However, as AI adoption scales from individual experiments to enterprise-wide production, the focus of leading Fortune 500 software development operations and security teams must shift from pure velocity to governance.

Defending at Machine Speed in the Autonomous Age

Frontier AI models are accelerating the discovery of new vulnerabilities and enabling their exploitation at speed and scale. This alone isn’t the problem. Trust in AI‑driven security outcomes is. With AI dominating headlines, security leaders are asking what models like Mythos or GPT‑5.4‑Cyber mean for their business. The real issue runs deeper: teams need to be able to trust tools and technology that move at machine speed.

Logging Is Not Observability: The AI Security Gap MSSPs Can't Ignore

Every MSSP is fielding the same question from clients right now: "Are we safe with AI?" Most are answering with some version of "yes, we're logging everything." In a recent Defender Fridays episode, Saurabh Shintre, Founder and CEO of Realm Labs, drew a hard line between these two concepts: "You can log prompt and response and this bare minimum you have to do."

AI SecOps Workshop Series: Accelerating Cloud Security Operations with Claude Code and LimaCharlie

In this workshop, we will show how to use Claude Code with LimaCharlie to accelerate cloud security operations. We will have Claude Code deploy agents, create detections, and identify issues before they become incidents. This hands-on workshop demonstrates the transformative power of integrating Anthropic's Claude Code with the versatile LimaCharlie security platform, leveraging Claude Code's capabilities to significantly accelerate and streamline cloud security operations, turning reactive tasks into proactive, automated workflows.

You're Not Watching MCPs. Anthropic's Vulnerability Shows Why You Should Be.

Last week, researchers at OX Security published findings that should stop every security leader in their tracks. They discovered a critical vulnerability baked directly into Anthropic's Model Context Protocol SDK, affecting every supported language: Python, TypeScript, Java, and Rust. The result: remote code execution on any system running a vulnerable MCP implementation, with direct access to sensitive user data, internal databases, API keys, and chat histories. Over 7,000 publicly accessible servers were exposed.

Anthropic's Mythos and the New Reality of AI Cybersecurity Risk

I was on ABC News recently discussing why banks are on alert as new AI systems like Anthropic’s Claude Mythos raise cybersecurity concerns. What struck me most is how quickly the conversation has shifted. This is no longer a hypothetical risk or something we are planning for in the future. Financial institutions and regulators are reacting in real time to what AI is already capable of doing. From my perspective, we are still underestimating how fast this is moving.