Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI SecOps Workshop Series: Accelerating Cloud Security Operations with Claude Code and LimaCharlie

In this workshop we will show how to use Claude Code with LimaCharlie to accelerate cloud security operations: Claude Code will deploy agents, create detections, and identify issues before they become incidents. This hands-on workshop demonstrates the power of integrating Anthropic's Claude Code with the LimaCharlie security platform. Our focus is on leveraging Claude Code's capabilities to significantly accelerate and streamline cloud security operations, turning reactive tasks into proactive, automated workflows.

Logging Is Not Observability: The AI Security Gap MSSPs Can't Ignore

Every MSSP is fielding the same question from clients right now: "Are we safe with AI?" Most are answering with some version of "yes, we're logging everything." In a recent Defender Fridays episode, Saurabh Shintre, Founder and CEO of Realm Labs, drew a hard line between these two concepts: "You can log prompt and response, and this is the bare minimum you have to do."

Defending at Machine Speed in the Autonomous Age

Frontier AI models are accelerating the discovery of new vulnerabilities and can exploit those weaknesses at speed and scale. That alone isn’t the problem. Trust in AI‑driven security outcomes is. With AI dominating headlines, security leaders are asking what models like Mythos or GPT‑5.4‑Cyber mean for their business. The real issue runs deeper: teams need to be able to trust tools and technology that move at machine speed.

Building a Governed AI Model Supply Chain: Integrating AWS SageMaker and the JFrog Platform

Amazon SageMaker accelerates the process of training and deploying machine learning models. However, as AI adoption scales from individual experiments to enterprise-wide production, the focus of leading Fortune 500 software development operations and security teams must shift from pure velocity to governance.

Phishing Campaigns Abuse AI Workflow Automation Platforms

Threat actors are abusing agentic AI automation platforms to deliver malware and send phishing emails, according to researchers at Cisco Talos. The researchers observed attackers using n8n, a legitimate platform that automates workflows in web apps and services like Slack, GitHub, Google Sheets, and others.

Beyond the Prompt: Data Security in Generative AI Platforms

Generative AI tools have changed how people work and play online. Everyone is excited about the speed and creativity these systems offer. Users often type sensitive info into prompts without thinking about where it goes. Security experts worry about how these platforms handle personal data. It is easy to forget that anything typed into a public bot might be stored. Staying safe means knowing how to use these tools without giving away secrets.

MyClaw Detailed Review: Is This OpenClaw Managed Hosting Worth It?

I've been working in the AI tools space for a while now, and one thing that comes up repeatedly is the gap between open-source AI frameworks and the actual effort required to run them. OpenClaw is a great example - powerful, flexible, and genuinely useful for building AI agents. But getting it deployed and keeping it running? That's a different story. That's what led me to try MyClaw AI. Here's an honest look at what the platform actually offers, who it's for, and whether it's worth the cost.

LimaCharlie is the most secure way to run AI security agents

The idea that AI agents will run security operations is becoming reality. But most platforms ignore the most important question: how do you secure the agents themselves? In this video I walk through why LimaCharlie is the most secure platform for running agentic security operations and demonstrate the architectural controls that make it possible, covering the core mechanisms that allow AI agents to operate safely inside a SecOps environment.

Agentic AI at risk after MCP design flaw discovery? #ai #cybersecurity #podcast

In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a design flaw in Anthropic's Model Context Protocol (MCP) that could enable large-scale supply chain attacks on agentic AI systems. Researchers at OX Security found that MCP's command execution allows malicious commands to run silently without sanitization checks or warnings.
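To make the missing control concrete, here is a minimal illustrative sketch of the kind of sanitization check the researchers say is absent from MCP's command execution path: validating a command line against an executable allowlist and rejecting shell metacharacters that could chain hidden commands. The function name, allowlist, and policy below are hypothetical examples for illustration, not part of the MCP specification or OX Security's findings.

```python
import shlex

# Hypothetical allowlist of executables an agent may invoke.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def is_command_allowed(command_line: str) -> bool:
    """Return True only if the command's executable is on the allowlist
    and the line contains no shell metacharacters that could silently
    chain or substitute additional commands."""
    # Reject chaining/substitution tokens outright.
    if any(tok in command_line for tok in (";", "|", "&", "`", "$(")):
        return False
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        # Unbalanced quotes etc. -- refuse rather than guess.
        return False
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS
```

A host applying a check like this would refuse `ls; rm -rf /` (chained command) and `curl http://attacker.example` (executable not on the allowlist) while permitting `grep error app.log`.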