
How multi-agent systems work in LimaCharlie

This video walks through how single agents and multi-agent systems are built and run inside the LimaCharlie platform. Agents in LimaCharlie are defined declaratively. Each agent specifies the model it runs, its instructions, the tools it can access, the events that trigger it, and the guardrails it operates under. This approach makes agents version-controllable, reviewable, and portable across tenants.
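A declarative definition along these lines might look like the following sketch. The field names and values here are illustrative assumptions, not LimaCharlie's actual agent schema:

```yaml
# Hypothetical agent definition -- field names are illustrative,
# not LimaCharlie's actual schema.
name: triage-agent
model: claude-sonnet          # which model the agent runs
instructions: |
  Triage new detections. Summarize the alert, enrich the involved
  host and user, and propose a severity.
tools:                        # tools the agent may call
  - get_detection
  - list_host_timeline
triggers:
  - event: new_detection      # events that wake the agent
guardrails:
  max_tool_calls: 20          # cap activity per invocation
  deny:
    - isolate_host            # destructive actions require a human
```

Because the whole definition is a text artifact, it can live in a repository, go through pull-request review, and be deployed to any tenant.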

Are we blindly giving AI access to everything?

Users are connecting AI tools without understanding the security implications. In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a security breach at Vercel that originated from a compromised third-party AI tool used by one of its employees. The attacker gained control of the employee's Google Workspace account, which provided access to Vercel's internal environment.

AI SecOps Workshop Series: Detection Engineering with LimaCharlie and Claude Code

This hands-on workshop is designed for security professionals interested in integrating advanced AI capabilities into their detection and response workflows. Attendees will receive practical, step-by-step instruction on using Claude Code, an agentic AI coding tool, to enhance security operations within the LimaCharlie platform for detection engineering use cases.

The Adversary's Speed Just Changed - What Mythos Means for Your Security Posture

The cybersecurity threat landscape just changed — and most organizations don't know it yet. In this conversation, Tanium's Pedro (CRO) and Mark Liu (VP of Solution Engineering) break down what Anthropic's Mythos really is, why security leaders everywhere are asking about it, and what organizations need to do right now. No marketing pitch — just a straight conversation about a consequential shift that's already underway.

CrowdStrike Expands ChatGPT Enterprise Integration with Enhanced Audit Logging and Activity Monitoring

As organizations scale ChatGPT Enterprise across departments, AI is becoming embedded in everyday business operations. Finance teams are building custom GPTs. Developers are using Codex to act on codebases. Employees are invoking third-party tools within AI conversations to automate workflows. As adoption accelerates, security teams face a fundamental challenge: visibility into the agents deployed and running in their SaaS environments.

How bail bond scams are using AI to target families

Bail bond scams are getting smarter with AI. Here's how to spot them before they cost you thousands. A call saying someone you love has been arrested and needs money ASAP can feel so real that you act before you think. Learn how bail bond scams work and what to watch for to protect yourself and your family from falling for the scheme. Getting a call about bail isn’t something most people prepare for, and that’s exactly what scammers count on.

Behavior Intelligence: The New Model for Securing the Agentic Enterprise

Behavior Intelligence is a security operations model that detects risk by analyzing behavior, automates investigation and response using AI, and measures whether security outcomes are improving over time. It focuses on how users, systems, and AI agents operate rather than relying only on predefined rules or known indicators of compromise. This shift matters because modern attacks rarely look malicious at first. They look normal.
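The core idea can be sketched in a few lines: instead of matching known-bad indicators, build a per-entity baseline of normal activity and flag deviations from it. This is a minimal illustration of that principle, not the article's actual model; the entity, metric, and 3-sigma threshold are assumptions:

```python
from statistics import mean, stdev

def behavior_score(baseline: list[float], observed: float) -> float:
    """Z-score of an observed activity rate against an entity's own baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

# A service account that normally reads ~5 files/hour suddenly reads 500.
# Nothing about any single read matches a known IOC -- only the behavior
# relative to this account's own history stands out.
baseline = [4.0, 5.0, 6.0, 5.0, 4.0]
score = behavior_score(baseline, 500.0)
anomalous = score > 3.0   # simple 3-sigma cutoff
```

Real systems model many signals per entity, but the detection logic is the same shape: normal is defined per subject, and risk is distance from that normal.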

Runtime Observability for LangChain and AutoGPT on Kubernetes

A platform team at a mid-size SaaS company runs three LangChain agents and one AutoGPT-derived planner on EKS. LangSmith is wired in. OpenTelemetry traces flow into their observability stack. Falco runs on every node. The setup is what most security teams would consider thorough. A pip dependency in one of the agents’ tool packages ships a malicious update.
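In a scenario like this, trace-level tooling sees a normal tool call while the malicious update does its work at the syscall layer. A node sensor such as Falco is the layer that could catch it; this is a hedged sketch of such a rule, where the container image name and the egress allow-list are assumptions invented for the example:

```yaml
# Illustrative Falco rule -- image name and allowed endpoints are
# assumptions for this scenario, not real values.
- list: agent_allowed_endpoints
  items: ["api.openai.com", "api.smith.langchain.com"]

- rule: Agent Pod Unexpected Outbound Connection
  desc: >
    An agent container opened an outbound connection to a host not on
    its expected egress allow-list -- e.g. a tool dependency
    exfiltrating data after a malicious package update.
  condition: >
    evt.type = connect and evt.dir = < and
    container.image.repository = "acme/langchain-agent" and
    not fd.sip.name in (agent_allowed_endpoints)
  output: >
    Unexpected egress from agent pod
    (image=%container.image.repository dest=%fd.sip.name:%fd.sport
    command=%proc.cmdline)
  priority: WARNING
```

The point is the vantage: LangSmith and OpenTelemetry record what the agent intended; only a runtime sensor records what the process actually connected to.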

AI Inference Server Observability in Kubernetes: The Four Signals MLOps Tools Don't Capture

In August 2025, a vulnerability chain was found in NVIDIA Triton Inference Server that allowed an unauthenticated remote attacker to send a single crafted inference request, leak the name of an internal shared memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend’s private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton’s standard inference API. No anomalous traffic volume.

Runtime Observability for MCP Servers: A Security Guide

Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools—the APM platforms, the OpenTelemetry traces, the centralized logging pipelines—track performance across your Model Context Protocol deployments.
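The blind spot can be made concrete: the JSON-RPC log records what a tool call *claims*, while a runtime sensor records what the server process *did*. A minimal sketch of correlating the two is shown below; the `read_file` tool, the shape of the `runtime_effects` data, and the paths are all hypothetical:

```python
import json

def audit_tool_call(request: dict, runtime_effects: dict) -> list[str]:
    """Compare what a JSON-RPC tool call declares with what the process did.

    `runtime_effects` stands in for data a runtime sensor (eBPF, auditd,
    Falco) would collect while the call executed -- its shape is an
    assumption for this sketch.
    """
    findings = []
    declared = request["params"]["arguments"].get("path")
    for opened in runtime_effects.get("files_opened", []):
        if opened != declared:
            findings.append(f"undeclared file access: {opened}")
    for dest in runtime_effects.get("connections", []):
        findings.append(f"outbound connection during tool call: {dest}")
    return findings

# What the logging pipeline sees: a well-formed, successful tool call.
request = {"jsonrpc": "2.0", "method": "tools/call",
           "params": {"name": "read_file",
                      "arguments": {"path": "config.yaml"}}}

# What a runtime sensor saw while that call ran.
effects = {"files_opened": ["config.yaml", "/home/app/.aws/credentials"],
           "connections": ["203.0.113.7:443"]}
findings = audit_tool_call(request, effects)
```

At the JSON-RPC layer the benign and malicious cases log identically; only the runtime effects distinguish a config read from a credential dump.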