
How multi-agent systems work in LimaCharlie

This video walks through how single agents and multi-agent systems are built and run inside the LimaCharlie platform. Agents in LimaCharlie are defined declaratively. Each agent specifies the model it runs, its instructions, the tools it can access, what events trigger it, and the guardrails it operates under. This approach makes agents version controllable, reviewable, and portable across tenants.
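As a rough illustration of that declarative shape, the sketch below models an agent definition as plain data plus a validation pass. All field names here are hypothetical and chosen for illustration; they are not the actual LimaCharlie agent schema.

```python
# Hypothetical declarative agent definition. Field names are illustrative,
# not the real LimaCharlie schema -- the point is that the whole agent is
# plain data, so it can be diffed, reviewed, and copied across tenants.
agent_spec = {
    "name": "triage-agent",
    "model": "example-model",                      # which LLM the agent runs
    "instructions": "Triage new detections and summarize likely impact.",
    "tools": ["sensor_query", "timeline_fetch"],   # tools the agent may call
    "triggers": ["detection_created"],             # events that start a run
    "guardrails": {
        "max_actions": 10,                         # cap on tool calls per run
        "require_approval": ["isolate_host"],      # actions needing a human
    },
}

def validate(spec: dict) -> list[str]:
    """Return the required fields missing from an agent spec."""
    required = ["name", "model", "instructions", "tools", "triggers", "guardrails"]
    return [field for field in required if field not in spec]

print(validate(agent_spec))  # -> []
```

Because the definition is data rather than code, a review step like `validate` can run in CI before an agent is deployed to any tenant.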

SMB Risks, AI, and Regional Realities with Paul Harris - The 443 Podcast - Episode 368

This week on the podcast, Marc and Corey sit down with Paul Harris, CEO of BGLA and Futurity Corp at WatchGuard's Impact Partner Conference in Tulum, to explore the evolving cybersecurity landscape across Latin America. Paul shares his journey from early days in cybersecurity to leading organizations in the region, while breaking down the biggest concerns facing LATAM SMBs today. The conversation also covers how AI is reshaping cybersecurity, the challenges of securing partners across diverse markets, and practical advice for business leaders looking to stay ahead of cyber risk in LATAM.

Detection Engineering with LimaCharlie and Claude Code

Detection engineering is fundamentally a translation problem: rules need to be converted between formats, IOCs need to be converted into detection logic, and noisy alerts need to be converted into precise suppressions. That translation work is what consumes analyst time, and it's what Claude Code handles well.
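To make the translation framing concrete, here is a minimal sketch of the IOC-to-detection step. The rule shape and field names below are invented for illustration; a real pipeline would emit the target platform's own rule language (Sigma, LimaCharlie D&R rules, and so on).

```python
# Sketch of translating a flat IOC list into equality-match detection rules.
# The rule structure is invented for this example, not a real rule format.
iocs = [
    {"type": "domain", "value": "bad.example.com"},
    # sha256 of the empty string, used here purely as a placeholder value
    {"type": "sha256", "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
]

# Map each indicator type to the telemetry field it should match against.
FIELD_FOR_TYPE = {"domain": "dns.query", "sha256": "file.hash"}

def ioc_to_rule(ioc: dict) -> dict:
    """Translate one indicator into a simple equality-match detection rule."""
    return {
        "detect": {
            "op": "is",
            "path": FIELD_FOR_TYPE[ioc["type"]],
            "value": ioc["value"],
        },
        "respond": [{"action": "report", "name": f"ioc-{ioc['type']}-match"}],
    }

rules = [ioc_to_rule(ioc) for ioc in iocs]
```

The mechanical part of the translation is trivial; the analyst time goes into choosing the right `path` for each indicator type and deciding which matches deserve a response, which is exactly the judgment work an assistant can draft for review.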

System Prompts Are Not Security Controls: A Deleted Production Database Proves It

On April 25th, a Cursor AI coding agent running Anthropic's Claude Opus 4.6, one of the most capable models in the industry, deleted the production database for PocketOS, a software platform used by car rental businesses across the country to manage their entire operations. The deletion took 9 seconds.

The Research Behind Detecting and Attributing LLM-Generated Passwords - Gaëtan Ferry

GitGuardian Senior Cybersecurity Researcher Gaëtan Ferry’s latest research shows that AI-generated passwords are leaving fingerprints in the wild. In this interview, he explains how he used Markov chains, a century-old statistical model, to detect patterns in passwords generated by modern LLMs, attribute them to model families, and identify 28,000 likely LLM-generated passwords across public GitHub. The findings are a warning for teams adopting AI coding agents.
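As a toy illustration of the statistical machinery involved, the sketch below fits an order-1 character Markov model to a password sample and scores new passwords by their average transition log-likelihood. The real research works at far larger scale with per-model-family corpora; this only shows the scoring mechanics.

```python
import math
from collections import defaultdict

def train(passwords):
    """Fit a character-level order-1 Markov model with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        for a, b in zip("^" + pw, pw + "$"):  # ^ marks start, $ marks end
            counts[a][b] += 1
    alphabet = {c for pw in passwords for c in pw} | {"$"}
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values()) + len(alphabet)
        model[a] = {b: math.log((nxt.get(b, 0) + 1) / total) for b in alphabet}
    return model, alphabet

def log_likelihood(model, alphabet, pw):
    """Average per-transition log-probability of pw under the model."""
    floor = math.log(1 / (len(alphabet) + 1))  # penalty for unseen transitions
    total = sum(model.get(a, {}).get(b, floor)
                for a, b in zip("^" + pw, pw + "$"))
    return total / (len(pw) + 1)

model, alphabet = train(["password", "passcode", "passport"])
```

Strings whose character transitions resemble the training sample score higher than out-of-distribution strings; comparing a password's score under models trained on different sources is the basis of the attribution step.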

Accelerating AI Discovery & Governance with the Falcon Platform

As AI adoption accelerates, so does shadow AI. Without a complete inventory of AI tools, agents, and activity, organizations are exposed to unapproved usage and data risk. In this video, you will see how the Falcon platform helps teams: discover AI tools, models, and services in seconds; identify unapproved and risky usage; see where AI is running and what it can access across endpoints; and take action to enforce governance at scale.

Shutdowns, power outages, and conflict: a review of Q1 2026 Internet disruptions

In the first quarter of 2026, government-directed shutdowns figured prominently, with prolonged Internet blackouts in both Uganda and Iran, a stark contrast to the lack of observed government-directed shutdowns in the same quarter a year prior. This quarter, we also observed a number of Internet disruptions caused by power outages, including three separate collapses of Cuba's national electrical grid.

Runtime Observability for MCP Servers: A Security Guide

Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools—the APM platforms, the OpenTelemetry traces, the centralized logging pipelines—track performance across your Model Context Protocol deployments, not what the tools actually did at runtime.
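One hedged sketch of what that missing semantic layer could look like: a wrapper that emits a structured event per tool invocation recording which resources the tool actually touched, not just how long the call took. The event fields and the handler convention below are illustrative assumptions, not a standard MCP telemetry schema.

```python
import json
import time

def observed(tool_name, handler, emit=print):
    """Wrap an MCP-style tool handler so each call emits a semantic event."""
    def wrapper(params):
        start = time.monotonic()
        # Convention (assumed): the handler returns its result plus a list
        # of the resources it touched during execution.
        result, touched = handler(params)
        emit(json.dumps({
            "tool": tool_name,
            "params": params,
            "resources": touched,  # e.g. files read, hosts contacted
            "duration_ms": round((time.monotonic() - start) * 1000, 2),
        }))
        return result
    return wrapper

# Hypothetical tool handler for illustration.
def read_file(params):
    path = params["path"]
    return f"<contents of {path}>", [{"kind": "file_read", "path": path}]

events = []
tool = observed("read_file", read_file, emit=events.append)
tool({"path": "/etc/app/config.yaml"})
```

With events shaped like this, the question "harmless config file or credential dump to an external IP?" becomes answerable from the `resources` field rather than from latency graphs.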

AI Inference Server Observability in Kubernetes: The Four Signals MLOps Tools Don't Capture

In August 2025, researchers disclosed a vulnerability chain in NVIDIA Triton Inference Server that allowed an unauthenticated remote attacker to send a single crafted inference request, leak the name of an internal shared memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend's private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton's standard inference API, with no anomalous traffic volume.