
Prompt and Tool Call Visibility: What Your AI Agents Are Actually Doing

It is 11:47 p.m. and the on-call security engineer is staring at two dashboards. On the left, LangSmith — the ML team’s debugging stack — showing the agent’s prompts, model responses, tool calls, and tokens consumed. On the right, the runtime detection console showing eBPF-captured syscalls, network connections, and process trees from the same Pod. Both are populated.

Whole-of-state cyber defense: How AI-driven security helps US states protect what matters most

Short answer: because attackers exploit fragmentation faster than governments can respond. This shift toward collective cyber defense is a cornerstone of the new federal vision. The March 2026 National Cyber Strategy for America explicitly calls for a "new level of relationship between the public and private sectors" and demands "unprecedented coordination across government" to protect the American people.

How bail bond scams are using AI to target families

Bail bond scams are getting smarter with AI. Here's how to spot them before they cost you thousands. A call saying someone you love has been arrested and needs money ASAP can feel so real that you act before you think. Learn how bail bond scams work and what to watch for to help protect you and your family from falling for the scheme. Getting a call about bail isn’t something most people prepare for, and that’s exactly what scammers count on.

Behavior Intelligence: The New Model for Securing the Agentic Enterprise

Behavior Intelligence is a security operations model that detects risk by analyzing behavior, automates investigation and response using AI, and measures whether security outcomes are improving over time. It focuses on how users, systems, and AI agents operate rather than relying only on predefined rules or known indicators of compromise. This shift matters because modern attacks rarely look malicious at first. They look normal.
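The contrast between indicator matching and behavioral analysis can be sketched in a few lines. This is a toy illustration only, not the Behavior Intelligence implementation: the baseline counts, action names, and scoring function are all hypothetical, but they show the core idea that an action is scored against the entity's own history rather than a known-bad list.

```python
from collections import Counter

# Hypothetical per-user baseline of observed actions (counts over, say,
# the last 30 days). In a real system this would be learned continuously.
baseline = Counter({"read:crm": 120, "read:wiki": 80, "export:report": 3})

def anomaly_score(action: str) -> float:
    """Score how unusual an action is for THIS entity.

    Actions common in the baseline score near 0; actions the entity has
    never performed score 1.0, even if no threat-intel indicator matches.
    """
    total = sum(baseline.values())
    return 1.0 - baseline.get(action, 0) / total

print(anomaly_score("read:crm"))            # routine action, low score
print(anomaly_score("export:all_records"))  # never seen before, max score
```

A signature engine would pass `export:all_records` untouched because it matches no known indicator; a behavioral model flags it precisely because it is abnormal for that user, which is the point the paragraph above makes.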

Runtime Observability for LangChain and AutoGPT on Kubernetes

A platform team at a mid-size SaaS company runs three LangChain agents and one AutoGPT-derived planner on EKS. LangSmith is wired in. OpenTelemetry traces flow into their observability stack. Falco runs on every node. The setup is what most security teams would consider thorough. A pip dependency in one of the agents’ tool packages ships a malicious update.
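The failure mode in the scenario above is not missing telemetry but missing correlation: Falco sees the syscalls, yet nothing ties them back to the agent workloads. A minimal sketch of that correlation step, assuming Falco-style JSON events (the pod names, rule names, and exact field layout here are illustrative assumptions, not real output):

```python
import json

# Hypothetical Falco-style JSON events. Real Falco output (falco -j)
# has a similar shape, but fields vary by version and rule set.
events = [
    {"rule": "Outbound Connection", "priority": "Notice",
     "output_fields": {"k8s.pod.name": "langchain-agent-2",
                       "fd.sip": "203.0.113.9", "proc.name": "python"}},
    {"rule": "Read Sensitive File", "priority": "Warning",
     "output_fields": {"k8s.pod.name": "billing-api-0",
                       "fd.name": "/etc/shadow", "proc.name": "cat"}},
]

# Known agent workload name prefixes (assumed for this sketch).
AGENT_PODS = ("langchain-agent", "autogpt-planner")

def agent_events(evts):
    """Keep only runtime events originating from known agent workloads,
    so a compromised tool dependency surfaces as agent-attributed activity
    rather than an anonymous node-level alert."""
    return [e for e in evts
            if e["output_fields"].get("k8s.pod.name", "").startswith(AGENT_PODS)]

flagged = agent_events(events)
print(json.dumps(flagged, indent=2))
```

With this join in place, the malicious pip update's first outbound connection shows up as "the LangChain agent pod did something new," which is a far more actionable signal than a generic network alert.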

AI Inference Server Observability in Kubernetes: The Four Signals MLOps Tools Don't Capture

In August 2025, researchers disclosed a vulnerability chain in NVIDIA Triton Inference Server that allowed an unauthenticated remote attacker to send a single crafted inference request, leak the name of an internal shared-memory region, register that region for subsequent requests, gain read-write primitives into the Triton Python backend's private memory, and achieve full remote code execution. The exploit chain ran entirely through Triton's standard inference API. No anomalous traffic volume.

Runtime Observability for MCP Servers: A Security Guide

Your security team sees an MCP tool server throw an error. Your APM dashboard shows a latency spike. Your logs capture the JSON-RPC request with its method name and parameters. But none of that tells you whether the tool just read a harmless config file or dumped credentials to an external IP. Traditional observability tools—the APM platforms, the OpenTelemetry traces, the centralized logging pipelines—track performance across your Model Context Protocol deployments.
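The visibility gap described above can be made concrete with a small sketch. Assuming a hypothetical MCP tool-call record in JSON-RPC form (the tool name, path, and log shape here are invented for illustration), everything an application-layer pipeline can extract is the requested method and its arguments:

```python
import json

# Hypothetical JSON-RPC request for an MCP tools/call, as a log
# pipeline might capture it. All identifiers here are illustrative.
raw = ('{"jsonrpc": "2.0", "id": 7, "method": "tools/call", '
       '"params": {"name": "read_file", '
       '"arguments": {"path": "/app/config.yaml"}}}')

request = json.loads(raw)

def summarize_rpc(req: dict) -> dict:
    """What an APM or logging pipeline can report about an MCP tool call:
    the declared method, tool name, and arguments, and nothing more."""
    params = req.get("params", {})
    return {
        "method": req.get("method"),
        "tool": params.get("name"),
        "arguments": params.get("arguments", {}),
    }

print(summarize_rpc(request))
# This summary records what the client ASKED the tool to do. It cannot
# confirm whether the tool process actually opened /app/config.yaml,
# opened a credentials file instead, or pushed the bytes to an external
# IP. Answering that requires syscall-level capture (e.g. via eBPF) of
# the tool server's open()/connect() activity, below the RPC layer.
```

That asymmetry is the whole argument: the JSON-RPC log is necessary for intent, but only runtime observability tells you what the tool did.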

Accelerating AI Discovery & Governance with the Falcon Platform

As AI adoption accelerates, so does shadow AI. Without a complete inventory of AI tools, agents, and activity, organizations are exposed to unapproved usage and data risk. In this video, you will see how the Falcon platform helps teams:

- Discover AI tools, models, and services in seconds
- Identify unapproved and risky usage
- See where AI is running and what it can access across endpoints
- Take action and enforce governance at scale