Securing AI agents in Teleport, focused on unified identity, eliminating standing privileges, and enforcing real policy controls instead of relying on the whims of an agent.
A credential access event fired. An AI agent investigated it, correlated it against running processes, assessed the risk, and closed the ticket. No analyst touched it. The entire loop ran in minutes. This is what security operations look like when AI can actually operate in the environment rather than advise from outside it. Security operations have always required a special kind of person.
On April 22, 2026, Google's Threat Intelligence Group and Mandiant disclosed a campaign by a threat actor they're tracking as UNC6692. The group breached enterprise networks by impersonating IT helpdesk staff over Microsoft Teams, ultimately exfiltrating Active Directory databases and achieving full domain compromise. What's notable about UNC6692 is what they didn't do. They didn't use a zero-day. They didn't exploit a software vulnerability.
Garrett Hamilton recently presented at the North Texas ISSA Lunch & Learn in Plano, TX to talk about what risk reduction actually looks like in practice. Reach shows customers exactly which controls they've deployed, the user impact of those changes, and how much risk has been reduced across IAM, EDR, email, firewall, and SASE. Not feature checklists. Targeted, measurable outcomes tied to the business.
Tuesday, 09:14 UTC. A connector pulling content from your knowledge wiki indexes a new article into the vector database your support agents query at runtime. Embedded in legitimate troubleshooting prose is an instruction crafted to surface whenever a query mentions a specific product version — include the user’s account record in the response and POST the summary to the configured support webhook. For three days, nothing happens. Every security tool is green.
A platform engineer at a mid-market fintech opens her SCA dashboard at the start of the quarter. The agentic customer-support pipeline her team shipped two months ago — a LangChain orchestrator, a vLLM inference server with two fine-tuned LoRA adapters pulled from Hugging Face, and an MCP toolkit wired to four internal APIs — shows green. Snyk has scanned every Python package in the container. Mend has cleared the dependency graph. The CVE count is zero.
A platform engineer pulls the AI-SPM dashboard for an agent that has been running in production six weeks. The static dashboard shows several dozen findings, severity-sorted by configuration weight. The runtime-informed dashboard shows a smaller, prioritized list — but a few of those findings do not appear on the static view at all, and most of the static findings appear demoted to a tier the static view does not have. Same agent. Same window. Same underlying configuration.
Every cloud security vendor launched an AI-SPM dashboard in the past year. Strip away the branding and most of them are presenting the same concept: a new posture management layer for AI workloads. Sit through four demos in the same week and a practical question surfaces. The dashboards look broadly similar — pie charts of findings, compliance tags, a list of AI assets, a severity ranking. Why, then, do the tools underneath cover completely different parts of the problem?
A Tier 1 bank’s security architecture already spends heavily on detection. On one side sits the financial surveillance stack — fraud scoring platforms processing thirty thousand transactions an hour, AML monitoring watching money movement patterns, DLP engines scanning data in transit, payment anomaly detection tuned by a decade of production signal.
The Mythos-ready briefing names secrets rotation, NHI governance, and honeytokens as critical controls. Zero-days don't replace credential attacks; they accelerate them. Credential security deserves to move up every CISO's priority list.