
Understanding shadow AI in your endpoint environment

Generative AI, and large language models in particular, reached mass consumer adoption in late 2022 and early 2023, with ChatGPT reaching 100 million users faster than any consumer application in history. Since then, AI has advanced at a breakneck pace and now seems to be incorporated into every tool, app, and website, regardless of how useful it actually is.

How Financial Services Teams Should Secure AI Agents in 2026

Your fraud detection agent scores 30,000 transactions per hour. Your KYC agent processes identity verifications against government watchlists. Your customer service chatbot resolves disputes and initiates balance transfers. Each agent runs on Kubernetes with inherited service account permissions that span payment APIs, customer databases, and compliance systems. Now imagine one of those agents is compromised through a prompt injection embedded in a customer support ticket.
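The blast radius in that scenario comes from inherited, blanket permissions. A least-privilege alternative is a deny-by-default check on every tool call an agent makes. The sketch below is purely illustrative (the agent names, action strings, and allowlist are hypothetical, not from any of the products mentioned):

```python
# Hypothetical sketch: deny-by-default authorization for agent tool calls.
# Each agent gets an explicit allowlist; anything not listed is refused,
# so a prompt-injected agent cannot reach APIs outside its job.
ALLOWED_ACTIONS = {
    "fraud-detection-agent": {"transactions:read", "risk:score"},
    "kyc-agent": {"identity:verify", "watchlist:read"},
}

def is_authorized(agent: str, action: str) -> bool:
    """True only if the action is explicitly allowed for this agent."""
    return action in ALLOWED_ACTIONS.get(agent, set())

# A compromised fraud agent instructed (via an injected support ticket)
# to initiate a transfer is denied, because that action was never granted:
print(is_authorized("fraud-detection-agent", "payments:transfer"))  # False
print(is_authorized("kyc-agent", "watchlist:read"))                 # True
```

The same principle applies at the platform layer, e.g. giving each agent its own narrowly scoped Kubernetes service account instead of a shared one spanning payment APIs, customer databases, and compliance systems.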

How to Make AI Security Foundational to Your Data Security Stack

Most organizations treat AI security as a finishing touch: a policy written after an incident, or a product category evaluated after the core stack is already in place. That sequencing is the problem. AI has fundamentally changed how sensitive data moves inside an organization, through prompts, agents, summarization tools, and third-party models that operate entirely outside traditional security perimeters.

Zenity Joins CoSAI: Why Agentic AI Standards Need Practitioners at the Table

The agentic AI security standards your enterprise will adopt in the next 18 months are being written right now, inside working groups most CISOs have never heard of. The Coalition for Secure AI (CoSAI), an OASIS Open Project with more than 45 sponsor organizations, including Google, Microsoft, NVIDIA, IBM, and Meta, is producing the frameworks, reference architectures, and secure design patterns that will define how autonomous agents operate inside enterprise environments.

Composable AI Agents and the SOC That Runs Itself

Picture a SOC that investigates its own alerts, hunts threats across customer tenants, isolates compromised endpoints, and writes its own detection rules. Envision the same SOC attacking itself every morning to find the gaps it missed, all before your analysts arrive for the day. This is not a roadmap item, but an operational reality on LimaCharlie. It’s what agentic AI security looks like on a platform built to support it.

Announcing Justification Coach: AI-Powered Guidance for Better Access Requests and Stronger Audits

Today, we’re introducing Justification Coach, a new AI-powered capability that helps users write better access request justifications in real time, so admins get the context they need for audits and investigations without having to chase people down after the fact.

New KnowBe4 Agent Risk Manager Addresses Pervasive AI Agent Risk

By Roger A. Grimes and Matthew Duren

AI agents can deliver incredible productivity gains, but their operational complexity makes effective threat modeling harder than ever, including for developers, administrators, and especially end users. At the same time, both developers and non-developers are increasingly vibe-coding, or using AI to generate functional software from natural language prompts.