
Cato's ASK AI Assistant: Turning Complex Network Operations Into Simple Conversations

Every superhero needs a sidekick. For your network and security teams, that sidekick is Cato's ASK AI Assistant, built to help you see, solve, and secure faster than ever. This isn't a basic Q&A tool: it combines customer-specific context with the ability to work alongside other tools to answer complex questions.

AI Compliance Training: EU AI Act & 90-Day Implementation Strategy

Executive Summary: A technical briefing on navigating the AI compliance landscape, focusing on the EU AI Act, US federal mandates, and state-level regulations. This session provides a structured 90-day roadmap for AI system governance, risk mitigation, and role-based training deployment.

Secure by Default: Why Snyk and Augment Code are the New Standard for AI Development

AI coding assistants have fundamentally changed development velocity. With tools like Augment Code, developers can now build and iterate at a pace that was unimaginable just a few years ago. However, this explosion in speed has created a new challenge: security teams, often still relying on manual review processes, are becoming the bottleneck.

The Legitimate Bot Traffic Security Teams Can No Longer Overlook

Security teams have spent years refining their ability to detect and stop malicious bots. That work remains critical. Automated traffic now accounts for more than half of all web traffic, according to Imperva's 2025 Bad Bot Report. What has changed is the scale and influence of legitimate bots and the blind spots they introduce into modern security programs.

Exabeam Introduces First Connected System for AI Agent Behavior Analytics and AI Security Posture Insight

Industry leadership expanded with connected capabilities that not only uncover AI agent activity but also centralize investigation and deliver measurable AI security posture insights.

Secrets in the Machine: Preventing Sensitive Data Leaks Through LLM APIs

In this webinar, we break down a simple but increasingly common problem: secrets leak wherever text flows, and modern LLM apps and agentic workflows are built to move text fast. We walk through concrete demos showing how API keys and passwords can surface through RAG-based assistants when secrets accidentally live in knowledge bases (tickets, docs, internal wikis). We also show why “just harden the system prompt” isn’t a reliable fix, and how output-only redaction can be bypassed (for example by simple formatting/encoding tricks). Most importantly, we explore real-world agent architectures.
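The output-redaction bypass described above can be sketched in a few lines. This is a hypothetical illustration, not code from the webinar: a naive output-only filter scrubs anything matching an API-key pattern, but the same secret slips through untouched the moment it is base64-encoded. All names and the key format here are invented for the example.

```python
import base64
import re

# Hypothetical secret that accidentally lives in a knowledge base document.
SECRET = "sk-live-51Habc123def456"

# A naive output-only redactor: scrub anything that looks like an API key.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9-]{8,}")

def redact(text: str) -> str:
    """Replace key-shaped substrings in model output with a placeholder."""
    return KEY_PATTERN.sub("[REDACTED]", text)

# The plaintext leak is caught by the pattern...
plain = f"The key is {SECRET}"
print(redact(plain))  # -> "The key is [REDACTED]"

# ...but a trivial encoding trick defeats the filter entirely: the
# base64 string no longer matches the key pattern, yet the secret is
# fully recoverable by anyone who sees the output.
encoded = base64.b64encode(SECRET.encode()).decode()
leaked = f"The key (base64) is {encoded}"
print(redact(leaked))  # secret passes through, just re-encoded
```

This is why the webinar argues that redaction applied only at the output boundary is not a reliable control: the fix has to happen upstream, before secrets ever enter the text the model can move.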

Agentic AI Security: How Does Microsoft Prevent Autonomous Agent Attacks?

As agentic AI systems move into the mainstream—powered by tool calling, MCP, and autonomous workflows—security is no longer a “nice to have.” It’s mission-critical. In this episode, we sit down with Raji, Principal Engineer & Manager for AI and Safety at Microsoft, to deep-dive into the rapidly evolving world of AI security, autonomous agents, and enterprise governance. Discover how Microsoft identifies and mitigates risks in agentic AI, distinguishes AI Security vs AI Safety, and enables organizations to deploy autonomous systems safely at scale—without slowing innovation.

Will AI agents 'get real' in 2026?

In my house, we consume a lot of AI research. We also watch a lot—probably too much—TV. Late in 2025, those worlds collided when the AI giant Anthropic was featured on “60 Minutes.” My husband tried to scroll past it, but I snatched the controller away, unable to resist a headline calling out the first widely acknowledged case of an “agentic AI cyberattack.” The framing itself was irresistible, a milestone moment in the rapid acceleration of AI.