Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI is Actively LEAKING Your Data (And You Don't Know It) #apisecurity #airisks #dataprotection #ai

AI agents don't think; they pattern-match. It is critical to understand that generative AI (ChatGPT, Claude, etc.) does NOT reason like humans: it recreates patterns based on model weights. That is the API security problem. When you give an AI agent access to an API, it will pass along whatever the API returns, so you need to be very careful about data in, data out.

Practical example:

```text
User: "Show me the account balance for user"
AI agent → calls GET /api/account/123
API → returns { "balance": 5000, "name": "John", "SSN": "123-45-6789" }
AI agent → outputs EVERYTHING to the user (including the SSN!)
```
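One common mitigation for the leak above is to allow-list the fields an agent may see before its context is populated. This is a minimal sketch, not code from the original post; the `ALLOWED_FIELDS` set and `filter_for_agent` helper are hypothetical names for illustration.

```python
# Sketch: strip sensitive fields from an API response before it ever
# reaches the AI agent's context. If the agent never sees the SSN,
# it cannot echo it back to the user.

ALLOWED_FIELDS = {"balance", "name"}  # hypothetical allow-list

def filter_for_agent(api_response: dict) -> dict:
    """Return only the fields the agent is permitted to see."""
    return {k: v for k, v in api_response.items() if k in ALLOWED_FIELDS}

api_response = {"balance": 5000, "name": "John", "SSN": "123-45-6789"}
safe = filter_for_agent(api_response)
print(safe)  # {'balance': 5000, 'name': 'John'} — the SSN never enters the prompt
```

An allow-list is deliberately chosen over a deny-list here: new sensitive fields added to the API later stay hidden by default instead of leaking until someone remembers to block them.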

MCP & AI Agent Security: Addressing the Growing Data Exfiltration Vector

The security landscape is shifting. For the past two years, security teams have focused primarily on what users type into chatbots by monitoring interactions with ChatGPT, Gemini, and Claude. But a new risk vector is emerging, one that operates largely outside traditional security controls: AI agents accessing corporate data autonomously through the Model Context Protocol (MCP).

From IDE to CLI: Securing Agentic Coding Assistants

Today we’re excited to announce that Zenity now protects the most powerful, enterprise-critical coding assistants - Cursor, Claude Code, and GitHub Copilot - from build time to runtime. As AI becomes a first-class developer tool, Zenity gives security teams the visibility and control they need to safely embrace coding assistants everywhere they’re used: in IDEs, CLIs, or the cloud.

Semantic Guardrails for AI/ML - Protegrity AI Developer Edition

In this installment of our AI Developer Edition Set-up series, Dan Johnson, a software engineer at Protegrity, introduces semantic guardrails. Learn how to protect your LLM and chatbot workflows from malicious prompts and insecure AI responses. As AI becomes central to enterprise operations, controlling the context of conversations is a major challenge. Semantic guardrails provide a safety layer that ensures your AI stays on topic and never leaks sensitive PII.
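To make the guardrail idea concrete, here is a toy sketch of the two checks described above: gating prompts by topic and scrubbing PII from responses. This is not Protegrity's implementation (which uses semantic analysis rather than keyword and regex matching); `ALLOWED_TOPICS`, `check_prompt`, and `redact_response` are hypothetical names for illustration.

```python
import re

# Toy "semantic guardrail": (1) reject prompts outside an allowed topic
# list, (2) redact SSN-shaped strings from model responses before they
# reach the user.

ALLOWED_TOPICS = ("billing", "account", "support")  # hypothetical scope
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def check_prompt(prompt: str) -> bool:
    """Allow the prompt only if it mentions an approved topic."""
    text = prompt.lower()
    return any(topic in text for topic in ALLOWED_TOPICS)

def redact_response(response: str) -> str:
    """Mask SSN-like strings in the model's output."""
    return SSN_PATTERN.sub("[REDACTED]", response)

print(check_prompt("What is my account balance?"))      # True: on topic
print(check_prompt("Write me a poem about pirates"))    # False: off topic
print(redact_response("John's SSN is 123-45-6789"))     # John's SSN is [REDACTED]
```

Production guardrails replace the keyword check with an embedding or classifier model so paraphrased off-topic prompts are also caught, but the input-check/output-scrub pipeline shape is the same.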

Agentic SecOps Workspace (ASW) office hours with LimaCharlie

Join us for a special Defender Fridays Office Hours session where the LimaCharlie team demonstrates the new Agentic SecOps Workspace (ASW) and explores what's possible when AI agents operate security infrastructure directly. At Defender Fridays, we delve into the dynamic world of information security, exploring its defensive side with seasoned professionals from across the industry. Our aim is simple yet ambitious: to foster a collaborative space where ideas flow freely, experiences are shared, and knowledge expands.

Sumo Logic's 2026 Security Operations Insights report: AI, siloed tools, and team alignment

Security threats have always been expanding and evolving, but recent data shows that modern applications are more complex for security and operations than ever before. And AI is only a piece of that puzzle. To stay on top of the changing market and hear directly from security leaders on what’s really top of mind, Sumo Logic surveyed over 500 security leaders with the help of UserEvidence. We asked about data pipelines, tool sprawl, confidence in SIEM, and, of course, AI.

Productivity at a Price: The Rising Cost of AI Convenience

Humans have always sought to streamline productivity through the most convenient solutions available, prioritizing speed to stay ahead and gain an edge over the competition. From the assembly line to the cloud, the goal remains the same: do more with less friction. Today, that convenience is synonymous with AI. While these tools have revolutionized how we work, the reality remains that rapid innovation always comes with a hidden cost.