Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

When Agentic AI Becomes an Attack Surface: What the Ask Gordon Incident Reveals

Pillar Security’s recent analysis of Docker’s Agentic AI assistant, Ask Gordon, offers an early glimpse into the security challenges organizations will face as AI systems begin operating inside the development stack. Their researchers discovered that a single poisoned line of Docker Hub metadata caused the agent to run privileged tool calls and quietly exfiltrate internal data.

AI and Data Security: Why Your Data Security Model Is Hurting Innovation

Why Your Data Security Model Is Outdated For over 20 years, we’ve focused on the Data Envelope—securing the perimeter, the cloud, and the network. But in a world of AI and rapid data sharing, protecting the envelope is not enough. In this video, James Rice (VP of Product Marketing at Protegrity) explains why traditional security has become the biggest bottleneck for modern innovation. Whether you are a security leader, a data architect, or a business innovator, understanding this paradigm shift is essential for the next decade of growth.

Protecting the Language of AI: Why API Security is No Longer Optional

Protecting the Language of AI: Why API Security is No Longer Optional As AI continues to reshape the digital landscape, APIs have become the "language" of innovation—but they've also become a massive target for attackers. In this clip from the A10 Networks webinar, "APIs are the Language of AI: Protecting Them is Critical," security experts Jamison Utter and Carlo Alpuerto discuss the complexities of modern API security.

Expert Roundup -How to Prepare for AI Data Processing Under GDPR?

As AI adoption accelerates across business functions, December’s expert roundup focuses on a question many organizations are now confronting in practice rather than theory: how should companies prepare for AI related data processing under GDPR. Unlike traditional automation, AI systems often rely on large, dynamic datasets, continuous learning, and opaque decision logic.

Why Knowing ATT&CK Isn't Enough: Mapping Real Control Coverage with Reach

Security teams know the attack techniques. What they don’t always know is how those techniques actually land in their environment. Reach maps your existing controls to MITRE ATT&CK (and D3FEND) and shows—visually—︎ which techniques are covered︎ which tools provide that coverage︎ and where real gaps exist Because “we have the tool” isn’t the same as “the technique is stopped.”

Garrett Hamilton & Todd Graham on How AI Agents Change the Way We Think About Security

Garrett Hamilton, CEO and Co-Founder of Reach Security, sits down with Todd Graham, Managing Partner at Microsoft’s venture fund M12, to discuss why modern cybersecurity programs struggle to reduce real risk — despite massive spending on tools. Recorded at Black Hat, the conversation explores how misconfigurations, unused controls, and operational blind spots create exposure long before attackers need advanced techniques.

Zenity 2025 Year in Review: Building AI Security for the Enterprise

For security teams, the adoption of agents showed up operationally before it showed up strategically - creating new expectations and requirements. Risk is no longer tied to prompts or the model alone. It shows up in what agents do once they are connected to critical systems - coming from permissions they inherit, tools they invoke, and data they move.