
The latest news and information on application security, including monitoring, testing, and open source.

How to Build AI Agents That Don't Break: Design, Risk & Defense Explained #aiagents #AISecurity

Agentic AI is evolving fast, but building agents that are *both* effective and secure is still a major gap for most teams. In this webinar, Mend.io’s Bar-El Tayouri and AI21 Labs’ Yehoshua “Shuki” Cohen share a practical, deeply technical walkthrough of what it really takes to design and defend AI agents. This is a tactical, no-fluff guide for anyone building AI agents in production: engineers, security leaders, and innovators shaping the next wave of AI systems.

Bits AI Security Analyst: Automate Cloud SIEM investigations

Datadog's Bits AI Security Analyst transforms the way security teams handle investigations by autonomously triaging Datadog Cloud SIEM signals. Built natively in Datadog, it conducts in-depth investigations of potential threats and delivers clear, actionable recommendations. With context-rich guidance for mitigation, security teams can stay ahead of evolving threats with greater efficiency and precision.

AppSec metrics fail: Mend.io's Risk Reduction Dashboard fixes it

Today, we’re introducing our Risk Reduction Dashboard. This is a new way for security leaders to quantify their AppSec program’s impact, prioritize high-value fixes, and prove ROI with data-backed insights that go beyond raw vulnerability counts.

When AI writes code, who fixes the flaws?

Veracode's Chief Security Evangelist Chris Wysopal on AI's coding secret: 45% of code has vulnerabilities. Chris Wysopal (aka @WeldPond), a veteran in application security and former member of the legendary L0pht hacker group, shares practical insights on shifting security left while embracing AI-powered development. Whether you're a CISO, AppSec leader, or a developer using GitHub Copilot, Claude, or other AI coding assistants, this discussion will change how you think about secure AI adoption.

Secure Your App with Mend.io's AI-Native AppSec Platform (featuring ByteGrad)

This video, originally created by Wesley from ByteGrad, walks through how to secure your applications using Mend.io’s AI-Native AppSec Platform, including SAST, SCA, and SBOM scanning. Wesley explores how Mend integrates with GitHub, automates code fixes, and helps developers stay ahead of vulnerabilities. Creator: the ByteGrad YouTube channel.

Optimize Your Application Security with Custom WAF Rules

Your website is unique, and so are the attacks against it. Generic Web Application Firewall (WAF) rules protect everyone a little but leave your site exposed to specialized attacks. Custom WAF rules, tailored to your specific application, industry, or codebase, are your line of defense against targeted threats.
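As a minimal illustration of what "custom" can mean, here is a rule in ModSecurity syntax that restricts a sensitive endpoint to an internal network; the rule id, path, and address range are hypothetical and would be tailored to your own application:

```
# Deny access to a hypothetical /admin path unless the client
# comes from an allowlisted internal range (chained condition).
SecRule REQUEST_URI "@beginsWith /admin" \
    "id:100001,phase:1,deny,status:403,log,msg:'Admin path from external address',chain"
    SecRule REMOTE_ADDR "!@ipMatch 10.0.0.0/8"
```

A generic ruleset cannot know that `/admin` should only ever be reached from inside your network; rules like this encode knowledge specific to your deployment.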

If AI Security were food...What's on the menu? #aisecurity #food

How do you explain AI security without the jargon? Easy: you make it food. In this video, we asked leading AI security professionals to describe AI security as a dish. Their answers turn complex ideas like prompt injection, data leaks, and model hardening into bite-sized insights you’ll actually remember. From layered lasagna to spicy tacos, each response brings a fresh perspective on what it means to build and protect secure AI systems.

AI as a Power Tool: How Windsurf and Devin Are Changing Secure Coding

We brought together Ian Moritz, Deployed Engineer at Cognition, and Mackenzie Jackson from Aikido Security for a live masterclass on AI-assisted coding. The goal wasn’t to hype new tools. It was to talk about how developers can stay in control while AI starts writing, testing, and securing code beside them.

Building Fast, Staying Secure: Supabase's Approach to Secure-by-Default Development

As part of Aikido’s Security Masterclass series, Mackenzie Jackson sat down with Bill Harmer (CISO, Supabase) and Etienne Stalmans (Security Engineer, Supabase) to explore how Supabase approaches security as part of design, not something to bolt on later. From Row Level Security (RLS) to the risks of AI-assisted coding, the discussion focused on what it takes to build fast and stay secure.
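For readers unfamiliar with Row Level Security, a sketch of what it looks like in Supabase (which builds on Postgres RLS) may help; the table and column names here are hypothetical, while `auth.uid()` is Supabase's helper for the authenticated user's id:

```sql
-- Enable RLS: once on, the table denies all access not granted by a policy.
alter table notes enable row level security;

-- Hypothetical policy: users may read and write only their own rows.
create policy "notes_owner" on notes
  for all
  using (user_id = auth.uid())
  with check (user_id = auth.uid());
```

This is the secure-by-default posture discussed in the session: enabling RLS denies everything up front, and each policy grants back only what the application actually needs.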

Direct vs. Indirect AI Risks: What Security Teams Need to Know #AIsecurity #AppSec #AInative

AI coding assistants don’t just speed up development — they introduce two kinds of risks you can’t afford to ignore. Direct risks: vulnerabilities added straight into generated code. Indirect risks: exposure through how AI tools shape workflows, dependencies, and external connections. Both can create blind spots — and both demand visibility. Watch to learn how recognizing these layers helps secure your AI-driven workflows.