We Asked AI Security Experts to Explain Their Work Using Emojis #AISecurity #AI #AppSec

Can you explain AI Security using only emojis? We challenged AI Security professionals to do just that — no words, just symbols. Their creative combos reveal how experts really think about risks, models, and protection in today’s AI-driven world. Each emoji tells a story about securing the systems behind the world’s most powerful models. Subscribe for more creative takes on AppSec, AI Security, and secure development from the Mend.io team.

Hackers hijack Google Smart Home #aisecurity #mcpserver

Building AI agents that can think, act, and adapt securely isn't easy. From prompt design to deployment, every stage brings new challenges and new risks. In this session, Bar-El Tayouri, Head of Mend AI at Mend.io, and Yehoshua (Shuki) Cohen, VP of Data and AI Evangelist at AI21 Labs, share practical strategies for designing and defending agentic systems that actually deliver. Originally recorded: October 29, 2024.

How to Build AI Agents That Don't Break: Design, Risk & Defense Explained #aiagents #AISecurity

Agentic AI is evolving fast — but building agents that are *both* effective and secure is still a major gap for most teams. In this webinar, Mend.io’s Bar-El Tayouri and AI21 Labs’ Yehoshua “Shuki” Cohen share a practical, deeply technical walkthrough of what it really takes to design and defend AI agents. This is a tactical, no-fluff guide for anyone building AI agents in production: engineers, security leaders, and innovators shaping the next wave of AI systems.

Secure Your App with Mend.io's AI-Native AppSec Platform (featuring ByteGrad)

This video, originally created by Wesley from ByteGrad, walks through how to secure your applications using Mend.io’s AI-Native AppSec Platform — including SAST, SCA, and SBOM scanning. Wesley explores how Mend integrates with GitHub, automates code fixes, and helps developers stay ahead of vulnerabilities. Creator: ByteGrad YouTube Channel.

If AI Security were food...What's on the menu? #aisecurity #food

How do you explain AI Security without the jargon? Easy: you make it food. In this video, we asked leading AI Security professionals to describe AI Security as a dish. Their answers turn complex ideas like prompt injection, data leaks, and model hardening into bite-sized insights you’ll actually remember. From layered lasagna to spicy tacos, each response brings a fresh perspective on what it means to build and protect secure AI systems.

Direct vs. Indirect AI Risks: What Security Teams Need to Know #AIsecurity #AppSec #AInative

AI coding assistants don’t just speed up development — they introduce two kinds of risks you can’t afford to ignore. Direct risks: vulnerabilities added straight into generated code. Indirect risks: exposure through how AI tools shape workflows, dependencies, and external connections. Both can create blind spots — and both demand visibility. Watch to learn how recognizing these layers helps secure your AI-driven workflows.

Why Security Can Be Stricter: A Zero Trust Approach to AppSec with AI | Mend.io

Is AI making application security easier or harder? We spoke with Amit Chita, Field CTO at Mend.io, who argues that the rise of AI agents in the Software Development Lifecycle (SDLC) presents a unique opportunity for security teams to be stricter than ever before. As developers increasingly use AI agents and integrate LLMs into applications, the attack surface is evolving in ways traditional security can't handle. The only way forward is a Zero Trust approach to your own AI models. Join Ashish Rajan and Amit Chita as they discuss the new threats introduced by AI and how to build a resilient security program for this new era.

Securing AI Applications in the Cloud: Shadow AI, RAG & Real Risks | Mend.io

What does it take to secure AI-based applications in the cloud? In this episode, host Ashish Rajan sits down with Bar-el Tayouri, Head of Mend AI at Mend.io, to dive deep into the evolving world of AI security. From uncovering the hidden dangers of shadow AI to understanding the layers of an AI Bill of Materials (AIBOM), Bar-el breaks down the complexities of securing AI-driven systems. Learn about the risks of malicious models, the importance of red teaming, and how to balance innovation with security in a dynamic AI landscape. Topics include what an AIBOM is and why it matters, and the stages of AI adoption.

The AppSec Bottleneck: Why Fixing Can't Wait

Vulnerability detection isn’t the main problem — remediation is. In today’s fast-paced development world, security teams are overwhelmed with alerts, while developers struggle to keep up with security tasks that feel disconnected from their workflow. The real risk? Vulnerabilities that sit unaddressed in a growing backlog. Join Daniel Wyrzykowski, Product Manager at Mend.io, and Saoirse Hinksmon, Senior Product Marketing Manager at Mend.io, as they explore why fixing can't wait.

Proven Best Practices for Safer Code that Work: AppSec for the Win | Webinar Mend.io

In this session, Chris Lindsey discusses proven best practices for building a robust AppSec program, offering actionable insights for both developers and security teams. Chris, who has over 35 years of experience in software development and 15+ years in security, shares the strategies that helped him run a successful security program.