Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Sensitive Data Leaks from AI Model Use | The 443 Podcast

How are you using ChatGPT at work? On this week's episode of The 443 Podcast, Corey Nachreiner and Marc Laliberte dig into a report on sensitive data leakage caused by AI model use. They also cover a recent report highlighting a drop in ransomware payments in 2024, as well as a recent attack targeting ASP.NET web servers.

Episode 15: Are You Making This Mistake With Your Endpoint Security? ft. Santhosh Narasimhamoorthy

Welcome to another electrifying episode of Server Room! This week, we’re tearing up the rulebook on cybersecurity with Santhosh Narasimhamoorthy, Manager and Technical Evangelist for Endpoint Management Security. Buckle up as we explore how AI is not just a buzzword; it’s rewriting the playbook for protecting every device, user, and byte in your network. Perfect for: IT managers, CISOs, and anyone who’s ever side-eyed a “Nigerian prince” email.

CISA Reports a Massive Spike in API Security Risks #CISAReport #ProtectAPIs #APIExploit

In 2024, API-related vulnerabilities on CISA’s Known Exploited Vulnerabilities (KEV) catalog jumped from 20% to 50% of entries, making APIs a prime target for attackers. This sharp increase highlights the critical need for a dedicated API security strategy in 2025. Don’t wait: invest in API security today.

Speed meets security: Pascal Wehrlein races Cato's Etay Maor

Get ready for a high-speed showdown as ABB FIA Formula E Drivers' Champion Pascal Wehrlein teams up with Etay Maor, Chief Security Strategist at Cato Networks, for a thrilling race on Formula E simulators. Can Etay keep up with Pascal on the track? And can they make the right calls in the world of IT security? Hit play and see who comes out on top!

EP 1 - AI Gone Rogue: FuzzyAI and LLM Threats

In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs' Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats. Tune in for an insightful discussion on the future of AI security and the importance of safeguarding LLMs.