
EP 1 - AI Gone Rogue: FuzzyAI and LLM Threats

In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs' Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats. Tune in for an insightful discussion on the future of AI security and the importance of safeguarding LLMs.

How AI-powered Secure Email Gateways Fight Back vs. AI-armed Bad Actors

As bad actors use artificial intelligence to step up their phishing game, an effective defense requires a secure email gateway that likewise employs AI to detect even the most cleverly crafted phishing emails and the fraudulent websites to which they attempt to direct recipients. The concern is not limited to generative AI (GenAI) tools like ChatGPT, which have some (rather limited) guardrails to prevent nefarious use.

Protecting Sensitive Data in Snowflake through Protecto's External Tokenization

With the rapid expansion of cloud data storage and analytics, enterprises are increasingly leveraging platforms like Snowflake for their scalability and performance. However, this also introduces new challenges in data security, particularly for industries dealing with sensitive data such as finance, healthcare, and e-commerce.

7 Questions Tech Buyers Should Ask About How Their Vendors Use AI

As AI becomes an increasingly critical component in the digital supply chain, tech buyers are struggling to appropriately measure and manage their AI risk. Keeping tabs on emerging risk from the AI technology they use is hard enough. But often the most crucial AI business functions that organizations depend upon aren’t directly under their control or care, but instead are governed by the tech vendors that embed them into their underlying software.

What You Need to Know about the DeepSeek Data Breach

DeepSeek, founded by Liang Wenfeng, is an AI development firm located in Hangzhou, China. The company focuses on developing open source Large Language Models (LLMs) and specializes in data analytics and machine learning. DeepSeek gained global recognition in January 2025 with the release of its R1 reasoning model, which rivals OpenAI's o1 model in performance at a substantially lower cost.

Guarding open-source AI: Key takeaways from DeepSeek's security breach

In January 2025, within just a week of its global release, DeepSeek faced a wave of sophisticated cyberattacks. According to security researchers, the attacks involved well-organized jailbreaking attempts and DDoS assaults, revealing just how quickly open platforms can be targeted. Organizations building open-source AI models and platforms are now rethinking their security strategies as they witness the unfolding consequences of DeepSeek's vulnerabilities.

A Phased Approach: Thoughts on EU AI Act Readiness

The European Union's (EU) AI Act (the Act) is landmark artificial intelligence (AI) regulation designed to promote trustworthy AI by focusing on impacts on people through required mitigation of potential risks to health, safety and fundamental rights. The Act introduces a comprehensive and often complex framework for the development, deployment and use of AI systems, impacting a wide range of businesses across the globe.