
DeepSight by Protecto: AI-Native Sensitive Data Detection for Developers

Drawn by a wide range of use cases that automate manual work, enterprises are rushing to integrate GenAI into their IT stacks, only to hit a privacy wall. A concerning number of these use cases involve sensitive data such as PII and PHI, putting data privacy and compliance at risk. Enterprises are becoming increasingly aware of the multifaceted risks of unfiltered AI usage and are turning to a common solution on the market: AI privacy tools.

AI Trust in Action: How Snyk Agent Redefines Secure Development

One word defines success or failure in the race to adopt AI in security workflows: trust. While the industry moves fast toward automation and autonomy, adoption often stalls when developers and the teams supporting them can’t trust what the AI delivers. It’s not enough for a tool to explain what it did. Developers want to know: Did it actually fix the problem? Will this change break something else? Can I rely on it again next time? Nowhere is that skepticism more justified than in security.

AI and Human Expertise: A Key Alliance in Cybersecurity

Many cybersecurity tasks, such as log monitoring, event correlation, and alert classification, are repetitive and operational, and they can be exhausting for the professionals who perform them. Artificial intelligence (AI) has become a key enabler for automating processes, reducing false positives, and optimizing incident prioritization. However, this does not mean cybersecurity can do without essential human abilities such as creativity and critical thinking.

Can Google Jules Build a SECURE Note Taking App?

In this video, I test out Jules, Google's brand-new AI developer assistant, to see if it can build a secure note-taking app from scratch. With a focus on privacy, authentication, and data protection, I challenge Jules to create something that is both functional and secure. This is part of an ongoing series where I test different AI models and tools to see how well they handle real-world development tasks. Check out our playlist where we're putting these various models to the test!

How to Secure MCP Servers | A Walkthrough

While the hype continues to build around MCP, or the Model Context Protocol, a growing number of engineers and organizations are becoming concerned about the security risks that MCP invites. In this video, I'll demo how Teleport provides secure access to your MCP servers and how the new Teleport Secure MCP integration gives you a robust solution for protecting your LLM endpoints and data sources.

The AI SOC Analyst That Offloads 90%+ of Tier-1 Cases - Meet Socrates

Security Operations Centers (SOCs) continue to struggle in 2025. The perfect storm of growing alert volume, a persistent talent shortage, and the well-documented limitations of legacy SOAR solutions has brought many SOC teams to a breaking point. At the same time, bad actors continue to innovate, and cybercriminals have become more sophisticated in their tactics and techniques, including using AI to launch attacks at scale.

Do we need an AI compliance framework?

Compliance isn’t just a checkbox. It’s the frontline of cybersecurity defense. In this episode of the Cybersecurity Defenders podcast, Joshua Hoffman, Chief Revenue Officer at ControlCase, shares critical insights on the evolving role of compliance in cybersecurity. From frameworks like CMMC and SOC 2 to the rising pressure from new SEC regulations, we examine how organizations can move beyond surface-level audits and adopt a scalable security posture.

Poison everywhere: No output from your MCP server is safe

The Model Context Protocol (MCP) is an open standard and open-source project from Anthropic that makes it quick and easy for developers to add real-world functionality, such as sending emails or querying APIs, directly to large language models (LLMs). Instead of just generating text, LLMs can now interact with tools and services in a seamless, developer-friendly way.
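To illustrate the shape of that interaction: MCP messages are JSON-RPC 2.0, and a client invokes a tool exposed by an MCP server via the `tools/call` method. A minimal sketch in Python (the `send_email` tool name and its arguments here are hypothetical, standing in for whatever tools a real server advertises):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP messages follow JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool invocation a client might send to an MCP server.
request = make_tool_call(1, "send_email",
                         {"to": "dev@example.com", "subject": "Build passed"})
print(json.dumps(request, indent=2))
```

The server executes the named tool and replies with a JSON-RPC result; it is precisely this tool output flowing back into the model's context that the article's "poisoning" concern targets.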

Verifying Bots and Agents with Cryptography in the Age of AI

In this episode, host João Tomé is joined in Cloudflare's Lisbon office by Senior Research Engineer Thibault Meunier to explore a new proposal that could reshape how bots interact with the web in the age of AI. We discuss Cloudflare's proposal to use cryptographic signatures for bots, enabling websites to verify their identity. Why is this important? As AI systems rely increasingly on online content, this standard could help build a better relationship between content creators and AI platforms.