Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Asymmetric Data: The New Challenge for API Security

In this A10 Networks video, "APIs are the Language of AI: Protecting Them is Critical," security experts Jamison Utter and Carlo Alpuerto discuss the unique challenges of securing AI-driven data exchanges. Unlike traditional API interactions, where a request for a video clearly results in a video, AI interactions are defined by a "phenomenal" level of asymmetry. A tiny text request can trigger a massive, unpredictable response, making traditional security prediction models nearly obsolete.

2025 Predictions for the Future of Cybersecurity with all our guests [279]

On this episode of The Cybersecurity Defenders Podcast, we revisit the 2025 predictions shared by our guests throughout the year. From attackers and defenders to AI and the broader security industry, these forecasts capture what experts expected to come next. Rather than judging accuracy, which is still too early to assess, we're examining the predictions themselves: where they aligned, how they clustered, and what those patterns reveal about the industry's mindset as the year came to a close. Free from hindsight bias, this episode explores what remained uncertain as we entered 2026.

2025 Ends With a Bang - The 443 - Episode 352

This week on the podcast, we cover a wave of attacks against network edge equipment and internet-exposed systems, including an update on the recently patched Firebox 0-day. After that, we cover two stories about browser extensions siphoning off data and making unwanted modifications to victims' web browsing activity.

From Code to Agents: Proactively Securing AI-Native Apps with Cursor and Snyk

The rapid adoption of AI agents for development is creating a critical security gap. We are moving from predictable logic, deterministic code paths, and human-driven workflows to non-deterministic agents that reason, plan, and act autonomously using large language models across the broader software development lifecycle. As enterprises adopt these autonomous AI agents, the core challenge isn’t just the new risks and attack vectors; it’s a loss of runtime control.

How CrowdStrike Trains GenAI Models at Scale Using Distributed Computing

Large language models (LLMs) have revolutionized artificial intelligence and are rapidly transforming the cybersecurity landscape. As these powerful models become commonly used among both attackers and defenders, developing specialized cybersecurity LLMs has become a strategic imperative. The CrowdStrike 2025 Global Threat Report highlights a concerning trend: Threat actors are increasingly enhancing social engineering and computer network operations campaigns with LLM capabilities.

The Breach You Didn't Expect: Your AppSec Stack

Imagine this. Your phone rings on January 2nd, and it's your DevSecOps and AppSec groups. A major security vulnerability is exposing your business, and your teams are scrambling to find and fix it to protect your data. You probably bear scars going as far back as Log4j, with more recent incidents like npm supply chain attacks and Glassworm still ringing in your ears. With CVEs expected to grow by tens of thousands a year, the situation will only worsen.