Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How Weak AI Governance Is Creating A Security Disaster #cybersecurity #aisecurity

This episode explores why CTEM matters in a world of vibe coding, AI agents and rapidly expanding attack surfaces. It covers prompt injection, hidden threats, deepfakes, weak governance and the growing fear that businesses are deploying AI far faster than security teams can understand or control it.

What is the NIST AI Risk Management Framework?

The NIST AI Risk Management Framework is a guide that helps organizations spot and reduce risks in AI systems. This framework was released in January 2023 by the U.S. National Institute of Standards and Technology. The framework is built around four key steps, namely: Govern, Map, Measure, and Manage, and is meant to help teams responsibly use AI. It doesn’t matter which industry you work in or which AI you use; this framework works everywhere.

Introducing the Datadog Code Security MCP

AI-assisted development helps teams write code faster, but that speed comes with added security risk. As agents generate more code, they can introduce vulnerabilities, insecure dependencies, or exposed secrets, often before a human reviewer ever sees the change. Security teams are left reviewing more code with the same resources, which makes it harder to catch issues early.

AI-Powered Human Risk Management Shifts the Focus to Adaptive, Behavior-Based Training

Human risk management (HRM) focuses on one of the most persistent cybersecurity vulnerabilities: humans. Social engineering attacks that trick users into taking risky actions are a factor in 98% of cyberattacks not because they are technically complex, but because they manipulate employee behavior. Unlike traditional, one-size-fits-all security awareness training, human risk management focuses on changing employee behavior through monitoring and targeted reinforcement.

Secure What Matters: Scaling Effortless Container Security for the AI Era

In November, we shared our vision for the Future of Snyk Container, outlining a fundamental shift in how teams secure the modern container lifecycle. We promised a future where security doesn’t just “scan” but scales effortlessly with the speed of the AI-driven, agentic world. Today, we are thrilled to announce that we are moving from vision to reality.

The Agentic Identity Crisis: Why Your AI Agents Are Your Biggest Identity Blind Spot in 2026

An intern gets admin access to production for a temporary task, but nobody remembers to revoke it. Imagine that intern works at machine speed, never sleeps, and can chain dozens of actions before you’ve read the Slack ping—and has no instinct for when they’re about to do something irreversible.

IREX Upgrades FireTrack AI for Faster and More Accurate Fire Detection

WASHINGTON, DC - IREX has announced a major update to its FireTrack fire and smoke detection module, introducing significant improvements in speed, accuracy, and operational flexibility across a wide range of environments. According to an article on The Next Web, the updated solution is designed to work seamlessly with existing camera infrastructure, enabling organizations to enhance fire detection capabilities without deploying additional hardware.

Anthropic Claude Mythos Preview: The More Capable AI Becomes, the More Security It Needs

The Claude Mythos Preview matters for every enterprise. Frontier models raise the ceiling for both offense and defense. Our job is to make sure defenders hold the advantage. That is what we have always done. That is what we do today. Today, CrowdStrike is a founding member of Project Glasswing. Anthropic builds the model. CrowdStrike secures AI where it executes. That’s the division of labor the industry needs.