Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

From Dugouts to Data Lakes: Applying Moneyball to the AI SOC

In AI-powered security, advantage comes not from automation alone, but from clear insight into how decisions are made. At Arctic Wolf, home to one of the world’s largest commercial security operations centers (SOC), we process over 10 trillion security events weekly. Rather than chasing automation for its own sake, we build AI that scales human expertise – preserving judgment where it matters most. But what is the optimal combination of humans and machines for security operations?

Sensitive Data Is the Common Thread Across Most OWASP Top 10 Issues. Here's Why

The OWASP Top 10 is usually presented as a list of technical failures. Broken access control. Injection. Insecure design. Misconfiguration. Each category points to something that went wrong in the application. What it doesn’t say explicitly is what was actually at risk when it went wrong. In most real incidents, the answer is not “the application.” It’s the data inside it. Sensitive data is the reason attackers care about OWASP failures in the first place. Credentials.

How X-Design's AI Agent Is Replacing Drag-and-Drop Branding Tools

The timeline for launching a brand has collapsed. Two years ago, building an identity was a month-long slog of negotiations and revisions. Today, it happens in an afternoon. The old method of stitching together disjointed tools is dead; the market simply moves too fast for that.

The MCP Security Blueprint: What a Hardened MCP Server Looks Like

Over the last year, Model Context Protocol (MCP) servers have transitioned from "cool developer experiments" into critical production infrastructure. Developers love them because they allow AI agents to open tickets, query databases, and update records with almost zero integration backlog. But there is a fundamental truth we must acknowledge before moving forward: The AI revolution is actually an API revolution.

Advancing AI Security: Zenity's Contributions to MITRE ATLAS' First 2026 Update

MITRE ATLAS has become a critical resource for cybersecurity leaders navigating the rapidly evolving world of AI-enabled systems. Traditional threat models are built for human-initiated workflows, APIs, and infrastructure, so they are no longer sufficient to describe modern AI attacks.

A New Era for AI Coding? GPT 5.2 vs. Security Vulnerabilities

Can OpenAI’s GPT 5.2 actually build a production-ready, secure application from a single prompt? In this video, we put the latest model to the test by asking it to build a full-stack Node.js note-taking app. We evaluate its dependency choices, dive into a surprising fix for a long-standing CSRF vulnerability, and run a full security audit using Snyk. Is this the new gold standard for AI coding models?

How AI is Re-Building the Cybersecurity Landscape with Max Lamothe-Brassard from LimaCharlie [280]

On this episode of The Cybersecurity Defenders Podcast, we're starting the new season off with the hottest topic of 2025: AI. Join us for an in-depth discussion on January 20, 2026, and hear about LimaCharlie's fundamentally different approach to AI-powered security operations. Sitting down with Maxime Lamothe-Brassard, Founder and CEO of LimaCharlie, we discuss the ways AI has rapidly changed how companies build security tools.

PGA of America Trusts LevelBlue as Official Cybersecurity Advisor

LevelBlue and the PGA of America share a commitment to excellence under pressure. As the Official Cybersecurity Advisor of the PGA of America, LevelBlue brings championship standards of protection, continuity, and trust to the organizations that keep the game - and business - moving forward. From fairways to firewalls, LevelBlue safeguards mission-critical operations, member data, and high-profile events with always-on defense, accelerated response, and expert-led security operations powered by AI-driven threat intelligence.

The Silent Killer in Security Stacks: Configuration Drift | Todd Graham x Garrett Hamilton

The silent killer in modern security programs? Garrett Hamilton and Todd Graham discuss how the real killer is settings quietly slipping out of alignment over time — even in environments packed with “best-in-class” tools and clean audit results. Misconfigurations don’t announce themselves. They accumulate. They age. They slowly pull your security posture away from original intent. What teams think is “turned on” often isn’t enforced consistently — or at all. Without continuous validation, drift becomes invisible risk.