Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI Can Scan Your Code. It Can't Secure Your Organization.

When Anthropic announced Claude Code Security on February 20th—a tool that scans codebases for vulnerabilities and suggests patches for human review—the reaction from markets was swift and brutal. Major cybersecurity names watched their stock prices fall by double digits within days. The implied thesis behind the selling: AI can now do what these companies do, so why pay for them? It's a compelling fear and an inaccurate conclusion at the same time. The DLP space is a clear example of why.

Rare Not Random: Using Token Efficiency for Secrets Scanning

In Regex is (almost) All You Need, we learned that using a combination of regular expression patterns, entropy, and rule-based filters are an effective way to detect candidate secrets. Regex is used for casting a wide net to identify candidates. Entropy is used as a primary filter on the captured candidates and additional filters like presence of commonly used english words, or filtering on known “safe” files like go.sum are applied last.

The Next Market Disruption: Agentic SOC

Predicting a market disruption is difficult, but the vast rewards of being correct make it worthwhile. Unfortunately, prediction becomes tougher when marketing teams start labelling everything as a "market disruptor". Much like the stock market, if something is being sold to you as “the investment of a lifetime”, it almost certainly is not. Yet market disruptors do exist, and the organizations that identify them enjoy generational success.

The Machine War: Why MSPs Must Move from AI-Assistance to Autonomy

In 2026, the digital landscape has shifted from a world of "AI assistants" to one of autonomous operators. For managed service providers (MSPs), this evolution marks the end of the traditional "land and expand" human services playbook and the beginning of a high-speed era of machine-on-machine warfare.

What is a Prompt Injection Attack?

AI tools are quickly becoming part of everyday business workflows. From chatbots to automation tools, large language models now handle sensitive tasks and data. But with this growth comes new security risks. One of the biggest emerging threats is the prompt injection attack, in which attackers manipulate inputs to cause AI systems to ignore their original instructions. Unlike traditional cyberattacks, this method exploits weaknesses through language rather than code.

AI Compliance: 5 Key Frameworks, Challenges, and Best Practices

AI compliance ensures AI systems follow laws, ethics, and standards by managing risks like bias, privacy violations, and lack of transparency through robust governance, documentation, and continuous monitoring, using frameworks like the EU AI Act and NIST AI Risk Management Framework (RMF) to build trust and avoid penalties in developing, deploying, and operating AI.

AI Moves Fast, Privacy Has to Move Faster with Ojas Rege

In this episode, Caleb Tolin welcomes Ojas Rege of OneTrust for a practical, wide-ranging conversation on how data privacy and governance must evolve alongside enterprise AI adoption. Ojas explains why AI fundamentally changes the privacy conversation: the same systems that enable organizations to move faster can also cause harm faster when guardrails aren’t in place. From agentic AI systems that dynamically repurpose data to general-purpose models that blur traditional notions of “intended use,” the challenge isn’t just compliance—it’s trust.

AI Agent Sandboxing & Progressive Enforcement: The Complete Guide

Your CISO just got word that engineering is deploying AI agents into production Kubernetes clusters next quarter. Not chatbots—autonomous agents that generate and execute code, call external APIs through MCP tool runtimes, access internal databases, and make decisions without human review. The question lands on your security team: “How are we securing these?”