
Continuous Threat Exposure Management (CTEM): The Complete Guide to Proactive Cybersecurity

The cybersecurity landscape has fundamentally changed. Organizations today manage sprawling digital environments - cloud workloads, remote endpoints, SaaS applications, third-party APIs, and hybrid infrastructure - all of which expand the attack surface at a pace that traditional security programs simply cannot match.

6 Lessons Security Leaders Must Learn About AI and APIs

Most organizations treating AI security as a model problem are defending the wrong layer. Security teams filter prompts, patch jailbreaks, and tune model behavior, which is all necessary work, while the actual attack surface sits largely unexamined underneath. That surface is the API layer: the endpoints AI systems use to retrieve data, call tools, and take action on behalf of users. This isn't a theoretical gap.
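One concrete implication of defending the API layer rather than the model: authorization for a tool call should be enforced by the endpoint against the end user's permissions, not inferred from what the model asks for. Below is a minimal sketch of that idea; the function and field names are illustrative assumptions, not any particular framework's API.

```python
def handle_tool_call(user_permissions: set, tool: str, args: dict) -> dict:
    """Enforce the end user's authorization on every tool call.

    The agent may request any tool; the API layer, not the model,
    decides whether this user is allowed to invoke it. (Hypothetical
    shape -- real gateways would also validate args, log, rate-limit.)
    """
    if tool not in user_permissions:
        return {"error": "user not authorized for tool '%s'" % tool}
    return {"ok": True, "tool": tool, "args": args}
```

The point of the sketch is that a prompt filter never sees this decision: even a fully jailbroken model cannot exceed what the API layer permits.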

Guide: DORA Compliance Evidence for Agentic AI

→ What DORA assessors actually evaluate
→ How DORA controls map to specific evidence requirements
→ Common evidence gaps that can interfere with audits
→ The evidence challenges of agentic AI
→ The full blueprint for DORA compliance now and in the future

The Digital Operational Resilience Act (DORA), otherwise known as Regulation (EU) 2022/2554, represents a fundamental shift in how financial institutions must demonstrate compliance.

What SOC Analysts Actually Want From AI

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster.

Rick Bosworth is a cybersecurity marketing executive with nearly two decades of experience driving GTM strategy across technology startups. His uniquely technical perspective bridges the gap between complex solutions and practical customer outcomes, with deep expertise spanning EDR, CNAPP, CWPP, AppSec, CTEM, and agentic SecOps.

Privacy in Enterprise AI: Why It's the Foundation, Not a Feature

Last week, OpenAI released Privacy Filter, an open-weight model for detecting and redacting PII in text. It is a thoughtful release: Apache 2.0 licensed, able to run locally, designed for high-throughput workflows, and built to go beyond regex-based detection. This is good news for everyone building enterprise AI. Privacy at the model layer is getting real attention. What we liked most was how clearly OpenAI described the role of the model.
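To see what "beyond regex-based detection" improves on, here is a minimal sketch of the regex baseline itself. The patterns are deliberately simple illustrations (email, US SSN, phone) and are not drawn from the release; real PII spans many formats these patterns would miss, which is the motivation for a model-based approach.

```python
import re

# Illustrative, not production-grade, patterns -- the brittle baseline
# that model-based PII detection aims to go beyond.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each regex match with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub("[%s]" % label, text)
    return text
```

A regex redactor is fast and local, but it cannot recognize a name, an address written in prose, or PII in an unanticipated format; a learned model can.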

How Do AI Agents Create Data Exfiltration Risk?

AI agents create data exfiltration risk by combining three capabilities that are dangerous together: access to private data, exposure to untrusted content, and the ability to communicate externally. When all three exist in one agent, an attacker can hide instructions inside an email, document, or webpage the agent processes and trick it into sending sensitive data out. No software vulnerability is required. The attacker doesn't need to break in. They just need to talk to your agent.
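The three-capability condition above can be stated as a simple policy check. This is a sketch under assumed names (the dataclass and its fields are illustrative, not from any agent framework): an agent is exposed to this exfiltration pattern only when all three flags are true, so removing any one capability closes the path.

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    reads_private_data: bool          # e.g. mailbox, CRM, file shares
    processes_untrusted_input: bool   # e.g. inbound email, web pages
    can_communicate_externally: bool  # e.g. outbound HTTP, sending mail

def exfiltration_risk(caps: AgentCapabilities) -> bool:
    """True only when all three capabilities coexist in one agent;
    drop any one and the injected-instruction exfiltration path breaks."""
    return (caps.reads_private_data
            and caps.processes_untrusted_input
            and caps.can_communicate_externally)
```

In practice the check would run at deployment review time: an agent that reads private data and ingests untrusted content can still be safe if its external communication is removed or strictly allow-listed.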

Agentic SecOps: Build a security AI agent that automatically investigates detections

A credential access event fired. An AI agent investigated it, correlated it against running processes, assessed the risk, and closed the ticket. No analyst touched it. The entire loop ran in minutes. This is what security operations look like when AI can actually operate in the environment rather than advise from outside it. Security operations have always required a special kind of person.
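The loop described above (fetch the detection, correlate against running processes, score the risk, close or escalate) can be sketched as follows. Every helper passed in (fetch_detection, correlate_processes, score_risk, close_ticket, escalate) is a hypothetical stand-in for whatever SIEM/EDR APIs an agent would actually call, and the 0.3 threshold is an arbitrary assumption for illustration.

```python
def triage(detection_id: str,
           fetch_detection, correlate_processes,
           score_risk, close_ticket, escalate) -> str:
    """One pass of the investigate-and-close loop described above."""
    detection = fetch_detection(detection_id)    # pull the raw alert
    context = correlate_processes(detection)     # running-process context
    risk = score_risk(detection, context)        # assumed range 0.0 .. 1.0
    if risk < 0.3:                               # threshold is an assumption
        close_ticket(detection_id, reason="benign: " + context["summary"])
        return "closed"
    escalate(detection_id, risk)                 # keep a human in the loop
    return "escalated"
```

The design point is the escalation branch: the agent closes only what it can confidently explain, and anything above the risk threshold still lands in front of an analyst.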