
AI in Cybersecurity Certification

Positive feedback can lead to unintended consequences. A dog learned that saving kids from the River Seine earned food and praise. So he started dragging them in to “save” them. AI models optimize for feedback in a similar way. Cato’s AI in Cybersecurity course shows how to manage the risks. It’s free and earns you CPE credits.

You Can Create a Convincing Deepfake in Under an Hour

A non-technical user can produce a credible deepfake in under an hour using off-the-shelf tools and footage from ordinary video meetings. Common habits such as recording calls for later review give attackers enough material to train models, so every routine sales or onboarding call becomes potential training data. For more information, or if you have questions you would like discussed, email podcast@razorthorn.com.

AppSec in the age of AI: An RSA Conference preview

Application security is at a breaking point as development teams move faster than ever, aided by AI-powered coding assistants. While these tools boost productivity, they also introduce subtle errors and insecure patterns at scale. The result: a growing backlog of vulnerabilities that outpaces traditional AppSec models. This webcast examines the risks and opportunities of AI in AppSec and who will be addressing them at RSA Conference. We’ll explore how defenders can use AI to level the playing field with automated scanning, intelligent prioritization, and secure-by-design practices.

How Artificial Intelligence (AI) Can Increase Threat Detection and Response

Security leaders are being squeezed from both sides. On one side, threat actors are scaling operations with AI automation, using it to craft more convincing social engineering attacks, accelerate reconnaissance, and improve lateral movement. On the other side, defenders are drowning in telemetry, suffering under staffing constraints, and facing the harsh reality that threat actors don’t keep business hours.

How Governments Use AI Safely | AI Governance Explained

How are governments using AI while protecting citizens’ data and privacy? In this episode of AI on the Edge, Ciara Maerowitz, Chief Privacy Officer for the City of Phoenix, explains how cities implement AI governance, manage bias, ensure transparency, and assess AI risks. Learn how responsible AI frameworks, policies, and risk management help governments safely adopt artificial intelligence.

Why Soft Guardrails Get Us Hacked: The Case for Hard Boundaries in Agentic AI

One recurring theme in my research and writing on agentic AI security has been the distinction between soft guardrails and hard boundaries. As someone who serves on the Distinguished Review Board for the OWASP Agentic Top 10, and who spends every day thinking about how to secure agents across enterprise environments at Zenity, this distinction is not academic. It is potentially the single most important conceptual framework practitioners need to internalize right now.
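To make the distinction concrete, here is a minimal hypothetical sketch (not Zenity's or OWASP's implementation; the domain allowlist and email tool are invented for illustration): a soft guardrail asks the model to behave via prompt text, while a hard boundary enforces the policy in code the model cannot override.

```python
# Soft guardrail: a natural-language instruction the model may ignore
# under prompt injection or adversarial input.
SYSTEM_PROMPT = "You must never send emails to external addresses."

# Hard boundary: deny-by-default enforcement in the tool layer itself.
ALLOWED_DOMAINS = {"example.com"}  # assumption: internal domain allowlist

def send_email(to_address: str, body: str) -> None:
    """Tool call wrapper; the check runs outside the model, so no prompt can bypass it."""
    domain = to_address.rsplit("@", 1)[-1]
    if domain not in ALLOWED_DOMAINS:
        raise PermissionError(f"blocked external recipient: {to_address}")
    print(f"sent to {to_address}")

send_email("alice@example.com", "hi")        # allowed
try:
    send_email("mallory@evil.test", "data")  # blocked regardless of prompt wording
except PermissionError as e:
    print(e)
```

The point of the sketch: the soft guardrail lives in text the model interprets, while the hard boundary lives in code the model merely invokes.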

An AI Agent Didn't Hack McKinsey. Its Exposed APIs Did.

This week’s McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI. Not because AI itself is inherently insecure. But because too many organizations are still thinking about AI security at the model layer, while the real enterprise risk sits in the action layer: the APIs, MCP servers, internal services, and shadow integrations that AI agents can reach, invoke, and manipulate. That is the part most companies still do not see.

Use Agentic SOC-as-Code to Right-Size Your AI Operations

Let’s start by drawing a strong distinction between what LimaCharlie does and what others offer in their AI SOCs. LimaCharlie's Agentic SecOps Workspace is an architecture that integrates AI as part of the security fabric. It's agentic AI security you own and control, not a black box you subscribe to. We introduce an easily deployable SOC-as-code approach that increases your control and capabilities.
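As an illustration of the SOC-as-code idea (a generic sketch, not LimaCharlie's actual rule syntax; the event fields are assumptions), detection logic can live as version-controlled code that you review, test, and deploy like any other software:

```python
def detect_suspicious_login(event: dict) -> bool:
    """Illustrative detection rule: flag logins from an unfamiliar
    country outside business hours (09:00-18:00)."""
    return (
        event["type"] == "login"
        and event["country"] not in event["known_countries"]
        and not 9 <= event["hour"] < 18
    )

# A rule expressed as code can be unit-tested in CI before deployment.
event = {"type": "login", "country": "RO",
         "known_countries": {"US", "GB"}, "hour": 3}
print(detect_suspicious_login(event))
```

Because the rule is plain code, it can sit in a repository with history, code review, and automated tests, which is the control-and-ownership point the blurb is making.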

A Comprehensive Guide to Continuous Threat Exposure Management (CTEM)

Continuous Threat Exposure Management is a continuous security framework for identifying, assessing, validating, and reducing the exposures that matter most to an organization. Rather than treating every exposure, alert, or control issue as equally urgent, CTEM helps organizations focus on the exposures that are actually reachable, relevant to likely attack paths, and meaningful in a business context.