Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

A January Snapshot: Real-World AI Usage

AI is no longer a fringe productivity experiment inside organisations; it is embedded, habitual, and increasingly invisible. This snapshot from CultureAI’s January usage data highlights how AI is actually being used across everyday workflows, and where risk is forming as a result. Rather than focusing on hypothetical threats or model-level concerns, the findings below surface behavioural signals from real interactions: prompts, file uploads, and context accumulation.

How to Prevent Active Directory Attacks by Securing Privileged Accounts

Let’s be honest: when Active Directory is compromised, the incident is never small. Almost every major enterprise breach involves Active Directory at some point. Attackers may enter through phishing, malware, or a misconfigured endpoint, but their real goal is always the same: gain control over privileged identities and Domain Admin accounts. Once that happens, containment becomes difficult and recovery becomes painful. Preventing Active Directory attacks isn’t about adding more tools.

How do AI guardrails protect infrastructure from the unsafe and unpredictable territory of LLM risks?

An AI firewall or guardrail sits between your applications and large language models to keep the data sent to and received from LLMs safe, compliant, and high-quality. It is designed to inspect natural-language traffic and protect your infrastructure against LLM vulnerabilities, including prompt injection, jailbreak attacks, data poisoning, system prompt leakage, and OWASP Top 10 vulnerabilities, using advanced, proprietary reasoning models.
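The inspection step described above can be sketched in miniature. This is an illustrative example only, not the vendor's implementation: production guardrails use trained classifiers and reasoning models rather than regexes, and the pattern list and function names here are assumptions. It shows only where the check sits, between the application and the LLM.

```python
import re

# Hypothetical deny-list of common prompt-injection phrases.
# Real guardrails use ML classifiers; regexes are for illustration only.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"you are now (in )?developer mode",
]

def screen_prompt(prompt: str):
    """Return (allowed, matched_pattern) for a prompt about to be
    forwarded to an LLM. Blocks on the first matching pattern."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, pattern
    return True, None

# The guardrail would call screen_prompt() on every request and
# refuse to forward blocked traffic to the model.
```

The same hook point can inspect responses on the way back, which is how system-prompt-leakage checks are typically placed.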

Using SSL Inspection and AI Guardrails to Protect Infrastructure

How do you protect your AI infrastructure from threats without impacting user experience? In this video, we cover the methods organizations can use to inspect encrypted traffic, including what is sent to AI chatbots, and to add guardrails that protect against security risks.

How hackers REALLY operate #cybersecurity #exposé

The episode explores how modern cybercrime works, from the meaning of “hacker” and the growth of an underground industry to scapegoats, lone wolves, and cartel-style structures. Listeners hear how criminals cash out, protect themselves better than their victims do, exploit new AI tools, and treat attacks as business, with no honour in sight. For more information about us, or if you have any questions you would like us to discuss, email podcast@razorthorn.com. We give our clients a personalised, integrated approach to information security, driven by our belief in quality and discretion.

0-Click RCE in Claude Desktop: How AI Extensions Threaten Endpoint Security

The modern enterprise software ecosystem increasingly relies on desktop AI applications enhanced through extensible plugin or extension frameworks. These extensions are designed to improve productivity by enabling integrations with local files, browsers, APIs, developer tools, and internal systems. However, this same extensibility introduces a high-risk attack surface when extension permissions, sandboxing, and input validation are weakly enforced.
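The weak-enforcement problem above is essentially a missing deny-by-default permission check. The sketch below is hypothetical and not Claude Desktop's actual API; the extension names and permission strings are invented to show the principle: an extension gets only the capabilities explicitly declared for it, and everything else is denied.

```python
# Hypothetical manifest of extensions and their declared permissions.
# A deny-by-default model: anything not listed here is refused.
ALLOWED_PERMISSIONS = {
    "notes-helper": {"fs:read"},          # read-only extension
    "build-runner": {"fs:read", "exec"},  # explicitly trusted to run commands
}

def authorize(extension: str, permission: str) -> bool:
    """Grant a permission only if it was explicitly declared for this
    extension; unknown extensions and undeclared permissions are denied."""
    return permission in ALLOWED_PERMISSIONS.get(extension, set())

# The host application would call authorize() before every sensitive
# action an extension requests (file access, process execution, etc.).
```

Input validation and sandboxing sit on top of this: even an authorized action should run with the least privilege the host can grant.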

Exabeam Agent Behavior Analytics: First-of-Its-Kind Behavioral Detections for AI Agents

AI agents are moving into real workflows faster than most teams expected. According to PwC’s 2025 AI Agent Survey, 79% of companies are already adopting AI agents, and 88% of executives expect to increase AI-related budgets in the next year. These agents are now handling research, summarization, customer engagement, and operational tasks at a scale humans can’t match.

LevelBlue SpiderLabs: Breaking Down the Ransomware Groups Targeting the Education Sector

Ransomware groups have ramped up their efforts, launching attacks on the education sector, with recent incidents striking targets ranging from an Australian institution of higher learning to a school district in North Carolina. These institutions hold large amounts of valuable data, such as student records, intellectual property, and financial information, that threat groups can leverage for financial gain. Education is also targeted because it must stay in operation.

AI Agent-to-Agent Communication: The Next Major Attack Surface

We are witnessing the end of the "Human-in-the-Loop" era and the beginning of the "Agent-to-Agent" economy. Until recently, most AI interactions were hub-and-spoke models where a human user prompted a central model, reviewed the output, and then took action. That model provided a natural safety brake. If the AI hallucinated or suggested a malicious action, a human was there to catch it. That safety brake is disappearing.