
What is Slopsquatting? The AI Package Hallucination Attack Already Happening

Typosquatting, registering a typoed version of a popular package and waiting for a developer to mistype and install the wrong one, has been around in npm for a decade. It's nothing new; the registry has protections for it. Then AI came along and changed the game. Slopsquatting is the AI flavor of typosquatting: instead of betting on human typos, attackers bet on AI hallucinations, registering the package names that LLMs confidently recommend but that don't actually exist.
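The practical mitigation is to treat every AI-suggested dependency as unverified until it has been checked against something you trust: the registry's download counts and release history, or simply a team-maintained allowlist. A minimal sketch of the allowlist approach (the `vetted` set and package names below are hypothetical, not from the article):

```python
def flag_suspicious(candidates, vetted):
    """Return AI-suggested package names that are not on the vetted allowlist.

    Any name an assistant recommends that the team has never reviewed is a
    potential hallucination and should be checked against the registry
    (download counts, release history, maintainers) before running install.
    """
    return sorted(set(candidates) - set(vetted))

# Two real-looking but unvetted names get flagged for manual review.
suggested = ["express", "lodash", "fast-json-parserx", "express-auth-core"]
vetted = {"express", "lodash", "react"}
print(flag_suspicious(suggested, vetted))
```

The same check can sit in CI, failing the build whenever a lockfile gains a dependency outside the allowlist.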

Outpacing Modern Adversaries with the CrowdStrike Agentic SOC

Adversaries are weaponizing AI, accelerating tradecraft and moving from access to impact at machine speed. As breakout times collapse to seconds, security teams cannot rely on manual processes or static automation to keep up. Meet the CrowdStrike Agentic SOC, a new operating model built for the AI era.

Reduce False Positives Automatically with @claude Code and LimaCharlie

Noisy alerts slow down every SOC. See how Claude Code with LimaCharlie can analyze your existing detection logic and triggered alerts to identify what's generating the noise and what can be done about it. After running the prompt, Claude Code reviews your rules and their trigger frequency, identifies the ones generating false positives, and produces specific recommendations for suppression rules to apply. In this example, it flags three rules and provides the logic to address each one, whether the issue stems from a syntax problem or detection logic that needs tightening.
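The first step of that workflow, ranking rules by how often they fire and surfacing the outliers as suppression candidates, is simple to sketch. This is an illustrative stand-in, not LimaCharlie's or Claude Code's actual logic, and the alert field names are assumptions:

```python
from collections import Counter

def noisy_rules(alerts, threshold):
    """Count how often each detection rule fired and return the rules whose
    trigger count exceeds `threshold`, sorted noisiest-first. These are the
    suppression-rule candidates a human (or an agent) should review."""
    counts = Counter(a["rule"] for a in alerts)
    return [(rule, n) for rule, n in counts.most_common() if n > threshold]

# Toy alert stream: one rule dominates the volume.
alerts = (
    [{"rule": "powershell-encoded-cmd"}] * 40
    + [{"rule": "dns-tunnel-heuristic"}] * 3
    + [{"rule": "new-admin-account"}] * 1
)
print(noisy_rules(alerts, threshold=10))
```

Frequency alone does not prove a rule is wrong, which is why the article's workflow hands the ranked list to an agent for rule-by-rule analysis rather than suppressing automatically.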

Humans Will Give AI Anything If You Make It Sound Cool Enough

There's a beautiful moment happening right now, and by "beautiful" I mean "horrifying in that can't-look-away-from-the-car-crash sense." People are giving OpenClaw access to, well, pretty much their entire lives. The results are exactly what you'd expect… One user gave his agent $500 and watched it create 25 trading strategies, generate 3,000+ reports, build 10 new algorithms, scan every post on X, and trade 24/7 non-stop. The result? It lost everything. Not most of it. Everything.

NIST AI Risk Management Framework Insights for Cybersecurity

AI is now widely used across security, automation, and digital infrastructure. With that shift, risk is no longer limited to technical failures – it also includes trust, data misuse, and system authenticity. This article explains what the NIST AI Risk Management Framework is, how AI risk affects security, the key risk categories, and how cybersecurity infrastructure supports trustworthy AI systems.

Is AI Making Us Mentally Lazy? The Hidden Security Risk of Cognitive Offloading

Modern aviation offers a powerful warning about overreliance on automation. When autopilot systems became highly advanced, pilots transitioned from hands-on flying to supervising computer-driven controls. Efficiency improved, but skill degradation followed. In rare moments when automation failed, even seasoned pilots sometimes struggled with basic manual maneuvers.

AI Under Control: Link11 Launches AI Management Dashboard for Clean Traffic

Link11 launches its new "AI Management Dashboard", closing a critical gap in how companies manage AI traffic. Artificial intelligence is fundamentally changing internet traffic. But while many companies are already feeling the strain of AI crawlers on their infrastructures, they often lack clarity, reliable data, and operational control. With the new solution, the European IT security provider is, for the first time, making AI traffic transparent, controllable, and auditable within existing workflows.

Teleport Named to Futuriom 50 for Second Consecutive Year, Recognized as an AI Infrastructure Identity Leader

Teleport has been selected for the Futuriom 50 (2026), marking Teleport's second consecutive year on the list and recognition as an AI Infrastructure Leader. Futuriom Founder and Principal Analyst Scott Raynovich highlighted Teleport's differentiated approach to identity-based security for infrastructure, cloud, and AI access.

Report: AI-Driven Fraud Surged by 1200% in December 2025

AI-driven fraud attacks spiked by more than 1200% in December 2025, according to a new report by Pindrop Security. Threat actors are using AI to assist in every stage of the attack, from deploying bots to conduct reconnaissance to using deepfakes to trick humans. “According to Pindrop internal data, AI fraud (or non-live fraud) surged 1210% by December 2025,” the researchers write.

How AI is Reshaping Cyber Threats

In an episode of Guardians of the Enterprise, Ashish Tandon, Founder & CEO, Indusface, spoke with Madhur Joshi, CISO at HDB Financial Services (part of the HDFC Group), about how AI is reshaping the cyber threat landscape. They discussed how attackers are now leveraging AI to launch more sophisticated phishing campaigns, automate malware, and scale attacks faster than ever before. As AI lowers the barrier to entry, the speed and complexity of attacks continue to increase, making it harder for organizations to keep up.