
AI security: A comprehensive guide for evolving teams

The AI boom has introduced intelligent tools into most industries, not just tech-first organizations. But rising adoption also opens the door to new risks. Vanta’s AI governance survey found that 63% of organizations rate data privacy and protection as the top concern with AI, followed by security and adversarial threats at 50%. These numbers underscore how urgently organizations need to build defenses against AI-specific attack vectors.

AI Code Review in 2025: Technologies, Challenges & Best Practices

AI code review leverages artificial intelligence models and machine learning techniques to analyze and provide feedback on source code, automating and improving the traditional code review process. It fits naturally into software development workflows and offers significant advantages to developers and teams: it can scan for bugs, style violations, security vulnerabilities, and other issues.
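As a concrete illustration of the kind of security vulnerability an AI reviewer might flag, here is a minimal, hypothetical Python sketch (the function names and schema are invented for this example): a query built by string interpolation is open to SQL injection, and the suggested fix binds the value as a parameter instead.

```python
import sqlite3

# Hypothetical snippet of the kind an AI reviewer might flag:
# interpolating user input into a SQL string enables injection.
def find_user_unsafe(conn, username):
    # FLAGGED: untrusted input concatenated into the query text
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

# Suggested remediation: pass the value as a bound parameter.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"  # classic injection payload
print(find_user_unsafe(conn, payload))  # leaks every row
print(find_user_safe(conn, payload))    # matches nothing
```

A reviewer (human or AI) flagging the first function and proposing the second is exactly the style-plus-security feedback loop the article describes.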

CTI Roundup: SystemBC, ShinyHunters, AI-obfuscated Phishing

This week, Tanium’s Cyber Threat Intelligence (CTI) team investigates SystemBC, a large-scale proxy botnet that’s leveraging compromised virtual private server (VPS) infrastructure to support cybercriminal operations, including ransomware and credential theft. Next, the team looks at ShinyHunters—a financially motivated data extortion group that’s now targeting enterprise cloud applications.

How Falcon ASPM Secures GenAI Applications and Lessons from Dogfooding

The widespread availability of large language models (LLMs) has driven the rapid development of generative and agentic AI applications for business use cases. These systems can reason, plan, and act autonomously, creating security risks that traditional security tools weren’t built to handle. Their popularity has widened the attack surface, both for organizations using external LLMs and those building their own GenAI applications.

Beyond Agent-Washing: How Torq Delivers True Agentic Automation for Security

Eldad Livni is the Co-Founder and Chief Innovation Officer at Torq. Prior to founding Torq, Eldad co-founded and served as CPO of Luminate Security, a pioneer in Zero Trust/SASE. Following Luminate’s acquisition by Symantec, he went on to act as CPO of Symantec’s Zero Trust/Secure Access Cloud offering. The security industry has a new buzzword problem.

Understanding the OWASP AI Maturity Assessment

Today, almost all organizations use AI in some way. But while it creates invaluable opportunities for innovation and efficiency, it also carries serious risks. Mitigating these risks and ensuring responsible AI adoption depends on mature AI practices, guided by governance frameworks. The OWASP AI Maturity Assessment (AIMA) is one of the most practical. In this article, we’ll explore what it is, how it compares to other frameworks, and how organizations can use it to assess their AI maturity.

0Click Attacks: When TTPs Resurface Across Platforms

If there’s one lesson security teams should take from recent disclosures, it’s this: AI agent attack techniques don’t disappear; they resurface across vendors and platforms, with only small variations. What researchers called out months ago is showing up again, now in Salesforce as the ForcedLeak vulnerability.

AI Data Privacy Regulations: Legal and Compliance Guide

The regulatory landscape for AI and privacy reached a turning point in 2025. The headlines are familiar: laws multiply, consumer expectations harden, and enforcement accelerates. What is different this year is the shift from occasional audits to always-on proof. Regulators and enterprise customers want to see working controls inside your pipelines, not just policy PDFs.