Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

New Cloudflare report warns of a 'Technical Glass Ceiling' stifling AI growth and weakening cybersecurity

New research shows that organisations modernising apps are 3x more likely to see AI payoffs, while those clinging to legacy systems face rising security risks and developer talent shortages.

ServiceNow's Virtual Agent Vulnerability Shows Why AI Security Needs Traditional AppSec Foundations

The recent disclosure of what security researchers are calling "the most severe AI-driven vulnerability uncovered to date" in ServiceNow's platform serves as a stark reminder: securing agentic AI isn't just about new AI-specific controls; it requires getting the fundamentals right first.

Beneath the AI iceberg: The forces reshaping work and security

In conversations about AI, there’s a tendency to treat the future like a horizon we’re walking toward, always somewhere ahead, always a question of when. But if we look closely, the forces reshaping work, identity, and security beneath the surface are far more consequential than most people realize. More importantly, that reshaping is already happening.

Arctic Wolf and AWS: AI-Powered SOC and Security Incident Response

Discover how Arctic Wolf partners with Amazon Web Services (AWS) to deliver cutting-edge, AI-powered Security Operations Center (SOC) capabilities and advanced security incident response solutions. This video explores how Arctic Wolf leverages AWS cloud infrastructure and artificial intelligence to provide: Learn how this powerful combination enhances your organization's security posture, reduces response times, and protects against evolving cyber threats through intelligent automation and comprehensive managed detection and response (MDR) services.

How Agentic AI Creates Shadow APIs: Security Risks Explained

How Agentic AI Creates Shadow APIs: Security Risks Explained As businesses move from static applications to Agentic AI, the security landscape is shifting beneath our feet. In this clip from the A10 Networks webinar, "APIs are the Language of AI: Protecting Them is Critical," experts Jamison Utter and Carlo Alpuerto discuss a new frontier in cybersecurity: AI that builds its own APIs.

Stop buying niche tools to secure your AI. #cybersecurity #aisecurity #engineering

In his first prediction for 2026, Ev explains why that strategy is about to fail. We used to let microservices run anonymously because we had bigger fires to fight. But when all software becomes autonomous AI, anonymity is a risk you can't afford. If your software behaves like a human, why separate it from your human identity strategy? The future isn't "NHI." It's a Unified Identity Layer where humans and non-humans are managed as equals.

How Security Teams Can Tackle Information Overload and Work Smarter

The modern security professional drowns in data every single day. Between threat intelligence reports, compliance documentation, vendor assessments, and incident logs, there's simply too much to read and not enough hours to read it. This isn't just frustrating. It's a genuine security risk. When critical information gets buried under mountains of PDFs and reports, threats slip through the cracks. The good news? There are practical strategies and tools that can help security teams cut through the noise. Let's explore how to manage this avalanche of information without burning out your team.

LLM Red Teaming: Threats, Testing Process & Best Practices

LLM red teaming is a proactive security practice that involves systematically testing large language models (LLMs) with adversarial inputs to find vulnerabilities before deployment. By using manual or automated methods to probe for weaknesses, red teamers can identify issues like harmful content generation, bias, or security exploits, which are then addressed through a continuous “break-fix” loop to improve the model’s safety and reliability.

AI Deepfakes Are Impersonating Religious Figures to Solicit Donations

WIRED reports that deepfake attacks are impersonating pastors and other religious figures in order to scam congregations. Father Mike Schmitz, a priest who hosts a podcast with over a million followers, warned his listeners in November that AI-generated deepfakes were using his likeness to fraudulently solicit donations. WIRED found that several of these fake accounts are still active on TikTok, and they appear when a TikTok user searches for Father Schmitz.