Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

What is a Prompt Injection Attack?

AI tools are quickly becoming part of everyday business workflows. From chatbots to automation tools, large language models now handle sensitive tasks and data. But with this growth comes new security risks. One of the biggest emerging threats is the prompt injection attack, in which attackers manipulate inputs to cause AI systems to ignore their original instructions. Unlike traditional cyberattacks, this method exploits weaknesses through language rather than code.

AI Compliance: 5 Key Frameworks, Challenges, and Best Practices

AI compliance ensures AI systems follow laws, ethics, and standards by managing risks like bias, privacy violations, and lack of transparency through robust governance, documentation, and continuous monitoring, using frameworks like the EU AI Act and NIST AI Risk Management Framework (RMF) to build trust and avoid penalties in developing, deploying, and operating AI.

AI Moves Fast, Privacy Has to Move Faster with Ojas Rege

In this episode, Caleb Tolin welcomes Ojas Rege of OneTrust for a practical, wide-ranging conversation on how data privacy and governance must evolve alongside enterprise AI adoption. Ojas explains why AI fundamentally changes the privacy conversation: the same systems that enable organizations to move faster can also cause harm faster when guardrails aren’t in place. From agentic AI systems that dynamically repurpose data to general-purpose models that blur traditional notions of “intended use,” the challenge isn’t just compliance—it’s trust.

Agentic AI Security: Spin Up a Fully Configured Tenant in Minutes

LimaCharlie built a SecOps Cloud Platform that connects every component, including agentic AI, via API. This architectural approach unlocks the full potential of AI, allowing it to do more than advise. We call it the Agentic SecOps Workspace. With LimaCharlie, AI can provision tenants, deploy rulesets, configure integrations, and manage infrastructure directly. Our bring-your-own-LLM approach makes AI a native part of your security stack, not a layer on top of it.

Who's Winning the AI Arms Race: Threat Actors or Cybersecurity Defenders?

The modern threat landscape is an ever-evolving battlefield of innovation and escalation. Thanks to the rapid adoption of artificial intelligence, both attackers and defenders now have powerful new tools at their disposal. But who has the edge when it comes to the artificial intelligence (AI) arms race? Unsurprisingly, the answer is complicated.

The Case for Behavioral AI in Legal Email Security

For legal organizations, the integrity of communication isn't just a business requirement, it’s a foundational pillar of the profession. Whether it’s a sensitive case strategy, a confidential merger agreement, or personal client data, the information contained within firm emails represents an immense amount of trust and significant liability. However, as law firms increasingly migrate to cloud environments like Microsoft 365, they face a double-edged sword.

CrowdStrike Falcon AI Detection and Response

Cyber threats are evolving faster than ever — and security teams need AI that doesn’t just detect threats, but understands and responds to them in real time. In this video, we explore CrowdStrike Falcon AI Detection and Response (AIDR) and how it transforms modern security operations. Powered by the CrowdStrike Falcon platform, AIDR leverages advanced artificial intelligence to automatically identify, categorize, and prioritize threats with speed and precision — helping SOC teams cut through alert noise and focus on what truly matters.

AI on the Radar: Securing AI Driven Development

Join Vandana and Rob in this insightful webinar exploring the rapidly evolving landscape of AI security. As we shift from simple query-response models to complex autonomous agents that can plan, execute code, and access sensitive APIs, the traditional security "locks" are no longer sufficient. This session dives deep into the OWASP AI Exchange, a community-driven initiative providing practical guidance and technical controls for securing AI systems.