
The Dawn of Agentic AI in the SOC

Now that six in ten security leaders view AI as a “game changer” across all security functions and 85% of security professionals report increased AI investment and usage in the past year, it’s clear that AI is no longer a fringe technology in security operations. But the AI conversation has evolved recently as a new buzzword has taken over: agentic AI.

Does Claude 3.7 Sonnet Generate Insecure Code?

With the announcement of Anthropic’s Claude 3.7 Sonnet model, we, as developers and cybersecurity practitioners, find ourselves wondering: is the new model any better at generating secure code? We commissioned the model to generate a classic CRUD application. The model produced several files of code in one artifact, which the user can manually copy and organize according to the file tree Claude suggests alongside the main artifact.

Keep AI interactions secure and risk-free with Guardrails in AI Gateway

The transition of AI from experimental to production is not without its challenges. Developers face the challenge of balancing rapid innovation with the need to protect users and meet strict regulatory requirements. To address this, we are introducing Guardrails in AI Gateway, designed to help you deploy AI safely and confidently.
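The core idea behind a guardrail is a policy check that runs on prompts (and responses) before they reach the model or the user. A minimal sketch of that pattern is below; the category names and regex rules are illustrative assumptions, not Cloudflare's actual AI Gateway API or rule set:

```python
import re

# Hypothetical guardrail rules: each category maps to a pattern that,
# when matched, blocks the prompt from being forwarded to the model.
BLOCKED_PATTERNS = {
    "credentials": re.compile(r"\b(password|api[_ ]?key|secret)\b", re.IGNORECASE),
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-SSN-like pattern
}

def guard_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_categories) for a prompt."""
    violations = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]
    return (not violations, violations)

allowed, why = guard_prompt("My SSN is 123-45-6789, summarize my record")
print(allowed, why)  # False ['pii']
```

In a real gateway, the check would sit in the request path and either block, log, or rewrite the offending content depending on policy.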

Key Updates in the OWASP Top 10 List for LLMs 2025

Last November, the Open Web Application Security Project (OWASP) released its Top 10 List for LLMs and Gen AI Applications 2025, making some significant updates from its 2023 iteration. These updates can tell us a great deal about how the LLM threat and vulnerability landscape is evolving, and what organizations need to do to protect themselves.

The Use Of Artificial Intelligence In Threat Intelligence

Artificial Intelligence (AI) is a double-edged sword in cybersecurity, empowering both defenders and attackers. AI-driven security systems are often used to detect threats in real time, analyse large datasets for anomalies, and automate responses to cyberattacks. However, cybercriminals are also leveraging AI to create advanced malware, automate phishing attacks, and evade traditional defenses.
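The anomaly-detection idea mentioned above can be sketched with a simple statistical baseline. The z-score approach and the sample data below are illustrative only, not a production detector:

```python
import statistics

def zscore_anomalies(counts: list[float], threshold: float = 3.0) -> list[int]:
    """Flag indices whose count deviates from the mean by more than
    `threshold` standard deviations."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hourly login-failure counts; the spike at index 5 stands out from the baseline.
hourly_failures = [12, 9, 11, 10, 13, 250, 12, 8]
print(zscore_anomalies(hourly_failures, threshold=2.0))  # [5]
```

Real AI-driven systems replace this fixed threshold with learned models, but the principle is the same: establish a baseline of normal behaviour and surface deviations for response.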

UK Cracks Down on AI-Generated Child Abuse Content

As AI tools grow more sophisticated and accessible, exploitation of them sadly increases too. Recognising this, the Home Office has made the UK the first country in the world to introduce legislation targeting predators who produce AI-generated child sexual abuse material (CSAM). AI-generated content has severe consequences for victims: such material may be used to manipulate or blackmail children, perpetuate harmful narratives, or retraumatise victims whose likenesses have been altered.

Empowering Data Security in GenAI: Step-by-Step Guide to PII Safeguarding in Bedrock using Protegrity

Generative AI (GenAI) applications, especially those built on Retrieval-Augmented Generation (RAG) pipelines, are transforming how businesses interact with data. These pipelines combine language models with extensive enterprise knowledge bases to answer real-time queries over large internal datasets, which makes robust data privacy and security essential. Amazon Bedrock’s native security guardrails, paired with Protegrity’s data protection, address this need.
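A minimal sketch of the PII-safeguarding step is shown below: sensitive values are redacted before a query or document enters the RAG pipeline. The patterns and placeholder format are assumptions for illustration, not Protegrity's or Bedrock's actual interfaces:

```python
import re

# Illustrative stand-in for a tokenization/redaction step; a real deployment
# would use the vendor's detection and format-preserving protection instead.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def redact(text: str) -> str:
    """Replace detected PII with placeholders before the text is sent
    to the retriever or the language model."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789"))
# → Contact <EMAIL>, SSN <SSN>
```

Because the model only ever sees placeholders, PII never leaves the trust boundary, while downstream systems that hold the mapping can re-identify records when authorized.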