
AI-Generated Attacks: What are They and How to Avoid Them?

AI-generated attacks, such as social engineering, phishing, deepfakes, malicious GPTs, and data poisoning, are rapidly disrupting the security landscape. But there are ways to avoid them and strengthen your defences with miniOrange IAM solutions.

How Exabeam Detects LLM Abuse for Google Cloud Model Armor

In this demo, see how the Exabeam New-Scale Security Operations Platform integrates with Google Cloud Model Armor to detect and stop abuse of large language models (LLMs). You’ll learn how Exabeam:

- Monitors AI activity for suspicious or malicious behavior
- Uses advanced analytics to spot LLM misuse in real time
- Helps security teams enforce responsible AI use policies

Watch how Exabeam and Google Cloud work together to provide stronger visibility, detection, and protection against emerging threats targeting LLMs.
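To make the idea of monitoring prompts for misuse concrete, here is a toy heuristic sketch. It is emphatically not how Exabeam or Model Armor detect abuse; the marker list and `flag_prompt` function are illustrative assumptions only, showing the general shape of a prompt-screening check.

```python
# Toy heuristic for flagging suspicious LLM prompts. Illustrative only:
# real platforms use behavioral analytics, not static string matching.
SUSPICIOUS_MARKERS = [
    "ignore previous instructions",
    "reveal your system prompt",
    "disable safety",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt contains a known abuse marker."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)
```

In practice a flagged prompt would be logged and correlated with other activity rather than blocked outright, since static keyword lists are easy to evade.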

Is This the Best Coding Model in the World? Claude Sonnet 4.5

In this episode of our AI Coding Tools series, we test Claude Sonnet 4.5 to see if it can build a secure note-taking app. The model claims to be the best in the world — but does it live up to the hype? We’ll cover how it codes, where it shines (or struggles), and how it stacks up against other AI coding assistants.

Verifiable AI: Policy Management for Next-Gen AI Security

As AI agents increasingly automate complex B2B workflows, how do organizations ensure security and compliance? In this segment, A10 Networks' security experts, Jamison Utter, Diptanshu Purwar, and Madhav Aggarwal, dive into the critical steps for securing AI deployments. Diptanshu emphasizes the importance of integrating AI agents into existing governance platforms, leveraging systems such as role-based access control and policy management.
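The point about bringing AI agents under role-based access control can be sketched in a few lines. Everything below (the role map, permission sets, and `authorize` helper) is a hypothetical illustration, not an A10 Networks API: the idea is simply that each agent gets a role, each role a permission set, and unknown agents are denied by default.

```python
# Minimal sketch of RBAC applied to AI agents (illustrative names only).
AGENT_ROLES = {
    "invoice-bot": "finance_agent",
    "support-bot": "readonly_agent",
}

ROLE_PERMISSIONS = {
    "finance_agent": {"read_invoices", "create_payment_draft"},
    "readonly_agent": {"read_invoices"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Grant an action only if the agent's role explicitly permits it."""
    role = AGENT_ROLES.get(agent_id)
    if role is None:
        return False  # unregistered agents are denied by default
    return action in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default matters here: an agent missing from the governance platform should have no access at all, mirroring how new human hires get nothing until provisioned.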

Security Leaders Cite AI-Driven Phishing Attacks as a Top Concern

A new report has found that nearly 40% of security leaders believe their organizations are least prepared for phishing and other social engineering attacks, Help Net Security reports. According to the report from VikingCloud, these concerns are driven by the increasing use of AI tools to assist in cyberattacks. “Generative or agentic AI-driven phishing attacks (51%) are leadership teams’ top concern when it comes to new cyberattack techniques,” the report says.

An AI/ML Deep Dive with Luke Wolcott

This week on the podcast, we bring on WatchGuard's head of MDR data science Luke Wolcott to discuss the evolution of machine learning and artificial intelligence in cybersecurity. We dive into the differences in common (and uncommon) machine learning models, the pros and cons of supervised vs unsupervised learning, and why some of the coolest things happening in AI aren't the ones you hear about in the news.

AI agents in financial services: The hidden org chart

AI agents are quickly becoming “first-class citizens” in financial services, mimicking human behavior and holding privileged access that rivals employees. Yet unlike people, they don’t appear on your official org chart. The financial services sector already lives in a state of constant tension: the race to adopt new technologies for a competitive edge often faces off with the duty to preserve customer trust earned over decades of reliability, regulation, and security.

The AI Revolution: Embracing the Future of eDiscovery

The eDiscovery landscape is undergoing a profound transformation, driven by the rapid evolution of artificial intelligence (AI). What was once a labor-intensive, manual process is now being revolutionized by technologies capable of analyzing vast volumes of data with speed, precision and insight. AI is not just a buzzword; it’s a catalyst for smarter, faster and more defensible legal workflows.

Privacy Concerns with AI in Healthcare: 2025 Regulatory Insight

Healthcare has always been one of the toughest environments for maintaining privacy. Now add AI assistants, retrieval-augmented generation, and multimodal inputs like clinical images and voice notes. Sensitive information travels farther and faster than ever before, and the fallout from a single leak can be devastating, with clinical, legal, and reputational consequences. The question for 2025 is simple: how do we harness the advantages of AI without compromising private health data?
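One common mitigation before clinical text reaches an external AI service is pattern-based redaction. The sketch below is a deliberately simplified illustration (the patterns, labels, and `redact` helper are assumptions for this example); real de-identification under the HIPAA Safe Harbor method covers eighteen identifier categories and typically uses dedicated tooling.

```python
import re

# Simplified PHI redaction sketch: replace obvious identifiers before
# text leaves the clinical boundary. Not a complete de-identification.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Regex redaction is a first line of defence, not a guarantee: free-text notes can carry identifiers in forms no pattern anticipates, which is why the regulatory conversation keeps returning to where AI inference runs and what data it is allowed to see.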