
Understanding AI Compliance When Choosing AI-Enabled Solutions

2001: A Space Odyssey introduced the world to HAL 9000, the fictional artificial intelligence (AI). HAL’s capabilities include everything from facial recognition to natural language processing and automated reasoning. As HAL malfunctions over time, the computer becomes violent to prevent the humans from disconnecting it. The story serves as a morality tale suggesting that without human oversight, AI is dangerous.

Session on Ghost in the Machine: Attacking Non-Human Identities in the Age of AI Agents

In this eye-opening talk at DEF CON Pune (DCG-9120), held at Indira Group of Institutes, Mr. Kalpesh Hiran, VP of Technology at miniOrange, exposes the hidden dangers of Non-Human Identities (NHIs): the API keys, service accounts, OAuth tokens, and AI agents powering your infrastructure. He noted that organizations create 92 NHIs for every human user, yet 97% are over-privileged, lack MFA, and linger as "orphans" after projects end, fueling 80% of cloud breaches.

Securing OpenClaw Access So It Can't Go Rogue

In this video, we demonstrate how to securely grant an AI agent (OpenClaw) access to Teleport-protected Kubernetes resources using Teleport Machine Identity and tbot, without exposing secrets, API keys, or long-lived tokens. You’ll see how Teleport treats AI agents as first-class identities, enforcing strict RBAC controls so the agent can only do what it’s allowed to do, like reading logs, while being blocked from sensitive actions like deleting resources or accessing secrets.
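The video walks through Teleport-specific tooling, but the underlying allow-list pattern can be sketched as a plain Kubernetes Role. The manifest below is an illustrative sketch, not taken from the video; the namespace, role, and identity names are hypothetical. Because RBAC is deny-by-default, granting only read verbs on pods and logs means deleting resources or reading Secrets is blocked without any extra deny rule.

```yaml
# Illustrative only: a least-privilege Role for an AI agent identity.
# All names here (prod, openclaw-readonly, bot-openclaw) are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: prod
  name: openclaw-readonly
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]   # read-only; no delete, no secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: prod
  name: openclaw-readonly-binding
subjects:
- kind: User
  name: bot-openclaw   # identity the agent presents (e.g. issued via tbot)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: openclaw-readonly
  apiGroup: rbac.authorization.k8s.io
```

Binding the agent's machine identity to this Role, rather than handing it a long-lived admin token, is what makes "it can only read logs" an enforced property instead of a policy document.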

Claude Code Auto Mode: What It Means for AI Agent Privilege Management

Anthropic’s new Claude Code Auto Mode is generating well-deserved attention. It introduces a classifier that sits between the developer and every tool call, reviewing each action for potentially destructive behavior before it executes. It’s a real improvement over the only previous alternative to manual approval: the --dangerously-skip-permissions flag. But the announcement is also useful for a broader reason.
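Anthropic has not published the classifier's internals, so the following is only a toy mental model of the pattern, not Claude Code's implementation: a gate that inspects each tool call before execution and escalates anything that looks destructive back to the human. The rule list and function names are entirely hypothetical; the real system uses a model, not string matching.

```python
# Toy sketch of a "classifier between the developer and every tool call".
# Hypothetical stand-in: the real Auto Mode classifier is a model,
# not this pattern list.

DESTRUCTIVE_PATTERNS = ("rm -rf", "drop table", "kubectl delete", "force push")

def classify(tool_name: str, args: str) -> str:
    """Return 'allow', or 'escalate' for actions that look destructive."""
    text = f"{tool_name} {args}".lower()
    if any(p in text for p in DESTRUCTIVE_PATTERNS):
        return "escalate"  # fall back to manual human approval
    return "allow"

def run_tool(tool_name: str, args: str, execute, ask_human):
    """Gate a tool call: auto-run safe actions, ask before risky ones."""
    if classify(tool_name, args) == "escalate" and not ask_human(tool_name, args):
        return None  # blocked before anything executes
    return execute(tool_name, args)
```

The design point the announcement makes implicitly is that the check happens per action, not per session: privilege is evaluated continuously rather than granted once up front.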

AI Workload Security on Azure: Evaluating Defender for Cloud Against Specialized Runtime Tools

Your SOC gets a Defender for Cloud alert: “Suspicious API call from AI workload pod.” You click through and find a LIST secrets call against the Kubernetes API server from a pod running your invoice-processing agent on AKS. The pod’s Workload Identity has Contributor access to your key vault. By the time your analyst opens the AKS Security Dashboard, the pod has been rescheduled.

AI Agent Security Framework on AWS EKS: Implementation Guide

You’ve enabled GuardDuty EKS Runtime Monitoring across your clusters. You’ve configured IRSA for your Bedrock-calling agents. CloudTrail is logging every bedrock:InvokeModel event. And last Tuesday, one of your AI agents exfiltrated 12,000 customer records through a sequence of API calls that every one of those tools recorded as completely normal—because at the control plane level, they were.

AI Adoption Surging in Financial Services - But Control Lagging

Artificial intelligence is moving rapidly from experimentation into everyday use across financial services. From client servicing and research to operations and risk analysis, AI is increasingly embedded in core workflows. This shift is widely recognised within the industry. Recent research indicates that 67% of financial services organisations report rapid AI adoption, with 93% ranking AI as a top security priority heading into 2026. At the same time, the governance and control structures to match are still being established.

What MSP and IT leaders need to know about security, compliance and AI in 2026

Artificial intelligence (AI) is transforming how organizations operate, but it’s also reshaping one of the most complex areas of IT: compliance. What was once a structured, checklist-driven process is now continuous and fast-moving, introducing new risks, dependencies and expectations. As AI adoption accelerates, so does the pressure on both managed service providers (MSPs) and IT professionals to interpret and comply with evolving regulations.

RSA 2026: The Shift Toward Security FOR AI

RSA Conference 2026 made one thing clear very quickly. Security leaders are done with generic AI pitches. After two years of relentless “AI everything,” the market is now pushing back. There is a growing fatigue with vague promises, surface-level features, and what many are calling outright AI washing. The result is a trust gap. What cut through this year was not another AI-powered detection claim. It was a much more grounded question.