AI Workload Security on Azure: Evaluating Defender for Cloud Against Specialized Runtime Tools

Your SOC gets a Defender for Cloud alert: “Suspicious API call from AI workload pod.” You click through and find a LIST secrets call against the Kubernetes API server from a pod running your invoice-processing agent on AKS. The pod’s Workload Identity has Contributor access to your key vault. By the time your analyst opens the AKS Security Dashboard, the pod has been rescheduled.
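The scenario above, a pod enumerating secrets through the Kubernetes API, is exactly the kind of event that surfaces in audit logs. As a minimal illustration only (not Defender for Cloud's actual detection logic), the sketch below filters Kubernetes audit-log entries, represented here as plain dicts, for `list` requests against `secrets` made by workload service accounts; field names follow the Kubernetes audit event schema:

```python
# Toy filter over Kubernetes audit-log entries, flagging `list secrets`
# calls made by service accounts. Illustrative sketch, not production code.

def flag_secret_enumeration(events):
    """Return audit events where a service account listed secrets."""
    hits = []
    for ev in events:
        obj = ev.get("objectRef", {})
        user = ev.get("user", {}).get("username", "")
        if (ev.get("verb") == "list"
                and obj.get("resource") == "secrets"
                and user.startswith("system:serviceaccount:")):
            hits.append(ev)
    return hits

# Hypothetical audit entries for an invoice-processing agent pod.
events = [
    {"verb": "get", "objectRef": {"resource": "pods"},
     "user": {"username": "system:serviceaccount:apps:invoice-agent"}},
    {"verb": "list", "objectRef": {"resource": "secrets"},
     "user": {"username": "system:serviceaccount:apps:invoice-agent"}},
]
print(flag_secret_enumeration(events))
```

In practice this kind of rule would run against streamed audit events rather than an in-memory list, but the matching logic is the same.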

Session on Ghost in the Machine: Attacking Non-Human Identities in the Age of AI Agents

In this eye-opening talk at DEF CON Pune (DCG-9120), held at Indira Group of Institutes, Kalpesh Hiran, VP of Technology at miniOrange, exposes the hidden dangers of Non-Human Identities (NHIs): the API keys, service accounts, OAuth tokens, and AI agents powering your infrastructure. He explains that organizations create 92 NHIs for every human user, yet 97% are over-privileged, lack MFA, and linger as "orphans" after projects end, fueling 80% of cloud breaches.

Planning a spring break trip? Don't fall for these 7 travel scams

Don't let fraud ruin your trip. Spring break is supposed to be about poolside playlists and late-night tacos, not calling your bank from a hotel lobby because your card's been maxed out. Discover 7 common spring break scams and learn how to protect yourself with these expert travel tips. With a little awareness and Avast Free Antivirus protecting your devices, you can hit the beach without handing criminals an opening.

AI Agent Security Framework on AWS EKS: Implementation Guide

You’ve enabled GuardDuty EKS Runtime Monitoring across your clusters. You’ve configured IRSA for your Bedrock-calling agents. CloudTrail is logging every bedrock:InvokeModel event. And last Tuesday, one of your AI agents exfiltrated 12,000 customer records through a sequence of API calls that every one of those tools recorded as completely normal—because at the control plane level, they were.
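The blurb above captures why per-call logging misses agent exfiltration: each call is individually normal at the control plane. One hedged sketch of a compensating control, assuming you already export per-call records as dicts with an identity and a record count (the event shape and threshold here are illustrative assumptions, not a GuardDuty or CloudTrail feature), is a simple volume baseline per agent identity:

```python
from collections import Counter

# Toy volume baseline: flag agent identities whose cumulative record
# access over a window exceeds a threshold, even when every individual
# call (e.g. bedrock:InvokeModel) looks normal on its own.

def flag_high_volume_agents(call_records, threshold=1000):
    """call_records: iterable of {'identity': str, 'records_accessed': int}."""
    totals = Counter()
    for rec in call_records:
        totals[rec["identity"]] += rec["records_accessed"]
    return {ident: n for ident, n in totals.items() if n > threshold}

# Hypothetical window: 120 small reads by one agent add up to 12,000 records.
calls = (
    [{"identity": "invoice-agent", "records_accessed": 100}] * 120
    + [{"identity": "support-agent", "records_accessed": 5}] * 10
)
print(flag_high_volume_agents(calls))  # → {'invoice-agent': 12000}
```

The design point is that the detection signal lives in the aggregate, not in any single API event, which is why a threshold over a rolling window catches what per-event rules record as normal.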

GitHub Spark vs. Replit - Vibe Code Challenge

We pit GitHub Spark (in public preview) against Replit's AI agent. The challenge? Build a fully functional community forum for DIY tips from a single prompt. We compare design aesthetics, mobile responsiveness, login security, and deployment speed to see which tool creates a truly production-ready application. Which one do you think deserved the win? Let me know in the comments!

From Shai-Hulud to LiteLLM: Supply Chain Attackers Are Coming for Your Agents

The LiteLLM supply chain compromise of March 24, 2026, is not an isolated incident. It is the latest and perhaps most dangerous chapter in an evolving attacker playbook that JFrog Security Research has been tracking for years. The target has shifted from developers to the AI agents that developers now rely on to build software.

AI Adoption Surging in Financial Services - But Control Lagging

Artificial intelligence is moving rapidly from experimentation into everyday use across financial services. From client servicing and research to operations and risk analysis, AI is increasingly embedded in core workflows. This shift is widely recognised within the industry. Recent research indicates that 67% of financial services organisations report rapid AI adoption, with 93% ranking AI as a top security priority heading into 2026. At the same time, governance and control structures are still being established, lagging behind the pace of adoption.

Securing OpenClaw Access So It Can't Go Rogue

In this video, we demonstrate how to securely grant an AI agent (OpenClaw) access to Teleport-protected Kubernetes resources using Teleport Machine Identity and tbot, without exposing secrets, API keys, or long-lived tokens. You’ll see how Teleport treats AI agents as first-class identities, enforcing strict RBAC controls so the agent can only do what it’s allowed to do, like reading logs, while being blocked from sensitive actions like deleting resources or accessing secrets.
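The RBAC behaviour described above, allowing log reads while blocking deletes and secret access, can be modelled as a small deny-by-default allow-list check. This is a toy policy evaluator for illustration, not Teleport's actual engine, and the rule format is an assumption:

```python
# Toy allow-list evaluator modelling least-privilege agent RBAC:
# the agent may only read pod logs; everything else is denied by default.

AGENT_RULES = [
    {"verbs": {"get"}, "resources": {"pods/log"}},
]

def is_allowed(verb, resource, rules=AGENT_RULES):
    """True only if some rule grants this verb on this resource."""
    return any(verb in r["verbs"] and resource in r["resources"]
               for r in rules)

print(is_allowed("get", "pods/log"))   # → True  (reading logs)
print(is_allowed("delete", "pods"))    # → False (destructive action)
print(is_allowed("get", "secrets"))    # → False (secret access)
```

Deny-by-default is the key design choice: the agent's identity starts with no capabilities, and each permission must be granted explicitly, which is the same posture the video demonstrates with Teleport Machine Identity and tbot.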