AI Workload Security on AWS: Evaluating Native Tools vs Third-Party Solutions

Your Bedrock agent running on EKS receives a prompt through your RAG pipeline. CloudTrail logs it as a normal bedrock:InvokeModel event—status 200, authorized IAM role, expected endpoint. But inside the container, the agent’s response triggers a tool call that spawns curl to an external IP, exfiltrating the context window. GuardDuty doesn’t flag it because the connection routes through a permitted VPC endpoint. You open your AWS console and see a healthy API call.

How to Evaluate AI Workload Security Tools for Enterprise Teams

You’ve sat through three vendor demos this week. Vendor A showed you an AI-SPM dashboard with a pie chart of misconfigurations. Vendor B showed you a nearly identical dashboard with different branding and a slightly wider set of compliance frameworks. Vendor C showed you posture findings with an “AI workload” tag that wasn’t in their product last quarter.

Lovable vs. Bolt - Vibe Code Challenge

Which AI tool is better for building a real app without writing code, Bolt or Lovable? In this video, I put both AI app builders head-to-head using the exact same prompt to create a DIY home repair forum. From database setup to authentication, UI design, publishing, and security checks, we compare how each platform performs in real time. The goal isn't just to generate something that looks like an app; it's to see whether these tools can actually create something usable, functional, and potentially production-ready.

How to Apply NIST 800-53 to AI Systems

Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI. Over the last 15 years, he has authored standards, guidance, and best practices with ISO, NIST, and other governing bodies. Smith strives to create actionable resources for organizations seeking to minimize technological risk and increase value to customers.

Methods for Designing AI Identity | Teleport x The Cyber Hut

Three methods for issuing identity to AI agents — and why static credentials will always eventually leak no matter how well you vault them. Ev Kontsevoy breaks down standard credentials, durable identity, and digital twins, and explains why the issuer of identity needs to be the same across your entire environment.

The Need for Infrastructure Identity | Teleport x The Cyber Hut

Most organizations have identity over here and infrastructure over there — and they don't talk. By default, infrastructure has no identity. It's naked. Ev Kontsevoy explains why bringing identity into your infrastructure stack is a prerequisite for safe AI adoption — and what a trusted state actually looks like.

Video On Demand - Configuration Drift and the Risk of Misconfiguration

Misconfigurations can undermine security even on fully patched systems. In this webinar, CalCom’s Co-Founder and Director of Business Development Roy Ludmir explains what configuration vulnerabilities are, how configuration drift happens, and why it matters for both cyber risk and compliance. Questions? Want to talk about server hardening for your organization? Contact us at info@calcomsoftware.com.

Why Legacy Security Tools Fail to Protect Cloud AI Workloads

Your CNAPP flags a misconfigured service account. Your CSPM warns about an overly permissive IAM role. Your container scanner reports vulnerabilities in a model-serving image. But none of these tools can tell you that an AI agent just called an internal admin API it has never touched before — or that a prompt injection caused your LLM to leak customer data through a RAG connector.

AI Agent Escape Detection: How to Catch Agents Breaking Their Boundaries

Your SOC gets three alerts in quick succession: an unusual outbound connection from a container, a file read on a Kubernetes service account token, and a process spawn that doesn’t match the workload’s baseline. Three different tools, three separate dashboards, three tickets.
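Collapsing those three tickets into one incident is, at its core, correlation by workload identity and time proximity. A sketch under stated assumptions — the alert tuples, field names, and 60-second window are all illustrative:

```python
# Hypothetical sketch: merge alerts from the same workload that arrive
# within a short window into a single incident, instead of three tickets.

WINDOW = 60  # seconds between related alerts; an assumed threshold

alerts = [
    ("pod-7f2c", 100, "unusual outbound connection"),
    ("pod-7f2c", 112, "read of Kubernetes service account token"),
    ("pod-7f2c", 130, "process spawn outside workload baseline"),
]

def correlate(alerts, window=WINDOW):
    """Group alerts by workload, chaining any that fall within `window`."""
    incidents = []
    current = None
    for workload, ts, msg in sorted(alerts, key=lambda a: (a[0], a[1])):
        if current and current["workload"] == workload \
                and ts - current["last"] <= window:
            current["alerts"].append(msg)
            current["last"] = ts
        else:
            current = {"workload": workload, "last": ts, "alerts": [msg]}
            incidents.append(current)
    return incidents

result = correlate(alerts)
print(len(result), "incident;", len(result[0]["alerts"]), "chained alerts")
# → 1 incident; 3 chained alerts
```

The escape pattern only becomes visible when the connection, the token read, and the anomalous spawn land in the same record — which is exactly what three separate dashboards prevent.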