Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Container Security Without Context Is Just More Noise

Mend.io’s new Docker Hardened Images integration brings DHI intelligence directly into the AppSec workflow, giving a smarter, faster path to container security. Container scanning has a noise problem. Run a standard scan against any production image, and you’ll surface thousands of CVEs.

Your AppSec Metrics Are Lying to You. Here's What Actually Matters

Mend.io, formerly known as Whitesource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development -– using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

CloudCasa Launches in the NKP Partner Catalog, Expanding Data Protection and Mobility for NKP Users

At Nutanix.NEXT, we’re excited to announce that CloudCasa is launching in the NKP Partner Catalog, giving Nutanix Kubernetes Platform (NKP) solution users an easier way to add Kubernetes-native backup, recovery, disaster recovery, and migration to their environments. This launch builds on CloudCasa’s existing Nutanix Ready foundation and extends that value even further by making CloudCasa available through the NKP Partner Catalog.

How Minimal Container Images Are Reshaping the Fight Against CVE Exposure in Modern Cloud Environments

As the adoption of containers grows across Cloud infrastructure, Cybersecurity experts and DevSecOps leaders continue to deal with the persistent surge of publicly available software vulnerabilities. The National Vulnerability Database documented an alarming figure of 29,000 CVEs for 2023, and the numbers since then show no signs of slowing down. Research shows that the majority of production container images have known vulnerabilities. This article explores the relationship between container images and CVE vulnerabilities (exposure), the growing burden of compliance, and the target risk reduction of minimal-image strategies.

CMMC Requirements for AI Systems: What Assessors Actually Look For

Josh Rector is the Compliance Director, Public Sector at Ace of Cloud, a security and compliance consulting firm, certified CMMC Third-Party Assessor Organization (C3PAO), and Registered Provider Organization (RPO). With more than a decade of experience in cybersecurity compliance, he has worked both sides of the assessment table, leading internal and external assessments, serving as ISSO for systems at federal agencies, and guiding cloud service providers through the FedRAMP authorization process.

The AI Compliance Gap No One's Talking About (ISO, NIST, EU AI Act)

Mend.io, formerly known as Whitesource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development -– using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

AI Application Security: 6 Focus Areas and Critical Best Practices

AI application security protects AI-powered apps, including those powered by large language models ( LLMs), from unique threats like prompt injection, data poisoning, and model theft. It achieves this by securing the entire lifecycle, including code, data, algorithms, and APIs, using specialized tools and processes that go beyond traditional security measures. It involves securing the AI model’s behavior, training data, and outputs.

Kubernetes for Agentic AI: Best Practices for Security and Observability

Agentic AI workloads are shipping to production on Kubernetes faster than the standards to secure them. Many teams deploying autonomous, tool-calling agents as containerized microservices do so without a shared baseline for securing or monitoring those containers. The CNCF AI Technical Community Group recently published a comprehensive article on cloud-native agentic standards, marking the first attempt to define best practices for such deployments.

Detecting Rogue AI Agents: Tool Misuse and API Abuse at Runtime

When your CNAPP flags a suspicious dependency in an AI agent container, your WAF logs an unusual API spike, and your SIEM shows a burst of cloud storage calls—are those three separate incidents or one rogue agent attack? Most security teams treat them as three tickets in three queues, investigated by three people who may never connect the dots. By the time someone pieces together that a single compromised agent drove all three signals, the attacker has already moved laterally and exfiltrated data.

What is an AI-BOM? Why Static Manifests Fall Short

Your AI-BOM shows every model, tool, and data source you deployed. But when your SOC investigates an alert about unusual agent behavior, that inventory tells them nothing about what actually happened at runtime. Static AI-BOMs document what you intended to run. Attackers exploit what your AI workloads actually do in production: which APIs they call, what data they touch, and how they use approved tools in unapproved ways.