
What Real AI Security Incidents Reveal About Today's Risks

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

MAS TRM Compliance Checklist 2026

Singapore’s financial sector faces its most demanding regulatory environment yet in 2026. AI-powered cyberattacks, cloud-native banking infrastructure, and decentralised finance have pushed the Monetary Authority of Singapore (MAS) to sharpen its supervisory focus — and its expectations of every regulated institution. If you are a CISO, CTO, Head of Compliance, or technology risk officer at a Singapore financial institution, this guide answers the question your regulators are already asking.

Inside the Hidden VM: How Attackers Stay Undetected

Threat actors are getting better at hiding in plain sight by using virtual environments to evade detection and deliver ransomware. New research from Sophos X-Ops reveals an increase in the abuse of QEMU, an open-source emulator, to conceal malicious activity inside virtual machines. While this technique isn’t new, its use for defense evasion is accelerating, making visibility and detection even more challenging for defenders.

From Alerts to Action: Automating MSP Security

MSPs today face growing security demands alongside increasing operational complexity. Disconnected tools and manual processes create noise, slow response times, and limit scalability. The solution? Automation and integration. By connecting security platforms with PSA and RMM tools, MSPs can streamline workflows, reduce alert fatigue, and improve service delivery, turning reactive processes into proactive, efficient operations.

Mythos, Attackers, and the Part People Still Want to Skip

Anthropic built a powerful AI model and then kept it on a short leash. The important part is not that a model found bugs, which has been coming for a while. What’s worth acknowledging is that Anthropic looked at what Mythos could do and decided broad release was a bad idea. Attackers do not need a perfect autonomous system. They need leverage.

MCP: The AI Protocol Quietly Expanding Your Attack Surface

In February 2026, researchers uncovered something that should give every security leader pause. A malware operation called SmartLoader, previously known for targeting consumers who downloaded pirated software, had completely pivoted its infrastructure. SmartLoader’s new target was developers, and its new entry point was a protocol most security teams had never heard of. The payload delivered to victims: every saved browser password, every cloud session token, every SSH key on the machine.

The Shadow Supply Chain: A Pivot To Usage-Based Discovery

We’ve established the new forensic reality: a massive 72.9% inventory gap exists between the vendors you monitor and those invisible to your security team. We have seen the shortcomings of SSO and its inability to holistically monitor all the vendor applications your users engage with, along with a Shadow AI explosion that is compounding both issues. The era of procurement-only discovery is over. To secure the modern cyber workforce, we must pivot from "buying-based" to usage-based discovery.

1 in 15 MCP Servers Are Lookalikes: Is Your Org at Risk?

Researchers recently analyzed 18,000 Claude Code configuration files pulled from public GitHub repositories. What they found was straightforward and alarming: developers are already installing mistyped, misconfigured, and near-identical MCP server names — often without realizing it. The human-error condition that makes typosquatting work was already present at scale before any attacker needed to exploit it.
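The human-error condition the researchers describe is simple string similarity: a mistyped server name sits one transposition away from a legitimate one. As a minimal sketch of how a team might flag such lookalikes before install time — assuming a hypothetical allowlist of vetted server names and using only Python's standard-library `difflib` — a check could look like this:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate MCP server names (illustrative only;
# a real deployment would source this from a vetted internal registry).
KNOWN_SERVERS = ["filesystem", "github", "postgres", "puppeteer"]

def find_lookalike(name, known, threshold=0.8):
    """Return the known server name that `name` closely resembles,
    or None if `name` is an exact match or clearly distinct."""
    if name in known:
        return None  # exact match: legitimate, not a lookalike
    best, best_ratio = None, 0.0
    for candidate in known:
        # ratio() is 1.0 for identical strings and near 0 for unrelated ones
        ratio = SequenceMatcher(None, name.lower(), candidate.lower()).ratio()
        if ratio > best_ratio:
            best, best_ratio = candidate, ratio
    return best if best_ratio >= threshold else None

print(find_lookalike("gituhb", KNOWN_SERVERS))   # transposed "github" is flagged
print(find_lookalike("weather", KNOWN_SERVERS))  # distinct name: None
```

The threshold of 0.8 is an illustrative choice, not a researched value; the point is that the same cheap comparison an attacker relies on victims failing to make can be run automatically against configuration files before a server is ever installed.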

AI Agents Are Moving Your Sensitive Data: Nightfall Built a Solution Where DLP Fails

Somewhere in your environment right now, an AI agent is reading files, querying a database, and passing output through a channel your DLP has never seen. It's running under a legitimate user credential, inside a sanctioned tool, and it will not trigger a single alert. When it's done, there will be no record of what it accessed or where that data went. This is not an edge case. It is the default state of most enterprise environments in 2026.