Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Latest posts

AI Cybersecurity & Fact Check

Mend.io, formerly known as Whitesource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

The 2025 Cost of a Breach Report - The 443 Podcast - Episode 340

This week on the podcast, we discuss key findings from IBM and the Ponemon Institute's 2025 Cost of a Breach Report, including a deep analysis of AI impacts in cybersecurity. Before that, we cover Norway's claim that Russian-aligned hackers opened a floodgate in one of their dams. We also discuss a vulnerability in Microsoft 365 Copilot that allowed the AI to delete its own audit logs. The 443 Security Simplified is a weekly podcast that gets inside the minds of leading white-hat hackers and security researchers, covering the latest cybersecurity headlines and trends.

Tackling cybersecurity today: Your top challenge and strategy

Shadow IT used to be a fringe problem: a rogue Dropbox account here, a personal Gmail there. Now, it’s everywhere. One customer said it best: “We don’t have a Shadow IT problem. We are Shadow IT.” That stuck. It’s not malice. It’s urgency. People move fast. Procurement doesn’t. So teams swipe cards, spin up tools, and get on with it. The intentions are good. The risks are massive. We’ve seen it firsthand.

Inside the Kimsuky APT Leak: Stolen GPKI Certificates, Rootkits, and a Personalized Cobalt Strike from North Korea's Cyber Unit

In an unprecedented incident, a massive operational dump belonging to the North Korean Kimsuky APT group was leaked on a dark web forum. The leak, containing virtual machine images, VPS dumps, phishing kits, rootkits, and thousands of credentials, offers an unparalleled look into the inner workings of one of Pyongyang’s most prolific cyber espionage groups.

5 healthcare cybersecurity regulations and frameworks to follow in 2025

As AI and automation become increasingly embedded in healthcare operations, securing these technologies becomes critical, especially for organizations managing protected health information (PHI), which are frequent targets for cybersecurity threats such as data breaches and unauthorized access. To safeguard this sensitive data, regulatory agencies like the U.S. Department of Health and Human Services (HHS) enforce strict cybersecurity and privacy regulations under HIPAA.

The Next Level of Managed Vulnerability Scanning: Authenticated and Unauthenticated Scans

Trustwave, A LevelBlue Company, is a huge proponent of employing offensive security tactics to ensure a client is properly protected. For Trustwave, the reason is obvious. Offensive security is an effective approach to evaluate and enhance an overall security posture. We’ve written about this before (just check here, here, and here), but today we will explore the difference between an Authenticated Scan and an Unauthenticated Scan. Let’s set the stage by defining the two types of scans.
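The distinction the post sets up can be sketched in a few lines. This is an illustrative model only, not Trustwave's actual tooling: the target URL, credential names, and the choice of HTTP Basic auth are all hypothetical, standing in for whatever login mechanism a real scanner would use.

```python
import base64
from typing import Optional

def build_scan_session(base_url: str,
                       credentials: Optional[tuple] = None) -> dict:
    """Return the settings a hypothetical HTTP-based scanner would use.

    With credentials, the scan is *authenticated*: it can reach content
    behind login and report what an insider (or an attacker with stolen
    credentials) could exploit. Without them, it is *unauthenticated*:
    it sees only what an anonymous outsider sees.
    """
    session = {"target": base_url, "mode": "unauthenticated", "auth_header": None}
    if credentials:
        user, password = credentials
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        session["mode"] = "authenticated"
        session["auth_header"] = f"Basic {token}"
    return session

# Same target, two very different views of its attack surface:
outside_view = build_scan_session("https://scanme.example")
inside_view = build_scan_session("https://scanme.example", ("auditor", "s3cret"))
```

The point of the sketch is that the scanner code barely changes; what changes is the coverage, which is why the two scan types surface different classes of findings.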

Improve Prompt Quality, Consistency, and Productivity With Egnyte's AI Prompt Library

Generative AI can deliver great improvements in work productivity and quality. But business users must be able to rely on the responses their AI tools generate for them, and that’s only possible with sophisticated, often complex prompts. In addition, companies want AI solutions that ensure consistent results across teams. With gen AI, when 10 users ask the same question using their own prompts, they get 10 different responses.

Top AI Data Privacy Risks in Organizations [& How to Mitigate Them]

What if just one line in a chatbot prompt could turn into a regulatory nightmare? That’s the reality enterprises face today. In fact, Gartner predicts the average data breach will exceed $5M by 2025—and AI-driven systems multiply those risks in ways traditional IT never prepared us for. Unlike legacy apps, AI doesn’t just use data—it feeds on it, reshapes it, and sometimes leaks it right back out.

Beyond the ban: A better way to secure generative AI applications

The revolution is already inside your organization, and it's happening at the speed of a keystroke. Every day, employees turn to generative artificial intelligence (GenAI) for help with everything from drafting emails to debugging code. And while using GenAI boosts productivity—a win for the organization—it also creates a significant data security risk: employees may share sensitive information with a third party.