
Why AWS-native companies choose Vanta for compliance

Pursuing compliance frameworks such as SOC 2 or HIPAA while building a product can be complex and time-consuming. Challenges such as unclear integrations, manual evidence collection, and procurement delays are common, but AWS-native automation tools help companies overcome these hurdles and accelerate their compliance journey. In this post, we'll break down three core ways Vanta simplifies compliance for cloud-forward teams, so you can move faster, stay secure, and focus on building.

OpenAI Report Describes AI-Assisted Social Engineering Attacks

OpenAI has published a report examining AI-enabled malicious activity, noting that threat actors are increasingly using AI tools to assist in social engineering attacks and influence operations. In one case, the company banned ChatGPT accounts that were likely being used in North Korean attempts to fraudulently obtain jobs at US companies. "Similar to the threat actors we disrupted and wrote about in February, the latest campaigns attempted to use AI at each step of the employment process."

Open Chroma Databases: A New Attack Surface for AI Apps

Chroma is an open-source vector store, a database designed to let LLM chatbots retrieve relevant information when answering a user's question, and one of many technologies whose adoption has grown with the recent AI boom. Like many databases, Chroma can be configured by end users without any authentication or authorization mechanisms, leaving its contents exposed to anyone who can reach it.

How Safe is the ChatGPT Android App? An Appknox Study

Brilliant AI, broken defenses? AI-powered apps are revolutionizing how we search, learn, and communicate, but the rapid pace of innovation has come at a cost: security is often an afterthought. As part of our AI App Security Analysis Series, we’ve been scrutinizing some of the most popular AI tools on Android for hidden vulnerabilities that could put millions of users at risk.

Build Fast, Stay Secure: Guardrails for AI Coding Assistants

AI coding assistants like GitHub Copilot and Google Gemini Code Assist are changing how developers work — accelerating delivery, removing repetition, and giving teams back time to build. But speed isn’t free. Studies show that around 27% of AI-generated code contains vulnerabilities, not because the tools are broken, but because they generate code faster than most teams can review it. The result? A growing wave of insecure code is making it into production.
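To make the risk concrete, here is a minimal, hypothetical sketch of the kind of flaw that slips through unreviewed AI-generated code: string-interpolated SQL versus a parameterized query. The table and function names are illustrative, not drawn from any cited study.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: interpolating user input into SQL enables injection;
    # a crafted username can rewrite the WHERE clause entirely.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # SAFE: the ? placeholder makes the driver treat input as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # injection matches every row
safe = find_user_safe(conn, payload)      # literal lookup matches nothing
```

Both functions look equivalent at a glance, which is exactly why review-speed matters: the unsafe variant leaks the whole table when fed the injection payload, while the parameterized one returns nothing.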

The EU AI Act: What MSPs Need to Know?

The EU AI Act is the most comprehensive law in the world regulating artificial intelligence. It doesn't just apply to organizations inside the European Union; it also affects anyone doing business with the EU or offering AI-powered services in that market. If you use AI tools like ChatGPT, Copilot, Jasper, or Bard for automation, reporting, or client communication, this law applies to you.

AI-automated Fuzzing Uncovers Two More Vulnerabilities in wolfSSL

Daniel Pouzzner from wolfSSL challenged us to find three more vulnerabilities in the wolfSSL library after we found the first one in October 2024. We weren't quite able to find three, but we did find two more. Both vulnerabilities were fixed in wolfSSL version 5.8.0, released on 24 April 2025. The fuzz tests that found these vulnerabilities were generated by our AI Test Agent.