Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

What No One Tells You About Scaling Enterprise AI | Ep: 1 | AI On The Edge

Watch this exclusive LinkedIn Live conversation at the frontier of privacy, security, and GenAI. Learn how to successfully scale AI initiatives in your enterprise with proven strategies that prioritize business value over technology complexity, featuring insights from Rakuten India's VP of AI & Data.

Guest: Anirban Nandi – Vice President, AI & Data @ Rakuten India
Host: Amar Kanagaraj – Founder & CEO @ Protecto

Cato CTRL Threat Research: Uncovering Nytheon AI - A New Platform of Uncensored LLMs

With the introduction of WormGPT in 2023, threat actors have been using uncensored large language models (LLMs) for malicious activities. Following the shutdown of WormGPT in the same year, numerous alternatives have emerged—including BlackHatGPT, FraudGPT, and GhostGPT, among others—primarily accessible through Telegram channels.

AI-automated Fuzzing Uncovers Two More Vulnerabilities in wolfSSL

Daniel Pouzzner from wolfSSL challenged us to find three more vulnerabilities in the wolfSSL library after we found the first one in October 2024. We weren't quite able to find three, but we did find two more. Both vulnerabilities were fixed in wolfSSL version 5.8.0, released on 24 April 2025. The fuzz tests that found these vulnerabilities were generated by our AI Test Agent.

The EU AI Act: What MSPs Need to Know?

The EU AI Act is the most comprehensive law in the world regulating artificial intelligence. It doesn’t just apply to organizations inside the European Union; it also affects anyone doing business with the EU or offering AI-powered services in that market. If you use AI tools like ChatGPT, Copilot, Jasper, or Bard for automation, reporting, or client communication, this law almost certainly applies to you.

Build Fast, Stay Secure: Guardrails for AI Coding Assistants

AI coding assistants like GitHub Copilot and Google Gemini Code Assist are changing how developers work — accelerating delivery, removing repetition, and giving teams back time to build. But speed isn’t free. Studies show that around 27% of AI-generated code contains vulnerabilities, not because the tools are broken, but because they generate code faster than most teams can review it. The result? A growing wave of insecure code is making it into production.
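One lightweight guardrail is a pre-merge check that flags risky patterns in newly added code before it reaches human review. The sketch below is illustrative only: the pattern names and regexes are assumptions chosen for demonstration, and a real pipeline would rely on a proper SAST tool (e.g. Semgrep or CodeQL) rather than hand-rolled regexes.

```python
import re

# Illustrative-only patterns; a production guardrail would use a real
# static-analysis tool with vetted rules instead of regexes like these.
RISKY_PATTERNS = {
    "shell-injection": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "hardcoded-secret": re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
    "eval-call": re.compile(r"\beval\s*\("),
}

def review_diff(diff_lines):
    """Flag added lines in a unified diff that match a risky pattern."""
    findings = []
    for n, line in enumerate(diff_lines, 1):
        # Only inspect newly added code ("+" lines), skipping file headers ("+++").
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for name, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((n, name, line[1:].strip()))
    return findings
```

Wired into a pre-commit hook or CI step, a check like this blocks the merge (or at least pings a reviewer) when AI-generated additions match a rule, keeping review effort proportional to the volume of generated code.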

How Safe is the ChatGPT Android App? An Appknox Study

Brilliant AI, broken defenses? AI-powered apps are revolutionizing how we search, learn, and communicate, but the rapid pace of innovation has come at a cost: security is often an afterthought. As part of our AI App Security Analysis Series, we’ve been scrutinizing some of the most popular AI tools on Android for hidden vulnerabilities that could put millions of users at risk.

Open Chroma Databases: A New Attack Surface for AI Apps

Chroma is an open-source vector store, a database designed to let LLM chatbots retrieve relevant information when answering a user's question, and one of many technologies whose adoption has grown with the recent AI boom. Like many databases, Chroma can be configured by end users with no authentication or authorization mechanisms at all.
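The exposure this creates is easy to check for: an unauthenticated Chroma server will answer REST requests from anyone who can reach it. The sketch below is a minimal probe, assuming Chroma's default server layout (the /api/v1/heartbeat route on port 8000); both the route and port may differ in a given deployment.

```python
import urllib.request
from urllib.error import URLError

def probe_chroma(host: str, port: int = 8000, timeout: float = 3.0) -> bool:
    """Return True if a Chroma REST endpoint answers without credentials.

    Assumes the default Chroma server layout: a heartbeat route at
    /api/v1/heartbeat on port 8000. Adjust for your deployment.
    """
    url = f"http://{host}:{port}/api/v1/heartbeat"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # A reachable, auth-free server returns HTTP 200 with a
            # heartbeat payload; no token or password is ever sent.
            return resp.status == 200
    except (URLError, OSError):
        return False
```

A True result against an internet-facing host means the vector store, and whatever documents were embedded into it, is readable (and typically writable) by anyone, which is exactly the attack surface the article describes.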

OpenAI Report Describes AI-Assisted Social Engineering Attacks

OpenAI has published a report looking at AI-enabled malicious activity, noting that threat actors are increasingly using AI tools to assist in social engineering attacks and influence operations. In one case, the company banned ChatGPT accounts that were likely being used in North Korean attempts to fraudulently obtain jobs at US companies. “Similar to the threat actors we disrupted and wrote about in February, the latest campaigns attempted to use AI at each step of the employment process.”