
Understanding and Securing Exposed Ollama Instances

Ollama is an emerging open-source framework designed to run large language models (LLMs) locally. While it provides a flexible and efficient way to serve AI models, improper configurations can introduce serious security risks. Many organizations unknowingly expose Ollama instances to the internet, leaving them vulnerable to unauthorized access, data exfiltration, and adversarial manipulation.
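A quick way to test whether an Ollama instance is reachable from a given network position is to probe its API port. A minimal sketch, assuming Ollama's default port of 11434 and no reverse proxy in front of it (the `port_is_open` helper is illustrative, not a full scanner):

```python
import socket

# Default port the Ollama HTTP API listens on (assumes an unchanged config).
OLLAMA_PORT = 11434

def port_is_open(host: str, port: int = OLLAMA_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# An exposed instance will also answer unauthenticated HTTP requests --
# e.g. GET /api/tags lists every model pulled onto the host.
```

If the port answers from outside the host, the instance should be rebound to localhost or placed behind authentication, since the API otherwise accepts unauthenticated requests.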

A litmus test for AI agents

What is an “AI agent”? Confusion abounds. There is also some consensus: agents must of course be AI-driven systems. They should have some degree of autonomy, and they should be able to use tools in addition to understanding and reasoning. But why isn't, say, ChatGPT an agent? According to most definitions out there, it actually is. Yet most (including OpenAI themselves) don’t describe it that way.

Data Leaks and AI Agents: Why Your APIs Could Be Exposing Sensitive Information

Most organizations are using AI in some way today, whether they know it or not. Some are merely beginning to experiment with it, using tools like chatbots. Others, however, have integrated agentic AI directly into their business processes and APIs. While both types of organizations are undoubtedly realizing remarkable productivity and efficiency benefits, they may not realize they are exposing themselves to significant security risk.

Cloudflare for AI: supporting AI adoption at scale with a security-first approach

AI is transforming businesses — from automated agents performing background workflows, to improved search, to easier access and summarization of knowledge. While we are still early in what is likely to be a substantial shift in how the world operates, two things are clear: the Internet, and how we interact with it, will change; and the boundaries of security and data privacy have never been harder to trace, making security a central concern in this shift.

Take control of public AI application security with Cloudflare's Firewall for AI

Imagine building an LLM-powered assistant trained on your developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience. Sounds great, right? But what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM?
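One mitigation the scenario above points toward is scrubbing sensitive fields before documents ever reach a training or retrieval pipeline. A minimal, illustrative sketch (the `redact` helper and the two patterns are hypothetical examples, not a complete DLP solution):

```python
import re

# Illustrative patterns for two common kinds of sensitive data.
# Real pipelines need far broader coverage (names, tokens, internal IDs, ...).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace matched sensitive values with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    return text

doc = "Contact jane.doe@example.com, SSN 123-45-6789, about the outage."
print(redact(doc))  # Contact [EMAIL], SSN [SSN], about the outage.
```

Running redaction at ingestion time, before indexing or fine-tuning, keeps sensitive values out of the model and out of anything it can later surface to users.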

An early look at cryptographic watermarks for AI-generated content

Generative AI is reshaping many aspects of our lives, from how we work and learn, to how we play and interact. Given that it's Security Week, it's a good time to think about some of the unintended consequences of this information revolution and the role that we play in bringing them about.

How to Secure Your Information on AWS: 10 Best Practices

About one in three organizations that leverage cloud service providers (CSPs) use Amazon Web Services (AWS), according to November 2024 research from Synergy Research Group. This means two things. One is that when attackers want to get the most out of a single exploit, they will likely craft it to target AWS systems. And two, AWS data security best practices are a timely topic for a wide range of today's organizations.