Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

LLM Security in 2025: Risks, Mitigations & What's Next

Large language model (LLM) security refers to the strategies and practices that protect the confidentiality, integrity, and availability of AI systems that use large language models. These models, such as OpenAI’s GPT series, are trained on vast datasets and can generate, translate, summarize, and analyze text. However, like any complex software component, LLMs present unique attack surfaces because they can be influenced by the data they process and the prompts they receive from users.

6 Ways Technology Strengthens Supply Chain Compliance and Security

More than 80% of global trade by volume moves through maritime routes, according to the United Nations Conference on Trade and Development. Each container crossing borders carries not just goods, but pages of documentation, compliance checks, and security verifications. Managing all this manually leaves room for costly mistakes and unnecessary delays.

How AI Is Reshaping Cybersecurity in K12

It is first period in a busy school district. Teachers are opening their learning management systems to take attendance, preparing lesson slides, and answering a few messages from parents. Students are logging into Chromebooks after sneaking in a final Snap before leaving their phones in lockers. In the finance office, payments are being processed.

8 fundamental AI security best practices for teams in 2025

Organizations worldwide are increasingly developing or implementing AI-powered tools to streamline operations and scale efficiently. However, the benefits come with unpredictable risks unique to AI that need to be mitigated with the right safeguards. ‍ One of the biggest AI security challenges is the lack of formalized oversight. According to Vanta’s State of Trust Report, only 36% of organizations have AI-informed security policies in place or are in the process of building them.

Inside Protecto: The Technology Powering Context Security for AI

In this video, we take you under the hood of Protecto’s technology stack and show how it powers context-aware security for AI—while hiding the complexity behind simple APIs. At the core are two intelligence layers: You’ll also see how Protecto’s DeepSight engine, entropy-based tokenization, secure vault, and inference-level APIs deliver enterprise-scale security, compliance, and auditability. Protecto enables enterprises to safely unlock their data for GenAI, copilots, and Agentic workflows — without leaks, oversharing, or loss of AI capability.

The Hidden Data Compliance Risk in AI Agents at Financial Institutions

Artificial intelligence is reshaping financial services, from fraud detection to personalized banking assistants. But with innovation comes risk. AI agents—particularly those powered by large language models (LLMs)—are increasingly being embedded into financial workflows. While they promise efficiency, they also introduce a new layer of data compliance challenges.

AI Learning: It's copying everything we do!!! | AI Avenue: Ep 4

Don’t you hate it when your robot hand co-host tries to hijack your show? Yorick makes his OWN version of AI Avenue, prompting a conversation about ethics and learning in AI. Craig reaches out to experts like Amanda Haskell from @anthropic-ai to discuss how we can all use AI more responsibly. AMECA from @EngineeredArtsLtd makes a cameo to get Yorick in line. And Nick from @heygen_official swings by to make a new Craig Avatar, ethically.

Empowering Safe GenAI Adoption at a 3,600-Employee Fintech - And Stopping 20+ Data Leaks a Day

Despite having modern DLP and CASB tools in place, they lacked the behavioural insights and real-time context needed to guide employee use of GenAI tools. Shadow AI use was growing, and SecOps lacked clear visibility into which incidents required intervention.