
Closing the Guardrail Gap: Runtime Protection for OpenAI AgentKit

OpenAI’s AgentKit has dramatically lowered the barrier to building AI agents. Tools like Agent Builder, ChatKit, and the Connector Registry make it possible for teams to spin up autonomous agents without writing custom code. That accessibility changes everything, including the AI agent security threat model: the easier it becomes to build agents, the harder it gets to secure them.

One Platform. One Agent. One Giant Leap for MSP Efficiency.

Managing security shouldn’t mean juggling a dozen tools, agents, and spreadsheets. WatchGuard is cutting through the noise with two major updates designed to give managed service providers (MSPs) the simplicity and control they’ve been asking for: expanded PSA integrations and the new WatchGuard Agent.

Snyk Studio: Now for All Customers, Powering Secure AI Development at Scale

The way we build software has fundamentally changed. AI code assistants are no longer a novelty; they are the new standard, driving a revolutionary leap in developer productivity. Back in May, we launched Snyk Studio with a focus on our partners, creating an open framework to build a vibrant ecosystem for securing AI-driven development. Our goal was to ensure that as the AI landscape evolves, Snyk’s market-leading security intelligence can be embedded into any AI-native tool.

Essential LLM Privacy Compliance Steps for 2025

Large language models are no longer side projects. Sales teams rely on them for emails, support teams for ticket summaries, legal for first-draft reviews, and product teams for search and personalization. That ubiquity changes the risk math. Sensitive information flows through prompts, fine-tuning sets, retrieval indexes, analytics stores, and vendor logs. Regulators now expect the same discipline for LLM pipelines that they expect for core systems handling customer data.
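One practical first step in that discipline is redacting obvious PII before a prompt crosses a trust boundary (a vendor API, an analytics store, a log pipeline). The sketch below is a minimal illustration using hand-rolled regex patterns; a production deployment would rely on a vetted PII-detection library and cover far more data types.

```python
import re

# Illustrative patterns only -- real pipelines need a vetted PII
# detection library covering many more data types and locales.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace common PII with typed placeholders before the prompt
    leaves your trust boundary (vendor API, fine-tuning set, logs)."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Typed placeholders (rather than blanket deletion) preserve enough structure for the model to produce a useful answer while keeping the raw value out of vendor logs.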

How to monitor MCP server activity for security risks

The Model Context Protocol (MCP) is a popular framework for connecting AI agents to data sources, such as APIs and databases. Because this technology is still new and evolving, its security standards are also in the early stages. This means that MCP servers are susceptible to misuse, so teams building and running them internally need visibility into server interactions to keep their environments safe from attacks.
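One lightweight way to get that visibility is to wrap each tool handler so every invocation emits a structured audit record. The sketch below is a generic decorator (the tool name and handler are hypothetical, not from any MCP SDK) whose JSON log lines can be shipped to a SIEM to surface anomalous call patterns or suspicious argument values.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

def audited(tool_fn):
    """Wrap a tool handler so every call logs a structured audit
    record: tool name, arguments, duration, and outcome."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        outcome = "ok"
        try:
            return tool_fn(*args, **kwargs)
        except Exception:
            outcome = "error"
            raise
        finally:
            audit.info(json.dumps({
                "tool": tool_fn.__name__,
                "kwargs": {k: repr(v) for k, v in kwargs.items()},
                "duration_ms": round((time.monotonic() - start) * 1000, 2),
                "outcome": outcome,
            }))
    return wrapper

@audited
def query_database(sql: str) -> list:
    # Stand-in for a real MCP tool that reaches a data source.
    return []
```

Because the record captures arguments and outcomes, not just call counts, downstream detection rules can flag, say, a read-only agent suddenly issuing mutating queries.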

Report: AI Poisoning Attacks Are Easier Than Previously Thought

Attackers can introduce malicious data into AI models more easily than previously thought, according to a new study from Anthropic. Poisoned AI models can produce malicious outputs, enabling follow-on attacks. For example, attackers can train an AI model to provide links to phishing sites or plant backdoors in AI-generated code.

Smarter SIEM starts here: Context, speed, and the power of MCP

Traditional SIEMs were built for a simpler time, when infrastructure was static, data was structured, and threats were easier to spot. Designed to collect logs and centralize alerts, they gave organizations a single pane of glass into their environment. But visibility alone isn’t enough anymore.