Complete Guide for SaaS PMs to Develop AI Features Without Leaking Customer PII

Enterprises are making bold, strategic changes to their tech stacks by incorporating AI. With AI showing positive results, investment is flowing in rapidly – but not without consequences. Privacy has become a key concern around safe AI use, especially in the absence of strong guardrails. Balancing innovation against compliance risk becomes a challenge for SaaS product managers unless they know how to manage both.

Adapting to the Changing AI Threat Landscape

In this video, A10 Networks' security leaders, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, discuss the evolving security risks associated with AI and Large Language Model (LLM) adoption, as well as what organizations must do to stay protected. Diptanshu Purwar outlines key shifts in the approach to AI security.

Can AI hackers access the smart devices in my home? #ai #cybersecurity

New research shows how attackers could hijack Google's Gemini AI through a simple calendar invite to control smart lights, shutters, and even boilers. The key insight from our latest podcast is that the issue isn't only the AI vulnerability; it's the lack of network segmentation. The real lesson? Don't give AI systems direct access to your physical devices. The simple fix is to segment your IoT devices onto separate networks.

When AI Agents Go Awry

When your AI agents go awry, rewind those changes easily with Agent Rewind from Rubrik. As AI agents gain autonomy and optimize for outcomes, unintended errors can lead to business downtime. Agent Rewind will enable organizations to undo mistakes made by agentic AI, providing visibility into agents' actions and letting enterprises rewind those changes to applications and data. We’ve integrated Predibase's advanced AI infrastructure with Rubrik's recovery capabilities to enable enterprises to embrace agentic AI confidently.

Why Authorization Is Still the Weakest Link in API Security #apisecurity #authorization #zerotrust

Even as authentication improves, broken authorization remains one of the most exploited vulnerabilities in APIs. In this clip, Wallarm and Oracle experts discuss real-world authorization flaws—including how missing or weak access checks can let attackers access sensitive data and functions. Learn why robust, field-level authorization is essential to protecting your APIs.
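The object- and field-level checks described above can be sketched in a few lines. This is a minimal illustration, not Wallarm's or Oracle's implementation; the names (`User`, `Record`, `ALLOWED_FIELDS`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str  # "member" or "admin" (illustrative roles)

@dataclass
class Record:
    owner_id: int
    data: dict

# Field-level allowlist: which fields each role may read.
# Anything not listed is stripped from the response.
ALLOWED_FIELDS = {
    "member": {"name", "status"},
    "admin": {"name", "status", "email", "internal_notes"},
}

def get_record(user: User, record: Record) -> dict:
    # Object-level check: members may only read records they own.
    if user.role != "admin" and record.owner_id != user.id:
        raise PermissionError("not authorized for this record")
    # Field-level check: filter the payload to the role's allowlist.
    allowed = ALLOWED_FIELDS[user.role]
    return {k: v for k, v in record.data.items() if k in allowed}
```

The point of the second check is exactly the flaw the clip describes: authenticating the caller and even verifying record ownership still leaks sensitive fields unless the response itself is filtered per role.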

The Unified IT Imperative: Simplifying Complexity and Future-Proofing Your Organization

In this episode of the Make Work Happen podcast, we explore the strategic imperative of unified IT and how it helps leaders shape the future of their organizations. We draw on key findings from JumpCloud’s latest IT trends report to understand why IT fragmentation is a critical challenge for leaders worldwide. Joining us is JumpCloud customer Ricky Jordan, who provides a real-world case study on how a unified platform can simplify complex IT environments, address security risks, and drive strategic conversations.

MadeYouReset: An HTTP/2 vulnerability thwarted by Rapid Reset mitigations

On August 13, security researchers at Tel Aviv University disclosed a new HTTP/2 denial-of-service (DoS) vulnerability that they are calling MadeYouReset (CVE-2025-8671). This vulnerability exists in a limited number of unpatched HTTP/2 server implementations that do not sufficiently enforce restrictions on the number of times a client may send malformed frames. If you’re using Cloudflare for HTTP DDoS mitigation, you’re already protected from MadeYouReset.
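The mitigation class the post alludes to is straightforward: track how many malformed (reset-triggering) frames each client connection has sent, and tear down the connection once a threshold is crossed rather than resetting streams indefinitely. A minimal sketch, with a hypothetical `ConnectionGuard` and an arbitrary threshold:

```python
class ConnectionGuard:
    """Per-connection counter for malformed HTTP/2 frames (illustrative sketch)."""

    def __init__(self, max_malformed: int = 100):  # threshold is an assumption
        self.max_malformed = max_malformed
        self.malformed_count = 0
        self.closed = False

    def on_malformed_frame(self) -> None:
        # Each malformed frame costs the server a stream reset; once the
        # budget is exhausted, close the whole connection instead.
        self.malformed_count += 1
        if self.malformed_count > self.max_malformed:
            self.closed = True
```

Real servers enforce this inside the HTTP/2 framing layer; the sketch only shows the accounting that distinguishes patched implementations from the vulnerable ones.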

IBM 2025 Cost of a Data Breach Report: Lessons for API and AI Security

IBM’s 2025 Cost of a Data Breach Report offers one of the clearest and most comprehensive views yet of how AI adoption is shaping the security landscape. While breach numbers are relatively low – only 13% of organizations reported breaches involving AI models or applications – the report reveals a troubling pattern: APIs and integrations are often the real entry point, and they’re frequently under-secured. At Wallarm, we’ve been banging this drum for a while.

Beyond the Prompt: Securing the "Brain" of Your AI Agents

Imagine an autonomous AI agent tasked with a simple job: generating a weekly sales report. It does this reliably every Monday. But one week, it doesn't just create the report. It also queries the customer database, exports every single record, and sends the file to an unknown external server. Your firewalls saw nothing wrong. Your API gateway logged a series of seemingly valid calls. So, what happened? The agent wasn't hacked. Its mind was changed.
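One defensive layer against the exfiltration scenario above is an egress allowlist enforced outside the agent's "mind": no outbound transfer proceeds unless the destination host is known-good. A minimal sketch, with an illustrative hostname; this is one guardrail pattern, not a complete agent-security design.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of destinations the agent may send data to.
ALLOWED_HOSTS = {"reports.internal.example.com"}

def egress_permitted(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```

Because the check runs on the tool-call boundary rather than in the prompt, a manipulated agent can change its plan but not its reachable destinations.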