Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Preventing Data Poisoning in Training Pipelines Without Killing Innovation

Data poisoning occurs when cybercriminals intentionally compromise the integrity of a dataset used to train machine learning models. By corrupting the training data, attackers can manipulate the model's outputs (for example, forcing incorrect predictions), introduce vulnerabilities, reduce its effectiveness, and fundamentally distort its decision-making.
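To make the mechanism concrete, here is a minimal, hypothetical sketch (not taken from the article) of a targeted poisoning attack: an attacker slips a handful of mislabeled records into the training set so that a simple k-nearest-neighbors model flips its prediction for a chosen input. The data, labels, and classifier here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: class 0 clustered around (0, 0), class 1 around (4, 4).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbors majority vote for a single query point."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

target = np.array([4.0, 4.0])        # clearly a class-1 input
before = knn_predict(X, y, target)   # clean data: classified as class 1

# Poisoning: the attacker injects 3 mislabeled records into the training
# set, placed right at the target but tagged as class 0. The model itself
# is untouched; only its training data is corrupted.
X_poisoned = np.vstack([X, np.tile(target, (3, 1))])
y_poisoned = np.concatenate([y, [0, 0, 0]])
after = knn_predict(X_poisoned, y_poisoned, target)  # now class 0

print(f"prediction before poisoning: {before}")
print(f"prediction after poisoning:  {after}")
```

Because the injected points sit at zero distance from the target, they dominate the k=3 vote and silently flip the prediction — the kind of integrity failure pipeline-level data validation is meant to catch.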

Top 7 Tools to Manage Cybersecurity Risks from AI-Generated Code and Software

Managing vulnerabilities in AI-coded ("vibe-coded") software doesn't require a full rebuild of your security program. By combining runtime visibility with targeted guardrails, teams can close blind spots in days instead of months. Spektion makes that possible as the leading runtime-first solution for securing and managing vulnerabilities in AI-generated code in live apps, delivering live behavioral insight the moment code executes.

Can ChatGPT Help with a Penetration Test? Real-World Hacking Test vs PentestGPT

Can ChatGPT really assist in a penetration test? In this short clip, security expert Brian Johnson puts it to the test against an Active Directory environment… and let’s just say, the results are less than helpful. Find out why tools like PentestGPT are gaining momentum in ethical hacking in this webinar, "Hack the Hackers: Exploring ChatGPT and PentestGPT in Penetration Testing": netwrix.com/go/exploring-chatgpt-and-pentestgpt-yt.

When "Private" Isn't: The Security Risks of GPT Chats Leaking to Search Engines

In late July 2025, users discovered that ChatGPT chats, initially shared via link, were appearing in search engine results on platforms such as Google, Bing, and DuckDuckGo. These shared conversations included personal content relating to mental health, career concerns, legal issues, and more, without any indication of a data breach. Instead, the exposure resulted from a now-removed feature that enabled discoverability via search indexing.

July Release Rollup: Copilot - Improved File Search and Selection, Project Center, and more

We’re excited to share new updates and enhancements for July. For more information on these updates and others, please read the complete list below and follow the links for more detailed articles.

Webinar Replay - Navigating AI Governance In Retail: Lessons from Real-World Scenarios

As AI continues to transform the retail industry in areas such as supply chain management, personalized customer experiences, and data insights, businesses must navigate the complex challenges of data privacy, secure and compliant AI deployment, and ethical use. During this briefing, Kroll experts highlighted the key steps for building a resilient AI governance program, drawing on real-life use cases from the retail industry. These lessons help organizations not only understand, implement, and monitor responsible AI, but also clear the way for innovation that generates a successful return on investment and builds consumer trust.

Why Omdia recommends Extended Access Management to secure agentic AI

Omdia, a global analyst and advisory leader, recently released a report called “How Extended Access Management (XAM) closes the gaps in security.” In it, they describe how existing tools have failed to address the most serious security challenges: application sprawl, device sprawl, and identity sprawl.

The unopinionated AI advantage: Building AI-powered SecOps on your terms

After years of hype, hesitation, and false starts, the age of AI in cybersecurity is finally here, and it is moving fast. Security teams are no longer wondering if they should be using AI, but how to harness it without getting trapped by vendor lock-in or rigid solutions that can't adapt to their needs. In this webinar, LimaCharlie founders Maxime Lamothe-Brassard (CEO) and Christopher Luft (CCO) reveal how we are integrating AI into cybersecurity using the same principles that define everything we build: massively scalable, unopinionated, and flexible by design.