
Agentic AI and Non-Human Identities Demand a Paradigm Shift in Security: Lessons from NHIcon 2026

In the race to innovate, software has repeatedly reinvented how we define identity, trust, and access. In the 1990s, the web made every server a perimeter. In the 2010s, the cloud made every identity a workload. Here in 2026, agentic AI makes every action autonomous.

OpenClaw (Moltbot) Personal Assistant Goes Viral - And So Do Your Secrets

In early 2026, Moltbot, a new AI personal assistant, went viral. GitGuardian detected more than 200 leaked secrets related to it, including some from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask "is this safe?".

Planning Your Workload Identity Roadmap: Standards, Patterns, and the Path Ahead - Webinar

With non-human identities expected to outnumber human identities 100 to 1 in 2025, the way we manage machine credentials is fundamentally broken. 83% of attacks involve compromised secrets, yet many organizations still rely on hardcoded keys, secret sprawl, and scattered vault deployments.

Save Time With GitGuardian's ML-Powered Similar Incident Grouping

GitGuardian is excited to introduce Machine Learning Powered Similar Incident Grouping, which cuts through the noise by identifying incident-specific patterns across your inventory and clustering incidents that belong together, so you can handle repetitive cases efficiently and reduce incident response toil.
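To make the idea of "clustering incidents that belong together" concrete, here is a deliberately simplified sketch. The real feature uses a trained ML model; this stand-in groups incidents only by two assumed features (detector type and repository), which is enough to show how grouping collapses repetitive cases into one work item.

```python
from collections import defaultdict

def group_incidents(incidents):
    """Group incidents sharing a detector type and repository --
    a crude stand-in for the similarity features a model would learn."""
    groups = defaultdict(list)
    for inc in incidents:
        key = (inc["detector"], inc["repository"])
        groups[key].append(inc["id"])
    return dict(groups)

# Illustrative data: two AWS-key incidents in the same repo collapse
# into one group; the Slack token stays on its own.
incidents = [
    {"id": 101, "detector": "aws_key", "repository": "payments-api"},
    {"id": 102, "detector": "aws_key", "repository": "payments-api"},
    {"id": 103, "detector": "slack_token", "repository": "infra"},
]
groups = group_incidents(incidents)
```

An analyst would then resolve each group once instead of triaging each incident separately.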

Meet GitGuardian's Machine Learning-Powered Risk Scoring

The GitGuardian Platform now automatically ranks every secrets incident with a risk score from 0–100, turning alert floods into a prioritized, trustworthy work queue. Scores are computed from incident context (like validity, exposure, where it was found, and exploitability) and build on existing ML capabilities like Secret Enricher and our False-Positive Remover, which cuts false positives by 80%+.
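As a rough mental model of context-based scoring, here is a hedged sketch. The factor names and weights below are illustrative assumptions, not GitGuardian's actual model (which is ML-based); the point is only how per-incident context turns an alert flood into a sorted queue.

```python
def risk_score(incident: dict) -> int:
    """Combine incident-context signals into a 0-100 score.
    Weights are made up for illustration."""
    weights = {
        "valid": 40,           # the credential still works
        "publicly_exposed": 30,  # e.g. found in a public repo
        "high_privilege": 20,    # e.g. admin or write scope
        "in_production": 10,
    }
    raw = sum(w for factor, w in weights.items() if incident.get(factor))
    return min(raw, 100)

# Illustrative incidents, sorted highest-risk first.
incidents = [
    {"id": 1, "valid": True, "publicly_exposed": True, "high_privilege": True},
    {"id": 2, "valid": False, "publicly_exposed": True},
    {"id": 3, "valid": True, "in_production": True},
]
queue = sorted(incidents, key=risk_score, reverse=True)
```

A valid, publicly exposed, high-privilege key lands at the top of the queue; an invalid one sinks to the bottom.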

Jeremy Brown, CTO at GitGuardian, on AI, Non-Human Identities, and the Governance Gap in 2026

AI isn’t creating new security problems; it’s exposing existing ones at scale. GitGuardian saw 24M secrets leaked on public GitHub last year (+25%), and private repos are far more likely to contain secrets because people get careless when they feel safe. AI also enables more non-developers to ship apps without security training and generates oversized PRs that can’t realistically be reviewed, increasing leak risk. Attackers increasingly don’t “hack”; they use leaked credentials to log in and blend in like normal users, making traditional incident response less effective.

Secrets in the Machine: Preventing Sensitive Data Leaks Through LLM APIs

In this webinar, we break down a simple but increasingly common problem: secrets leak wherever text flows, and modern LLM apps and agentic workflows are built to move text fast. We walk through concrete demos showing how API keys and passwords can surface through RAG-based assistants when secrets accidentally live in knowledge bases (tickets, docs, internal wikis). We also show why “just harden the system prompt” isn’t a reliable fix, and how output-only redaction can be bypassed (for example by simple formatting/encoding tricks). Most importantly, we explore real-world agent architectures.
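The output-only redaction bypass mentioned above can be shown in a few lines. This is a generic sketch, not code from the webinar: a naive regex redactor masks an AWS-style key ID in plaintext but lets the same value sail through once it is base64-encoded, which is exactly the kind of formatting/encoding trick an LLM can be asked to perform.

```python
import base64
import re

# Naive output-only redactor: masks AWS-style access key IDs.
REDACT = re.compile(r"AKIA[0-9A-Z]{16}")

def redact(text: str) -> str:
    return REDACT.sub("[REDACTED]", text)

# A fake key with the right shape -- not a real credential.
secret = "AKIA" + "A" * 16

plain = f"the key is {secret}"
encoded = f"the key is {base64.b64encode(secret.encode()).decode()}"

# redact(plain) masks the key; redact(encoded) changes nothing,
# because the base64 form no longer matches the pattern.
```

The lesson is that redaction applied only at the output boundary cannot enumerate every encoding; secrets should be kept out of the knowledge base in the first place.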

Honeytokens with ggshield: plant tripwires that alert on secret use

In this video, we introduce ggshield honeytoken and why it’s one of the most powerful tools in the GitGuardian toolbox. A honeytoken is a decoy secret that alerts you the moment someone tries to use it or validate it. Think of it like a digital tripwire. In GitGuardian, honeytokens can be created through the dashboard or API, and they look like real AWS keys because they are valid credentials. The difference is they grant zero access and are isolated to an AWS account GitGuardian maintains specifically for this purpose.
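The tripwire idea can be illustrated with a toy monitor. This is not how GitGuardian's honeytokens work internally (real ones alert server-side when the decoy AWS credential is actually used); the decoy value below is a made-up placeholder, and the sketch only shows the "any use of this value is an alert" logic.

```python
# Made-up decoy access key ID -- grants nothing, looks real enough.
HONEYTOKEN = "AKIAHONEYTOKENEXAMPLE"

def check_log_line(line: str) -> bool:
    """Raise the alarm if the decoy key appears anywhere in a log line.
    Legitimate traffic never references it, so any hit means an intruder
    found and tried the planted credential."""
    return HONEYTOKEN in line

logs = [
    "GET /status 200",
    f"aws sts get-caller-identity --access-key {HONEYTOKEN}",
]
alerts = [line for line in logs if check_log_line(line)]
```

Because the decoy has zero legitimate users, the false-positive rate of this detection is effectively zero, which is what makes honeytokens such a high-signal alert source.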

Has My Secret Leaked (HMSL) with ggshield: check public GitHub exposure safely

Since 2018, GitGuardian has been scanning for secrets added to GitHub public repositories. When a secret is found, GitGuardian hashes it and stores only a fingerprint of the secret. That fingerprint is what you can search against to verify whether any of your secrets have leaked in public repositories, gists, or issues on GitHub. This service is called Has My Secret Leaked, and in ggshield you’ll see it as the HMSL commands. There’s also a web interface, but in this section we stay in the terminal and use ggshield end to end.
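The fingerprint-matching idea above can be sketched in a few lines. This is a simplified hash-lookup illustration, not the actual HMSL protocol (whose hash construction and query mechanics may differ); it only shows why checking a secret against the service never requires sending the secret itself.

```python
import hashlib

def fingerprint(secret: str) -> str:
    """One-way fingerprint of a secret (plain SHA-256 here, for
    illustration only -- the real service's scheme may differ)."""
    return hashlib.sha256(secret.encode()).hexdigest()

# Simulated server-side index: fingerprints of secrets seen in
# public GitHub repos, gists, and issues.
leaked_index = {fingerprint("hunter2")}

def has_leaked(secret: str) -> bool:
    # Only the fingerprint is compared; the plaintext never leaves
    # your machine in this scheme.
    return fingerprint(secret) in leaked_index
```

A secret whose fingerprint appears in the index has leaked; one that doesn't match comes back clean, and in neither case did the checker learn the plaintext.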