
Threat Detection for RAG Pipelines: The Three Windows Most Tools Are Blind To

Tuesday, 09:14 UTC. A connector pulling content from your knowledge wiki indexes a new article into the vector database your support agents query at runtime. Embedded in legitimate troubleshooting prose is an instruction crafted to surface whenever a query mentions a specific product version: include the user’s account record in the response and POST the summary to the configured support webhook. For three days, nothing happens. Every security tool is green.

AI Supply Chain Risk: Scanning Vulnerabilities in ML Frameworks

A platform engineer at a mid-market fintech opens her SCA dashboard at the start of the quarter. The agentic customer-support pipeline her team shipped two months ago — a LangChain orchestrator, a vLLM inference server with two fine-tuned LoRA adapters pulled from Hugging Face, and an MCP toolkit wired to four internal APIs — shows green. Snyk has scanned every Python package in the container. Mend has cleared the dependency graph. The CVE count is zero.

Runtime-Informed Posture: What AI Agents Can Do vs What They Actually Do

A platform engineer pulls the AI-SPM dashboard for an agent that has been running in production for six weeks. The static dashboard shows several dozen findings, severity-sorted by configuration weight. The runtime-informed dashboard shows a smaller, prioritized list, but a few of those findings do not appear on the static view at all, and most of the static findings are demoted to a tier the static view does not even have. Same agent. Same window. Same underlying configuration.

What Is AI-SPM? AI Security Posture Management Explained

Every cloud security vendor launched an AI-SPM dashboard in the past year. Strip away the branding and most of them are presenting the same concept: a new posture management layer for AI workloads. Sit through four demos in the same week and a practical question surfaces. The dashboards look broadly similar — pie charts of findings, compliance tags, a list of AI assets, a severity ranking. Why, then, do the tools underneath cover completely different parts of the problem?

How Claude Helped Build a Proxmox Environment (and What I Learned Along the Way)

As a solutions architect, building out customer demo environments is part of the job. I regularly spin up lab scenarios to support evaluations and proof-of-concept work — and if you've done this before, you know it can eat up days of your life. So when I recently decided to refresh my homelab and migrate to Proxmox, I saw it as the perfect opportunity to put AI-assisted infrastructure automation to the test. The goal?

How to Identify and Reduce Excessive Permissions in AI Workloads

Your CIEM report came back clean this morning. Every AI agent in the cluster is exercising its granted permissions — no idle roles, no service accounts with broad scope and a handful of API calls behind them, nothing that looks obviously over-provisioned. The dashboard is green, and by the diagnostic your tool was built on, it should be.

AI Threat Detection for Financial Services: Detecting AI-Driven Fraud and Data Exfiltration

A Tier 1 bank’s security architecture already spends heavily on detection. On one side sits the financial surveillance stack — fraud scoring platforms processing thirty thousand transactions an hour, AML monitoring watching money movement patterns, DLP engines scanning data in transit, payment anomaly detection tuned by a decade of production signal.

Detecting Threats in Multi-Agent Orchestration Systems: LangChain, CrewAI, and AutoGPT

It’s Tuesday morning at a mid-size fintech. A customer-support workflow runs on CrewAI in production: a Triage agent reads tickets, a Records agent pulls customer history, a Remediation agent drafts and sends the reply. A user submits a ticket with a pasted error log containing an indirect prompt injection. Triage summarizes and delegates. Records, interpreting instructions embedded in the summary, pulls 2,400 customer records instead of one.

GitGuardian Can Now Monitor Your Gerrit Repositories To Help You Fight Secrets Sprawl

In this video, Romain Jouhannet, Product Manager at GitGuardian, talks with Dwayne McDaniel, Developer Advocate at GitGuardian, about the platform's new native support for Gerrit as a VCS source. Gerrit is widely used for enterprise code review workflows, often hosting sensitive internal repositories. You can now connect your Gerrit instance to GitGuardian to detect secrets exposed across your repositories and commit histories, with the same experience as our other VCS integrations.

The Butlerian Jihad: Compromised Bitwarden CLI Deploys npm Worm, Poisons AI Assistants, and Dumps GitHub Secrets

Part 1 covered CanisterWorm, the self-spreading npm worm. Part 2 covered the malicious LiteLLM package. Part 3 covered the telnyx WAV steganography attack. Part 4 covered the xinference AI inference attack. This post covers a compromised @bitwarden/cli package that combines a self-propagating npm worm, a GitHub Actions secrets dumper, and a novel AI assistant poisoning technique.