Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Introduction to Linea AI by Cyberhaven

Resolve incidents 5x faster, detect 40% more critical incidents, and reduce future incidents by 90% with Linea AI by Cyberhaven. Linea AI thinks like the smartest security analyst, precisely spotting insider risks across billions of workflows and every piece of data. It understands how people work the way a human would, but it never loses focus and can apply human-like insight at an incredible scale.

AI as a Power Tool: How Windsurf and Devin Are Changing Secure Coding

We brought together Ian Moritz, Deployed Engineer at Cognition, and Mackenzie Jackson from Aikido Security for a live masterclass on AI-assisted coding. The goal wasn’t to hype new tools. It was to talk about how developers can stay in control while AI starts writing, testing, and securing code beside them.

From Detection to Protection: A Look at End-to-End AppSec Solutions

Modern application development moves at an incredible pace, but this speed often creates a gap between innovation and security. Effective AppSec Solutions close this gap by shifting security from a reactive bottleneck to a proactive, integrated part of the entire software development lifecycle (SDLC). This end-to-end approach doesn’t just detect flaws; it provides a unified framework to manage and reduce risk from the first line of code to the final cloud deployment.

How CIOs and CISOs are unlocking AI's full value: 5 real-world takeaways

Recent research from Forrester Consulting commissioned by Tines, Unlocking AI’s full value: How IT orchestrates secure, scalable innovation, underscores the essential role IT leaders must play in AI orchestration, as well as the challenges that stall adoption – and the opportunities that await those who overcome them. But how do these findings translate to real life, and what are leaders and practitioners doing to navigate this landscape?

Mastering LLM Privacy Audits: A Step-by-Step Framework

Language models now touch contracts, tickets, CRM notes, recordings, and code. That means personal data, trade secrets, and regulated content move through prompts, embeddings, caches, and third-party endpoints. If your audit still reads like a generic security review, you will miss the places where leaks actually happen. A modern LLM Privacy Audit Framework starts where the risk starts.

It's time to rethink shadow AI.

It's time to rethink shadow AI. We've been told it's a fringe activity. A risk from rogue employees. Our new research proves that wrong. This is, ironically, no longer a "shadow" problem. It's a universal workflow hiding in plain sight. The question is no longer "how do we stop it?" It's "how do we manage it?" Our new report lands next week with the date you need to start answering that important question.

Seemplicity's AI Agents: Clarity

Meet Clarity, the first of Seemplicity’s four new AI Agents transforming how security teams understand and act on vulnerabilities. Instead of cryptic scanner outputs and confusing CVE text, Clarity turns dense technical data into clear, actionable narratives — explaining what happened, why it matters, and how to fix it. With Clarity, you can: Translate vulnerability data into plain language Improve collaboration between security, IT, and engineering Accelerate remediation and reduce exposure fatigue.

What is KeeperAI?

KeeperAITM is an agentic, AI-powered engine embedded within KeeperPAM that delivers real-time threat detection and response, as well as privileged session analysis. Built for Privileged Access Management (PAM), KeeperAI monitors user activity, providing behavioral insights and automated incident response in both live SSH sessions and post-session playback.

Experience Over Hype: How Reach Built AI for Real-World Security

Innovation comes from experience — and from taking a pragmatic, problem-driven approach. As Garrett Hamilton told Ed Amoroso, Reach’s foundation is built on the work of co-founder Colt Blackmore — whose experience building machine-learning models at Cylance and Proofpoint now drives how we apply AI to exposure management today. That experience shapes how Reach approaches AI: practical, proven, and focused on results — not trends.

The New Attack Surface: How to Break (and Defend) Large Language Models

Large Language Models now automate customer support, write code, classify emails, generate content, and - disturbingly - execute tasks through plugins and agents. Once an AI can act on your behalf, it becomes part of your operational infrastructure, not a toy. OWASP’s Top-10 for LLM Applications formalized the threat landscape, and quietly confirmed what security researchers have been yelling for two years.