Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

New in Breach Risk: Threat Monitoring Powered by an AI Analyst #cybersecurity #tprm #ai #security

Peter, Senior Product Marketing Manager at UpGuard, shares how our new Threat Monitoring feature helps security teams detect and triage real threats across the open, deep, and dark web—faster and with more clarity. Now in early access. Talk to your UpGuard rep to get started.

Welcome to the New Era of AI-Driven Development

Artificial intelligence is no longer a future consideration. It’s here — and it’s changing how software is built. Fast. Enterprise teams are moving beyond AI pilots and proofs of concept. They’re rolling out real-world, high-value use cases and doing it at scale. According to IDC forecasts, AI spend will more than double by 2028. At the center of that surge is AI-assisted software development.

AI Is Reshaping Software. Is Your Security Strategy Keeping Up?

Software development is undergoing its biggest shift since the rise of cloud and DevOps. The difference this time? The shift is being driven by artificial intelligence, and it’s moving fast. AI-powered coding tools have rapidly made their way into developer workflows. Agents and LLMs are helping teams move faster, automate more, and build in entirely new ways. But speed often comes with tradeoffs.

Ensuring ISO/IEC 23894:2023 Compliance for AI Systems with AppTrana WAAP

ISO/IEC 23894:2023 is a relatively new international standard focused on AI risk management. It is designed to help organizations manage risks arising from the development, deployment, and use of Artificial Intelligence (AI) systems. While it’s AI-specific, many of its security-related clauses—especially those concerning web applications, APIs, and external-facing systems—apply broadly to ensure AI systems are secure, trustworthy, and resilient.

Charlotte AI - Agentic Workflows - Impossible Time Travel

Logins from New York and Singapore—two minutes apart? That’s not time travel, that’s trouble. CrowdStrike’s Charlotte AI spots these impossible login anomalies instantly. By correlating RDP activity, calculating travel speeds, and taking risk-based action, Charlotte AI Agentic Workflows deliver real-time response to your SOC. No dashboards. No log diving. Just lightning-fast threat detection and action.
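To make the "impossible travel" idea concrete, here is a minimal sketch of the underlying check: compute the great-circle distance between two login locations, divide by the time between them, and flag the pair if the implied speed is implausible. This is an illustration of the general technique only, not CrowdStrike's implementation; the `Login` class, the 900 km/h threshold, and all names are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Roughly the cruising speed of a commercial flight; anything faster is suspect.
MAX_PLAUSIBLE_KMH = 900

@dataclass
class Login:
    user: str
    lat: float
    lon: float
    time: datetime

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(a: Login, b: Login) -> bool:
    """Flag a login pair whose implied travel speed exceeds the threshold."""
    hours = abs((b.time - a.time).total_seconds()) / 3600
    if hours == 0:
        # Treat a zero time delta conservatively and flag it for review.
        return True
    speed = haversine_km(a.lat, a.lon, b.lat, b.lon) / hours
    return speed > MAX_PLAUSIBLE_KMH

# New York login, then Singapore two minutes later — far above 900 km/h.
ny = Login("alice", 40.71, -74.01, datetime(2024, 5, 1, 9, 0))
sg = Login("alice", 1.35, 103.82, datetime(2024, 5, 1, 9, 2))
print(is_impossible_travel(ny, sg))  # True
```

A real pipeline would layer risk-based actions (step-up auth, session revocation) on top of this signal rather than alerting on the raw speed alone.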

How Do You Safeguard AI When Development Outpaces Security? With Ante Gojsalić - SplxAI

Generative AI is moving faster than our defences — can we catch up? In this episode of Razorwire, host James Rees (aka Jim) speaks with Ante Gojsalić, CTO and co-founder of SplxAI, to dissect the growing risks, complexities, and opportunities in securing AI systems before they outpace our ability to protect them.

Securing the future of AI Agents: Reflections from the Microsoft Build Stage

Standing on stage at Microsoft Build, surrounded by innovators shaping the future in the era of AI Agents, I felt equal parts inspired and responsible. Inspired by the rapid momentum around AI, and responsible for raising a flag about something we don’t talk about enough: how we secure the very systems that are now acting on our behalf. This post isn’t a recap; it’s a continuation, a chance to go deeper into the story I shared (and the one we’re still writing).

Welcome to Snyk Labs: Charting the Course for AI-Native Security

Software development is in the midst of a monumental shift, powered by the rapid advancements in Artificial Intelligence. AI isn't just changing how we build software; it's transforming the very nature of applications themselves. As AI-native applications become more prevalent, we're also seeing new, complex security threats emerge. Traditional security approaches aren’t designed for the dynamic and often unpredictable nature of Large Language Models (LLMs), agents, and other AI-driven systems.