Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

More Regulatory Scrutiny Means IRR Must Be a Priority | SEC, NIS2, and CIRCIA Compliance Insights

As global cybersecurity regulations tighten, security leaders are under increasing pressure to demonstrate strong Incident Readiness and Response (IRR). New requirements like the SEC cybersecurity disclosure rules, the EU’s NIS 2 Directive, and the forthcoming CIRCIA mandate faster reporting, stronger governance, and greater accountability. In this session, LevelBlue experts share insights from a survey of 500 security leaders on how organizations are adapting their IRR strategies for today’s regulatory climate.

"Trust in AI Starts with Transparency | Sebastian Goodwin (Autodesk) x Reach Security"

Trust in AI starts with transparency. In our recent conversation, “No Time to Drift,” Sebastian Goodwin, Chief Trust Officer at Autodesk, shares how his team is putting that principle into practice — by creating AI Transparency Cards. Think of them like nutrition labels for AI: clear, consistent, and designed to help customers understand what’s inside. Each one outlines what the model does, how it’s trained, safeguards in place, and more.

The Secret Backdoor in Your Firewall... How Attackers Get In WITHOUT Hacking!#cybersecurity#InfoSec

Your WAF is Providing a False Sense of Security Improper network configuration can completely nullify the effectiveness of your Web Application Firewall. If attackers can discover your origin server's direct IP address: They can bypass your expensive security controls entirely. Your "internal" services become externally exposed. You have a massive, unknown gap in your defenses. This animation is a clear example of why security doesn't end with buying a tool. Proper integration and a zero-trust mindset are non-negotiable.

Language Switching Attacks: The New Threat Vector in LLM Security

Language Switching Attacks: The New Threat Vector in LLM Security In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar discusses the growing trend of language-switching attacks. These techniques exploit the ongoing development and training gaps in Large Language Models (LLMs). Diptanshu explains how attackers can evade an LLM's built-in filters and guardrails by rapidly shifting between different languages, particularly less common ones, to find weaknesses where the model's safety data is sparse.

Off the Blocks | Ep. 3: What Does the Future of Onchain Finance Mean to You?

New question. Real answers. No fluff. Welcome back to Off the Blocks — Fireblocks’ rapid-fire video series, shot live at TOKEN2049 Singapore. In Episode 3, we asked industry leaders just one thing: In one sentence, what does the future of onchain finance mean to you? From programmable liquidity to permissioned DeFi, their responses are bold, honest, and sharply focused on what comes next. This is where ideas become infrastructure, and where vision meets execution.