Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Ep 35: RSAC FOMO? Dojo AI Demo

As we gear up for RSA Conference, we give viewers a sneak peek at Sumo Logic's SOC analyst agent, which turns a 45-minute analyst investigation into a five-minute AI-powered sprint. We walk through live demos showing how the agent automatically generates queries, maps threats to MITRE ATT&CK, and hands you recommended remediation actions all without making you switch tabs or tools. We also show off MCP integration that lets teams collaborate on active investigations right from Slack, because no one should be chained to their war room when there's dinner to be had.

WebPromptTrap - New Indirect Prompt Injection Vulnerability in BrowserOS

Cato researchers have discovered a new indirect prompt injection exploit pattern workflow in BrowserOS (an open-source agentic AI browser). We named it “WebPromptTrap” because the prompt originates from untrusted web content and it traps users into approving an authorization step through a trusted-looking AI summary.

Spring 2026 GenAI Code Security Update: Despite Claims, AI Models Are Still Failing Security

The last six months have been nothing short of revolutionary for AI-powered coding. OpenAI‘s “Code Red” release brought us GPT-5.1 and 5.2. Google unveiled Gemini 3 with its touted “unprecedented reasoning capabilities.” Anthropic rolled out Claude 4.5 and 4.6, powering the increasingly ubiquitous Claude Code features. Enterprise adoption of tools like OpenClaw has exploded, with developers praising unprecedented productivity gains.

The AI Control Gap: Why Partners Are Now on the Front Line

For channel partners, AI has quickly moved from a future conversation to a current customer problem. Clients are already using AI across their organisations, often faster than governance can keep up. What’s emerging is not just another technology trend, but a new class of risk that customers cannot fully see or control. Our latest research, based on insights from senior security leaders in highly regulated industries, highlights the scale of the issue.

The Library That Holds All Your AI Keys Was Just Backdoored: The LiteLLM Supply Chain Compromise

We just published a deep breakdown of the Trivy supply chain attacks yesterday. Twenty-four hours later, we’re writing about the next one. Same threat actor. Different target. Worse implications. This time it’s LiteLLM, the Python library that acts as a universal API gateway for over 100 LLM providers. If you’re building anything with AI agents, MCP servers, or LLM orchestration, there’s a good chance LiteLLM is somewhere in your dependency tree.

How Connected Vehicles and AI Are Redefining Insurance and Digital Security Risks

The way we drive is changing. Cars are no longer just machines that take us from one place to another. They are now connected systems that collect data, communicate with networks, and use artificial intelligence to improve safety and performance. These connected vehicles are transforming industries like insurance and cybersecurity in ways we are only beginning to understand.

How to Manage Identity Sprawl in the Age of AI Agents and NHIs

Non-human identities (NHIs) and AI Agents including service accounts, CI/CD credentials and cloud workload identities, now eclipse human identities in enterprise identity systems by 50:1 to 100:1. Modern identity security platforms must assign identities to these assets and furthermore, apply roles, access control policies, visibility and governance in order to secure the modern enterprise.

How to Manage Unauthorized AI Tool Usage in Your Business

In only a few years, artificial intelligence (AI) has changed almost every aspect of life, and especially so in business. Today, employees are using generative AI tools to draft emails, code software, and analyze data at lightning speed. However, there is a hidden side to this productivity boost: unauthorized AI use. Many employees are bypassing official IT channels and using shadow AI applications to get their work done.

New CrowdStrike Innovations Secure AI Agents and Govern Shadow AI Across Endpoints, SaaS, and Cloud

As organizations race to adopt new AI tools, deploy AI agents, and build AI-powered software, they create new attack surfaces that traditional security controls were never designed to protect. A key example is the prompt and agentic interaction layer, which faces novel threats like indirect prompt injection and agentic tool chain attacks.

AI vs AI: Securing the Expanding Cyber Attack Surface | Mr. Anirban Mukherji at ET Studios

In this exclusive interview byte at ET Studios, Our Founder & CEO Mr. Anirban Mukherji discusses how increasing enterprise connectivity through cloud applications, third-party integrations, and remote work is exploding the enterprise cyber attack surface making identity security and access control more critical than ever. He dives into key threats like traditional ransomware, zero-day supply chain attacks, hyper-personalized AI phishing, and systemic incidents.