Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

NVIDIA Just Made AI Agents Production-Ready #ai #shorts

AI agents just became production-ready overnight. With NVIDIA’s new NeMo Guardrails / NemoClaw-style agent control systems, AI agents can now operate in controlled environments with policies, sandboxing, and guardrails. Sounds safe… but there’s a catch. Agent safety protects what the AI does. But it doesn’t secure what the AI knows. And that’s where the real enterprise risk appears. In this video we break down the difference between securing what an AI agent does and securing what it knows.
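The distinction above can be made concrete with a toy sketch. This is not NVIDIA's API; the names (`ALLOWED_TOOLS`, `guard_tool_call`) are hypothetical, and the point is that an action-level policy constrains what the agent does while saying nothing about what data it may reveal.

```python
# Illustrative action-level guardrail for an AI agent (hypothetical names,
# not any vendor's API). Policies gate the agent's ACTIONS, not its KNOWLEDGE.

ALLOWED_TOOLS = {"search_docs", "summarize"}   # policy: permitted actions
BLOCKED_ARGS = ("DROP TABLE", "rm -rf")        # policy: forbidden inputs

def guard_tool_call(tool: str, arg: str) -> bool:
    """Return True if the call passes the action policy."""
    if tool not in ALLOWED_TOOLS:
        return False
    if any(bad in arg for bad in BLOCKED_ARGS):
        return False
    return True

print(guard_tool_call("search_docs", "quarterly report"))   # True
print(guard_tool_call("run_shell", "rm -rf /"))             # False
```

Note the gap this sketch exposes: a permitted `search_docs` call passes the policy no matter which sensitive document it retrieves, which is exactly the "what the AI knows" risk the video highlights.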

Smishing AI

Cybercriminals are evolving—and so are their tactics. Smishing, or SMS phishing, has become one of the fastest-growing mobile threats. With AI, attackers can now create convincing, personalized messages in seconds—removing language barriers and making scams harder than ever to detect. That’s where Lookout Smishing AI comes in. Our advanced AI-powered detection goes beyond scanning for malicious links. It identifies the intent behind every message—stopping social engineering attacks before they reach you. Whether there’s a URL or not, Lookout keeps your mobile workforce protected.
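To illustrate the difference between link scanning and intent detection, here is a deliberately simple sketch. It is not Lookout's model (a real system would use a trained classifier); the keyword heuristic is only an assumption used to show why a message with no URL can still be flagged.

```python
import re

# Toy contrast between URL-only scanning and intent-based detection.
# The keyword heuristic is illustrative only, not a production technique.

URGENCY = {"urgent", "immediately", "suspended", "verify", "prize"}

def has_url(msg: str) -> bool:
    """URL scanner: flags only messages containing a link."""
    return re.search(r"https?://", msg) is not None

def looks_like_smishing(msg: str) -> bool:
    """Intent heuristic: flags pressure language even with no URL."""
    words = set(re.findall(r"[a-z]+", msg.lower()))
    return len(words & URGENCY) >= 2

msg = "Your account is suspended. Verify your details immediately."
print(has_url(msg))              # False -- a link scanner misses it
print(looks_like_smishing(msg))  # True  -- the intent check catches it
```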

Mobile Threat Report Briefing

In this video, David Richardson, Product CTO at Lookout, provides a strategic overview of the evolving mobile threat landscape based on Q3 2025 global enterprise data. Key insight: mobile phishing and social engineering remain the dominant attack vectors, and attackers are increasingly using AI-powered tools to craft authentic-looking messages and conduct deep research for highly targeted attacks.

Everyone is Deploying AI Agents. Almost Nobody Knows What They're Doing

AI agents are operating inside your enterprise: querying databases, triggering workflows, and taking action through APIs. Yet as adoption accelerates, most organizations cannot see, track, or control what these agents are actually doing. In this session, Roey Eliyahu, Co-Founder and CEO of Salt Security, challenges the industry’s narrow focus on LLM safety and exposes the much larger, invisible attack surface created by agentic systems.
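One common pattern for restoring that visibility is to wrap every tool an agent can invoke so each call is recorded before it runs. The sketch below is a minimal, hypothetical version of that idea (the names `audited` and `AUDIT_LOG` are assumptions, not a product API).

```python
import datetime
import functools

# Minimal audit wrapper: every tool call an agent makes is logged before it
# executes, giving security teams a trail of agent activity. Hypothetical
# sketch; names are illustrative.

AUDIT_LOG = []

def audited(tool_fn):
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        AUDIT_LOG.append({
            "tool": tool_fn.__name__,
            "args": args,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return tool_fn(*args, **kwargs)
    return wrapper

@audited
def query_database(sql: str) -> str:
    return f"rows for: {sql}"   # stand-in for a real database call

query_database("SELECT * FROM customers")
print(AUDIT_LOG[0]["tool"])     # query_database
```

The same wrapper could forward entries to a SIEM instead of an in-memory list; the design point is that logging happens at the tool boundary, independent of the LLM itself.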

Using Agentic AI to Scale Threat Detection in Healthcare

For every human in a healthcare organization, there are 82 machine identities: service accounts, API keys, cloud functions, medical devices [2]. That's the 82:1 ratio, and it means your team is fundamentally outnumbered. The Change Healthcare breach in 2024, which started with one unprotected Citrix credential and disrupted 40% of US claims processing [1], showed exactly what happens when that ratio goes unmanaged. The numbers back this up.

How we built Organizations to help enterprises manage Cloudflare at scale

Cloudflare was designed to be simple to use for even the smallest customers, but it’s also critical that it scales to meet the needs of the largest enterprises. While smaller customers might work solo or in a small team, enterprises often have thousands of users making use of Cloudflare’s developer, security, and networking capabilities. This scale can add complexity, as these users represent multiple teams and job functions.

What Is AI Data Exfiltration and How Do You Stop It?

AI adoption does not happen uniformly across an organization. Some employees have integrated generative AI (genAI) tools into core parts of their workflow. Others have barely opened one. Most are somewhere in between, experimenting on an ad hoc basis, without consistent visibility into what data those tools handle or where it goes. That variance is the problem. Security programs built around either universal AI adoption or zero AI adoption will miss most of the actual risk.
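One concrete control for the visibility gap described above is an outbound check that inspects text before it reaches a genAI tool. The sketch below is a minimal, assumed implementation (the pattern set and `find_sensitive` name are illustrative, not a product API); real DLP engines use far richer detectors.

```python
import re

# Minimal outbound scan of text bound for a genAI tool. Patterns and names
# are illustrative assumptions, not a vendor's detection logic.

PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN shape
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),     # 16-digit card shape
}

def find_sensitive(text: str) -> list:
    """Return the labels of sensitive patterns found in the text."""
    return [label for label, pat in PATTERNS.items() if pat.search(text)]

print(find_sensitive("Summarize: customer SSN 123-45-6789"))  # ['ssn']
print(find_sensitive("Summarize this press release"))         # []
```

A gate like this can run the same way whether an employee uses genAI daily or once a month, which is what makes it robust to the uneven adoption the article describes.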