Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Episode 29: When AI becomes a security problem ft. Tamaghna Basu

AI has quietly moved from experiments to real-world systems that now write, decide, and reason alongside us. But as these systems scale, so do the risks, from hallucinations and data leakage to prompt injection and model abuse. In this episode of Server Room, we sit down with Tamaghna Basu, Founder of DeTaSECURE, to explore what it really takes to build and secure AI systems in production, and why the future of AI will depend not just on intelligence, but on trust.

The 2026 AI SOC Leadership Report: What 450 Security Leaders Told Us

When we started building Torq four years ago, we had a thesis: the SOC was broken, and automation — real automation, not another tool bolted onto the stack — was the way to fix it. AI has since changed the game entirely. But has it streamlined the SOC, or introduced new complexity? We wanted to find out.

Is Your Patch Management Strategy Ready for AI-Powered Attacks? | Nishith Datta | Titan

In this episode of Guardians of the Enterprise, Ashish Tandon, Founder & CEO of Indusface, and Nishith Datta, Head of Cybersecurity at Titan, discuss one of the most pressing challenges in modern security: vulnerability patching in the age of AI. As AI accelerates both the scale and sophistication of attacks, traditional patching cycles are no longer enough. Nishith shares his frontline perspective on how enterprises serving omnichannel consumers must rethink their approach to exposure management.

Android Component Security: Common Misconfigurations That Expose Mobile Apps

When teams think about Android app security, the focus is usually on code-level defenses such as encryption, obfuscation, or binary protection. But in practice, many of the most critical Android app vulnerabilities don’t originate in code at all. They come from misconfigurations. Issues in the AndroidManifest, insecure component exposure, and unsafe inter-app communication often create direct entry points for attackers. These are not edge cases. They are common, repeatable, and frequently exploited.
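As a minimal sketch of the manifest-level exposure described above, a single attribute can turn an internal screen into a public entry point. The component name here is hypothetical; the attribute semantics are standard Android:

```xml
<!-- Hypothetical AndroidManifest.xml fragment.
     android:exported="true" plus an intent filter makes this activity
     launchable by any other app on the device, not just this one. -->
<activity
    android:name=".internal.AdminPanelActivity"
    android:exported="true">  <!-- should be "false" for internal-only screens -->
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```

Since Android 12, any component with an intent filter must declare `android:exported` explicitly, which makes accidental exposure easier to spot in review, but older apps and copy-pasted manifest entries remain a common source of exactly this class of issue.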

Gemini XSS Vulnerability: When AI Executes Malicious Code

Artificial intelligence is no longer just generating text. It generates and executes code in real time. With tools like Google Gemini, features such as code canvases and live previews are turning AI systems into interactive execution environments. This shift introduces a new and rapidly growing category of risk: AI security vulnerabilities tied to real-time code execution.

4 Phases, 357 Crashes, 2 Bugs: What an AFL++ Campaign Actually Looks Like

357 crash files. 2 real bug sites. That’s the outcome of this AFL++ campaign after roughly 8.5 billion executions across multiple harnesses, binaries, and phases. At first glance, everything looked like success. Crashes were increasing steadily. New inputs were being generated every few seconds. Coverage appeared to improve over time. From a surface-level perspective, the campaign looked productive. Then triage began.

How to Improve Work-From-Home Productivity with a Minimal Tech Setup? Know Here!

Working from the comfort of your home sounds like a beautiful dream. There's no commute, it allows for flexible working hours, and, if you are an introvert, you can get through the day without intrusive small talk. But somewhere along the way, the beautiful dream got cluttered with endless tabs, constant notification buzzes, and an overload of productivity apps trying to keep you efficient.

Understanding AI Compliance When Choosing AI-Enabled Solutions

2001: A Space Odyssey introduced the world to HAL 9000, the fictional artificial intelligence (AI). HAL’s capabilities include everything from facial recognition to natural language processing and automated reasoning. As HAL malfunctions over time, the computer becomes violent to prevent the humans from disconnecting it. The story serves as a morality tale suggesting that without human oversight, AI is dangerous.

Stop Fearing AI - Learn To Use It #shorts #ai

Many people are afraid of Artificial Intelligence. The truth is simple: AI is not going anywhere. Instead of fearing AI, the smarter approach is learning how to use AI tools responsibly in your daily work and career. Just like the internet and smartphones changed industries, AI is the next big technological shift. Start small, learn AI tools, and adapt to the future. Watch The Full Podcast: Link Below.