Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Top tips to stop hackers from exploiting your office printers

Top tips is a weekly column where we highlight what’s trending in the tech world and list practical ways to explore these trends. This week, we are tackling a lesser-known but growing cybersecurity risk in modern workplaces: printer-based attacks. Let's start with a simple scenario. It's a quiet evening at the office. Most employees have gone home, the lights are dimmed, and the network continues running as usual. In one corner of the floor sits a printer that has been there for years.
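A practical first step against printer-based attacks is simply knowing which devices on your network expose raw printing ports at all. The following is a minimal sketch, not a full audit tool: it probes a hypothetical 192.168.1.0/24 subnet (an assumption; substitute your own range) for the common raw-printing ports, where a proper inventory would use a dedicated scanner such as nmap.

```python
import socket

# Ports commonly left open on office printers:
# 9100 (HP JetDirect raw printing), 515 (LPD), 631 (IPP)
PRINTER_PORTS = [9100, 515, 631]

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(prefix: str = "192.168.1.") -> list[tuple[str, int]]:
    """Scan a /24 subnet for hosts exposing raw printing ports."""
    exposed = []
    for last_octet in range(1, 255):
        host = f"{prefix}{last_octet}"
        for port in PRINTER_PORTS:
            if port_open(host, port, timeout=0.2):
                exposed.append((host, port))
    return exposed
```

Anything this turns up that is reachable from guest Wi-Fi or the open internet deserves immediate attention.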

How Can Network-Based Detection Help Stop Zero-Day Exploits?

Zero-day exploits rarely announce themselves. There is no public advisory yet. No CVE identifier. No detection signature sitting inside a rule library. The vulnerability exists quietly until someone discovers it, and unfortunately, attackers often discover it first. Once that happens, the exploit becomes a test of visibility. Attackers do not usually rush into environments using zero-days. They explore carefully. They check which systems respond. They observe how security tools behave.
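That careful probing is exactly what baseline-driven network detection can surface: a host suddenly talking to a port or peer it has never touched before. Here is a toy sketch of the idea, not any particular product's detector; real systems baseline far richer features (timing, volume, protocol behavior) than this simple per-host port set.

```python
from collections import defaultdict

class ConnectionBaseline:
    """Learn which (source host, destination port) pairs are normal,
    then flag connections that deviate from that baseline."""

    def __init__(self):
        self.seen: dict[str, set[int]] = defaultdict(set)
        self.learning = True

    def freeze(self) -> None:
        """End the learning window; subsequent novelty is anomalous."""
        self.learning = False

    def observe(self, src: str, dst_port: int) -> bool:
        """Record a connection. Returns True if it is anomalous
        (only possible after the learning window is frozen)."""
        if self.learning:
            self.seen[src].add(dst_port)
            return False
        return dst_port not in self.seen[src]
```

Because this approach needs no signature, it can flag a compromised host's reconnaissance even when the exploit itself has no CVE and no rule written for it.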

How to Gain Value from AI in Cybersecurity

The Terminator is often people’s reference point for artificial intelligence (AI), especially when they worry that technology will be the end of civilization. However, on the other end of the AI spectrum is the beloved, marshmallow-like Baymax, the helper robot providing assistance to those in his presence. The reality of AI sits somewhere between these two extremes. For security teams, AI initially seemed like a revolutionary technology that would offer faster detection and automated analysis.

AI Agent Data Leakage: Hidden Risks and How to Prevent Them

Artificial intelligence (AI) has significantly altered how we work. From customer support bots to internal copilots, AI agents help teams move faster and smarter. But there is a growing concern that many companies are still not ready for: data leakage. When an AI agent accidentally or unknowingly shares private information with the wrong person or another system, that is a data leak. When AI systems handle sensitive data, even a small mistake can expose private information.
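One common mitigation is a redaction layer that scrubs obviously sensitive strings before text ever reaches an agent. The sketch below is illustrative only: the patterns are simplistic assumptions (a production DLP layer uses tuned, validated detectors), but it shows the shape of the control.

```python
import re

# Illustrative patterns only; real DLP detectors are far more robust.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive matches with labeled placeholders
    before the text is sent to an AI agent."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running every prompt and tool output through a filter like this reduces the blast radius when an agent does forward text to the wrong place.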

Are attacks on industrial systems increasing? #cybersecurity #podcast #OT

Public awareness of industrial system attacks is finally catching up to what security professionals have known for years. On The Cybersecurity Defenders Podcast, Justin Searle, Director of ICS Security at InGuardians, traces the shift from Conficker in 2008 taking out OT systems on flat networks to Stuxnet in 2010 making the warfare implications clear. Since then, awareness among governments and critical infrastructure operators has grown steadily, and so have the attacks.

See, Govern, and Secure All AI Usage in Your Enterprise

Do you happen to know which AI tools your employees are using right now, or what data they're sending into them? Cato AI Security automatically discovers every AI application in your environment, provides security teams with session-level visibility into how those tools are being used, and enforces data policies in real time, so employees can keep working and sensitive data stays where it belongs.
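To make the discovery idea concrete, here is a toy sketch of how shadow-AI usage can be surfaced from network logs. This is not Cato's implementation; the domain list and the `"client_ip domain"` log format are assumptions made purely for illustration.

```python
# Hypothetical catalog; a real product ships a maintained list
# of thousands of AI-service domains.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def discover_ai_usage(dns_log: list[str]) -> dict[str, set[str]]:
    """Map each AI tool to the internal hosts seen querying it.

    Each log line is assumed to be 'client_ip queried_domain'."""
    usage: dict[str, set[str]] = {}
    for line in dns_log:
        client, domain = line.split()
        tool = AI_DOMAINS.get(domain)
        if tool:
            usage.setdefault(tool, set()).add(client)
    return usage
```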

The 7 Best AI Governance Tools in 2026

AI adoption has accelerated faster than most organizations’ ability to manage it. Security and compliance teams are now responsible for overseeing machine learning models, large language models (LLMs), agentic AI systems, and shadow AI—often with frameworks and processes that weren’t built for any of it. The gap between deploying AI and governing it responsibly is where risk lives. AI governance tools exist to close that gap.

What Happens When Healthcare Systems Go Dark

What happens inside a healthcare system when ransomware takes down Active Directory and authentication fails? In this episode, Josh Howell sits down with Nelson, Executive Healthcare Strategist at CDW, to explore real-world cyber incidents and the architectural shifts required to recover safely. If you enjoyed this episode, be sure to subscribe to our YouTube channel.

Understanding Malicious Packages in Modern Software Supply Chains

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.
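One narrow example of the heuristics such tooling applies is typosquat detection: flagging a package name that is suspiciously close to a popular one. The sketch below is not Mend.io's method, and the popularity list is a stand-in assumption; real scanners compare against full registry data and many other signals (install scripts, maintainer changes, obfuscated code).

```python
from difflib import SequenceMatcher

# Illustrative stand-in; real tools use registry-wide popularity data.
POPULAR = ["requests", "numpy", "pandas", "urllib3", "django"]

def typosquat_suspects(name: str, threshold: float = 0.85) -> list[str]:
    """Return popular packages this name closely resembles (but isn't).

    A non-empty result suggests the name may be a typosquat."""
    return [
        pkg for pkg in POPULAR
        if pkg != name and SequenceMatcher(None, name, pkg).ratio() >= threshold
    ]
```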