Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Is AI dangerous?

AI is everywhere—writing emails, creating videos, even cloning voices. But it also comes with real risks, including privacy concerns, deepfakes, and smarter online scams. AI learns by spotting patterns in massive amounts of data, and that power can be misused: AI tools may collect personal information, create realistic fake content, or help scammers craft messages that look completely legitimate.

Moltworker (for OpenClaw) & Markdown for Agents: Running AI on Cloudflare

Celso explains how Markdown for Agents was conceived, built, and shipped in just one week, why AI systems prefer markdown over HTML, and how converting a typical blog post from 16,000 HTML tokens to roughly 3,000 markdown tokens can reduce cost, improve speed, and increase accuracy for AI models. We also explore Moltworker, a proof-of-concept showing how a personal AI agent originally designed to run on a Mac Mini can instead run on Cloudflare’s global network using Workers, R2, Browser Rendering, AI Gateway, and Zero Trust.
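The token savings come from stripping markup that models must otherwise pay for: tags, attributes, and boilerplate carry no meaning for the reader but still consume context. As a minimal sketch (not the Markdown for Agents implementation, and using whitespace-separated word count as a crude stand-in for model tokens), Python's standard-library `html.parser` can illustrate how much of an HTML document is markup rather than content:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text content of an HTML document, dropping tags and attributes."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        # Join fragments and normalize whitespace.
        return " ".join(" ".join(self.parts).split())

# Hypothetical snippet standing in for a blog post.
html_doc = (
    '<article class="post"><h1 id="title">Token budgets</h1>'
    '<p>Markup adds <a href="/tags">tags</a> and attributes '
    'that models must pay for.</p></article>'
)

parser = TextExtractor()
parser.feed(html_doc)
plain = parser.text()

# Crude proxy for tokens: whitespace-separated chunks. Real BPE tokenizers
# split tags and attributes into many more tokens, so the gap on a full
# page is far larger than this proxy suggests.
html_tokens = len(html_doc.split())
plain_tokens = len(plain.split())
print(html_tokens, plain_tokens)
```

On a full page with navigation, scripts, and styling, the markup share dominates, which is where a 16,000-token HTML page can shrink to roughly 3,000 markdown tokens while keeping the same readable content.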

OpenClaw Security Checklist for CISOs: Securing the New Agent Attack Surface

OpenClaw exposes a fundamental misalignment between how traditional enterprise security is designed and how AI agents actually operate. As an AI agent assistant, OpenClaw operates with human permissions, executes actions autonomously, and processes untrusted content as input, all while sitting outside the visibility of conventional security tools.

What Are OpenClaw and Agentic AI? The Security Issues You Need to Be Aware of Now

Over the past several weeks, OpenClaw and Moltbook have exploded across the headlines. Outlets have published stories about AI agents organizing themselves or even acting independently on Moltbook. SecurityScorecard's Jeremy Turner, VP of Threat Intelligence & Research, and Anne Griffin, Head of AI Product Strategy, discuss what OpenClaw is, how agentic AI works, and where the real security issues are, based on new research from SecurityScorecard's STRIKE Threat Intelligence team.

From Shadow APIs to Shadow AI: How the API Threat Model Is Expanding Faster Than Most Defenses

The shadow technology problem is getting worse. Over the past few years, organizations have scaled microservices, cloud-native apps, and partner integrations faster than corporate governance models could keep up, resulting in undocumented or shadow APIs. We’re now seeing this pattern all over again with AI systems. And, even worse, AI introduces non-deterministic behavior, autonomous actions, and machine-to-machine decision-making. Put simply, shadow AI is much, much riskier than shadow APIs.

AI Attacks, CaaS & the New Reality of Banking Security

This week, in the Guardians of the Enterprise episode, Ashish Tandon, Founder & CEO of Indusface, speaks with Madhur Joshi, CISO at HDB Financial Services (part of the HDFC Group), about how large financial institutions are navigating a rapidly evolving cyber threat landscape. The conversation covers the rise of AI-driven attacks, Cybercrime-as-a-Service (CaaS), and the growing complexity that comes with expanding digital footprints across cloud, applications, and APIs.