Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

From Shadow APIs to Shadow AI: How the API Threat Model Is Expanding Faster Than Most Defenses

The shadow technology problem is getting worse. Over the past few years, organizations have scaled microservices, cloud-native apps, and partner integrations faster than corporate governance models could keep up, leaving behind undocumented "shadow" APIs. We're now seeing this pattern all over again with AI systems. Worse, AI introduces non-deterministic behavior, autonomous actions, and machine-to-machine decision-making. Put simply, shadow AI is far riskier than shadow APIs.

What Are OpenClaw and Agentic AI? The Security Issues You Need to Be Aware of Now

Over the past several weeks, OpenClaw and Moltbook have exploded across the headlines. Outlets have published stories about AI agents organizing themselves, or even acting independently, on Moltbook. SecurityScorecard's Jeremy Turner, VP of Threat Intelligence & Research, and Anne Griffin, Head of AI Product Strategy, discuss what OpenClaw is, how agentic AI works, and where the real security issues lie, based on new research from SecurityScorecard's STRIKE Threat Intelligence team.

OpenClaw Security Checklist for CISOs: Securing the New Agent Attack Surface

OpenClaw exposes a fundamental misalignment between how traditional enterprise security is designed and how AI agents actually operate. As an AI agent assistant, OpenClaw operates with human permissions, executes actions autonomously, and processes untrusted content as input, all while sitting outside the visibility of conventional security tools.

Moltworker (for OpenClaw) & Markdown for Agents: Running AI on Cloudflare

Celso explains how Markdown for Agents was conceived, built, and shipped in just one week, why AI systems prefer markdown over HTML, and how converting a typical blog post from 16,000 HTML tokens to roughly 3,000 markdown tokens can reduce cost, improve speed, and increase accuracy for AI models. We also explore Moltworker, a proof-of-concept showing how a personal AI agent originally designed to run on a Mac Mini can instead run on Cloudflare’s global network using Workers, R2, Browser Rendering, AI Gateway, and Zero Trust.
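The token savings described above can be sketched with Python's standard library alone. This is a rough illustration under stated assumptions, not Cloudflare's actual converter: it strips markup with `html.parser` and approximates token counts with the common ~4-characters-per-token heuristic (real tokenizers differ).

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

html_doc = ("<div class='post'><script>track()</script>"
            "<h1>Title</h1><p>Hello <b>world</b>!</p></div>")
extractor = TextExtractor()
extractor.feed(html_doc)
text_only = "\n".join(extractor.parts)

print(rough_tokens(html_doc), rough_tokens(text_only))  # prints: 22 4
```

Even on this toy snippet the tag soup costs several times more tokens than the recoverable text, which is the same effect, at scale, behind the 16,000-to-3,000 figure for a full blog post.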

Mobile App Release Readiness Checklist

Every mobile team has shipped an app that technically worked, and still caused problems. Sometimes it's a last-minute App Store rejection. Sometimes it's a privacy disclosure mismatch. Sometimes it's a vulnerability discovered days after release, when rollback is no longer clean. The pattern is consistent: the root cause isn't a lack of tooling but a lack of clarity about release readiness. Release readiness isn't about perfection. It's about answering one question with confidence.

Why Every Website Needs a Reliable URL Checker

Links are the connective tissue of the web. They guide users to content, help search engines understand structure, and distribute authority across pages. When links fail, everything from user trust to search visibility can suffer. This is where a URL checker becomes essential. A URL checker is more than a quick "does this page load?" tool. At its most basic level, it confirms whether a URL resolves successfully. At a deeper level, it reveals status codes, redirect chains, DNS issues, and server errors that aren't obvious from simply clicking a link.
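As a sketch of what that "deeper level" can look like, here is a minimal checker built on Python's stdlib `urllib` (the names `RedirectRecorder` and `check_url` are illustrative; production tools add TLS validation, HEAD probing, retries, and more). It records each redirect hop, surfaces HTTP status codes, and distinguishes DNS failures from other connection errors.

```python
import socket
import urllib.error
import urllib.request

class RedirectRecorder(urllib.request.HTTPRedirectHandler):
    """Records every redirect hop so the full chain is visible."""
    def __init__(self):
        super().__init__()
        self.chain = []

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        self.chain.append((code, newurl))
        return super().redirect_request(req, fp, code, msg, headers, newurl)

def check_url(url, timeout=10):
    """Return (final_status, redirect_chain, error) for a URL."""
    recorder = RedirectRecorder()
    opener = urllib.request.build_opener(recorder)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status, recorder.chain, None
    except urllib.error.HTTPError as e:
        # Server answered, but with an error status (404, 500, ...).
        return e.code, recorder.chain, f"HTTP error: {e.code}"
    except urllib.error.URLError as e:
        if isinstance(e.reason, socket.gaierror):
            # Hostname did not resolve: a DNS problem, not a server one.
            return None, recorder.chain, f"DNS failure: {e.reason}"
        return None, recorder.chain, f"Connection error: {e.reason}"
```

A healthy page would come back as something like `(200, [], None)`, while a moved page would show its hops in the chain, e.g. `(200, [(301, 'https://example.com/new')], None)`.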

The AI SOC Org Chart for 2026 and Beyond

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster. John White is the Field CISO for EMEA at Torq. A respected security executive with more than 20 years of leadership experience, John previously served as CISO at Virgin Atlantic, where he led a multi-year transformation deploying the Torq AI SOC Platform to modernize cyber operations.

1Password's new benchmark teaches AI agents how not to get scammed

As we embed AI agents into our lives and workflows, we’re learning the (sometimes surprising) ways in which they outperform human beings, and other ways in which they fall short. And occasionally, we find an example where agents, paradoxically, are both better and worse than their human users.