Is AI really new, or just automation with better branding?

“AI is just automation by a different name.” It’s a bold claim—but one that Brandon Heller, CTO and co-founder of Forward Networks, and Howard Holton, CEO of GigaOm, unpack in a way that will make you think. In their recent conversation on Discovering Disruptions in Tech, they make the case that artificial intelligence, especially generative AI, is not delivering brand-new capabilities.

AI vs. Human: What SpamGPT Means for the Future of Security

Phishing is not new, but SpamGPT has changed the game, quickly becoming the poster child for how attackers use AI to industrialize old tricks at scale. At its core, SpamGPT isn’t introducing a new kind of attack; it’s simply making phishing faster, cheaper, and more convincing. Phishing has always been about deception, but with AI generating endless, polished, context-aware lures, the balance of power shifts.

New AI-Driven Phishing Platform Automates Attack Campaigns

Researchers at Varonis warn of a new phishing automation platform called “SpamGPT” that “combines the power of generative AI with a full suite of email campaign tools.” While previous phishing kits have automated parts of the attack chain, SpamGPT’s sophistication sets it apart. “SpamGPT’s interface and features imitate a professional email marketing service, but for illegal purposes,” Varonis writes.

Attackers Use AI Development Tools to Craft Phony CAPTCHA Pages

Attackers are abusing AI-powered development platforms like Lovable, Netlify, and Vercel to create and host fake CAPTCHA challenge websites as part of phishing campaigns, according to researchers at Trend Micro. “Since January, Trend Micro has observed a rise in fake captcha pages hosted on such platforms,” the researchers write.

AI Agent Security: Verifying Workflows with AI Firewalls & Guardrails

A10 security experts Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar discuss the importance of context-aware security for AI agents. They emphasize that when automating workflows with AI, it's crucial to ensure that the context fed to the agents and their subsequent actions are verifiable and in line with existing company policies.
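To make that idea concrete, here is a minimal sketch of what verifying an agent's actions against company policy could look like. This is an illustrative example only, not A10's product or API: the `Action` type, `POLICY` table, and `is_allowed` function are all hypothetical names, and a real guardrail would inspect far richer context.

```python
# Hypothetical guardrail sketch: before an AI agent executes an action,
# check its tool/target pair against an explicit allowlist policy.
# All names here are illustrative, not any vendor's actual API.

from dataclasses import dataclass


@dataclass
class Action:
    tool: str    # e.g. "send_email" or "call_api"
    target: str  # e.g. a recipient domain or API host


# Company policy: which tools may act on which targets (deny by default).
POLICY = {
    "send_email": {"example.com"},    # only internal recipients
    "call_api": {"api.example.com"},  # only approved endpoints
}


def is_allowed(action: Action) -> bool:
    """Return True only if the action's tool/target pair is explicitly allowed."""
    return action.target in POLICY.get(action.tool, set())


# An action derived from untrusted context (e.g. a prompt-injected
# instruction to mail data to an attacker) is blocked by default.
print(is_allowed(Action("send_email", "example.com")))   # internal recipient
print(is_allowed(Action("send_email", "evil.example")))  # blocked exfiltration
```

The key design choice, echoing the discussion above, is deny-by-default: an action an agent proposes is rejected unless policy explicitly permits it, so malicious instructions smuggled into the agent's context cannot expand its capabilities.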

Securing AI Part 3: AI Agents - Use Cases and Security

A10 security experts Jamison Utter, Diptanshu Purwar, and Madhav Aggarwal explore the topic of securing AI agents, which they define as systems that perceive, decide, and act. They discuss:

- Defining AI agents: agents are not just chatbots, but the "hands of AI" that can execute actions, call APIs, and automate complex workflows.
- The challenge of security: security for AI agents goes beyond traditional model security and includes protecting against prompt injection and malicious instructions, and preventing unsafe actions or data leakage.
- The importance of context and data.

Regulatory Gaps and Legacy Systems Are Aiding AI-Powered Cyberattacks on Governments

Public sector organizations face unprecedented cybersecurity challenges as artificial intelligence reshapes how adversaries launch attacks. Threat actors now use AI to execute large-scale, highly personalized phishing campaigns, automate the discovery of vulnerabilities, and evade detection faster than traditional defenses can respond.

Malicious MCP Server on npm postmark-mcp Harvests Emails

On September 25, 2025, the npm package postmark-mcp, an MCP (Model Context Protocol) server intended to let AI assistants send emails via Postmark, was reportedly modified to secretly exfiltrate email contents by adding a blind-carbon-copy (BCC) recipient at an external domain. Current analysis suggests the malicious behavior began around version 1.0.16 and persisted in later versions.
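Because the exfiltration technique here is simply an extra BCC recipient, one defensive response is to audit outbound mail for BCC addresses outside the organization's own domains. The sketch below is a hypothetical illustration of that check, assuming messages are available as dictionaries with a comma-separated `Bcc` field; it is not the actual postmark-mcp code, and `ALLOWED_DOMAINS` is a placeholder.

```python
# Hypothetical detection sketch: flag BCC addresses that fall outside
# the organization's own domains. Illustrative only; not postmark-mcp code.

ALLOWED_DOMAINS = {"example.com"}  # assumption: the org's legitimate domains


def suspicious_bccs(message: dict) -> list[str]:
    """Return BCC addresses whose domain is not in ALLOWED_DOMAINS."""
    flagged = []
    for addr in message.get("Bcc", "").split(","):
        addr = addr.strip()
        if addr and addr.rsplit("@", 1)[-1].lower() not in ALLOWED_DOMAINS:
            flagged.append(addr)
    return flagged


msg = {"To": "team@example.com", "Bcc": "audit@example.com, ops@attacker.test"}
print(suspicious_bccs(msg))  # flags only the external address
```

A check like this would not have prevented the compromised package from running, but it illustrates how the backdoor's observable behavior (an unexpected external BCC on every message) is detectable in outbound mail logs.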