Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Vibe Coding Speeds Up Mobile Apps But Creates New Security Risks

AI-assisted development has crossed a tipping point. Mobile teams are no longer debating whether to use AI to write code. They are deciding how fast they can ship with it. This shift, often called vibe coding, prioritizes intent and speed over manual implementation. Developers describe what they want, and AI fills in the rest. Velocity improves. Releases accelerate. But security assumptions quietly break. For mobile applications, that risk compounds.

Episode 7 - Practical AI for Zeek, MITRE, and Security Docs

In Episode 7 of Corelight DefeNDRs, join me, Richard Bejtlich, as I sit down with Dr. Keith Jones, Corelight's principal security researcher, to discuss the practical applications of AI in enhancing network security. We delve into how large language models (LLMs) can assist in cleaning up documentation and generating Zeek scripts, sharing insights from our extensive experience in incident response and coding. Keith reveals the challenges and successes he has encountered using LLMs to streamline processes, including their role in analyzing MITRE techniques.

Introducing Moltworker: a self-hosted personal AI agent, minus the minis

The Internet woke up this week to a flood of people buying Mac minis to run Moltbot (formerly Clawdbot), an open-source, self-hosted AI agent designed to act as a personal assistant. Moltbot runs in the background on a user's own hardware, has a sizable and growing list of integrations for chat applications, AI models, and other popular tools, and can be controlled remotely. Moltbot can help you manage your finances and social media and organize your day — all through your favorite messaging app.

Measuring Agentic AI Posture: A New Metric for CISOs

In cybersecurity, we live by our metrics. We measure Mean Time to Respond (MTTR), Dwell Time, and Patch Cadence. These numbers indicate to the Board how quickly we respond when issues arise. But in the era of Agentic AI, reaction speed is no longer enough. When an AI Agent or an MCP server is compromised, data exfiltration happens in milliseconds rather than days. If you are waiting for an incident to measure your success, you have already lost.

Beyond Pattern Matching: How AI-Native File Classification Solves Modern DLP Challenges

Legacy DLP operates on a fundamental constraint: it identifies sensitive data by matching patterns. Credit card numbers follow the Luhn algorithm. Social Security numbers conform to a nine-digit format. API keys match specific string patterns. This approach works for structured data, but it fails to address a critical reality: Your most sensitive assets aren't numbers. They're documents.
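The Luhn check mentioned above is the canonical example of this pattern-matching approach. A minimal sketch of how a legacy DLP scanner might validate a candidate credit card number (the function name and length threshold are illustrative, not taken from any particular product):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum.

    This is the kind of pattern-based test legacy DLP runs to decide
    whether a digit sequence in scanned text is a plausible card number.
    """
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:  # too short to be a payment card number
        return False
    checksum = 0
    # Double every second digit from the right; if doubling yields a
    # two-digit number, subtract 9 (equivalent to summing its digits).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # → True (well-known test number)
print(luhn_valid("4111 1111 1111 1112"))  # → False
```

The sketch also makes the article's point concrete: a checksum can flag a card number, but no checksum exists for an unmarked M&A memo or a source file containing trade secrets — documents, not numbers.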

Introducing Forward AI

As enterprises move toward agentic operations, speed without data accuracy becomes a liability. At Forward Networks, we recognized this challenge and set out to deliver a solution: speed backed by mathematical accuracy. In networking, acting on incomplete or approximate data is not an inconvenience, it is a cause of outages, security exposure, and operational risk.

AI is Actively LEAKING Your Data (And You Don't Know It) #apisecurity #airisks #dataprotection #ai

AI agents don't think; they pattern-match. It is critical to understand that generative AI (ChatGPT, Claude, etc.) does NOT reason like humans. That is the API security problem: when you give an AI agent access to an API, it cannot reason about which fields in a response are sensitive. It recreates patterns based on weights, so you need to be very careful: data in, data out. Practical example:

User: "Show me the account balance for user"
AI agent → calls GET /api/account/123
API → returns { balance: 5000, name: "John", SSN: "123-45-6789" }
AI agent → outputs EVERYTHING to the user (including the SSN!)
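The "data in, data out" warning above implies one defense: filter API responses before they ever reach the agent's context. A minimal sketch of an allow-list redaction layer — the field names and response shape mirror the teaser's example and are purely illustrative, not any real agent framework's API:

```python
# Fields the agent is permitted to see; SSN is deliberately excluded.
ALLOWED_FIELDS = {"balance", "name"}

def redact_for_agent(api_response: dict) -> dict:
    """Return a copy of the API response containing only allow-listed
    fields, so sensitive values never enter the agent's prompt."""
    return {k: v for k, v in api_response.items() if k in ALLOWED_FIELDS}

raw = {"balance": 5000, "name": "John", "SSN": "123-45-6789"}
safe = redact_for_agent(raw)
print(safe)  # → {'balance': 5000, 'name': 'John'}
```

An allow-list (name the fields the agent may see) is safer here than a deny-list, because new sensitive fields added to the API later stay hidden by default.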

Introducing Forward AI

The Network is Complex. Operating It Shouldn't Be. Forward AI transforms network operations by reducing manual analysis, expert dependency, and guesswork. By combining conversational interaction with a mathematically accurate digital twin, teams can validate intent, understand actual network behavior, and act with confidence across even the most complex environments.

OpenClaw (Moltbot) Personal Assistant Goes Viral - And So Do Your Secrets

In early 2026, Moltbot, a new AI personal assistant, went viral. GitGuardian detected more than 200 leaked secrets related to it, including secrets from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask "is this safe?".