How Aurora Endpoint Powers Outcome-Driven Security

See how Aurora Endpoint Defense uses predictive AI and behavioral detection to deliver powerful, outcome-driven endpoint protection. This demo highlights key features like alert triage, threat prevention, and automated response—all designed to simplify and strengthen your security posture.

Secure Your App with Mend.io's AI-Native AppSec Platform (featuring ByteGrad)

This video, originally created by Wesley from ByteGrad, walks through how to secure your applications using Mend.io’s AI-Native AppSec Platform, including SAST, SCA, and SBOM scanning. Wesley explores how Mend integrates with GitHub, automates code fixes, and helps developers stay ahead of vulnerabilities. Creator: ByteGrad YouTube Channel.

Adopting cold-war tactics for AI deep fakes?

The AI arms race in deepfake detection has a critical problem: the technology can't keep up. In this episode, Navroop Mitter, CEO of ArmorText, discusses why the industry is shifting away from relying on AI detection alone. A recent study from SKKU in South Korea found that zero out of sixteen top deepfake detection technologies could reliably identify deepfakes in real-world conditions. They worked fine in controlled lab settings, but failed when it mattered most.

How Are Cyber Security Companies Managing AI Attacks?

AI-driven attacks pose real risks to companies because they can scale and automate techniques such as brute-force attacks, smarter malware, deepfakes, and advanced phishing. Attacks that were once slow, manual, and easy to spot are becoming faster, more sophisticated, and harder to detect. UK government research shows that 32% of UK businesses experienced a cyber attack in the last year, and experts warn that AI could drive this number significantly higher.

Cato CTRL Threat Research: Two Vulnerabilities in Anthropic's MCP SDK Enable OAuth Token Theft and Supply Chain Attacks

The SolarWinds supply chain attack in 2020 reminded the world how a single weakness in trusted software can have global consequences. That incident reshaped how organizations view software integrity and the importance of securing every stage of the development pipeline.

5 Critical LLM Privacy Risks Every Organization Should Know

Large language models take in unstructured data. They transform it into context, embeddings, and answers. That journey touches raw files, vector stores, model logs, and third-party services. Traditional privacy programs focus on databases and forms. LLMs push risk to the edges. The riskiest moments are when you ingest messy content, when your system retrieves chunks to support an answer, and when an agent with tool access is tricked into over-sharing.
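One way to reduce risk at the ingestion step described above is to redact sensitive values before content is chunked, embedded, or logged. Here is a minimal sketch in Python; the pattern set, the `redact`/`ingest` helper names, and the fixed 200-character chunking are illustrative assumptions, not any specific product's pipeline (a real deployment would use a dedicated PII-detection library with far broader coverage).

```python
import re

# Hypothetical patterns for two common PII types; real systems need
# a dedicated PII-detection library and much broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ingest(doc: str) -> list[str]:
    """Redact first, then chunk, so raw PII never reaches the
    vector store, model logs, or a third-party embedding service."""
    clean = redact(doc)
    return [clean[i:i + 200] for i in range(0, len(clean), 200)]

chunks = ingest("Contact jane.doe@example.com, SSN 123-45-6789, about the audit.")
print(chunks[0])  # -> "Contact [EMAIL], SSN [SSN], about the audit."
```

The ordering is the point: redaction happens before embedding and retrieval, so a later prompt-injected agent can only over-share placeholders, not the original values.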

Find the Fixer: The AI Agent Bringing Order to Ownership

Assigning remediation tasks across an enterprise organization can feel like navigating a maze of inconsistent tags, overlapping teams, and unclear ownership. It’s one of the most persistent operational challenges in vulnerability and exposure management, and one of the biggest barriers to speed. Each scanner and cloud platform comes with its own tagging logic. One system uses ProductOwner, another productowner. Some tags are outdated, others duplicated, and many have no clear purpose.
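The tag chaos above (`ProductOwner` vs. `productowner`) is usually tackled by canonicalizing keys and values before ownership is assigned. A minimal sketch of that idea follows; the scanner payloads, key mappings, and `normalize` helper are hypothetical examples, not the product's actual agent logic.

```python
# Hypothetical tag payloads from two scanners; keys and values
# are illustrative, not from any specific tool.
scanner_a = {"ProductOwner": "team-payments", "env": "prod"}
scanner_b = {"productowner": "Team-Payments", "Environment": "prod"}

# Map every observed spelling of a key to one canonical name.
CANONICAL_KEYS = {
    "productowner": "product_owner",
    "product_owner": "product_owner",
    "environment": "environment",
    "env": "environment",
}

def normalize(tags: dict[str, str]) -> dict[str, str]:
    """Case-fold keys, map them to canonical names, and lowercase
    values so the same owner compares equal across sources."""
    out = {}
    for key, value in tags.items():
        canon = CANONICAL_KEYS.get(key.strip().lower())
        if canon:  # drop keys with no known canonical form
            out[canon] = value.strip().lower()
    return out

# Both scanners now agree on who owns the finding.
assert normalize(scanner_a) == normalize(scanner_b)
```

Building and maintaining that key map across dozens of scanners and cloud platforms is exactly the tedious matching work an agent can automate.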