Can threat actors make ChatGPT malware? #ai #cybersecurity #gpt5
GPT-5 was jailbroken in under 24 hours using simple "storytelling" techniques that bypass safety guardrails.
The key insight from our podcast? Individual AI requests can look legitimate yet become dangerous in combination. A bad actor can request network code in one session, a convincing email in another, and a credential-collection form in a third. Each request seems innocuous on its own, but together they form a complete phishing toolkit.
As Matt explains, "malware is not malicious until it's used maliciously." The threat isn't the AI technology itself, but how attackers segment and combine seemingly innocent requests into harmful tools.
Watch the full podcast to learn how these attack patterns work, why version 1.0 technologies need careful deployment, and what other critical vulnerabilities researchers discovered in satellite systems and hardware security.
Full episode: https://limacharlie.io/podcast
#defenders #ai #cybersecurity #gpt5