The U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC) today jointly released the Guidelines for Secure AI System Development, together with 21 additional international partners.
Researchers at Sophos have found that the criminal market for malicious generative AI tools is still disorganized and contentious. While there are obvious ways to abuse generative AI, such as crafting phishing emails or writing malware, criminal versions of these tools remain unreliable. The researchers found numerous malicious generative AI tools on the market, including WormGPT, FraudGPT, XXXGPT, Evil-GPT, WolfGPT, BlackHatGPT, DarkGPT, HackBot, PentesterGPT, and PrivateGPT.
Snyk takes a comprehensive approach to developer security, securing critical components of the software supply chain, providing application security posture management (ASPM), checking AI-generated code, and more. Recognizing the growing risk of exposed secrets in the cloud, we’ve tapped Nightfall AI to add a critical capability for developer security: advanced secrets scanning.
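Neither company details its detection internals here, but a minimal sketch helps show what a secrets scanner does at its core. The Python below flags likely credentials in source files using regular expressions; the patterns and the config.py target are illustrative assumptions, and production scanners such as Nightfall's rely on machine-learning detectors with far broader coverage and fewer false positives.

```python
import re
from pathlib import Path

# Illustrative patterns only; a real scanner ships hundreds of detectors.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret)\b\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_file(path: Path):
    """Yield (line_number, pattern_name, line) for each suspected secret."""
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                yield lineno, name, line.strip()

if __name__ == "__main__":
    target = Path("config.py")  # hypothetical file to scan
    for lineno, name, text in scan_file(target):
        print(f"{target}:{lineno} -> possible {name}: {text}")
```

Regex matching like this is cheap to run in CI but noisy; the appeal of an ML-based service is catching secrets that do not follow a fixed prefix or format.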
Artificial intelligence (AI) has become ubiquitous across domains, and its influence is particularly pronounced in cybersecurity. As the digital realm expands, so do the threats posed by cybercriminals, making it imperative to employ advanced technologies to safeguard sensitive information.
While the EU AI Act is poised to introduce binding legal requirements, there's another noteworthy player making waves: the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (AI RMF), published in January 2023. Unlike traditional regulation, the framework is voluntary, aiming to shape responsible AI through flexible guidance rather than mandates. Let's delve into the transformative potential of the NIST AI RMF and its global implications.
ChatGPT is proving to be something of a double-edged sword when it comes to cybersecurity. Threat actors employ it to craft realistic phishing emails more quickly, while white hats use large language models (LLMs) like ChatGPT to help gather intelligence, sift through logs, and more. The trouble is that it takes significant know-how for a security team to use ChatGPT effectively, while it takes only a few semi-knowledgeable hackers to craft ever more realistic phishing emails.
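To make the defensive use case concrete, here is a minimal sketch of LLM-assisted log triage using the openai Python package (v1+). The model name, prompt, and sample log lines are assumptions for illustration rather than a prescribed workflow, and real logs should not be sent to a third-party API without policy approval.

```python
# Minimal sketch of LLM-assisted log triage, assuming the `openai` package
# (>=1.0) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_logs(log_lines: list[str]) -> str:
    """Ask the model to flag suspicious entries in a chunk of auth logs."""
    prompt = (
        "You are assisting a security analyst. Review these log lines and "
        "list any that suggest brute-force attempts, privilege escalation, "
        "or other anomalies, with a one-line reason for each:\n\n"
        + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Illustrative, sanitized sample; never paste sensitive logs without approval.
sample = [
    "Jan 12 03:14:07 host sshd[411]: Failed password for root from 203.0.113.9",
    "Jan 12 03:14:09 host sshd[411]: Failed password for root from 203.0.113.9",
    "Jan 12 08:02:51 host sudo: alice : TTY=pts/0 ; COMMAND=/bin/ls",
]
print(triage_logs(sample))
```

The know-how the paragraph mentions lives mostly in the framing: chunking logs to fit context limits, asking specific questions, and verifying the model's flags against the raw data matter far more than which model is called.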