
AI

AI Voice Cloning and Bank Voice Authentication: A Recipe for Disaster?

New advancements in generative AI voice cloning come at a time when banks are looking for additional ways to authenticate their customers – and they’re choosing your voice. Banks adopted the principles of multi-factor authentication years ago. But the continued rise of cyberattack services offering SIM swapping has made it increasingly risky to assume that the credential owner actually possesses the mobile device. So, where do they go next to prove you’re you? Voiceprint.

Navigating AI and Cybersecurity: Insights from the World Economic Forum (WEF)

Cybersecurity has always been a complex field. Its adversarial nature means the margins between failure and success are much finer than in other sectors. As technology evolves, those margins get even finer, with attackers and defenders scrambling to exploit them and gain a competitive edge. This is especially true for AI.

EP 50 - Adversarial AI's Advance

In the 50th episode of the Trust Issues podcast, host David Puner interviews Justin Hutchens, an innovation principal at Trace3 and co-host of the Cyber Cognition podcast (along with CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman Len Noe). They discuss the emergence and potential misuse of generative AI, especially natural language processing, for social engineering and adversarial hacking.

Speed vs Security: Striking the Right Balance in Software Development with AI

Software development teams face a constant dilemma: striking the right balance between speed and security. How is artificial intelligence (AI) impacting this dilemma? With the increasing use of AI in the development process, it's essential to understand the risks involved and how we can maintain a secure environment without compromising on speed. Let’s dive in.

Protecto - AI Regulations and Governance Monthly Update - March 2024

In a landmark development, the U.S. Department of Homeland Security (DHS) has unveiled its pioneering Artificial Intelligence Roadmap, marking a significant stride towards incorporating generative AI models into federal agencies' operations. Under the leadership of Secretary Alejandro N. Mayorkas and Chief Information Officer Eric Hysen, DHS aims to harness AI technologies to bolster national security while safeguarding individual privacy and civil liberties.

Understanding AI Package Hallucination: The latest dependency security threat

In this video, we explore AI package hallucination, a threat that arises when AI code-generation tools hallucinate open-source packages or libraries that don't exist. We look at why this happens and show a demo of ChatGPT suggesting multiple packages that don't exist. We also explain why this is a prominent threat and how malicious hackers could harness this new vulnerability for evil. It is the next evolution of typosquatting.
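To make the risk concrete, here is a minimal, illustrative sketch (not taken from the video) of one possible defense: verifying that an AI-suggested dependency actually exists on PyPI before installing it. The package names are hypothetical examples.

# Illustrative sketch: check whether an AI-suggested package is actually published on PyPI
# before installing it. The suggested names below are hypothetical examples.
import requests

def package_exists_on_pypi(name: str) -> bool:
    # The PyPI JSON API returns HTTP 200 for published packages and 404 otherwise.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

suggested = ["requests", "totally-made-up-ai-package"]  # hypothetical AI suggestions
for name in suggested:
    if package_exists_on_pypi(name):
        print(f"{name}: found on PyPI")
    else:
        print(f"{name}: NOT on PyPI – do not install blindly")

A check like this does not prove a package is safe (an attacker may have already registered the hallucinated name), but it flags suggestions that do not correspond to any real project.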

An investigation into code injection vulnerabilities caused by generative AI

Generative AI is an exciting technology that is now easily available through cloud APIs provided by companies such as Google and OpenAI. While it’s a powerful tool, the use of generative AI within code opens up additional security considerations that developers must take into account to ensure that their applications remain secure. In this article, we look at the potential security implications of large language models (LLMs), a text-producing form of generative AI.
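As a rough illustration of the class of issue the article examines, the sketch below shows why executing LLM output directly is dangerous, assuming a hypothetical llm_generated string standing in for text returned by a cloud LLM API; it is a simplified example, not the article's own code.

# Illustrative sketch of the code injection risk when LLM output is executed directly.
# llm_generated is a hypothetical stand-in for attacker-influenced model output.
import ast

llm_generated = "__import__('os').system('echo pwned')"

# Unsafe: eval() would run arbitrary code embedded in the model's response.
# result = eval(llm_generated)

# Safer: treat the response as data, not code; accept only plain Python literals.
try:
    result = ast.literal_eval(llm_generated)
except (ValueError, SyntaxError):
    result = None  # reject anything that is not a simple literal
print(result)

The safer path rejects the malicious string entirely, which reflects the general principle of treating generated text as untrusted input rather than executable code.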