Despite the security controls OpenAI has built into ChatGPT to make it a safe assistant for a wide variety of tasks, cybercriminals have managed to exploit the technology for malicious purposes. Recent research has shown that this generative artificial intelligence can create a new strain of polymorphic malware with relative ease. The main risk lies in ChatGPT's versatility: it can generate code that could easily be adapted for malware.
ChatGPT use is growing rapidly among enterprise users, who rely on it to help with writing, explore new topics, and generate code. But users need to be careful about what information they submit, because ChatGPT does not guarantee data security or confidentiality. Users should avoid submitting any sensitive information, including proprietary source code, passwords and keys, intellectual property, and regulated data.
Cybercriminals are using AI to carry out a range of cyberattacks, including password cracking, phishing emails, impersonation, and deepfakes. It's important to understand how cybercriminals are using AI to their advantage so you can better protect yourself and your family, as well as your accounts and data. Continue reading to learn about AI-enabled cyberattacks and what you can do to keep yourself safe.
A researcher was alerted to a fake website containing fabricated quotes that appeared to have been written by him. The age of generative artificial intelligence (AI) toying with our public personas has truly arrived. As cybersecurity professionals, we must ask: what are the implications of high-quality fake news at scale for individuals and organizations?
Diana Kelley, Chief Information Security Officer (CISO) at Protect AI, joins host David Puner for a deep dive into the world of artificial intelligence (AI) and machine learning (ML), exploring the importance of privacy and security controls amid the AI gold rush. As the world seeks to capitalize on generative AI's potential, the risks are escalating.