New AI Bot FraudGPT Hits the Dark Web to Aid Advanced Cybercriminals

By assisting with the creation of spear-phishing emails, cracking tools, and stolen credit card verification, FraudGPT will only accelerate the frequency and efficiency of attacks. When ChatGPT became available to the public, I warned about its misuse by cybercriminals. Because of the "ethical guardrails" built into tools like ChatGPT, there's only so far a cybercriminal can push the platform.

Fireblocks' MPC-CMP Code Is Open Source

In the pursuit of advancing security and transparency in the digital asset industry, Fireblocks has published its MPC-CMP code as open source under a limited license, along with the rest of its MPC library. As demand for digital asset custody, tokenization, and Web3 among retail and financial institutions continues to rise, Fireblocks MPC-CMP has proven to be the most secure and reliable key management protocol.
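To make the core idea behind MPC-based key management concrete, here is a toy sketch of additive secret sharing, the basic building block such protocols are built on. This is not Fireblocks' MPC-CMP protocol (which involves threshold ECDSA signing and much more); it only illustrates the "no single party ever holds the full key" property. The field prime and function names are assumptions for illustration.

```python
import secrets

# Assumption for illustration: a large prime field to share the key over.
P = 2**255 - 19

def split_key(secret_key: int, n_parties: int) -> list[int]:
    """Split a key into n additive shares; all n are required to reconstruct.

    Each share on its own is a uniformly random field element and reveals
    nothing about the key.
    """
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    # The final share is chosen so that all shares sum to the key mod P.
    shares.append((secret_key - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares into the original key."""
    return sum(shares) % P

key = secrets.randbelow(P)
shares = split_key(key, 3)
assert reconstruct(shares) == key
```

Real MPC key management goes further: the shares are generated jointly so the full key never exists anywhere, and signatures are computed collaboratively without ever recombining it.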

Unmasking the top exploited vulnerabilities of 2022

The Cybersecurity and Infrastructure Security Agency (CISA) just released a report highlighting the most commonly exploited vulnerabilities of 2022. With our role as a reverse proxy to a large portion of the Internet, Cloudflare is in a unique position to observe how the Common Vulnerabilities and Exposures (CVEs) mentioned by CISA are being exploited on the Internet. We wanted to share a bit of what we’ve learned.

Cloud Threats Memo: Russian State-sponsored Threat Actors Increasingly Exploiting Legitimate Cloud Services

State-sponsored threat actors continue to exploit legitimate cloud services, and one group in particular, Russia's APT29 (also known as Cozy Bear, Cloaked Ursa, BlueBravo, Midnight Blizzard, and formerly Nobelium), appears to be especially active. Between March and May 2023, security researchers at Recorded Future's Insikt Group unearthed a cyber espionage campaign by the same threat actor, allegedly targeting government-sector entities in Europe with an interest in Ukraine.

Software Security 2.0 - Securing AI-Generated Code

The integration of machine learning into software development is revolutionizing the field, automating tasks and generating complex code snippets at an unprecedented scale. However, this powerful paradigm shift also presents significant challenges, including the risk of introducing security flaws into the codebase. This issue is explored in depth in the paper "Do Users Write More Insecure Code with AI Assistants?" by Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh.
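As an illustration of the class of flaw the paper warns about, here is a hypothetical example (not taken from the paper) contrasting string-built SQL, a pattern an assistant may happily generate, with the parameterized query that closes the injection hole:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: attacker-controlled input is interpolated directly into
    # the SQL text, so the input can rewrite the query itself.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: the driver binds the value as data; it is never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic injection payload matches every row in the unsafe version
# but, correctly, nothing in the safe one.
payload = "' OR '1'='1"
assert find_user_unsafe(conn, payload) == [(1,)]
assert find_user_safe(conn, payload) == []
```

The two functions are nearly identical at a glance, which is exactly why reviewing AI-generated code for such flaws is hard and why the paper's question matters.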

GenAI is Everywhere. Now is the Time to Build a Strong Culture of Security.

Since Nightfall’s inception in 2018, we’ve made it our mission to equip companies with the tools that they need to encourage safe employee innovation. Today, we’re happy to announce that we’ve expanded Nightfall’s capabilities to protect sensitive data across generative AI (GenAI) tools and the cloud. Our latest product suite, Nightfall for GenAI, consists of three products: Nightfall for ChatGPT, Nightfall for SaaS, and Nightfall for LLMs.

Worried About Leaking Data to LLMs? Here's How Nightfall Can Help.

Since the widespread launch of GPT-3.5 in November of last year, we've seen a meteoric rise in generative AI (GenAI) tools, along with an onslaught of security concerns from both countries and companies around the globe. Tech leaders like Apple have warned employees against using ChatGPT and GitHub Copilot, while other major players like Samsung have even gone so far as to completely ban GenAI tools. Why are companies taking such drastic measures to prevent data leaks to LLMs, you may ask?