Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

KnowBe4

"We are hurtling toward a glitchy, spammy, scammy, AI-powered internet."

This MIT Technology Review headline caught my eye, and I think you understand why. They described a new type of exploit called prompt injection. Melissa Heikkilä wrote: "I just published a story that sets out some of the ways AI language models can be misused. I have some bad news: It’s stupidly easy, it requires no programming skills, and there are no known fixes.

Find Out What Users Think About KnowBe4

TrustRadius collected live user reviews from Black Hat 2022 on their experience with the KnowBe4 security awareness training and simulated phishing platform. In this short video, users talk through how they use KnowBe4, what the best features are, the return on investment they've had and rate how likely they are to recommend KnowBe4. A de minimus incentive was given to thank the reviewer for their time. The incentive was not used to bias or drive a particular response, nor was the incentive contingent on a positive endorsement.

Italy Bans ChatGPT: A Portent of the Future, Balancing the Pros and Cons

In a groundbreaking move, Italy has imposed a ban on the widely popular AI tool ChatGPT. This decision comes in the wake of concerns over possible misinformation, biases and the ethical challenges AI-powered technology presents. The ban has sparked a global conversation, with many speculating whether other countries will follow suit.

Scareware From a Phony Ransomware Group

BleepingComputer reports that a cybercriminal gang is sending phony ransomware threats to prior victims of ransomware attacks. The gang, which calls itself “Midnight,” claims to have stolen hundreds of gigabytes of data and threatens to leak it if the victim doesn’t pay a ransom. Security firm Kroll said the gang’s ransom notes use the names of more prolific ransomware actors.

Social Engineering Attacks Utilizing Generative AI Increase by 135%

New insights from cybersecurity artificial intelligence (AI) company Darktrace shows a 135% increase in novel social engineering attacks from Generative AI. This new form of social engineering that contributed to this increase is much more sophisticated in nature, using linguistics techniques with increased text volume, punctuation, and sentence length to trick their victim. We've recently covered ChatGPT scams and other various AI scams, but this attack proves to be very different.

Fake ChatGPT Scam Turns into a Fraudulent Money-Making Scheme

Using the lure of ChatGPT’s AI as a means to find new ways to make money, scammers trick victims using a phishing-turned-vishing attack that eventually takes victim’s money. It’s probably safe to guess that anyone reading this article has either played with ChatGPT directly or has seen examples of its use on social media. The idea of being able to ask simple questions and get world-class expert answers in just about any area of knowledge is staggering.

Latitude Forced To Stop Adding New Customers in Aftermath of Breach

Looks like Latitude Finance is trying to give consumers more "latitude" in their exposure to cyber risks. The Australian finance company admittedly fell victim to an attack that has exposed customer data and Latitude Financial has been forced to stop adding new customers from clients such as Apple, Harvey Norman and JB Hi-Fi as it tries to contain the damage from criminals, who still appear to be active in its computer systems.

Mid-Sized Businesses Lack the Staffing, Expertise, and Resources to Defend Against Cyberattacks

Mid-sized businesses – those with 250 to 2000 employees – don’t appear to have what they need to fend off attacks in a number of critical ways. Cybersecurity vendor Huntress’ latest report, The State of Cybersecurity for Mid-Sized Businesses in 2023, shows that mid-sized businesses are in a heap of trouble and simply aren’t prepared for an attack: In short, organizations have no internal resources to ensure the organization is improving its state of cybersecurity daily.

The New Face of Fraud: FTC Sheds Light on AI-Enhanced Family Emergency Scams

The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI that imitates the voice of a "family member in distress". They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble — he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him.