Machine Learning

Falcon Fusion SOAR and Machine Learning-based Detections Automate Data Protection Workflows

Time is of the essence when it comes to protecting your data, yet teams often sift through hundreds or thousands of alerts to pinpoint truly malicious user behavior. Manual triage and response consume valuable resources, so machine learning can help busy teams prioritize what to tackle first and determine what warrants further investigation.
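The prioritization idea described above can be sketched in a few lines. This is a minimal illustration, not Falcon Fusion's actual logic: the alert fields, weights, and logistic score below are all hypothetical stand-ins for a model trained on labeled incident outcomes.

```python
import math

# Hypothetical alert features; a real SOAR pipeline would pull these
# from detection telemetry rather than hard-coded dictionaries.
ALERTS = [
    {"id": "a1", "failed_logins": 2,  "off_hours": 0, "data_exfil_mb": 0.0},
    {"id": "a2", "failed_logins": 40, "off_hours": 1, "data_exfil_mb": 512.0},
    {"id": "a3", "failed_logins": 5,  "off_hours": 1, "data_exfil_mb": 1.0},
]

# Stand-in weights; in practice these come from training, not constants.
WEIGHTS = {"failed_logins": 0.05, "off_hours": 1.0, "data_exfil_mb": 0.01}
BIAS = -2.0

def risk_score(alert):
    """Logistic score in [0, 1]; higher means triage sooner."""
    z = BIAS + sum(WEIGHTS[k] * alert[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(alerts):
    """Return alerts sorted most-suspicious first."""
    return sorted(alerts, key=risk_score, reverse=True)

for alert in prioritize(ALERTS):
    print(alert["id"], round(risk_score(alert), 3))
```

The point is the workflow shape: score every alert with a model, sort, and hand analysts the top of the queue instead of the whole haystack.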

Top tips: Watch out for these 4 machine learning risks

Top tips is a weekly column where we highlight what’s trending in the tech world today and list ways to explore these trends. This week, we’re looking at four machine learning-related risks to watch out for. Machine learning (ML) is truly mind-blowing tech. The very fact that we’ve been able to develop AI models that are capable of learning and improving over time is remarkable.

What You Need to Know About Hugging Face

The risk both to and from AI models is a topic so hot it has left the confines of security conferences and now dominates the headlines of major news sites. Indeed, the deluge of frightening hypotheticals can make it feel like we are navigating an entirely new frontier with no compass. To be sure, AI poses unique challenges to security, but remember: both the media and AI companies have a vested interest in upping the fright hype to keep people talking.

Elevating Security Intelligence with Splunk UBA's Machine Learning Models

One of the most challenging aspects of running an effective Security Operations Center (SOC) is accounting for the high volume of notable events that do not actually present a risk to the business. These events often include common occurrences like users forgetting their passwords repeatedly or accessing systems at odd hours for valid reasons. Despite their benign nature, the sheer volume of such events can overwhelm a limited staff.
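Behavioral baselining is the usual answer to that volume problem: learn what is normal for each user, then escalate only deviations. The sketch below is a deliberately simplified stand-in for what a product like Splunk UBA does with far richer models; the event data, threshold, and function names are all hypothetical.

```python
from collections import defaultdict

# Hypothetical event stream: (user, hour_of_day) pairs. A real UBA
# deployment builds baselines from weeks of telemetry, not a toy list.
HISTORY = [("alice", h) for h in (9, 10, 11, 14, 15)] * 4
BASELINE_MIN_OBS = 3  # hours seen fewer times than this count as rare

def build_baseline(history):
    """Count how often each user is active in each hour of the day."""
    counts = defaultdict(lambda: defaultdict(int))
    for user, hour in history:
        counts[user][hour] += 1
    return counts

def is_anomalous(baseline, user, hour):
    """Escalate only if this hour is rare for this specific user."""
    return baseline[user][hour] < BASELINE_MIN_OBS

baseline = build_baseline(HISTORY)
print(is_anomalous(baseline, "alice", 10))  # seen often -> benign
print(is_anomalous(baseline, "alice", 3))   # never seen -> escalate
```

The design choice worth noting: the baseline is per-user, so a 3 a.m. login that is routine for one employee does not generate noise, while the same login from someone who has never worked nights does.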

ALPHV Blackcat, GCP-Native Attacks, Bandook RAT, NoaBot Miner, Ivanti Secure Vulnerabilities, and More: Hacker's Playbook Threat Coverage Round-up: February 2024

In this edition of the Hacker’s Playbook Threat Coverage round-up, we are highlighting attack coverage for newly discovered or analyzed threats, including those based on original research conducted by SafeBreach Labs. SafeBreach customers can select and run these attacks and more from the SafeBreach Hacker’s Playbook™ to ensure coverage against these advanced threats.

Data Scientists Targeted by Malicious Hugging Face ML Models with Silent Backdoor

In the realm of AI collaboration, Hugging Face reigns supreme. But could it be the target of model-based attacks? Recent JFrog findings suggest a concerning possibility, prompting a closer look at the platform’s security and signaling a new era of caution in AI research. The discussion of machine learning (ML) model security is still not widespread enough, and this blog post aims to broaden the conversation around the topic.
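Attacks like the ones JFrog described typically abuse pickle-based model serialization, where deserializing a file can execute attacker-chosen code. The sketch below (my illustration, not code from the research) shows the mechanism in miniature: a harmless string construction stands in for what a real backdoor would do, such as spawning a reverse shell.

```python
import io
import pickle

class EvilModel:
    """Stand-in for a tampered model artifact shared on a model hub."""

    def __reduce__(self):
        # pickle calls __reduce__ to decide how to reconstruct the
        # object, and it will invoke whatever callable is returned
        # here. A real backdoor would call os.system or similar;
        # this example just builds a marker string.
        return (str, ("code executed during unpickling",))

payload = pickle.dumps(EvilModel())

# Simply loading the "model" runs the attacker-chosen call -- no
# method on the object ever needs to be invoked by the victim.
loaded = pickle.load(io.BytesIO(payload))
print(loaded)  # not an EvilModel at all, but the attacker's result
```

This is why loading untrusted pickle files is dangerous by design, and why safer formats such as safetensors, which store only tensors and no executable state, are increasingly recommended for sharing model weights.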