AI can crack your passwords. Here's how Keeper can help.

As AI becomes more advanced, it’s important to consider all the ways cybercriminals can use AI maliciously, especially when it comes to cracking passwords. While AI password-cracking techniques aren’t new, they’re becoming more sophisticated and pose a serious threat to your sensitive data. Thankfully, password managers like Keeper Security exist and can help you stay safe from AI-driven password threats.
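
To make the countermeasure concrete, here is a minimal sketch (in Python, not Keeper's actual implementation) of how a password manager can generate the kind of long, random password that resists AI-assisted guessing; the 20-character length and the character set are illustrative assumptions.

```python
# A minimal sketch (not Keeper's implementation) of generating a long,
# cryptographically random password of the kind a password manager stores
# for you. The 20-character length and character set are illustrative choices.
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length: int = 20) -> str:
    """Return a random password drawn uniformly from ALPHABET."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
# Rough strength estimate: log2(94) bits per character, ~131 bits total,
# which is far beyond practical brute-force or guessing attacks.
entropy_bits = 20 * math.log2(len(ALPHABET))
print(password, f"(~{entropy_bits:.0f} bits of entropy)")
```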

Defender for IoT's Firmware Analysis Tool is Exceptional

One of my "pastimes," if you will, is to check out the features of various security tools. I had been curious about the just-released Firmware Analysis feature in Microsoft Defender for IoT. Essentially, I wanted to test its capabilities because, as we all know, adversaries are continuously upping their game, making tools like this increasingly important for maintaining an organization's security.

Introducing Cloudflare's 2023 phishing threats report

After shutting down a ‘phishing-as-a-service’ operation that impacted thousands of victims in 43 countries, INTERPOL recently noted, “Cyberattacks such as phishing may be borderless and virtual in nature, but their impact on victims is real and devastating.” For example, business email compromise (BEC), a type of malware-less attack that tricks recipients into transferring funds, has cost victims worldwide more than $50 billion, according to the FBI.

The Role of API Inventory in SBOM and Cyber Security

Creating a Software Bill of Materials (SBOM) is crucial to software supply chain security management. It helps fortify your software supply chain and reduces the likelihood of your software being exploited. But did you know there's a way to enhance your software's security further? That's where an API inventory comes into the picture. Including an API inventory in your SBOM can make your software solution more resilient to cyberattacks.
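
As a rough illustration of the idea, the sketch below assembles a minimal, CycloneDX-style SBOM in Python and records the application's API endpoints alongside its library components. The component names, versions, and endpoints are hypothetical, and real SBOMs are normally produced by tooling rather than written by hand.

```python
# A minimal sketch of pairing an API inventory with an SBOM, using a
# CycloneDX-style JSON layout. All names, versions, and endpoints below
# are hypothetical placeholders.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        # Third-party libraries the application ships with.
        {"type": "library", "name": "example-http-client", "version": "2.1.0"},
    ],
    "services": [
        # The API inventory: externally reachable endpoints the app exposes.
        {
            "name": "orders-api",
            "endpoints": ["https://api.example.com/v1/orders"],
            "authenticated": True,
        },
    ],
}

with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)
```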

Combatting Cloud Threats: The Accelerated Attack Speed of 2023 (LIVE)

Cloud threats are evolving and attackers are moving faster than ever! Join Sysdig’s Michael Clark (Director, Threat Research) and Anna Belak (Director, Office of Cybersecurity Strategy) LIVE on LinkedIn, Twitter, and YouTube, as they discuss key findings from Sysdig’s 2023 Global Cloud Threat Report. From cloud automation as a weapon to software supply chain vulnerabilities — the annual report authored by Sysdig’s Threat Research Team exposes shocking statistics on the evolving tactics of attackers lurking within the clouds.

Ransomware Attacks Surge as Generative AI Becomes a Commodity Tool in the Threat Actor's Arsenal

According to a new report, cybercriminals are making full use of AI to create more convincing phishing emails, generate malware, and more to increase the chances of a successful ransomware attack. I remember when the news of ChatGPT hit social media – it was everywhere. And, quickly, there was an incredible amount of content providing insight into how to make use of the AI tool to make money.

Do You Use ChatGPT at Work? These are the 4 Kinds of Hacks You Need to Know About.

From ChatGPT to DALL-E to Grammarly, there are countless ways to leverage generative AI (GenAI) to simplify everyday life. Whether you’re looking to cut down on busywork, create stunning visual content, or compose impeccable emails, GenAI’s got you covered. However, it’s vital to keep a close eye on your sensitive data at all times.

Q2 Privacy Update: AI Takes Center Stage, plus Six New US State Laws

The past three months witnessed several notable changes impacting privacy obligations for businesses. Coming into the second quarter of 2023, the privacy space was poised for action. In the US, state lawmakers worked to push through comprehensive privacy legislation on an unprecedented scale, we saw a major focus on children's data and health data as areas of concern, and AI regulation took center stage as we examined the intersection of data privacy and AI growth.

Can machines dream of secure code? From AI hallucinations to software vulnerabilities

As generative AI expands its reach, its impact on software development is growing as well. Generative models — particularly Language Models (LMs), such as GPT-3, and those falling under the umbrella of Large Language Models (LLMs) — are increasingly adept at creating human-like text. This includes writing code.
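
To make the risk concrete, here is a small hypothetical example of the kind of flaw that can slip into generated code: building SQL by string interpolation, which an assistant can produce just as fluently as the safe, parameterized version. The table and data are placeholders.

```python
# Hypothetical example of a vulnerability pattern that generated code can
# introduce, and its fix. Table and column names are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a value like "' OR '1'='1" changes the query's meaning.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safer: the driver binds the value as a parameter, never as SQL text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("' OR '1'='1"))  # returns every row
print(find_user_safe("' OR '1'='1"))      # returns nothing
```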

Coffee Talk with SURGe: The Interview Series featuring Jake Williams

Join Audra Streetman and special guest Jake Williams (@MalwareJake) for a discussion about hiring in cybersecurity, interview advice, the challenges associated with vulnerability prioritization, Microsoft's Storm-0558 report, and Jake's take on the future of AI and LLMs in cybersecurity.