
CultureAI

Separating Hype from Reality in HRM

Human risk management (HRM) has become an established category in recent years. This development signals a crucial shift towards enabling security teams to accurately quantify and manage workplace risks. With the rise of HRM, a variety of new technologies have also emerged on the market. But with so many buzzwords and shiny promises, how do you pick the solution that's right for you?

A trainer's take: "Training alone won't change behaviours"

I've spent over 35 years as a trainer in various capacities, so it might surprise you to hear me say that training alone isn't enough to change behaviours—particularly when it comes to security. This isn't just my opinion; it's a conclusion from our State of Human Risk Management in 2024 Report. To understand why training isn't the full solution, we need to delve into the field of human error. Mistakes—errors caused by wrongly applied knowledge—can often be corrected with training. Slips and lapses, where we know the right thing to do but fail to do it, are another matter.

Security Awareness Isn't Enough - It's Time to Adapt

October 1st marks the start of Security Awareness Month, a global campaign launched two decades ago to improve cyber security awareness and equip people with the knowledge and resources they need to stay secure online. But what impact has this campaign truly had in the workplace? Yes, it spotlights the issue and boosts high-level awareness of threats like phishing.

More than a security alert: A guide to nudges

As American poet Nikki Giovanni wisely observed, "Mistakes are a fact of life. It is the response to error that counts." This rings particularly true in the world of cyber security. Even the most vigilant individuals can make mistakes—after all, we’re only human. What truly matters is how we respond. What if a platform could automatically detect risky security behaviours, alerting employees and nudging them to fix their mistakes before they escalate?
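The idea of an automated nudge can be illustrated with a toy sketch. Everything below—the event types, the messages, and the `nudge_for` function—is hypothetical and assumed for illustration; it is not CultureAI's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityEvent:
    user: str   # who triggered the behaviour
    kind: str   # e.g. "phishing_click", "password_reuse" (assumed labels)

# Hypothetical mapping from a risky behaviour to a corrective nudge message.
NUDGES = {
    "phishing_click": "You clicked a suspicious link. Please reset your password now.",
    "password_reuse": "That password is reused on another service. Choose a unique one.",
}

def nudge_for(event: SecurityEvent) -> Optional[str]:
    """Return a nudge message if the event matches a known risky behaviour."""
    template = NUDGES.get(event.kind)
    return f"Hi {event.user}: {template}" if template else None

print(nudge_for(SecurityEvent("alex", "phishing_click")))
```

A real platform would watch events across integrated applications in real time; the point here is only that a nudge pairs detection with an immediate, specific corrective prompt rather than a generic training reminder.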

Generative AI: Workplace Innovation or Security Nightmare

The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fuelled by increased computational power and data availability, this AI boom brings with it both opportunities and challenges. AI tools fuel innovation and growth by enabling businesses to analyse data, improve customer experiences, automate processes, and innovate products – at speed. However, as AI becomes increasingly commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up.

About CultureAI | Human Risk Management Platform

Monitor, reduce, and fix your human cyber risks. The CultureAI Human Risk Management Platform enables security teams to proactively monitor human risk across multiple applications, providing immediate visibility into the riskiest employee behaviours and security vulnerabilities within an organisation.

CultureAI raises $10 million in Series A funding to evolve the way organisations manage human risk

CultureAI has raised $10 million in Series A funding, in a round led by Mercia Ventures and Smedvig Ventures. The funding will power CultureAI's product development and market expansion plans.

Stop your employees from sharing credentials

Need help with a task while you’re out of the office? Sharing your login details with a colleague can seem harmless. However, this seemingly innocent act can lead to unintended consequences, especially if you’re using the same credentials across multiple platforms. Imagine the implications if those shared credentials grant access to your company's network. That's why it's crucial to prioritise security over convenience and prevent password sharing.
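As a rough illustration of how shared credentials might be spotted, the heuristic below flags an account that authenticates from two different IP addresses within a short window. This is a minimal sketch under assumed data shapes and thresholds, not a description of how any particular product works:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # assumed threshold for "near-simultaneous" use

def flag_possible_sharing(logins):
    """logins: iterable of (username, ip, timestamp) tuples.
    Returns the set of usernames seen on two different IPs within WINDOW."""
    by_user = defaultdict(list)
    for user, ip, ts in logins:
        by_user[user].append((ts, ip))
    flagged = set()
    for user, events in by_user.items():
        events.sort()  # order each user's logins by time
        for (t1, ip1), (t2, ip2) in zip(events, events[1:]):
            if ip1 != ip2 and t2 - t1 <= WINDOW:
                flagged.add(user)
    return flagged

t0 = datetime(2024, 1, 1, 9, 0)
logins = [
    ("bob", "203.0.113.5", t0),
    ("bob", "198.51.100.7", t0 + timedelta(minutes=4)),  # second IP, 4 min later
    ("ann", "192.0.2.10", t0),
]
print(flag_possible_sharing(logins))  # bob is flagged; ann is not
```

In practice this naive rule would misfire on VPNs, mobile roaming, and NAT; real detection combines many richer signals, but the sketch shows why simultaneous use from different locations is a telltale sign of a shared password.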
Deepfakes: The Next Frontier in Digital Deception?

Machine learning (ML) and AI tools raise concerns over mis- and disinformation. These technologies can ‘hallucinate’ or create text and images that seem convincing but may be completely detached from reality. This may cause people to unknowingly share misinformation about events that never occurred, fundamentally altering the landscape of online trust. Worse – these systems can be weaponised by cyber criminals and other bad actors to share disinformation, using deepfakes to deceive.