AI-Assisted Social Engineering is a Growing Concern

A survey by the World Economic Forum (WEF) found that 47% of organizations cite the advancement of adversarial capabilities as their top concern surrounding generative AI. These capabilities include phishing, malware development, and deepfakes, all of which are increasingly accessible due to AI tools. Additionally, 42% of organizations experienced a successful social engineering attack last year, and the researchers expect this number to rise as AI-assisted social engineering grows more advanced.

Preventing Data Breaches Before They Happen: Why Outbound Email Security Can't Be Ignored

While organizations invest heavily in stopping threats from entering their networks, a critical vulnerability often goes underprotected: sensitive data leaving the organization through email. Every day, employees send thousands of emails containing confidential information: patient records, financial data, legal documents, and personally identifiable information (PII). And every day, some of those emails go to the wrong recipient.

Anatomy of a Vishing Attack: Technical Indicators IT Managers Need to Track

If your organization hasn’t encountered a vishing attack yet, it’s probably only a matter of time. Vishing, or voice phishing, is a sophisticated type of social engineering that adds a whole new dimension to common scams. Rather than emails or text messages, threat actors employ phone calls or online voice calls to carry out vishing schemes. Particularly savvy attackers can even copy a real person’s voice to deceive, coerce, or manipulate potential victims.

New Databricks and Snowflake apps strengthen cloud data security and data pipeline visibility

If you’re like most companies we work with, you’re awash in opportunities (and a bit overwhelmed with pressure) to adopt AI. Of course, integrating new technologies means more data to manage and systems to monitor.

Understanding the LLM Mobile Landscape in Enterprise Technology

Mobile security has always been complex, but LLM technology has added a whole new dimension to the field. Behind every popular generative AI (genAI) tool is a comprehensive large language model (LLM) that interprets queries in natural language and generates responses. When used responsibly, LLMs can be useful tools for ideation and content generation. In the wrong hands, though, LLMs can help threat actors supercharge their social engineering scams.

Explore the best antivirus protection in 2026

Would you leave your wallet on the street and hope nobody would touch it? No, you would not. Then why risk living in 2026 without reliable, efficient antivirus protection for your computer? Did you know that cyberattacks have quadrupled since COVID-19 hit the world? Ransomware, malware, and phishing attacks are multiplying daily; they have become like a tsunami that destroys everything in its path.

Can Cloud Scanners Detect Insecure IAM Roles and Permissions?

In cloud service providers (CSPs) such as AWS, Azure, and Google Cloud Platform (GCP), Identity and Access Management (IAM) controls who has access to which resources through roles, policies, and permissions. IAM is about who can do what, like letting a developer read from a database but not delete it. Misconfigured IAM, such as roles with unnecessary privileges, is a common cause of unauthorized access, data breaches, and resource abuse.
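To make the misconfiguration concrete, here is a minimal sketch of the kind of check a cloud scanner might run: it parses an AWS-style IAM policy document (JSON) and flags Allow statements that grant wildcard actions or wildcard resources. The `find_overly_permissive` function and the sample policy are illustrative assumptions, not the API of any particular scanner.

```python
def find_overly_permissive(policy: dict) -> list:
    """Flag Allow statements granting wildcard actions or resources,
    a common IAM misconfiguration (excess privilege)."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be unwrapped
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        # "*" allows every action; "service:*" allows every action in a service
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"Statement {i}: wildcard action {actions}")
        if "*" in resources:
            findings.append(f"Statement {i}: wildcard resource")
    return findings

# Hypothetical example: a developer role meant only to read one database,
# but actually granted every DynamoDB action on every resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:*",   # least privilege would be e.g. dynamodb:GetItem
        "Resource": "*",
    }],
}
for finding in find_overly_permissive(policy):
    print(finding)
```

Real scanners apply many more rules (privilege-escalation paths, unused permissions, cross-account trust), but the wildcard check above captures the "who can do what" failure mode the article describes.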

Certificate permissions with CertKit Applications

When you’re managing a handful of certificates, one big list works fine. Add a few dozen more and things get messy. Add multiple teams or projects and you’ve got a problem. Who should have access to the production certificates? What about staging? Does the contractor working on the marketing site really need to see your internal infrastructure? To help you sort this out, CertKit now supports multiple applications, a long-requested item from our roadmap.