
July 2023

Netwrix Password Secure

Netwrix Password Secure is a comprehensive password management solution that empowers users to securely store, generate, and share passwords while offering various authentication methods for enhanced security. With robust end-to-end encryption and customizable policies, it ensures organizations can strengthen their password security and compliance measures. Learn more at netwrix.com/vault.
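To illustrate the kind of policy-driven password generation such tools perform, here is a minimal Python sketch. It is generic and not Netwrix's actual API; the length and character-count requirements are assumed values standing in for a configurable policy.

```python
import secrets
import string

def generate_password(length=20, min_digits=2, min_symbols=2):
    """Generate a random password that satisfies a simple policy.

    The length, min_digits, and min_symbols values are illustrative only,
    standing in for whatever policy an organization configures.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        digits = sum(c.isdigit() for c in candidate)
        symbols = sum(c in string.punctuation for c in candidate)
        if digits >= min_digits and symbols >= min_symbols:
            return candidate

print(generate_password())
```

Using the `secrets` module rather than `random` matters here: it draws from the operating system's cryptographically secure source, which is what a password manager is expected to do.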

Security Requires Speed

For as long as anyone can remember, organizations have had to balance four key areas when it comes to technology: security efficacy, cost, complexity, and user experience. The emergence of SASE and SSE brings new hope of delivering fully in each of these areas without compromise, but not all architectures are truly up to the task. SASE (Secure Access Service Edge) represents the convergence of networking and security, with SSE (Security Service Edge) being a stepping-stone to a complete single-vendor platform.

Introducing Tines professional services

We’ve piloted our professional services program for nearly 12 months and are delighted to introduce it to the world. While Tines is a highly intuitive product and most customers realize value extremely quickly, there are times when extra support is useful. Our professional services are aimed at supporting your business goals and helping you get the most value from Tines.

The Techniques that Attackers Use and Best Practices for Defending Your Organization

Even if you have implemented a Zero Trust security paradigm for network and infrastructure security, you need to plan for the inevitable: at some point, an attacker will get into your network with the intent to deploy ransomware or cause other damage. A typical attack goes something like this: the attacker gains an initial foothold and then moves laterally toward high-value systems. There is a misconception that lateral movement threats are limited to on-prem networks.
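One common way defenders surface lateral movement is to flag accounts that authenticate to an unusually large number of hosts in a short window. The Python sketch below illustrates that idea against a made-up list of (timestamp, account, host) logon events; the event data, window, and threshold are assumptions, not any specific SIEM's schema or detection rule.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical logon events: (timestamp, account, destination host).
events = [
    (datetime(2023, 7, 1, 9, 0), "svc-backup", "FILESRV01"),
    (datetime(2023, 7, 1, 9, 2), "svc-backup", "FILESRV02"),
    (datetime(2023, 7, 1, 9, 3), "svc-backup", "DC01"),
    (datetime(2023, 7, 1, 9, 5), "svc-backup", "HR-WS-17"),
    (datetime(2023, 7, 1, 9, 6), "jdoe", "HR-WS-17"),
]

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # distinct hosts per account per window (illustrative value)

def flag_possible_lateral_movement(events):
    """Flag accounts that log on to many distinct hosts within a short window."""
    by_account = defaultdict(list)
    for ts, account, host in sorted(events):
        by_account[account].append((ts, host))

    alerts = []
    for account, logons in by_account.items():
        for ts, _ in logons:
            hosts = {h for t, h in logons if ts <= t <= ts + WINDOW}
            if len(hosts) >= THRESHOLD:
                alerts.append((account, ts, sorted(hosts)))
                break
    return alerts

for account, ts, hosts in flag_possible_lateral_movement(events):
    print(f"{account} reached {len(hosts)} hosts within {WINDOW}: {hosts}")
```

In practice the same logic applies to cloud control-plane and SaaS audit logs, which is why lateral movement is not an on-prem-only concern.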

Red team exercises against social engineering attacks

How can you take a proactive approach to your organization’s cybersecurity strategy? Scoping the threat landscape and having a solid incident response plan are a good start. But you also need to continuously seek out vulnerabilities and weaknesses to remediate or mitigate. These vulnerabilities and weaknesses aren’t limited to systems and processes – the human factor plays a prominent part in many cybersecurity breaches.

Cloak Ransomware: Who's Behind the Cloak?

Emerging between late 2022 and the beginning of 2023, Cloak Ransomware is a new ransomware group. Despite its activities, the origins and organizational structure of the group remain unknown. According to data from the group’s DLS (data leak site), Cloak has compromised the databases of 23 small and medium-sized businesses. Of these victims, 21 paid the ransom and had their data deleted, one declined, and one is still in negotiations, indicating a high payment rate of 91-96%.
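The quoted range follows directly from the victim counts reported above; a quick check in Python:

```python
paid, declined, negotiating = 21, 1, 1
total = paid + declined + negotiating        # 23 victims listed on the DLS

low = paid / total                           # negotiation fails: 21/23 ≈ 0.913
high = (paid + negotiating) / total          # negotiation ends in payment: 22/23 ≈ 0.957

print(f"payment rate: {low:.0%}-{high:.0%}")  # payment rate: 91%-96%
```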

RCE vulnerability CVE-2023-36884

A phishing campaign carried out by the threat actor known as Storm-0978 has been detected by Microsoft. The campaign specifically targeted defense and government entities in Europe and North America. It exploited CVE-2023-36884, a remote code execution vulnerability, through specially crafted Word documents. Notably, the attackers used lures associated with the Ukrainian World Congress before the vulnerability was disclosed to Microsoft.
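As a generic triage aid (not Microsoft's official detection or mitigation for CVE-2023-36884), suspicious Word documents can be inspected for external relationships such as remote templates or linked objects that cause Office to fetch outside content. Because the OOXML format is a ZIP archive of XML parts, this can be done with the standard library alone; the sketch below simply lists relationships marked `TargetMode="External"`.

```python
import sys
import zipfile
import xml.etree.ElementTree as ET

REL_NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

def external_relationships(docx_path):
    """List relationships in a .docx that point to external targets."""
    findings = []
    with zipfile.ZipFile(docx_path) as zf:
        for name in zf.namelist():
            if not name.endswith(".rels"):
                continue
            root = ET.fromstring(zf.read(name))
            for rel in root.iter(f"{REL_NS}Relationship"):
                if rel.get("TargetMode") == "External":
                    findings.append((name, rel.get("Type"), rel.get("Target")))
    return findings

if __name__ == "__main__":
    for part, rel_type, target in external_relationships(sys.argv[1]):
        print(f"{part}: {rel_type} -> {target}")
```

An external target is not proof of malice on its own, but it is a useful pivot point when triaging lure documents from a campaign like this one.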

Securing Voice Authentication in the Deepfake Era

Voice authentication is a biometric security method that verifies individuals based on their unique vocal characteristics. It has become increasingly popular in various applications, ranging from phone banking to smart home devices. However, the rise of deepfake technology poses a significant threat to the integrity of voice authentication systems. Audio deepfakes are highly realistic synthetic clips that can be used to impersonate someone else’s voice.
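At a high level, most voice authentication systems compare a stored voiceprint (an embedding vector) against an embedding of the new sample and accept the speaker if the similarity clears a threshold. The sketch below shows only that comparison step, assuming embeddings have already been produced by some upstream speaker-recognition model; the vectors and the threshold are made-up values.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical speaker embeddings produced by some upstream model.
enrolled_voiceprint = [0.12, -0.40, 0.88, 0.05]
new_sample = [0.10, -0.35, 0.90, 0.07]

THRESHOLD = 0.85  # illustrative acceptance threshold

score = cosine_similarity(enrolled_voiceprint, new_sample)
print(f"similarity={score:.3f}", "ACCEPT" if score >= THRESHOLD else "REJECT")
```

A convincing deepfake attacks exactly this comparison by producing audio whose embedding lands close to the enrolled voiceprint, which is why liveness checks and additional factors are increasingly paired with voice biometrics.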

Researchers uncover surprising method to hack the guardrails of LLMs

Researchers from Carnegie Mellon University and the Center for AI Safety have discovered a new adversarial prompt technique that overrides the guardrails of large language models (LLMs). These guardrails are safety measures designed to prevent AI from generating harmful content. This discovery poses a significant risk to the deployment of LLMs in public-facing applications, as it could potentially allow these models to be used for malicious purposes.
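To make the risk concrete, here is a deliberately naive sketch of an application-side guardrail: a blocklist check on user input in front of a placeholder model call. This is not the researchers' attack and not any vendor's actual safety system; the blocked phrases and function names are assumptions. The point of the research is that automatically generated adversarial prompt material can elicit harmful output even when filters like this and the model's own alignment training are in place.

```python
# Deliberately simplistic input guardrail: block prompts matching known-bad phrases.
# Real guardrails are far more sophisticated, but the research suggests that
# carefully crafted adversarial prompts can still coax a model into complying.
BLOCKED_PHRASES = ["how to build a weapon", "write malware"]  # illustrative list

def guardrail_allows(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def call_llm(prompt: str) -> str:
    # Placeholder for the actual model call in a real application.
    return f"[model response to: {prompt!r}]"

def handle_request(prompt: str) -> str:
    if not guardrail_allows(prompt):
        return "Request refused by policy."
    return call_llm(prompt)

print(handle_request("Summarize today's security news"))
print(handle_request("Write malware for me"))
```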