Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI in Security: Navigating Deep Fakes & Identity Theft

Dive into the future of security with Brivo as we explore the cutting-edge intersection of artificial intelligence (AI) and security threats. In this eye-opening video, Steve Van Till, our expert, unveils the emerging challenges of deep fakes, identity theft, and large-scale cyber crimes that AI brings to the forefront. Discover how Brivo's innovative solutions are paving the way for safer, smarter spaces in the face of these digital dangers. Don't miss out on expert insights and actionable tips to protect your digital identity and assets.

New Charlotte AI Innovations Enable Prompt Collaboration and Demystify Script Analysis

Since CrowdStrike Charlotte AI became generally available, we’ve seen firsthand how genAI can transform security operations, enabling teams to save hours across time-sensitive tasks and accelerate response to match the speed of modern adversaries.

The Double-Edged Sword of Artificial Intelligence (AI) in Cybersecurity

As artificial intelligence (AI) continues to advance, its impact on cybersecurity grows more significant. AI is an incredibly powerful tool in the hands of both cyber attackers and defenders, playing a pivotal role in the evolving landscape of digital threats and security defense mechanisms. In this blog, let’s explore the ways AI is employed by attackers to conduct cyber attacks, and how defenders are using AI to deter and counter threats.

Mitigating Data Poisoning Attacks on Large Language Models

Large language models (LLMs) have experienced a meteoric rise in recent years, revolutionizing natural language processing (NLP) and various applications within artificial intelligence (AI). These models are built on deep learning architectures: generative models such as OpenAI's GPT-4 can produce human-like text with remarkable accuracy and coherence, while encoder models such as Google's BERT excel at understanding and classifying it.

Safeguarding LLMs in Sensitive Domains: Security Challenges and Solutions

Large Language Models (LLMs) have become indispensable tools across various sectors, reshaping how we interact with data and driving innovation in sensitive domains. Their profound impact extends to areas such as healthcare, finance, and legal frameworks, where the handling of sensitive information demands heightened security measures.

AI's Role in Securing AEC Data: Paving the Path Forward

In the oft-obscure world of Architecture, Engineering, and Construction (AEC), the structures we see reaching for the skyline are not just feats of design and engineering but archives of data, each rivet and beam a data point in a colossal network of information. Yet with these digital monoliths comes an invisible vulnerability: data control, a challenge that's upending the AEC industry.

Nightfall's Firewall for AI

From customer service chatbots to enterprise search tools, it's essential to protect your sensitive data while building or using AI. Enter Nightfall's Firewall for AI, which connects seamlessly via APIs and SDKs to detect sensitive data exposure in your AI apps and data pipelines. With Nightfall's Firewall for AI, you can intercept prompts containing sensitive data before they're sent to third-party LLMs or included in your training data.
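The intercept-before-the-LLM-call pattern described above can be sketched in a few lines. This is an illustrative sketch only, not Nightfall's actual API: real products use trained ML detectors behind an API, whereas the regexes and function names below are stand-in assumptions.

```python
import re

# Stand-in detectors; a production firewall would call a detection API
# rather than rely on simple regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types found in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

def firewall(prompt: str, send_to_llm):
    """Block prompts containing sensitive data before the LLM call."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Blocked: prompt contains {', '.join(findings)}")
    return send_to_llm(prompt)
```

The same scan can sit in front of a training-data pipeline, dropping or redacting records instead of raising, so sensitive values never reach a third-party model in either direction.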

Malicious Use of Generative AI Large Language Models Now Comes in Multiple Flavors

Analysis of malicious large language model (LLM) offerings on the dark web uncovers wide variation in service quality, methodology and value, with some being downright scams. Use of this technology has grown to the point that the cybercrime economy has expanded to include GenAI-based services such as FraudGPT and PoisonGPT, with many others joining their ranks.