Heart of the Matter: How LLMs Can Show Political Bias in Their Outputs

Wired just published an interesting story about the political bias that can show up in LLMs as a result of their training. It is becoming clear that training an LLM to exhibit a particular bias is relatively easy. That is cause for concern, because such models can “reinforce entire ideologies, worldviews, truths and untruths,” a risk OpenAI itself has warned about.

Does ChatGPT Have Cybersecurity Tells?

Poker players and other human lie detectors look for “tells,” that is, signs by which someone might unwittingly or involuntarily reveal what they know or what they intend to do. A cardplayer yawns when he’s about to bluff, for example, or someone’s pupils dilate when they’ve successfully drawn to an inside straight.

What Is DLP and How Does It Work?

Data loss prevention, or DLP for short, is a technology that helps companies protect their data from unauthorized access or theft. It does this by scanning incoming and outgoing data for sensitive information, such as credit card or Social Security numbers, and then blocking that data from leaving the company's network. In this blog post, we will discuss what DLP is and how it works!
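
The scan-and-block loop the post describes can be sketched in a few lines. The following is a minimal illustration, assuming a simple regex-based policy; the pattern set and the scan_outbound helper are hypothetical stand-ins for a real DLP policy engine, not any particular product's API.

```python
import re

# Illustrative detectors only; real DLP engines combine regexes with
# keyword dictionaries, document fingerprinting, and learned classifiers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outbound(text: str) -> list[tuple[str, str]]:
    """Return (label, match) pairs for each sensitive-looking token found."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        hits.extend((label, match) for match in pattern.findall(text))
    return hits

message = "Refund for SSN 123-45-6789, card 4111 1111 1111 1111"
findings = scan_outbound(message)
if findings:
    # At this point a real DLP product would block the transfer,
    # quarantine the message, or alert the security team.
    print("Blocked outbound message:", findings)
```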

ScorecardX Integrates with OpenAI's GPT-4

As part of SecurityScorecard’s commitment to making the world a safer place, we are now the first and only security ratings platform to integrate with OpenAI’s GPT-4 system. With this natural language processing capability, cybersecurity leaders can find immediate answers and suggested mitigations for high-priority cyber risks.

Another Perspective on ChatGPT's Social Engineering Potential

We’ve had occasion to write about ChatGPT’s potential for malign use in social engineering, both in the generation of phishbait at scale and as a topical theme that can appear in lures. We continue to track concerns about the new technology as they surface in the literature.

Friend or foe: AI chatbots in software development

Yes, AI chatbots can write code very fast, but you still need human oversight and security testing in your AppSec program. Chatbots are taking the tech world, and the rest of the world, by storm, and for good reason: artificial intelligence (AI) large language model (LLM) tools can write in seconds what would take humans hours or days, everything from research papers to poems to press releases and, yes, computer code in multiple programming languages.

ChatGPT DLP Filtering: How to Use ChatGPT without Exposing Customer Data

Advancements in AI have led to the creation of generative AI systems like ChatGPT, which can generate human-like responses to text-based inputs. Those inputs, however, are at the user's discretion and are not automatically filtered for sensitive data. This means these systems can end up ingesting, and generating content from, sensitive data such as medical records, financial information, or personal details.
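
One way to reduce that exposure is to filter or redact prompts on the client side before they ever reach the model. Here is a minimal sketch of that idea, assuming a regex-based redaction pass; the redact_prompt helper and its patterns are hypothetical illustrations, not part of OpenAI's API or any shipping DLP product.

```python
import re

# Placeholder substitutions for obviously sensitive tokens. A production
# filter would apply a far richer policy (named entities, dictionaries,
# document fingerprints) and log what it redacted.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact_prompt(prompt: str) -> str:
    """Scrub sensitive-looking tokens before the prompt leaves the network."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarize this complaint from jane@example.com about card 4111-1111-1111-1111."
print(redact_prompt(raw))  # only the redacted text would go to the chat API
```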

OpenAI Transparency Report Highlights How GPT-4 Can Be Used to Aid Both Sides of the Cybersecurity Battle

The nature of an advanced artificial intelligence (AI) engine such as ChatGPT gives its users the ability to use and to misuse it, potentially empowering security teams and threat actors alike. I’ve previously covered examples of how ChatGPT and other AI engines like it can be used to craft believable business-related phishing emails, malicious code, and more for the threat actor.