
Best practices for using AI in the SDLC

AI has become a hot topic thanks to recent headlines around a large language model (LLM) with a simple interface: ChatGPT. Since then, the AI field has been vibrant, with several major players racing to provide ever-bigger, better, and more versatile models. Microsoft, NVIDIA, Google, Meta, and open source projects have all released new models. In fact, a leaked Google document suggests that these models will soon be ubiquitous and available to everyone.

No Ethical Boundaries: WormGPT

In this week's episode, Bill and Robin discover the dangerous world of an AI tool without guardrails: WormGPT. This AI tool allows people with limited technical experience to create potential chaos. Coupled with the rising popularity of tools like the Wi-Fi Pineapple and Flipper Zero, do you need to be more worried about the next generation of script kiddies? Learn all this and more on the latest episode of The Ring of Defense!

You're Not Hallucinating: AI-Assisted Cyberattacks Are Coming to Healthcare, Too

We recently published a blog post detailing how threat actors could leverage AI tools such as ChatGPT to assist in attacks targeting operational technology (OT) and unmanaged devices. In this blog post, we highlight why healthcare organizations should be particularly worried about this.

Tines Technical Advisory Board (TAB) Takeaways with Pete: part one

I’m Peter Wrenn, but my friends call me Pete! I have the pleasure of moderating the Tines Technical Advisory Board (TAB), which meets quarterly. In it, some of Tines’s power users discuss product innovations, industry trends, and ways we can push the Tines vision forward: automation for the whole team. That’s the benefit to our customers and to Tines.

The New Era of AI-Powered Application Security. Part Two: AI Security Vulnerability and Risk

AI-related security risk manifests in more than one way. It can, for example, result from using an AI-powered security solution built on a model that is either deficient in some way or was deliberately compromised by a malicious actor. It can also result from a malicious actor using AI technology to facilitate the creation and exploitation of vulnerabilities.

[HEADS UP] See WormGPT, the new "ethics-free" Cyber Crime attack tool

CyberWire wrote: "Researchers at SlashNext describe a generative AI cybercrime tool called 'WormGPT,' which is being advertised on underground forums as 'a blackhat alternative to GPT models, designed specifically for malicious activities.' The tool can generate output that legitimate AI models try to prevent, such as malware code or phishing templates."

AI at Egnyte: The First Ten Years

In the 1960s, Theodore Levitt published his now-famous treatise in the Harvard Business Review, in which he warned CEOs against being “product oriented instead of customer oriented.” Among the many examples cited was the buggy whip industry. As Levitt wrote, “had the industry defined itself as being in the transportation business rather than in the buggy whip business, it might have survived. It would have done what survival always entails — that is, change.”

Darknet Diaries host Jack Rhysider talks about hacker teens and his AI predictions

It’s human nature: when we do something we’re excited about, we want to share it. So it’s not surprising that cybercriminals and others in the hacker space love an audience. Darknet Diaries, a podcast that delves into the hows, whys, and implications of hacking incidents, data breaches, cybercrime, and more, has become one way for hackers to tell their stories, whether or not they get caught.