![Illustration representing cybersecurity concepts: a dark background with elements such as a shield with a lock, warning icons, a phishing email with a masked icon, and a login form. Lines and circuits connect the components, symbolizing interconnected threats and defenses in the digital landscape.](https://2a.consulting/wp-content/uploads/2025/01/Security-blog_Blog.png)
Image by Nicole Todd
A few months ago, I read the first novel in a new series by Richard Osman, author of the Thursday Murder Club mysteries. (I highly recommend all his books.) In the first chapter, a criminal mastermind instructs ChatGPT to write emails in a threatening tone using a voice and style that isn’t his own. Since truth is stranger than fiction, I decided to investigate whether generative AI is being used for nefarious purposes in real life. At 2A, we continuously look for new ways to use generative AI for good work. But it turns out other people are using it for bad work. Here’s what I learned.
Phishing and fooling with gen AI: The personal touch
Most smartphone users I know have received a text from the “USPS,” sent from a number with a Philippines country code and demanding a response within 24 hours to receive a valuable package. (Okay, as far as I know, that just happened to me, but I’m sure you’ve gotten one like it.) We’ve been trained to spot these texts by their strange URLs and international phone numbers and to ignore or report them. Our IT and security friends also remind us never to act on texts from CEOs or managers asking us to buy gift cards or share account information without verifying first.
But now, according to two very different companies (one that builds databases and one that offers networking), hackers are creating deepfakes of voices. These deepfakes create the illusion that the victim is talking to a real, trusted person, so the victim is more apt to share financial or other sensitive information. Some hackers even scour the web for videos of a target to create a biometric reproduction that can illegally unlock devices and applications protected by facial recognition.
Generating fraud and identity theft with dark LLMs: Isn’t that malicious?
Large language models (LLMs) are what let us ask questions and hold conversations with generative AI. Basically, they’re the reason we can type instructions or questions into a prompt and get answers in our own language rather than a computer language like SQL. ChatGPT, Claude, and Gemini all use LLMs that are trained not to generate malicious code. Yet that’s not stopping the bad guys. LLMs are also a great way to spread disinformation that can mislead, harm, or manipulate a person, social group, organization, or country, because LLMs cannot distinguish fact from fiction.
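To make that contrast concrete, here’s a minimal sketch of the same question asked both ways. The table layout and the `ask_llm()` helper are hypothetical stand-ins for whatever database and chat model you happen to use, not any particular product’s API.

```python
# The same question, asked two ways. The table layout and the ask_llm() helper
# below are hypothetical stand-ins, not any particular product's API.

# 1. The computer-language way: an SQL query tied to a specific database schema.
sql_query = """
SELECT customer_name, SUM(order_total) AS total_spent
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_name
ORDER BY total_spent DESC
LIMIT 5;
"""

# 2. The LLM way: the question in plain English, handed to whatever chat model you use.
def ask_llm(prompt: str) -> str:
    """Hypothetical helper; wire it up to the chat API of your choice."""
    raise NotImplementedError

question = "Who were our five biggest customers by spending since the start of 2024?"
# answer = ask_llm(question)  # the answer comes back in plain English, no SQL required
```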
In addition, it’s possible to purchase dark LLMs, which, unlike mainstream LLMs, have no such restrictions. For example, FraudGPT and WormGPT are used to create deceptive content that fools people into sharing sensitive information. Other dark LLMs are used to write malware that infects systems and applications.
Finding a security blanket for your identity, data, and systems
The good news is that companies from all corners of the tech world are building AI solutions to protect against and defeat these new tactics. A graph database added to an existing fraud-detection application can identify hidden patterns that indicate generative AI is being used to exploit identities and systems. APIs from a telecommunications giant use AI to ferret out fake videos generated to unlock systems protected by facial-recognition software. A data and network protection company offers AI-powered protection against personalized threats arriving by email, chat, and text. These are just a few ways our tech friends are using generative AI to improve security and keep customers safe.
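To give a flavor of the graph-database approach mentioned above, here’s a minimal sketch of the core idea: link accounts that share identifying details and flag suspiciously large clusters. The data, threshold, and attribute names are invented for illustration; real fraud-detection products layer far more on top of this pattern.

```python
# Toy graph-based fraud check: connect accounts to the attributes they share
# (phone numbers, devices), then flag clusters of accounts that are linked
# through those attributes. All data below is invented for illustration.
import networkx as nx

accounts = {
    "acct_001": {"phone": "+63-917-555-0101", "device": "dev_A"},
    "acct_002": {"phone": "+63-917-555-0101", "device": "dev_B"},
    "acct_003": {"phone": "+1-206-555-0142", "device": "dev_B"},
    "acct_004": {"phone": "+1-425-555-0199", "device": "dev_C"},
}

G = nx.Graph()
for acct, attrs in accounts.items():
    for kind, value in attrs.items():
        G.add_edge(acct, f"{kind}:{value}")  # account node <-> shared-attribute node

# A connected component containing several accounts suggests a ring of
# synthetic identities rather than unrelated customers.
for component in nx.connected_components(G):
    ring = sorted(n for n in component if n.startswith("acct_"))
    if len(ring) >= 3:
        print("Possible fraud ring:", ring)
```

In a real pipeline, the graph would also carry signals such as IP addresses, document hashes, and behavioral features, and a model or an analyst would review whatever the clustering flags.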