June 8, 2025

From ChatGPT to CrimeGPT: The Rise of LLM-Driven Cybercrime Playbooks

BreachX Threat Intelligence Division

8 min read

How threat actors are leveraging large language models to scale phishing, automate malware support, and weaponize social engineering—without writing a single line of code.

Introduction: When AI Became a Cybercrime Intern

When large language models (LLMs) like ChatGPT entered the mainstream, enterprises saw opportunity. So did adversaries.

Today, threat actors aren’t just coding smarter malware—they’re outsourcing the thinking. From writing ransomware negotiation scripts to crafting psychologically optimized phishing emails, attackers are now leveraging AI to scale faster, act more credibly, and automate operations.

At BreachX, we’ve tracked a growing trend across forums and closed actor channels: the operationalization of LLMs in cybercrime. What began as experimental prompts is now turning into playbooks, toolkits, and workflows, widely distributed to lower-tier actors who don’t even need to understand what they’re doing.

This is not just a technical threat—it’s a cognitive one.
The democratization of deception is here.

How Threat Actors Use LLMs Today

LLMs aren’t being used to write exploits (yet)—but they are transforming the pre-attack and post-breach lifecycle. Here’s how:

1. Phishing Optimization

  • LLMs generate highly personalized email content with native-level grammar, regional tone, and context.

  • Templates for business email compromise (BEC) attacks, “invoice overdue” scams, and fake IT support messages are generated on demand.

  • Attackers simply prompt:
    “Write a phishing email as an HR executive asking for document verification before appraisal review.”

2. Chatbot Negotiators

  • Ransomware operators are integrating LLM-powered bots into extortion portals.

  • These bots handle:

    • Initial negotiation scripts

    • FAQ-style responses to panicked victims

    • Language switching on demand (via prompt tuning)

3. Code Translation for Malware Tutorials

  • Threat actors use LLMs to convert malware snippets between languages (e.g., from Python to PowerShell).

  • Helps non-coders understand functionality—or adapt tools for new environments.

4. Recon + Target Research

  • LLMs assist in summarizing public info about a target (e.g., extracting job titles, emails, press coverage).

  • Helps attackers craft context-aware social engineering lures.

5. Victim Profiling in Sextortion and Romance Scams

  • Prompt-based victim engagement scripts for romance, sextortion, or grooming scams are being shared as .txt playbooks.

  • These scripts now adapt across genders, cultures, and platforms—many AI-written and optimized.

From Playbook to Product: The Rise of CrimeGPT

Across BreachX’s monitored forums, we’ve seen the emergence of “CrimeGPTs”—custom-tuned LLMs that run locally or on Telegram bots, with jailbroken prompt sets.

Some examples include:

  • MalwormGPT – writes malware payloads disguised as batch scripts

  • PhisherGPT – generates localized spear phishing content

  • FraudGPT – prompt-engineered to create fake invoices, clone websites, or write scam text messages

  • SextorBot – simulates conversations used in sexual extortion setups

While these are often just jailbroken wrappers around open models, their usability is dangerously high. Some don’t even require a laptop: just a phone and a prompt.

Why This Is Dangerous

This isn’t just about better grammar or smarter chatbots. It’s about:

  • Lowering the barrier to entry: Non-technical actors can now run entire scams with AI-generated material.

  • Scaling emotional manipulation: Phishing and social engineering become psychologically tuned at industrial scale.

  • Automating credibility: LLMs help build fake personas, support chats, documentation, and multilingual lures in seconds.

  • Delegating deception: The attacker doesn't need to be smart. The LLM handles the thinking.

The skill gap is closing. And that means volume is rising.

How BreachX Tracks and Responds

While LLM abuse doesn’t leave conventional indicators of compromise (IOCs), BreachX tracks:

  • Forum chatter around prompt sets, payload bots, and Telegram-based AI assistants

  • Emerging CrimeGPT tools, especially those tailored to verticals like finance, healthcare, or insurance

  • Stylistic artifacts from AI-generated phishing content (see the detection sketch after this list)

  • Victim-side escalations triggered by AI-enhanced scam campaigns

  • Cross-pollination between known ransomware groups and LLM-assisted deception kits
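To make the “stylistic artifacts” signal concrete, here is a minimal, illustrative sketch (in Python) of the kind of heuristic scoring a defender might run over inbound mail. The phrase lists, thresholds, and weights below are assumptions for illustration, not BreachX detection logic.

```python
# Minimal sketch: heuristic scoring of stylistic artifacts often seen in
# LLM-generated phishing text. Phrase lists, weights, and thresholds are
# illustrative assumptions, not production detection logic.

import re
import statistics

# Stock phrasings that default LLM output tends to overproduce (illustrative).
STOCK_PHRASES = [
    "i hope this email finds you well",
    "please do not hesitate to",
    "at your earliest convenience",
    "we appreciate your prompt attention",
]

URGENCY_CUES = ["immediately", "urgent", "within 24 hours", "account will be suspended"]


def stylistic_artifact_score(text: str) -> float:
    """Return a 0..1 heuristic score; higher means more LLM-like phishing style."""
    lowered = text.lower()
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    score = 0.0

    # 1. Stock-phrase hits, capped at two so one long email can't saturate it.
    score += 0.25 * min(sum(p in lowered for p in STOCK_PHRASES), 2) / 2

    # 2. Low "burstiness": human writing varies sentence length more than
    #    default LLM output does. Needs at least 3 sentences to mean anything.
    if len(sentences) >= 3:
        lengths = [len(s.split()) for s in sentences]
        if statistics.pstdev(lengths) / max(statistics.mean(lengths), 1) < 0.35:
            score += 0.35

    # 3. Urgency cues wrapped in flawless prose are a common lure pattern.
    if any(c in lowered for c in URGENCY_CUES):
        score += 0.4

    return min(score, 1.0)


if __name__ == "__main__":
    sample = (
        "I hope this email finds you well. Your account will be suspended "
        "within 24 hours. Please verify your documents immediately. "
        "We appreciate your prompt attention to this matter."
    )
    print(f"artifact score: {stylistic_artifact_score(sample):.2f}")
```

Heuristics like these are noisy on their own; in practice they would be one feature among many feeding a broader classifier, alongside sender reputation and infrastructure signals.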

The Adversary Now Has an Intern—And It Doesn’t Sleep

Large language models aren’t evil. But they’re now available at scale to people with bad intent.

As AI evolves, so do threat actors—and they’re using these tools not just to code, but to think, engage, and manipulate.

In this new reality, it's not just about stopping malware.
It's about detecting persuasion.
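What might detecting persuasion look like in practice? As one hypothetical starting point, a defender could tag inbound messages with the social-engineering levers they pull. The cue lists in this Python sketch are illustrative assumptions and would need tuning against real campaign data.

```python
# Illustrative sketch of "detecting persuasion": tagging an inbound message
# with the social-engineering levers it pulls. Cue lists are assumptions
# for illustration and would need tuning against real phishing corpora.

PERSUASION_CUES = {
    "authority": ["ceo", "it department", "compliance team", "hr executive"],
    "urgency":   ["immediately", "right away", "before end of day", "final notice"],
    "scarcity":  ["last chance", "limited time", "only today"],
    "fear":      ["account suspended", "legal action", "security breach"],
}


def persuasion_levers(message: str) -> dict[str, list[str]]:
    """Map each persuasion lever to the cue phrases found in the message."""
    lowered = message.lower()
    hits = {
        lever: [cue for cue in cues if cue in lowered]
        for lever, cues in PERSUASION_CUES.items()
    }
    # Keep only the levers that actually fired.
    return {lever: found for lever, found in hits.items() if found}


if __name__ == "__main__":
    msg = ("This is the IT department. Your account suspended notice is final: "
           "verify immediately, this is your last chance.")
    for lever, cues in persuasion_levers(msg).items():
        print(f"{lever}: {cues}")
```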

With BreachX, you get more than detection—you get foresight.

Discover how attackers might use LLMs to phish, negotiate, and manipulate your team in ways you haven’t anticipated.
