LLMs & Personalized Phishing: How AI Targets Crypto Users

The rapid development of artificial intelligence has brought remarkable advances, but also new risks – especially within the cryptocurrency ecosystem. Among the most troubling developments is how large language models (LLMs) enable cybercriminals to craft highly personalized phishing messages based entirely on a user’s social media fingerprint.

This is no longer classic “spray and pray” phishing.
This is AI-powered phishing: targeted, psychologically tailored, and eerily believable.

Large language models can process vast amounts of social media data, monitor user behavior, and mimic writing styles, producing personalized messages that target the victim’s interests, habits, fears, or preferences. Cryptocurrency users, by nature, share updates, opinions, wallet movements, and experiences online, inadvertently creating a digital fingerprint that attackers can exploit.

This article will discuss how LLMs make phishing truly personal, how attackers collect and use social media data, why crypto users are prime targets, and what individuals can do to stay protected.

What’s different about LLM-based phishing?

Traditional phishing relies on generic messages such as:

“Your account has been hacked. Click here.”

LLM-based phishing is completely different:

  • Grammatically polished and natural-sounding

  • References your personal interests

  • Reflects your recent posts or activities

  • Creates an emotional connection

  • Seems relevant, timely and urgent

This shift makes AI-powered phishing extremely dangerous – especially for cryptocurrency users, where a single mistake can lead to irreversible financial loss.

How Cybercriminals Mine Social Media Data for LLM Phishing

Cybercriminals do not need to hack your account to analyze your behavior. Most of the information is publicly available, and AI tools make it easy to extract it.

What data are they scraping?

Attackers feed various data points into an LLM to build a psychological and behavioral profile:

Public social media posts

Personal information

  • First and last names

  • Nicknames

  • Age (approximate)

  • Location

  • Job role

  • Company name

  • Hobbies

  • Travel news/updates

  • Daily routine

Crypto-related signals

  • Exchanges you follow (Coinbase, Binance)

  • Wallets you mention (MetaMask, Phantom, Ledger)

  • Communities you’re part of, such as r/CryptoCurrency or Telegram groups

  • Coins or tokens you discuss publicly

  • Scams you warn others about

Behavioral and psychological patterns

  • Writing tone: formal, informal, humorous, emotional

  • Posting frequency

  • Times when you are most active online

  • Accounts and friends you interact with

  • Emotional posts: anger, celebration, frustration

All of this becomes raw material for the model.

The LLM then turns these signals into a highly accurate profile of who you are, what you’re interested in, and how you communicate.

How LLMs turn that data into personalized phishing messages

Once attackers have gathered enough data, LLMs take over.

Here’s the full process:

Step-by-step explanation

1. Data extraction

Attackers use scraping tools to automatically capture public activity from Instagram, X, Facebook, Telegram, and LinkedIn.

These tools collect hundreds or thousands of data points in a few minutes.

2. Data organization

The LLM processes this information to determine:

  • Topics: travel, fashion, cryptocurrencies, gaming

  • Tone: friendly, direct, or emotional

  • Personal interests: NFTs, trading, DeFi, staking

  • Security habits: careful, careless, or openly discussed?

  • Trusted contacts: colleagues, friends, and platforms

This allows the model to understand what type of message the user is most likely to trust.

3. Persona creation

The LLM builds a simulated persona of the target.

This includes:

  • Personality traits

  • Common vocabulary

  • Communication style

  • Social patterns

  • Potential insecurities

  • Financial interests

This persona allows the AI to compose messages that strike an emotional chord.

4. Message creation

Based on this persona, the LLM then generates personalized phishing content.

The message could look like this (an illustrative example):

“Hey, I saw your post about your recent trade on Bitget. Our team flagged your account for a quick security review – please re-verify within 30 minutes to avoid a hold.”

Because the message arrives in the tone you expect and references details you recognize, you’re more likely to trust it.

5. Emotional engineering

The AI then refines the message using psychological manipulation:

  • Urgency (“Action required within 30 minutes”)

  • Relevance (“Regarding your recent trade on Bitget”)

  • Familiarity (“As we discussed in the comments yesterday…”)

  • Authority (“Ledger Support ID #A37F91”)

6. Multi-channel delivery

The phishing message may arrive via:

  • Direct message

  • Email

  • Telegram

  • WhatsApp

  • Fake support chat

  • Cloned website

The attacker uses whichever platform the target is most active on.

7. Dynamic conversation

If the victim responds, the LLM continues the conversation.

The tone adapts to the victim’s:

  • Confusion

  • Hesitation

  • Questions

  • Complaints

The result is an exchange that feels like a real, authentic customer-service conversation.

Why does personalized phishing work so well?

Personalization disarms doubt. When a message includes information you recognize, your brain lowers its guard.

Below are the main psychological triggers:

Emotional triggers

  • Fear of losing money

  • The need to act quickly

  • Excitement about a reward or airdrop

  • Worry about account suspension

  • Curiosity about opportunities

  • Confidence in seeing familiar names or brands

Cognitive biases

  • Authority bias: Trust in platform messages

  • Confirmation bias: Believing information that matches your expectations

  • Familiarity bias: Trusting a tone that “sounds right.”

  • Scarcity bias: Responding quickly to time-limited notifications

Cryptocurrency vulnerabilities

  • Irreversible transactions

  • Volatile market conditions that encourage panic

  • Reliance on online communities

  • Regular exposure to new platforms

Together, these factors make cryptocurrency users especially vulnerable to well-crafted phishing.

Traditional phishing vs. LLM-powered phishing
