
The rise of dark AI: How scammers are using artificial intelligence against you

by Kate Hernandez | March 13, 2026 | Threat Lab

Reading Time: 3 mins

AI isn’t just for chatbots anymore

Artificial intelligence powers the tools we use every day, from writing assistants to smarter search results.

But AI isn’t just helping us be more productive.

It’s helping scammers, too.

Criminals now use AI for everything from writing convincing phishing emails to cloning voices and automating account takeover attempts. These scams don’t look sloppy. They look legitimate, and that’s what makes them effective.

When AI is used for malicious purposes, it’s often referred to as “dark AI.” Attackers use AI to:

  • Scale scams quickly
  • Personalize messages with real information
  • Remove traditional red flags
  • Evade basic detection tools

According to UC Berkeley’s Risk and Security Lab, AI has dramatically lowered the barrier to entry for cybercriminals, making it easier to launch phishing and impersonation attacks at scale.

At the same time, Deloitte’s 2025 fraud research warns that AI is accelerating identity and impersonation scams, with AI-enabled fraud losses projected to reach tens of billions of dollars annually in the coming years.

AI isn’t creating new crime categories — it’s making existing ones smarter.

How AI is showing up in everyday scams

AI-written phishing emails

Phishing emails used to be easy to spot. Poor grammar. Strange formatting. Obvious urgency.

Now, AI-generated phishing emails are polished and context-aware. Some reference your workplace, purchases, or location.

The FTC has warned that scammers are using AI tools to make phishing messages more believable and harder to distinguish from legitimate communication.

If you want to understand how phishing works, check out our blog post on phishing scams and how to avoid them.

Deepfake voice fraud

With just seconds of audio from social media or voicemail, criminals can clone someone’s voice.

Victims have reported receiving urgent calls that sound exactly like a family member asking for financial help.

The FBI has issued public warnings that criminals are using generative AI to clone voices and carry out impersonation fraud.

When a voice sounds familiar, hesitation disappears, and that’s the risk.

Fake online stores

AI website builders make it easy to create realistic-looking storefronts in minutes.

These fake sites often include:

  • Professional product descriptions
  • AI-generated customer reviews
  • Limited-time offers
  • Clean, modern design

Everything looks legitimate until your payment is processed and the site disappears.

AI-generated malware

AI isn’t just improving messaging; it’s helping malware evolve.

Some modern threats can:

  • Change their code to avoid detection
  • Adjust behavior based on the device they infect
  • Test themselves against security tools

That’s why traditional, signature-only antivirus solutions aren’t enough anymore.

Why AI-powered scams are harder to detect

AI removes the obvious warning signs.

There’s no broken English.
No obvious scam template.
No glaring formatting issues.

Instead, scams feel:

  • Urgent
  • Personalized
  • Context-aware
  • Professionally written

And the financial impact is real.

The FBI reported $16 billion in cybercrime losses in 2024, with phishing and impersonation scams among the most frequently reported threats. As AI enhances these tactics, the risk to consumers continues to grow.

But this isn’t about fear. It’s about staying prepared.

How Webroot helps protect against modern AI threats

If attackers are using AI to move quickly, your protection should be just as adaptive.

Webroot Antivirus protection is designed to detect more than known malware signatures. It uses:

Real-Time Threat Detection

Suspicious files and phishing websites are blocked before they execute.

Behavioral Monitoring

Programs are analyzed based on how they act, not just how they look. If something behaves like malware, it’s stopped.

Cloud-Based Intelligence

Threat data updates continuously, helping protection evolve alongside emerging AI-powered tactics.

For broader protection, Webroot Total Protection combines antivirus with identity protection features designed to help monitor for suspicious activity tied to your personal information.

Because modern scams don’t just infect devices; they target identities.

You can explore how identity protection works and why it matters as scams become more personalized.

Layered protection is the smart approach

No single tool can stop every threat.

A stronger defense includes:

  • Modern antivirus protection
  • Identity monitoring
  • Strong, unique passwords
  • Multi-factor authentication
  • Pausing before responding to urgent financial requests

This layered approach ensures that even if one safeguard is tested, others are in place.

AI is evolving, and so should your protection

Artificial intelligence will continue to shape how we live and work online.

It will also continue to shape how scams are built.

The answer isn’t panic. It’s preparation.

With Webroot Essentials and Webroot Total Protection, you get real-time protection designed to adapt as threats evolve, so you can browse, shop, and connect with confidence.

Additional Information