AI didn’t invent scams—it made them unrecognizable

AI-powered scams now look authentic. Learn how modern fraud works, why trust signals fail, and what layered protection actually stops it.

Guillaume Pascual

May 13, 2026

Color photo illustration of shadowy hacker using AI to create online scams.

There was a time when a scam had tells. The grammar was broken. The urgency felt artificial. The sender’s name didn’t quite match the brand. You could feel the friction — something was slightly off, and that friction was your warning.

That friction is gone.

What’s replaced it is something more unsettling: scams that look exactly like the real thing. Emails that sound like your bank wrote them. Voice messages that sound like your CEO recorded them. Payment pages that look pixel-perfect. The warning signs haven’t just gotten smaller — they’ve been engineered out of existence.

AI didn’t create new categories of crime. It made existing ones operate at a scale and quality that wasn’t achievable even three years ago. And the data shows it.

The numbers have crossed a threshold

In 2025, the FBI’s Internet Crime Complaint Center received over one million complaints—nearly 3,000 per day—with reported losses surpassing $20 billion, a 26% increase over the prior year. That’s not a trend line. That’s a structural collapse in the cost of running a fraud operation.

The center also reported that AI-related complaints specifically reached 22,364, with adjusted losses of nearly $900 million. Investment scams tied to AI alone surpassed $632 million—and that figure almost certainly understates the real number, since overall investment fraud losses exceeded $8 billion, suggesting most victims never realize AI was involved in the scheme targeting them.

Phishing surged roughly 200% between 2024 and 2025. Across the millions of devices running Webroot, an average of 1.7 malicious URLs per user were detected and blocked over the past year. Tech support and government impersonation scams combined generated more than $2 billion in losses, the FBI said, most of it funneled through illegal overseas call centers.

These aren’t edge cases. They’re the new operating model.

Why it works: the trust gap

The effectiveness of AI-powered scams isn’t a failure of intelligence on the victim’s part. It’s a deliberate erosion of the signals we’ve trained ourselves to trust.

Only 71% of people globally know what a deepfake is—and a mere 0.1% can consistently identify one. That gap between awareness and detection is exactly where attackers operate. When a voice message sounds like your CFO and arrives on a channel you trust, your verification instincts don’t fire the way they should.

The goal of these attacks has also evolved. A year ago, many phishing campaigns were primarily credential-harvesting operations—steal the password, sell it, move on. Today, a significant portion are designed for direct financial theft or full account takeover. The ambition has scaled with the toolset.

SMS-delivered scam links and QR codes are becoming primary entry points—channels that many people still don’t scrutinize the way they’ve learned to scrutinize email. Attackers know this.

What protection actually looks like now

Traditional antivirus was built around a fundamentally different threat model: detect known malware signatures, block known bad files. Against an AI-generated phishing page that’s never existed before, that approach has limited reach.

Effective protection in 2026 operates across multiple dimensions simultaneously. Real-time URL and web page analysis catches fraudulent payment and banking pages before they load—before a user enters a single character. Behavioral monitoring on the device identifies suspicious activity as it happens, not after the fact. Cloud intelligence means threat data isn’t locked in a local database that needs to be updated—it evaluates files and websites in real time, continuously.
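To make the "catch a page that's never existed before" idea concrete, here is a deliberately simplified sketch of one heuristic such systems can use: flagging hostnames that nearly match a known brand. This is an illustration only, not how Webroot or any commercial filter actually works; the brand list and similarity threshold are made-up examples.

```python
# Toy lookalike-domain check: flag hosts that closely resemble,
# but don't exactly match, a brand on a watchlist.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_BRANDS = ["paypal.com", "chase.com", "webroot.com"]  # hypothetical watchlist

def looks_like_brand(url: str, threshold: float = 0.85) -> bool:
    """True when the URL's host is a near-miss spelling of a known brand."""
    host = urlparse(url).hostname or ""
    for brand in KNOWN_BRANDS:
        if host == brand or host.endswith("." + brand):
            return False  # exact domain or a real subdomain: not a lookalike
        if SequenceMatcher(None, host, brand).ratio() >= threshold:
            return True   # one-character-off spellings land here
    return False

print(looks_like_brand("https://paypa1.com/login"))  # True: "l" swapped for "1"
print(looks_like_brand("https://www.paypal.com/"))   # False: the genuine domain
```

Real products layer this kind of check with reputation databases, page-content analysis, and cloud-side models; string similarity alone is easy to evade, which is exactly why the article argues for multiple dimensions at once.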

Identity signals matter too. Logins from unexpected locations, sudden changes in account activity, anomalous behavior patterns—these are the indicators that something has been compromised even when no malware is present. Modern scams don’t just infect devices. They target identities.
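The "login from an unexpected location" signal can be sketched in a few lines. This is a minimal illustration under obvious simplifying assumptions (one signal, country-level granularity, in-memory history); real identity monitoring weighs many signals together, and every name here is hypothetical.

```python
# Toy identity signal: flag a login from a location this account
# has never used before, while learning the baseline as we go.
from collections import defaultdict

seen_locations: dict[str, set[str]] = defaultdict(set)

def login_is_anomalous(user: str, country: str) -> bool:
    """True when a user appears from a country not in their history."""
    history = seen_locations[user]
    anomalous = bool(history) and country not in history
    history.add(country)  # record the location either way
    return anomalous

login_is_anomalous("alice", "FR")         # first login establishes the baseline
print(login_is_anomalous("alice", "FR"))  # False: known location
print(login_is_anomalous("alice", "KP"))  # True: never seen for this account
```

Note that the first login is never flagged—a baseline has to exist before deviation from it means anything, which is why these systems get more useful the longer they watch an account.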

No single layer stops everything. The architecture has to assume some things will get through—and have rollback capabilities to undo damage when they do.

What you can actually do

Awareness is necessary but not sufficient. Here’s what makes a real difference:

Verify before you act. If a message creates urgency—a payment request, a login alert, a wire transfer instruction—slow down and verify through a separate channel. Call the number you already have. Don’t use the contact information in the message itself.

Scrutinize QR codes and SMS links. These channels are where scam traffic is migrating precisely because users are less conditioned to be skeptical. If a QR code comes unsolicited—on a poster, in an email, via text—treat it as you would an unsolicited link.
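The skepticism you'd apply to an email link translates into a few mechanical checks on any URL pulled from a text or a QR code. The sketch below shows a handful of common red flags; the shortener list is a small illustrative sample, not an exhaustive or authoritative one.

```python
# Minimal red-flag checks for a link extracted from an SMS or QR code.
import ipaddress
from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}  # sample of common link shorteners

def sms_link_red_flags(url: str) -> list[str]:
    """Return human-readable reasons to distrust a link (empty list = no flags)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    try:
        ipaddress.ip_address(host)  # raises ValueError for normal domains
        flags.append("raw IP address instead of a domain")
    except ValueError:
        pass
    if host in SHORTENERS:
        flags.append("shortened link hides the destination")
    if "@" in parsed.netloc:
        flags.append("username trick in the URL")
    return flags

print(sms_link_red_flags("http://192.168.4.20/track"))
# ['not HTTPS', 'raw IP address instead of a domain']
```

None of these flags proves a link is malicious, and their absence proves nothing either—plenty of phishing pages use HTTPS and clean-looking domains. They are a reason to pause, which is the point of this whole section.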

Layer your protection. Antivirus alone is no longer enough. You need real-time web filtering, identity monitoring, and behavioral detection working in parallel. If one layer is tested, others need to hold.

Know what a deepfake can and can’t do. Voice cloning is real and accessible. Video deepfakes are improving rapidly. If something feels off in a video call or voice message—even if you can’t articulate why—that instinct is worth acting on.

The low-tech defenses that still win

The most effective countermeasures against AI-powered social engineering are often the simplest—because they don’t try to out-tech the attacker. They rebuild the human trust signals that AI has eroded.

Agree on a safe word with the people closest to you. Pick a word or short phrase that only your family or close circle knows. If someone calls claiming to be your spouse, your child, or a family friend—and something feels off—ask for the safe word. No legitimate caller will be offended. A scammer won’t have it. This matters more than ever: voice cloning used in distress scams generated millions of dollars in victim losses in 2025 alone, and the tactic is expanding to impersonate more types of relationships and emergency scenarios.

Establish a callback protocol for financial requests. Anyone in your household—especially older relatives—should have a standing rule: no money moves without a direct callback to a known number. Not the one in the email. Not the one on the screen. The one already in your phone.

Create a “grandparent rule.” Adults 60 and older filed 201,266 complaints with the FBI in 2025—a 37% increase from the year before—reporting more than $7 billion in losses, a 59% jump year over year. One of the fastest-growing tactics involves fraudsters impersonating a grandchild in distress—arrested, in an accident, stranded abroad—and asking for emergency wire transfers. Agree in advance: no family member will ever ask for money through a stranger. If that call comes, hang up and call the family member directly.

Pause on anything that asks you to act immediately. Urgency is a manipulation tool, not a feature of legitimate requests. Banks don’t demand you transfer funds in the next ten minutes. The IRS doesn’t call and threaten arrest. Any communication engineered to prevent you from thinking is designed to bypass your judgment—because your judgment would catch it.

Name-check unexpected callers. If someone calls claiming to be from your bank or a government agency, don’t give them information—ask for theirs. Get a name, a department, a reference number. Then hang up and call the institution directly using contact details from their official website. Legitimate organizations will always support this.

The threat landscape has changed faster than most people’s mental models of it. The scams reaching your inbox, your phone, and your family don’t look like scams anymore—and that’s the design. The response isn’t to become paralyzed by suspicion. It’s to understand what’s changed, update how you verify, and make sure the tools protecting you have kept pace.

Attackers are using AI to move fast. Your protection should move faster.

Guillaume Pascual

Guillaume Pascual leads product marketing for Webroot’s consumer cybersecurity portfolio at OpenText. With more than two decades in tech—including roles at Apple, Microsoft, and Norton—he focuses on translating complex security topics into strategies that matter for real people.