Blog post
How Fraudsters Use Deepfakes and Stolen Identities During Tax Season
David Maimon & Tim Forrest
Published April 1, 2025

The Federal Trade Commission's report on identity theft related to employment and tax fraud reveals that the volume of such reports was consistently high during the first two quarters of each year from 2020 to 2024 (see FTC figure below). This is mainly because fraudsters use the tax season to engage in tax refund fraud.
Tax refund fraud occurs when a fraudster uses a stolen or synthetic identity to file a fake tax return and claim a fraudulent refund from the government. These individuals often exploit popular tax filing software to submit the fraudulent returns, taking advantage of features like payment advances and other incentives designed for legitimate users. By submitting falsified documentation and doctored tax filings that claim an overpayment of taxes, fraudsters ensure a refund is issued—one they can then deposit into a drop account they control. But where are the fraudsters getting the stolen identities from?
The illicit supply chain of stolen identities and tax refund fraud
The online fraud ecosystem provides fraudsters with access to large volumes of stolen identities. These identity databases are often sourced from public data dumps linked to major data breaches or sold through dark web marketplaces and even messaging-app-based stores. Interested buyers can typically purchase a stolen identity for less than $5. When the stolen data is incomplete, fraudsters can turn to online lookup services to fill in missing details; some of these services even provide access to victims' full credit reports.
Once in possession of the stolen identities, criminals use Social Security numbers and other personal information to file fraudulent tax returns, claiming refunds before the legitimate taxpayer has a chance to file—often before the victim is even aware of the fraud, thanks to the convenience and speed of e-filing. To collect fraudulent refunds, criminals arrange for the money to be sent to bank accounts or addresses they control.
To access larger payouts, fraudsters can go further by creating entirely fake businesses, registering them with Secretary of State websites using stolen or synthetic identities. They then use other stolen or synthetic identities to pose as employees of these fake companies, generating fraudulent pay stubs with payroll software like QuickBooks or even basic spreadsheets. During tax season, they file payroll tax returns claiming refunds based on these fake wages paid to non-existent employees.
Images shared by fraudsters as evidence of working tax schemes.
To scale and support these schemes, fraudsters frequently share detailed tutorials and walkthroughs with others in the ecosystem. These guides highlight key information and techniques needed to successfully file fraudulent tax returns. The images below are taken from one such tutorial we recently observed, illustrating how these tactics are shared and applied in tax refund fraud.
A growing trend: combining stolen identities with deepfakes
We have recently observed a growing trend of fraudsters leveraging stolen identities together with deepfake technology to open accounts with neobanks and file fraudulent tax refund claims. This evolving scheme reflects a sophisticated level of planning and coordination across multiple platforms.
The process often begins with the creation of brand-new Gmail accounts, typically around the time of the neobank or tax refund application. These fresh email addresses are a key indicator of synthetic or stolen identity usage. Following this, fraudsters apply for neobank accounts—specifically demand deposit accounts (DDAs)—before engaging with tax preparation platforms. The goal is to establish a new bank account in the victim’s name where they can direct the stolen tax refunds.
Another notable pattern in these applications is a mismatch between the application IP address and the declared physical address. We frequently observe web and mobile IPs originating more than 20 miles from the application's stated location. To avoid triggering other geographic inconsistencies, however, fraudsters often use phone numbers with area codes that match the state of the address they provide, making the application appear more legitimate at a glance.
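The IP-versus-address mismatch described above can be turned into a simple automated check. The sketch below is a hypothetical illustration, assuming the application's IP address and stated address have already been geolocated to latitude/longitude pairs (e.g., via a commercial IP geolocation service and an address geocoder); the function names, threshold, and coordinates are our own for illustration.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def flag_geo_mismatch(ip_latlon, address_latlon, threshold_miles=20.0):
    """Flag an application whose IP geolocation sits far from its stated address."""
    distance = haversine_miles(*ip_latlon, *address_latlon)
    return distance > threshold_miles, round(distance, 1)

# Example: IP geolocated to Atlanta, GA vs. a stated address in Macon, GA
# (roughly 75 miles apart), well beyond the 20-mile threshold
flagged, miles = flag_geo_mismatch((33.749, -84.388), (32.841, -83.632))
```

In practice a check like this would be one signal among many, since VPNs, mobile carriers, and travel can all produce legitimate mismatches.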
A critical part of this scheme involves the use of deepfake tools to fabricate faces to accompany the stolen identities. Attackers create faces using generative AI tools and then use them to create fake driver's licenses. If the neobank or tax agency flags the application for additional verification steps such as a liveness test, the fraudsters can respond with deepfake videos to beat the test.
These videos are typically created using tools that either mirror a smartphone's camera feed to a computer via AirPlay or configure the computer itself as an Android Virtual Device (AVD). Once the computer has a video feed of their face, scammers can use tools like DeepFaceLive or Avatarify to map an AI-generated face or the victim's face onto their own. From there, the free screen recording and streaming software OBS Studio can be used to stream the manipulated video feed into Zoom, Skype, or a browser window to simulate a real-time video call.
The deepfake visuals themselves often follow recognizable patterns. For example, the person who appears in the video will often be wearing the same clothes and have the same hairstyle as in their driver's license photo, because the video is generated from that image. The videos also often have white or beige backgrounds, as seen in the screenshots below (taken from an instructional video posted by a fraudster covering how to use deepfakes when speaking with fraud victims over the phone). These visual patterns are increasingly important indicators for detecting identity fraud attempts across platforms.
A deepfake image shared by fraudsters.
How to combat deepfake-powered identity theft
It is possible that in the future, generative AI tools or other AI-based anti-fraud tools will be able to effectively detect AI-powered fraud such as the deepfake videos discussed in this article. However, it is also possible that AI-generated content will be indistinguishable from genuine content, even to machines.
In the short term, looking for signs such as the clothing and background elements described above can help flag potentially deepfaked videos in the context of a liveness check or manual review. In the long term, though, SentiLink's opinion is that combating generative AI in fraud will mean relying on the things that fraudsters can't fake, including:
- Historical information – generative AI cannot fake the past and, for example, make it look as though a brand-new email address has been in use for a decade.
- Authoritative data sources such as eCBSV and AAMVA – generative AI cannot change the contents of these sorts of databases, so even the most realistic-looking generative AI fakes won't pass checks against an authoritative source.
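The two signals above can be combined into a simple risk heuristic. The sketch below is a hypothetical illustration, not SentiLink's actual scoring logic: the function, the 30-day email-tenure threshold, and the boolean `authoritative_match` input (standing in for the result of a check against a source like eCBSV or AAMVA) are all our own assumptions.

```python
from datetime import date

def assess_identity_risk(email_first_seen, application_date, authoritative_match):
    """
    Hypothetical heuristic combining two signals generative AI cannot fake:
    - how long the applicant's email address has been observed in use
    - whether the identity matched an authoritative data source
    Returns a coarse risk label and the reasons behind it.
    """
    email_age_days = (application_date - email_first_seen).days
    reasons = []
    if email_age_days < 30:
        reasons.append(f"email first seen only {email_age_days} days before application")
    if not authoritative_match:
        reasons.append("identity failed authoritative-source verification")
    return ("high" if reasons else "low"), reasons

# A brand-new Gmail address plus a failed authoritative check: both red flags fire
risk, why = assess_identity_risk(date(2025, 3, 28), date(2025, 4, 1),
                                 authoritative_match=False)
```

The point of the sketch is that neither input can be manufactured by a deepfake: a fraudster can generate a convincing face, but not a decade of email history or a matching record in a government-maintained database.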
If you’re interested in learning more about identity theft and how SentiLink can help you fight all forms of identity fraud, get in touch!