
How to Protect Yourself from the Rise in AI Scams

By Sharon McDowell, MMBB Business Liaison and Technical Trainer

What Is Covered in this Article:

  1. Voice Cloning Scams
  2. AI-Generated Phishing Communications and Domains
  3. Deepfake Video Impersonation
  4. AI-Enhanced Fake Profiles and Identity Fraud


Artificial intelligence is driving innovation and productivity across industries, but it is also reshaping familiar scams, making them more sophisticated and harder to detect. Criminals now use AI tools to mimic voices, generate realistic messages, fabricate videos, and create digital scams that feel urgent and personal. Public safety agencies warn that these tactics are becoming increasingly advanced, with generative AI reducing the time and effort cybercriminals must expend to deceive their targets.1 Let’s explore four types of AI scams and suggestions for how to avoid them.

1. Voice Cloning Scams

AI can replicate a person’s voice using only a few seconds of recorded audio. Scammers most often use this technology to impersonate a loved one or an acquaintance and convince you that the person is in distress. The scammer then follows up with an urgent request for financial assistance.

Scammers research both their target and the person they are impersonating through social media, using recorded audio and personal details to play on your emotions. They may also pose as an authority figure such as a law enforcement officer, a lawyer, or a doctor working with your family member. These tactics are designed to provoke fear and urgency, and the perpetrator usually demands payment through a vehicle such as Western Union, cryptocurrency, or gift cards.2 Criminals can generate these calls in minutes.

How to protect yourself:

  • Be aware that scammers can spoof Caller ID, so a call may appear to come from someone you know.
  • Hang up and call the person back using a trusted number, not the one displayed on Caller ID or given to you over the phone, and do not rely on the caller to identify themselves.

2. AI-Generated Phishing Communications and Domains

Unlike older phishing attempts filled with spelling errors, AI-generated messages are polished and highly personalized. They may appear to come from financial institutions, employers, delivery services, or government agencies. 

These scams often arrive as texts about a “failed delivery attempt” or a fake Amazon purchase confirmation, even when you are not expecting anything. Cybercriminals have become so advanced that they can set up phishing domains that mimic the websites of legitimate financial institutions, payroll companies, and even retailers to steal your login credentials and credit card information.

According to the Internet Crime Complaint Center, the most reported cybercrime in 2024 was phishing/spoofing, making it a primary gateway to other scams and leaving victims more vulnerable.

How to protect yourself:

  • Avoid clicking links in unexpected messages. If you receive an unsolicited link, delete the email or message. These links can install malware on your device, compromise your files, or steal your information.
  • Visit official websites directly by typing the address into your browser.
  • If you click on what you suspect to be a phishing link, do not enter any personal information. Change your passwords immediately, back up your device, and keep your software up to date.

3. Deepfake Video Impersonation

Deepfake technology uses AI to create realistic videos of people saying or doing things they never did. These videos may be used to spread disinformation, promote investment scams, or support business-email-compromise schemes that pressure victims into transferring funds. Advances in deepfake technology also erode public trust, because people cannot always tell what is real online.

How to protect yourself:

  • Watch for visual inconsistencies in videos, such as unnatural gestures and words not syncing with mouth movements. Be aware of who is distributing the video; deepfakes are often shared by bot accounts.  A bot is an Internet software program that performs repetitive tasks automatically. 
  • Verify unusual requests through a second, trusted communication channel.
  • Be cautious of urgent financial instructions delivered via video message.

4. AI-Enhanced Fake Profiles and Identity Fraud

AI tools can generate convincing photos and online identities, allowing scammers to build trust before requesting money or personal information. These schemes typically play out on social media platforms such as Facebook and Instagram, and they are common in friendship and romance scams as well as broader social engineering scams.

“Responding to an AI image scam can lead to further exploitation or manipulation. Fraudsters may use your response to gather more information about you or manipulate you into engaging in fraudulent activities.”3

How to protect yourself:

  • Be skeptical of newly created accounts or profiles with limited history. Some social media platforms allow you to view information such as when an account was created and the region in which it is based.
  • Never share sensitive personal or financial information with someone you have not verified, and report any suspicious activity using the platform’s reporting features.

Stay Informed. Stay Protected.

AI itself is not the threat, but the misuse of AI is. As these tools become more advanced, scammers rely on urgency and emotion to pressure victims into acting quickly. If something feels rushed, unexpected, or too good to be true, pause and verify before responding.

If you suspect fraud, contact your financial institution immediately and report the incident to the appropriate authorities. Awareness remains one of the strongest defenses against emerging scams. Armed with knowledge, you can reduce your chances of being defrauded.

Sharon McDowell serves as the business liaison and technical trainer at MMBB. She joined MMBB’s staff in 1992 and served on MMBB’s Help Desk team as a network analyst for more than 15 years. She is currently responsible for coordinating MMBB’s ongoing cybersecurity training. Her education includes a BS in computer science from State University of New York, College at New Paltz.

Sources:

1 Internet Crime Complaint Center (IC3) | Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud

Internet Crime Complaint Center (IC3) | Account Takeover Fraud via Impersonation of Financial Institution Support

Internet Crime Complaint Center (IC3) | Senior U.S. Officials Continue to be Impersonated in Malicious Messaging Campaign

https://helpcenter.trendmicro.com/en-us/article/tmka-19302

Kaspersky Cybersecurity Research (kaspersky.com)


