Tech Matters: AI scams – What you need to know to stay safe

Artificial intelligence might seem like a fascinating hobby for beginners, whether you’re trying text-to-image generation or asking ChatGPT to help plan your weekend. But while many of us dabble in these technologies, cybercriminals are treating AI as a powerful tool to improve their scams. Their motivation is high, and the results can be devastating.
Take the story of a French woman who believed she was in an online relationship with Brad Pitt — yes, that Brad Pitt — for over a year. She ended up sending her supposed celebrity boyfriend $850,000 before realizing she’d been conned. Then there’s the case of Thailand’s prime minister, who narrowly avoided an AI phone scam after recognizing a synthetic voice impersonating a world leader. As scammers get better at using AI, all of us, not just high-profile targets, are vulnerable.
In 2023, AI-assisted fraud caused losses exceeding $12 billion in the U.S., according to a Deloitte study, and that figure could reach $40 billion within two years. It’s a real threat, but one that the government and big tech companies are working to combat.
For instance, DARPA’s Media Forensics program focuses on tools that combine AI and computer vision to detect manipulated media. Its work targets national security threats posed by deepfakes. Microsoft’s Video Authenticator analyzes videos and images, assigning a confidence score for how likely they are to have been manipulated. The tool detects artifacts left by deepfake generation software and has been used to combat disinformation, particularly around elections. YouTube has implemented automated systems to flag and remove deepfake content that violates its misinformation policies. As these tools are developed, so too are the methods to evade them.
This is an extension of familiar security efforts, such as the filters that detect and block phishing emails before they reach your inbox. But just as with phishing, a big part of the solution still rests with the user. Understanding how AI scams are delivered and how they can be identified can go a long way toward protecting yourself from becoming a victim.
AI scams typically fall into several categories: voice, text, photo and video manipulations. And just like regular scams, they can be delivered through phone calls, text messages, emails or social media. Criminals leverage AI to mimic voices of loved ones, creating fake requests for help, usually money, that sound real. Others exploit photos or videos, generating deepfakes that can trick you into believing that what you see is the real thing.
AI has made it easy for scammers to personalize messages to a specific region and clean up written communications to remove the telltale signs of a scam, like poor grammar or odd phrasing. If the scammer can find just a little bit of information about someone you’re close to — say a series of Facebook posts about a recent trip with you and a friend — they can use that to customize their scam. The same strategy works for generating fake photos and videos.
So, how can you protect yourself? First, make sure all of your social media accounts are set to private. If you get a friend request from someone you don’t know, ignore it. If you do know the person but haven’t spoken in a while, email them first to verify that they actually sent the request.
Always be leery of unexpected requests for personal information or money, especially if the caller wants payment by wire transfer, gift card, payment app or cryptocurrency. These payment methods make it nearly impossible to recover your money once it has been sent.
Call the person who supposedly made the request to verify that he or she actually called, texted or emailed you. Do not let a manufactured sense of emergency push you into skipping this critical step — that’s exactly what scammers are counting on. If you can’t reach the person in question, call one of their family members or close friends. Never agree to keep a suspicious exchange a secret. Report suspicious calls to the relevant authorities or your service provider to help prevent others from falling victim, and let your family know as well so they can be on the alert.
While the technology behind AI is incredible, its misuse is a reminder to stay vigilant. The better prepared we are, the harder it becomes for scammers to succeed.
Leslie Meredith has been writing about technology for more than a decade. Have a question? Email her at asklesliemeredith@gmail.com.