Remember when we thought the future would be all about flying cars, floating cities, and video phones? Well, it’s 2025 and, uh, at least we got the video calls.
If the constant deluge of ads and the AI-driven features no one asked for on every device you own haven't tipped you off (despite the technology's enduring unpopularity), we've traded domed cities and space vacations for generative AI. And with genAI in the age of influencer marketing comes the AI influencer — which also makes plenty of space for brand-new AI influencer scams.
So, What Is an AI Influencer?
AI influencers, whose virtual-influencer predecessors existed well before generative AI went mainstream, cover a wide swath. At their core, they are computer-generated influencers that market products on social media in much the same way human influencers do. The only difference: they don't exist outside of the digital realm.
Typically built as computer-generated 3D models, or, increasingly, as fully AI-generated photos and videos that are often hard to distinguish from the real thing, virtual influencers can act as the mascot for a specific company or collaborate with a variety of brands, just like real influencers. Virtual or AI influencers can appear in still photos, videos, interviews, social media posts, and even magazine spreads.
(Generative) AI Influencers
While traditional artificial intelligence follows rules programmed by humans and can run on a local device (e.g., the chipset inside your gaming console or iPhone), generative AI — machine learning models like ChatGPT and Copilot — is often powered by off-site data centers and "trained" by vacuuming up huge amounts of existing data: text, images, videos, voices, and programming languages.
When prompted to create something, what genAI spits out is an algorithmically determined mashup of all of that scraped data, formed in the shape of what it "thinks" best answers the prompt. For instance, a generative AI model might be fed reams of science fiction movie scripts from decades of film history. Then, when asked to write a sci-fi movie script, it'll generate one based on the common habits, themes, and language it recognized in the scripts it consumed.
Because this tech generates content on the fly, it's been quickly adopted for use in AI influencers. Prompters can now rapidly spit out fully AI-generated video content of a made-up human, determining their looks, their mannerisms, what they do, and what they say. GenAI is also sometimes leveraged to add a chatbot element to AI influencers, allowing fans and customers to engage in live chats with the virtual influencer. This feature ended poorly for Snapchat influencer Caryn Marjorie, or @CutieCaryn, whose voice-cloned AI chatbot was quickly shut down after making up lies about Caryn's family, engaging in fantasy roleplay with users, and encouraging self-harm.
Virtual Influencers, Real Problems
When it comes to AI influencers, one of genAI's key dangers takes center stage: its ability to spread disinformation quickly, effectively, and ever more convincingly. Some AI influencers are upfront about their digital status. Virtual influencers like Lil Miquela, Imma, and Shudu, for example, have millions of followers and have promoted brands from Samsung to Prada; while they have detailed made-up bios, they are explicitly marketed as virtual influencers. Problems arise, however, when generative AI is used to create lifelike influencers to sell products without disclosing that these influencers are digital.
This practice of undisclosed deepfake influencers is highly prevalent on TikTok Shop, for example, especially in influencer-heavy spheres like health and wellness and cryptocurrency investment. Not only are these AI influencers posing as genuine humans to unsuspecting customers, they often spout blatant lies, ranging from fake personal testimonies to entirely made-up research about a product's effects. Because these AI influencers are created out of whole cloth from prompts, they're often given fake biographies to sell products — a Media Matters investigation points out phony AI influencers who have claimed to be retired Victoria's Secret models, experienced doctors, and even "the highest paid plastic surgeon in Korea."

AI Influencer Scams and Sketchiness
Some fully AI-generated influencers hawk multiple products (oftentimes for the same seller), and some take on multiple identities. In one example, the same AI-generated woman claims to be a gynecologist, then later the sister of Zendaya's stunt double, depending on what "she" is selling at the moment. The videos take on popular TikTok formats, like slideshows, often tell emotional stories to draw customers in, and leverage social media algorithms and personal buying and browsing habits to find their way to users already likely to be interested in the products they're (deceptively) pushing.
Beyond deceptive AI influencer practices, there’s some more shadiness going on in that sphere. Just a few examples of notable scams related to AI influencers include:
- Phony AI influencers and AI-generated ads on TikTok Shop, alongside completely made-up storefronts, promising non-existent products with pressure tactics like countdown timers and “limited-time offers.” These tactics are used to lure users into making fraudulent cryptocurrency deposits or straight-up installing malware on their phones. In this case, there’s no product at all—just a direct deposit to scammers.
- Regular old phishing scams have been spotted in email inboxes and DMs with subjects like, “I’m reaching out from an AI-powered influencer agency,” trying to get victims to invest in AI influencers (it’s a budding, and often sketchy, business; prompters create AI influencers and their related content and profit by lending them to sellers). Of course, they’re really just looking for your private information to commit identity theft.
- Relatedly, some social media ads advertise packages or classes that teach you how to make your own AI influencers. Like a snake eating its own tail, many of these ads are AI-generated themselves, and they can range from legit to bogus to phishing scams.
- Some AI scams stoop so low as to grift off of disasters, with AI-generated “disaster victims” acting as faux influencers alongside AI-made fake charity websites collecting illicit charity donations.
Beyond AI Influencers
As Jieun Shin, an assistant professor of media and technology at the University of Florida, puts it, "Misinformation and disinformation have emerged as a serious issue in the 21st century. Although the problem of false information has existed since the dawn of human civilization, artificial intelligence is exacerbating these challenges. AI tools make it easy for anyone to create fake images and news that are hard to distinguish from accurate information."
From AI influencers to AI-generated catfishing profiles, phishing chatbots, and voice-cloned AI scam callers, it's clear that generative AI can help automate scams. Its ability to create massive amounts of convincing but false media is already proving a boon for scammers, with some estimates suggesting that damages from cybercrime, increasingly fueled by AI, could top $10 trillion in 2025. And while genAI makes deepfaking the identities of real people a genuine danger, much of the deception that AI influencers foster relies on creating entirely fake people rather than impersonating an existing human — and that can be a key factor in outsmarting them.
A Little Help from Real Records
All you need is a little help from Spokeo. Whether they're shilling a product you've got your eye on or trying to get you to invest in crypto, if you see an influencer who feels a little sus, pop their name or contact info into a Spokeo People Search. If a search across the billions of historical records, court records, and social media profiles comes up dry, or the results don't line up with the influencer in question, you might be dealing with an AI influencer. And if that's the case, you might want to hold on to your money.
As a freelance writer, small business owner, and consultant with more than a decade of experience, Dan has been fortunate enough to collaborate with leading brands including Microsoft, Fortune, Verizon, Discover, Office Depot, The Motley Fool, and more. He currently resides in Dallas, TX.