
Facebook’s AI Problem

By Dan Ketchum

Facebook has an AI problem. If you haven’t visited the social platform in a few years, a little scrolling through your feed is almost certain to reveal a glut of AI-generated images these days – some are up-front about what they are, others try to pass as legit, and plenty are downright bizarre (true fact: there’s a strangely large number of AI images depicting shrimp-Jesus hybrids).

So what’s the deal? How did we get here, and should we worry? It turns out this flood of absurdist imagery in your timeline isn’t entirely harmless – but you can get the jump on it with some genuine human intelligence, and a hand from Spokeo.


How Generative AI Is Affecting Facebook

According to a 2024 study from Stanford, each of the 120 Facebook spam and scam pages investigated had posted at least 50 AI-generated images, and in late 2023 an AI-generated pic was one of Facebook’s most-viewed pieces of content, period. These Facebook AI images can depict just about anything, but you’ll quickly spot a few favorite subjects: religious and patriotic content, hypersexualized pictures of women, and subjects meant to garner sympathy, like starving children or wounded veterans.

None of these topics are accidental.  They’re all extremely effective at generating engagement, which is often the primary purpose of Facebook AI posts (though AI-generated Facebook bots are also a thing, which we’ll get to in a bit).   

What Is GenAI?

Let’s backpedal just a bit and define what we mean when we say “AI-generated,” especially when referring to the most common types of Facebook AI posts. Artificial intelligence, as a concept, has been around since the 1950s, and simply refers to computer systems that perform tasks with human-like intelligence. The ghosts that chase you in Pac-Man are a simple form of artificial intelligence. The denizens of the latest Grand Theft Auto and Siri on your iPhone are forms of AI, too.

The Facebook AI epidemic – like the vast majority of AI-related headlines you’ve seen these past few years – is all about a newer type of artificial intelligence known as generative AI (or genAI). Built on deep learning (machine learning) models, genAI essentially vacuums up huge volumes of existing data, “learns” the patterns in that data, and uses those patterns to output new content when prompted.

So a large language model like the one behind ChatGPT hoovers up countless existing digital texts, from classic literature to Wikipedia to goofy Reddit threads, and then uses the patterns in all that raw material to predict, word by word, a plausible answer to the questions you ask it.

Likewise, the image generators used to create the pics you see in Facebook AI posts have been fed immense amounts of digital imagery – a collection process called “scraping,” which is how the models are “trained.” They draw on patterns in that existing imagery to generate whatever a prompt asks for, like “homeless veteran,” “Joe Biden skateboarding,” or “sexy Scarlett Johansson.”
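For the technically curious, here’s roughly what that prompt-to-image step looks like in practice. This is a minimal sketch using the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint as a stand-in for whatever tool a given spam page actually uses – the model name and prompt are purely illustrative, not anything specific to Facebook posts.

```python
# Minimal sketch: generating an image from a text prompt with an
# off-the-shelf diffusion model. Assumes the `diffusers`, `transformers`,
# and `torch` packages are installed and a GPU is available.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint
# (illustrative choice of model).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# One short text prompt in, one synthetic image out.
prompt = "photorealistic wooden sculpture of a dog, carved by hand"
image = pipe(prompt).images[0]
image.save("ai_generated_post.png")
```

The point isn’t the specific library – it’s that anyone with a prompt and a few minutes can churn out images like the ones flooding Facebook feeds, at essentially no cost per picture.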


What’s the Problem?

Facebook’s AI problem is multi-pronged. AI-generated posts and AI-generated profiles carry different dangers and are deployed for somewhat different reasons – typically, the former to generate clicks and the latter to phish people more directly, with both frequently tied to scams – but they share plenty of common red flags.

AI-generated Facebook content is meant to engage, whether it’s simply intended to generate clicks for a page or lure you into a scam.  And when Facebook AI posts generate engagement in the form of likes, shares, and comments, Facebook’s algorithm considers that positive feedback and pushes AI content into more and more feeds, making Facebook AI a cyclical issue. 

Disinformation

Content created by genAI that presents itself as genuine falls somewhere on the spectrum of disinformation.  While an AI-generated image of an “artist” with a wildly impressive wood or ice sculpture (another popular category of Facebook AI posts) seems harmless enough, it’s simply not depicting a reality that exists.  

More sinister is the world of deepfakes – AI-created images and videos of real people, depicting them in scenarios that just plain never happened. A report from Freedom House warns that “generative artificial intelligence threatens to supercharge online disinformation,” whether that’s on a personal level (think blackmail) or on a geopolitical level, “to sow doubt, smear opponents, or influence public debate.”

This becomes a particular problem because Facebook’s audience skews older than those of other popular social networks, and older users tend to be less savvy at identifying what is AI-generated and what is not. After all, many of these Facebook AI posts pass muster at first glance, until you look closely and notice the hyper-smooth skin or the seven fingers on one hand. Just as the elderly are the most common target of traditional internet scams, they’re also more susceptible to believing that an AI-generated Facebook post is the real deal.

Intellectual Property Theft

So here’s the thing. Remember when we said that generative AI is trained on, or scrapes from, vast amounts of existing content – including text, visual art, photos, music, voices, videos, and so on – to generate its output? An artist’s work shared online, clips of a professional voice actor’s performance, a Reddit or forum post, text from a New York Times article, files stored on the cloud, or even personal photos shared on social media – all of this and much, much more has already been scraped by generative AI models. This type of AI does not produce wholly original content; when prompted, it simply spits out amalgamations of existing content created by humans.

While courts work through their first intellectual property cases over artificial intelligence and what counts as “fair use,” at the time of writing there are no comprehensive federal regulations that restrict AI’s development or usage.

Environmental Impact

There’s another issue with that scraping and outputting process. Generative AI relies on enormous, off-site data centers, and running them requires a massive amount of energy, which in turn contributes to climate change in an alarming way.

According to Nature, ChatGPT alone consumes about as much energy as 33,000 homes, while a web search powered by genAI uses up to five times the energy of a traditional web search. Those data centers need water to cool their processors, too; a single genAI data center in a mid-sized city can use up to 6% of that district’s water supply. Researchers at Carnegie Mellon University estimate that generating a single AI image, like those shared en masse on Facebook, uses about as much energy as fully charging your smartphone.


Spam, Phishing, and Identity Theft

While many Facebook AI posts are simple engagement bait (you’ll see all kinds of common trend-based keywords, like “cabin crew,” celebrity name drops, “like for good luck,” “share to wish me a happy birthday,” or “why doesn’t this type of post ever trend?”), they sometimes also include links to shady e-commerce sites, phishing links designed to mine your private personal info, or even malware that can directly infect your computer.

And here’s where Facebook AI profiles and Facebook bots come into play, too. Bad actors are using AI-generated content, such as AI-made profile pics, to create completely phony Facebook profiles. Facebook’s parent company, Meta, says it has seen a “rapid rise” in the practice, telling CBS News that over two-thirds of the coordinated scam networks it busted in 2022 featured AI-generated profile pictures.

Some of these are autonomous Facebook bots engaged in what Meta calls “coordinated inauthentic behavior,” defined as “efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation.” This can include actions as massive as attempting to influence an election, or as small as generating engagement on a picture to boost its visibility in the feed (yes, many of the comments on AI-generated images come from Facebook bots – it’s AI commenting on AI).

Otherwise, AI-generated images can simply make a fake Facebook profile look more authentic to the untrained eye.  Phishing, in which a scammer tries to trick victims into sharing valuable personal information like passwords or credit card numbers, has been a longstanding issue on Facebook.  It’s very common for scammers to hide behind a fake Facebook profile to make a personal connection to the victim before attempting to get that personal info directly, or to trick the victim into clicking a harmful phishing or malware link.

Previously, those fake profiles relied on existing images (a form of identity theft).  Now, Facebook scammers need only use an AI image generator to create profile pictures of humans who’ve never even existed, which can make the scams even tougher to spot. 

Get Real Intelligence on Your Side

We might not be able to stop the flow of disinformation or temper the environmental effects of generative AI, but Spokeo can definitely lend you a hand in protecting yourself from the phishing and scams powered by Facebook bots and AI-generated profiles. It all comes down to simple identity verification.

With Spokeo People Search, all you need to do is enter a name, phone number, location, or email address – any of which you might find on the average Facebook profile, be it legit or bogus. If you suspect you’re dealing with an AI-generated profile or a Facebook bot, People Search helps you compare what the profile claims with real-world records.

If you type in a name and the results don’t line up with anything you’re seeing on the FB profile in question, there’s a good chance you’re dealing with a scammer. That’s fighting artificial intelligence with genuine human smarts.

We’re just sorry we can’t help you unsee some of these insane Facebook AI posts.  They’ll haunt our dreams forever.  

As a freelance writer, small business owner, and consultant with more than a decade of experience, Dan has been fortunate enough to collaborate with leading brands including Microsoft, Fortune, Verizon, Discover, Office Depot, The Motley Fool, and more. He currently resides in Dallas, TX.