In yet another episode of “technology was a mistake,” Instagram is facing criticism for failing to crack down on a disturbing new trend: AI-generated accounts pretending to be influencers with Down syndrome to sell adult content.
Apparently, we need another reason to question humanity’s relationship with artificial intelligence.
According to an investigation by 404 Media, a network of Instagram accounts is using AI to create deepfaked content that appropriates and fetishizes Down syndrome.
If you’re thinking, “Surely this can’t be legal,” well, welcome to the Wild West of AI content moderation.
The AI Grift Goes Lower Than Ever
These accounts – some amassing over 148,000 followers – are part of what sketchy online marketers are calling “AI pimping.”
Think of it as dropshipping, but somehow even more ethically bankrupt. The perpetrators steal content from real creators, slap AI-generated faces onto it, and funnel viewers to adult content platforms.
The largest of these accounts, “Maria Dopari,” perfectly exemplifies this predatory practice.
The account posts videos with captions like “They all criticize my down syndrome until…I decide to wear tight clothes” – managing to be both exploitative and offensive in under 280 characters.
Instagram’s Moderation Problem Child
While legitimate influencers with Down syndrome use their platforms for positive representation and modeling, these AI-generated accounts exist purely to monetize through adult content sites like Fanvue.
Most don’t even disclose their AI nature on Instagram, though some grudgingly admit it on their monetization pages with grammatically questionable disclaimers like “I do not actually have don’t syndrome.”
Let’s be clear: This isn’t just about AI content moderation anymore. It’s about the exploitation of a protected class for profit, wrapped in the Trojan horse of “digital influencing.”
The Technical Sleight of Hand
For the tech-curious: Face swaps are becoming increasingly sophisticated, though tell-tale signs remain.
Look for that uncanny valley smoothness and glitches around mouths and teeth—like someone tried to Photoshop a smile onto the Mona Lisa using Windows 95.
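If you want to poke at this yourself, here’s a rough sketch of that idea in Python using OpenCV: it finds faces in a screenshot and flags any whose skin texture looks unusually smooth. To be clear, this is a crude heuristic of our own devising, not a real deepfake detector; the threshold value and the assumption that low texture detail equals “AI airbrushing” are guesses, and plenty of heavily filtered but genuine photos will trip it.

```python
# Rough heuristic only: over-smoothed skin is one (weak) signal of an AI face swap.
# The threshold is a guess and will vary with camera, compression, and resolution.
import cv2

SMOOTHNESS_THRESHOLD = 60.0  # hypothetical cutoff; tune on known-real images


def flag_suspiciously_smooth_faces(image_path: str) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes of detected faces whose texture looks unusually smooth."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Classic Haar cascade face detector bundled with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    flagged = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        # Variance of the Laplacian measures local detail; very low values
        # suggest the airbrushed, uncanny-valley smoothness described above.
        sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
        if sharpness < SMOOTHNESS_THRESHOLD:
            flagged.append((x, y, w, h))
    return flagged


if __name__ == "__main__":
    print(flag_suspiciously_smooth_faces("screenshot.png"))
```

At best, something like this narrows down which posts deserve a closer look; the mouth-and-teeth glitches still take a human eye to spot.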
While Instagram continues its see-no-evil approach to AI content, platforms like OnlyFans require explicit disclosure of AI-generated material. But that’s just pushed creators to alternative platforms with looser restrictions.
What Now?
Meanwhile, the problem continues to grow, with specialized tools and even courses teaching others how to profit from this ethically dubious practice.
For now, users can report suspicious accounts, but until platforms implement stricter AI content policies, we’re stuck playing digital whack-a-mole with these increasingly sophisticated fakes.
Welcome to 2025, folks – it’s somehow even weirder than we imagined. Maybe it’s time to ask: When did we collectively decide that “can we?” was more important than “should we?”
Neither Instagram nor the accounts responded to requests for comment.
What’s your take on AI-generated accounts exploiting Down syndrome for profit? Should Instagram tighten its moderation policies? Share your thoughts in the comments or join us on Facebook and Twitter to keep the conversation going!

