Prediction: “AI Slop” on Social Media will lead to Trust & Safety Boom


Over the past year, my social media feeds have started to feel… off.

There’s a creeping sameness to the posts. A certain bland polish. Too many threads that start the same way. Too many carousels with copy-paste wisdom. Too many videos with robotic voiceovers and cookie-cutter visuals. And the root of it all is what I’ve come to call AI Slop.


What is AI Slop?

AI Slop is the flood of low-effort, algorithmically generated content designed to game engagement — likes, shares, impressions — without offering anything original, insightful, or real. It’s content created not by people for people, but by prompts for metrics.

We’re talking about:

  • Threads generated by GPT that mimic viral writing styles
  • LinkedIn posts that follow the exact same “I was fired → I built a 7-figure business” template
  • YouTube shorts stitched from stock clips and ChatGPT scripts
  • Instagram reels with repurposed text-to-speech narrations
  • And, increasingly, tutorials using tools like n8n or Make.com showing how to automate content creation end-to-end — literally press a button and publish to five platforms

The goal? “Thousands of impressions.” “Instant credibility.” “Make money while you sleep.”

But the cost? A web that feels increasingly hollow, inauthentic, and… noisy.


Not Just Spam — A New Scale of It

AI Slop isn’t traditional spam — there are no shady links or scammy intent at first glance. But it is spam in spirit: unwanted, low-value content that clogs the feed and buries the signal under noise.

The scale, however, is unprecedented.

Spam detection was already a tough problem. AI Slop multiplies it a hundredfold. Because now:

  • The slop is harder to detect — it mimics human tone and structure.
  • The slop is endless — AI can generate infinite content.
  • The slop is incentivized — it still works, at least for now.

This is no longer just an algorithmic tuning challenge. It’s a Trust & Safety crisis in the making.


Across platforms — X, Facebook, Instagram, LinkedIn, YouTube — there are noticeable differences in how well they’re containing this tidal wave. Some feeds feel more curated and human. Others are visibly slipping.

But one thing is clear: every platform is being tested right now. The winners will be those that make aggressive moves to:

  • Identify and demote AI-generated low-value content
  • Surface authentic human perspectives, creativity, and conversation
  • Incentivize originality over automation

This will require new AI models, new detection systems, and significantly more investment in Trust & Safety operations. I expect to see:

  • A surge in T&S-focused startups to counter the rise of AI-generated content
  • Platforms hiring more content integrity teams
  • Maybe even watermarking standards for AI-generated content

The Stakes Are High

Social platforms thrive on trust and connection. If users start feeling that most of what they see is sludge — slop made by machines, not people — they disengage. Not all at once, but slowly. Quietly. Like the early signs of rot.

And once that happens, the network effect unravels.

We’re entering a new arms race: authenticity vs. automation.

In the long run, platforms that successfully curb AI Slop and elevate genuine human voices will not just survive — they’ll lead.

Let’s hope they act before it’s too late.


Aman Kataria

Product Manager | Value Investor
