The problem is growing exponentially.
There’s so much content – so many groups to suggest, next videos to watch, trending topics to recommend, millions of ads to match to users – that platform companies must rely on automation:
- YouTube automatically queues billions of videos to play next, without the capacity to check whether they are conspiracies.
- Facebook automates millions of ads shown to millions of users, without the capacity to check whether they are lies.
- Twitter automates showing millions of #trending topics to millions of users, without the capacity to check whether they are fabricated.
- Facebook automates suggestions of millions of groups to join, without checking whether they are real.
- Twitter automates recommended users to follow, without humans checking whether they are bots or foreign governments.
- These platforms let millions of people create fake identities impersonating celebrities or political organizations, without the capacity to check whether the accounts are who they claim to be.
These systems are out of control. They have exponential impact without exponential oversight.
Why aren't tech platforms fixing the problem?
Platforms have done little to solve these problems because doing so would contradict their business model. Fixing the problem would cost them money:
- Facebook would lose revenue if they blocked advertisers from micro-targeting lies and conspiracies to the people most likely to be persuaded.
- Twitter’s stock price would fall if they removed bot networks, which academics estimate at 15% of their user base.
- Google would lose revenue if their tools didn’t allow advertisers and governments to automatically test millions of variations of content — word choices, color, images — to capture the most minds.