Facebook's Problem Is That It's Too Big

In an excellent happenstance, I found this on the street while drafting this post. It’s now gone, but maybe there’s something equally opinionated there instead. 51° 31’ 12.89” N, 0° 4’ 11.65” W
The stream of bad press for ~~Facebook~~ ~~Meta~~ Facebook[^1] has been flowing steadily for a while now.[^2] As someone who finally deleted my Facebook account sometime around or after the Cambridge Analytica scandal, I’d say: deservedly so.
To the extent that I continue to think about Facebook, I’ve been trying on-and-off to find a succinct and satisfying explanation for why Facebook, in particular, has become so influential and so toxic. To be sure, it’s not alone – ~~Twitter~~ ~~X~~ Twitter comes to mind – but it seems like the worst offender. What did/does it do that made it so different and so much more problematic than its competitors and predecessors?
The problem, as I see it, is that Facebook is the first to simultaneously enable, on the truly ridiculous scale of its user base, both:
- non-moderation, or, the ability to say whatever you want
- broadcastability, or, the ability to send what you say to a huge audience
That Facebook enables these together is hopefully obvious, its weak attempts at “moderation” notwithstanding. Perhaps less obvious is my claim that a functional separation between the two has historically existed, but I believe it holds.
(Non-)moderation and broadcastability are each a spectrum, and the two are largely inverses. In the past, as one built a big idea from a kitchen-table conversation, through self-published flyers and town-paper letters to the editor, up to national TV news interviews, one was subject to increasing moderation. To be granted access to the TV audience was to have your idea vetted as “something worth saying”, for some definition of “worth” that (used to) exclude the kind of stuff on Facebook that is prompting this national conversation.[^3]
The mixing of non-moderation and broadcastability on Facebook plays out mostly in the blurrily public/private zones of Groups, Timeline and perhaps Events. An incendiary post from your weird uncle gets reshared by your other weird uncle on the other side. They know each other, sure, but they would have kept their mouths shut at the next family gathering out of a sense of propriety. Even if they hadn’t, the passive audience would have been significantly smaller. Timeline is “private” enough that they feel comfortable sharing, but public enough that hundreds or thousands of bystanders are caught in the blast radius of negativity – all with the effort of a single click on “Reshare”.
Such interactions are the lifeblood of Facebook – engagement translates directly to revenue – hence the endless tweaking of the ever-inscrutable Algorithm to encourage them. In the lead-up to the 2020 election, we all learned for the first time what a lot of us had suspected: Facebook knowingly exercises direct control over the degree of both moderation and broadcast of posts, dampening them only in emergencies.
Facebook isn’t the only entity that exercises such malicious, or at least spectacularly tone-deaf, control over the audience it has cultivated. Yelp’s questionable “editorial” processes frequently get it criticized as abusive, to the degree that it maintains a support page describing how it supposedly doesn’t extort businesses. Even a misguided or rogue moderator in an enthusiasts’ forum can single-handedly derail it.
These sites, and countless others like them, exist because of and are protected by the now (in?)famous Section 230:
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.[^4]
That’s the full text of Section 230(c)(1), the heart of the Communications Decency Act of 1996.
As it stands, Facebook, Yelp and all the others are still well within their (current) rights to manipulate content on their platforms.[^5] (Update: something may be happening there.) I would go so far as to say that Section 230 all but invites their creation and consequent manipulative behavior. After all, if you aren’t liable for what your users post and you can cherry-pick what gets broadcast, why wouldn’t a for-profit, ad-driven company steer toward maximum engagement?
It’s clear, then, that the only way out of this predicament is a change to Section 230.[^6]
Some of those who benefit from the Section 230-protected intertubes have been advocating for full repeal, which is throwing the baby out with the bathwater. My blissfully Facebook-free internet life would be much the worse if almost every other site I frequent were forced to choose between taking on liability for its users or simply shutting down. Instead, I’d like to see a sliding scale of sorts that allows small-time sites to stay unencumbered but mandates some degree of responsibility for those with a larger blast radius.
Others say the problem can be solved by spinning off, say, Instagram. In the absence of regulatory changes, an AT&T-style breakup (regardless of whether it’s along horizontal or vertical lines) would merely waste everyone’s time. Without a paying user base[^7] or significant physical infrastructure to induce drag, the reconsolidation would be swift. Instead, I’d like to see some kind of forced decentralization of Timeline, Groups and their ilk, which is where the damage is really done. Prying those apart is unlikely, however; I expect any breakup would focus on separating WhatsApp and Instagram instead.
We’ve been trained to expect non-moderation and broadcastability almost as an inherent right of using the internet, but this is a destructive historical anomaly. A combination of forced decentralization (to control broadcastability) and transference of responsibility back to the platforms (to encourage non-negligible moderation) will be messy, painful and awkward. Our collective Facebook-trained expectations may make it politically difficult. But I find it hard to believe that the cure could be worse than the disease, and the disease is not going away by itself – in fact, it’s Metastasizing.[^8]
If you’d like to read more like this, check out:
- Judges Rule Big Tech’s Free Ride on Section 230 Is Over – Matt Stoller
- If your website’s full of assholes, it’s your fault – Anil Dash
- People Aren’t Meant to Talk This Much – The Atlantic (and at times eerily similar to this post)
- The Largest Autocracy on Earth – The Atlantic (originally published as Facebookland, and yes, I like the Atlantic)
Updated 2022-01-07: added another link to the reading list.
Updated 2023-10-07: ~~Twitter~~ ~~X~~ Twitter.
Updated 2024-08-31: added reference to and editorial on the recent Section 230 ruling from the Third Circuit.
[^1]: I was drafting this post in the weeks leading up to The Announcement. I don’t think the name Facebook is going away any time soon, and I don’t think a regulation-dodging rename makes any of these observations less relevant. So I’ll continue to use the name Facebook.

[^2]: Since at least 2021-01-06, if not 2018-03-17.

[^3]: I realize there are other problems with old-school mass-media gatekeeping, but let’s stay on topic for now.

[^4]: Via Cornell Law School.

[^5]: There isn’t, by the way, a platform-publisher distinction or any requirement of neutrality towards content, as is sometimes claimed.

[^6]: Zuckerberg has asked for regulation in the past, though the default reading of such behavior in the tech industry is that these are attempts to burden competitors with heavy-handed regulation that the incumbent can weather, which is likely the case here.

[^7]: Advertisers are the customers, but regular people are the users.

[^8]: I’m only a little sorry.