In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms aren’t bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression — particularly the entangled concepts of “public figures” and “newsworthiness.”
In this article, we offer the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. We first introduce the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. We then turn to the “New Governors” and examine how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, we offer lessons for both courts and platforms as they confront new challenges posed by online speech. We expose the pitfalls of using algorithms to identify public figures; we explore the diminished utility of setting rules based on voluntary involvement in public debate; and we analyze the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted our normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in constitutional law or content moderation.
Finally, we explore what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. We argue that these platforms act as legislature, executive, judiciary, and press — but without any separation of powers to establish checks and balances. With these realities exposed, we contend that platforms must separate their powers and create Supreme Court-like institutions to provide transparent decisions and consistent rationales on how concepts related to newsworthiness and public figures are applied. This will give users some measure of representation and due process in the new, private system regulating their expression. Ultimately, platforms cannot rely on global norms about free speech — which do not exist — and must instead make hard choices about which values they want to uphold through their content-moderation rules. We conclude that platforms should adopt constitution-like charters to guide the independent institutions that should oversee them.