In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms aren’t bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression — particularly the entangled concepts of “public figures” and “newsworthiness.”
In this article, we offer the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. We first introduce the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. We then turn to the “New Governors” and examine how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, we offer lessons for both courts and platforms as they confront new challenges posed by online speech. We expose the pitfalls of using algorithms to identify public figures; we explore the diminished utility of setting rules based on voluntary involvement in public debate; and we analyze the dangers of ad-hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted our normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in constitutional law or content moderation.
Finally, we explore what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. We argue that these platforms act as legislature, executive, judiciary, and press — but without any separation of powers to establish checks and balances. With these realities exposed, we contend that platforms must separate their powers and create Supreme Court-like institutions to provide transparent decisions and consistent rationales on how concepts related to newsworthiness and public figures are applied. This will give users some representation and due process in the new, private system regulating their expression. Ultimately, platforms cannot rely on global norms about free speech — which do not exist — and must instead make hard choices about which values they want to uphold through their content-moderation rules. We conclude that platforms should adopt constitution-like charters to guide the independent institutions that should oversee them.
Congressman Sues Twitter
The Washington Post has an article discussing a recent lawsuit filed by Rep. Devin Nunes (R-Calif.), who claims “that Twitter, two parody Twitter accounts and a Republican political consultant violated the First Amendment and defamed him.” Most people familiar with Section 230 of the Communications Decency Act realize that the congressman is unlikely to succeed in his lawsuit. However, according to the article, it appears that Congressman Nunes’s long-term goal may not be winning his suit but rather laying the groundwork for future legal reform, i.e., getting the Supreme Court to reevaluate the standard for defaming public officials.
Judicial failure to recognize social media’s influence on juror decision-making has identifiable constitutional implications. The Sixth Amendment right to a fair trial demands that courts grant a defendant’s change of venue motion when media-generated pretrial publicity invades the unbiased sensibility of those who are asked to sit in judgment. Courts limit the publicity suitable for granting a defendant’s motion to information culled from newspapers, radio, and television reports. Since about 2014, however, a handful of defendants have introduced social media posts to support their claims of unconstitutional bias in the community. Despite defendants’ introduction of negative social media in support of their claims, these same courts have yet to include social media in their evaluation of pretrial publicity bias. But social media is media, and as this article demonstrates, trial court judges faced with deciding change of venue motions have a constitutional obligation to include social media in their evaluations.
The collective refusal to treat social media the same as biased television, radio, or print media suggests an erroneous assumption on the part of lower courts that social media is somehow different. This article identifies three rationales courts use to justify dismissing social media: social media is too recent a medium to fully understand and analyze, social media is not a legitimate news source, and social media is opinion based. The article’s application of pretrial social media publicity to long-standing Supreme Court change of venue doctrine, coupled with its exploration of scientific and social research on social media influence, debunks these lower court rationalizations.
This article demonstrates that the reluctance of courts to consider social media evidence when deciding whether to grant a motion for a change of venue violates a defendant’s Sixth Amendment right to a fair trial. On a larger scale, the article demands that courts embrace our new reality. Social media intersects with criminal justice, and our daily lives, in ways that demand judicial recognition.
Facebook Under Criminal Investigation
The hits for Facebook just keep coming. The latest setback for the company was announced this week and came in the form of a criminal investigation. According to media reports, a New York federal criminal grand jury is looking into whether Facebook broke the law when it entered into data deals with some of the largest technology companies in the world. The agreements, which have been phased out over the last two years, allowed companies to see a Facebook user’s friends, contact information, and other data. At times, this sharing of information occurred without user consent.
Forcing Someone Off Twitter
The Verge has an article about whether Elon Musk can be forced to give up Twitter. As some are aware, Musk has gotten into trouble over some of his tweets, and he might be well served to give Twitter a rest, at least for a while. The article, which raises some very interesting First Amendment issues, poses the question of whether a judge could actually order Musk to give up Twitter.
Lawyers, Social Media, and Emojis
Here is another story of a lawyer improperly using social media. This case may also be the first example of an unethical use of an emoji.
Chicago Police and Social Media
The ACLU recently issued a press release calling for the Chicago police to halt its monitoring of citizens on social media until a public hearing on the topic can be held. The ACLU is involved in active litigation over the police department’s use of social media to track and investigate Chicago residents.