Removing Content Expeditiously
In light of the recent deadly attacks on several mosques in New Zealand that were live streamed on Facebook, Australia has passed a new law that makes it a criminal offense for a social media provider to fail to remove violent content in an “expeditious” manner. While Facebook did ultimately take down the video, it was up for almost an hour and viewed thousands of times.
While I applaud the effort to regulate social media providers, I am not sure that this law was well thought out. In fact, most view it as a rushed, knee-jerk reaction to a horrific event. I am also not sure what more is expected from social media providers. While I agree that Facebook should have responded more quickly, it is unclear what counts as an acceptable length of time. The key here will be how Australian prosecutors define “expeditious.”
In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms aren’t bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression — particularly the entangled concepts of “public figures” and “newsworthiness.”
In this article, we offer the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. We first introduce the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. We then turn to the “New Governors” and examine how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, we offer lessons for both courts and platforms as they confront new challenges posed by online speech. We expose the pitfalls of using algorithms to identify public figures; we explore the diminished utility of setting rules based on voluntary involvement in public debate; and we analyze the dangers of ad-hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted our normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in constitutional law or content moderation.
Finally, we explore what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. We argue that these platforms act as legislature, executive, judiciary, and press — but without any separation of powers to establish checks and balances. With these realities exposed, we contend that platforms must separate their powers and create institutions like the Supreme Court to provide transparent decisions and consistent rationales on how concepts related to newsworthiness and public figures are applied. This will give users some representation and due process in the new, private system regulating their expression. Ultimately, platforms cannot rely on global norms about free speech — which do not exist — and must instead make hard choices about which values they want to uphold through their content-moderation rules. We conclude that platforms should adopt constitution-like charters to guide the independent institutions that should oversee them.
Congressman Sues Twitter
The Washington Post has an article discussing a recent lawsuit filed by Rep. Devin Nunes (R-Calif.), who claims “that Twitter, two parody Twitter accounts and a Republican political consultant violated the First Amendment and defamed him.” Most people familiar with Section 230 of the Communications Decency Act realize that the congressman is unlikely to succeed in his lawsuit. However, according to the article, it appears that Congressman Nunes’s long-term goal may not necessarily be winning his suit but rather laying the groundwork for future legal reform, i.e., getting the Supreme Court to reevaluate the standard for defaming public officials.
Judicial failure to recognize social media’s influence on juror decision making has identifiable constitutional implications. The Sixth Amendment right to a fair trial demands that courts grant a defendant’s change of venue motion when media-generated pretrial publicity invades the unbiased sensibility of those who are asked to sit in judgment. Courts limit the publicity suitable for granting a defendant’s motion to information culled from newspapers, radio, and television reports. Since about 2014, however, a handful of defendants have introduced social media posts to support their claims of unconstitutional bias in the community. Despite this evidence, courts have yet to include social media in their evaluation of pretrial publicity bias. But social media is media, and as this article demonstrates, trial court judges faced with deciding change of venue motions have a constitutional obligation to include social media in their evaluations.
The collective refusal to treat social media the same as biased television, radio, or print media suggests an erroneous assumption on the part of lower courts that social media is somehow different. This article identifies three rationales courts offer for dismissing social media: social media is too recent a medium to fully understand and analyze, social media is not a legitimate news source, and social media is opinion based. Applying pretrial social media publicity to long-standing Supreme Court change of venue doctrine, coupled with an exploration of scientific and social research on social media influence, debunks these lower court rationalizations.
This article demonstrates that the reluctance of courts to consider social media evidence when deciding whether to grant a motion for a change of venue is a violation of any defendant’s Sixth Amendment right to a fair trial. On a larger scale, the article demands that courts embrace our new reality. Social media intersects with criminal justice, and our daily lives, in ways that demand judicial recognition.