Uganda and Tanzania to Impose Fees on Local Bloggers
While the Internet has in many ways made the world more democratic, it has in certain instances led to greater restrictions on free speech; Germany’s new hate speech law is one example. Others include efforts in both Uganda and Tanzania to impose fees on local bloggers. In Tanzania, bloggers must not only register with the government but also pay a fee of more than $900 (USD) if they want to blog. That fee would be an onerous sum in any Western country, but in Tanzania, where per capita GDP was $867 in 2016, it guarantees that only a select few individuals will be able to blog. Uganda, for its part, has proposed a daily internet tax for bloggers of either 100 or 200 shillings a day (equivalent to roughly $0.013 or $0.027). The tax would not apply to those who use the Internet for educational, research, or reference purposes; it would apply to those who engage in so-called “lugambo,” which roughly translates to rumor and gossip.
To read more about these stories, go here.
Facebook Releases Guidelines for Removing Content
While there have been leaked reports in the past, this is the first time that Facebook has publicly released the guidelines it uses to determine whether to remove content from its site. The guidebook is a must-have for those with clients who regularly post on Facebook.
Social Media and Law Enforcement
Here are two interesting stories about the use of social media by police. In the first story, the Wilmington Police Department discusses the various ways it employs Facebook to investigate and apprehend suspects and to connect with the community. In the second story, Wired Magazine ran an op-ed discussing social media mining by law enforcement. While the op-ed appears to support the practice, it raised concerns about the potential for constitutional violations and encroachment on user privacy. The article cited several examples, including police searching terms such as “#blacklivesmatter” and “police brutality” on Facebook and Twitter to identify individuals of interest.
2017 was a bad year for the Internet. Journalists trying to get to the bottom of the Russian election meddling story discovered pathologies of the Internet’s attention economy that legal and media scholars have been writing about for several years. From filter bubbles and clickbait to revenge porn and “fake news,” the antisocial effects of social media are now front and center in a serious public debate about the future of the Internet and the firms that have come to dominate it. As the public learns more about the ease with which the Internet’s most popular platforms can be exploited to harass, deceive, and manipulate their users, there is a growing consensus that the Internet is broken and that tech titans dominating the Internet’s edge are largely to blame.
The drumbeat for a regulatory response is getting louder. And it’s coming from points across the political spectrum. Some are calling for interventions in the area of antitrust law. Others have proposed imposing content-neutrality rules at the Internet’s application layer, rules that have historically applied only at the network layer. To describe such rules, conservative activist Phil Kerpen coined the term “layer-neutral net neutrality.” Supporters of this approach assert that rules requiring social media platforms to behave like network infrastructure providers in their handling of users’ content will enhance freedom of expression and limit the role of dominant platforms as gatekeepers of the privatized public sphere. Former Democratic Senator Al Franken offered the same rationale in an op-ed in The Guardian. Franken wrote that “no one company should have the power to pick and choose which content reaches consumers and which doesn’t. And Facebook, Google, and Amazon—like ISPs—should be ‘neutral’ in their treatment of lawful information and commerce on their platforms.”
This article is a high-level effort to explain, in terms of both regulatory history and shifting public attitudes about online speech, why adopting a must-carry obligation for social media platforms is not what the Internet needs now. Such a requirement would more likely exacerbate than remediate social media’s current problems with information quality and integrity. Part I discusses the historical layer-consciousness of Internet regulation and explains the public policies underlying differential treatment of “core” and “edge” services. Part II considers evolving speech norms at the Internet’s edge and the increasing pressure on social media platforms to more actively address some demonstrable failures in social media’s “marketplace of ideas.” Part III argues that a must-carry rule for social media platforms is precisely the wrong regulatory approach for addressing those failures. The better prescription, I argue, is to breathe new life into the underused “Good Samaritan” provision in § 230 of the Communications Decency Act, which was intended to protect and promote good faith content moderation at the Internet’s edge. What the Internet needs now is not layer-neutral net neutrality; it is an awakening to what James Grimmelmann has called “the virtues of moderation.”
Social media platforms have emerged as formidable regulators of online discourse, and their influence only grows as more speech activity migrates to online spaces. The platforms have come under heavy criticism, however, after revelations about Facebook’s role in amplifying disinformation and polarization during the 2016 presidential election. Policymakers have begun to discuss an official response, but what they envision – namely, a set of rules for online political ads – addresses only a small corner of a much wider set of problems. Their hesitancy to go deeper is understandable. How would government even go about regulating a social platform, and if it did, how would it do so without intruding too far on the freedom of speech?
This Article takes an early, panoramic view of the challenge. It begins with a conceptual overview of the problem: what kinds of risks do online platforms present, and what makes these risks novel compared to traditional First Amendment concerns? The Article then outlines the eclectic and sometimes exotic policies regulators might someday apply to problems including false news, private censorship, ideological polarization, and online addiction. Finally, the Article suggests some high-level directions for First Amendment jurisprudence as it adapts to online platforms’ new and radically disruptive presence in the marketplace of ideas.
State Department to Increase Social Media Vetting of Visa Seekers
The State Department announced yesterday in the Federal Register that it seeks public comment on proposed new requirements for those requesting visas to enter the U.S. Specifically, the State Department wants to require visa applicants, approximately 15 million people annually, to list a number of social media platforms and any account names or identifiers they have used with those platforms over the past five years. The new rule would also allow the applicant to volunteer information about social media accounts not listed in the application.
John Roberts assumed his position as Chief Justice of the United States just prior to the commencement of the October 2005 Term of the Supreme Court. That was seven years after Google was incorporated, one year before Facebook became available to the general public, and two years before Apple released the first iPhone. The twelve years of the Roberts Court have thus been a period of constant and radical technological innovation and change, particularly in the areas of mass communication and the media. It is therefore somewhat astonishing how few of the Roberts Court’s free speech decisions touch upon new technology and technological change. Indeed, it can be argued that only two cases directly address new technology: Brown v. Entertainment Merchants Association on video games, and Packingham v. North Carolina on social media. Packingham, it should be noted, is the only Roberts Court free speech case directly implicating the Internet. Even if one extends the definition of cases addressing technology (as I do), only four cases, at most, can be said to address technology and free speech.
It seems inevitable that, going forward, this will change. In particular, recent calls to regulate “fake news” and otherwise impose filtering obligations on search engines and social media companies will inevitably raise important and difficult First Amendment issues. This is therefore a good time to consider how the Roberts Court has to date reacted to technology, and what that portends for the future. This paper examines the Roberts Court’s free speech/technology jurisprudence (as well as touching upon a few earlier cases), with a view to doing just that. The pattern that emerges is a fundamental dichotomy: some Justices are inclined to be Candides, and others to be Cassandras. Candide is the main character of Voltaire’s satire Candide, ou l’Optimisme, famous for repeating his teacher Professor Pangloss’s mantra that “all is for the best” in the “best of all possible worlds.” Cassandra was the daughter of King Priam and Queen Hecuba of Troy in Greek mythology, condemned by the god Apollo to accurately prophesy disaster but never to be believed. While not all Justices fit firmly within one or the other camp, the Roberts Court is divided relatively evenly between technology optimists and technology pessimists.
The paper begins by analyzing the key technology/free speech decisions of the Roberts Court and classifying the current Justices as Candides or Cassandras based on their opinions or votes in those cases. In the remainder of the paper, I offer some thoughts on two obvious questions. First, why is the Court divided between Candides and Cassandras, and what qualities explain the divergence (spoiler: it is not simply partisan or political preferences)? And second, what does this division portend for the future? As we shall see, my views on the first issue are consistent with, and indeed closely tied to, Greg Magarian’s analysis of Managed Speech on the Roberts Court. On the second question, I am modestly (but only modestly) optimistic that the Candides will prevail and that the Court will not respond with fear to new technology. I am, in other words, hopeful that the Court will fend off heavy-handed efforts to assert state control over the Internet and social media, despite the obvious threats and concerns associated with that technology. I close by considering some possible regulatory scenarios and how the Court might respond to them.