
Monthly Archives: April 2018

Jurors and Facebook


Juror use of Facebook during trial has led to another criminal conviction being overturned.

State v. Christensen

Uganda and Tanzania to Impose Fees on Local Bloggers


While the Internet has in many ways made the world more democratic, it has also in certain instances led to greater restrictions on free speech (e.g., Germany’s new hate speech law).  Other examples include efforts in both Uganda and Tanzania to impose fees on local bloggers.  In Tanzania, bloggers must not only register with the government but also pay more than US$900 if they want to blog.  This fee would be an onerous sum in any Western country, but in Tanzania, where per capita GDP was $867 in 2016, it exceeds what the average person earns in a year and guarantees that only a select few will be able to blog.  As for Uganda, a proposed daily Internet tax on bloggers would run either 100 or 200 shillings a day (roughly US$0.013 or US$0.027).  The tax would not apply to those who use the Internet for educational, research, or reference purposes; it would apply to those who engage in so-called “lugambo,” which roughly translates to rumor and gossip.

To read more about these stories, go here.

Facebook Releases Guidelines for Removing Content


While there have been leaked reports in the past, this is the first time Facebook has publicly released the guidelines it uses to determine whether to remove content from its site.  The guidebook is a must-have for those with clients who regularly post on Facebook.

Social Media and Law Enforcement


Here are two interesting stories about police use of social media. In the first, the Wilmington Police Department discusses the various ways it employs Facebook to investigate and apprehend suspects and to connect with the community.  In the second, Wired ran an op-ed discussing social media mining by law enforcement.  While the op-ed appears to support the practice, it raises concerns about potential constitutional violations and encroachment on user privacy.  The article cites several examples, including police searching terms such as “#blacklivesmatter” and “police brutality” on Facebook and Twitter to identify individuals of interest.

Remediating Social Media: Why Layers Still Matter for Internet Policy

Annemarie Bridy


Abstract

2017 was a bad year for the Internet. Journalists trying to get to the bottom of the Russian election meddling story discovered pathologies of the Internet’s attention economy that legal and media scholars have been writing about for several years. From filter bubbles and clickbait to revenge porn and “fake news,” the antisocial effects of social media are now front and center in a serious public debate about the future of the Internet and the firms that have come to dominate it. As the public learns more about the ease with which the Internet’s most popular platforms can be exploited to harass, deceive, and manipulate their users, there is a growing consensus that the Internet is broken and that tech titans dominating the Internet’s edge are largely to blame.

The drumbeat for a regulatory response is getting louder. And it’s coming from points across the political spectrum. Some are calling for interventions in the area of antitrust law. Others have proposed imposing at the Internet’s application layer content neutrality rules that have historically applied only at the network layer. To describe such rules, conservative activist Phil Kerpen coined the term “layer-neutral net neutrality.” Supporters of this approach assert that rules requiring social media platforms to behave like network infrastructure providers in their handling of users’ content will enhance freedom of expression and limit the role of dominant platforms as gatekeepers of the privatized public sphere. Former Democratic Senator Al Franken offered the same rationale in an op-ed in The Guardian. Franken wrote that “no one company should have the power to pick and choose which content reaches consumers and which doesn’t. And Facebook, Google, and Amazon—like ISPs—should be ‘neutral’ in their treatment of lawful information and commerce on their platforms.”

This article is a high-level effort to explain, in terms of both regulatory history and shifting public attitudes about online speech, why adopting a must-carry obligation for social media platforms is not what the Internet needs now. Such a requirement would more likely exacerbate than remediate social media’s current problems with information quality and integrity. Part I discusses the historical layer-consciousness of Internet regulation and explains the public policies underlying differential treatment of “core” and “edge” services. Part II considers evolving speech norms at the Internet’s edge and the increasing pressure on social media platforms to more actively address some demonstrable failures in social media’s “marketplace of ideas.” Part III argues that a must-carry rule for social media platforms is precisely the wrong regulatory approach for addressing those failures. The better prescription, I argue, is to breathe new life into the underused “Good Samaritan” provision in § 230 of the Communications Decency Act, which was intended to protect and promote good faith content moderation at the Internet’s edge. What the Internet needs now is not layer-neutral net neutrality; it is an awakening to what James Grimmelmann has called “the virtues of moderation.”

Toward a First Amendment Jurisprudence for the Platform Economy

Kyle Langvardt


Abstract

Social media platforms have emerged as formidable regulators of online discourse, and their influence only grows as more speech activity migrates to online spaces. The platforms have come under heavy criticism, however, after revelations about Facebook’s role in amplifying disinformation and polarization during the 2016 presidential election. Policymakers have begun to discuss an official response, but what they envision – namely, a set of rules for online political ads – addresses only a small corner of a much wider set of problems. Their hesitancy to go deeper is understandable. How would government even go about regulating a social platform, and if it did, how would it do so without intruding too far on the freedom of speech?

This Article takes an early, panoramic view of the challenge. It begins with a conceptual overview of the problem: what kinds of risks do online platforms present, and what makes these risks novel compared to traditional First Amendment concerns? The Article then outlines the eclectic and sometimes exotic policies regulators might someday apply to problems including false news, private censorship, ideological polarization, and online addiction. Finally, the Article suggests some high-level directions for First Amendment jurisprudence as it adapts to online platforms’ new and radically disruptive presence in the marketplace of ideas.