President Biden’s pick to become the new Secretary of Commerce, Rhode Island Governor Gina Raimondo, told members of the Senate that if confirmed she will look to modify Section 230 of the Communications Decency Act. Her proposed changes have yet to be put forward, and it remains to be seen whether they would receive Congressional support.
TheVerge.com: Biden’s Commerce nominee backs changes to Section 230
Twenty-five years ago, Eugene Volokh published his seminal article Cheap Speech and What It Will Do, predicting many of the consequences of the then-brand-new Internet. On the whole, Volokh’s tone was optimistic. While many of his predictions have indeed come true, many would argue that his optimism was overstated. Indeed, in recent years Internet giants generally, social media firms specifically, and Facebook and its CEO Mark Zuckerberg more specifically, have come under sharp and extensive criticism. Among other things, Facebook has been accused of violating its users’ privacy, of failing to remove content that constitutes stalking or personal harassment, of permitting domestic and foreign actors (notably Russia) to use fake accounts to manipulate American voters by disseminating false and misleading political speech, of failing to remove content that incites violence, and of excessive censorship of harmless content. Inevitably, critics of Facebook have proposed a number of regulatory solutions to Facebook’s alleged problems, ranging from regulating the firm’s use of personal data, to imposing liability on Facebook for harm caused by content on its platform, to treating Facebook as a utility, to even breaking up the company. Given the importance of Facebook, with over 2 billion users worldwide and a valuation of well over half a trillion dollars, these proposals raise serious questions.
This essay will argue that while Facebook is certainly not free of fault, many of the criticisms directed at it are overstated or confused. Furthermore, the criticisms contradict one another, because some of the solutions proposed to solve one set of problems—notably privacy—would undermine our ability to respond to other problems such as harassment, incitement and falsehood, and vice versa. More fundamentally, critics fail to confront the simple fact that Facebook and other Internet firms (notably Google) provide, without financial charge, services such as social media, searches, email, and mapping, which people want and value but whose provision entails costs. To propose regulatory “solutions” which would completely undermine the business model that permits these free services, without proposing alternatives and without taking into account the preferences and interests of Facebook users—especially in poor and autocratic countries where, for all of its conceded problems, Facebook provides important and even essential services—is problematic at best. Finally, the failure of critics to seriously consider whether the First Amendment would even permit many of the regulatory approaches they propose, all in the name of preserving democracy and civil dialogue, raises questions about the seriousness of some of these critics.
Ultimately, this essay argues that aside from some limited regulatory initiatives, we should probably embrace humility. This means, first, that unthinkingly importing old approaches such as a utility or publisher model to social media is wrong-headed, and will surely do harm without accomplishing its goals. Other proposals, on the other hand, might “solve” some problems, but at the cost of killing the goose that lays the golden egg. For now, the best path might well be the one we are on: supporting sensible, narrow reforms, but otherwise muddling along with a light regulatory touch, while encouraging/pressuring companies to adopt voluntary policies such as Twitter’s recent ban on political advertising, Google’s restrictions on micro-targeted political ads, and Facebook’s prohibitions on electoral manipulation. Before we take potentially dangerous and unconstitutional legislative action, perhaps we should first see how these experiments evolve and work out. After all, social media is less than two decades old, and there is still much we need to learn before thoughtful and effective regulation is plausible.
Last night 60 Minutes had an interesting piece on Section 230 of the Communications Decency Act, which provides immunity to online platforms like Facebook.
In remarkably short order, there has been growing convergence around the idea that major social media platforms should use international human rights law (IHRL) as the basis for their content moderation rules, and even platforms themselves have begun to agree. But why have these legendarily growth-obsessed companies been so quick to voluntarily say they are jumping on this bandwagon, which its advocates generally envision should operate as a constraint on their operations? Among the possible reasons, there are both encouraging and less encouraging answers. For the glass-half-full types, there is the straightforward explanation that perhaps these companies do genuinely care about human rights. But there is also a less optimistic possibility: companies are embracing the terminology so readily because they know that in reality it will not act as much of a constraint at all. This is the prospect explored in this article. This article is a sympathetic critique of the contributions IHRL can make to content moderation, highlighting the very real limits of IHRL as a practical guide to what platforms should do in many, if not most, difficult cases. It surveys the many arguments in favor of IHRL as a basis for content moderation rules. Ultimately, however, it argues that failing to acknowledge the considerable limitations of IHRL in this context will only serve the interests of platforms rather than their users: it gives platforms legitimacy dividends they will not pay for, allowing them to wrap themselves in the language of IHRL even as what that body of norms requires remains indeterminate and contested.
Berkeley Law Center Creates First Global Protocol On Using Social Media As Evidence For War Crimes
Patch.com: A three-year joint effort by Berkeley Law’s Human Rights Center (HRC) and the U.N. Human Rights Office, the Protocol marks the first global guidelines for using publicly available information online — including photos, videos, and other content posted to social media sites — as evidence in international criminal and human rights investigations...to continue reading go here.
This article deals with the problem of hate speech on the Internet. The very mode of the Internet is access, and the Internet’s ready access to ideas is in many ways its most attractive feature. Indeed, the Internet is a harbor for all kinds of speech, including hate speech. But as recent events in Charleston, Pittsburgh and El Paso demonstrate, hate speech, such as white supremacist websites, has lethal consequences.
The interactivity of a social media platform, the camaraderie that it engenders, and the rage that it fosters provide a means of radicalization unknown to the more traditional media. A federal statute, 47 U.S.C. Sec. 230(c)(1), plays a significant role here. The thrust of this provision is to free online service providers from liability for whatever harmful effects flow from the content they transmit. Furthermore, under First Amendment law, hate speech is, generally speaking, protected speech. The Supreme Court has dealt with hate speech in non-Internet contexts. This case law is analyzed to determine what guidelines it provides for the new phenomenon of hate speech on the Internet. In addition, the article suggests a possible amendment to Sec. 230(c)(1).
Facebook Facing Antitrust Suit
WAPO: The U.S. government and 48 attorneys general filed landmark antitrust lawsuits against Facebook on Wednesday, seeking to break up the social networking giant over charges it engaged in illegal, anti-competitive tactics to buy, bully and kill its rivals...to continue reading go here.
Facebook’s Oversight Board Starting to Hear Cases
Facebook’s Oversight Board, which reviews Facebook moderation decisions, has received its first six cases. Five of the cases were brought by users and one comes from Facebook itself. These cases are now open for public comment for seven days. Once the comment period closes, the Board will determine whether the posts should have been taken down. Many, including Mark Zuckerberg, view the Board, which is an independent body, as Facebook’s Supreme Court. To read more about this story go here.
As a former agency head explains, antitrust litigation is like fishing: “everybody likes to catch them, but nobody wants to clean them.” Antitrust enforcers around the world are eager to catch digital platforms with monopolization cases, but little attention is being paid to the remedies that will follow.
This article examines a new source of complexity for those monopolization remedies — data privacy. In particular, it considers remedies that require access to, or disclosure of, the information held by digital platforms in order to restore online competition. How are such “data access” remedies impacted by the rise of consumer data privacy law?
As the article explains, neither current theory nor past monopolization cases answer this question. Existing theories on the interface between antitrust law and data privacy are focused on liability. Their application may therefore miss the distinct privacy impacts that arise at the remedies stage of a case. Past monopolization cases that ended in data access remedies often ordered disclosure of company, not consumer, information. Individual data privacy was simply not relevant. The rare historical cases that ordered disclosure of consumer information pre-date the rise of U.S. data privacy law from the mid-1990s to the present. For the first time, antitrust remedies may well have to contend with consumer privacy protection, and the control such protection can impart over competitively important data.
The article calls for antitrust analysis to consider data privacy in the design of remedies, particularly for digital platforms. Without such analysis, remedies may unwittingly cause privacy harms that outweigh the benefits to consumers from restored competition. A remedy that causes such a reduction in consumer welfare would undermine the purpose of bringing antitrust enforcement action.
The article concludes with discussion of two potential approaches for implementing the proposal. The first focuses on obtaining consumer consent to remedial disclosure and use of data. The second focuses on legislative or judicial definitions of data privacy interests that exclude remedial disclosure. Both demand careful consideration of consumer privacy, and the new complexity it creates for monopolization relief.
Facebook May Face Antitrust Suits
Several states and the federal government appear to be on their way to bringing antitrust lawsuits against Facebook over its purchases of Instagram and WhatsApp. According to media reports, these purchases have essentially cleared the field and left consumers with no quality alternatives.
To read more about the possible lawsuit go here.