# Better Rules: The Appropriate Use of Social Media in Rulemaking

Stephen M. Johnson



In December 2015, the Government Accountability Office (GAO) found that the Environmental Protection Agency’s (EPA) use of social media tools in a rulemaking under the Clean Water Act violated prohibitions in federal appropriations laws against publicity, propaganda, and lobbying. Although academics previously explored whether the use of technology in rulemaking might violate the Administrative Procedure Act (APA), the Paperwork Reduction Act, or the Federal Advisory Committee Act, none predicted that one of the first firestorms surrounding the use of social media in rulemaking would arise out of federal appropriations laws.

While critics of the EPA “waters of the United States” (WOTUS) rule vigorously chastised the agency for its “illegal” activity, a close reading of the GAO report indicates that the agency’s violations of appropriations rules were relatively minor and could be easily avoided in the future. Despite the outcry in the wake of the report, an analysis of the appropriations legislation demonstrates that it poses very few restrictions, in practice, on agencies’ use of social media in rulemaking.

However, an analysis of the WOTUS rulemaking and the manner in which EPA used social media in the rulemaking demonstrates that agencies may decide to use social media in rulemaking for a variety of reasons, some of which are more legally defensible than others. Proponents of the use of social media in rulemaking tout its potential for educating the public, gathering more information from a broader range of participants, and developing better, more democratic and more widely accepted rules. However, an agency might also be tempted to use social media tools in the rulemaking process to evangelize, rather than educate, and to contour information (selectively promote the submission of information to support a pre-determined outcome). While EPA did not cross that line in the WOTUS rulemaking, when an agency uses social media to evangelize and contour information, it can run afoul not only of prohibitions in appropriations laws, but also of requirements of the APA. Although the violation of appropriations laws would only trigger minor sanctions, violation of the APA requirements could trigger invalidation of the agency rule.

As the Administrative Conference of the United States recently recommended, therefore, agencies should think carefully about what legitimate goals they expect to achieve through the use of social media in rulemaking before embarking on rulemaking and should develop a strategy for using social media tools in a manner that best achieves those legitimate goals.

This article examines the benefits of using social media in rulemaking, the limitations imposed on the use of social media by appropriations laws and the APA, and the practical considerations involved in choosing the right mix of social media tools for rulemaking. Part I of the article outlines the various goals that agencies might have when choosing to use social media tools in rulemaking. Part II explores the variety of social media tools that are available to agencies and provides a brief overview of federal support for the use of those tools in rulemaking. Part III examines the extent to which social media tools can actually achieve the goals that motivate agencies to use them. Part IV outlines the limits imposed on agencies’ use of social media by appropriations laws and Part V outlines the APA challenges that might be raised if agencies use social media to evangelize and contour information in the rulemaking process. Finally, Part VI provides some concluding suggestions regarding the appropriate use of social media tools in rulemaking.

Arstechnica.com: Man accused of sending a seizure-inducing tweet charged with cyberstalking


DHS’ Pilots for Social Media Screening Need Increased Rigor to Ensure Scalability and Long-term Success



According to this recently released report, DHS’s use of social media screening does not appear to be meeting expectations.


Search Engines, Free Speech Coverage, and the Limits of Analogical Reasoning

Heather Whitney


Mark Simpson




In this Essay we investigate whether new modes of online communication ought to be covered by free speech principles. The types of communications we’re interested in are the outputs of programs that synthesize, organize, and transmit third-party communications to users. Search engine results are the most familiar and ubiquitous example of this. Another notable example is Trending News on Facebook. While we are interested in the immediate policy implications of this classificatory question, this Essay also puts forth a broader methodological critique. More specifically, we critique the analogical reasoning – which likens search engine results to editorial publications – that has dominated First Amendment case law and legal scholarship around this issue to date. We argue that this analogy is unpersuasive on its own terms, and that proponents of this analogy have failed to properly engage with rival analogical interpretations of the function of search engines that favor the opposite conclusion (i.e., that search engine results should not receive free speech coverage). We argue that this is indicative of shortcomings with analogical methods in debates over free speech coverage more generally. When important and novel forms of communicative practice are being considered, courts should not use tenuous analogical inferences to extend First Amendment protections beyond their established scope. Here we present, for the first time in this literature, a taxonomy of different types of analogical reasoning, in order to offer a more precise account of the specific forms of analogical reasoning that we’re objecting to. We then explain the comparative merits of addressing questions of coverage in a way that primarily relies on reference to three normative theories of free speech: democratic participation, epistemic, and thinker-based. We argue that under each of these theories – contrary to the currently prevailing view in both the cases and existing scholarship – only a subset of search engine results and similar communications should in fact be afforded free speech coverage.

Commission on Judicial Conduct warns judges of social media perils



The New York State Commission on Judicial Conduct is warning judges that impartiality on the bench should translate to impartiality on Facebook and Twitter as well…

The Right Tools: Europe’s Intermediary Liability Laws and the 2016 General Data Protection Regulation

Daphne Keller




The so-called “Right to Be Forgotten” established by the Court of Justice of the European Union in 2014 is about to change. The EU’s General Data Protection Regulation (GDPR), which goes into effect in 2018, introduces new notice-and-takedown rules for online information targeted by “Right to Be Forgotten” erasure requests. As drafted, the new rules make deliberate or accidental over-removal of online information far too likely. They give private Internet platforms powerful incentives to erase or de-list user-generated content – whether or not that content, or the intermediaries’ processing of the content, actually violates the law. They also create new data disclosure obligations that undermine privacy and Data Protection rights for people who post content online. These problems could be mitigated, without threatening the important privacy protections established by the GDPR, through procedural checks and balances in the platforms’ removal operations.

This article details the problematic GDPR provisions, examines the convergence of European Data Protection and Intermediary Liability Law, and proposes ways that the EU’s own Intermediary Liability laws can restore balanced protections for privacy and information rights. Throughout, it focuses on the motivations and likely real-world behavior of online platforms, drawing on the author’s extensive experience as Google’s Associate General Counsel for Intermediary Liability and as Intermediary Liability Director at Stanford Law School’s Center for Internet and Society. It includes close examinations of:

  • Whether and how the “Right to Be Forgotten” may apply to user-generated content hosts like Twitter or Facebook;
  • Free expression provisions in the GDPR;
  • The GDPR’s extraterritorial reach and consequences for companies outside the EU;
  • Doctrinal tensions between the EU’s Intermediary Liability law under the eCommerce Directive, and its Data Protection law under the 1995 Data Protection Directive and the new GDPR; and
  • Human rights and fundamental rights laws governing online notice and takedown operations.

ACLU Challenges Police Warrant for Dakota Access Pipeline Facebook Page



Law enforcement in Washington has obtained a warrant to search a Facebook page dedicated to a group protesting the Dakota Access Pipeline. The warrant requests the account information of those who have interacted with the group’s page. In addition, the warrant seeks “messages, photos, videos, wall posts, and location information” dating from February 4 to February 15. That window covers a February 11 march in downtown Bellingham, Washington, when protesters shut down I-5 for over an hour after learning of the government’s decision to complete the pipeline. The ACLU has filed a motion to quash the warrant, which will be argued in court on Tuesday.
