As with other technological revolutions before it, such as the printing press, radio, and telephone, social media has changed the way people communicate. Due in part to cases involving employees' use of social media, the often little-known National Labor Relations Board (NLRB or Board) has become the center of national media attention. In these cases, the Board simply applies well-established, decades-old legal principles. Yet employers, business groups, and the media have portrayed the Board as deviating from longstanding precedent, overstepping its role in regulating employment, and misunderstanding the impact of social media. No Federal Circuit Court, to which Board decisions are appealed, has yet denied enforcement of a Board decision in a case involving social media. While other scholars have contributed to the buzz surrounding the Board's decisions by arguing that the Board is wrong to apply its precedent to social media because the technology differs from prior technologies, this article argues that the Board has properly drawn on the expertise gained from many decades of enforcing labor-management relations to extend its precedent flexibly to the new technology. The article first summarizes the Board's decisions and guidance about employees' use of social media and employer policies regulating that use. It then discusses four simple clarifications that the Board should make in future decisions in order to make its regulation easier for employers and employees to understand and follow. First, the Board should clarify that whenever more than one employee is involved in a social media discussion, the employees act concertedly. Second, the Board should clarify that employees act for mutual aid and protection when they discuss working conditions, whether or not they explicitly focus on improving those conditions.
Third, the Board should clarify how it will determine when employees engaged in otherwise protected concerted activity lose the protection of the Act due to the egregious nature of their social media use. Finally, the Board should clarify whether provision-specific disclaimers providing concrete examples of what constitutes protected concerted activity will suffice to render a social media policy lawful. These clarifications will enhance the likelihood that the Circuit Courts will continue to enforce Board decisions involving social media. Moreover, because these clarifications have not been discussed in other scholars' articles, they contribute to the growing literature on this topic.
This introduction to a special issue of “Telecommunications Policy” entitled “The Governance of Social Media” begins with a definition of social media that informs all contributions to the special issue. It then describes the challenges associated with governing social media and concludes with an overview of the articles included in the issue.
While the Internet and the World Wide Web have always been used to facilitate social interaction, the emergence and rapid diffusion of Web 2.0 functionalities during the first decade of the new millennium enabled an evolutionary leap forward in the social component of web use. These functionalities, together with falling costs for online data storage, made it feasible for the first time to offer masses of Internet users access to an array of user-centric spaces they could populate with user-generated content, along with a correspondingly diverse set of opportunities for linking these spaces together to form virtual social networks.
To define “social media” for our current purposes, we synthesize definitions presented in the literature and identify the following commonalities among current social media services:
1) Social media services are (currently) Web 2.0 Internet-based applications;
2) User-generated content is the lifeblood of social media;
3) Individuals and groups create user-specific profiles for a site or app designed and maintained by a social media service;
4) Social media services facilitate the development of social networks online by connecting a profile with those of other individuals and/or groups.
Transformative communication technologies have always called for regulatory innovation. Theodor Vail’s vision of “one policy, one system, universal service” preceded more than one hundred years of innovative regulations aimed at connecting all Americans to a single telephone network. The sinking of the Titanic, caused in part by “chaos in the spectrum,” led to the Radio Act of 1912 and the creation of a command-and-control model designed to regulate broadcast radio. Safe-harbor hours were put in place after a father and son heard George Carlin’s “seven dirty words” routine over their car radio. The fairness doctrine and the minority tax certificate program were designed to address inequalities in the broadcast television industry. The Digital Millennium Copyright Act responded to intellectual property concerns raised by a global Internet, and the FCC’s 700 MHz auction was a response to demand for smarter mobile phones. Now we must consider the role of regulatory innovation in response to the emergence of social media.
Twitter has profoundly changed how people communicate with one another and learn about the world. In less than a decade since its launch, Twitter has become the place where news breaks first, where political revolutions are launched, and where presidential campaigns are conducted. The service has more than half a billion users, who use Twitter to talk about the news, follow celebrities, support sports teams, conduct business, and learn about one another. Twitter has touched every area of human interaction, and the law is no exception. Thus, although no member of the Supreme Court uses Twitter officially (yet), the world needs to know which Justice is most “tweetable.” This paper uses data from the SCOTUS Search database to rank the Justices by whether their oral argument statements are fit to be tweeted.
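The abstract does not specify how "tweetability" is measured. As a purely illustrative sketch, assuming a statement is "tweetable" when it fits within Twitter's then-current 140-character limit, and using an invented toy corpus (the actual paper's methodology and data may differ), a ranking could be computed like this:

```python
# Hypothetical sketch of a tweetability ranking; the metric, data format,
# and example statements below are assumptions, not the paper's method.

TWEET_LIMIT = 140  # Twitter's character limit at the time


def tweetability(statements):
    """Fraction of a Justice's oral-argument statements within TWEET_LIMIT."""
    if not statements:
        return 0.0
    return sum(len(s) <= TWEET_LIMIT for s in statements) / len(statements)


def rank_justices(corpus):
    """Rank Justices (name -> list of statements) by tweetability, descending."""
    return sorted(corpus, key=lambda name: tweetability(corpus[name]), reverse=True)


# Toy corpus with invented statements:
corpus = {
    "Justice A": ["Counsel, that is not the question.", "x" * 200],
    "Justice B": ["Why?", "Could you clarify?", "I see."],
}
print(rank_justices(corpus))  # → ['Justice B', 'Justice A']
```

Justice B ranks first because every one of B's (invented) statements fits the limit, while only half of A's do; any real ranking would of course depend on the actual transcript data.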
Adam B. Thimmesch
The technological developments of recent decades have allowed data to emerge as the functional equivalent of a currency in the digital economy. One result is that individuals now have the ability to obtain a wide variety of benefits, from cash discounts to access to news, social media, and online software, in exchange for their personal data. Scholars in a variety of fields recognize these personal-data transfers as market exchanges and have questioned the functioning and impact of the personal-data market. That market is currently invisible, however, for tax purposes. Neither the receipt of an individual’s data as consideration for digital products nor an individual’s receipt of benefits in exchange for data is currently recognized as a taxable event. The result is not only potentially lost tax revenue, but also a tax preference for the use of data as a currency. This implicit personal-data tax exemption thus has implications for the design of our tax systems in the new economy and for how we regulate the data market in other ways. The article therefore offers further insight into the tax-reform efforts directed towards base erosion and profit shifting in the digital economy, encourages thought regarding the use of alternative tax instruments to address the erosion of traditional tax bases and to promote beneficial data practices, and urges the recognition of the tax preference for data in the broader U.S. regulatory structure related to data and personal privacy.
This invited Symposium piece considers the new norm of investigating potential jurors online. The piece proceeds in three parts. First, it examines the current state of jury investigations and how they differ from those conducted in the past. Then, it describes the evolving legal and ethical positions that are combining to encourage such investigations. Finally, it offers a note of caution: condoning such investigations while keeping them hidden from jurors may be perceived as unfair and exploitative, risking a backlash from outraged jurors. Instead, I propose a modest measure to provide notice and explanation to jurors that their online information is likely to be searched, and why.
It has long been discussed whether individuals should have a “right to be forgotten” online to suppress old information that could seriously interfere with their privacy and data protection rights. In the landmark case of Google Spain v AEPD, the Court of Justice of the European Union addressed the particular question of whether, under EU Data Protection Law, individuals have a right to have links delisted from the list of search results, in searches made on the basis of their name. It found that they do have this right – which can be best described as a “right to be delisted” – when some conditions are met.
The ruling, which imposes on search engines the duty to assess and accommodate delisting requests, has proven to be highly controversial. Strong feelings have been expressed both in favor of and against it, in what may be seen as a clash between the values of personal data protection and freedom of expression.
This article does not delve into this underlying debate. Instead, it aims to explore the solidity of the ground on which the right is based. It begins by providing an overview of the relevant elements of EU data protection law so that readers unfamiliar with its nuances can follow the discussion. After presenting the facts of Google Spain at both the national and EU levels, the article discusses how the “right to be delisted” was crafted by the CJEU. It argues that the right rests on shaky ground, as it is premised on the characterization of search engines as “data controllers,” which is arguably at odds with their intermediary role and – in the absence of specific safeguards – makes their activity largely incompatible with the data protection legal framework. Moreover, the article discusses how the Court failed to devise a proper balance of the different rights at stake, particularly freedom of expression and information. It suggests that the intermediary role of generalist search engines should be adequately protected, under both the data protection legal framework and the liability limitation scheme established by the E-Commerce Directive. This, however, is not likely to be achieved in the near future. A careful approach by national courts and data protection authorities is thus suggested as a way to fix some of the shortcomings identified in the ruling.
Arianna R. Levinson