Monthly Archives: July 2015

Accountability in Algorithmic Enforcement: Lessons from Copyright Enforcement by Online Intermediaries

Maayan Perel (Filmar)

University of Haifa – Faculty of Law

Niva Elkin-Koren

University of Haifa – Faculty of Law

Abstract:

Recent years have seen a growing use of algorithmic law enforcement by online intermediaries. A recent example is the ruling of the European Court of Justice on the Right to Be Forgotten (RTBF), requiring search engines to remove certain search results upon a user’s request. The implementation of the ruling involves handling a vast number of individual removal requests concerning millions of URLs, and therefore must be performed by an algorithm.

Algorithmic enforcement by online intermediaries is not new, however. Copyright law has been at the forefront of algorithmic law enforcement since the late 1990s, when the Digital Millennium Copyright Act (DMCA) established the Notice and Takedown (N&TD) regime, conferring safe harbor protection on online intermediaries that remove allegedly infringing content upon notice. In the nearly two decades since, the N&TD regime has become ubiquitous and embedded in the system design of all major intermediaries, as algorithms are used to monitor, filter, block, and disable access to allegedly infringing content.
Notwithstanding their critical role in shaping access to online content and facilitating public discourse, intermediaries are hardly held accountable for algorithmic enforcement. We simply do not know which allegedly infringing material triggers the algorithms, how decisions regarding content restrictions are made, who makes those decisions, and how targeted users might affect them.

Algorithmic enforcement by online intermediaries reflects a fundamental shift in our system of governance. It effectively converges law enforcement and adjudication powers in the hands of a small number of mega-platforms: profit-maximizing, and possibly biased, private entities. The robust online infrastructure of algorithmic law enforcement raises critical challenges to the rule of law, due process, and accountability. In essence, algorithmic law enforcement by online intermediaries lacks sufficient measures to ensure accountability, namely, the extent to which online intermediaries are expected to justify their choices, are answerable for their actions, and are held responsible for their failures and wrongdoings. Lessons drawn from algorithmic copyright enforcement by online intermediaries could offer a valuable case study for addressing these concerns.

This Article proposes a novel framework for analyzing accountability in algorithmic enforcement based on three factors: transparency, due process, and public oversight. It identifies the accountability deficiencies in algorithmic copyright enforcement and further maps the barriers to enhancing accountability, including technical barriers of non-transparency and machine learning, legal barriers that prohibit black-box tinkering, and user-related barriers. Finally, the Article explores different strategies for enhancing accountability by increasing public literacy and transparency in algorithmic copyright enforcement. These strategies include watchdog initiatives, intermediaries' voluntary transparency reports, and regulatory mechanisms of mandatory disclosure.

Reporting, Reviewing, and Responding to Harassment on Twitter

J. Nathan Matias
Amy Johnson
Whitney Erin Boesel
Brian Keegan
Jaclyn Friedman
Charlie DeTar

Abstract: 

When people experience harassment online, from individual threats or invective to coordinated campaigns of harassment, they have the option to report the harassers and content to the platform where the harassment has occurred. Platforms then evaluate harassment reports against terms of use and other policies to decide whether to remove content or take action against the alleged harasser — or not. On Twitter, harassing accounts can be deleted entirely, suspended (with content made unavailable pending appeal or specific changes), or sent a warning. Some platforms, including Twitter and YouTube, grant “authorized reporters” or “trusted flaggers” special privileges to identify and report inappropriate content on behalf of others.

In November 2014, Twitter granted Women, Action, and the Media (WAM!) this authorized reporter status. From November 6–26, 2014, WAM! took in reports of Twitter-based harassment, assessed them, and escalated reports as necessary to Twitter for special attention. WAM! used a dedicated intake form to collect data and publicly promised to publish what it learned. In three weeks, WAM! reviewers assessed 811 incoming reports of harassment and escalated 161 of them to Twitter, ultimately seeing Twitter carry out 70 account suspensions, 18 warnings, and one account deletion. This document presents findings from the three-week project, drawing on both quantitative and qualitative methods.

Findings focus on the people reporting and receiving harassment, the kinds of harassment that were reported, Twitter’s response to harassment reports, the process of reviewing harassment reports, and challenges for harassment reporting processes.

Latest Edition of Socially Aware Newsletter

Morrison & Foerster’s Socially Aware newsletter is now available. For those who are unaware, the law firm puts out a “quarterly newsletter” that highlights recent posts from its Socially Aware blog. Of all the firms that specialize in social media or maintain active practice groups in this area, Morrison & Foerster appears to be the most knowledgeable and sophisticated.

Judges & Social Media: Managing the Risks (European Perspective)

Dimitra Blitsa
Greek National School of Judges
Ioannis Papathanasiou
Greek National School of Judges
Maria Salmanli 
Greek National School of Judges

Abstract:

The aim of the present paper is to identify and address some of the fundamental ethical implications of social networking for members of the judiciary. By examining the unique characteristics of social media, such as the lack of data privacy and anonymity, the permanent nature of information posted there, and misconceptions that may arise with regard to social media users and their online activity, this paper explores whether judges should maintain a social media account and, if so, how they should comport themselves in a virtual world. The authors assert that judges should be able to maintain an active presence on social media while remaining mindful of the implications of their social media activities. Specifically, the authors analyze social media friendships involving judges, judges’ investigation of parties to legal proceedings or facts of a pending case, social media postings on pending cases and career/work issues, reporting incidents of judicial malpractice, and judges’ sharing of private life activities on social media (for example, sharing personal information and photos, using “(dis)like”/“follow”/“comment” buttons and discussion threads, and participating in online groups and forums). The authors conclude that it is not the means of communication but the way judges use social media that may lead to breaches of ethical rules. Consequently, the authors call for raising awareness among members of the judiciary about the appropriate use of social media, the development of best practices rooted in bright-line rules and technologically up-to-date guidelines that account for the ethical implications of social media use, and the operation of European ethics councils competent to implement such standards.

Facebook Can’t Challenge Warrants for User Information

This week a New York appellate court held, in In re 381 Search Warrants Directed to Facebook, that Facebook cannot litigate the constitutionality of a warrant before it is enforced, nor can it warn its users about the pending search. According to the court, “there is no constitutional or statutory right to challenge an alleged defective warrant before it is executed.”

Copyright’s Digital Deputies: DMCA-Plus Enforcement by Internet Intermediaries

Annemarie Bridy

Abstract: 

In the years since passage of the Digital Millennium Copyright Act (“DMCA”), the copyright industries have demanded that online intermediaries — both those covered by the DMCA and those falling outside the statute’s ambit — do more than the law requires to protect their intellectual property rights. In particular, they have sought new ways to reach and shutter “pirate sites” beyond the reach of United States law. Their demands have been answered through an expanding regime of nominally voluntary “DMCA-plus” enforcement.

This chapter surveys the current landscape of DMCA-plus enforcement by dividing such enforcement into two categories: Type 1 and Type 2. Type 1 DMCA-plus enforcement is cooperation by DMCA-covered intermediaries over and above what is required for safe harbor. Type 2 DMCA-plus enforcement is cooperation by intermediaries whose activities fall outside the scope of the DMCA’s safe harbors and who are not liable for their customers’ copyright infringements under secondary liability rules.

As the gap widens between what the law requires and what intermediaries are agreeing to do on a voluntary basis, there is reason to be concerned about the expressive and due process rights of users and website operators, who have no seat at the table when intermediaries and copyright owners negotiate “best practices” for mitigating online infringement, including which sanctions to impose, which content to remove, and which websites to block without judicial intervention.

Cyber Banging

I have seen the term “cyber banging” in news articles. I have even referenced the term in prior posts. However, I had not, until today, seen it appear in an actual published case. The California Court of Appeal, First Appellate District, recently used the term in reviewing a juvenile proceeding. In In re G.H., the appellate court described cyber banging as follows:

Many of the photographs and messages on appellant’s Facebook page amounted to “cyber-banging,” which Gault described as using the Internet to promote one’s own gang or to disrespect rival gangs. In one such exchange, appellant responded to posts that disrespected the Swerve Team by making threats and posting pictures of himself pointing what appeared to be a gun. Gault recognized Deandre and two other individuals in the group photographs on appellant’s Facebook page to be Swerve Team members. Deandre was on appellant’s list of Facebook “friends.”

H/T Eric Goldman