
Monthly Archives: June 2017

Global Internet Forum to Counter Terrorism

Twitter, Facebook, Microsoft, and YouTube have teamed up to create the Global Internet Forum to Counter Terrorism.  The tech giants plan to use this forum to work together and share tools to limit terrorism-related content on their platforms.

CNNTech.com: Tech giants bolster collaborative fight against terrorism


Facebook, Free Expression and the Power of a Leak


An interesting article comparing Facebook’s content policies to U.S. law.

NYTimes.com: Facebook, Free Expression and the Power of a Leak

Taming the Golem: Challenges of Ethical Algorithmic Decision Making

By Omer Tene and Jules Polonetsky

Abstract

The prospect of digital manipulation on major online platforms reached a fever pitch during the last election cycle in the United States. Jonathan Zittrain’s concern about “digital gerrymandering” found resonance in reports, resoundingly denied by Facebook, that the company had edited content to tone down conservative voices. At the start of the election cycle, critics blasted Facebook for allegedly injecting editorial bias into an apparently neutral content generator, its “Trending Topics” feature. Immediately after the election, when the extent of the dissemination of “fake news” through social media became known, commentators chastised Facebook for not proactively policing user-generated content to block and remove untrustworthy information. Which is it, then? Should Facebook have deployed policy directed technologies, or should its content algorithm have remained policy neutral?

This article examines the potential for bias and discrimination in automated algorithmic decision making. As a group of commentators recently asserted, “The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology.” Yet the article rejects an approach that depicts every algorithmic process as a “black box,” which is inevitably plagued by bias and potential injustice. While recognizing that algorithms are manmade artifacts written and edited by humans in order to code decision making processes, the article argues that a distinction should be drawn between “policy neutral algorithms,” which lack an active editorial hand, and “policy directed algorithms,” which are intentionally framed to pursue a designer’s policy agenda.

Policy neutral algorithms can in some cases reflect existing entrenched societal biases and historic inequities. Companies, in turn, can choose to fix their results through active social engineering. For example, after facing controversy over an algorithmic determination not to offer same-day delivery in low-income neighborhoods, Amazon recently decided to offer the service there in order to pursue an agenda of equal opportunity. Recognizing that its decision making process, which was based on logistical factors and expected demand, had the effect of accentuating prevailing social inequality, Amazon chose to level the playing field.

Policy directed algorithms are purposely engineered to correct for apparent bias and discrimination or intentionally designed to advance a predefined policy agenda. In this case, it is essential that companies provide transparency about their active pursuit of editorial policies. For example, if a search engine decides to scrub search results clean of apparent bias and discrimination, it should let users know they are seeing a manicured version of the world. If a service optimizes results for financial motives without alerting users, it risks violating FTC standards for disclosure. So too should service providers consider themselves obligated to prominently disclose important criteria that reflect an unexpected policy agenda. The transparency called for is not one based on revealing source code, but rather public accountability about the editorial nature of the algorithm.

The article addresses questions surrounding the boundaries of responsibility for algorithmic fairness, and analyzes a series of case studies under the proposed framework.
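
The authors’ distinction is easy to see in code. Below is a minimal, purely hypothetical sketch (the names, fields, and weights are invented for illustration and do not come from the paper): a “policy neutral” ranker orders stories by predicted engagement alone, while a “policy directed” ranker blends in an explicit editorial criterion, the kind of design choice the authors argue should be disclosed.

```python
# Hypothetical sketch of the "policy neutral" vs. "policy directed"
# distinction; everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    predicted_engagement: float  # learned from user behavior (0.0-1.0)
    source_reliability: float    # editorial trust score (0.0-1.0)

def rank_policy_neutral(stories):
    # No editorial hand: order purely by predicted engagement.
    # The output can still mirror whatever biases exist in the
    # behavioral data the engagement model was trained on.
    return sorted(stories, key=lambda s: s.predicted_engagement, reverse=True)

def rank_policy_directed(stories, reliability_weight=0.5):
    # An intentional policy agenda: blend engagement with an explicit
    # editorial criterion. Per the authors, the existence of this
    # adjustment (not the source code) is what should be disclosed.
    return sorted(
        stories,
        key=lambda s: (1 - reliability_weight) * s.predicted_engagement
        + reliability_weight * s.source_reliability,
        reverse=True,
    )

feed = [
    Story("Viral rumor", predicted_engagement=0.9, source_reliability=0.2),
    Story("Wire-service report", predicted_engagement=0.6, source_reliability=0.9),
]
print([s.title for s in rank_policy_neutral(feed)])   # rumor ranks first
print([s.title for s in rank_policy_directed(feed)])  # report ranks first
```

Note that the neutral ranker still surfaces the high-engagement rumor first, which illustrates the paper’s point that the absence of editorial intent does not guarantee unbiased output.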

Facebook “likes” by a judge’s spouse unlikely to lead to recusal

WaPo has an interesting article about defendants in a civil suit who are seeking to have a federal judge recused from their case because of the judge’s affiliation with a nonprofit group and two Facebook “likes” by the judge’s wife. Based on comments from another judge hearing the recusal motion, the defendants are unlikely to succeed.

WaPo.com: Judge skeptical of anti-abortion group’s bias claims


Tweeting from the Courtroom

Indiana is following the growing trend of allowing people to tweet from the courtroom.

Socialmedialawbulletin.com: Use of Twitter to Broadcast Courtroom Proceedings

Facebook is Looking for a Privacy Attorney

Facebook is seeking talented and flexible counsel to work on Facebook’s legal privacy team and advise the company on a range of privacy, data protection and security-related legal and compliance initiatives.

SCT Overturns NC’s Sex Offender Social Media Ban

Today, the Supreme Court, in Packingham v. North Carolina, held that North Carolina’s law prohibiting registered sex offenders from accessing social media is unconstitutional. To read the opinion, go here; to hear the oral argument, go here.

The issue in the case is as follows:

Whether, under the court’s First Amendment precedents, a law that makes it a felony for any person on the state’s registry of former sex offenders to “access” a wide array of websites – including Facebook, YouTube, and nytimes.com – that enable communication, expression, and the exchange of information among their users, if the site is “know[n]” to allow minors to have accounts, is permissible, both on its face and as applied to petitioner, who was convicted based on a Facebook post in which he celebrated dismissal of a traffic ticket, declaring “God is Good!”

The holding in the case is as follows:

The North Carolina statute, which makes it a felony for a registered sex offender “to access a commercial social networking Web site where the sex offender knows that the site permits minor children to become members or to create or maintain personal Web pages,” impermissibly restricts lawful speech in violation of the First Amendment.