The Behavioral Advertising Decisions Are Downgrading Services (or BAD ADS) Act

The Verge has a good write-up of Senator Hawley’s latest legislative swipe at social media platforms.

The Behavioral Advertising Decisions Are Downgrading Services (or BAD ADS) Act would remove protections under Section 230 of the Communications Decency Act for large web services that display ads based on “personal traits of the user” or a user’s previous online behavior. This is defined as “behavioral advertising” and does not include targeting based on users’ locations or the content of the site they’re on.

Theverge.com: Sen. Josh Hawley wants to strip legal protections from sites with targeted ads

Squaring the Circle Between Freedom of Expression and Platform Law

Michael Karanicolas

Abstract

Among the greatest emerging challenges to global efforts to promote and protect human rights is the role of private sector entities in their actualization, since international human rights rules were designed to apply primarily, and in many cases solely, to the actions of governments. This paradigm is particularly evident in the expressive space, where private sector platforms play an enormously influential role in determining the boundaries of acceptable speech online, with none of the traditional guardrails governing how and when speech should be restricted. Many governments now view platform-imposed rules as a neat way of sidestepping legal limits on their own exercise of power, pressuring private sector entities to crack down on content which they would be constitutionally precluded from targeting directly. For their part, the platforms have grown increasingly uncomfortable with the level of responsibility they now wield, and in recent years have sought to modernize and improve their moderation frameworks in line with the growing global pressure they face. At the heart of these discussions are debates around how traditional human rights concepts like freedom of expression might be adapted to the context of “platform law.” This Article presents a preliminary framework for applying foundational freedom of expression standards to the context of private sector platforms, and models how the three-part test, which lies at the core of understandings of freedom of expression as a human right, could be applied to platforms’ moderation functions.

Facebook Reconsidering Policy on Political Ads

Facing mounting pressure from advertisers and others, Facebook is considering changes to how it handles political ads. This Washington Post article describes the ongoing discussions.

Washingtonpost.com: Facebook, in a possible reversal, considers banning political ads near the U.S. election.

They Should Be Fired: The Social Regulation of Free Speech in the U.S.

Franciska Coleman

Abstract

The debate over First Amendment jurisprudence often assumes that the First Amendment reflects a choice of non-regulation over regulation. This article suggests, however, that it is more accurate to describe the First Amendment as reflecting a choice of social regulation over legal regulation. Social regulation of speech has generally been lauded and preferred in America for its autonomy-enhancing properties, as private parties in civil society often lack the overwhelming power of a government censor. A review of recent high-profile incidents of social speech regulation, however, suggests that the ubiquity of social media and the hegemony of corporations have increased the breadth, visibility, and mechanisms of social speech regulation to such an extent that its scope can now approach that of a government censor. These mechanisms generally entail economic pressure on corporations, designed to force them to fire and ostracize employees who engage in censorable, contested, or discreditable speech. While the level of offensiveness of these types of speech is not the same, the sanction is often the same: loss of livelihood. This article argues that if the expected benefits of social speech regulation in an era of social media are not to be outweighed by losses in citizen autonomy, an approach to social regulation that includes legal protections against domination is required, beginning in the crucibles of free speech: public schools and universities.

Court’s Ban on Future Social Media Postings about Relatives Unconstitutional (Bey v. Rasawehr)

The Ohio Supreme Court today vacated portions of Mercer County civil stalking protection orders that prohibited a man from posting anything on social media about his mother and sister, whom he accused of contributing to the deaths of their husbands.

The Free Speech Blind Spot: Foreign Election Interference on Social Media

Evelyn Douek

Abstract

The current system for monitoring and removal of foreign election interference on social media is a free speech blind spot. Social media platforms’ standards for what constitutes impermissible interference are vague, enforcement is seemingly ad hoc and inconsistent, and the role governments play in deciding what speech should be taken down is unclear. This extraordinary opacity — at odds with the ordinary requirements of respect for free speech — has been justified by a militarized discourse that paints such interference as highly effective, and “foreign” speech as uniquely pernicious. But, in fact, evidence of such campaigns’ effectiveness is limited and the singling out and denigration of “foreign” speech is at odds with the traditional justifications for free expression.

Hiding in the blind spot created by this foreign-threat, securitized framing are more pervasive and fundamental questions about online public discourse, such as how to define appropriate norms of online behavior more generally, who should decide them and how they should be enforced. Without examining and answering these underlying questions, the goal that removing foreign election interference on social media is meant to achieve — reestablishing trust in the online public sphere — will remain unrealized.

Watering Down Section 230

Upset at Twitter’s effort to fact-check and flag the president’s false and misleading tweets, Trump has issued an executive order that attempts to water down the protections afforded to social media providers under Section 230 of the Communications Decency Act. Specifically, the president’s executive order

–encourages the Federal Communications Commission to rethink the scope of Section 230 and when its liability protections apply

–directs complaints about political bias to the Federal Trade Commission

–creates a council in cooperation with state attorneys general to probe allegations of censorship based on political views.

However, it remains to be seen whether the order is constitutional. More importantly, any major change to Section 230, a federal statute, will require action by either the courts or Congress.

HiQ v. LinkedIn, Clearview AI, and a New Common Law of Web Scraping

Benjamin Sobel

The Clearview AI facial recognition scandal is a monumental breach of privacy that arrived at precisely the wrong time. A shadowy company reportedly scraped billions of publicly available images from social media platforms and compiled them into a facial recognition database that it made available to law enforcement and private industry. To make matters worse, the scandal came to light just months after the Ninth Circuit’s decision in hiQ v. LinkedIn, which held that scraping the public web probably does not violate the Computer Fraud and Abuse Act (CFAA). Before hiQ, the CFAA would have seemed like the surest route to redress against Clearview. This Article analyzes the implications of the hiQ decision, situates the Clearview outrage in historical context, explains why existing legal remedies give aggrieved plaintiffs little to no recourse, and proposes a narrow tort to empower ordinary Internet users to take action against gross breaches of privacy by actors like Clearview: the tort of bad faith breach of terms of service.

Part I argues that the Ninth Circuit’s hiQ decision marks, at least for the time being, the reascension of common law causes of action in a field that had been dominated by the CFAA. Part II shows that the tangle of possible common law theories that courts must now adapt to cyberspace resembles the strained property and contract concepts that jurists and privacy plaintiffs reckoned with at the turn of the 20th century. It suggests that modern courts, following the example some of their predecessors set over a century ago, may properly recognize some common law remedies for present-day misconduct. Part III catalogs familiar common law claims to argue that no established property, tort, or contract claim fully captures the relational harm that conduct like Clearview’s wreaks on individual Internet users. Part IV focuses on the common law of California to propose a new tort, bad faith breach of terms of service, that can provide aggrieved plaintiffs with a proper remedy without sacrificing doctrinal fidelity or theoretical coherence.

Disparage Away on Social Media

The Massachusetts Supreme Judicial Court has held that nondisparagement orders commonly issued during divorce proceedings are unconstitutional prior restraints. The decision arose from a divorce case, Shak v. Shak, in which the husband posted disparaging remarks about his wife on social media. The wife obtained a court order barring him from making such posts.

The relevant portion of the second order reads as follows:

“1) Until the parties have no common children under the age of [fourteen] years old, neither party shall post on any social media or other Internet medium any disparagement of the other party when such disparagement consists of comments about the party’s morality, parenting of or ability to parent any minor children. Such disparagement specifically includes but is not limited to the following expressions: ‘cunt’, ‘bitch’, ‘whore’, ‘motherfucker’, and other pejoratives involving any gender. The Court acknowledges the impossibility of listing herein all of the opprobrious vitriol and their permutations within the human lexicon.

“2) While the parties have any children in common between the ages of three and fourteen years old, neither party shall communicate, by verbal speech, written speech, or gestures any disparagement to the other party if said children are within [one hundred] feet of the communicating party or within any other farther distance where the children may be in a position to hear, read or see the disparagement.”

In vacating the probate court’s second order, the Massachusetts Supreme Judicial Court found that

because there was no showing of an exceptional circumstance that would justify the imposition of a prior restraint, the nondisparagement orders issued here are unconstitutional.

The court went on to say that parties in similar situations are not completely without remedies.

For example, our ruling does not impact nondisparagement agreements that parties enter into voluntarily. Depending upon the nature and severity of the speech, parents who are the target of disparaging speech may have the option of seeking a harassment prevention order pursuant to G. L. c. 258E, or filing an action seeking damages for intentional infliction of emotional distress or defamation… And certainly judges, who are guided by determining the best interests of the child, can make clear to the parties that their behavior, including any disparaging language, will be factored into any subsequent custody determinations.

State Power to Regulate Social Media Companies to Prevent Voter Suppression

Spencer Overton

Abstract

Fake social media accounts and ads did not merely polarize the American electorate in 2016 — these tactics also targeted and suppressed Black votes. While African Americans made up just 12.7% of the United States population, Black audiences accounted for over 38% of U.S.-focused ads purchased by the Russian Internet Research Agency and almost half of the user clicks. The social media accounts generally built a following by posing as African American-operated and by paying for ads that social media companies distributed largely to Black users. Near Election Day, the accounts urged African Americans to “boycott the election.” Federal policymakers have failed to respond with strong and clear laws to prevent similar deceptive practices and voter-suppression schemes in the future, and thus States should take the initiative. State lawmakers should not be deterred by arguments that Section 230 of the federal Communications Act of 1934 “immunizes” social media companies from State liability. This Essay explains that Section 230 does not limit the power of States to hold social media companies legally responsible for using data collection and algorithms to target protected classes of voters with suppressive ads. By using such techniques, social media companies contribute materially to discrimination and are thus ineligible for Section 230 immunity.
