Why a Public Interest Data Access Law Would Likely Trigger First Amendment Scrutiny


Max Fiest


Abstract

To study and uncover the use, and often abuse, of social media platforms, journalists and researchers need access to data. As social media companies have unilaterally restricted third-party access to data, some have proposed a “public interest data access law” (“PIDAL”). Such a law would compel social media companies to grant researchers access to anonymized “activity” data, most likely delivered through an API.

If a PIDAL is passed, it is likely to be challenged as a violation of the First Amendment. Broadly, the law might be attacked as a direct infringement on the right to expression under the doctrine of compelled speech, or as an indirect burden on protected expression. In this Note, I provide a framework for analyzing whether and how the compelled disclosure of different types of data would trigger First Amendment scrutiny. Although this Note focuses on a PIDAL, the framework can serve as a starting point for analyzing other proposals to regulate social media.

No matter what data platforms would be required to disclose, a PIDAL is very likely to trigger First Amendment scrutiny. This is true even for the compelled disclosure of data that bears little resemblance to traditional speech, and even if the government is regulating for the arguably laudable purpose of increasing access to information. Both legislators and the public should be aware of this perhaps unexpected, and likely insurmountable, barrier to regulation, and thereby appreciate the vast scope of the modern First Amendment.

False Influencing

Alexandra Roberts



Abstract

Social media influencers and the brands that engage them are bound to comply with the portions of the FTC Act that regulate advertising and endorsement. But many don’t. While the FTC has promulgated guidelines, sent warning letters to repeat offenders, and occasionally brought actions against influencers and brands whose practices run afoul of the guidelines, it tends to apply most of its resources to issues it considers more pressing than regulating influencer marketing claims. Private parties, meanwhile, lack standing to challenge competitors’ practices based on violations of the FTC Act. The Lanham Act provides companies with a false advertising cause of action, but so far few have called upon it in an attempt to enjoin false or misleading claims their competitors make via influencer marketing. Can an influencer’s failure to disclose that a post is a paid endorsement — a clear violation of FTC Guidelines — constitute a misleading statement under §43(a)(1)(B)? If an influencer’s testimonial about a product or about her experience with it is untrue, might that falsehood be material to consumers’ purchasing decisions, and thus actionable? This article will explore the potential for private actors to use the Lanham Act to challenge competitors’ “false influencing” — disseminating false or misleading advertising messages via influencer marketing — as a means to increase consistency in how ads are regulated across platforms and types of media.

Police and Social Media



Law enforcement, like other professions, has its fair share of professionals who, for a variety of reasons, cannot maintain their professionalism when using social media. A recent study, the Plain View Project, examined 5,000 posts made by both active and retired police officers. The study found that one in five posts was either violent or racist, including posts “displaying bias, applauding violence, scoffing at due process, or using dehumanizing language.”

This study raises a number of issues, but one that jumps out to me is the difficulty of calling these officers to testify. Unlike members of other professions, police officers are regularly put on the stand. Their social media posts could go a long way toward undermining their credibility before a judge and jury.

For more on the story, read this article.

Social Media Influencers



This article in the Legal Intelligencer discusses the legal issues that arise when so-called social media influencers fail to disclose their material connection to the products they are promoting online.  “Social media influencers” are defined as “individuals who leverage their social media presence to encourage followers to buy specified goods and services.”  The article also describes efforts by federal government agencies (the FTC, SEC, and CFTC) to regulate influencers.  While it appears that the government has made an example of a few influencers, many more continue to flout the rules.

Legalintelligencer.com: Companies Beware—Social Media Influencers Are Becoming Enforcement Targets

Platforms, the First Amendment and Online Speech: Regulating the Filters


Sofia Grafanaki


Abstract

In recent years, online platforms have given rise to multiple discussions about what their role is, what their role should be, and whether they should be regulated. The complex nature of these private entities makes it very challenging to place them in a single descriptive category under existing rules. In today’s information environment, social media platforms have become a platform press by providing hosting as well as navigation and delivery of public expression, much of which is done through machine learning algorithms. This article argues that there is a subset of algorithms that social media platforms use to filter public expression, which can be regulated without constitutional objections. A distinction is drawn between algorithms that curate speech for hosting purposes and those that curate for navigation purposes, and it is argued that content navigation algorithms, because of their function, deserve separate constitutional treatment. By analyzing the platforms’ functions independently of one another, this paper constructs a doctrinal and normative framework that can be used to navigate some of the complexity.

The First Amendment makes it problematic to interfere with how platforms decide what to host because algorithms that implement content moderation policies perform functions analogous to an editorial role when deciding whether content should be censored or allowed on the platform. Content navigation algorithms, on the other hand, do not face the same doctrinal challenges; they operate outside of the public discourse as mere information conduits and are thus not subject to core First Amendment doctrine. Their function is to facilitate the flow of information to an audience, which in turn participates in public discourse; if they have any constitutional status, it is derived from the value they provide to their audience as a delivery mechanism of information.

This article asserts that we should regulate content navigation algorithms to an extent. They undermine the notion of autonomous choice in the selection and consumption of content, and their role in today’s information environment is not aligned with a functioning marketplace of ideas and the prerequisites for citizens in a democratic society to perform their civic duties. The paper concludes that any regulation directed to content navigation algorithms should be subject to a lower standard of scrutiny, similar to the standard for commercial speech.

The Problem of Online Manipulation

Shaun Spencer


Abstract

Recent controversies have led to public outcry over the risks of online manipulation. Internal Facebook documents discussed how advertisers could target teens when they feel particularly insecure or vulnerable. Cambridge Analytica suggested that its psychographic profiles enabled political campaigns to exploit individual vulnerabilities online. And researchers manipulated the emotions of hundreds of thousands of Facebook users by adjusting the emotional content of their news feeds. This Article attempts to inform the debate over whether and how to regulate online manipulation of consumers. The Article details the history of manipulative marketing practices and considers how innovations in the Digital Age allow marketers to identify and even trigger individual biases and then exploit them in real time. Part II surveys prior definitions of manipulation and then defines manipulation as an intentional attempt to influence a subject’s behavior by exploiting a bias or vulnerability. Part III considers why online manipulation justifies some form of regulatory response. Part IV identifies the significant definitional and constitutional challenges that would arise in any attempt to regulate online manipulation directly. The Article concludes by suggesting that the core objection to online manipulation is not its manipulative nature but its online implementation. Therefore, the Article suggests that, rather than pursuing direct regulation, we should use the threat of online manipulation as another argument to support the push for comprehensive data protection legislation.

Is Social Media Content a Form of Currency?



This is the question currently before U.S. District Judge William Alsup, who must decide whether to certify a class action lawsuit in which the plaintiffs allege that the personal data shared on Facebook is a form of payment for using the platform.

The underlying lawsuit stems from a 2018 breach of Facebook accounts that affected 29 million users.  In defending against the lawsuit, Facebook argues that the liability language in its terms of service is well-suited to defeat the plaintiffs’ claims, especially since its service is “free.”  In contrast, the plaintiffs argue that Facebook is not “free” because users pay by providing valuable data to Facebook.  The plaintiffs go on to note that Facebook uses their content for targeted advertising to the tune of more than $40 billion in 2017.

During the hearing on whether to certify the class action, the judge acknowledged that he was in uncharted territory and asked both parties to provide him with case law for or against allowing personal information to serve as a “cost” of the service.  While there is certainly monetary value in the information shared on social media, I am not sure you can go so far as to say that it constitutes a fee for using that service.

