To study and uncover the use, and often abuse, of social media platforms, journalists and researchers need access to data. As social media companies have unilaterally restricted third-party access to data, some have proposed a “public interest data access law” (“PIDAL”). Such a law would compel social media companies to grant researchers access to anonymized “activity” data, most likely delivered through an API.
If a PIDAL is passed, it is likely to be attacked as a violation of the First Amendment. Broadly, the law might be attacked as a direct infringement on the right to expression, under the doctrine of compelled speech, or as an indirect burden on protected expression. In this Note, I provide a framework for analyzing whether and how the compelled disclosure of different types of data would trigger First Amendment scrutiny. Although this Note focuses on a PIDAL, the framework provided can serve as a starting point for analyzing other proposals to regulate social media.
No matter what data platforms would be required to disclose, a PIDAL is very likely to trigger First Amendment scrutiny. This is true even for the compelled disclosure of data that bears little resemblance to traditional speech, and even if the government is regulating for the arguably laudable purpose of increasing access to information. Both legislators and the public should be aware of this perhaps unexpected and likely insurmountable barrier to regulation, and thereby appreciate the vast scope of the modern First Amendment.
Social media influencers and the brands that engage them are bound to comply with the portions of the FTC Act that regulate advertising and endorsement. But many don’t. While the FTC has promulgated guidelines, sent warning letters to repeat offenders, and occasionally brought actions against influencers and brands whose practices run afoul of the guidelines, it tends to devote most of its resources to issues it considers more pressing than regulating influencer marketing claims. Private parties, meanwhile, lack standing to challenge competitors’ practices based on violations of the FTC Act. The Lanham Act provides companies with a false advertising cause of action, but so far few have called upon it in an attempt to enjoin false or misleading claims their competitors make via influencer marketing. Can an influencer’s failure to disclose that a post is a paid endorsement — a clear violation of FTC Guidelines — constitute a misleading statement under §43(a)(1)(B)? If an influencer’s testimonial about a product or about her experience with it is untrue, might that falsehood be material to consumers’ purchasing decisions, and thus actionable? This article will explore the potential for private actors to use the Lanham Act to challenge competitors’ “false influencing” — disseminating false or misleading advertising messages via influencer marketing — as a means to increase consistency in how ads are regulated across platforms and types of media.
Police and Social Media
Law enforcement, like other professions, has its fair share of professionals who, for a variety of reasons, fail to maintain their professionalism when using social media. A recent study, the Plain View Project, examined 5,000 posts made by both active and retired police officers. The study found that 1 in 5 posts were either violent or racist, including posts “displaying bias, applauding violence, scoffing at due process, or using dehumanizing language.”
This study raises a number of issues, but one that jumps out to me is the difficulty of calling these officers to testify. Unlike members of other professions, police officers are regularly put on the stand, and their social media posts could go a long way toward undermining their credibility before the judge and jury.
For more on the story, read this article.
Social Media Influencers
This article in the Legal Intelligencer discusses the legal issues that arise when so-called social media influencers fail to disclose their material connection to the product they are promoting online. “Social media influencers” are defined as “individuals who leverage their social media presence to encourage followers to buy specified goods and services.” The article also describes efforts by federal government agencies (FTC, SEC, and CFTC) to regulate influencers. While it appears that the government has made an example of a few influencers, many more continue to flout the rules.
Legalintelligencer.com: Companies Beware—Social Media Influencers Are Becoming Enforcement Targets
In recent years, online platforms have given rise to multiple discussions about what their role is, what their role should be, and whether they should be regulated. The complex nature of these private entities makes it very challenging to place them in a single descriptive category with existing rules. In today’s information environment, social media platforms have become a platform press by providing hosting as well as navigation and delivery of public expression, much of which is done through machine learning algorithms. This article argues that there is a subset of algorithms that social media platforms use to filter public expression, which can be regulated without constitutional objections. A distinction is drawn between algorithms that curate speech for hosting purposes and those that curate for navigation purposes, and it is argued that content navigation algorithms, because of their function, deserve separate constitutional treatment. By analyzing the platforms’ functions independently from one another, this paper constructs a doctrinal and normative framework that can be used to navigate some of the complexity.
The First Amendment makes it problematic to interfere with how platforms decide what to host, because algorithms that implement content moderation policies perform functions analogous to an editorial role when deciding whether content should be removed from or allowed on the platform. Content navigation algorithms, on the other hand, do not face the same doctrinal challenges; they operate outside of the public discourse as mere information conduits and are thus not subject to core First Amendment doctrine. Their function is to facilitate the flow of information to an audience, which in turn participates in public discourse; if they have any constitutional status, it is derived from the value they provide to their audience as a delivery mechanism of information.
This article asserts that we should regulate content navigation algorithms to an extent. They undermine the notion of autonomous choice in the selection and consumption of content, and their role in today’s information environment is at odds with a functioning marketplace of ideas and with the prerequisites for citizens in a democratic society to perform their civic duties. The paper concludes that any regulation directed at content navigation algorithms should be subject to a lower standard of scrutiny, similar to the standard for commercial speech.