Social Media Influencers
This article in the Legal Intelligencer discusses the legal issues that arise when so-called social media influencers fail to disclose their material connection to the product they are promoting online. “Social media influencers” are defined as “individuals who leverage their social media presence to encourage followers to buy specified goods and services.” The article also describes efforts by federal government agencies (FTC, SEC, and CFTC) to regulate influencers. While it appears that the government has made an example of a few influencers, there are many more who continue to flout the rules.
Legalintelligencer.com: Companies Beware—Social Media Influencers Are Becoming Enforcement Targets
In recent years, online platforms have given rise to multiple discussions about what their role is, what their role should be, and whether they should be regulated. The complex nature of these private entities makes it very challenging to place them in a single descriptive category with existing rules. In today’s information environment, social media platforms have become a platform press by providing hosting as well as navigation and delivery of public expression, much of which is done through machine learning algorithms. This article argues that there is a subset of algorithms that social media platforms use to filter public expression, which can be regulated without constitutional objections. A distinction is drawn between algorithms that curate speech for hosting purposes and those that curate for navigation purposes, and it is argued that content navigation algorithms, because of their function, deserve separate constitutional treatment. By analyzing the platforms’ functions independently from one another, this paper constructs a doctrinal and normative framework that can be used to navigate some of the complexity.
The First Amendment makes it problematic to interfere with how platforms decide what to host because algorithms that implement content moderation policies perform functions analogous to an editorial role when deciding whether content should be censored or allowed on the platform. Content navigation algorithms, on the other hand, do not face the same doctrinal challenges; they operate outside of the public discourse as mere information conduits and are thus not subject to core First Amendment doctrine. Their function is to facilitate the flow of information to an audience, which in turn participates in public discourse; if they have any constitutional status, it is derived from the value they provide to their audience as a delivery mechanism of information.
This article asserts that we should regulate content navigation algorithms to an extent. They undermine the notion of autonomous choice in the selection and consumption of content, and their role in today’s information environment is not aligned with a functioning marketplace of ideas and the prerequisites for citizens in a democratic society to perform their civic duties. The paper concludes that any regulation directed to content navigation algorithms should be subject to a lower standard of scrutiny, similar to the standard for commercial speech.
Recent controversies have led to public outcry over the risks of online manipulation. Internal Facebook documents discussed how advertisers could target teens when they feel particularly insecure or vulnerable. Cambridge Analytica suggested that its psychographic profiles enabled political campaigns to exploit individual vulnerabilities online. And researchers manipulated the emotions of hundreds of thousands of Facebook users by adjusting the emotional content of their news feeds. This Article attempts to inform the debate over whether and how to regulate online manipulation of consumers. The Article details the history of manipulative marketing practices and considers how innovations in the Digital Age allow marketers to identify and even trigger individual biases and then exploit them in real time. Part II surveys prior definitions of manipulation and then defines manipulation as an intentional attempt to influence a subject’s behavior by exploiting a bias or vulnerability. Part III considers why online manipulation justifies some form of regulatory response. Part IV identifies the significant definitional and constitutional challenges that would arise in any attempt to regulate online manipulation directly. The Article concludes by suggesting that the core objection to online manipulation is not its manipulative nature but its online implementation. Therefore, the Article suggests that, rather than pursuing direct regulation, we should use the threat of online manipulation as another argument to support the push for comprehensive data protection legislation.
Is Social Media Content a Form of Currency?
This is the question currently before U.S. District Judge William Alsup who must decide whether to certify a class action lawsuit in which the plaintiffs allege that the personal data shared on Facebook is a form of payment for using the platform.
The underlying lawsuit stems from a breach of Facebook accounts in 2018 that impacted 29 million users. In defending against the lawsuit, Facebook states that the liability language in its terms of service is well-suited to defeat the claims by plaintiffs, especially since its service is “free.” In contrast, plaintiffs argue that Facebook is not “free” because users pay by providing valuable data to Facebook. Plaintiffs go on to note that Facebook uses their content for targeted advertising to the tune of more than $40 billion in 2017.
During the hearing to decide whether to certify the class action lawsuit, the judge acknowledged that he was in uncharted territory and asked both parties to provide him with case law for or against allowing personal information to serve as a “cost” of service. While there is definitely monetary value in the information shared on social media, I am not sure you can go as far as saying that it constitutes a fee for using that service.
Regulating Social Media
Here is a Wired magazine article suggesting that social media should be regulated like guns. I am not sure the analogy works here, but the article follows a growing trend calling for increased regulation of social media platforms.
Removing Content Expeditiously
In light of the recent deadly attacks on several mosques in New Zealand that were livestreamed on Facebook, Australia has passed a new law that makes it a crime for a social media provider to fail to remove violent content in an “expeditious” manner. While Facebook did ultimately take down the video, it was up for almost an hour and viewed thousands of times.
While I applaud the effort to regulate social media providers, I am not sure that this law was well thought out. In fact, most view it as a rushed, knee-jerk reaction to a horrific event. It is also unclear what more is expected from social media providers. While I agree that Facebook should have responded more quickly, the law does not specify what counts as an acceptable length of time. I guess the key here is how Australian prosecutors define “expeditious.”
In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms aren’t bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression — particularly the entangled concepts of “public figures” and “newsworthiness.”
In this article, we offer the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. We first introduce the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. We then turn to the “New Governors” and examine how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, we offer lessons for both courts and platforms as they confront new challenges posed by online speech. We expose the pitfalls of using algorithms to identify public figures; we explore the diminished utility of setting rules based on voluntary involvement in public debate; and we analyze the dangers of ad-hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted our normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in constitutional law or content moderation.
Finally, we explore what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. We argue that these platforms act as legislature, executive, judiciary, and press — but without any separation of powers to establish checks and balances. With these realities exposed, we contend that platforms must separate their powers and create institutions like the Supreme Court to provide transparent decisions and consistent rationales on how concepts related to newsworthiness and public figures are applied. This will give users some representation and due process in the new, private system regulating their expression. Ultimately, platforms cannot rely on global norms about free speech — which do not exist — and must instead make hard choices about which values they want to uphold through their content-moderation rules. We conclude that platforms should adopt constitution-like charters to guide the independent institutions that should oversee them.