Positioned as a “Supreme Court” for Facebook’s content moderation decisions, the external panel of 20 journalists, academics, lawyers, and human rights experts will weigh in on contested cases and can potentially override Facebook’s decisions. The board has up to 90 days to review cases submitted by users through its website after they have exhausted their content appeal options directly with Facebook. If the board sides with the user, Facebook will restore the content and potentially re-evaluate its policies...
‘#SP’ Or ‘Thanks [Brand]’ Is Not Enough: FTC Guides for Social Media Influencers on Endorsements and Testimonials
With global advertising expenditures on the rise, social media supports an increasing share of all advertising and endorsements and is subject to regulation by the Federal Trade Commission (“FTC”). The FTC established the “Guides Concerning the Use of Endorsements and Testimonials in Advertising” to regulate advertising, ensuring that promotional content is honest and not misleading and that material connections between the advertiser and the endorser are disclosed. These guides are particularly important for social media posts, so that a reader knows whether a celebrity poster simply enjoyed a good restaurant dinner or was paid to endorse the restaurant. While social media sites are beginning to provide some tools for disclosure, some consumers may not notice those tools or be aware of their purpose. What are the current FTC guides, and how can social media influencers and advertisers comply?
According to press reports, Harvard Law School now has a non-attribution policy, which reads in part as follows:
When using social media or other forms of communication designed to reach members of the public, no one may repeat or describe a statement made by a student in class in a manner that would enable a person who was not present in the class to identify the speaker of the statement.
Apparently, this new policy stems from an incident in which a student was cleaning his gun during a Zoom call. Other students took screenshots of the student cleaning his gun and posted them online.
Here is the link to the policy.
This article explores the ideals of open Internet governance in Brazil. I examine Brazil’s Internet law, the Marco Civil da Internet (MCI), which promotes the right to Internet access, online privacy, and net neutrality. The MCI’s ideals of a free and open Internet are challenged by Internet companies, such as Facebook, which offer “zero-rating” promotions that provide limited, free mobile data to low-income subscribers. I juxtapose the ideals of openness embodied in the regulatory sphere of the MCI with those of Brazil’s cultura livre (free culture) movement to show the ascendance of open values in Brazilian governance and culture. Accordingly, I employ the rhetorical question, “Is Facebook the Internet?” to demonstrate the ways in which commitments to open Internet governance, expressed in both the cultural and regulatory realms, run counter to the more proprietary ideals of the transnational tech community.
Misinformation Mayhem: Social Media Platforms’ Efforts to Combat Medical and Political Misinformation
Social media platforms are playing an ever-expanding role in shaping the contours of today’s information ecosystem. The events of recent months have driven home this development, as the platforms have shouldered the burden and attempted to rise to the challenge of ensuring that the public is informed – and not misinformed – about matters affecting our democratic institutions in the context of our elections, as well as about matters affecting our very health and lives in the context of the pandemic. This Article examines the extensive role recently assumed by social media platforms in the marketplace of ideas in the online sphere, with an emphasis on their efforts to combat medical misinformation in the context of the COVID-19 pandemic as well as their efforts to combat false political speech in the 2020 election cycle. In the context of medical misinformation surrounding the COVID-19 pandemic, this Article analyzes the extensive measures undertaken by the major social media platforms to combat such misinformation. In the context of misinformation in the political sphere, this Article examines the distinctive problems brought about by the microtargeting of political speech and by false political ads on social media in recent years, and the measures undertaken by major social media companies to address such problems. In both contexts, this Article examines the extent to which such measures are compatible with First Amendment substantive and procedural values.
Social media platforms are essentially attempting to address today’s serious problems alone, in the absence of federal or state regulation or guidance in the United States. Despite the major problems caused by Russian interference in our 2016 elections, the U.S. has failed to enact regulations prohibiting false or misleading political advertising on social media – whether originating from foreign sources or domestic ones – because of First Amendment, legislative, and political impediments to such regulation. And the federal government has failed miserably in its efforts to combat COVID-19 or the medical misinformation that has contributed to the spread of the virus in the U.S. All of this essentially leaves us (in the United States, at least) solely in the hands, and at the mercy, of the platforms themselves, to regulate our information ecosystem (or not), as they see fit.
The dire problems brought about by medical and political misinformation online in recent months and years have ushered in a sea change in the platforms’ attitudes and approaches toward regulating content online. In recent months, for example, Twitter has evolved from being the non-interventionist “free speech wing of the free speech party” to designing and operating an immense operation for regulating speech on its platform – epitomized by its recent removal and labeling of President Donald Trump’s (and Donald Trump, Jr.’s) misleading tweets. Facebook for its part has evolved from being a notorious haven for fake news in the 2016 election cycle to standing up an extensive global network of independent fact-checkers to remove and label millions of posts on its platform – including by removing a post from President Trump’s campaign account, as well as by labeling 90 million such posts in March and April 2020, involving false or misleading medical information in the context of the pandemic. Google for its part has abandoned its hands-off approach to its search algorithm results and has committed to removing false political content in the context of the 2020 election and to serving up prominent information by trusted health authorities in response to COVID-19 related searches on its platforms.
These approaches undertaken by the major social media platforms are generally consistent with First Amendment values, both the substantive values in terms of what constitutes protected and unprotected speech, and the procedural values, in terms of process accorded to users whose speech is restricted or otherwise subject to action by the platforms. The platforms have removed speech that is likely to lead to imminent harm and have generally been more aggressive in responding to medical misinformation than political misinformation. This approach tracks First Amendment substantive values, which accord lesser protection for false and misleading claims regarding medical information than for false and misleading political claims.
The platforms’ approaches generally adhere to First Amendment procedural values as well, including by specifying precise and narrow categories of what speech is prohibited, providing clear notice to speakers who violate their rules regarding speech, applying their rules consistently, and according an opportunity for affected speakers to appeal adverse decisions regarding their content.
While the major social media platforms’ intervention in the online marketplace of ideas is not without its problems and not without its critics, this Article contends that this trend is by and large a salutary development – and one that is welcomed by the vast majority of Americans and that has brought about measurable improvements in the online information ecosystem. Recent surveys and studies show that such efforts are welcomed by Americans and are moderately effective in reducing the spread of misinformation and in improving the accuracy of beliefs of members of the public. In the absence of effective regulatory measures in the United States to combat medical and political misinformation online, social media companies should be encouraged to continue to experiment with developing and deploying even more effective measures to combat such misinformation, consistent with our First Amendment substantive and procedural values.
The Verge has a good write up of Senator Hawley’s latest legislative swipe at social media platforms.
The Behavioral Advertising Decisions Are Downgrading Services (or BAD ADS) Act would remove protections under Section 230 of the Communications Decency Act for large web services that display ads based on “personal traits of the user” or a user’s previous online behavior. This is defined as “behavioral advertising” and does not include targeting based on users’ locations or the content of the site they’re on.
Among the greatest emerging challenges to global efforts to promote and protect human rights is the role of private sector entities in their actualization, since international human rights rules were designed to apply primarily, and in many cases solely, to the actions of governments. This paradigm is particularly evident in the expressive space, where private sector platforms play an enormously influential role in determining the boundaries of acceptable speech online, with none of the traditional guardrails governing how and when speech should be restricted. Many governments now view platform-imposed rules as a neat way of sidestepping legal limits on their own exercise of power, pressuring private sector entities to crack down on content which they would be constitutionally precluded from targeting directly. For their part, the platforms have grown increasingly uncomfortable with the level of responsibility they now wield, and in recent years have sought to modernize and improve their moderation frameworks in line with the growing global pressure they face. At the heart of these discussions are debates around how traditional human rights concepts like freedom of expression might be adapted to the context of “platform law.” This Article presents a preliminary framework for applying foundational freedom of expression standards to the context of private sector platforms, and models how the three-part test, which lies at the core of understandings of freedom of expression as a human right, could be applied to platforms’ moderation functions.
Facing a lot of pressure from advertisers and others, Facebook is thinking about changing how it deals with political ads. This Washington Post article describes the ongoing discussions.
The debate over First Amendment jurisprudence often assumes that the First Amendment reflects a choice of non-regulation over regulation. This article suggests, however, that it is more accurate to describe the First Amendment as reflecting a choice of social regulation over legal regulation. Social regulation of speech has generally been lauded and preferred in America for its autonomy-enhancing properties, as private parties in civil society often lack the overwhelming power of a government censor. A review of recent high-profile incidents of social speech regulation, however, suggests that the ubiquity of social media and the hegemony of corporations have increased the breadth, visibility, and mechanisms of social speech regulation to such an extent that its scope can now approach that of a government censor. These mechanisms generally entail economic pressure on corporations, designed to force them to fire and ostracize employees who engage in censorable, contested, or discreditable speech. While the level of offensiveness of these types of speech is not the same, the sanction often is the same: loss of livelihood. This article argues that if the expected benefits of social speech regulation in an era of social media are not to be outweighed by losses in citizen autonomy, an approach to social regulation that includes legal protections against domination is required, beginning in the crucibles of free speech – public schools and universities.
Court’s Ban on Future Social Media Postings about Relatives Unconstitutional
The Ohio Supreme Court today vacated portions of Mercer County civil stalking protection orders that prohibited a man from posting anything on social media about his mother and sister, whom he accused of contributing to the deaths of their husbands.