California has a new law on the books, AB 1475, which prohibits law enforcement from posting mugshots on social media platforms. The law, however, only applies to nonviolent crimes. Furthermore, there are some notable exceptions, e.g., where the suspect is at large, poses an imminent threat, or release of the booking photo could help in apprehension. According to Assemblyman Evan Low, who introduced AB 1475, some law enforcement agencies across the state have placed mugshots on social media in an effort to “shame” suspects, and there is no real public safety purpose behind the postings.
To read more about this law go here.
Social media today plays a central, albeit vexing and divisive, role in our social and political culture. In response to the alleged failures of social media, a vast array of regulatory proposals has been advanced, and in some cases legislatively enacted, that would restrict the ways in which social media platform owners may moderate content on their platforms. These proposals include, among other things, imposing common carrier status on platforms (an approach endorsed by Justice Thomas in a recent separate opinion), requiring viewpoint-neutral content moderation policies, and restricting or conditioning platforms’ Section 230 immunities in various ways. What all of these proposals have in common is that they seek to impose legal restrictions on how social media platforms control the content that they host, refuse to host, display, and prioritize.
These proposals are in deep tension with the idea that platforms themselves have First Amendment rights to control what content is available or visible on their platforms—what I call editorial rights. This article considers whether, and to what extent, social media platforms enjoy First Amendment editorial rights, and the implications of those rights for assorted regulatory initiatives.
I begin by defining First Amendment editorial rights, and distinguishing between different kinds of editorial rights. I then examine how, and to what extent, the courts have extended editorial rights to new communications technologies. I next turn to the specific question of internet platform editorial rights, concluding that social media platforms should indeed enjoy substantial editorial rights, though probably fewer than prototypical holders of editorial rights such as print newspapers. I conclude by considering whether current regulatory proposals are consistent with these editorial rights.
Social media attorney Daliah Saper offers her take on Mahanoy Area School District v. B.L., which was recently argued before the U.S. Supreme Court. The issue in the case was whether Tinker v. Des Moines Independent Community School District, which holds that public school officials may regulate speech that would materially and substantially disrupt the work and discipline of the school, applies to student speech that occurs off campus.
The case arose after a disgruntled junior varsity cheerleader failed to make the varsity squad. The cheerleader in question (B.L.) and a friend took to Snapchat to express their displeasure with the selection process. More specifically, they took two Snaps with raised middle fingers and included the following text:
“F–k school f–k softball f–k cheer f–k everything”
B.L. had approximately 250 friends who followed her on Snapchat, and although Snaps self-delete, one of her friends took a screenshot of the Snap. Ultimately, the cheerleading coaches became aware of the Snap, and B.L. was suspended from the cheerleading team for one year.
B.L. appealed the school’s decision, and the federal district court in Pennsylvania sided with her, ruling that the Snap had no connection to the school and was therefore beyond its disciplinary reach under Tinker. The Third Circuit Court of Appeals upheld the lower court’s decision, creating a circuit split that the Supreme Court took up. A decision should be rendered sometime this summer. To hear Daliah Saper’s take on the case go here.
Facebook’s Independent Oversight Board, often referred to as the “Supreme Court of Facebook,” has upheld the company’s suspension of Donald Trump’s account. However, the Board did find that Facebook might have to reexamine the “indefinite” nature of the ban.
Regulating social media companies has been the talk of the town. In recent years, the disinformation crisis has made resolving the issue a necessity. The consequences of disinformation campaigns are not limited to meddling in elections or political debates; they have at times been deadly. In response, however, many have for the most part focused on regulating the companies in the U.S., without paying attention to their global impact, particularly in the Global South. There is also a scholarly gap on consumer protection measures in many developing countries in the South and the ways in which social media companies are neglecting consumer rights.
Countries such as the U.S. have the power to regulate these private companies, should they choose to do so. However, the current power asymmetry leaves many countries in the Global South without any meaningful bargaining power to advocate for their citizens’ consumer rights or to manage misinformation campaigns in their sovereign territories. In some countries, it is even unclear whether there is any political will within the government to advocate for consumer rights at all. This power imbalance is not resolved by optimism about corporate social responsibility, nor by corporate self-governance. This article argues that unless countries in the Global South act collectively, they cannot expect any major change from powerful social media companies. Regional treaties among countries, as a form of collective action, could push social media companies to be more attentive to their actions outside the Global North and to bear responsibility for their actions in a transnational space. In the long term, such treaties could also inspire a global coalition and bring about public accountability for corporations that operate globally.
Over the past few years, there have been numerous news stories about prosecutors posting racially inflammatory content on their social media accounts. There have also been several incidents in recent years in which prosecutors have commented on matters of public concern on social media in a way that is not overtly racist but nonetheless raises legitimate concerns about the prosecutors’ integrity and their appreciation of the special role that prosecutors play. Concerns over the extent to which prosecutors bring their personal biases into the courtroom have only increased in recent years and have contributed to doubts about the overall fairness of the criminal justice system, particularly as applied to people of color. These forms of extra-prosecutorial conduct tend to diminish public confidence in the impartiality, integrity, and independence of prosecutors and the criminal justice system more broadly. Currently, no rule of professional conduct speaks directly to the situation in which a prosecutor engages in such conduct in a private capacity. This Article addresses this gap in prosecutorial ethics.
The Facebook Oversight Board’s decision about the suspension of Donald Trump’s account differs from the Board’s other cases because states are interested in its outcome. The ‘Trump Ban’ case affects the Board’s reputation and Facebook’s relationships with states and publics, and we cannot understand the case’s impact without understanding these relationships.
Scholarship about social media platforms discusses their relationship with states and users. This Essay is the first to expand that theorization to account for differences among states, the varying influence of different publics, and the internal complexity of the companies themselves. Theorizing Facebook’s relationships this way brings into view less influential states and publics that are otherwise obscured. It reveals that Facebook engages with states and publics through multiple, parallel regulatory conversations, further complicated by the fact that Facebook itself is not a monolith. This Essay argues that Facebook has many faces: different teams working toward different goals and engaging with different ministries, institutions, scholars, and civil society organizations. Content moderation exists within this ecosystem.
This Essay’s account of Facebook’s faces and relationships shows that less influential publics can influence the company through strategic alliances with strong publics or powerful states. It also suggests that Facebook’s carelessness with a seemingly weak state or group may affect its relationship with a strong public or state that cares about the outcome.
To be seen as independent and legitimate, the Oversight Board needs to show its willingness to curtail Facebook’s flexibility in its engagement with political leaders where there is a real risk of harm. This Essay hopes to show Facebook that the short-term retaliation from some states may be balanced out by the long-term reputational gains with powerful publics and powerful states who may appreciate its willingness to set profit-making goals aside to follow the Oversight Board’s recommendations.
Constitutional doctrine protects false speech and even lies in certain circumstances. The doctrine also endorses non-literal interpretation of speech, which can cause statements implying factual assertions to be treated as non-factual, non-actionable opinion. These doctrines limit the degree to which laws may counteract falsity. Historically, publishers exercised discretion, through ex ante review, that limited the dissemination of false statements, including those that would have been protected speech had they been published.
Political dissatisfaction with the exercise of such discretion has led to calls to treat social media outlets either as state actors or common carriers. Neither option is desirable. Social media outlets do not satisfy the legal criteria for state action, and misguided claims that Section 230 gives them a subsidy provide no logical basis for treating them as state actors. Nor is common carrier treatment warranted. If the relevant market is publicly available expression, as critics seem to assert, then even the largest outlets have no plausible claim to market power. Normatively, to treat social media outlets as either state actors or common carriers would subject them to falsity-protecting constitutional rules and thus lead to a net increase in harmful conduct, lies among other things. Public discourse would be better served by allowing media outlets to continue to refine their content moderation practices, as private speech outlets historically have done.
President Biden’s pick to become the new Secretary of Commerce, Rhode Island Governor Gina Raimondo, told members of the Senate that, if confirmed, she will look to modify Section 230 of the Communications Decency Act. Her proposed changes have yet to be put forward, and it remains to be seen whether they would receive Congressional support.
TheVerge.com: Biden’s Commerce nominee backs changes to Section 230
Twenty-five years ago, Eugene Volokh published his seminal article Cheap Speech and What It Will Do, predicting many of the consequences of the then-brand-new Internet. On the whole, Volokh’s tone was optimistic. While many of his predictions have indeed come true, many would argue that his optimism was overstated. To the contrary, in recent years Internet giants generally, social media firms specifically, and Facebook and its CEO Mark Zuckerberg more specifically, have come under sharp and extensive criticism. Among other things, Facebook has been accused of violating its users’ privacy, of failing to remove content that constitutes stalking or personal harassment, of permitting domestic and foreign actors (notably Russia) to use fake accounts to manipulate American voters by disseminating false and misleading political speech, of failing to remove content that incites violence, and of excessive censorship of harmless content. Inevitably, critics of Facebook have proposed a number of regulatory solutions to Facebook’s alleged problems, ranging from regulating the firm’s use of personal data, to imposing liability on Facebook for harm caused by content on its platform, to treating Facebook as a utility, to even breaking up the company. Given the importance of Facebook, with over 2 billion users worldwide and a valuation of well over half a trillion dollars, these proposals raise serious questions.
This essay will argue that while Facebook is certainly not free of fault, many of the criticisms directed at it are overstated or confused. Furthermore, the criticisms contradict one another, because some of the solutions proposed to solve one set of problems—notably privacy—would undermine our ability to respond to other problems such as harassment, incitement and falsehood. And vice versa. More fundamentally, critics fail to confront the simple fact that Facebook and other Internet firms (notably Google) provide, without financial charge, services such as social media, searches, email, and mapping, which people want and value but whose provision entails costs. To propose regulatory “solutions” which would completely undermine the business model that permits these free services without proposing alternatives and without taking into account the preferences and interests of Facebook users, especially in poor and autocratic countries where, for all of its conceded problems, Facebook provides important and even essential services, is problematic at best. Finally, the failure of critics to seriously consider whether the First Amendment would even permit many of the regulatory approaches they propose, all in the name of preserving democracy and civil dialogue, raises questions about the seriousness of some of these critics.
Ultimately, this essay argues that, aside from some limited regulatory initiatives, we should probably embrace humility. This means, first, that unthinkingly importing old approaches such as the utility or publisher model to social media is wrong-headed and will surely do harm without accomplishing the reformers’ goals. Other proposals, on the other hand, might “solve” some problems, but at the cost of killing the goose that lays the golden egg. For now, the best path might well be the one we are on: supporting sensible, narrow reforms, but otherwise muddling along with a light regulatory touch, while encouraging and pressuring companies to adopt voluntary policies such as Twitter’s recent ban on political advertising, Google’s restrictions on micro-targeted political ads, and Facebook’s prohibitions on electoral manipulation. Before we take potentially dangerous and unconstitutional legislative action, perhaps we should first see how these experiments evolve and work out. After all, social media is less than two decades old, and there is still much we need to learn before thoughtful and effective regulation is plausible.