Should social media companies ban Holocaust denial from their platforms? What about conspiracy theorists who spew hate? Does good corporate citizenship mean platforms should remove offensive speech or tolerate it? The content moderation rules that companies develop to govern speech on their platforms will have significant implications for the future of freedom of expression. Given that the prospects for compelling platforms to respect users’ free speech rights within the U.S. system are bleak, what can be done to protect this important right?
In June 2018, the United Nations’ top expert on freedom of expression called on companies to align their speech codes with the standards embodied in international human rights law, particularly the International Covenant on Civil and Political Rights. After the controversy over de-platforming Alex Jones in August 2018, Twitter’s CEO agreed that his company should root its values in international human rights law, and Facebook referenced this body of law in discussing its content moderation policies.
This is the first Article to explore what companies would need to do to align the substantive restrictions in their speech codes with the key international standard for protecting freedom of expression. The Article concludes that it would be both feasible and desirable for companies to ground their speech codes in this standard, though further multi-stakeholder discussions would be helpful in clarifying certain issues that arise in translating international human rights law into a corporate context.
Facebook Tells Police to Stop Creating Fake Accounts
Facebook recently informed the Memphis police via letter that they must stop creating and using bogus Facebook pages. The letter stems in large part from a report earlier this summer that the Memphis police were creating fake Facebook accounts in order to surveil Black activists. The question, of course, is what action, if any, Facebook will take against the Memphis police, or any other police department, if they continue to violate Facebook’s rules. It would be interesting to see whether Facebook would go so far as to remove an entire police department from its site.
Judge Prevents Defense Attorney from Using Social Media to Research Jurors
The vast majority of judges in this country allow attorneys to use social media to research jurors. A particular judge may place stipulations on the research, such as requiring the attorney to share it with opposing counsel and/or give jurors prior notice that the research will occur. However, it is the rare instance where an attorney is completely shut down, which is what occurred here. What is even rarer is that this is the second time, to my knowledge, that it has happened to criminal defense attorney Andrew Jezic. Five years earlier, a different judge told him that he could not research jurors.
Facebook in Hot Seat for Discriminatory Ads
The picture above was one of the allegedly discriminatory job advertisements sent by an employer to male Facebook users between the ages of 18 and 50 who were in the Ft. Worth, Texas area or had recently visited it. Employers are able to send this type of pinpoint advertising to specific groups of people because of the data analytics Facebook has on its users.
According to a complaint filed with the EEOC by the Communications Workers of America and the ACLU, women were not shown this ad, which, if true, could violate federal law covering discriminatory hiring practices. Here, the complainants target both the employers and Facebook, arguing that Facebook acted as an employment agency serving as “an active participant in the recruiting campaign rather than a passive publisher of content like a traditional newspaper with a classified section.”
Hate Speech on Social Media
This essay expounds on Raphael Cohen-Almagor’s recent book, Confronting the Internet’s Dark Side: Moral and Social Responsibility on the Free Highway, and advocates placing narrow limitations on hate speech posted to social media websites. The Internet is a limitless platform for information and data sharing. It is also, however, a low-cost, high-speed dissemination mechanism that facilitates the spread of hate speech, including violent and virtual threats. Indictment and prosecution for social media posts that cross the line from opinion into inciteful hate speech are appropriate in limited circumstances. This article uses various real-world examples to explore when limitations on Internet-based hate speech are appropriate.

In October 2015, twenty thousand Israelis joined a civil lawsuit filed against Facebook in the Supreme Court of the State of New York. Led by the civil rights organization Shurat HaDin, the suit alleges that Facebook allows Palestinian extremists to use their Facebook pages to openly recruit and train terrorists and to plan violent attacks calling for the murder of Israeli Jews. The suit raises important questions, among them: When should the government initiate similar suits to impose criminal sanctions for targeted hate speech posted to Facebook? What constitute effective restrictions on social media that also balance society’s need for robust dialogue and free communication, subject to limitations reflecting a need for order and respect among people?

Our essay progresses in four stages. First, we examine the philosophical origins of free speech and the historical foundations of free speech in the United States. Second, we provide an overview of American free speech jurisprudence, in which American history and law embrace free speech as a grounding principle of democracy yet simultaneously subject speech to limitations. Third, we address the particular jurisprudence that provides a framework for imposing limitations on free speech in the context of social media. Finally, through a comparative exploration of real-world examples, we address the narrow instances in which limitations on inciteful and targeted hate speech are appropriate.
Using Social Media to Monitor Students
Interesting article in the NY Times discussing how schools are increasingly monitoring the public social media accounts of their students. The article also discusses the 2014 California law “requiring California schools to notify students and parents if they are even considering a monitoring program.” This same law also permits students to see the information collected about them and to have that information destroyed once the student turns 18 or leaves the district.
Authentication of Social Media
According to the article below, “there is no strict rule or formula that must be met in order to have social media communications authenticated in order to be admitted into evidence.” While this may be true in New York, other jurisdictions, e.g., Maryland, take a more hard-line approach to authenticating evidence derived from social media.
NewYorkLawJournal.com: Authentication of Social Media