The rise of online social networks has engaged regulators, users’ representatives, and social-network service providers in a vibrant regulatory dialogue around shifting privacy norms and laws. Driven by competitive market forces, these social-networking online service providers have introduced new services and opened privacy barriers to allow greater information flow, which, in turn, has created disjunctions between users’ desired and achieved levels of privacy. By examining the conflict of values among stakeholders and subsequent technology changes in the context of privacy expectations, norms, market pressures, and laws, this project explores how the regulatory system affects information collection practices of the largest social network service provider: Facebook. Specifically, the paper traces Facebook’s information collection practices through an historical content analysis of regulatory decisions, users’ complaints, and associated legal documents to illuminate the dynamic relationship among stakeholders in a competitive market.
Undeniably, by adding services and changing privacy settings and notices, online service providers operating in a dynamic and rapidly innovating competitive environment are uniquely able to control their virtual environments and influence users’ behavior as part of the competitive process. This project analyzes the approach of the largest social networking service provider in its competition for users’ attention and, in turn, how it reacts to other stakeholders. The understanding this paper provides yields a better sense of the tools and policies required to regulate information collection practices.
The presence of terrorist speech on the Internet tests the limits of the First Amendment. Widely available cyber terrorist sermons, instructional videos, blogs, and interactive websites raise complex expressive concerns. On the one hand, statements that support nefarious and even violent movements are constitutionally protected against totalitarian-like repressions of civil liberties. The Supreme Court has erected a bulwark of associational and communicative protections to curtail government from stifling debate through overbroad regulations. On the other hand, the protection of free speech has never been an absolute bar against the regulation of low-value expression, such as calls to violence and destruction.
Terrorist advocacy on the Internet raises special problems because it contains elements of political declaration and self-expression, which are typically protected by the First Amendment. However, terrorist organizations couple these legitimate forms of communication with calls to violence, recruitment to training, and indoctrination to belligerence. Incitement readily available on social media is sometimes immediate or, more often, calibrated to influence and rationalize future dangerous behaviors. This is the first article to analyze all the Supreme Court free speech doctrines that are relevant to the enactment of a constitutionally justifiable anti-terrorism statute. Such a law must grant the federal government authority to restrict dangerous terrorist messages on the Internet, while preserving core First Amendment liberties. Legislators should develop policies and judges should formulate holdings on the bases of the imminent threat of harm, true threats, and material support doctrines. These three frameworks provide the government with the necessary constitutional latitude to prosecute dangerous terrorist speech that is disseminated over social media and, thereby, to secure public safety, without encroaching on speakers’ right to free expression.
Recently, Microsoft, in a blog post, discussed how it will address internet-related terrorist content. While I did not find anything too surprising here, I did like reading about Microsoft’s partnership efforts. Here is a sample of how Microsoft is partnering with others.
Leveraging new technologies: One challenge is that once a technology firm removes terrorist content, it is often quickly posted again. It is a game of “whack-a-mole,” but with serious consequences. We want to see if technology that has worked well in other circumstances can be used to good effect here. That’s why we are providing funding and technical support to Professor Hany Farid of Dartmouth College to develop a technology to help stakeholders identify copies of patently terrorist content. The goal is to help curb the spread of known terrorist material with a technology that can accurately and proactively scan and flag public content that contains known terrorist images, video and audio.
Investing in public-private partnerships: We know that tackling these difficult issues will require new and innovative partnerships bringing together experts and leaders from different backgrounds and perspectives. To help with this, we’re a founding member and a financial sponsor of a new, public-private partnership to develop or enhance activities to help combat terrorist abuse of Internet platforms. Launched in April in Geneva, the initiative brings together the United Nations Counter-Terrorism Committee Executive Directorate, civil society, academics, and government and industry representatives, to address terrorist content.
Providing additional information and resources: We appreciate that we can also work to enhance education and understanding, especially among young people. To help, we’re also adding new resources to the online safety program pages of our YouthSpark Hub, an important component of Microsoft’s YouthSpark initiative, which provides access to educational and economic information and opportunities for young people around the world. YouthSpark Hub provides resources for safer online socializing and tools to identify the risks and responsibilities of being good digital citizens. The new resources include material designed to help young people distinguish factual and credible content from misinformation and hate speech as well as tools for how to report and counter negative content. Experts say youth with more fully developed analytical and critical thinking skills are less likely to start down questionable paths, including those toward radicalization.
To read the entire post go here.
Law Firm Blog Targeting Social Media
Apparently, I have been remiss in overlooking the work occurring at Sheppard Mullin, which over the past few years has established itself as a law firm that focuses on the needs of those who use social media and interactive games. According to the firm’s website:
Sheppard Mullin was one of the first major law firms to create a multidisciplinary games industry team. The social media and interactive games industries have grown to support different forms of interactive entertainment, hardware and software platforms, and business models. Our nationally recognized, multidisciplinary team includes patent lawyers with strong technical backgrounds in the electrical and computer arts, other intellectual property lawyers who protect brands and expressive content, transactional and financing lawyers who get deals done, and those with unique experience in sports industries, digital media and distribution, digital business, advertising, gambling, fantasy sports, and virtual currencies. Capable of handling any challenge, our Social Media & Games team has expanded to more than 70 lawyers in 15 offices in the U.S., Europe and Asia.
Among other things, the firm publishes a weekly Web Wrap-Up entitled Law of the Level, which is arguably a blog full of insightful information on social media and interactive games. It will be interesting to see whether Sheppard Mullin can offer real competition to Morrison Foerster, which, at present, is the go-to firm for all things social media related.
Facebook Publishes Editorial Guidelines
In light of the recent fallout over how Facebook determines what stories to include in “Trending Topics,” the social media behemoth has released its editorial guidelines to the public. Among other things, the guidelines inform readers that Facebook editors rely on 10 media outlets to determine the importance of a story. Specifically, the guidelines read as follows: “We measure this by checking if it is leading at least five of the following 10 news websites: BBC News, CNN, Fox News, the Guardian, NBC News, the New York Times, USA Today, the Wall Street Journal, Washington Post, Yahoo News or Yahoo.”
To read more about this story go here.
Witness Intimidation on Facebook Leads to 37-Month Sentence
Knoxville News Sentinel: Email? Facebook message? Judge cares less about the label than the crime
Incorporating Social Media into Federal Background Investigations
This past Friday the House Oversight and Government Reform Committee held a hearing on how OPM uses social media to conduct background checks. To access the hearing go here. Below is the House write-up of the hearing.
To better understand why social media information is not currently used in conducting background investigations.
To learn what pilot programs have found to date, and what plans are underway to responsibly include social media information in the future.
Federal agencies do not currently make use of social media data in the background investigations process, nor do they inquire about applicants’ online identities.
The Consolidated Appropriations Act of 2016 contained a measure mandating that the Office of the Director of National Intelligence direct federal agencies to adopt a personnel security program integrating social media information by the end of 2020.
OPM announced a pilot program to test its ability to automatically track public social media postings of people applying for security clearances.