Twitter’s policy reboot: the good, the bad, and the ugly

BY WAFA BEN HASSINE | IN Digital Media | 15/01/2016
The lack of transparency in what is considered the promotion of terrorism risks Twitter applying its new policy to users inconsistently and arbitrarily, says WAFA BEN HASSINE

Reprinted courtesy of the Electronic Frontier Foundation

January 12, 2016 Update: It is important to note that some of the language that was added to the Twitter Rules on December 30, 2015 is not entirely new and was recycled from other Twitter Help pages, such as the Abusive behavior policy page. We consider the direct and explicit inclusion of this language in the Twitter Rules significant for the reasons discussed in the post below.

Twitter has struggled for years with how to deal with harassment and abuse on its service. Former CEO Dick Costolo went so far as to say, “We suck at dealing with abuse” and vowed to make improvements. Some of Twitter’s previous efforts to manage online harassment and abuse, particularly those that give users more control over what they see, have garnered our praise. Allowing users to export and share block lists, for instance, marked an important step in that direction. However, Twitter’s latest announcement gives us some reason for concern. Last Wednesday, Twitter announced that it is updating its rules to clarify what it considers to be “abusive behavior and hateful conduct.”

“The updated language emphasizes that Twitter will not tolerate behavior intended to harass, intimidate, or use fear to silence another user’s voice. As always, we embrace and encourage diverse opinions and beliefs – but we will continue to take action on accounts that cross the line into abuse.” 

A side-by-side comparison of the Twitter Rules on December 27, 2015 and today shows considerable changes. Some of these changes are organizational, such as dividing the sweeping “Abuse and spam” section into respective “Abuse” and “Spam” categories, each with developed subsections. Other changes, however, are substantive and present users with new areas of ambiguity. Notably, four new subsections were added: (1) violent threats, (2) harassment, (3) hateful conduct, and (4) self-harm.

The Good

What was once the “abusive behavior” section is now divided into one section on harassment and another on hateful conduct. The change is welcome – Twitter did not previously offer guidance as to what constitutes harassment. Today, in its harassment section, Twitter lists four factors the company may consider when evaluating abusive behavior.

Twitter also says that it is taking new measures to help people who appear to be contemplating suicide or self-harm while using the platform. In the “Self-Harm” subsection, Twitter indicates that when it receives reports that a user is considering suicide or self-harm, it may “take a number of steps to assist them, such as reaching out to that person expressing our concern and the concern of other users on Twitter or providing resources such as contact information for our mental health partners.” It’s good to see Twitter helping vulnerable users get help, but troubling that the subsection sits under a header that lumps it in with every other offense: “Any accounts and related accounts engaging in the activities specified below may be temporarily locked and/or subject to permanent suspension.” We hope that Twitter does not suspend users who appear to be contemplating self-harm or suicide, but instead reaches out to them.

Additionally, Twitter announced that it will put free speech ahead of the “rights” of politicians to avoid embarrassment. Last summer, Twitter pulled the plug on Politwoops, a transparency project that records, stores, and publishes deleted tweets of politicians, by revoking access to its API. Last Thursday, Twitter announced that it reached an agreement with the Open State Foundation, the group that runs the project. Politwoops will again be functional – and it’s refreshing to see Twitter correct a mistake.

The Bad

The subsection on violent threats now includes a short clause on “threatening or promoting terrorism.” The clause was likely added in response to critics who claim Twitter is not doing enough to silence Daesh recruiters on its service. Though short and seemingly straightforward, the clause is worryingly vague. Our own Supreme Court has struggled to define the word “terrorism,” and Twitter does not offer a working definition either. Further, Twitter has not given us any information on what it considers speech that is “promoting terrorism,” or when free speech crosses the line and becomes offending content on the basis of promoting terrorism. This leaves Twitter with the massive responsibility of defining terrorism and justifying account suspensions accordingly.

The lack of transparency about what counts as the promotion of terrorism risks Twitter applying its new policy to users inconsistently and arbitrarily, and ultimately suppressing free speech.

Furthermore, whereas the rules previously only banned “violence against others,” they now explicitly ban hateful speech – defined as any speech that threatens other people “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease” or incites harm towards others on the same bases. Threatening speech can cause harm, but we remain uncomfortable with the prospect of leaving it to Twitter to distinguish hateful speech from speaking truth to power across hundreds of countries and every community.

Currently, Twitter removes user-generated content in a non-transparent way that leaves no room for accountability, which makes it impossible to know whether Twitter’s policies are being applied fairly across the board. In an effort to shed light on this problem, which plagues all social media platforms, EFF has partnered with Visualizing Impact to launch Onlinecensorship.org – a platform that documents the who, what, and why of content takedowns on social media.

Onlinecensorship.org also provides other tools for users, including a guide to the various appeals processes for fighting content takedowns.

But as long as the content takedown process remains unaccountable and opaque, we will remain uncomfortable with large-scale social media platforms determining for all their users what constitutes unacceptable speech and taking it down.

The Ugly

As if to prove that our worries about overreach are well founded, Twitter’s new policies may have already resulted in an embarrassing mix-up. On December 31, the company suspended the account of Norway-based human rights activist Iyad al-Baghdadi shortly after two news articles misidentified him as Abu Bakr al-Baghdadi, the leader of Daesh – “al-Baghdadi” is a very common name in the Arab world. Twitter simply sent al-Baghdadi a message telling him that he had “violated the Twitter Rules.” Iyad al-Baghdadi is a prominent activist who has a large following online and no connection whatsoever to Daesh.

Without transparency, there is also no way to know the extent to which these companies might be pressured to censor and suspend accounts at the request of the U.S. government. With news that the White House has been holding private meetings with Twitter and other companies to discuss how they might combat terrorism, that concern can only grow.

Twitter’s purported intention with the latest rule changes is to clarify what “abusive behavior” entails for the purposes of using the service. In reality, the changes do little but further confound the issues.