The Twitterverse can be a nasty place. One study found that 88% of abusive social media mentions occur on Twitter. It is the platform of choice for a certain personality type: trolls. This dynamic isn’t new, but it has escalated recently. Around the time of the American election, Twitter noted in a press release that “the amount of abuse, bullying, and harassment we’ve seen across the Internet has risen sharply.” Online hatefulness has been normalized and in some cases lauded. This is not a healthy development.
Twitter CEO Jack Dorsey identifies trolling as one of the biggest inhibitors to the growth of his company. As a result, Twitter seems to have entered an arms race with the trolls. On Tuesday, they launched their latest salvo, consisting of three new lines of defense: stopping the creation of new abusive accounts, bringing forward safer search results, and collapsing potentially abusive or low-quality tweets.
As any Harry Potter or Hobbit fan knows, trolls are notoriously hard to kill. Ban user angryyam462 and they will simply reemerge with a cloned account as angryyam463. To combat this, Twitter is focusing on preventing permanently suspended users from creating new accounts. This should not be viewed as suppression of dissenting views but rather as protecting an ecosystem from known predators. As Twitter puts it, they are focusing on “the most prevalent and damaging forms of behavior, particularly accounts that are created only to abuse and harass others.”
“Safe Search” programmatically removes tweets “that contain potentially sensitive content and Tweets from blocked and muted accounts.” Importantly, the tweets aren’t deleted; you can still find and access them with a little effort. You just won’t be assaulted by them in your default search results. Similarly, abusive and low-quality tweets are collapsed or pushed “below the fold” so that more relevant conversations are brought forward.
Twitter’s efforts to combat trolls are a work in progress. Tuesday’s announcements are qualified with “we are working on”, “we are taking steps” and “rolling out in the coming weeks.” Hopefully they will materialize and work as intended.
These platform tweaks are a good step toward programmatic countermeasures to the post-truth, post-decorum dynamic I described in my last post. Just as important is the awareness Twitter and other members of the industry are showing that we have a serious problem that must be addressed.
In early 2016, Twitter established a Trust and Safety Council composed of advocates, academics, and researchers to combat “behavior intended to harass, intimidate, or use fear to silence another user’s voice.” While the efforts of corporate bodies such as this are primarily intended to serve the needs of their parent organizations, they are also drawing attention to the need to establish new norms of acceptable behavior for online forums. It is disheartening how low the bar is set, but it’s a start.
For example, Twitter has a Hateful conduct policy to explicitly state what is out of bounds. The rules are basically the same as those governing an elementary school playground. It is sad that they must be reiterated and enforced for a community of users one would assume to be adults. Most children understand that “slurs, epithets, racist and sexist tropes, or other content that degrades someone” are unacceptable in civil discourse. Twitter has to call this out, monitor for violations, and penalize offenders. Hopefully these norms and mechanisms will be applied consistently, regardless of who is tweeting.
At least it’s a start.
Article by Darin Stewart, Gartner