As part of its endless, Sisyphean struggle to curb online harassment, Twitter recently rolled out two improvements that give users more control over the content they see. First, users can now mute “keywords, phrases and even entire conversations,” which keeps them from receiving notifications about tweets that include those muted words. Second, Twitter is expanding users’ ability to report any “hateful conduct” they see, even if they weren’t the target. While it’s fine that Twitter is giving its user base more control, users shouldn’t expect these features to do much to combat online harassment. Furthermore, those who use the new muting options should do so judiciously, lest they unintentionally block themselves from seeing legitimate content aimed at furthering discussion on important topics rather than at harassing them.
The broadened reporting privileges are fairly straightforward, but the selective muting function is more interesting because it has the potential to be more useful. I don’t use Twitter, but if I did, I’d want as much control as possible over what I see. The muting function matters most for the notifications you get when someone mentions you in a tweet or replies to one of yours: mute certain words, and you won’t be notified about replies that contain them. This can keep you from seeing a tweet that directly insults or defames you, for example.
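As a toy illustration of the mechanic described above (this is a hypothetical sketch, not Twitter’s actual code; the names `MUTED_WORDS` and `should_notify` are invented), a keyword mute can be thought of as a simple filter applied before a notification is delivered:

```python
# Toy sketch of keyword-based notification muting.
# Hypothetical illustration only -- not Twitter's implementation.
MUTED_WORDS = {"spoiler", "troll"}

def should_notify(tweet_text: str) -> bool:
    """Return False (suppress the notification) if the tweet
    contains any muted word, ignoring case and punctuation."""
    words = (w.strip(".,!?\"'") for w in tweet_text.lower().split())
    return not any(w in MUTED_WORDS for w in words)

print(should_notify("Nice thread, thanks!"))  # notification allowed
print(should_notify("What a spoiler!"))       # notification suppressed
```

Note that a filter like this only gates the notification; the tweet itself still exists and can be seen elsewhere, which is exactly the limitation discussed next.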
The function’s usefulness is limited, however, primarily because tweets containing your muted words will still be visible on your timeline and in search results. The feature would be more useful if it could also scrub those tweets from your timeline and search results. Additionally, if you aren’t careful about which words you mute, you can end up missing notifications about content you might want to see, just because the tweet containing it used a muted word. Finally, even if you limit your muted list to profanity, insults and slurs, it will do little to stop Twitter’s biggest problem: online harassment and trolling.
The short version: Trying to fix Twitter’s problem with hate and abuse is like trying to fix a collapsed bridge using rubber bands and tape. (Fittingly, that sentence would fit in a single tweet.) Like every other online service that allows users to interact, Twitter is a hive of internet trolls, hate speech and hurt feelings, and it has a long history of desperately trying to stamp out that abuse by banning offending users. Twitter’s beleaguered reputation has even scared away potential investors such as Disney, whose money could have shored up its plummeting stock price.
Increased muting and reporting powers aren’t going to be enough to save Twitter, because the damage to its reputation can’t be mended. Both investors and trolls already know it’s a platform where anyone can post offensive images and fling insults at sensitive users for cheap entertainment. These two new features will only stop a few low-effort trolls who aren’t serious about bothering anyone. A truly persistent troll can get around these new barricades. Did their insults get muted? They can get creative and use different ones, or attack with offensive images instead of words. Did they get banned after multiple reports? They can make another account within minutes. The only way to “stop” a troll is to never engage; don’t feed the trolls, as they say.
Although Twitter’s new tools are mildly useful steps toward giving users better control over what they see and helping them dodge abuse, the damage done to Twitter is terminal. As usual, the best way to distance yourself from material you don’t want to see online is to disengage from social media as much as you can.