ONLINE abuse, cyber bullying, net hate — whatever you call it, you don’t have to look far to see internet trolls at work. Hiding behind the screen of online anonymity, these sickening keyboard warriors seem hell-bent on spoiling the internet for us all.
Amid the swell of support for victims of the terrible events in Manchester this week, trolls have instead been busy posting fake pictures of missing people. Following sustained online abuse against her son, Harvey, Katie Price recently called for these attacks to be made a specific criminal offence and for a register of offenders. Her e-petition collected more than 200,000 signatures in its first week, ensuring it will be debated in parliament.
So what are online platforms — many of which profit massively from the billions of photos, status updates and likes we share — doing to take the fight back to the trolls?
Facebook’s Mark Zuckerberg recently revealed that around a third of the content reported to its moderators is now flagged by artificial intelligence algorithms. The social network has long invested in AI for facial and image recognition but, with so many photos and status updates posted every day, many of Facebook’s reporting tools still rely on individuals to flag posts. Facebook acknowledges ‘it will take many years to fully develop these systems’ but sees automated reporting driven by artificial intelligence as the way forward.
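In rough terms, that means content can reach a human moderator by two routes: a user presses ‘report’, or an algorithm scores the post as likely to break the rules. A toy sketch of such a review queue might look like this (the fields and threshold are illustrative, not Facebook’s actual pipeline):

```python
# Toy moderation queue: posts reach human moderators either via user
# reports or via an automated classifier score. The 0.8 threshold and
# the field names are illustrative assumptions, not Facebook's system.

def review_queue(posts, ai_threshold=0.8):
    """Return posts needing human review, most urgent first."""
    flagged = [p for p in posts
               if p["user_reports"] > 0 or p["ai_score"] >= ai_threshold]
    # Prioritise by combined signal strength
    return sorted(flagged,
                  key=lambda p: (p["ai_score"], p["user_reports"]),
                  reverse=True)

posts = [
    {"id": 1, "user_reports": 0, "ai_score": 0.95},  # caught by AI alone
    {"id": 2, "user_reports": 4, "ai_score": 0.30},  # reported by users
    {"id": 3, "user_reports": 0, "ai_score": 0.10},  # nothing to review
]
print([p["id"] for p in review_queue(posts)])  # [1, 2]
```

Post 1 is never reported by a user but still reaches a moderator, which is the shift Zuckerberg describes.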
Online abuse doesn’t only strike in sentences. Non-consensual, intimate images — or ‘revenge porn’ — are among the most humiliating forms of harassment on the internet. But last month Facebook launched tools to identify intimate images across its networks. Once reported and confirmed by Facebook’s moderators, photo-matching technology will help to prevent images from being shared again on Facebook or other networks such as Instagram and Messenger.
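Facebook has not published the details of its matching technology, but systems like this typically rely on perceptual hashing: reducing an image to a compact fingerprint that survives resizing or re-compression, so copies can be recognised without storing the image itself. A minimal sketch of the idea, using a toy ‘average hash’ on a grid of grayscale pixel values:

```python
# Toy perceptual "average hash" -- NOT Facebook's actual algorithm,
# which has not been published; it illustrates the general idea that
# matching compares compact fingerprints robust to small changes.

def average_hash(pixels):
    """Hash a grayscale image given as a list of rows of 0-255 values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: is it brighter than the image's average?
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count the bits where two hashes disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, threshold=2):
    # A small distance means the images are almost certainly copies,
    # even after re-encoding slightly changed the pixel values.
    return hamming_distance(h1, h2) <= threshold

original = [[10, 200], [30, 250]]
recompressed = [[12, 198], [29, 251]]   # slightly altered copy
different = [[200, 10], [250, 30]]      # unrelated image

print(is_match(average_hash(original), average_hash(recompressed)))  # True
print(is_match(average_hash(original), average_hash(different)))     # False
```

Once a reported image’s fingerprint is on a blocklist, any future upload can be hashed and compared in the same way before it is ever shown.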
And earlier this year, Google launched Perspective, an artificially intelligent ‘anti-troll’ tool that uses machine learning to identify potentially abusive comments. The software has been trained to understand what offensive language looks like by observing thousands of moderated comments on platforms such as Wikipedia and the New York Times, as well as crowdsourcing human data from surveys.
Armed with these models, the Perspective API rates a comment based on its ‘toxicity’, giving a near-instant score of how abusive it is likely to be. This helps moderators — and those posting the comments — to understand the potential impact of their writing. Google has made this software available to publishers and experiments are under way with news websites and online platforms to remove offensive posts and improve the conversation.
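For developers, using Perspective amounts to sending a comment to Google’s Comment Analyzer endpoint and reading back a probability between 0 and 1. The sketch below builds a request in the publicly documented v1alpha1 shape and parses a sample response; the API key is a placeholder and the score shown is illustrative, and no network call is made:

```python
import json

# Sketch of a Perspective API exchange (v1alpha1 request shape as
# publicly documented). API_KEY is a placeholder; no network call here.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=API_KEY")

def build_request(comment_text):
    """Construct the JSON body asking for a TOXICITY score."""
    return json.dumps({
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    })

def extract_toxicity(response_json):
    """Pull the 0-1 summary score out of an analyze response."""
    data = json.loads(response_json)
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example response in the documented shape (score value illustrative)
sample_response = json.dumps({
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
})

print(extract_toxicity(sample_response))  # 0.92
```

A moderation tool can then act on the score however it likes: hide the comment, queue it for review, or simply warn the writer before they post.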
With Twitter getting in on the act as well, it’s clear that social media platforms can no longer ignore the effects of trolling. How to navigate the fine line between supporting freedom of expression and clamping down on abuse, however, is a stern test for technology.
Silencing toxic tweeters
TWITTER leans heavily on users to report potentially abusive messages but it has also been exploring how the public can be prevented from seeing such posts in the first place. As well as analysing the content of updates with a ‘quality filter’, Twitter is deploying tools to identify and act upon the patterns associated with abusive online behaviour.
For example, trolls frequently use fake profiles to attack victims, sometimes creating multiple accounts to bombard them. Twitter says it has developed technology that identifies these patterns and the users behind them. And once an account has been suspended, Twitter takes steps to recognise the banned user and prevent them from creating new accounts.
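Twitter’s real detection system is proprietary, but the behavioural signal it describes can be sketched simply: a cluster of brand-new accounts all directing messages at one person is suspicious in a way no single message is. A toy version, with the age and message thresholds as assumptions:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy sketch of the behavioural pattern Twitter describes; thresholds
# and data shape are illustrative, not Twitter's actual system.

def flag_suspected_trolls(messages, now, max_account_age_days=7,
                          min_messages=3):
    """messages: list of (sender, account_created, target) tuples.
    Flags new accounts that repeatedly message a single target."""
    counts = defaultdict(int)
    for sender, created, target in messages:
        if now - created <= timedelta(days=max_account_age_days):
            counts[(sender, target)] += 1
    return {sender for (sender, _t), n in counts.items()
            if n >= min_messages}

now = datetime(2017, 5, 26)
msgs = [
    ("@newtroll", datetime(2017, 5, 25), "@victim"),
    ("@newtroll", datetime(2017, 5, 25), "@victim"),
    ("@newtroll", datetime(2017, 5, 25), "@victim"),
    ("@oldfriend", datetime(2014, 1, 1), "@victim"),  # established account
]
print(flag_suspected_trolls(msgs, now))  # {'@newtroll'}
```

The long-standing account that also messages the victim is left alone; only the day-old account sending a burst of messages is flagged.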
It has also introduced tools that allow users to filter out profiles that haven’t verified a telephone number or email address, or haven’t changed their default profile picture. While email addresses are disposable, it is hoped that linking a profile to a telephone number may help stem the flow of fake accounts.
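One way to read that filter: a post stays visible only if its author shows at least one of those trust signals. A minimal sketch under that assumption (the field names are invented for illustration; Twitter’s real quality filter is not public):

```python
# Sketch of an account-signal filter; one possible reading of the
# feature described above. Field names are illustrative assumptions.

def passes_filter(profile):
    """Show posts only from accounts with at least one trust signal."""
    signals = (profile.get("phone_verified", False),
               profile.get("email_verified", False),
               profile.get("custom_avatar", False))
    return any(signals)

accounts = [
    {"name": "@genuine", "email_verified": True, "custom_avatar": True},
    {"name": "@egghead"},   # default avatar, nothing verified
]
visible = [a["name"] for a in accounts if passes_filter(a)]
print(visible)  # ['@genuine']
```

Because each signal costs the troll something (a phone number is harder to churn than an email address), even a simple filter like this raises the price of running throwaway accounts.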