Online conversations are becoming increasingly toxic. Abuse is rife, and its main delivery method is online comments. Google and Jigsaw have been working on a method to identify and filter out toxic comments with machine learning. It’s called Perspective.
If you try to discuss anything online nowadays, it seems that you are bound to get at least one “negative reaction” – even from friends. So imagine what happens in open fora, message boards, on social platforms, within big private or public groups… Well, you don’t need to imagine it. If you’re online a lot, you’ve noticed that abuse is everywhere. As Jared Cohen, President of Jigsaw – an Alphabet company – recently said:
Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation.
This is a far-reaching problem with lasting effects. As Cohen noted, citing a recent study, 72% of American internet users have witnessed harassment online, and almost half have experienced it personally. Beyond the psychological toll, online harassment changes the way we communicate: the same study found that almost a third of users “self-censor what they post online for fear of retribution.”
In real numbers,
online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.
And it’s not only a problem for users. News organisations are struggling to moderate their comment sections – so much so that many have resorted to shutting them down altogether. A notable example, The New York Times, has turned off comments on around 90% of its articles. But of course this is not a real solution.
So, Jigsaw and Google worked together to try to solve the problem with technology. Specifically, machine learning. They came up with Perspective, “an early-stage technology that uses machine learning to help identify toxic comments.” And everyone can start to use it. Perspective has an API that opens the technology to platforms and publishers to use on their own sites. Here’s how it works:
- Perspective reviews comments and scores them “based on how similar they are to comments people said were ‘toxic’ or likely to make someone leave a conversation.” The system was trained on hundreds of thousands of comments that human reviewers labelled by hand.
- Publishers can then choose “what they want to do with the information they get from Perspective.” This could range from flagging a comment to showing commenters how “toxic” their comment is – raising awareness even among abusers. Perspective even allows users to “sort comments by toxicity,” clearing up the clutter that toxic comments often create.
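The flow above – send a comment, get back a toxicity score, act on it – can be sketched against Perspective’s public AnalyzeComment endpoint. The endpoint URL and JSON shapes below follow the published API reference, but the API key and the 0.8 flagging threshold are placeholders a publisher would choose themselves:

```python
# Minimal sketch of scoring a comment with the Perspective API.
# Assumptions: "YOUR_API_KEY" is a hypothetical key, and the 0.8
# threshold is an arbitrary example of a publisher-chosen policy.
import json
from urllib import request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key={key}")

def build_request(comment_text):
    """Build the JSON payload asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0-to-1 summary score out of an AnalyzeComment response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def moderation_action(score, flag_threshold=0.8):
    """Map a toxicity score to a publisher-chosen action (example policy)."""
    return "flag_for_review" if score >= flag_threshold else "publish"

def score_comment(comment_text, api_key):
    """Send one comment to Perspective and return its toxicity score."""
    payload = json.dumps(build_request(comment_text)).encode("utf-8")
    req = request.Request(API_URL.format(key=api_key), data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return extract_toxicity(json.load(resp))
```

Sorting a thread “by toxicity” then reduces to sorting comments on the extracted score – the API only returns numbers; every decision about hiding, flagging, or ranking stays with the publisher.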
As Perspective is based on machine learning, it constantly gets better with time. But Jigsaw and Google are not satisfied with this. Their teams are already working on expanding it. As Cohen explained:
Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.
Will Perspective, and technologies like it, improve comments and help improve conversations online? Let’s hope so!