The world’s most popular computer game is taking a bold new step to counter harassment.
“League of Legends” publisher Riot Games announced in a blog post last week that North American players now have access to a new “reform system” that works to correct abusive behavior in the competitive online game.
If you experience abusive language from a teammate or opponent during a game, you can report that player at the end of the match — as usual. But now, a system is in place to automatically process the content of a player’s chat messages. It will “validate” the report and deliver a “reform card” to the offending player, detailing their negative behavior and the punishment they’re receiving, in the hope of improving their conduct in future matches.
“If a player shows excessive hate speech (homophobia, sexism, racism, death threats, so on) the system might hand out a permanent ban to the player,” Jeffrey Lin, Riot Games’ lead social systems designer, elaborated in a comment on the blog post.
Punishment is supposedly handed down within 15 minutes after a game concludes. But how accurate can an automated system really be?
“In terms of false positives, we recently flew in Player Support and Player Behavior team members from all around the world to hand-review thousands of chat logs, and we saw false positive rates in the 1 in 6000 range,” Lin said.
The reform system is currently in a “testing” period, meaning that actual Riot Games employees will review the first several thousand reports. If all goes well, it’ll be introduced in the other regions where “League of Legends” is available: Europe, Korea, China and Southeast Asia.
Ben Kuchera of Polygon noted Monday that it’s already rolling out for European players.
“League of Legends” has long led the charge in terms of how popular video games deal with online trolls, introducing innovative ways to counter harsh language and improve player behavior. The game is tremendously popular, boasting over 67 million monthly players in 2014. Because it’s typically played competitively with other humans — rather than against computer-controlled players — tensions can sometimes run high during matches.
Riot Games’ blog post notes that moving forward, the reporting system could also be used to reward players who display good behavior, rather than just punishing those who do not.