How This Gaming Company Is Wiping Out Cyber-Bullying
APRIL 7, 2016 • BLOG POST • BY JAMES DALY, WIRED BRAND LAB
IN THIS ARTICLE
- League of Legends creator Riot Games has been experimenting with AI and predictive analytics to find and stop online trolls and improve sportsmanship
- Since the company implemented its AI-assisted program, verbal abuse in games has dropped 40 percent
Riot Games is putting artificial intelligence to work to improve the sportsmanship of millions of gamers
Millions of young online gamers today are accustomed to battling bad guys. But their biggest foes are often their fellow players. Many online gaming sites are rife with creepy bigotry, harassment and even death threats. The same problem plagues other online communities, including Twitter, YouTube and Facebook.
So how do you root out the rotten apples? Over the past several years, Riot Games, which produces the immensely popular League of Legends, has been experimenting with artificial intelligence (AI) and predictive analytics tools to find the online trolls and make their games more sportsmanlike. League players are helping spot toxic players and, as a community, deciding on appropriate responses. Their judgments are also analyzed by an in-house AI program that will eventually, largely on its own, identify, educate, reform and discipline players. Riot's research into how large, diverse online communities can self-regulate could inform everything from building more collaborative teams based on personality types to understanding how our online identities reflect our real-world selves.
“We used to think that online gaming and toxic behavior went hand in hand,” explains Jeffrey Lin, lead game designer of social systems at Riot Games. “But we now know that the vast majority of gamers find toxic behavior disgusting. We want to create a culture of sportsmanship that shows what good gaming looks like.”
Achieving that goal presents big challenges. Riot Games has always maintained rules of conduct for players—forbidding use of racial slurs and cultural epithets, sexist comments, homophobia and deliberate teasing—but in the case of League, the volume of daily activity has made it all but impossible to enforce the rules through conventional tools and human efforts. More than 27 million people play at least one League game per day, with over 7.5 million online during peak hours.
That's one reason why Riot Games is putting serious brainpower behind the initiative. Lin, who holds a doctorate in cognitive neuroscience, works with two other Riot doctors—data science chief Renjie Li (Ph.D. in brain and cognitive sciences) and research manager Davin Pavlas (Ph.D. in human factors psychology)—to drive the program forward. Creating the tech foundation for this effort wasn't easy either. A giant data pipeline was needed to turn petabytes of anonymous user data into useful insights on how players behave. Lin's team also collaborated with artists and designers to make sure their work didn't interfere with the look or flow of the game.
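Riot hasn't published details of that pipeline, but the core task is easy to picture: boil raw, anonymized event logs down to per-player behavior summaries. Here is a minimal sketch in Python; the field names and the reports-per-game metric are invented for illustration.

```python
# Hypothetical sketch only: Riot's real pipeline is not public. This reduces
# anonymized report events to a reports-per-game rate for each player.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ReportEvent:
    player_id: str   # assumed to be anonymized upstream
    game_id: str
    category: str    # e.g. "verbal_abuse"

def report_rates(events, games_played):
    """Aggregate report events into a reports-per-game rate per player."""
    counts = defaultdict(int)
    for event in events:
        counts[event.player_id] += 1
    return {
        pid: count / games_played[pid]
        for pid, count in counts.items()
        if games_played.get(pid, 0) > 0
    }

# A player reported in nearly every game stands out from the baseline.
rates = report_rates(
    [ReportEvent("a1", "g1", "verbal_abuse"), ReportEvent("a1", "g2", "verbal_abuse")],
    games_played={"a1": 2, "b2": 50},
)
print(rates)  # {'a1': 1.0}
```

At League's scale the same aggregation would run across a distributed system rather than in memory, but the shape of the computation is the same.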
Phase 1: The Tribunal
In the first phase of the program—the Tribunal, which launched in 2011—players could report fellow gamers they believed had broken the rules. Reports were fed into a public case log, where other players (called “summoners”) were assigned incidents to review. A case often included chat logs, game statistics and other details to help the reviewer decide whether the accused should be punished or pardoned. Lin says that most negative interactions come from otherwise well-behaved players who are simply having a bad day and take it out online.
Reviewers used the context of the remark to vote on the degree of punishment for the case, which could range from a modest “behavior alert” email reminding the offender of the infraction and pointing them toward positive play to a lengthy ban. After tens of millions of votes were cast, Riot put the Tribunal “in recess” in 2014 and began the pivot toward a new system that AI could manage more on its own.
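The Tribunal's actual decision logic was never published, but the shape of the mechanism, crowd votes feeding an escalating penalty ladder, is straightforward to sketch. The function below is purely illustrative; the majority-vote rule, penalty names and escalation steps are all invented.

```python
# Hypothetical sketch of Tribunal-style vote tallying; Riot's actual rules
# are not public. Penalty names and thresholds are invented.
from collections import Counter

# Escalating ladder, mildest first, matching the range described above.
PENALTIES = ["behavior_alert_email", "chat_restriction", "temporary_ban", "lengthy_ban"]

def decide(votes, prior_offenses):
    """Majority 'punish' vote triggers a penalty; repeat offenders escalate."""
    tally = Counter(votes)
    if tally["punish"] <= tally["pardon"]:
        return "pardon"
    step = min(prior_offenses, len(PENALTIES) - 1)
    return PENALTIES[step]

print(decide(["punish", "punish", "pardon"], prior_offenses=0))  # behavior_alert_email
print(decide(["punish"] * 5, prior_offenses=3))                  # lengthy_ban
```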
“There are two ways to deal with any type of problem at this scale, and we support both in tandem,” says Erik Roberts, head of communications at Riot. “First, put the tools in the hands of the community and second, build machine learning systems that leverage the scale of data—contributed from the community through reports—to combat the problem.”
Phase 2: AI
Last year, Riot kicked off testing of its new “player reform” system, which provides faster feedback and automates parts of the process. It specifically targets verbal harassment, with the system capable of emailing players “reform cards” that document evidence of their negative behavior. Lin's team hand-reviewed the first few thousand cases to make sure everything was working, and the results were astounding: verbal abuse has dropped 40 percent since the Tribunal and the new AI-assisted evaluation program took over.
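Riot hasn't said what model powers the reform system. A common baseline for this kind of task, though, is a text classifier trained on chat lines labeled by the community's own verdicts, which is exactly the data the Tribunal years produced. Below is a minimal sketch with scikit-learn, using toy data in place of real chat logs; nothing here reflects Riot's actual implementation.

```python
# Hypothetical baseline, not Riot's actual model: a bag-of-words classifier
# trained on chat lines labeled via community verdicts (toxic vs. clean).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for Tribunal-reviewed chat logs.
chat_lines = [
    "uninstall the game you are worthless",  # toxic
    "report this idiot, he is trash",        # toxic
    "nice play, well done",                  # clean
    "good luck have fun everyone",           # clean
]
labels = [1, 1, 0, 0]  # 1 = verbal abuse, 0 = acceptable

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chat_lines, labels)

# New lines get a toxicity score; high scorers could be quoted back to the
# player on a "reform card", with humans auditing the early cases.
score = model.predict_proba(["you are all worthless"])[0, 1]
print(round(float(score), 2))
```

The hand review Lin's team did on those first few thousand cases maps onto standard practice for systems like this: audit a model's early outputs before letting it act on its own.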
Lin believes that more game developers will follow this model—linking cognitive research to better game play and hiring cross-discipline teams dedicated to that purpose. “By showing toxic players peer feedback and promoting a discussion among the community, players reformed,” Lin says. “We showed that with the right tools we could change the culture.”