Two years ago, Polygon polled multiple gaming companies about the corporate policies they had in place to protect employees from online harassment. Of the 25 companies Polygon reached out to, only six responded with a proper statement. At the time, most simply didn't have a real strategy for dealing with attacks on social media. Fast forward to 2018: according to Polygon's follow-up story, things haven't changed much at all.
As was the case two years ago, most companies either failed to respond to Polygon's inquiry at all or acknowledged the question but declined to comment. The companies that fell into one of these two categories are: Activision, Bethesda, Epic Games, GameStop, Ubisoft, Sony, EA, Capcom, Microsoft, Sega, and Take-Two.
Nintendo was one of the companies that replied two years ago, and it did so once again, saying:
“Nintendo condemns the harassment of any individual in any form, including through social media or when playing games online. As we noted in our last statement, we take steps to support and protect our employees through policies designed to combat online harassment. That includes working to keep pace with the challenges this issue raises as it grows in complexity. We are also continuing our work to limit our consumers’ exposure to negative communications or hostile online interactions while using our systems and games. Nintendo is committed, whether through its work in the industry or with the broader community, to ensuring that people can both work and play without fear of hate-fueled attacks.”
One company that did not provide a response in the past but did this time around is Riot Games. Given the bad press it has received recently, responding seems like a smart move to shore up its PR. Here's the statement:
“Direct engagement with players online and IRL is something we encourage all Rioters to do if they feel comfortable doing so. This direct access has been a cornerstone of building trust with players since Riot’s earliest days. In order to equip Rioters to feel comfortable and confident in interacting with the community, for many years we’ve provided training to every Rioter about how to have authentic and appropriate player conversations.
“We evaluate any negative incident on a case-by-case basis, but the one constant in each is that we do everything we can to keep the Rioter(s) involved safe. In cases where a Rioter is found to have behaved inappropriately, the outcomes aren’t determined by community sentiment and instead are evaluated through the lens of our values and principles.”
While it seems this was ripped directly from an employee training manual, it’s still better than not responding at all.
The company that provided a truly exemplary response was CCP Games. Its senior community manager, Paul Elsy, actually called Polygon and gave a phone interview.
During the conversation, Elsy gave a clear step-by-step description of how the company deals with toxic community members and provided several examples.
“If someone in our community is harassing anyone or repeatedly breaking rules, they’re out. We’re not interested in them being part of our community. If we see abuse in-game, we’ll shut them out. And if we see abuse coming to us via social media platforms, we’ll report them and request that the person’s account be shut down.”
“Even today we saw a member of the community dox another member. That’s an automatic permanent ban.”
“When we’ve experienced brigading from places like Reddit, we’ve been able to handle it by going in to those spaces and saying, ‘this isn’t cool, you need to take a step back.’”
One example of just how committed CCP is to its community standards: the company has a direct line to a senior police official, allowing it to alert the authorities should an extremely urgent issue arise.
“We’ve established a direct line, via email, to a senior police official here in Reykjavik, who can contact local police around the world, if we get an alert that indicates certain keywords, like suicide. We’ve used it on a number of occasions and it’s about a 30-minute response time.”
Please take some notes, American and Japanese companies. While it's impossible to completely prevent online abuse, it should go without saying that you need guidelines for dealing with it when it occurs. The fact that so many big-name companies didn't even bother to send a response says a lot about the current state of the gaming industry.