Call of Duty AI introduces voice chat monitoring to crack down on hate speech


Call of Duty players will have their in-game voice chats monitored in real-time by artificial intelligence as part of a larger crackdown on hate speech and other harmful behavior.

Activision announced in a blog post Wednesday that it has partnered with tech firm Modulate to roll out ToxMod, an AI-powered voice chat moderation tool that “identifies and enforces in real-time against toxic speech — including hate speech, discriminatory language, harassment and more”.

An initial beta rollout of the new system began Wednesday for North American players of the popular online shooter series, within Call of Duty: Modern Warfare II and Call of Duty: Warzone.

The AI tool will be released worldwide, excluding Asia, on November 10, coinciding with the launch of Call of Duty: Modern Warfare III.

Support will begin in English, with additional languages to follow at a later date.

“This new development will boost the ongoing moderation system led by the Call of Duty anti-toxicity team, with text-based filtering for in-game content (chat and usernames) in 14 languages, as well as a robust in-game player reporting system,” Activision said.

“Since the launch of Modern Warfare II, Call of Duty’s existing anti-toxicity moderation has banned voice and/or text chat on more than one million accounts found to be in violation of the Call of Duty Code of Conduct. Real-time rejection of harmful language is well established with constantly updated text and username filtering technology.”

Activision said the data showed that 20 percent of players “did not reoffend after receiving the first warning.”

“Repeat offenders are subject to account penalties, including but not limited to feature restrictions (such as voice and text chat bans) and temporary account restrictions,” it said.

“This positive impact aligns with our strategy of providing clear feedback to players on their behavior. The team at Call of Duty is dedicated to combating toxicity in our games. Key to this ongoing commitment is leveraging new technologies, developing critical partnerships and evolving our practices. As always, we look forward to working with our community to make Call of Duty fair and fun for everyone.”

In a question-and-answer segment, Activision stated that in-game voice chats were monitored and recorded “for the express purpose of moderation”.

“Call of Duty’s voice chat moderation system is focused on detecting harm in voice chat versus specific keywords,” it said.

“Players who do not wish to have their voice moderated can disable in-game voice chat in the settings menu.”

The AI tool detects and flags toxic language in real-time, “classified by its type of behavior and rated level of severity based on an evolving model”, though Activision remains responsible for enforcement decisions.

“Detected violations of the Code of Conduct may require additional reviews of relevant recordings to identify context before enforcement is determined,” it said.

“As a result, the actions taken will not be immediate. As the system grows, our processes and response times will evolve.”
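Neither Activision nor Modulate has published ToxMod’s internals, but the flow described above (an AI flag is classified by behavior type, rated for severity, and queued for human review rather than triggering instant punishment) can be sketched in broad strokes. The Python below is a hypothetical illustration only: every name in it (Flag, Category, handle_flag, the severity threshold) is an invented stand-in, not Activision’s or Modulate’s code.

```python
# Hypothetical sketch of the flag -> classify -> human-review flow.
# None of these names come from Activision or Modulate; ToxMod's
# actual internals are not public.
from dataclasses import dataclass
from enum import Enum
from queue import Queue

class Category(Enum):
    HATE_SPEECH = "hate_speech"
    HARASSMENT = "harassment"
    TRASH_TALK = "trash_talk"  # permitted under the Code of Conduct

@dataclass
class Flag:
    clip_id: str        # reference to the recorded voice clip
    category: Category  # type of behavior the model detected
    severity: float     # model-rated severity, 0.0 to 1.0

review_queue: Queue = Queue()  # human moderators drain this queue

def handle_flag(flag: Flag, threshold: float = 0.5) -> None:
    """Route an AI flag: benign banter is dropped; everything else is
    queued for human review, so enforcement is never instantaneous."""
    if flag.category is Category.TRASH_TALK or flag.severity < threshold:
        return  # no action: friendly banter is permitted
    review_queue.put(flag)  # a person reviews context, then decides

handle_flag(Flag(clip_id="clip-123", category=Category.HATE_SPEECH, severity=0.9))
```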

The gaming giant clarified that “trash talk” was not banned.

“The system helps enforce the existing Code of Conduct, which allows for ‘trash-talk’ and friendly banter,” it said. “Hate speech, discrimination, sexism and other forms of harmful language will not be tolerated as outlined in the Code of Conduct.”

Activision’s chief technology officer Michael Vance said in a statement that “there is no place for disruptive behavior or harassment in games”. “Dealing with disruptive voice chat is an extraordinary challenge, especially in gaming,” he said.

“With this collaboration, we are now bringing Modulate’s cutting-edge machine learning technology, which can scale in real-time, to a global level of enforcement. This is an important step in creating and maintaining a fun, fair and welcoming experience for all players.”

Modulate chief executive Mike Pappas said the company is “excited to work with Activision to advance the cutting edge of trust and safety”.

“The size and scale of Call of Duty makes this a huge step forward in supporting its player community, and further reinforces Activision’s ongoing commitment to leading this effort.”

According to Modulate, ToxMod doesn’t just look for flagged words but “analyzes the tone, context and perceived intent of flagged conversations using its advanced machine learning processes”.

“ToxMod’s powerful toxicity analysis assesses the tone, timbre, emotion and context of a conversation to determine the type and severity of toxic behavior,” it says on its website.

“ToxMod is the only voice moderation tool built on advanced machine learning models that goes beyond keyword matching to provide a true understanding of each toxicity event. ToxMod’s machine learning technology can understand emotions and subtle cues to help distinguish between friendly banter and genuinely harmful behavior.”
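Modulate has not published how this analysis works, but the contrast it draws with keyword matching can be illustrated with a toy sketch. Everything below is assumed: FLAGGED_TERMS, the listener_distress and speaker_anger signals, and the two scoring functions are invented stand-ins; a real system would derive such signals from upstream audio models rather than take them as plain inputs.

```python
# Toy contrast between keyword matching and context-weighted scoring.
# Hypothetical illustration only; this is not ToxMod's method or code.

FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder terms, not a real list

def keyword_score(transcript: str) -> float:
    """Naive baseline: any flagged term yields maximum severity."""
    words = set(transcript.lower().split())
    return 1.0 if words & FLAGGED_TERMS else 0.0

def context_score(transcript: str,
                  listener_distress: float,
                  speaker_anger: float) -> float:
    """Context-aware variant: the same term scores higher when other
    signals (listener reactions, speaker tone) suggest real harm.
    Both signal inputs are assumed to range from 0.0 to 1.0."""
    base = keyword_score(transcript)
    if base == 0.0:
        return 0.0
    # Scale severity by contextual evidence rather than the word alone.
    return base * max(listener_distress, speaker_anger)

# A flagged term alongside distressed listeners scores high...
print(context_score("that was slur_a", listener_distress=0.9, speaker_anger=0.2))
# ...while the same term amid relaxed, friendly signals scores low.
print(context_score("that was slur_a", listener_distress=0.1, speaker_anger=0.1))
```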

ToxMod’s ethics policy states that it may also consider a speaker’s race, gender identity, sexuality or other demographics to determine whether certain behavior is acceptable.

“We sometimes consider a person’s demographics when determining the severity of harm,” it said.

“We … recognize that some behaviors may be fundamentally different depending on the demographics of the participants.”

For example, “while the n-word is generally considered a vile slur, many players who identify as black or brown have reclaimed it and used it positively in their communities”.

“Modulate does not detect or recognize the ethnicity of individual speakers, although it will listen to conversational cues to determine how others in the conversation are reacting to the use of such terms,” it said.

“If someone says the n-word and clearly offends others in the chat, it will be rated more severely than reclaimed usage that is incorporated naturally into the conversation.”

frank.chung@news.com.au
