FACEIT implement Minerva, an AI to punish toxicity in CSGO

It’s no secret that toxicity in competitive games is, unfortunately, quite common. Whether it’s CSGO, Call of Duty, FIFA, or Dota 2, players eventually run into a cheater, a griefer, or some other form of toxic in-game behavior. Now one of the biggest CSGO competitive platforms, FACEIT, has announced that it is implementing Minerva, an AI that monitors for toxic in-game behavior and issues warnings and appropriate punishments. Minerva is being developed in close collaboration with Google and Jigsaw.

Minerva to issue adequate punishment to perpetrators

The AI is already up and running on FACEIT in version 0.1. For now, the developers have Minerva focus purely on chat messages, both in and out of game. Upon detecting toxic chat behavior, be it racism, sexism, or other hate speech, the AI issues a warning or a cooldown to the offender. The punishments then grow more severe if the user does not correct their behavior.
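The escalating-sanction logic the article describes can be sketched in a few lines. This is purely an illustrative model; the tier names, durations, and strike counts below are assumptions, not FACEIT's actual policy.

```python
# Hypothetical escalating-penalty ladder, in the spirit of what the
# article describes. Tiers and durations are invented for illustration.
PENALTY_LADDER = [
    "warning",        # first offense: a warning message after the match
    "24h cooldown",   # repeat offense: temporary matchmaking cooldown
    "72h cooldown",   # further offenses escalate the cooldown
    "extended ban",   # persistent offenders
]

def next_penalty(prior_offenses: int) -> str:
    """Return the sanction for a player with `prior_offenses` prior strikes."""
    tier = min(prior_offenses, len(PENALTY_LADDER) - 1)
    return PENALTY_LADDER[tier]
```

The key design choice is that the ladder caps at its last tier, so a player who keeps offending stays at the harshest sanction rather than overflowing the list.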

An example of the message, issued by Minerva, that a toxic player may receive following a match.

While Minerva is not (yet) trained to detect cheaters, the AI’s toxic chat analysis is the first step in improving FACEIT’s CSGO competitive experience.

Statistics and evidence

According to the official FACEIT blog post, Minerva has been running on a trial basis since August. Since the AI's implementation on FACEIT, toxic messages have reportedly dropped “from 2,280,769 in August to 1,821,723 in September, marking a 20.13% decrease.”
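As a quick sanity check, the reported percentage follows directly from the two monthly counts quoted above:

```python
# Verify the 20.13% figure from FACEIT's reported monthly message counts.
august, september = 2_280_769, 1_821_723
decrease_pct = (august - september) / august * 100
print(f"{decrease_pct:.2f}% decrease")
```

Rounded to two decimal places, this reproduces the 20.13% decrease cited in the blog post.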

The numbers do look impressive. A 20% decrease in toxic messages on a CSGO platform in just a month is immense, given the toxic nature of the community.

However, it must be said that improving competitive matchmaking will take a lot more than issuing warnings and bans for toxicity. It is a step in a (hopefully) good direction, but how effective can Minerva become at monitoring toxic behavior? Moreover, can it be taught, or even teach itself, to detect cheaters? Perhaps FACEIT is developing a better version of Valve’s VACnet? It’s hard to judge at this point in time.

What do you think of FACEIT’s AI? Let us know in the comments below, and as always, follow us at Daily Esports for all your latest news in CSGO as well as other major esports.
