While harassment campaigns of varying intensity are nothing new on Twitch, the recent uptick in hate raids directed at marginalized streamers on the platform stunned even frequent victims.
As streamers boycotted Twitch for a day in protest on Sept. 1, others, like Tanya DePass and the crew of Motherlands, all people of color, begrudgingly went live. DePass said she couldn’t skip streaming because of her contract. She had to go through the extended effort of setting up tools to keep hate raids at bay yet again.
Because of the increased frequency of hate raids, the time it takes DePass to prep for a stream has doubled. It's a problem many marginalized streamers now face, especially lately.
“Streaming on my own, doing anything from painting guys to gaming, almost every time I’ve streamed I’ve gotten either a flat out or attempted hate raid,” DePass said. “But they’ve been shut down because we turned off alerts. We did all this other stuff, but it’s doubled the amount of time it takes me to prep for a stream, which really made me consider: ‘Do I actually feel like streaming today? Why am I going to go on if I know this is my everyday now?'”
We're launching channel-level ban evasion detection and account verification improvements later this year. We’re working hard to launch these tools as soon as possible, and we hope they will have a big impact. Check out more on our existing tools here: https://t.co/Dku6eBhY72
— Twitch (@Twitch) August 11, 2021
Twitch has fairly limited built-in safety features for streamers, so members of the wider streaming community have started stepping up to fill in the gaps, providing bots, tutorials and more as they wait for Twitch’s promised upcoming changes. For now, these community fixes are often the only thing keeping streamers safe as hate raids continue relatively unchecked.
When approached for comment, Twitch told Upcomer that it has been developing channel-level ban evasion detection and account verification improvements “for months,” and stated it cannot confirm a timeline for when these changes will be implemented beyond “later this year.”
For her current setup, DePass has started using the SMASH Security Suite bot, developed by a Twitch moderator who uses the handle Modest Mishmash. The bot “just kind of works in the background,” DePass said.
SMASH Security Suite “provides you with a solid defense against trollz, spam bots, and malicious messages,” according to its website. “The fully autonomous moderator for your channel will detect bots with bad intentions and remove them from your channel.”
In an interview with Launcher, Modest explained why they decided to create SMASH a few months ago.
“It was at that time I decided that manual banning and reporting isn’t going to stop automated attacks,” Modest said. “However many moderators might be in chat, a bot could generate accounts faster than moderators can ban them.”
SMASH isn’t the only community bot to pop up recently, however. Ashe Muller is a streamer and programmer who believes he was one of the first people to release a functioning auto-emote-off system. The bot came out on Sept. 1, purposefully coinciding with the boycott.
“I wanted to show that there are good uses for bots, because some of the terms of the boycott would’ve messed up the entire developer community,” Muller told Upcomer in Twitter direct messages. “Also it was topical, so I wanted to help with a problem while it needed helping.”
Some of the boycott’s demands included limiting the number of bot accounts allowed per email address, which according to Muller would “potentially be really annoying.” Most of Muller’s work is based on bots, including a game he’s creating within his own Twitch channel’s chat window. The demonstration day was a good opportunity to prove the utility of bots, while also providing an actual solution.
“Released mine ~3 hours before Streamlabs,” he said. “And then I replaced emotes only with sub-only.”
Muller’s bot, hateRaidsAreGone, runs in the background via the cloud. If part of a message in chat doesn’t match the standard English keyboard, the bot removes it from the chat feed. In short, this increases the security of the chat by catching the non-English characters, like ñ, that AutoMod alone misses. It runs “24/7 with almost no downtime,” according to Muller.
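The filtering approach Muller describes can be sketched in a few lines of Python. This is an illustrative assumption, not his actual open-source implementation: the function name, the exact allowed character set, and the moderation action taken are all hypothetical.

```python
# Minimal sketch of a "standard English keyboard" chat filter, as described
# for hateRaidsAreGone. The name `should_delete` and the ALLOWED set are
# assumptions for illustration, not Muller's real code.

# Characters typable on a standard US English keyboard.
ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789"
    " !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
)

def should_delete(message: str) -> bool:
    """Flag a chat message if any character falls outside the standard
    English keyboard set (e.g. 'ñ', or invisible zero-width characters
    often used to slip slurs past keyword filters)."""
    return any(ch not in ALLOWED for ch in message)
```

A real moderation bot would hook a check like this into Twitch chat message events and issue a delete or timeout for flagged messages; the tradeoff, of course, is that legitimate messages containing accented characters or emoji get caught too.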
To gain access, streamers simply need to DM Muller on Twitter and ask to be added to the list and make the bot user a mod on their channel. “It does the rest on its own,” Muller said.
LIVE! I MADE AN ENTIRE GAME ENGINE FOR STREAM CHAT TEST IT LIVE https://t.co/8TsC9pfU1I
— Ashe Muller (@MaleVTuber) September 3, 2021
So far, it seems like hateRaidsAreGone is doing a pretty good job, at least according to Muller. “It surprisingly solves a lot of the raid issues from what I’ve seen from screenshots,” he said.
Those looking to replicate or otherwise improve Muller’s bot can do so with ease, since it’s fully open source and runs on Python. There seems to be plenty of room to expand what a bot like this can do, especially considering that Muller made it within 24 hours of having the idea.
Despite his own speed at turning this solution around, Muller doesn’t hold Twitch to the same standard.
“What I’m doing works on a small scale basis, but implementing this wide-scale would be a nightmare,” Muller said. “There are millions of users 24/7. Imagine the amount of messages they’d have to parse through.”
People like Stephneee_Plz, a YouTuber and member of the Rainbow Arcade stream team, need Twitch to act quickly, however. Her experiences with hate raiding and harassment on Twitch ultimately led to her feeling unsafe on the platform, and she’s stopped streaming for now. These concerns have also led to her sticking to her handle Stephneee_Plz, including for the purposes of this article.
“I think it’s hard for me because I see where they’re coming from,” she said. “And that they put out these statements like, ‘We can’t tell you exactly what we’re doing because that is literally a roadmap to tell any malicious people how to circumvent our safety policies.’ But I’m also, as a user, I just feel so let down.”
Stephneee_Plz is the creator of a YouTube tutorial on how to stop hate raids using the Elgato Stream Deck, which came out in July of 2020, almost a year before the most recent surge on Twitch. She decided to make it after talking with other members of the Rainbow Arcade Discord about what the Stream Deck was capable of in terms of buttons and tools. Since the Discord is restricted to team members only, she realized how important it was to make that information more widely available, and created her tutorial with the approval of her teammates.
At the time, Stephneee_Plz wasn’t a YouTube partner yet, but she had started pushing for it. Meanwhile, the video began gaining traction as more folks realized how useful it could be.
“It was kind of like that perfect merging of people sharing the tool because it was a helpful tool, and then also people sharing because they were like, ‘Yeah, I want to hype up my friend,'” she said.
Eventually, during Pride Month in June, the video got enough attention that Stephneee_Plz decided to contact Elgato about it directly.
“It was going around a lot, and I actually reached out to Elgato,” she said. “I was trying to push them to make this video themselves because, you know, I’m a small creator, and last year, my video had 1,000 views maximum… We did have talks about it, but it didn’t end up coming to fruition in any form.”
According to the Elgato spokesperson Upcomer spoke to, the company was unaware of Stephneee_Plz and her request.
“There are hundreds of great uses for Stream Deck and we speak to many different creators about their configurations and uses for Stream Deck, but we’re not able to directly contribute to them all,” the Elgato spokesperson said. “We’ll certainly take a look at shining a light on this use for Stream Deck.”
That said, others in the streaming space have started stepping up to build on what Stephneee_Plz started. Until Twitch comes forward with its solution, her work, and the work of others, will be the only barrier protecting streamers from hate raids and similar automated attacks.