Case study

Star Stable - Getting users to be more positive

Alexander Gee
March 29, 2022

Our primary objective at Oterlu is to empower companies to build communities where everyone can feel welcome. Last year we reached out to Star Stable Entertainment to see how they approached dealing with toxic users in Star Stable Online (SSO). It was refreshing to hear how seriously they take the health of their users, with a zero-tolerance policy toward toxicity, bullying and harassment on their platform. With that policy they have nurtured a healthy, thriving community and, more importantly, an inviting and safe place for their users to grow.

But that wasn’t enough for them; they wanted to do more in this area and see what opportunities our AI solutions could bring to the table. So, together with Star Stable, we devised a pilot test alongside the experts over at the Peppy Agency to see what else we could do to make a difference.

This pilot test consisted of two key features. First, we at Oterlu created a custom AI model for the SSO community, tuned to understand the language used by SSO players. This technology was designed to identify the nuance that easily slips past conventional word filters. Second, we applied a social and emotional learning (SEL) technique to how we responded to the messages flagged by our model. Here we relied on Paulina Olsson at the Peppy Agency, who applied what she described as “non-judging conversations with players about their behavior.”


"Hey! It seems you've sent an unkind message. Stop and think: How would you feel if someone said that to you? Remember, being a good friend can also get you lots of new friends!"


Flagged messages were met with this response, crafted by the Peppy Agency in collaboration with Jane Billet, then Director of Community, Content and Channel at Star Stable, and her team. We wanted a non-judging message that encouraged a change in behavior.
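In practice this amounts to a simple loop: score each incoming chat message, and respond to anything the model flags with the intervention. Below is a minimal sketch of that flow in Python; the toy word-list scorer, the 0.8 threshold and the messaging call are all illustrative stand-ins, not Oterlu’s actual model or Star Stable’s API.

```python
# A minimal sketch of the flow described above: a classifier scores each
# chat message, and a flagged message triggers the SEL intervention.
# Everything here (function names, the toy word-list scorer, the 0.8
# threshold) is an illustrative assumption, not the real implementation.

INTERVENTION = (
    "Hey! It seems you've sent an unkind message. Stop and think: "
    "How would you feel if someone said that to you? Remember, being "
    "a good friend can also get you lots of new friends!"
)

TOXICITY_THRESHOLD = 0.8  # assumed cut-off for flagging a message


def score_toxicity(message: str) -> float:
    """Stand-in scorer; the real model is tuned to SSO slang and nuance."""
    unkind_words = {"stupid", "loser", "ugly"}  # toy list for the demo only
    hits = sum(word in unkind_words for word in message.lower().split())
    return min(1.0, hits / 2)


def send_system_message(player_id: str, text: str) -> None:
    """Stand-in for the game's system-message channel."""
    print(f"[to {player_id}] {text}")


def moderate(player_id: str, message: str) -> None:
    """Score a message and, if it is flagged, serve the intervention."""
    if score_toxicity(message) >= TOXICITY_THRESHOLD:
        send_system_message(player_id, INTERVENTION)


moderate("player-42", "you are so stupid and ugly")  # triggers the reply
```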

To see if this approach was effective we designed an A/B test. Half of the users flagged by our model for writing problematic messages were served the carefully calibrated message above; this was the treatment group. The other half, the control group, received no intervention. The pilot ran for four weeks, and the impact was apparent almost from the outset.
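One common way to implement such a split is to bucket players deterministically, so a flagged player always lands in the same group no matter how often they are flagged. A minimal sketch, assuming a hash-based 50/50 assignment (the salt and the split are hypothetical, not the pilot’s actual configuration):

```python
# A sketch of the A/B split described above, assuming assignment was
# deterministic per player. Hashing the player ID with a fixed salt keeps
# a player in the same group every time they are flagged.

import hashlib


def assign_group(player_id: str, salt: str = "sso-pilot") -> str:
    """Bucket a flagged player into the treatment or control group."""
    digest = hashlib.sha256(f"{salt}:{player_id}".encode()).digest()
    return "treatment" if digest[0] % 2 == 0 else "control"


print(assign_group("player-42"))  # same player, same group, every time
```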

Toxicity in the treatment group was reduced by five percent, while users in the control group sent toxic messages more often over the same period. Five percent may seem modest, but we predict that had the project continued, the rate of toxic messages would have kept dropping.
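For clarity on how a figure like this is typically measured: the toxicity rate is the share of messages flagged as toxic, and the reduction is the relative change between rates. A small sketch with placeholder numbers (the counts are illustrative, not the pilot’s data):

```python
# How a headline number like "reduced by five percent" can be computed:
# compare each group's toxic-message rate, here as a relative change.
# The counts below are made-up placeholders, not the pilot's real data.

def toxicity_rate(toxic_messages: int, total_messages: int) -> float:
    return toxic_messages / total_messages


before = toxicity_rate(500, 10_000)  # placeholder pre-pilot rate: 5.00%
after = toxicity_rate(475, 10_000)   # placeholder treatment rate: 4.75%

relative_change = (after - before) / before
print(f"{relative_change:+.1%}")  # -5.0%, i.e. a five percent reduction
```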

The implications for the games industry are clear. As Alexander Gee, CEO of Oterlu, concludes: “Messaging can be an effective way to reduce toxicity and abusive behavior. With the help of technology, this approach is a powerful way to make your community safer and more welcoming.”

In this collaboration with the experts at Star Stable and the Peppy Agency we answered some important questions for the next generation of games moderation. Using your platform to provide a healthy environment is always priority number one, but it is now possible to interact with toxic users and guide them toward being more positive and constructive members of the community. AI is essential for tackling moderation at this scale, but as this pilot test showed, it can do more than that. By adding social and emotional learning principles, we can achieve more than just stopping bad actors: we can help educate users on how their behavior might be perceived and what negative impact it may have. Moving away from the traditional approach of silencing, kicking or banning users, we look forward to working with our expert partners again and further exploring tactics that advocate for a more positive gaming community globally.


Read the full project overview here.
