GDC 2022: reducing toxicity in-game
We are presenting on how to reduce toxicity in a data-driven way at GDC 2022.
Oterlu detects and removes unwanted behaviour from chats, forums, and other user-generated text. We have built a sophisticated AI and analytics platform that enables you to make smart decisions about how to tackle abusive behaviour.
Our AI model is easy to customise to fit your community and your policies exactly. You have full control over what behaviour is and is not acceptable on your platform.
Our web tool empowers you to move from a reactive workflow to a proactive one in your day-to-day work. We provide you with analytics distilled into actionable insights.
Drawing on industry expertise from the likes of Google, we provide you with expert guidance on how our service can bring the maximum benefit to your community.
Try out our general English AI model for flagging toxicity, profanity, and NSFW content.
We could go on and on about why you should use our service, but we'd rather let you read a story from one of our customers, Recolor, on how they utilise it.
Join the likes of Fatshark and Hiber and supercharge your Discord moderation with our AI technology.
Identify and take action on behaviours such as harassment and bullying on your platform through the power of cutting-edge AI.
68% of online gamers state that they have experienced severe in-game harassment in chat, and 22% of them stop playing games because of it. (ADL Free to Play 2020)
In this example you will see how quickly harassment can escalate in an online multiplayer match.
We aim to make everyone feel welcome online. To achieve this, we combine years of experience across Community Management, Trust & Safety, and Machine Learning.
BLOG & CASE STUDIES
Schedule a quick introductory call with one of our experts and we'll show you how Oterlu is empowering hundreds of online communities to improve user retention and attract new users to their platforms.