One thing we offer our enterprise customers is the ability to report back when our AI models flag something they shouldn’t have, or miss something they should have flagged. We also decided to offer this with our Discord bot, so a moderator can tell us when the AI models make a mistake. An example of a false positive would be someone writing “you are awesome!” and our AI model flagging it as abusive.
When we launched our Discord bot a couple of weeks ago, we realised that quite a few users were joining our support server from other servers. They weren’t necessarily looking for support; they were curious about the bot and the underlying technology.
Erik, who works on the community side at Oterlu, came up with the idea of running a competition on our Discord server to help users get familiar with the technology. We would award a point each time a user found a false positive: if our bot flagged “you are awesome!” as abusive, you got a point.
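To make the mechanics concrete, here is a minimal sketch of that scoring loop. Everything in it is hypothetical: `classify` is a toy stand-in for our real model (using a small blocklist, not our actual classifier), and `Scoreboard` is an illustrative helper, not our bot’s actual code.

```python
# Hypothetical sketch of the competition's scoring loop.
# `classify` and `Scoreboard` are illustrative names, not our real API.
from collections import defaultdict


def classify(message: str) -> bool:
    """Toy stand-in for the real toxicity model: flags a tiny blocklist."""
    blocklist = {"crocodile", "squirrel"}
    return any(word in message.lower() for word in blocklist)


class Scoreboard:
    def __init__(self) -> None:
        self.points = defaultdict(int)   # user -> competition points
        self.reports = []                # (message, correct_label) pairs

    def report_false_positive(self, user: str, message: str) -> bool:
        """Award a point when the model flagged a message a moderator says is fine."""
        if classify(message):            # model says abusive
            self.points[user] += 1       # moderator disagrees: that's a point
            self.reports.append((message, False))  # keep for later review
            return True
        return False


board = Scoreboard()
board.report_false_positive("erik", "i love crocodiles")  # flagged, so 1 point
```

The collected `reports` list is the real payoff: each confirmed false positive is a labelled example we can feed back into evaluation and retraining.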
Little did we know that this would take off…
Over the span of a week, users tested 7,000+ messages against the bot. We were completely blown away! People came up with creative combinations we hadn’t seen before, such as “i’ll sneeze at you”. One of the more interesting findings was that our bot really did not like crocodiles or squirrels, flagging both as abusive. It was apparently fine with the rest of the animal kingdom.
The lesson we learned is that community sourcing is an excellent way to test your AI model for issues. Humans are inherently creative, and the community surfaced areas of brittleness that our own rigorous testing hadn’t caught. A big thanks to all the participants in our competition!
P.S. We let our bot watch some David Attenborough documentaries and it has now built up an affinity for crocodiles and squirrels.