AI Moderation for online comments
“Online comment moderation powered by world-class AI technology. Protecting your web community from harassment, profanity and trolls.”
AI Moderation demonstrates human-level performance. In our tests it classified 93% of comments correctly, whilst journalists scored 92% on the same data set.
AI Moderation provides a real-time moderation service, removing inappropriate comments instantly. It runs 24/7 and integrates easily with your website or blog.
AI Moderation is cheaper and more reliable than a human-based moderation service. Because it scales automatically, its advantage is even greater during hours of peak web traffic.
Newspapers struggle with an ever-growing volume of public comments. The New York Times receives, on average, 12,000 comments a day. Many of these are inappropriate, offensive or simply illegal. The role of moderation is to prevent toxic comments from causing brand or reputational damage without discouraging your public audience.
Existing solutions like keyword lists tend to have low accuracy, whilst manual moderation efforts can be expensive and difficult to scale. Keyword-list approaches can cause so many false alarms that companies are forced to disable automated comment moderation altogether, forgoing community engagement and the associated advertising revenue. In some organisations, journalists and editors are left to monitor comments in their own time on a best-effort basis, increasing the risk of unreviewed comments causing reputational or brand damage.
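The false-alarm problem with keyword lists is easy to reproduce. A minimal sketch (the blocked-word list and sample comments below are invented purely for illustration):

```python
# Naive keyword filter: flags any comment containing a blocked substring.
# This is the approach AI Moderation replaces, not part of the product.
BLOCKED = ["ass", "hell"]

def keyword_flag(comment: str) -> bool:
    """Return True if the comment contains any blocked substring."""
    text = comment.lower()
    return any(word in text for word in BLOCKED)

print(keyword_flag("You're an ass"))      # True  - genuine abuse is caught
print(keyword_flag("Great assessment!"))  # True  - false alarm ("ass" in "assessment")
print(keyword_flag("Hello everyone"))     # True  - false alarm ("hell" in "Hello")
print(keyword_flag("Nice article"))       # False
```

Substring matching flags “assessment” and “Hello” alongside genuine abuse: exactly the kind of false alarm that forces operators to loosen or disable the filter.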
AI Moderation by Three Springs Technology protects your online community from off-topic comments, harassment, spam, trolls, illegal comments, profanity and abusive language.
- It uses state-of-the-art natural language processing (NLP) and machine learning algorithms to detect and block inappropriate comments.
- It can be trained on your existing data, learning the context and moderation policies specific to your company, language and use case.
- It knows its limits. Every comment that passes through the algorithm receives a confidence score. When confidence is low, the algorithm recommends that a human review the decision.
- It offers you flexibility to match your moderation needs: it can be set anywhere from very tolerant to very conservative, or tailored to a specific set of preferences or rules.
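As a sketch of how confidence scores and a tolerance setting might interact, consider the routing logic below. The function name, thresholds and score semantics are illustrative assumptions, not the product's actual API:

```python
# Illustrative routing sketch (hypothetical names and thresholds):
# the model returns a toxicity probability plus a confidence in its
# own decision; two thresholds encode the moderation policy.
def route(toxicity: float, confidence: float,
          block_above: float = 0.8,      # raise for a more tolerant policy
          min_confidence: float = 0.7) -> str:
    """Return one of 'publish', 'block' or 'human_review'."""
    if confidence < min_confidence:
        return "human_review"            # the model knows its limits
    return "block" if toxicity >= block_above else "publish"

print(route(0.95, 0.9))  # block
print(route(0.10, 0.9))  # publish
print(route(0.50, 0.4))  # human_review
```

Adjusting `block_above` moves the policy between tolerant and conservative, while `min_confidence` controls how often a human is asked to review the call.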
AI Moderation is for organisations that:
- deal with high volumes of unpredictable public comments
- benefit from community engagement
- need to prevent toxic comments from being published
See for yourself
See the results for yourself in our demo: demo.aimoderation.com
We can also demonstrate it on your own data, or provide more technical detail on request.
We can also help you with custom integration into your workflow.
Contact us on: email@example.com