A growing number of South Korean information technology (IT) companies employ artificial intelligence (AI) software to filter and block abusive comments, improving user experience in online communities.
South Korea’s leading internet portal, Naver, introduced its AI-powered comment filter, dubbed “Cleanbot,” to its webtoon, online news, and sports services.
In March, the company announced that it would begin disclosing the comment activity history and nicknames of users who write malicious comments. The system can also flag such users and automatically block comments that contain swear words.
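Naver has not published Cleanbot’s internals, but the basic behavior described above — blocking comments that contain listed swear words — can be sketched as follows. The blocklist entries and sample comments here are hypothetical placeholders, not Naver’s actual data.

```python
# Illustrative sketch only; Cleanbot's real design is not public.
# BLOCKLIST is a hypothetical placeholder wordlist.
BLOCKLIST = {"badword1", "badword2"}

def is_blocked(comment: str) -> bool:
    """Return True if the comment contains any blocklisted word."""
    words = comment.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

# Comments that trip the filter are withheld from display.
comments = ["nice episode!", "you badword1"]
visible = [c for c in comments if not is_blocked(c)]
```

A production system would pair such a wordlist with a trained model, since plain keyword matching is easy to evade.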
According to a Naver official, the company plans to build a more accurate filter using more diverse models and larger datasets. The company hopes to further curb malicious and abusive behavior in online communities.
In 2017, the country’s leading game company Nexon launched an in-house AI research and development group named “Intelligence Labs.” The team developed a machine learning-based text detection technology that filters abusive language and unauthorized advertisements.
Nexon said that the program can also recognize newly coined offensive words disguised with symbols. In its early stages, the software failed to identify such modified swear words, but the AI-based program can now detect altered forms of abusive language, blocking both profanity and illegal advertisements.
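Nexon has not disclosed how its detector handles symbol-disguised words, but one common approach is to normalize symbol substitutions back to letters before matching. The symbol map and wordlist below are hypothetical stand-ins, not Nexon’s actual rules.

```python
import re

# Illustrative only: SYMBOL_MAP and ABUSIVE are hypothetical examples
# of a normalization table and a banned-word list.
SYMBOL_MAP = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "i", "!": "i"})
ABUSIVE = {"badword"}

def normalize(text: str) -> str:
    """Map common symbol substitutions back to letters, then strip
    separators so disguises like 'b.a.d' collapse to 'bad'."""
    text = text.lower().translate(SYMBOL_MAP)
    return re.sub(r"[^a-z0-9]", "", text)

def contains_disguised(text: str) -> bool:
    """Check the normalized text against the banned-word list."""
    norm = normalize(text)
    return any(word in norm for word in ABUSIVE)
```

Under this sketch, `"b@dw0rd"` normalizes to `"badword"` and is caught, while a plain substring check on the raw text would miss it.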
Nexon’s Intelligence Labs also applied the text detection technology to in-game chat, curbing abusive conversations between players.
Watcha, a personalized content recommendation service, introduced an AI filter that monitors comments and reviews posted by users on content offered through its Watcha Play platform. If the machine-learning engine deems a comment abusive, the software automatically hides it.
Watcha’s AI-based system can also detect abusive and inappropriate language, expressions, and even special symbols.
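Watcha has not published its engine, but the hide-rather-than-delete behavior described above can be sketched as a score-and-threshold decision. The token weights below, including the special-symbol entry, are hypothetical placeholders standing in for a trained classifier.

```python
# Illustrative sketch; Watcha's actual model is not public.
# WEIGHTS mimics a trained classifier's learned token scores,
# including a weight for a special symbol.
WEIGHTS = {"terrible": 0.2, "badword": 0.9, "♨": 0.7}
THRESHOLD = 0.8

def abuse_score(comment: str) -> float:
    """Sum hypothetical per-token weights, capped at 1.0.
    A real system would run a trained ML model instead."""
    return min(1.0, sum(WEIGHTS.get(tok, 0.0) for tok in comment.lower().split()))

def display(comments):
    """Hide (rather than delete) comments the engine deems abusive."""
    return [c if abuse_score(c) < THRESHOLD else "[hidden by filter]"
            for c in comments]
```

Hiding instead of deleting preserves the comment for review or appeal, which matches how such moderation filters are typically described.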