South Korea’s chief intelligence agency, the National Intelligence Service (NIS), is preparing guidelines for the use of AI chatbots such as OpenAI’s ChatGPT. The agency recently announced that concerns about the safety of AI-related technology have been growing throughout the country.
Advances in AI are transforming the technology landscape, bringing with them significant risks of data breaches, leaks, and the spread of convincing fake news. According to the NIS, businesses need specific security standards and regulations to protect themselves.
Major corporations, including Samsung Electronics, have already restricted their employees’ use of generative AI for daily work. Domestic companies in South Korea are strengthening their internal security policies and seeking in-house alternatives to ChatGPT.
The South Korean government aims to utilize ChatGPT in the administrative sector. To ensure secure usage, the NIS has been collaborating with the leading cybersecurity research institute, the National Security Technology Research Institute (NSTI), and other experts since April to develop security guidelines.
The guidelines are expected to be released by the end of this month. They will provide an overview of generative AI technology, address security threats, and outline safe practices for utilizing AI. They will also encompass measures to safeguard institutional information.
Kwon Tae-kyoung, a professor at Yonsei University’s research institute in Seoul, emphasizes the importance of using AI technology in line with prescribed security policies, arguing that building robust security measures is as crucial as developing the technology itself.
Drafting a code of conduct for ChatGPT
Recognizing security concerns and the widespread use of OpenAI’s ChatGPT, countries such as the United States and the European Union have already begun establishing codes of conduct for AI.
Italy was the first country to ban the use of OpenAI’s ChatGPT. However, after the AI company complied with Italian regulations and provided assurances about safety, the Italian government restored access to the service.
The European Union is laying the groundwork for Europe’s first AI Act, which is currently moving through the approval process. Among other things, the new legislation aims to ensure that content provided by AI chatbots complies with copyright law.
Due to political tensions and the threat of misinformation, countries such as Russia and China have blocked access to ChatGPT. In April, Russia’s Sberbank released GigaChat, an AI chatbot similar to ChatGPT, while China has drafted rules for companies developing its own indigenous generative AI products.