South Korea has become the world’s second jurisdiction to introduce artificial intelligence-governing legislation, but the new law appears to be a balancing act between promoting AI and keeping it at bay, writes Brian Yap

As South Korea propels itself into a new era of regulated AI development, senior legal experts caution that it is critical for law firms to carefully select AI services and their providers. Against a backdrop of political uncertainty following then South Korean president Yoon Suk Yeol’s martial law declaration and his subsequent impeachment, the National Assembly passed the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act) on 26 December 2024.

Set to take effect in January 2026, the act – which consolidates 19 separate AI bills – marks the world’s second piece of comprehensive AI regulatory legislation, following the EU’s Artificial Intelligence Act of May 2024.

Hwan Kyoung Ko, a partner in the technology, media and telecoms group at Lee & Ko in Seoul, and Kyoungjin Choi, a professor of law and director of the Centre for AI Data and Policy at Gachon University in Seongnam, were among several legal experts at a public hearing into the AI Basic Act held at the National Assembly in September last year.

Ko tells Asia Business Law Journal that although law firms are unlikely to be subject to the AI Basic Act simply for utilising AI in providing legal advice or services to clients, due diligence remains critical when collaborating with AI service providers in order to protect client information.

“AI services are expected to offer significant efficiency improvements to professional users, including law firms, sole practitioners and in-house counsel,” says Ko. “However, certain challenges persist, such as the hallucination issue in generative AI, limitations due to insufficient legal training data, and concerns about the reliability of such services.”


Choi, who is also president of the Korea Association for Artificial Intelligence and Law, warns that compliance with the new law “becomes an issue” for law firms and lawyers if their clients or affiliated companies develop or provide high-impact AI-related products or services.

“Therefore, it has become important for law firms and lawyers to understand and adhere to the various obligations imposed by the AI Basic Act,” says Choi.

In a similar fashion to the EU AI Act, South Korea’s AI Basic Act divides AI systems into high-impact and generative AI categories.

A high-impact AI system is one that may have a significant impact on, or pose risks to, the lives, physical safety and fundamental rights of individuals when used in any one of 10 specific areas. These include the supply of energy and the development of medical devices.

A generative AI system is defined as one that mimics input data to generate outputs such as text, sound, images and other creative content.

Under the act, businesses that develop or use AI and provide related products and services to clients must give users advance notice if such products and services are powered by high-impact or generative AI. They must ensure the safety and reliability of their AI systems, and create risk management plans, impact assessments and user protection strategies.

However, there is presently no specific interpretation or government guidance on the definition of high-impact AI. This is to be clarified by presidential decree and related subordinate legislation, expected to be released before the enforcement of the act.

The Ministry of Science and ICT website states that subordinate laws are set to be completed within the first half of 2025. But many questions remain for companies, particularly those in the technology sector.

“Particularly, high-impact AI operators are required to take specific measures to ensure safety and reliability,” says Kum Sun Kim, a senior corporate counsel at Microsoft Korea in Seoul. “However, since the term ‘AI business operator’ encompasses both AI developers and AI-using business operators, it is unclear which specific operators will bear these obligations.”


Kim argues that caution is warranted, as violations of such provisions may result in fact-finding investigations, corrective orders and fines from the Ministry of Science and ICT. She says that copyright issues related to AI model training, which emerged during discussions on the introduction of an AI act, also remain unresolved.

Kim points to a need for in-house counsel to thoroughly analyse whether this law applies to their companies, identifying the AI systems that their companies develop or use, and assessing the associated risks. If it is determined that the law does apply, in-house counsel must establish comprehensive regulatory compliance plans and take necessary actions to ensure adherence to the new regulations, she says.

Finding balance

The AI Basic Act has its roots in AI’s rapid advancement in recent years. The speed of development has prompted extensive discussion about preparing for potential AI-associated risks.

Ko, of Lee & Ko, points to growing concerns over issues like the difficulty of regulating AI under existing laws, and excessive reliance on AI without human intervention in high-risk areas.

The previous absence of a clear regulatory framework for AI resulted in reliance on the application of individual laws. This raised concerns about reduced legal stability and predictability, and the possibility that such uncertainty would deter proactive business investment in AI-related infrastructure.

The AI Basic Act primarily aims to prevent excessive regulation of AI while incorporating transparency regulations to mitigate the misuse of AI technologies, such as with deepfakes, and establishing self-regulating measures to ensure AI safety and reliability.

Gachon University’s Choi told ABLJ that South Korea also considered both the EU’s AI Act and the US self-regulatory approach as key models when drafting its AI Basic Act.

The US does not have any comprehensive federal laws regulating AI, or specifically banning or restricting its use. Instead, the country governs AI through existing federal laws and guidelines, while relying on federal and state governments, industries and courts to regulate it.

But Choi says that South Korea, lacking the same level of AI competitiveness as the US and having a different legal system, found it challenging to directly adopt the US approach.

While South Korea’s legal system is similar to the EU’s civil law system, there were concerns that the EU’s stringent regulations could hinder the development of South Korea’s AI industry. This resulted in notable differences in regulatory frameworks, sanction levels and specific provisions between the EU act and South Korea’s AI Basic Act.

“South Korea opted to introduce a legal regulatory framework that is less stringent than the EU’s while promoting autonomous AI development akin to the US approach,” says Choi.

The EU AI Act adopts a risk-based approach to artificial intelligence, categorising AI into prohibited AI, high-risk AI, and specific types of AI, and imposing separate regulations for general-purpose AI models.

It also imposes comprehensive and differentiated obligations on providers, deployers, importers and distributors of AI. Providing prohibited AI services within the EU can result in fines of up to EUR35 million (USD36.2 million), or 7% of global annual turnover, whichever is higher.

In contrast, South Korea’s AI Basic Act does not include provisions explicitly prohibiting certain types of AI. Instead, it focuses on ensuring the safety and reliability of high-impact (rather than high-risk) AI, imposing transparency obligations on generative AI, and regulating the use of high-impact AI. Violations of these provisions are subject to penalties such as fines of up to KRW30 million (USD20,500).

Ko explains that the decision not to adopt the EU’s comprehensive regulatory framework and stringent sanctions also stems from differences in societal acceptance of AI technology. “It also reflects South Korea’s optimistic outlook on AI technology and industry development, and its national strategies and policies tailored to the country’s unique AI ecosystem,” he says.

Opportunity knocks

South Korean law firms are already being approached by domestic and international companies from different industries seeking legal advice about complying with the act.

Tae Uk Kang, a partner in the intellectual property practice group and data protection team at Bae Kim & Lee (BKL) in Seoul, says most inquiries so far focus on preparations tailored to each company’s specific circumstances – such as regulatory compliance measures and legal risk management. “Additionally, there have been many detailed questions about the specific meaning and implications of individual provisions within the AI Basic Act, as well as their practical applicability,” says Kang.


Keun Woo Lee, a partner and deputy head of the new project group at Yoon & Yang in Seoul, specialises in intellectual property and new technology including AI. He has had clients knocking on his door since the passage of the act, including those from sectors such as semiconductors, secondary batteries, cloud services and gaming.

“Some clients wish to analyse and respond to how this law will impact cloud business,” says Lee. “Similarly, gaming companies want to analyse and respond to how it affects current game development.”

Lee says clients are asking for reviews of, and comprehensive responses to, matters such as how AI-driven legal technology impacts the work of in-house counsel, strategies for companies to prepare for advancements in AI technology, and the ethical and legal issues companies need to be cautious of when adopting AI.