{"id":295540,"date":"2026-01-21T09:08:14","date_gmt":"2026-01-21T09:08:14","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/295540\/"},"modified":"2026-01-21T09:08:14","modified_gmt":"2026-01-21T09:08:14","slug":"korea-sets-ai-safety-rules-first-in-world","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/295540\/","title":{"rendered":"Korea sets AI safety rules, first in world"},"content":{"rendered":"<p>       <img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2026\/01\/news-p.v1.20260120.bd5dff6d7d0a43b9bd43a4be60f10f83_P1.jpg\" alt=\"(123rf)\"\/>     (123rf)  <\/p>\n<p>South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance \u2014 or so-called frontier \u2014 AI systems, a move that sets the country apart in the global regulatory landscape.<\/p>\n<p>According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.<\/p>\n<p>\u201cThis is not about boasting that we are the first in the world,\u201d said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry, during a study session with reporters in Seoul on Tuesday. \u201cWe\u2019re approaching this from the most basic level of global consensus.\u201d<\/p>\n<p>The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body \u2014 the Presidential Council on National Artificial Intelligence Strategy \u2014 and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. 
The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.<\/p>\n<p>To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law\u2019s scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve.<\/p>\n<p>The law applies to only three areas: high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI.<\/p>\n<p>High-impact AI refers to fully automated systems deployed in critical sectors such as energy, transportation and finance \u2014 areas where decisions made without human intervention could significantly affect people\u2019s rights or safety. At present, the government says no domestic services fall into this category, though fully autonomous vehicles at Level 4 or higher could meet the criteria in the future.<\/p>\n<p>What distinguishes Korea\u2019s approach from that of the European Union is how it defines &#8220;high-performance AI.&#8221; While the EU focuses on application-specific risk \u2014 targeting AI used in areas like health care, recruitment, and law enforcement \u2014 Korea instead applies technical thresholds. These include indicators such as cumulative training computation, meaning only a very limited set of advanced models would be subject to the safety requirements.<\/p>\n<p>As of now, the government believes no existing AI models, either in Korea or abroad, meet the criteria for regulation under this clause. 
In comparison, the EU is rolling out its own AI regulations gradually, with some measures accompanied by multiyear transition periods.<\/p>\n<p>Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines \u2014 capped at 30 million won ($20,300) \u2014 issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one.<\/p>\n<p>Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.<\/p>\n<p>Kim emphasized that the purpose of the legislation is not to hinder innovation but to offer a basic regulatory foundation that reflects growing public concerns. \u201cThe goal is not to stop AI development through regulation,\u201d he said. \u201cIt\u2019s to ensure that people can use it with a sense of trust.\u201d<\/p>\n<p>He added that the law should be seen as a starting point, not a finished product. \u201cThe legislation didn\u2019t pass because it\u2019s perfect,\u201d Kim said. \u201cIt passed because we needed a foundation to keep the discussion going.\u201d<\/p>\n<p>Recognizing concerns from smaller firms and startups, Kim said the government plans to stay engaged throughout implementation. \u201cWe know smaller companies and ventures have their own worries,\u201d he said. 
\u201cAs issues come up, we\u2019ll work through them together via the support center.\u201d<\/p>\n<p>yeeun@heraldcorp.com<\/p>\n","protected":false},"excerpt":{"rendered":"(123rf) South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally&hellip;\n","protected":false},"author":2,"featured_media":295541,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,289,290,5402,18,19,17,5403,5399,5398,5404,5400,5401,82,5397,5406,5407,5405],"class_list":{"0":"post-295540","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-asia-news","12":"tag-eire","13":"tag-ie","14":"tag-ireland","15":"tag-k-pop","16":"tag-koreaherald","17":"tag-korean-news","18":"tag-kpop","19":"tag-south-korea-news","20":"tag-south-korea-news-in-english","21":"tag-technology","22":"tag-the-korea-herald","23":"tag-5406","24":"tag-5407","25":"tag-5405"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@ie\/115932306000859805","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/295540","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=295540"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/295540\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/295541"}],"wp:attachment":[{"href":"htt
ps:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=295540"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=295540"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=295540"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}