South Korea’s landmark AI law: what businesses need to know

Updated as of: 22 January 2025

Implementing a risk management system, designating a local representative, and labelling output as AI-generated are some key requirements under South Korea’s recently enacted AI Basic Act.


South Korea has become the second jurisdiction worldwide after the EU to pass a landmark artificial intelligence (AI) law, introducing transparency, safety, and trustworthiness requirements for AI systems. Passed by the National Assembly on 26 December 2024 amidst a month-long political crisis, the law will take effect one year after its promulgation and is expected to enter into force in January 2026 (Korean language only).

The Basic Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act) 2024 (Korean language only) marks the first comprehensive AI legal framework in the Asia-Pacific region.

The legislation comes after lawmakers consolidated 19 previous AI-related bills, following four years of discussions (Korean language only) since the first such bill was proposed in July 2020. Like the EU’s AI Act 2024, the South Korean law takes a risk-based approach to regulating high-impact AI systems. However, the law does not outright ban any specific type of AI – unlike the EU AI Act, which bans AI applications deemed to pose an unacceptable risk, starting 2 February 2025.

South Korea’s new law aims to establish new standards for AI, protect public interest, and strengthen national competitiveness. The law outlines plans to develop a national governance system for AI, systematically foster the AI industry, and prevent risks that may arise from AI (Korean language only).

Lexology PRO explores the key elements of the law and key implications for businesses.

What are the key features of the AI Basic Act?

Scope of application

The law imposes common obligations on AI developers and users. AI developers are responsible for designing and providing AI systems, while AI users are those who offer AI products or services to the public. The AI Basic Act defines these two groups collectively as “AI operators”. This differs from the approach taken in the EU’s AI Act, which assigns distinct obligations to AI providers, deployers, importers, and distributors.

The AI Basic Act also has extraterritorial reach, applying to AI-related activities conducted abroad if they impact the domestic Korean market or users. However, the law does not apply to AI systems developed and used for national defence or security.

The Basic Act regulates the following types of AI:

  • high-impact AI – AI systems that pose significant risks or impacts on human life, physical safety, or fundamental rights; and
  • generative AI (GenAI) – AI systems that imitate the structure and characteristics of input data to generate new and original outputs, such as text, audio, and images.

Local representative designation

Foreign businesses offering AI products or services that exceed a certain threshold of users or sales must designate a local representative in South Korea. The local representative’s responsibilities include:

  • submitting the results of safety measure implementation;
  • requesting confirmation from the ministry on whether an AI system is classified as high impact; and
  • supporting the implementation of safety and reliability measures.

Establishment of a national governance system for AI

The law provides the legal basis for the establishment and operation of the National AI Committee (NAIC), launched in September 2024 (Korean language only). The NAIC will oversee the country’s AI policy and is chaired by the President, with members drawn from the private sector.

Additionally, the legislation establishes legal foundations for other existing key institutions to ensure the systematic development and execution of AI-related policies and initiatives. These include the AI Policy Centre, the AI Safety Research Institute, and the Korea AI Promotion Association (all Korean language only).

Transparency requirement for high-impact AI and GenAI

The AI Basic Act imposes specific transparency obligations for businesses offering high-impact AI or GenAI, which must provide users with clear and accessible information that their product or service is AI-powered. For GenAI systems, companies should also clearly label that the generated output is artificially created, especially those that imitate real-world sounds, images, or videos.

Safety requirements for large-scale AI systems

The law mandates specific obligations for businesses offering AI services or products that exceed a certain computation threshold:

  • identify, assess, and mitigate potential risks at each stage of the AI system’s lifecycle;
  • implement a risk management system for monitoring and responding to AI safety incidents; and
  • submit the above results to the Ministry of Science and ICT (MSIT).

Trustworthiness requirement for high-impact AI

The AI Basic Act outlines additional obligations for businesses offering AI services or products categorised as high impact, including:

  • developing and implementing a risk management plan;
  • establishing and executing measures to explain the AI system;
  • developing and implementing a user protection plan;
  • ensuring human supervision and oversight;
  • maintaining documentation on safety and reliability measures; and
  • performing additional tasks determined by the NAIC.  

Fact-finding investigations and sanctions for non-compliance

The legislation grants MSIT the power to conduct a fact-finding investigation on suspected violations related to the transparency, safety, and trustworthiness requirements for AI systems.

Businesses found to be non-compliant can face administrative penalties of up to ₩30 million (US$ 20,568). Violations include:

  • failing to provide prior notice that a service is AI-generated;
  • lacking a local representative in South Korea; or
  • failing to comply with corrective orders issued following an investigation.

Key implications for businesses 

Conduct a preliminary assessment of the impact of products and services

Businesses developing or offering AI services or products should conduct a preliminary assessment to determine whether their systems fall within the regulated categories. For instance, experts anticipate the new law could affect sectors such as automotive, healthcare, and financial services. Companies should proactively follow developments in upcoming laws and policies to ensure compliance.

Ensure AI systems satisfy outlined principles

Businesses developing or offering AI systems in Korea should prioritise implementing measures outlined in the new legislation to ensure readiness by the enforcement date in January 2026. This includes adopting processes and protocols to ensure their products and services satisfy the AI Basic Act’s requirements, such as safety, transparency, and trustworthiness. Adopting strategies developed for the EU or other regulated markets could be an efficient way to meet these requirements.

Appoint local representatives

Businesses without a registered business office in South Korea that exceed certain user or sales thresholds must designate a domestic representative within the country. These domestic representatives play an essential role in helping international companies comply with the law, including implementing safety measures and submitting the results to the MSIT.

Apply for government support or certification processes

Companies aiming to benefit from the government’s preferential procurement policies for high-impact AI systems should prepare for the certification and assessment procedures. Under the AI Basic Act, the MSIT is required to publish a master plan for AI every three years (Korean language only). The law also outlines government support for research and development, standardisation, the establishment of learning data measures, and support for the introduction and utilisation of AI.

See our new interactive Compliance Calendar for key deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations. 

Follow Lexology’s artificial intelligence hub to stay up to date with key developments and in-depth articles.  

Track the latest artificial intelligence updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool.