6.4 Global and Regulatory Trends


AI policy is rapidly evolving worldwide. In 2024–25 a wave of national AI strategies and international frameworks has emerged to steer AI development and address its societal impacts. Governments are seeking both to capture AI’s economic benefits and to build public trust. Below we compare key country strategies and multilateral ethics efforts, then examine recent regulatory trends and standardization work on AI governance.

6.4.1 International AI Strategies and Collaborations

  • EU (European Union): The EU has pursued an “excellence and trust” strategy. It aims to be an AI leader by investing heavily in R&D (e.g. €1 billion per year from EU funds) and building data/computing infrastructure. At the same time, Europe has adopted the AI Act (the world’s first comprehensive AI law) to ensure AI systems respect fundamental rights. In 2024–25 the Commission launched an “AI Continent” action plan and innovation package to boost startups, develop so-called AI “Gigafactories”, and adopt generative AI (GenAI) responsibly. These efforts align strategy with regulation: the EU emphasizes trustworthy AI, e.g. requiring high-risk systems to meet strict transparency, human oversight and robustness standards before deployment.

  • United States: The U.S. approach is more decentralized. There is no single national AI law yet, but the U.S. government has issued guidance and executive actions. In 2022 the White House OSTP published a voluntary Blueprint for an AI Bill of Rights, listing five principles (safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives). In October 2023 President Biden signed an Executive Order on “Safe, Secure, and Trustworthy AI” to promote U.S. leadership and require safeguards for government AI use. The U.S. also relies on existing agencies (FTC, EEOC, etc.) to apply laws on discrimination and privacy to AI. In practice, U.S. policy prizes innovation; for example the National AI Initiative Act (2020) funds research and the NIST AI Risk Management Framework (2023) offers voluntary best practices for fairness, transparency and security.

  • China: Beijing’s AI strategy is state-driven. It has pledged to become the world leader in AI by 2030. Massive public and private investment (e.g. an $8.2 billion AI fund for startups) supports this goal. Unlike the West, China ties AI to its industrial policies and national security. China’s strategy focuses on key sectors (smart cities, robotics, fintech, etc.) and on developing homegrown models (e.g. Baidu’s ERNIE, the Beijing Academy of AI’s WuDao) so it can compete with U.S. firms. New developments in 2024–25 include regulations on AI content: China’s first Interim Measures for generative AI services took effect in August 2023, and in 2025 it is introducing mandatory labeling of AI-generated content. The government has also released cybersecurity standards for GenAI data and training (to be enforced in late 2025). These moves reflect China’s dual aim of fostering rapid AI innovation while enforcing control (e.g. “socialist values”) and preventing misuse.

  • South Korea: Seoul has also moved quickly. In December 2024 Korea passed a comprehensive AI Basic Act (effective Jan 2026), becoming the second jurisdiction after the EU with an omnibus AI law. The Act creates a national “AI control tower” and a new AI Safety Institute, and commits R&D investment and funding to make Korea a “top-3 AI nation”. It mandates risk assessments, transparency and safety measures for high-impact AI (including GenAI). Thus Korea’s strategy blends industrial promotion (supporting startups, talent and infrastructure) with regulatory guardrails to build public trust.

  • Other countries: Many others have or are updating strategies. For example, Japan, India, Canada and Australia each have national AI plans (often centering on healthcare, manufacturing or education) or are revising them. Oxford Insights notes that in 2024 about a dozen new AI strategies were announced – triple the number in 2023 – especially from developing countries. (This underscores growing global momentum in AI policymaking.)

  • Multilateral ethics and AI initiatives: On the international stage, organizations have launched common principles and partnerships. UNESCO adopted the first global standard on AI ethics – a “Recommendation on the Ethics of AI” (2021) – committing all 193 member states to principles like fairness, transparency and human oversight. In mid-2024 the OECD updated its AI Principles (first adopted 2019) to address emerging issues like AI safety, privacy and IP in the age of generative AI. The updated principles have been endorsed by 47 jurisdictions (including the EU and the U.S.) and emphasize interoperability via common AI definitions. UNESCO has also joined the OECD-hosted Global Partnership on AI (GPAI) as an observer. GPAI (launched in 2020 out of a G7 initiative) fosters applied AI research and broad stakeholder dialogue on topics like AI and democracy or the environment. In November 2023 the UK convened the first global AI Safety Summit at Bletchley Park, where 28 countries (including the US, China, India and Brazil) and the EU agreed on the Bletchley Declaration, committing to cooperate on the risks of frontier AI models. These efforts signal that, despite different national approaches, major powers recognize the need for international cooperation on AI ethics, safety and standards.

6.4.2 AI Governance and Future Legal Landscapes

  • Regulatory trends – EU vs. US vs. others: A clear trend is a move toward explicit AI regulation, but with varying styles. The EU is implementing its AI Act, a risk-based regulation classifying AI systems from “unacceptable” (banned) to “high-risk” (strict rules). Under the Act, high-risk AI (e.g. in healthcare, transportation, education, employment screening) will face pre-market conformity assessments, rigorous documentation, data quality controls and human oversight requirements. By contrast, the U.S. has so far resisted a single law, preferring guidelines. The Biden administration’s Executive Order 14110 (Oct 2023) directs federal agencies to adopt AI risk management, sets new transparency and safety-testing expectations for foundation model developers, and directs new investment in R&D. The “AI Bill of Rights” remains a voluntary framework (e.g. it calls for alerting users when AI is used, aligning with privacy laws). In China, multiple new rules have appeared for content and safety. For example, the Cyberspace Administration’s Interim Measures (2023) regulate generative AI services; new rules will force providers to embed safety features and obtain security certification. By late 2025 all GenAI outputs must carry explicit or implicit labels. (China also strictly enforces data privacy and algorithm laws that impact AI.) South Korea’s AI Basic Act similarly enshrines governance: it will require businesses offering “high-impact” AI to perform risk assessments, appoint local compliance officers, and ensure human oversight. Other jurisdictions are active too: the UK has opted not to legislate yet but is empowering existing regulators (e.g. in finance, healthcare and data protection) with AI guidance, while Japan, Singapore, India and others are drafting new rules (often focused on critical sectors or GenAI). In sum, the global trend is toward greater oversight of AI’s harms – balancing innovation with precautions – though countries differ on scope and strictness.

  • Key regulations – EU AI Act and U.S. frameworks: The EU AI Act is the flagship. It entered into force in August 2024 and becomes applicable in stages, with most obligations taking effect by August 2026. It prohibits certain uses (e.g. social scoring and certain forms of real-time biometric surveillance) and requires high-risk AI developers to, among other things, conduct risk analyses, maintain logs, and provide “detailed documentation” to regulators. Even “limited-risk” systems (like AI chatbots) must meet transparency rules: users must be informed they are interacting with AI. In practice this creates one of the world’s strictest AI compliance regimes. In the U.S., by contrast, the emphasis is on general principles. The OSTP’s Blueprint (2022) sets out protected rights, and agencies like NIST and the Federal Trade Commission have published guidelines on fairness and accountability. The Oct 2023 Executive Order additionally requires developers of the most powerful models to notify the government and apply stronger safety testing. It also directs agencies to ensure equitable deployment of AI (e.g. in education, healthcare and housing) and to clarify how existing liability and consumer-protection rules apply to AI-related harm. However, any binding federal AI law in the U.S. remains pending: proposals such as the Senate’s “SAFE Innovation Framework” and various bipartisan bills have been floated but not enacted.

  • Standardization efforts: Parallel to laws, technical and management standards are emerging for trustworthy AI. ISO and IEC have created or are drafting standards: for instance, ISO/IEC 42001:2023 provides a framework for “AI management systems”, outlining how organizations can govern AI responsibly (covering risk management, impact assessments, vendor oversight and more). It explicitly promotes fairness, transparency and auditability – stating that AI systems should be “explainable, auditable and free from bias”. In the U.S., NIST’s AI Risk Management Framework (RMF) provides a voluntary structure for identifying and mitigating risks. NIST characterizes “trustworthy AI” through seven attributes (valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed) and encourages organizations to integrate these into design and monitoring. Other standards bodies (IEEE, ITU) are also developing standards on topics like bias detection and model documentation, though these are still works in progress.

  • Explainability, safety and fairness benchmarks: Across academia and industry there is recognition that AI systems need quantifiable standards for key qualities. For explainability and transparency, initiatives like the DARPA XAI program and the IEEE P7000 standards series aim to define metrics and testing procedures. In practice, however, no single global benchmark exists yet. A recent Stanford AI Index report notes “a significant lack of standardization” in how major AI labs evaluate model safety and bias. Each developer tends to use different test suites for issues like toxicity, misinformation or fairness, making cross-comparison difficult (a minimal sketch of common group-fairness statistics follows below). Efforts are underway (via academic conferences like FAccT or consortia like the Partnership on AI) to build shared “red team” benchmarks and evaluation frameworks, but these are still nascent. In Europe, the AI Act requires organizations to report serious AI incidents and to maintain post-market monitoring, which over time may help create common metrics for reliability and harm mitigation. Likewise, the OECD recommends that member countries share best practices on AI risk evaluation.
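
To make the idea of a fairness benchmark concrete, the short Python sketch below computes two widely used group-fairness statistics (the demographic parity difference and the disparate impact ratio) on toy predictions. It is a minimal illustration only, not any lab's or regulator's actual benchmark; the group labels, predictions and threshold in the comments are hypothetical example choices.

```python
# Minimal sketch of a group-fairness check on toy data (illustrative only,
# not an official benchmark). It computes two common statistics:
#   - demographic parity difference: |P(pred=1 | group A) - P(pred=1 | group B)|
#   - disparate impact ratio: min(rate) / max(rate) across the two groups
from collections import defaultdict

def selection_rates(groups, preds):
    """Fraction of positive (e.g. 'approve') predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, preds):
        totals[g] += 1
        positives[g] += int(p)
    return {g: positives[g] / totals[g] for g in totals}

def fairness_report(groups, preds):
    rates = selection_rates(groups, preds)
    assert len(rates) == 2, "toy example assumes exactly two groups"
    (_, r_a), (_, r_b) = sorted(rates.items())
    return {
        "selection_rates": rates,
        "demographic_parity_difference": abs(r_a - r_b),
        # Ratios below ~0.8 are often flagged in practice, echoing the
        # "four-fifths rule" from U.S. employment-selection guidance.
        "disparate_impact_ratio": min(r_a, r_b) / max(r_a, r_b),
    }

if __name__ == "__main__":
    # Hypothetical model outputs (1 = positive decision) for two groups.
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    preds = [1, 1, 1, 0, 1, 0, 0, 0]
    print(fairness_report(groups, preds))
```

Shared benchmarks would go further than a single statistic like this: they would also standardize the evaluation datasets, the protected attributes considered and the reporting format, which is precisely the kind of cross-lab consistency the AI Index report finds lacking today.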

In summary, global AI governance is coalescing around a few themes: fostering AI innovation while safeguarding rights; aligning on ethical principles; and building common technical standards. Different countries vary in emphasis – e.g. the EU leads with binding regulation and a rights-based approach, the U.S. with voluntary frameworks and funding for trustworthiness, and China with strict content control and tech leadership goals. However, the recent AI Safety Summit, OECD principles update and UNESCO guidelines suggest increasing convergence: major AI powers recognize the need for interoperability (through shared definitions and collaborative research) to manage AI’s global impact. As one analysis notes, new international agreements (like the Bletchley Declaration) and standards bodies aim to “harmonise AI governance across borders” and reduce the burden of fragmented rules for businesses.

Sources: Government and multilateral reports on AI policy (European Commission, UNESCO, OECD, NIST, etc.) and recent analyses (e.g. RAND, Stanford AI Index, law firm white papers) were used to compile the above. These provide objective details on current AI strategies, ethics principles, laws and standards.
