Commentary
Japan’s Hybrid Approach to AI Governance: Balancing Soft Law and Hard Law
Kyoko Yoshinaga considers Japan’s approach to AI governance, its policy logic, and why this model works effectively in Japan’s legal and cultural context.
Japan has been a leader in international discussions on responsible artificial intelligence (AI) from the outset, helping shape the G-7, Organisation for Economic Co-operation and Development (OECD), and G-20 principles. Most notably, as the G-7 host in 2023, Japan launched the Hiroshima AI Process, which established eleven guiding principles and an accompanying code of conduct for organizations developing advanced AI systems, articulating shared global values for safe and trustworthy AI while respecting national differences in regulatory approaches.
Domestically, Japan adopts a “soft law” approach to AI governance through guidelines that are not legally binding, complemented by the newly enacted AI Promotion Act, which also avoids direct penalties. This flexible framework has sometimes been misunderstood as “hands-off” regulation, raising questions about enforcement. However, there are unique reasons why it functions effectively in Japan, where cooperation between government and industry and social accountability play significant roles.
Moreover, Japan does not solely rely on soft law. When necessary, it amends existing sector-specific laws to address risks through legally binding measures. The combination of comprehensive soft-law frameworks and sector-specific hard-law reforms defines Japan’s current domestic regulatory strategy toward AI and offers lessons for other countries developing their approach to AI governance. This commentary considers Japan’s approach, its policy logic, and why this model works effectively in Japan’s legal and cultural context.
Japan’s Contributions to International Discussions
Japan has long been engaged in global AI governance efforts. As the G-7 host in 2016, it introduced eight principles for AI research and development—transparency, controllability, safety, security, privacy, ethics, user assistance, and accountability—proposed by then minister of internal affairs and communications Sanae Takaichi, who is now Japan’s first female prime minister. These nonbinding principles provided a foundation for later international discussions within the G-7 and OECD and were reflected in the OECD AI principles adopted in 2019.[1]
When Japan again chaired the G-7 in 2023, Prime Minister Fumio Kishida launched the Hiroshima AI Process at the May leaders’ summit to address both the opportunities and risks of AI. Following discussions at the ministerial meeting in September and at the Internet Governance Forum in Kyoto in October, this process culminated in the Hiroshima AI Process Comprehensive Policy Framework, adopted in December 2023. This was the first international framework combining guiding principles and a code of conduct to ensure the safe, secure, and trustworthy development and deployment of advanced AI.[2] The Hiroshima AI Process aims to build inclusive global AI governance, encouraging participation beyond the G-7 to include developing and emerging economies, industry, academia, and civil society. To advance this vision, the Friends Group of the Hiroshima AI Process was launched in May 2024, and 58 countries and regions had joined by October 2025.[3] This inclusive vision also reflects Japan’s cultural value of wa (harmony)—a traditional spirit that emphasizes cooperation, coordination, and mutual respect, which continues to shape Japan’s consensus-based and collaborative approach to international dialogue on AI governance.
Japan’s Domestic AI Policies and Frameworks
Since 2016, Japan has developed an extensive set of AI guidelines and policies. In 2017 the Ministry of Internal Affairs and Communications (MIC) released the “AI R&D Guidelines for International Discussions,” adding “collaboration” as a principle to emphasize interoperability and interconnectivity among AI systems.[4] Recognizing the growing social implications of AI, the Cabinet Office Council published the “Social Principles of Human-Centric AI” in 2019.[5] These seven principles—(1) human-centricity (i.e., the utilization of AI must not infringe on the fundamental human rights guaranteed by the constitution and international standards), (2) education/literacy, (3) privacy protection, (4) security, (5) fair competition, (6) fairness, accountability, and transparency, and (7) innovation—are grounded in the three core philosophies of (1) dignity, (2) diversity and inclusion, and (3) sustainability. The same year, MIC issued the “AI Utilization Guidelines” with ten principles addressing risks in AI deployment, noting that AI systems evolve continuously through data learning.[6] While MIC was responsible for developing the AI principles as part of its role in overall policy coordination and ensuring public trust, the Ministry of Economy, Trade and Industry (METI) was responsible for promoting industrial innovation and bridging policy principles with business implementation. In 2022, METI released the “Governance Guidelines for Implementation of AI Principles” to translate these principles into practice, offering tools and case studies for corporate risk management.[7]
Building on the Social Principles of Human-Centric AI, these three earlier guidelines—MIC’s AI R&D Guidelines and AI Utilization Guidelines and METI’s Governance Guidelines—were integrated and updated to reflect the latest technologies and international discussions, resulting in the “AI Guidelines for Business,” published in April 2024.[8] This framework adopts a risk-based and agile governance approach, encouraging voluntary corporate efforts while staying aligned with international policy trends. It draws on and refers to global examples such as the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, the UK National Cyber Security Centre’s Guidelines for Secure AI System Development, the European Union’s AI Act, and the European Commission’s Ethics Guidelines for Trustworthy AI, as well as guidance issued by the OECD, Global Partnership on AI, G-7, Council of Europe, International Organization for Standardization, World Economic Forum, and United Nations. Created through multi-stakeholder dialogue, the guidelines emphasize practicality and legitimacy. They are also treated as a “living document,” having already been updated in November 2024 and March 2025 to keep pace with technological and international developments.
Recognizing the need to promote the responsible use of AI across government ministries, the Digital Agency issued “The Guideline for Japanese Governments’ Procurements and Utilizations of Generative AI for the Sake of Evolution and Innovation of Public Administration” in May 2025.[9] It reinforces compliance with the AI Guidelines for Business by ensuring that the common guiding principles outlined there are applied throughout the government’s procurement and contracting processes. It also establishes the Advisory Board on Advanced AI, which is tasked with receiving risk information from ministries and agencies and providing cross-ministerial advice.
Following the release of these procurement guidelines, Japan enacted the Act on Promotion of Research and Development and Utilization of AI-related Technology (AI Promotion Act), which took effect on September 1, 2025. The law promotes AI research, development, and use without imposing direct penalties for violations. It also authorizes ministries to request information from business operators, analyze misuse cases affecting individuals’ rights and interests, and conduct research to support safe and effective development and use of AI. Based on these findings, the government may issue guidance, advice, or other necessary measures to relevant stakeholders. Prime Minister Shigeru Ishiba stated: “With this law, which both promotes innovation and addresses risks, we aim to make Japan the easiest country in the world for AI research, development, and implementation.”[10] Under this law, an AI strategic headquarters has been established within the Cabinet Office with the prime minister as director-general, and work on the AI Basic Plan is underway and expected to be finalized by the end of 2025.
Japan has also issued other comprehensive guidelines, including the “Contract Guidelines on Utilization of AI and Data,” which provide guidance for private businesses and other entities when entering into contracts concerning data use or the development and utilization of AI-based software, and the “Machine Learning Quality Management Guidelines,” which define quality standards to improve reliability and reduce risks.[11] The AI Safety Institute, established to develop and promote evaluation methods and standards for safe and trustworthy AI, is actively issuing practical guidance on AI safety. Sector-specific guidelines complement these efforts. In education, for example, the Guideline for the Use of Generative AI in Primary and Secondary Education supports appropriate use in schools by explaining key concepts and contextual risks.
Meanwhile, Japan continues to amend existing hard laws to ensure accountability in specific sectors. One example is the Act on Improving Transparency and Fairness of Digital Platforms, which requires platform providers to ensure fairness in the digital market. Although not directly targeting AI, the law contributes to AI governance by enhancing algorithmic transparency, accountability, and fairness in digital transactions—much of the content display and recommendation on such platforms relies on AI technologies—thereby supporting a trustworthy data and AI ecosystem. The Financial Instruments and Exchange Act regulates high-speed algorithmic trading operators through registration and risk-management obligations. To facilitate AI development, Japan amended the Copyright Act and the Act on the Protection of Personal Information, easing the use of data to develop and train AI models. On the utilization side, the Road Traffic Act now allows SAE Level 4 autonomous driving—the second-highest level of automation—under certain conditions.
Together, these frameworks illustrate Japan’s balanced approach of promoting innovation through flexible soft laws while ensuring safety and accountability through targeted hard laws.
Why the Soft-Law Approach Works in Japan
METI’s report “AI Governance in Japan” notes that in order to balance respect for AI principles with the promotion of innovation, AI governance should, at least for the time being, rely primarily on instruments of soft law, which are more favorable to companies that adhere to responsible AI principles.[12] Three key reasons motivate Japan’s approach. First, AI is viewed as a solution to major social challenges, such as labor shortages resulting from the country’s low birthrate and aging population. Second, there is a time lag between the formulation and enforcement of laws and the speed and complexity of AI technology development and societal implementation. Third, detailed rule-based regulation could stifle innovation. These points are explicitly noted in the preface to the AI Guidelines for Business.
Japan’s broader vision for AI governance is to realize Society 5.0—a human-centric society that achieves both economic growth and solutions to social challenges through cyber-physical systems. The AI Promotion Act, which currently does not include penalties, is expected to operate alongside soft-law instruments such as the AI Guidelines for Business. Together, they establish social norms that companies are encouraged to follow, contributing to Japan’s Society 5.0 vision through the development and utilization of advanced technology that remains human-centric.
Soft law governing AI also enables Japan to flexibly incorporate lessons from international discussions, ensuring that its frameworks are current. This adaptability reflects the country’s long tradition of learning from global best practices and tailoring them to local contexts.
Several factors explain why this approach works effectively in Japan. The long-standing relationship between the Japanese government and industry means that when the government issues a guideline, companies tend to comply even without legal obligation. They know that noncompliance could result in reputational harm, loss of government support, or stricter regulation later. Even for foreign entities, failure to comply with guidelines could lead to operational restrictions or tighter oversight. Leading organizations are often invited to participate in the rulemaking process through membership in government commissions, thereby becoming implicitly committed to following the resulting guidelines. Many technology companies, for example, had already been adhering to the three earlier guidelines—the direct predecessors of the AI Guidelines for Business.
Additionally, Japan’s strong culture of social accountability reinforces compliance. Companies fear reputational harm far more than legal penalties, recognizing that public trust is essential for business survival. Consequently, corporate adherence to AI ethics and governance is often regarded as part of corporate social responsibility and environmental, social, and governance initiatives.
Japan’s multi-stakeholder policymaking process further strengthens this approach. The expert commission for the AI Guidelines for Business included participants from academia, legal practice, major technology firms, start-up companies, and civil-society organizations. This inclusivity ensures that diverse perspectives are incorporated, thereby enhancing the legitimacy of the AI guidelines and fostering broad support.
At the same time, Japan has been amending hard laws, as mentioned earlier, to complement its overarching soft-law approach. Thus, AI governance is not left entirely to voluntary measures but combines flexible soft-law instruments with binding rules that address risks according to context and application. This domestic strategy aligns with Japan’s international approach, with the Hiroshima AI Process Framework serving as a set of overarching principles designed to encourage broad participation. Sustained international collaboration is essential to achieving interoperability. Therefore, while global legally binding rules remain unlikely given differences in legal traditions, institutional capacities, and technological maturity, principle-based agreements provide a practical and sustainable path forward. In this context, Japan advocates for incremental convergence—a gradual alignment of approaches through shared principles, mutual learning, and instruments of soft law. Such frameworks establish common goals while allowing countries flexibility in national implementation—an approach that complements Japan’s domestic strategy.
Conclusion
Japan’s AI governance seeks to promote innovation while managing risks through a layered approach. The AI Promotion Act and comprehensive soft-law frameworks such as the AI Guidelines for Business provide flexibility, while sector-specific hard laws offer safeguards where needed. This hybrid structure enables Japan to balance agility with accountability in a way that suits its institutional and cultural context.
Internationally, Japan has demonstrated leadership through the Hiroshima AI Process, advancing principle-based and interoperable frameworks for trustworthy AI. By promoting collaboration among governments, industry, academia, and civil society, Japan contributes to building an inclusive and cohesive global governance environment—one that sets common goals while respecting each country’s approach and granting flexibility in implementation.
However, this model rests on the premise of trust and voluntary compliance. If responsible development and use cannot be ensured under existing soft-law mechanisms, the government may need to take further steps, including establishing stronger regulatory measures. As AI systems become increasingly autonomous, it may be worth reviving the principle of controllability—originally included in Japan’s AI R&D Guidelines for International Discussions—which emphasized that “developers should pay attention to the controllability of AI systems.”[13]
Looking ahead, the challenge will be to sustain this balance by continually updating soft-law frameworks in step with technological advances while reinforcing accountability through targeted hard-law mechanisms. For other nations seeking to align AI governance with shared human values while accommodating diverse national and regional systems, Japan’s experience offers a model of how responsible innovation and flexible governance can coexist. At its core, Japan’s approach remains guided by the spirit of wa—harmony—which values cooperation and mutual respect as the foundation for maintaining balance in governance.
Kyoko Yoshinaga is a Project Associate Professor of the Graduate School of Media and Governance at Keio University and an Affiliate Scholar at Georgetown Law Institute for Technology Law and Policy. Her work focuses on the law and policy of emerging technologies, particularly AI governance and ethics. She also serves as an expert for the Global Partnership on Artificial Intelligence.
This research was supported by the JST Moonshot R&D project, JPMJMS2215.
Endnotes
[1] Conference toward AI Network Society, “AI Utilization Guidelines: Practical Reference for AI Utilization,” Report, August 2, 2019, https://www.soumu.go.jp/main_content/000658284.pdf.
[2] See “Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI System,” https://www.soumu.go.jp/hiroshimaaiprocess/pdf/document04_en.pdf; and “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,” https://www.soumu.go.jp/hiroshimaaiprocess/pdf/document05_en.pdf.
[3] For more information on the Friends Group, see https://www.soumu.go.jp/hiroshimaaiprocess/en/supporters.html.
[4] Ministry of Internal Affairs and Communications (Japan), “Draft AI R&D Guidelines for International Discussions,” trans. Conference toward AI Network Society, July 28, 2017, https://www.soumu.go.jp/main_content/000507517.pdf.
[5] Cabinet Office Council (Japan), “Social Principles of Human-Centric AI,” 2019, https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf.
[6] Ministry of Internal Affairs and Communications (Japan), “AI Utilization Guidelines: Practical Reference for AI Utilization,” trans. Conference toward AI Network Society, August 9, 2019, https://www.soumu.go.jp/main_content/000658284.pdf.
[7] Expert Group on How AI Principles Should Be Implemented, “Governance Guidelines for Implementation of AI Principles,” ver. 1.1, July 28, 2022, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf.
[8] The original Japanese text of the guidelines is available from METI at https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/20240419_report.html.
[9] Digital Agency, “The Guideline for Japanese Governments’ Procurements and Utilizations of Generative AI for the Sake of Evolution and Innovation of Public Administration,” May 27, 2025, https://www.digital.go.jp/en/news/3579c42d-b11c-4756-b66e-3d3e35175623.
[10] The original Japanese quote is available at https://www.nikkei.com/article/DGXZQOUA0267B0S5A600C2000000.
[11] The original Japanese text of the “Contract Guidelines on Utilization of AI and Data” is available at https://www.meti.go.jp/policy/mono_info_service/connected_industries/sharing_and_utilization/20200619001.pdf, and the original Japanese text of the “Machine Learning Quality Management Guidelines” is available at https://www.digiarc.aist.go.jp/publication/aiqm/AIQuality-requirements-rev4.2.0.0113-signed.pdf.
[12] Expert Group on How AI Principles Should Be Implemented, “AI Governance in Japan,” ver. 1.1, July 9, 2021, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20210709_8.pdf.
[13] For further discussion, see Kyoko Yoshinaga, “Controllability as a Core Principle for AGI Governance and Safety,” in Crisis or Redemption with AI and Robotics? The Dawn of a New Era, ed. M.F. Silva et al. (Cham: Springer, 2025), 144–53.