Charting ASEAN’s Path to AI Governance
Uneven Yet Gaining Ground
The Association of Southeast Asian Nations (ASEAN) and its member states’ innovation-friendly approach to artificial intelligence (AI) has in some measure been effective in advancing the region toward its goal of becoming a fully digital economy. The current framework seeks to ensure that no country is left behind. However, growing AI-related risks now suggest the need for more binding regulations. It may be timely to review existing guidelines and assess which countries are technologically prepared to start implementing enforceable regulations. Singapore has the potential to serve as a governance benchmark, helping pave the way for a region-wide approach through alignment of regulations across ASEAN. This could enhance the region’s credibility as a technological hub, protect individuals from bad actors exploiting legal gaps, and attract global investment by reinforcing ASEAN’s image as a trustworthy destination for AI development.
This brief explores ASEAN’s collective approach to AI regulation, as well as the domestic efforts (or at times the lack thereof) of individual member states. It concludes that the region can contribute valuable insights to the global conversation on responsible AI development and governance.
AI Regulation: The ASEAN Way
The AI boom of recent years has aligned with ASEAN’s ambition to become a digital economy and digital society by 2025. Yet its key document, the ASEAN Digital Masterplan 2025 (published in 2021), makes no mention of AI across its 140 pages. This omission suggests that the potential of AI might not have been fully anticipated during the drafting process.
To catch up, ASEAN has since taken an innovation-friendly governance approach. As binding regulatory frameworks emerged around the globe, ASEAN released the nonbinding Guide on AI Governance and Ethics for the entire region in February 2024. The guide articulates core principles—transparency, fairness, security, reliability, privacy, accountability, and human centricity—which seek to ensure that countries retain agency over AI-driven outcomes. Its voluntary, adaptable framework enables countries to tailor governance to their specific contexts and degrees of AI readiness. Given the varying levels of domestic AI infrastructure and capabilities across Southeast Asian nations, this nuanced approach preserves flexibility and room for AI innovation.
Yet, if ASEAN is truly committed to ensuring that emerging technologies remain safe and human-centric, it must begin to consider more binding legal frameworks, including dedicated AI regulation. According to the guide, human centricity means that AI systems are designed and used to promote human well-being, ensuring that they benefit society while protecting individuals, particularly the vulnerable, from harm or exploitation. This is a pressing issue, especially since the technology is already being used in malign ways, such as to produce deepfake content during elections.
However, even though ASEAN serves as the primary regional organization in Southeast Asia, it remains an intergovernmental body whose member states retain full sovereignty. Unlike the European Union, ASEAN lacks a parliament with legislative authority, which makes the implementation of binding laws across the region more challenging. Nonetheless, regional harmonization is not out of reach. ASEAN has previously demonstrated effective coordination in the technological sphere—for example, through the ASEAN Cybersecurity Cooperation Strategy.
Domestic AI Playbooks
Given ASEAN’s overall innovation-friendly approach, it is unsurprising that no member state has enacted dedicated, binding AI legislation to date. Southeast Asian countries largely rely on “soft law” instruments, such as ethical guidelines, governance frameworks, national roadmaps, and readiness assessments, to guide the responsible development and deployment of AI.
Brunei. Like the rest of the region, Brunei has yet to enact dedicated AI legislation, but it is actively developing a governance framework rooted in innovation-led principles. This enables the country to drive innovation across multiple sectors and applications. Brunei published its own Guide on AI Governance and Ethics. Similar to ASEAN’s approach, this guide is technology- and sector-neutral, taking a pragmatic, principles-based stance designed to keep pace with rapid technological change. A formal national AI strategy is also in progress under the country’s 2025 Digital Economy Masterplan, prioritizing infrastructure development, talent cultivation, and innovation. The strategy also emphasizes flexibility, allowing AI guidelines to be revised as technologies and societal needs evolve. Meanwhile, Brunei is finalizing the Personal Data Protection Law—a critical step for regulating data practices in AI systems and beyond.
Vietnam. Vietnam’s AI governance resembles Brunei’s in its emphasis on digital transformation policies but differs through its use of binding sector-specific regulations. The National Program for Digital Transformation promotes AI use in areas like education, public administration, and urban planning. Efforts include AI-integrated curricula, smart-city projects, and even AI-powered waste management. Initiatives like national data systems led by the Ministry of Education show Vietnam’s commitment to AI readiness. Use of AI in the workplace is covered by the Labor Code, while cybersecurity laws support AI-related protections, advancing Vietnam’s aim to be among the top cybersecurity leaders by 2030. While these steps allow Vietnam to ensure that innovation aligns with context-specific societal needs, challenges remain, including ethical concerns, infrastructure gaps, and fragmented governance.
Malaysia. Akin to the broader ASEAN approach, Malaysia is shaping its AI regulations through a combination of ethical, legal, and infrastructure frameworks. By combining ethics and regulation with strategy, the country is able to foster responsible AI in high-impact sectors. The National AI Roadmap (2021–25) promotes responsible AI use across five key sectors: agriculture, healthcare, smart cities, education, and public services. Complementing this roadmap, the National Guidelines on AI Governance and Ethics set out core principles, including fairness, safety, transparency, and human benefit, to build trust and mitigate risks. Reflecting a commitment to human centricity, Malaysia’s 2024 Cybersecurity Bill supports AI governance by addressing critical issues of data security. Its National Artificial Intelligence Office, established in 2024, serves as the central body for coordinating AI policy and ensuring alignment with international standards. In line with its flexible, adaptive approach, Malaysia continues to engage in open discussions with various stakeholders to address emerging AI challenges.
Thailand. Thailand’s approach to AI governance combines national planning, legislative development, and sector-specific initiatives, driving innovation while ensuring responsible use. Its National AI Strategy and Action Plan (2022–27) is ambitious, aiming to position Thailand as a regional AI hub focused on infrastructure, workforce development, and legal readiness by 2027. The country also demonstrates a strong commitment to responsible AI. Last year, the Ministry of Digital Economy and Society and the Electronic Transactions Development Agency issued generative AI guidelines mandating risk assessments, transparency, and data protection compliance. Additionally, a royal decree, currently under review, adopts a risk-based model inspired by the EU AI Act to enhance algorithmic accountability and public safety. The legislation requires that high-risk AI applications (such as facial recognition for surveillance) be registered with the Thai government, outlines categories of prohibited uses, and imposes penalties for noncompliance. For a more balanced approach, Thailand is also drafting the Act on the Promotion and Support of Artificial Intelligence Innovation, aimed at lowering regulatory barriers and encouraging collaboration. By 2025, the country aims to have a dedicated framework for the public sector to further guide ethical AI use in areas such as healthcare and energy. Thailand also partners with UNESCO on AI ethics training and offers incentives—including tax breaks—to encourage responsible, human-centric AI adoption.
Indonesia. Indonesia primarily relies on adapting existing laws, ethical guidelines, and international standards, rather than creating entirely new regulatory frameworks for AI. This approach potentially offers greater legal clarity and builds on frameworks that stakeholders already know. The country also appears to adopt a more consumer-friendly stance, legally defining AI as an “electronic agent” under the amended Electronic Information and Transactions Law, which holds operators liable unless user negligence can be proved. Government Regulation No. 71 of 2019 and the Personal Data Protection Law form the legal foundation of AI use in Indonesia, emphasizing security and consumer protection. AI ethics were also outlined in Circular Letter No. 9/2023, which focuses on transparency and inclusivity, with additional guidance from the Financial Services Authority for financial technology. In terms of innovation, Indonesia’s National AI Strategy (2020–45) sets long-term goals across ethics, talent, and infrastructure. A draft presidential regulation is in progress, with sector-specific rules in health and education expected. Indonesia’s approach aligns with Pancasila values[1] as well as with international frameworks like the EU AI Act and UNESCO assessments.
The Philippines. Whereas Indonesia leans on existing frameworks, the Philippines has introduced targeted regulations to steer the responsible growth and use of AI technologies. This approach gives the country regulatory agility and responsiveness to challenges as they arise. Targeted, sector-specific regulations also allow the Philippines to address emerging risks more efficiently. The Philippines’ 2021 AI Roadmap aims to boost digital infrastructure, research, and workforce skills. The country also has the Data Privacy Act of 2012; under this legislation, when AI systems process personal information, individuals must consent to automated processing, and such processing must be registered with the National Privacy Commission. Key laws, such as Republic Act No. 11927, support digital skill-building, while Republic Act No. 10175 addresses deepfake-related offenses and online harassment. In response to AI-generated deepfakes during its recent midterm elections, the Philippines issued Resolution No. 11064, mandating transparency in the use of AI in electoral processes. Advisory No. 2024-04 further clarifies how AI systems must comply with privacy laws. The country also aligns with UNESCO and the Bletchley Declaration on ethical AI and supports regional regulation within ASEAN.
Myanmar, Cambodia, and Laos. These three countries remain in much earlier stages of AI governance relative to the rest of the region. Currently, they do not have dedicated national AI strategies or agencies. While they participate in regional AI efforts by closely following the ASEAN guide, more coordinated action, both nationally and regionally, is urgently needed to ensure they are not left behind in AI innovation. ASEAN’s preference for soft law reflects the reality that many member states are still catching up technologically. If these three countries lag further behind, it could delay the region’s readiness to adopt more binding AI regulations, potentially slowing collective progress.
Myanmar’s Cybersecurity Law No. 1/2025 indirectly touches on AI, focusing on digital platforms and national security, while its e-Governance Master Plan 2030 sets out the country’s vision of a digital government that is efficient, transparent, citizen-centric, and inclusive, a vision that also shapes AI development. High-level coordination meetings are also being held to craft a future national AI strategy and policy. However, weak infrastructure, censorship, and the absence of AI-specific laws continue to limit progress in Myanmar.
Cambodia’s secretary of state, Keo Sothie, stressed that the country will “regulate, not strangulate” AI in its approach to oversight. In June 2025, public consultations were launched on a draft national AI strategy, a key step in shaping policy. Cambodia’s approach to AI rests on its wider Digital Government Policy (2022–35) and is tailored to AI through the Ministry of Industry, Science, Technology and Innovation’s AI Landscape Report (2023). These steps mark key progress in Cambodia’s AI strategy and regulation efforts.
While no specific AI law exists in Laos, its government has adopted a twenty-year vision (2021–40), augmented by shorter-term plans, to guide digital policy, including AI. Legal frameworks under development aim to align with national priorities, ensuring transparency, accountability, data privacy, and protection of the public interest, in keeping with the ASEAN way.
The Curious Case of Singapore
Singapore is often lauded as Asia’s smartest city, with its leadership in innovation, including in AI, setting the country apart from its Southeast Asian peers. Given its strong technological foundation, Singapore is well-positioned to take the lead in shaping more binding AI regulations. However, like much of the region, its current approach to AI governance remains flexible and principles-based, aiming to balance innovation and ethical use.
This measured strategy can be a double-edged sword. On the one hand, it keeps Singapore ahead within the region and positions the country competitively on the global stage, even among more advanced economies that typically have more stringent laws. It likewise enables the country to pursue bilateral agreements and collaboration on AI development outside the region with minimal restrictions. For example, Singapore built on its 2020 Digital Economy Agreement with Australia to deepen AI cooperation focused on maximizing benefits and mitigating risks. In 2024, it also partnered with Rwanda to launch the AI Playbook for Small States, highlighting how geographically distant nations can face similar challenges in AI policy implementation.
On the other hand, Singapore might be missing a valuable opportunity to take the lead in shaping binding AI regulation in the region. The country has already demonstrated its regulatory capabilities through the effective implementation of its Cybersecurity Act. It certainly holds much potential to play a similar role in shaping AI governance. A robust approach to advancing both strategy and ethics through policy and implementation measures can already be observed in the country. Singapore launched its National AI Strategy in 2019, well ahead of its regional peers, with the goal of positioning itself as a global leader in AI, especially in high-value sectors, by 2030. In the same year, the Model AI Governance Framework (later updated in 2020) was launched, providing detailed guidance for private-sector organizations, emphasizing transparency, fairness, explainability, and accountability. The AI Verify Toolkit, launched in 2022, helps assess AI systems for trustworthiness and aligns with Singapore’s “smart nation” goals. The following year, in 2023, Singapore released the National AI Strategy 2.0. The updated strategy outlines long-term goals, reinforcing a soft-law approach that remains agile yet principled in response to evolving technologies.
Consistent with ASEAN’s principles-based approach, Singapore supports responsible AI development through frameworks like the Advisory Guidelines for Personal Data in AI Systems, which ensure alignment with the country’s Personal Data Protection Act. To further promote safe innovation, the Model AI Governance Framework for Generative AI (2024) emphasizes transparency, risk assessment, and safeguards against harmful outputs. Singapore has also expanded the Privacy Enhancing Technologies Sandbox to test privacy-preserving technologies in generative-AI contexts. Although the country does not yet have comprehensive, AI-specific legislation, regulators like the Monetary Authority of Singapore have issued sector-specific guidelines to address emerging risks.
While Singapore moves decisively toward a proactive and flexible governance model, other ASEAN states are taking incremental but meaningful steps toward AI policies. Together, these efforts form a regional approach favoring readiness over coercion. If global trends hold, however, once states are sufficiently equipped with a baseline level of technological and governance capacity, the next logical step is to move toward more formal and binding regulation. This is crucial, as the rapid pace of AI development not only brings new opportunities but also poses significant risks that demand a stronger regulatory response.
As a country that is technologically ready for such governance, Singapore risks setting a negative example in the region if it does not lead in establishing binding AI regulations. The country has the capacity to set a regional benchmark for responsible AI governance, encouraging regulatory alignment and accelerating progress across ASEAN. Adopting binding laws would also signal to the global community that the region takes AI safety seriously, enhancing its reputation as a trustworthy technological hub and attracting more international investment. Clear legal frameworks promote accountability and mitigate risk, while the absence of binding laws creates gaps that bad actors can exploit, potentially leading to unfair practices and user harm. Regulatory ambiguity also weakens the region’s ability to resolve legal disputes involving AI. Establishing firm legal standards could provide essential guidance for the public, private, and academic sectors, helping Singapore and the rest of the region stay resilient amid rapid technological change.
Conclusion
Southeast Asia’s AI governance may seem uneven, but it reflects a region attempting to adapt regulation to its diverse political, economic, and developmental realities. ASEAN’s use of soft law through voluntary guidelines, strategic frameworks, and ethics-based principles shows a pragmatic, innovation-friendly approach. Rather than treating innovation and regulation as opposing forces, the region promotes a flexible, inclusive model that enables countries at different stages of development to take part in AI innovation. Most are crafting adaptable national playbooks aligned with regional norms, prioritizing innovation over inflexible rules. As seen in the case of Singapore, this approach is a deliberate choice to balance opportunity with responsibility while considering infrastructure capabilities and evolving digital ecosystems. However, once readiness is in place, bolder steps toward binding regulation to address risks should follow.
Overall, the ASEAN region’s approach to AI governance demonstrates that the development and implementation of responsible AI do not always require sweeping laws at the onset. Instead, regulation can be built on a foundation of trust, ethics, and flexibility that adapts to technological progress.
Karryl Kim Sagun Trajano is a Research Fellow for Future Issues and Technology in the S. Rajaratnam School of International Studies (RSIS) at Nanyang Technological University in Singapore.
Endnotes
[1] Pancasila is Indonesia’s national ideology, embodying the values of divinity, humanity, unity, democracy, and justice. It serves as a guide for social life and underpins governance by promoting checks and balances, consensus-based decision-making, and the pursuit of legal certainty, justice, and social harmony.