ASEAN Bulletin Issue 5
April 30, 2025
Centre on Asia and Globalisation
Lee Kuan Yew School of Public Policy

Guest Column

The post-war liberal order has frayed, and Southeast Asia faces an increasingly uncertain strategic environment. This evolving situation adds pressure to a region historically confronted by both traditional and non-traditional security threats. It is therefore unsurprising that military modernisation is an especially salient issue among regional states. However, policymakers must balance national defence needs against other socio-economic priorities. Consequently, regional defence spending remains at around 2-3 percent of GDP, modest compared with adjacent regions such as Northeast Asia.

Nevertheless, there is significant interest in the military use of Artificial Intelligence (AI), reflected in the development and acquisition of Autonomous Weapon Systems (AWS) and AI-based Decision Support Systems (ADSS). The Philippines and Indonesia have signalled their interest in acquiring AWS, while Singapore unveiled an ADSS during Exercise Forging Sabre in 2024. While the adoption of AI-enabled systems is unsurprising given the strategic environment and lessons from the ongoing Ukraine-Russia War, a cautious approach is warranted.

Confidence in the transformative potential of AI-enabled defence technologies is not unique to Southeast Asia. Given the advantages they offer, such as (1) freedom from human psychological and physical limitations, (2) limited direct risk to operators, and (3) the mitigation of bias, their appeal is easy to see. However, this confidence may be misplaced and could contribute to further regional instability. To avoid this outcome, AI’s transformative potential must be weighed across three fundamental dimensions: technological, organisational, and individual.

The technologies required to develop and operate AWS and ADSS are neither trivial nor inconsequential. While recent breakthroughs such as DeepSeek signal the entry of more actors into the AI space, they do not remove the fundamental requirements for expertise in computing and data, challenges that continue to impede progress in the region. Although material constraints can be addressed through commercial off-the-shelf (COTS) solutions, this approach brings its own challenges.

Mismatched “end goals” between commercial and defence actors may lead to undesirable outcomes and increase overall uncertainty in the use of these technologies. Relatedly, concerns about supply chains and development processes could erode trust in such systems, resulting in limited adoption or even abandonment. Moreover, acquiring these technologies from multiple providers without greater insight into their design and development raises concerns about unpredictable behaviour when they interact with other autonomous systems or human counterparts.

These concerns feed into potential friction points at the organisational level, where institutional inertia and bureaucratic competition can further slow adoption. Underlying this is the growing perception in some quarters that AI threatens established parochial interests. Furthermore, organisations may resist embracing technological solutions due to persistent cultural preferences.

At the individual level, concerns affecting both the immediate operators of these systems and policymakers themselves require special attention. For operators, questions of trust in delegating action to, or relying on recommendations from, algorithmic systems dominate ongoing discourse, an issue underscored by recent research highlighting widespread distrust of these technologies. Conversely, sustained exposure to AI-enabled technologies outside the defence sector may also foster automation bias. Amongst policymakers, the uncertainty of the strategic environment, coupled with a lack of technological expertise, could prompt overconfidence and, in turn, inappropriate policy choices. While this tendency can manifest in response to other emerging and disruptive technologies, overconfidence in AI could have especially pronounced consequences, given the hyperbolic narratives about its capabilities that abound.

To address these concerns, ASEAN needs to establish clear guidelines on the use of AI. Although the ASEAN Guide on AI Governance and Ethics exists, it does not directly address the use of AI within the defence sector. Fortunately, there are signs of a more cautious approach towards AI in defence: this year, ASEAN defence ministers acknowledged AI’s transformative potential alongside its corresponding risks. Complementing this, regional states are actively participating in multilateral forums such as the Responsible AI in the Military Domain (REAIM) summit to help establish and promote norms for AI use in a military context.

The current geopolitical climate guarantees sustained efforts within and outside the region to augment military capabilities using AI. Nevertheless, this should not become an excuse to abandon caution in favour of a race to the bottom, one that could undermine regional security and stability.


Miguel Alberto Gomez is a Senior Research Fellow with the Centre on Asia and Globalisation (CAG) at the Lee Kuan Yew School of Public Policy, National University of Singapore.


The views expressed in the article are solely those of the author(s) and do not necessarily reflect the position or policy of the Lee Kuan Yew School of Public Policy or the National University of Singapore.


Image Credit: Flickr/deathhell

