Singapore has earned a reputation as one of the world’s foremost “living labs” for disruptive technologies. From autonomous vehicles (AVs) to drones, robotics, and now artificial intelligence (AI), the city-state has taken a distinctive approach in testing how new technologies can be integrated into society. But with disruption comes risk. How does a government encourage innovation while protecting the public from safety failures, privacy breaches, cybersecurity attacks, or job losses?
The answer, argues LKYSPP Assistant Professor Araz Taeihagh, lies in what he calls adaptive governance: a flexible, experimental, and iterative approach to policymaking that allows innovation to proceed within guardrails, learning and adjusting as technologies evolve.
As he observes: “The accelerated expansion of trials and regulatory provisions for AVs demonstrates Singapore’s aspiration to be nimble, and showcases the simultaneous adoption of two contrasting implementation approaches – prescriptive and experimentalist – to guide AV adoption.”
This adaptive playbook, first tested in mobility, is now being applied to AI. Looking at both cases side by side shows how Singapore is developing a model for governing disruptive technologies that could be instructive for the rest of Asia.
From no-response to adaptive governance: The AV journey
In the early 2010s, “AVs were a distant mobility concept for most Singaporeans”: regulators watched developments overseas but took no concrete steps. That changed with the launch of the Centre of Excellence for Testing and Research of AVs (CETRAN), a regulatory sandbox that allowed companies to test AVs in real-world conditions under controlled parameters. This was accompanied by the Technical Reference 68 (TR68) standards, which set voluntary guidelines for AV testing and safety.
As described in a 2018 paper, this sandbox-and-standards approach allowed policymakers to gradually address five major risk domains: safety, privacy, cybersecurity, liability, and labour disruption. The result was a governance system that reassured the public while giving industry the confidence to invest.
Asst Prof Taeihagh and Dr Si Ying Tan observed:
“Singapore’s governance of AVs demonstrates that strong political will, coupled with high levels of policy capacity, can drive the rapid implementation of a disruptive technology, given the presence of public policies that foster policy pilots or trials, dynamic public–private partnerships, an open business environment that favours innovation, as well as inter-agency collaboration that implements deliberative and forward-looking policy decisions.”
By building regulation in stages and deploying a mix of policies oriented towards prevention, control, toleration, or adaptation, governments can both manage risks and keep pace with innovation.
Governing beyond cars: The autonomous ecosystem
The story doesn’t stop with vehicles.
In a 2023 paper, Asst Prof Taeihagh and co-author Asst Prof Devyani Pande argue that Singapore’s regulatory innovations have extended across the autonomous ecosystem, from drones to industrial, healthcare, and military robots.
A challenge common to all autonomous systems is that they are designed to learn and adapt their behaviour as they interact with humans and the world around them. This adds an element of unpredictability that challenges rule-makers.
Multi-stakeholder involvement is essential in setting provisional regulations, with government stewardship playing a central role in coordinating robust governance.
For example, in the case of personal care robots, the government created provisional standards that list foreseeable situations that could prove hazardous. These standards drew on input from, and apply to, those who design, manufacture, and operate the robots, and they acknowledge that circumstances will change and that the robots will be used in varied ways.
In other words, adaptive governance depends on collaboration between government, industry, and researchers. This coordination has been vital in tackling the operational, legal, economic, social, and ethical challenges posed by autonomous systems.
Data sharing: The hidden linchpin
One of the least visible but most important governance challenges is data sharing. Without access to large, diverse, and reliable datasets, AVs and other autonomous systems cannot function. Yet data sharing is fraught with obstacles.
A 2023 study by Dr Si Ying Tan and Asst Profs Taeihagh and Pande identified six barriers to data sharing: technical, motivational, economic, political, legal, and ethical.
To overcome them, Singapore has again relied on experimentation.
As the authors argue:
“Data sharing within regulatory sandboxes should be promoted… public–private collaborations can overcome motivational barriers, while ethical analysis is necessary for overcoming ethical barriers.”
Governance isn’t only about controlling technology—it’s also about enabling the flow of data in ways that are safe, fair, and trustworthy. This insight is directly relevant to AI, where questions of data protection and privacy-enhancing technologies are at the heart of global debates.
Trust and user acceptance: The human factor
Even the best-designed policies won’t succeed if the public doesn’t accept the technology.
In their 2024 study, Asst Profs Pande and Taeihagh tested which factors influence Singaporeans’ willingness to use autonomous systems. Performance expectancy, effort expectancy (ease of use), social influence, and trust in government all positively affected intention to adopt, while perceived risk was a major deterrent.
Their conclusion was clear:
“…in Singapore, we find that performance expectancy, effort expectancy, social influence, and trust in government to govern autonomous systems significantly and positively impact the behavioural intention to use autonomous systems.”
This finding has wide resonance. Whether it’s AVs or generative AI, building public trust is just as important as developing the underlying technology. Singapore’s adaptive governance model, by visibly testing, standardising, and communicating risks, is designed to earn this trust.
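For readers curious how such acceptance factors are typically analysed, the sketch below shows the general shape of this kind of study: survey-style scores for each construct are related statistically to behavioural intention. It is purely illustrative, with synthetic data and assumed weights; it does not reproduce the authors’ actual model, data, or results.

```python
# Purely illustrative sketch of relating acceptance factors to
# behavioural intention. Synthetic data and assumed weights only;
# this is NOT the authors' model, data, or results.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

factors = ["performance_expectancy", "effort_expectancy",
           "social_influence", "trust_in_government", "perceived_risk"]

# Hypothetical 1-7 Likert-style scores for each construct.
X = rng.integers(1, 8, size=(n, len(factors))).astype(float)

# Assumed signs for illustration: the first four positive, risk negative.
true_weights = np.array([0.4, 0.3, 0.2, 0.3, -0.5])
intention = X @ true_weights + rng.normal(0, 1, n)

# Fit a simple linear model and report the estimated effect of each factor.
model = LinearRegression().fit(X, intention)
for name, coef in zip(factors, model.coef_):
    print(f"{name:>22}: {coef:+.2f}")
```

Run on synthetic data like this, the estimated coefficients simply recover the assumed signs; in the real study, such estimates are what indicate which factors encourage or deter adoption.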
The next frontier: Governing AI
In July 2025, Singapore introduced a new set of tools for governing AI:
- The Global AI Assurance Sandbox, allowing firms to test AI systems against governance standards.
- A Privacy-Enhancing Technologies (PET) Adoption Guide, encouraging data use without compromising privacy (a brief illustrative sketch of one such technique follows this list).
- The Singapore Standard for Data Protection (SS 714), which elevates existing data certification into a full national standard.
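To make the PET idea concrete: one widely used privacy-enhancing technique is differential privacy, which adds calibrated noise so that aggregate statistics can be shared without exposing any individual record. The sketch below is a generic illustration in Python; it is not drawn from the PET Adoption Guide itself, and the dataset, query, and epsilon value are all hypothetical.

```python
# Minimal sketch of one privacy-enhancing technique: differentially
# private release of a count via the Laplace mechanism. Generic
# illustration only; not taken from Singapore's PET Adoption Guide.
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with Laplace noise calibrated to epsilon.

    A count query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many simulated trip records exceed 20 km?
trip_distances_km = np.random.default_rng(1).uniform(0, 50, size=1000)
print(dp_count(trip_distances_km, lambda d: d > 20, epsilon=0.5))
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee, which is exactly the kind of trade-off between data utility and individual privacy that PET frameworks ask organisations to manage.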
These initiatives echo the AV playbook: sandboxes, standards, and stakeholder collaboration. They also respond to the global urgency of governing generative AI, where public concerns about bias, misinformation, and accountability remain high.
In this way, Singapore is adapting its tried-and-tested model from mobility to AI, again seeking to balance innovation with risk.
The Singapore playbook
Looking across both case studies, a common governance pattern emerges:
- Start small: use sandboxes to test technologies in safe environments.
- Codify lessons: transform pilots into standards that industry can follow.
- Collaborate widely: involve government, industry, and academia in shaping rules.
- Build trust: recognise that public acceptance is as crucial as technical readiness.
- Adapt constantly: policies evolve as technologies and risks evolve.
Lessons for Asia
Singapore’s adaptive governance journey offers important lessons for Asia’s other fast-growing cities. Instead of waiting for perfect regulation, governments can experiment, iterate, and learn. Instead of focusing narrowly on risks, they can also build trust and legitimacy. And instead of seeing governance as a brake, they can treat it as an enabler of responsible innovation. New and disruptive technologies will continue to surface; the governments that succeed will be those prepared to adapt.