From the printing press to the electric lightbulb, the spinning jenny to the Internet itself, history is replete with technological breakthroughs that have had far-reaching consequences for society.
On each occasion, the sheer speed of progress has left the public, businesses and policymakers scrambling to keep pace, with the resulting vacuum often resembling a regulatory “wild west”.
Now, the world stands at another such juncture with artificial intelligence (AI). Recent developments in generative AI tools have delivered rapid progress, but have also raised concerns over the technology’s potential risks.
Said Dr Carol Soon, Principal Research Fellow at the Institute of Policy Studies, Lee Kuan Yew School of Public Policy: “AI is not a new innovation; research has been conducted on generative AI models since the 1960s. The developments in AI technology in the past few decades focused on machine learning and predictive AI.
“However, what makes generative AI technologies like ChatGPT, Bard, and Midjourney different is that they take what they learn from training data and prompts to create new content in the form of text, images and videos. More importantly, given their low to no cost, these technologies are now accessible to the mass population for trial and use in different contexts.”
In 2023 alone, OpenAI — the company behind ChatGPT — has seen significant personnel and boardroom upheaval, while the entertainment industry in the United States has been crippled by actor and writer strikes, with creatives fearful of the impact such AI tools will have on their livelihoods. Meanwhile, politicians have rushed to seek consensus over regulation, while tech experts have warned of dire — even existential — consequences of getting AI wrong.
Brave new world or dystopian vision?
As with all technological breakthroughs, there will be winners and losers with AI. The technology promises to revolutionise the productivity of entire industries, taking on back-office and administrative functions in the business and legal worlds, streamlining medical research through complex analyses and prototyping, as well as enabling smarter data-driven decisions across a whole range of sectors.
Many countries have been quick to latch on to these potential positive impacts.
In 2019, as part of its Smart Nation vision to better harness technology, Singapore unveiled its first National AI Strategy to increase the use of the technology to transform its economy. The move led to the establishment of 150 teams working on research and development, and 900 startups working in the field. In 2023, Singapore launched the Singapore National AI Strategy 2.0 (NAIS 2.0) to pursue further advances.
But many have raised concerns over AI, including the very experts who have had a hand in developing it. Geoffrey Hinton, a pioneer of deep learning who recently stepped down from Google, warned that society must confront the “existential dangers of artificial intelligence”.
Said Dr Soon: “While these technologies yield benefits like simplifying and saving time for myriad tasks, they also pose threats to individuals and societies like deception, privacy and data infringement, and large-scale disinformation.”
Such concerns include deepfakes, which Singapore Prime Minister Lee Hsien Loong has previously warned about. His fears proved well founded when, four years later, his image was used in fake advertisements to promote alleged crypto scams. Other issues raising concern around the world include fraud and electoral interference, along with the potential impact on demand for human labour and the knock-on effects on wages and the wider economy.
Global perspectives on AI regulation
Governments worldwide have been left trailing in the wake of AI’s rapid progress, but there is a clear indication that they are waking up to its potential impact.
The UK recently held a high-level summit at which British Prime Minister Rishi Sunak announced an agreement between Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the US and the UK to test the eight leading tech companies’ AI models before their release. Some critics have labelled the exercise more style than substance.
Singapore itself has developed a non-binding Model AI Governance Framework and AI Verify toolkit intended to provide guidance to tech companies in the nation state. On 16 January, the government announced that it is seeking international feedback for a new governance framework, the Model AI Governance Framework for Generative AI, which builds on the 2019 framework.
Separately, in December 2023, European Union politicians reached a provisional agreement on the Artificial Intelligence Act. The regulation aims to ensure that fundamental rights, democracy, the rule of law, and environmental sustainability are protected from what the act calls “high-risk AI”, while also boosting innovation.
There are different routes open to regulators, even as they try to keep pace with the rapid developments. Said Dr Soon: “Regulation can be hard or soft. Hard regulation comprises rules and laws, while soft regulation takes the form of certifications, industry codes, guidance standards and licensing requirements.
“Examples include Australia’s Artificial Intelligence Ethics Framework, China’s Ethical Norms for New Generation Artificial Intelligence and Japan’s Social Principles of Human-Centric AI.”
The state of regulation: A necessary ethical framework for AI
But the debate raging around regulation of AI is complex. Nations risk stifling innovation with an approach that is too heavy-handed; at the other end of the scale, any technology with such capacity to impact society could prove disastrous if rolled out with no controls.
Perhaps nowhere is this tightrope being walked more carefully than in Singapore, a state with a thriving start-up culture and which prides itself on supporting technological innovation.
Dr Soon said: “Singapore has a broad swathe of measures to govern the development and use of AI, from funding research programmes, the formation of an advisory council, the establishment of a national AI strategy to the creation of verification toolkits.
“The underlying theme of all these different initiatives is to nurture technological innovation while anticipating and minimising potential harms to individuals, businesses and society.”
But she noted that a critical consideration when developing regulation is striking a fine balance between under-regulation and over-regulation.
“Under-regulation results in negative consequences suffered by people, organisations and society. Over-regulation results in the stifling of innovation and an imbalanced playing field,” she said.
However, a piecemeal approach to regulation, with nations moving in different directions, risks an AI arms race. This raises the question of whether a global, rather than regional, approach to regulation is needed, much like nuclear weapons non-proliferation, to ensure that such a race does not emerge with competing blocs vying for supremacy.
While global consensus may not yet be possible, getting the major players together may be one important step, according to Dr Soon.
She said: “Despite the challenges and contestations that are taking place on the global stage, I think that it is possible to achieve consensus, not necessarily global, but one that is shared by a significant number of players.
“We are already seeing this in the adoption of AI governance principles developed by the OECD (Organisation for Economic Co-operation and Development). The OECD sets out AI principles like inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability. To date, more than 40 countries have agreed to adopt these principles. Alongside its continued efforts to encourage innovation and maximise the benefits of AI in different sectors, Singapore is also an active participant in global discussions on AI. For instance, it is a founding member of the Global Partnership on Artificial Intelligence and it is in discussions with other countries to align approaches in AI governance.”
Crafting an ethical and innovative AI landscape
Creating harmony between innovation and regulation will remain key to whether AI’s impact on society is broadly positive or negative.
In a worst-case scenario, left to its own devices, AI could run amok, disseminating false information to the point where true and fake news become almost impossible to tell apart, while putting whole swathes of the labour force on the junk pile.
But if utilised correctly with agreed, global regulations, AI could be a force for good — driving forward scientific and medical innovation, streamlining business practices, and ushering in a new dawn of creativity and productivity.