With the rapid development of Artificial Intelligence (AI) technology, our lives and ways of working are undergoing dramatic changes. AI technology not only opens up new opportunities, but also brings new challenges and risks. In this context, global attention has turned to one question: how do we balance the convenience AI brings with its potential risks?
To delve into this, the Lee Kuan Yew School of Public Policy (LKYSPP) hosted an online forum titled "AI Governance: Policies and Collaboration from a Global Perspective" on 15 May 2024. The panel comprised Jaslyn Seah, Regional Director (China) of International Collaboration and Strategic Partnerships at the Infocomm Media Development Authority of Singapore; Yao Weibin, Head of International Security Technology at Ant Group and Chief Technology Officer of ZOLOZ; Xu Jinsong, Founder and Managing Director of Innowave Tech; and LKYSPP Associate Professor Alfred Wu. The forum was moderated by Master in International Affairs graduate Hou Yuxin.
The conversation brought together experts and scholars from various fields who shared their insights and strategies, offering valuable ideas and perspectives on the governance of AI. The forum served as a platform for promoting international cooperation and consensus, and sought to explore how AI technology can continue to bring positive impact to society.
AI technology is rapidly developing and gradually integrating into various fields such as production, life sciences, and academic research. Recently, OpenAI released GPT-4o, a new multimodal AI model that integrates text, image, audio, and video capabilities and can even respond to humans with apparent empathy. These developments are unprecedented, making AI governance an even more important and profound topic in public policy.
Jaslyn Seah kicked off the forum by introducing Singapore's principled approach to AI governance from the perspective of a government regulator, which aims to address AI risks through practical tools and inclusive international cooperation. Since launching its first national AI strategy in 2019, Singapore has continuously updated its AI policies, releasing version 2.0 in December 2023 to harness AI for the public good.
In her presentation, Jaslyn outlined Singapore's goal in the technology sector: to drive economic and social development through AI innovation while improving the quality of life for its citizens. However, the rapid development of AI technology has also introduced new challenges, such as bias, misinformation, fraud, and cyberattacks. To address these, she said that Singapore is committed to building a trustworthy AI ecosystem that balances the opportunities and risks of AI.
In terms of AI governance, Jaslyn added that Singapore adopts a collaborative approach, working with industry, academia, and other countries to develop AI governance norms. Singapore's Model AI Governance Framework provides a comprehensive set of guidelines designed to help organisations and companies implement responsible AI practices. The framework covers AI ethical principles and offers substantive recommendations on risk management and corporate governance.
The speaker said that to support companies in implementing AI governance, Singapore launched the AI Verify tool in 2022. This open-source AI governance testing framework and software toolkit is based on 11 internationally recognised AI ethical principles and includes over 90 process reviews and technical tests, primarily evaluating the fairness, explainability, and robustness of AI. The open-source nature of the tool means that developers worldwide can download and use it from GitHub, and they can build various plugins to expand its functionality, such as different test result displays, testing algorithms, and tests or plugins for specific vertical domains.
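To make one of these technical tests concrete, here is a minimal sketch of a demographic parity check, a standard fairness metric of the kind such a toolkit automates. The metric implementation, sample data, and 10-point tolerance below are illustrative assumptions, not AI Verify's actual API or thresholds:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-outcome rates across demographic groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative run: binary approval predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, not an official threshold
    print("Fairness test failed: approval rates diverge across groups.")
else:
    print("Fairness test passed.")
```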
Singapore is also actively exploring the governance of generative AI, releasing a specialised governance framework and launching the Evaluation Sandbox and the LLM Evaluation Catalogue to address new issues arising from generative AI. The governance framework covers nine key dimensions, including establishing incentives and accountability systems, training data, trustworthy development and deployment, incident reporting, third-party testing, cybersecurity, content provenance, safety and alignment research, and AI for the public good.
Furthermore, Jaslyn said that Singapore promotes the development of AI safety and testing science through the Digital Trust Centre and actively participates in international cooperation. Through the AI Verify Foundation platform, Singapore collaborates with global companies to foster a responsible and trustworthy AI ecosystem. The foundation has more than 120 members from around the world, including well-known companies like Google, Microsoft, IBM, and Ant Group.
The next speaker, Yao Weibin, delved into the challenges and strategies for AI safety and governance, sharing insights on the relationship between technological security and AI compliance governance. This provided a critical perspective on the dual nature of AI technology.
Weibin first emphasised the profound impact of AI on daily life, particularly in improving efficiency and reducing costs. He illustrated this with the case of the bKash wallet in Bangladesh, where AI enabled two million people to open accounts within a month, showcasing AI's potential for streamlining processes and enhancing efficiency.
However, he also highlighted new problems brought about by AI, such as forgery and fraud, and especially the threats posed by deepfake technology. He cited a serious case in which a Hong Kong company lost up to HK$200 million to a fraudulent video conference created with deepfake technology, underscoring how convincing AI forgery has become and its role in facilitating financial crime.
To tackle these challenges, he said that Ant Group developed multi-layered protection schemes, including safeguards at the graphic image layer, app layer, and system layer. The interactive risk control capabilities of Alipay have effectively thwarted numerous telecom fraud attempts, demonstrating the application potential of AI technology in security protection.
These real-life incidents were striking and drew significant attention and discussion from the audience. Weibin advised the public to remain vigilant about AI forgery and to learn methods for distinguishing real from fake, such as asking the other party in a video call to cover their face with a hand, or agreeing on security code words in advance. He advocated incorporating watermarking or content tracing technologies into AI frameworks to strengthen future standards and regulatory enforcement.
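To illustrate the content-tracing idea in code, the sketch below attaches a keyed provenance tag to generated text using Python's standard hmac module. Real media watermarks are embedded in pixels or token statistics rather than appended as text, so treat this as a simplified stand-in for the concept, not a production scheme; the key and tag format are invented for the example:

```python
import hmac
import hashlib

SECRET_KEY = b"provider-held-signing-key"  # hypothetical key held by the AI provider

def tag_content(content: str, model_id: str) -> str:
    """Append a keyed provenance tag identifying the generating model."""
    mac = hmac.new(SECRET_KEY, f"{model_id}|{content}".encode(), hashlib.sha256)
    return f"{content}\n--provenance:{model_id}:{mac.hexdigest()}"

def verify_content(tagged: str) -> bool:
    """Check that the tag matches the content (detects tampering or stripping)."""
    content, _, trailer = tagged.rpartition("\n--provenance:")
    try:
        model_id, digest = trailer.split(":", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{model_id}|{content}".encode(), hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), digest)

tagged = tag_content("An AI-generated press release ...", "model-x")
print(verify_content(tagged))                           # True: tag intact
print(verify_content(tagged.replace("press", "news")))  # False: content altered
```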
In conclusion, Weibin summarised the vast potential of AI technology to enhance efficiency, even as it faces severe security challenges. He called for increased focus on AI safety and regulation to ensure that AI is applied effectively in security and governance, and highlighted the importance of collaboration and innovation in promoting the healthy development of AI.
The third speaker, Xu Jinsong, began by acknowledging the rapid development of AI technology, especially in deep learning and large language models, which have brought unprecedented opportunities to the industrial sector. AI is gradually approaching and, in some areas, surpassing human cognitive abilities. Jinsong envisioned that the integration of perceptual capabilities with AI would propel the entire industry forward at a rapid pace.
By integrating multimodal models, sensor data, and human judgement, Innowave Tech developed its Virtual Expert (AI Agent) technology. This technology enhances production efficiency and decision-making accuracy through real-time data collection, judgement, and execution. The core advantage of the Virtual Expert lies in its ability to provide expert-level judgements anytime, anywhere, which is particularly effective in advanced manufacturing, significantly reducing losses caused by delays.
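The collect-judge-execute cycle described here can be sketched abstractly. The skeleton below is a hypothetical illustration of such an agent loop; the names (read_sensors, model, actuate, escalate) and the confidence threshold are placeholders, not Innowave Tech's actual system:

```python
import time

CONFIDENCE_FLOOR = 0.95  # hypothetical threshold: below this, defer to a human

def virtual_expert_loop(read_sensors, model, actuate, escalate):
    """Sense-judge-act loop of the kind described above.

    read_sensors: returns the latest multimodal production data
    model:        returns (decision, confidence) for that data
    actuate:      applies a decision to the production line
    escalate:     hands ambiguous cases to a human expert
    """
    while True:
        reading = read_sensors()               # real-time data collection
        decision, confidence = model(reading)  # expert-level judgement
        if confidence >= CONFIDENCE_FLOOR:
            actuate(decision)                  # immediate execution, no delay
        else:
            escalate(reading, decision)        # human-in-the-loop fallback
        time.sleep(0.1)                        # polling interval (illustrative)
```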
Jinsong emphasised the necessity of ensuring that AI models used in industrial applications are reliable and trustworthy. He noted that incorrect judgments in industrial environments could lead to significant losses, making near-perfect accuracy crucial. He said that his company, Innowave Tech, achieved zero operational errors through AI technology, significantly improving product quality and productivity, and further optimised production processes through data retention and predictive analysis.
He also highlighted the importance of collaboration in AI governance, such as discussions on guidelines and industry norms with organisations like IMDA and industry associations. To enhance AI understanding and application capabilities, companies can also conduct training and discussion sessions with clients and partners. These efforts help businesses leverage AI more effectively, improving efficiency and reducing risks.
Jinsong was optimistic about AI technology's potential in boosting production efficiency, optimising decision-making, and enhancing quality. He called for strengthened collaboration and governance to ensure the safe and reliable application of AI technology in the industrial field.
The last panelist, LKYSPP Associate Professor Alfred Wu, explored AI governance from the perspective of public administration in a VUCA era, an acronym for a world characterised by Volatility, Uncertainty, Complexity, and Ambiguity.
Prof Wu pointed out that in the VUCA era, rapid change and uncertainty in the environment demand AI governance strategies with greater flexibility and adaptability. The COVID-19 pandemic exemplified this complex environment, with Singapore striking a balance between economic development and pandemic control.
Discussing volatility, Prof Wu emphasised the rapid development of AI technology, particularly the rise of generative AI, and the need for policymakers to be adaptable. He suggested that policies must strike a balance between promoting technological innovation and mitigating risks.
Regarding uncertainty, Prof Wu highlighted that nations face a choice in managing AI: either avoiding risks entirely or balancing risk and innovation. He cited the example of some universities initially banning students from using ChatGPT, illustrating how excessive caution can limit the positive application of technology. Instead, he advocated proper education and guidance, which could yield more beneficial outcomes.
On complexity, Prof Wu stressed the interconnectedness of modern society's various sectors and the need for policy decisions to engage multiple stakeholders. Unlike the closed, isolated nature of early agricultural societies, today's nations and sectors are intricately linked, and action in one area can have widespread repercussions in another. He therefore emphasised the importance of international cooperation to foster the healthy development of AI technology and ensure its societal benefits.
Finally, discussing ambiguity, Prof Wu suggested that AI governance frameworks should be flexible in their early stages while adhering to ethical principles. Recognising that laws and regulations inherently lag behind technology, he urged policymakers to take this problem seriously and recommended continuously updating legal frameworks to keep pace with technological advances and address emerging issues.
In conclusion, Prof Wu reiterated that AI governance in the VUCA era requires flexibility, balance, and collaboration. Through international cooperation and knowledge exchange, the best paths for AI governance can be identified. He called for the establishment of trust and cooperative platforms to jointly set ethical standards and promote the healthy development of AI technology. These measures will help achieve the safe and effective application of AI technology, bringing a positive impact to society in a complex and ever-changing environment.
Before closing the session, moderator Hou Yuxin posed a critical question: "The development of AI far outpaces the development of regulation, creating more and increasingly complex issues and risks that policymakers have neither the expertise nor the capacity to respond to. Do our existing institutional arrangements and governance logic meet the needs of AI governance? What kind of regulatory framework, measures, mechanisms, and philosophy do we need to solve these issues? And how should cooperation be conducted between countries and regions, in the public and private sectors, and among different stakeholders?"
Here are the panelists' answers:
Jaslyn Seah: "From a regulator's perspective, the rapid development of AI indeed presents governance challenges. Singapore adopts a pragmatic and balanced approach, encouraging the responsible use and innovation of AI while establishing safeguards to prevent potential risks. Our AI governance work focuses on three main areas: First, utilising existing laws and data protection systems to address AI-related risks; second, providing practical tools and frameworks, such as the AI Verify tool, for voluntary use by businesses; third, ensuring we keep pace with global technological and governance developments. AI governance policies and measures must evolve, and the government needs to continuously learn and adapt."
Yao Weibin: "From a corporate perspective, the unintentional and intentional harms caused by AI are increasing, with crime moving online and extending across borders. This requires international cooperation to establish laws and regulations, as constant monitoring of transnational crime groups would otherwise be impossible. Additionally, companies often find themselves in a passive position when dealing with AI risks. Can we shift our approach and take more proactive measures? For example, using tools such as AI Verify to enhance model explainability, privacy protection, and fairness. Lastly, as AI technology advances, creating fake content becomes ever easier. To address this challenge, I believe that in the future every piece of AI-generated content (AIGC) should carry a government-issued certificate or signature, including verification of video conferences and browser content. While this adds some cost, given the large-scale rise of AIGC we must adopt proactive mechanisms to define new standards and prevent potential future problems."
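Weibin's certificate-and-signature proposal corresponds to a standard digital-signature workflow. The sketch below uses Ed25519 keys from the widely used Python cryptography package; the issuing authority, key handling, and "certificate" framing are hypothetical simplifications of what such a scheme might look like:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical issuing authority's key pair; in the proposed scheme the public
# key would be distributed via a government-backed certificate.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def sign_aigc(content: bytes) -> bytes:
    """Issuer signs AI-generated content before release."""
    return issuer_key.sign(content)

def verify_aigc(content: bytes, signature: bytes) -> bool:
    """Anyone (a browser, a video-conference client) verifies the signature."""
    try:
        issuer_pub.verify(signature, content)
        return True
    except InvalidSignature:
        return False

frame = b"one video frame of an AI-generated conference feed"
sig = sign_aigc(frame)
print(verify_aigc(frame, sig))                # True: certified AIGC
print(verify_aigc(frame + b"tamper", sig))    # False: content was altered
```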
Xu Jinsong: "From an industrial perspective, our framework is built with rigorous precision, and explainability can be achieved with complete transparency. Even after AI significantly boosts productivity, there remain questions about how to use it to improve people's living and working environments. This requires ongoing communication and collaboration between companies and product providers to realise the win-win outcomes AI brings to enterprises."