A lot has been written about the so-called Fourth Industrial Revolution and what it means to live in a world increasingly run on artificial intelligence (AI), one where the digital and the physical merge.
Most policymakers agree on the benefits, from fuelling economic growth to better health services and safer transportation, as well as on the challenges it poses, for example, in terms of employment, privacy and security.
But there is no broad agreement as yet on the need to regulate or enforce codes of conduct or ethics for AI. Many countries are inclined to adopt a wait-and-see approach. This would be a mistake.
Those in favour of regulation believe that we already have enough warnings from the dark side, such as rampant cybercrime or the recent massive privacy breaches. Those who are opposed or hesitant worry that the imposition of rules would hurt companies and entrepreneurs by stifling innovation or driving business to foreign competitors.
Regulation may stifle innovation, but it may also stimulate it. Consider that tough environmental regulation in the 1970s and 1980s not only helped clean up the mess left by the earlier industrial revolution, it also spurred the development of environmental technologies and renewable forms of energy.
DATA PROTECTION
The recent controversy over data breaches involving Facebook and Cambridge Analytica may have tipped the balance in favour of regulation, especially in personal data protection where the European Union has taken the world lead with its General Data Protection Regulation (GDPR), passed in 2016 and to come into effect on May 25.
GDPR bestows EU protection on European personal data wherever these are held, and specifies conditions for data transfers outside the EU. More and more countries around the world are adopting or have adopted new legislation for personal data (as of 2015, 109 countries had passed such legislation, with 35 more in the process of doing so).
The EU may have been prescient regarding personal data protection, but a fascinating debate has also started in other areas connected to the Fourth Industrial Revolution: ethics for intelligent systems and robots, data flows, cyber security, fake news and hate speech, taxation of Internet platform companies and competition policy.
Recent examples of these in action include Singapore's select committee on online falsehoods, the G-20 debate on taxation of international platforms, and the European Commission's plans to develop ethical guidelines for AI and its push to fight fake news and promote "algorithmic transparency" of online platforms. The 32nd Asean Summit has also agreed to better coordinate national cyber security policies.
BEYOND BORDERS
It is essential that the debate also takes place at a regional level because the challenges transcend borders: Uncoordinated policy responses may hamper the development of a digital single market, which is the objective of both Asean and the EU.
The EU has a lot of experience that can be shared, and Singapore is in a pivotal position as it will co-chair the Asean-EU dialogue from next July.
There are two more conditions for the debate to be productive. It should take place publicly, with the involvement of government, industry, academia and citizens, because so much is at stake for everyone. And it should not be construed as a debate on regulating the technologies of AI, since regulation should be technology-neutral. The debate should instead focus on the basic principles of public policy, grounded in the values and expectations of society.
So what are the big questions?
RULES FOR ROBOTS?
First, there are basic questions about the "machines" themselves: If they make autonomous decisions, should they be liable for the impact of these decisions beyond any liability of their user, owner or manufacturer? How would this liability be exercised? Should ethical guidelines or codes of conduct (like Asimov's Laws) be promoted or imposed on all design, programming and use of AI systems? What should these codes be?
These discussions need to take place in every country that does not want to be a passive rule-taker in AI.
Second, there are fundamental questions regarding data, since machines need data as our brains need our senses to grow and function. The principle for public data should be that they are open and reusable by those who paid for their collection, that is, the taxpayers, with properly justified exceptions. The EU passed legislation in this area in 2003. The principle also seems clear for corporate and other non-personal data whose collection has been privately financed: they belong to those who produced or collected them.
But are demands for data localisation legitimate or efficient? And what happens with personal data? Can someone "own" another person's data or have exclusive rights for their use even if the data subject agrees? If yes, how do we ensure fair competition and avoid data monopolies? If not, how do we organise access to the personal data of those people who have agreed to conditional or unconditional access?
Third, there are tantalising questions related to the socio-economic transformations brought about by the Fourth Industrial Revolution. The previous industrial revolutions created wealth, the middle class and urbanisation. What will be the impact of this one on jobs, incomes and societal cohesion?
Several studies indicate that about half the current jobs in the United States are at risk, especially white-collar ones (the blue-collar ones are in long-term decline), and the figure is similar in other developed economies. Previous industrial revolutions created more jobs than they destroyed.
Will it be the same this time? There is no law of nature dictating that it should be so.
FUTURE OF JOBS
Some economists, such as Erik Brynjolfsson and Andrew McAfee, point to the decoupling of employment from productivity growth in developed economies since the beginning of this century, an indication that we are not creating new jobs fast enough.
What then will be the future for job seekers? Is reskilling a realistic option that can cater for a large share of those at risk of losing their jobs? Are inequalities (stagnating or diminishing real incomes for most of the population) pulling the rug out from under the new markets for products and services this next revolution would create?
How should fiscal policies change in an economy where paid labour is not a significant factor of production? How do we ensure everyone benefits, rather than creating a digital underclass that no one wants to employ or insure?
These are fascinating questions that we cannot yet answer with full confidence. They call for interdisciplinary thinking and more policy research. But public policy development cannot wait until all the answers are in, nor can we afford large-scale damage before we are forced to act. Our future belongs to us because we collectively have the power to shape it. The time to act is now.
A version of this article appeared in the print edition of The Straits Times on May 23, 2018, with the headline 'Age of AI: Why it's urgent to regulate a revolution'.