In April 2019, IBM made news when its CEO announced that artificial intelligence can predict when employees are about to quit their jobs. IBM Watson, a question-answering computer system built using artificial intelligence, can reportedly predict with 95 percent accuracy which workers are about to leave the company. However, as other news about Google DeepMind’s AI nearly flunking a high school math test shows, much of the discourse around artificial intelligence is full of hype and hyperbole.
In 2018, venture capital firms invested nearly US$10 billion in AI startups, and governments around the world have started adopting and adapting to AI. For all the buzz surrounding the field, many people talking about AI have no clear idea of what the current technology can and cannot do. It is therefore important that we cut through the hype, acknowledge the limitations of AI in the short run and look at its long-term implications for improved policy making.
There are two main causes of this widespread hype about a technology that is touted to change the world and the nature of work as we know it today.
First, the conversation around AI is full of marketing hyperbole and exaggerated claims. These make for great sales pitches and eye-catching news headlines, but they misinform at best and mislead at worst. Financial investment in technology companies and startups is linked to the level of hype: the bigger the claim, the higher the chance of receiving money. This gives consultants and marketers a perverse incentive to peddle more of the hype.
Second, most people do not understand the limitations of the technology. Industry, academia and governments increasingly believe that AI can solve everything from poverty to cancer. In many cases, this is because of the hype and money factor discussed above, with researchers exaggerating claims in order to secure funding for their projects. At the same time, reporters with limited technical knowledge help spread this misinformation.
What most people don’t realize is that most of the AI systems in use today are either general-purpose tools or systems built for a very specific task. An example of a general-purpose tool is a data processing system, like a superfast spreadsheet. However, most of the cases we read about in the news are examples of narrow AI. These systems are built for a specific purpose and are very accurate at the jobs they are trained to do, but that accuracy comes with a trade-off in flexibility. An AI system trained to learn from tomography scans and identify the likelihood of cancer is good only for that. It would take an equal amount of effort, if not more, to build another system that looks at brain scans and identifies the chances of a patient having Alzheimer’s. A system that can do both would be an entirely different invention.
One of the biggest problems this kind of hype causes is overestimating the short-term effect and underestimating the long-term impact of the technology. Rodney Brooks, the famous roboticist and former director of the MIT Computer Science and Artificial Intelligence Laboratory, wrote in a 2017 article that just as AI’s near-term prospects have been overestimated, “its prospects for the long term are also probably being underestimated”. He describes the seven deadly sins of AI predictions, including failing to understand the difference between the performance and the competence of AI systems, and the growing use of “suitcase words”, words that “mislead people about how well machines are doing at tasks that people can do”.
None of this is to say that artificial intelligence cannot do amazing things. One of the defining moments for AI came in 2017, when DeepMind’s AlphaGo defeated Ke Jie, then the world’s number one Go player, in a three-game series. Since then, AI has enabled software to write code, powered mobile phone assistants that converse in eerily human ways, and helped self-driving cars make their way through mixed traffic conditions. At the same time, these advances don’t mean that robots are ready to take all our jobs or that Terminator-like scenarios lie in the near future.
So, what can we do to stop this hype?
AI researchers, and more importantly their PR offices, have to stop generating hype-filled, over-generalized headlines. This is easier said than done. However, just as researchers make sure to add caveats and limitations to their published studies, they should do the same for public announcements. They need only consider the disappointment that follows when overhyped AI systems underdeliver.
A more effective but long-term solution is to teach AI to everyone. Not everyone needs to learn how to write the code of an artificially intelligent system (although a case could be made for that too), but everyone should learn the basics of what AI is, how it is defined and what it can do. Just as we started teaching all schoolchildren how to use a computer in the 90s, this is the need of the 21st century.
It is equally important for policy makers at all levels, from national to municipal governments, to take a long-term, holistic view of what it means for people, communities and societies to adopt AI systems. Many national governments have started taking stock of this: they have not only come up with guidelines informing the development of AI but are also developing national strategies and governance frameworks.
It is more important than ever to cut through the hype surrounding artificial intelligence, carefully consider the long-term impact on human life and talk about the challenges it poses in the future. That future might well be very different from what popular media portrays.