AMID ECONOMIC TURBULENCE, DON’T BET ON ARTIFICIAL INTELLIGENCE TO SAVE US
Technology and artificial intelligence, we’re told, are creating a New Economy, where algorithms and robots do all our work for us, increasing productivity like never before. Go by the evidence, though, and the reality looks far different.
For decades, U.S. productivity grew by about 3 percent a year. After 1970, it slowed to 1.5 percent a year, then 1 percent. Today, that figure stands at 0.5 percent, and is likely to slump further from the shock of the coronavirus pandemic and mass lockdowns. But it isn’t just the numbers. There’s a parallel between the evangelism around AI that we see now and a similar phenomenon we witnessed two decades ago.
The dot-com bubble was also fueled by wishful investors using novel metrics to justify ever-higher stock prices. Instead of something as old-fashioned as profits, investors counted a company’s sales, spending and website visitors. Companies responded creatively. Investors want more sales? I’ll sell something to your company and you sell it back to me. No profits for either of us, but higher sales for both of us. Investors want more spending? We’ll order another thousand Aeron chairs. Investors want more website visitors? We’ll give stuff to people who visit our website. No profits, but more traffic.
One measure of traffic was eyeballs, the number of people who visited a page; another was the number of people who stayed for at least three minutes. Even more fanciful was hits, the number of files requested when a webpage was downloaded from a server. Companies simply put dozens of images on a page, and each image loaded from the server counted as a hit.
Now we have the AI bubble, with plenty of hoopla about how computers are taking over the world. The coronavirus has only added to that rhetoric, and we’re seeing plenty of headlines along the lines of: “Five ways AI is helping fight the coronavirus.”
“AI” was the Association of National Advertisers’ Marketing Word of the Year in 2017. To cash in on the buzz, companies are slapping the AI label on mundane algorithms and advertising themselves as wizards in the field when they have barely begun to think about machine learning. Advertise first, build later.
And just like the meaningless metrics of dot-com commerce, we now have fanciful measures of the triumph of AI. In December, Stanford University released the 2019 edition of its AI Index — a 290-page document with dozens of tables and more than 100 charts — which “tracks, collates, distills and visualizes data relating to artificial intelligence.” When the AI Index was launched in 2017, a Stanford news story boasted that it “will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.”
Nope. GDP is a valuable measure of the amount of goods and services produced each quarter. Divided by hours worked, we have a useful measure of productivity. The S&P 500 is a valuable measure of zigs and zags in the market value of the 500 stocks in the index.
The AI Index, though, does not actually track the progress of the field, but rather reports trends related to it, from the growth in the volume of peer-reviewed AI papers — up by 300 percent between 1998 and 2018 — to increases in the number of conference attendees.
But the value of AI is not measured by these metrics any more than the value of the dot-com companies could be measured by eyeballs and hits. It would be more meaningful to assess the impact of AI on productivity in areas where there have been some successes, such as advertising, e-commerce and news. What are the challenges for AI in more complex areas such as accounting, legal, engineering and health care?
That would provide valuable insights for companies, AI startups, universities and policymakers — especially since the so-called success stories are actually shining examples of the limitations of AI at the moment. The Stanford report cites autonomous vehicles, where success has consistently lagged behind hype. Enabling vehicles to interpret and react to the innumerable objects that manned vehicles encounter on roads and highways and in parking lots, and in every type of weather, from glaring sun to falling snow, is far more complicated than identifying patterns in e-commerce or searching news stories. Autonomous vehicles are flawless in the laboratory, flawed on real highways.
It was the same with IBM Watson, once predicted to revolutionize health care but now a cautionary tale for those who gush about breakthrough technologies. Watson did great in the artificial world of Jeopardy!, but has overpromised and underdelivered in the real world of health care.
The speed with which firms adopted word processing, spreadsheet and presentation software in the late 1970s and early ’80s helped us foresee the adoption of enterprise software in subsequent years. In the same way, understanding the speed at which AI diffuses in retail, advertising and news will help us understand how soon it can really revolutionize accounting, legal and engineering applications and (eventually) autonomous vehicles and health care.
AI has so far feasted on low-hanging fruit, like search engines and board games. Now comes the hard part — distinguishing causal relationships from coincidences, making high-level decisions in the face of unfamiliar ambiguity and matching the wisdom and common sense that humans acquire by living in the real world. Until then, artificial intelligence, for all its potential, will have little measurable effect on the economy.