
“2001: A Space Odyssey.” “The Terminator.” “Portal.” Pop quiz: What do these things have in common?

Artificial intelligence sucks.

Well, it’s a little more complicated than that, and there’s plenty of nuance. But the general premise remains the same: Artificial intelligence is created by humans to help us perform tasks, test new theories and be our companions. But then the created turns on its creator. Our robot friends begin making decisions that override our own, imprison us and, in some instances, are happy to end us outright.

This nervousness is picking up again as we make strides in developing the intelligence of our computers. In 2011, an IBM supercomputer called Watson bested two reigning Jeopardy champions. “Bested” is an understatement, really; “crushed” is more accurate, since Watson won over $35,000, more than triple the haul of the next-closest contender. Three years later, its processing power has more than tripled and its size has shrunk a hundredfold. As for its job prospects? Watson is gainfully employed providing advice and information about lung cancer; reportedly, 90 percent of the nurses who use it in the field follow its guidance.

Meanwhile, in Japan, mathematics professor Noriko Arai is developing a supercomputer nicknamed Todai-kun that she hopes will ace the University of Tokyo’s notoriously punishing entrance examination, which covers math, physics and history and includes a written essay. Arai’s target is 2021, a mere seven years from now.

The advent of Todai-kun and Watson has once again raised the question of artificial intelligence’s role in our lives, and in today’s state of near-chronic economic malaise, some worry that a supercomputer that can do just about anything will render many jobs obsolete.

This story has repeated itself throughout history. In the mid-1800s, the United States’ industrial revolution was in full swing and trains and railroads began to stitch together the fabric of the country. Outraged canal workers said that the new railways were taking their jobs — and they were.

The key point is that such upheaval is entirely normal for an economy. In fact, it’s the calling card of every major technological change in history. When ships began to be made of metal instead of wood, the lumber industry was felled. In an ironic twist, the railway industry lost ground to trucking as the highway system was developed. Even now, as natural gas becomes an increasingly viable alternative to coal, coal workers are losing jobs in once-prosperous places.

As technology changes, the economic shifts it produces are inevitable. Once-mighty titans collapse, and new sprouts grow into towering redwoods. It’s the story of history, and trying to stop that change has the same effect as spitting into the wind: You get spit all over your face and look stupid while doing it.

Just as at other points in history, enhanced AI will siphon jobs away from some sectors. But it will also create new ones. People will be needed to maintain artificial intelligence and cultivate it for future development. Computers may become integrated into other occupations, like teaching, but there will always be a place for human teachers who can provide the social interaction and human role models that artificial intelligence can’t.

At the same time, it seems likely that overall economic growth will increase as a result of artificial intelligence development. Each new technological revolution in the past has increased the United States’ economic output, and work on artificial intelligence has already created jobs and boosted output in computer science. Then there are the auxiliary industries lifted by better AI, such as the health care industry Watson now works for.

In the end, the world as a whole does better as economic growth increases and more people benefit from rising earnings. But as society makes the leap to artificial intelligence, we must also provide a safety net for the people whose jobs and livelihoods are lost in the transition, instead of watching them drop into the abyss.

Unemployment insurance and jobless benefits are part of the solution, of course. But it’s also key to fund training programs that enable those left behind to pick themselves up and get back in the race. Instead of being down and out for years, workers who draw the short straw can retrain and find a new line of work in the changed economy. We shouldn’t leave anyone in our society behind, and we can use the gains of economic growth to assist those hit hardest by the transition.

As artificial intelligence develops, there’s no question that some jobs will fall by the wayside as computers become able to take on ever more difficult tasks. But at its core, humanity is adaptable. We’ve collectively survived two world wars and mass epidemics of influenza and plague, and we live almost everywhere on the globe. Surely we can adapt to new advances in artificial intelligence.

Some people will persist in depicting dystopian futures where artificial intelligence attempts to exterminate humans. Let’s ask HAL, Skynet and GLaDOS how well that worked out for them. Spoiler alert: not well.
