Be Afraid, Be Very Afraid: Artificial Intelligence Is Here. It May Soon Be Conscious. And It May Want to Take Your Job (If It Hasn’t Already)

It requires no sleep, vacations, sick days or maternity leave. It knows nothing of leisure, and demands neither love nor comfort. It can learn, and it never forgets. It does not age, does not slow down; indeed, it only becomes exponentially faster. It is artificial, often digital, sometimes neural, sometimes quantum, but ever present, and here. If it hasn’t already, it may soon take your job. Welcome to the brave new world.

Artificial general intelligence (AGI) has lived in the fertile minds of science fiction writers and futurists since the advent of the computer. But as incredibly powerful as digital computers have become, they have, until now, fallen short when it comes to the most impressive characteristics of the human mind: simply being human (creating art and exhibiting empathy, for example), unstructured problem solving, deciding relevancy within a maze of undefined phenomena, and non-routine physical work.

But this is changing fast. Too fast for some. Edge issued a call for perspectives on thinking machines titled “2015: What do you think about machines that think?” Over 180 public intellectuals responded. The most salient response, in my view, came from Sam Harris, who said:

Imagine, for instance, that we build a computer that is no more intelligent than the average team of researchers at Stanford or MIT—but, because it functions on a digital timescale, it runs a million times faster than the minds that built it. Set it humming for a week, and it would perform 20,000 years of human-level intellectual work. What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present, and future than we do?
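Harris’s figure is easy to sanity-check. A minimal back-of-the-envelope sketch, purely illustrative, assuming a flat million-fold speedup and a week of continuous operation (the only two inputs his thought experiment supplies):

```python
# Back-of-the-envelope check of Harris's "20,000 years" figure.
# Assumptions (mine, for illustration): a flat 1,000,000x speedup
# over human researchers, running continuously for one week.
speedup = 1_000_000        # "a million times faster than the minds that built it"
wall_clock_weeks = 1       # "set it humming for a week"
work_weeks = speedup * wall_clock_weeks   # subjective weeks of work performed
work_years = work_weeks * 7 / 365.25      # convert weeks to years
print(round(work_years))   # roughly 19,165 subjective years
```

About 19,000 years of human-level intellectual work in a single week ─ the same order of magnitude as Harris’s round “20,000 years.”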

Interesting, and troubling, questions, to be sure. Computers are now even breaking the law, seemingly deaf to human moral, ethical, or legal compunctions. Swiss artists, for example, created a computer robot shopper, gave it a weekly Bitcoin budget of $100, and let it shop until it dropped, making random purchases as it pleased within the budget. In November of 2014 it purchased Diesel jeans, a pair of Nikes, and, somewhat embarrassingly, 10 ecstasy pills. Oops.

However, the practical implications of machines out-thinking us, and thus performing tasks once thought to be the exclusive province of Homo sapiens, are already upon us, having crept up so slowly that many don’t even realize it.
The philosophical and moral implications of humans creating self-aware machines that can out-think us, and that may put their own self-interest above their creators’, are troubling, but so far relegated to the minds of futurists, to be more seriously considered at a later date. Perhaps, though, in the not-too-distant future.

The outsourcing of manufacturing and many service jobs has essentially destroyed the middle class. Corporations moved manufacturing and call centers to cheaper locales. Domestic manufacturers increasingly use robots: they don’t tend to complain about low wages or unionize. But computers have not only replaced human laborers at repetitive tasks; they are gradually replacing intellectual talent as well. Jacob Silverman’s brilliant piece in The Baffler, “The Crowdsourcing Scam,” is destined to become a prophetic classic. He describes how once lucrative, prestigious full-time jobs in advertising, sales, marketing, and consumer reporting, among many other areas, have been eliminated, sometimes completely, with the help of computing power, sophisticated software, and the internet. Geographical location has been becoming less and less important for decades. Professional freelancers are becoming the norm: they’re cheap, and they work on a per-project basis. No payroll, no benefits. Contractors only.

And of course, it gets worse. Digital computing, neural networks, and other technologies such as quantum computing have become so efficient that they may even replace what we’ve always thought to be irreplaceable: intellectual human talent. Computers are no longer sophisticated calculators. They are thinking machines. Increasingly sophisticated, they may in fact be poised to think us goopy, stinky humans out of jobs, leaving only elite, rich, human super-supervisors in (at least some semblance of) control.

IBM’s Watson, which won the 2011 Jeopardy! competition, is now doing legal research with a program called Ross: “It’s able to do what would take hours to do in seconds,” says Andrew Arruda of Toronto’s Azevedo & Nelson. Another Watson prototype, called Watson Discovery Advisor, is advising doctors on more effective treatments by scanning the medical literature, performing in two seconds what would take a medical researcher two weeks to accomplish. Journalists may even be replaced by Quill, an “automated narrative generation platform.”


Of course there are some things that artificial intelligence may be forever incapable of. But what we once thought were impossible tasks for machines no longer are; the boundaries are being pushed back every day. What might be replaced next? Surely more jobs, and not just in the trades, but even in professions once performed exclusively by humans.

Which raises an even more disturbing question: if the trend continues (there is no serious argument that it will mysteriously and suddenly stop), what are we displaced workers to do with ourselves? The utopian view is that we will be released to pursue our leisure, our families, our passions, freed from work performed merely to earn an income, work in which we often have no real interest. But how might we survive, let alone thrive, without a salary? The dystopian view is that the majority of us, lacking a super-specialty or an irreplaceable talent, will simply be cast off to the new majority underclass while corporate profits skyrocket on the backs of machines. If history is any guide, and I were a betting man, I would have to place my chips on the latter proposition.

There is, of course, another option. Manitoba experimented with it in the 1970s, investing $17 million in the small community of Dauphin to give everyone a universal basic income (called “Mincome”). Everyone received a set amount for basic needs, tax free; if they chose to work, invest, or start businesses, any income thereby derived was taxed. The result? Life improved: fewer accidents, fewer hospital visits, and better mental health. People were free to pursue their interests and passions without the risk of losing everything. This was a very progressive idea at the time, and controversial ─ there was a certain smack of communism to it. Critics claimed that a minimum basic income, unrelated to need, would rob people of motivation. It didn’t. People formerly on public assistance transferred to Mincome, where there were no restrictions on how they could spend the money. They started businesses, enrolled in school, and entered job training programs. Not surprisingly, when the Conservatives took control of the government, the program was scuttled in 1979. Some 1,800 boxes of data were packed up and sent to storage; a final report was never released.

Of course the idea of a universal basic income has its critics, but one criticism cannot be that it is communism under a different name. There is no central planning, recipients are free to spend the money in the economy as they wish, and the free market remains in place. Needs-based social welfare spending, and the massively expensive bureaucracy it employs, could be eliminated, creating a smaller government and leaving people freer from government intrusion. More freedom, more individual autonomy.

And we’re not talking about some progressive, liberal, socialist social experiment here. No pseudo-hippies singing Imagine or All You Need Is Love, sitting in a tepee smoking a hookah. An Oxford study concluded that 47 percent of current U.S. occupations are likely to be replaced by thinking machines in less than 20 years. Bakers, construction workers, journalists, taxi and truck drivers, farmworkers, paralegals, pharmacy workers, medical workers, real estate agents, airport security and customs officers, airline pilots ─ all virtually gone from the human job market.

But where would the money come from to provide everyone with a minimum basic income? Well, there would be an enormous boost in corporate profits from the elimination of half of the salaried workforce, to be sure. And then there’s the $51 billion spent annually by the U.S. on the fatuous and unwinnable drug war (trillions annually worldwide). Of course, cutting back on the nearly $600 billion in annual military spending might not be such a bad idea either. The list is long. The money is there.

So it seems that there is a choice looming very near on the horizon of human civilization. It is a choice between unleashing The Hunger Games and unleashing human potential, in which all have the opportunity to lead dignified, fulfilling lives.

Too stark a contrast of choice? We’ll see.

 

© 2015 by Glen Olives Thompson.

olives.glen@gmail.com

Glen Olives Thompson is a Professor of North American Law at La Salle University in Chihuahua, Mexico. He is a graduate of Southwestern Law School in Los Angeles and California State University, Chico. He writes on a broad range of topics for newspapers and magazines as well as publishing academic research in journals within the areas of law and public policy.