
Yuval Harari Reveals the Future of Mankind

The world’s smartest futurist has the best blueprint yet. Can he really see what’s coming?

If the movies are to be believed, artificial intelligence spells doom for humanity in one of two ways: either super-intelligent machines will band together to eliminate the humans who created them (The Terminator, The Matrix), or a lone, beautiful female android will do the same (Blade Runner, Ex Machina).


We’ve been afraid of our artificial creations killing us since 1818, when Mary Shelley wrote of Victor Frankenstein sowing the seeds of his own destruction by sewing together his own sentient monster. Fear of killer robots stalked us on through the 20th century, from Fritz Lang’s 1927 film Metropolis – in which an alluring female automaton doesn’t kill us, but instead stirs the hearts and minds of a future mechanised society – right up to the 2016 HBO series Westworld, in which another group of beautiful robots starts to get a little too clever for humanity’s comfort.

Here, in the foothills of the 21st century, our fantasy horror of artificial intelligence has solidified into real-world worry. In October 2016, Professor Stephen Hawking warned that artificial intelligence is going to be “either the best, or the worst thing, ever to happen to humanity”. Last year, Hawking joined Tesla founder Elon Musk and Apple co-founder Steve Wozniak in signing an open letter that pleaded for artificial intelligence to be used for benevolent ends. Even inside a super-brain like Hawking’s, the fear that intelligent machines are going to take over the planet and destroy humans is still alive and well.

Yuval Harari’s latest books: Sapiens & Homo Deus

What if, however, artificial intelligence puts the human race at mortal risk not because machines develop a malevolent consciousness (at present, no one has generated anything like consciousness in a machine) but simply because artificial intelligence becomes so ordinary, so mundane, and so ubiquitous that humans only notice too late that they’ve allowed it to take over their lives?

The most important question in 21st-century economics may well be what to do with all the superfluous people

That is the scenario that is being suggested by the historian Yuval Noah Harari, who shot to fame with his 2014 bestseller Sapiens: A Brief History of Humankind, an extraordinary account of how humans went from a bunch of insignificant primates in Africa 70,000 years ago to the dominant species of the planet.

Harari’s portrait of how humanity always stumbles into its next civilisational step without foreseeing the trouble that might be lurking further down the road has earned him the ear of the world’s policymakers and leaders. His most ardent fans include President Obama, Bill Gates and Mark Zuckerberg.

In his latest book Homo Deus: A Brief History of Tomorrow, Harari offers his vision of how our future may be about to unfold, predictions that have seen him described as “the Seer of Silicon Valley”. He argues that we are already facing, with overwhelming probability, the end of humankind as we know it. The trigger for this cataclysm won’t be a race of beautiful androids or an army of machines bent on bumping off humans. It will be a simple creation that we have already incorporated into our everyday lives: the algorithm.

The algorithm is, Harari writes, “arguably the single most important concept in our world” today. An algorithm is simply a methodical set of steps for solving a problem or reaching a decision. Until very recently, the best source of that problem-solving power on Earth was a human brain. Algorithms now outpace us at an ever-growing list of cognitive tasks. They’re doing financial trading for us. The Hong Kong venture-capital firm Deep Knowledge Ventures even has an algorithm on its board.
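To make the idea concrete, here is a deliberately toy sketch, written for this piece rather than taken from Harari’s book, of an algorithm in exactly that sense: a short, fixed sequence of steps that turns a record of someone’s past choices into a recommendation.

```python
# A minimal, illustrative sketch of an "algorithm" in the everyday sense the
# article describes: a fixed, step-by-step procedure that turns data about
# past choices into a decision. The data and names here are invented.

def recommend_movie(past_ratings, candidates):
    """Pick the candidate whose genre the user has historically rated highest.

    past_ratings: dict mapping genre -> list of scores the user gave (0-10)
    candidates:   dict mapping movie title -> genre
    """
    # Step 1: summarise past behaviour as an average score per genre.
    genre_scores = {
        genre: sum(scores) / len(scores)
        for genre, scores in past_ratings.items() if scores
    }
    # Step 2: score each candidate movie by its genre's average.
    # Unknown genres default to 0, so the procedure always reaches a decision.
    scored = {
        title: genre_scores.get(genre, 0.0)
        for title, genre in candidates.items()
    }
    # Step 3: return the highest-scoring title -- the "advice".
    return max(scored, key=scored.get)


if __name__ == "__main__":
    history = {"sci-fi": [9, 8, 10], "romance": [4, 5]}
    options = {"Ex Machina": "sci-fi", "Notting Hill": "romance"}
    print(recommend_movie(history, options))  # -> "Ex Machina"
```

Real recommendation systems are vastly more sophisticated, but the principle is the same: given enough data about what you have chosen before, a mechanical procedure can make the next choice for you.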

Algorithms, not content with planning the best route home, are now reading the road and steering our cars. Earlier this year, Google DeepMind’s machine-learning algorithm AlphaGo beat one of the world’s best human players at the board game Go by devising strategies no human had ever thought of. Last month, an algorithm developed at University College London predicted the outcome of 584 cases heard before the European Court of Human Rights with 79 per cent accuracy.

The implications of the algorithm revolution, says Harari, are set to go much deeper than ending the careers of all lawyers, taxi drivers and financial traders. Algorithms are about to change what it means to work, and what it means to be human. The reason this will happen, Harari explains, is that you don’t need artificial intelligence to develop the entire range of human abilities in order for it to threaten humanity’s usefulness. You just need it to get reliably better at things that once upon a time only humans could do.

That moment, Harari argues, is already here. Much of what made us employable in the 20th century, says Harari, is already being obliterated by the power of the algorithm. “Humans have two basic types of abilities: physical abilities and cognitive abilities. As long as machines competed with us merely in physical abilities, you could always find cognitive tasks that humans could do better,” Harari explains in Homo Deus. “Yet what will happen once algorithms outperform us in remembering, analysing and recognising patterns?”

Algorithms, Harari suggests, will soon guide even our most personal and romantic decisions. In the next decades, algorithms will, he writes, begin to “advise us which movie to see, where to go on holiday, what to study in college, which job offer to accept, and even whom to date or marry.” In fact, says Harari, there’s no reason that algorithms won’t advise us how to vote, too. They’re likely to know our preferences and past choices far better than we know them ourselves.

“We may well see, in fact, a full reversal of the humanist revolution,” he suggests, “stripping humans of authority and putting nonhuman authorities in charge.”

Once an artificial-intelligence device has become indispensable, Harari points out, it’s no longer a gadget or gizmo. It’s the ruler. “Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents, and finally into sovereigns.”

Harari surmises that this is all likely to have a devastating effect on human jobs and democracy. For a sign of what’s on its way in the jobs market, he suggests, just take a look at the military: “hi-tech forces ‘manned’ by pilotless drones and cyber-worms are replacing the mass armies of the 20th century, and generals delegate more and more critical decisions to algorithms.”

Where the military leads, says Harari, other industries will follow. Humans won’t be as valuable or desirable as algorithm-run artificially intelligent machines. “It is,” he writes, “sobering to realise that at least for armies and corporations, intelligence is mandatory, but consciousness is optional.”

As this trend accelerates across other industries, Harari warns, countless numbers of people may lose their economic significance. Where the 20th century produced a vast global middle class, the 21st century is likely to produce a vast “useless class”. We may even, says Harari, find ourselves asking what humans are for. “The most important question in 21st century economics may well be what to do with all the superfluous people.”

And how long, asks Harari, will widespread democracy survive when individuals have lost their importance as units of the economy? “We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans,” he explains. “Can democracy, the free market and human rights survive this flood?”

Of course, none of this has fully come to pass yet, as Harari himself concedes. But distracting ourselves with science-fiction fantasies of artificial intelligence growing consciousness and taking over the planet, he says, only puts us at greater risk of paving the way to our own destruction, by blinding us to the real threat facing us.

“Precisely because we have some choice about the use of new technologies,” he concludes in Homo Deus, “we had better understand what is happening and make up our minds about it before it makes up our minds for us.” Our greatest threat won’t be nearly as obvious to spot as the super-intelligent machines of The Terminator and The Matrix, or the beautiful robots of Ex Machina and Westworld.

Our greatest threat, in Harari’s opinion, is already under our noses and near invisible: hidden away in the algorithms that choose our news feeds and increasingly advise us how to make our choices. As Harari warns: “The algorithms won’t revolt and enslave us. Rather, the algorithms will be so good in making decisions for us, that it would be madness not to follow their advice.”

 Robert Collins is the former deputy literary editor of The Sunday Times.