‘Homo Deus’ Author Yuval Noah Harari Says Authority Shifting from People to AI

Not exactly about the writing business, but certainly about a couple of interesting, albeit overpriced, books.

From The Wall Street Journal:

Yuval Noah Harari’s “Sapiens: A Brief History of Humankind” became a best-selling book after it was first published in 2011. He argued that people dominate life on Earth because they are the only animals that can cooperate in very large groups. Such mass cooperation only became possible, he says, with the emergence of myth, in which many people believe in the same thing, whether it is a religion, a nation, an economic system or a corporation.

His latest work, “Homo Deus: A Brief History of Tomorrow,” published in 2017, dwells on what he believes to be the next stage of human development. Having learned to manage famine and war, he says, people need a new challenge. He foresees an era in which authority shifts from humans and their myths to data and algorithms. In the foreseeable future, he argues, algorithms may well become so powerful that we will be able to program people just as we program computers, creating a superhuman species, “Homo Deus.” People might use this power in any number of ways, he argues. He says he wrote “Homo Deus” to spark a productive conversation about those choices.

. . . .

How much of the disorder that you see stems from technology?

For example, if millions of people, especially in developing countries, lose their low-skill jobs in areas like the textile industry because of the rise of new technologies, then we will see much more impact on political developments, and more and more influence on the way conflicts are actually managed.

. . . .

I imagine AI makes it easier for a smaller entity with few people to exert control and that masses of people become obsolete in a way? Does this prefigure where the rest of us are headed?

Yes. For all the talk of job loss and the impact of technology, one of the best places to look today is the military. It is a few steps ahead of the civilian economy. And what people are predicting for the civilian economy in 30 years is actually happening in the armed forces today. The armies rely on small numbers of highly professional super warriors and on sophisticated and autonomous technologies. I am not saying it will happen in exactly the same way in the civilian economy, but it is a good testing ground for what might happen in the civilian economy.

You think people and technology will merge in a way, creating literally or figuratively a new species, and that this new species, Homo Deus, is superhuman. We won’t all have superhuman powers, but some of us will, and the rest of us may become less and less relevant?

The basic insight is that nothing is deterministic. Technology is going to evolve. But the social and political outcomes are not deterministic. Just as in the 20th Century you could use electricity to build a communist dictatorship or a democracy, so in the 21st Century we have choices.

…Now one of the most important questions in the world is who owns the data of humankind. Maybe the most important asset in the 21st Century is not land, and it’s not money; it’s really data. This is the basis for everything. And we are now accumulating the data to decipher humanity, and to change humanity: data about human behavior and, even more importantly, the human body.

AI will outperform humans in more and more tasks. This I think is almost a certainty. And it will not take a long time. When it comes to questions of mind, there we are far less certain where we are heading because our understanding of mind is very limited and very poor. One school of thought says that essentially minds work on the basis of electrochemical reactions in the brain, and that if we accumulate enough data on the brain, and enough computing power, we can hack humans in the same way as we hack computers. And once this happens you can start creating direct brain-computer interfaces and once you do that, you can connect several brains together into an inter-brain net, so I can access your memories … Now, personally I am skeptical about this particular idea because I think we are far from understanding the mind. But I know there are a lot of very serious people in places like Silicon Valley that think this can happen in 20, 40, 60 years. They even talk about uploading human minds into computers and so forth. As a historian, I say okay. I am just reporting that there are people who think this. But they are very serious people and they have billions of dollars invested in this.

Link to the rest at The Wall Street Journal (Link may expire)


16 thoughts on “‘Homo Deus’ Author Yuval Noah Harari Says Authority Shifting from People to AI”

  1. Another reason I want to move to a self-contained shack in the woods before the ‘AI’ craze (i.e. putting neural networks in charge of everything) leads to a complete technological collapse.

  2. There are no AIs in charge of anything because ‘AI’ doesn’t exist. Barring some kind of massive breakthrough, it is likely decades away. Anything else is just a complicated program.

  3. “Having learned to manage famine and war…”

    This comment strikes me as a tad optimistic, in a hugely over-populated world subject to climate change and mass migration. I don’t want to be depressing, but I’d say the Four Horsemen are probably lurking just around the next bend.

  4. Lurking?
    They’re among us right now.
    Have been for decades, if not centuries.
    Famine? Look to South Sudan. Venezuela.
    War? The Middle East is hottest but not the only war; Africa has several. Philippines. Indonesia. Mexico.
    Pestilence? Tuberculosis on the rise. Zika. Ebola. Superbugs. Just because the media would rather cover politics doesn’t mean the plagues have gone away.
    Death? Always with us. Russia is far from the only country with declining life expectancies.

    And there’s still an asteroid out there with our name on it.

    • I’m hoping for a catastrophic solar flare, because that sounds interesting, but most likely it’ll be a confluence of mass migrations, famine, severe weather, droughts, wars, epidemics and all the boring stuff we can easily foresee. Sigh.

      • A Carrington event would be bad enough. End of civilization, possibly forever.
        An actual flare would be worse; it could sterilize the planet.

        With the asteroid we’d at least have a chance to survive and recover.

        • I think an asteroid impact would upset my daily schedule right now. Check back with me next week, and I’ll see when I can pencil it in.

  5. “The armies rely on small numbers of highly professional super warriors and on sophisticated and autonomous technologies.”

    Those small numbers are the guys directing the sophisticated technologies. And backing them up are large armies, artillery, tanks, planes, etc. If the military is a few steps ahead of the rest of us, then we don’t have much to worry about.

  6. I think you are all wrong: the OP, the commenters whom I have read so far, all wrong. We’ve had self-learning systems for at least twenty years, although until relatively recently the examples were trivial. These systems can learn human-like judgement that is limited by the experience of the system rather than by the intelligence of the original programmer. These systems require tremendous resources: enormous ultra-speed random access memory, compute cycles that old guys like me can hardly believe, and boundless data storage. Resources on this scale are becoming available. It is amazing that the equivalent is packed into a human cranium; however, human craniums have not gotten larger in eons, while computer resources have expanded exponentially and show no sign of ceasing to expand, even though we’ve heard predictions of the end of expansion from the beginning of computing.

    At the same time, robotic devices of great precision, dexterity, and speed have also come on line.

    What does this mean? I suspect predictions of the demise of manufacturing jobs are correct. Any job that produces many copies of, or variations on, the same object is likely to disappear. By objects, I don’t limit myself to physical objects. Some tax returns, project plans, lawsuits, computer programs, and many other intellectual constructs can be executed more quickly, more cheaply, and better by a well-trained machine. If you can learn to repeat a task without being fully engaged, a machine can do the task faster and more accurately. So far I fall pretty much in line with the OP.

    Now I begin to differ. Life has an uncertainty principle that is as sure as Heisenberg’s. In software development, I often observed that all software solutions modify the problem and are, therefore, inherently inadequate. Improve the performance of a system and the users soon demand better performance. Manufacture cars cheaply and efficiently and drivers demand better and cheaper cars. Replacing jobs with computers will not stop ever-increasing demand. Give me a self-driving car and I want a self-driving car that makes my commute faster, is more fun, makes better coffee, and costs much less because I’m now working part time. And I want it to reflect MY taste, not some goofus machine’s interpretation of Condé Nast taste. I’m your customer, Mr. Manufacturer, deal with me or go broke.

    It is a given that the future will be different from the past, and I expect it to be better.

    • “These systems can learn human-like judgement that is limited by the experience of the system rather than the intelligence of the original programmer.”

      And that’s exactly the problem. Human-like judgement is… not very good. Which is why we’ve spent years engineering it out of systems, and replacing it with rule-based electronics. We know how those systems work, and can predict what they’ll do in different situations. We can’t predict what a neural network will do.

      The last thing I want is to be on an airliner with an autopilot that flies like a human rather than a machine.

    • Aside from the fact that learning algorithms are not actually AI, the fact remains that the ones we have are not particularly good, because none come with built-in mores or ethical bounds.

      http://www.nbcnews.com/mach/technology/ai-learns-us-we-re-becoming-better-teachers-n731861

      They are simply better and more flexible automatons without any real problem solving ability. We’re still generations away from anything deserving of the AI name. By then we’ll have to coin a new term because AI will be too devalued.

  7. In one sense, we are ALL self-learning machines. From the first stirrings of consciousness, we pool available data to make decisions.

    We have extended that data pool in many ways:
    – initially, through story-telling in families and tribes
    – later, through schools, which formalized that story-telling process and made it available to more people
    – with print – that portable form of data collection that sparked modern civilization
    – with video/Internet/other means – all just an extension of the original data pool

    And yet, there is little evidence that aggregating data is sufficient. The more important task is identifying patterns, selecting data from different threads, and synthesizing new ideas.

    None of that would be improved by linking minds with other people – that creative use of data is highly individualistic. Hive-minds are stultified and prone to running along familiar tracks.

    The one thing that does seem to spark creative thought is isolation from others and time to think. Both are in short supply in the modern era.

Comments are closed.