Minds Without Brains?

This content has been archived. It may no longer be accurate or relevant.

From Commonweal:

In the view of many scientists, Artificial Intelligence (AI) isn’t living up to the hype of its proponents. We don’t yet have safe driverless cars—and we’re not likely to in the near future. Nor are robots about to take on all our domestic drudgery so that we can devote more time to leisure. On the brighter side, robots are also not about to take over the world and turn humans into slaves the way they do in the movies.

Nevertheless, there is real cause for concern about the impact AI is already having on us. As Gary Marcus and Ernest Davis write in their book, Rebooting AI: Building Artificial Intelligence We Can Trust, “the AI we have now simply can’t be trusted.” In their view, the more authority we prematurely turn over to current machine systems, the more worried we should be. “Some glitches are mild, like an Alexa that randomly giggles (or wakes you in the middle of the night, as happened to one of us), or an iPhone that autocorrects what was meant as ‘Happy Birthday, dear Theodore’ into ‘Happy Birthday, dead Theodore,’” they write. “But others—like algorithms that promote fake news or bias against job applicants—can be serious problems.”

Marcus and Davis cite a report by the AI Now Institute detailing AI problems in many different domains, including Medicaid-eligibility determination, jail-term sentencing, and teacher evaluations:

Flash crashes on Wall Street have caused temporary stock market drops, and there have been frightening privacy invasions (like the time an Alexa recorded a conversation and inadvertently sent it to a random person on the owner’s contact list); and multiple automobile crashes, some fatal. We wouldn’t be surprised to see a major AI-driven malfunction in an electrical grid. If this occurs in the heat of summer or the dead of winter, a large number of people could die.

The computer scientist Jaron Lanier has cited the darker aspects of AI as it has been exploited by social-media giants like Facebook and Google, where he used to work. In Lanier’s view, AI-driven social-media platforms promote factionalism and division among users, as starkly demonstrated in the 2016 and 2020 elections, when Russian hackers created fake social-media accounts to drive American voters toward Donald Trump. As Lanier writes in his book, Ten Arguments for Deleting Your Social Media Accounts Right Now, AI-driven social media are designed to commandeer the user’s attention and invade her privacy, to overwhelm her with content that has not been fact-checked or vetted. In fact, Lanier concludes, it is designed to “turn people into assholes.”

As Brooklyn Law School professor and Commonweal contributor Frank Pasquale points out in his book, The Black Box Society: The Secret Algorithms That Control Money and Information, the loss of individual privacy is also alarming. And while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods,” and gag rules, the lives of ordinary consumers are increasingly open books to them. “Everything we do online is recorded,” Pasquale writes:

The only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t itself the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

Meanwhile, as Lanier notes, these big tech companies are publicly committed to an extravagant AI “race” that they often prioritize above all else. Lanier thinks this race is insane. “We forget that AI is a story we computer scientists made up to help us get funding once upon a time, back when we depended on grants from government agencies. It was pragmatic theater. But now AI has become a fiction that has overtaken its authors.”

In Marcus and Davis’s view, the entire field needs to refocus its energy on making AI more responsive to common sense. And to do this will require a complete rethinking of how we program machines.

“The ability to conceive of one’s own intent and then use it as a piece of evidence in causal reasoning is a level of self-awareness (if not consciousness) that no machine I know of has achieved,” writes Judea Pearl, a leading AI proponent who has spent his entire career researching machine intelligence. “I would like to be able to lead a machine into temptation and have it say, ‘No.’” In Pearl’s view, current computers don’t really constitute artificial intelligence. They simply constitute the ground level of what can and likely will lead to true artificial intelligence. Having an app that makes your life much easier is not the same thing as having a conversation with a machine that can reason and respond to you like another human being.

In The Book of Why: The New Science of Cause and Effect, co-written with Dana Mackenzie, Pearl lays out the challenges that need to be met in order to produce machines that can think for themselves. Current AI systems can scan for regularities and patterns in swaths of data faster than any human. They can be taught to beat champion chess and Go players. According to an article in Science, there is now a computer that can even beat humans at multiplayer games of poker. But these are all narrowly defined tasks; they do not require what Pearl means by thinking for oneself. In his view, machines that use data have yet to learn how to “play” with it. To think for themselves, they would need to be able to determine how to make use of data to answer causal questions. Even more crucially, they would need to learn how to ask counterfactual questions about how the same data could be used differently. In short, they would have to learn to ask a question that comes naturally to every three-year-old child: “Why?”

“To me, a strong AI should be a machine that can reflect on its actions and learn from past mistakes,” Pearl writes. “It should be able to understand the statement ‘I should have acted differently,’ whether it is told as much by a human or arrives at that conclusion itself.” Pearl builds his approach around what he calls a three-level “Ladder of Causation,” at the pinnacle of which stand humans, the only species able to think in truly causal terms, to posit counterfactuals (“What would have happened if…?”).
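The distinction between Pearl’s three rungs—association, intervention, and counterfactuals—can be made concrete with a toy structural causal model. This is a hypothetical sketch for illustration only (the model, variable names, and numbers are invented, not taken from Pearl’s book):

```python
import random

# Toy structural causal model (invented for illustration):
#   U ~ Uniform(0, 1)           an unobserved background factor
#   T = 1 if U > 0.5 else 0     the observed "treatment" depends on U
#   Y = T + U                   the outcome depends on both T and U

def observe(u):
    t = 1 if u > 0.5 else 0
    return t, t + u

random.seed(0)

# Rung 1 (association): what does merely SEEING T = 1 tell us about Y?
samples = [observe(random.random()) for _ in range(10_000)]
seen_y = [y for t, y in samples if t == 1]
assoc = sum(seen_y) / len(seen_y)   # E[Y | T = 1], inflated because T = 1 implies U > 0.5

# Rung 2 (intervention): FORCE T = 1 regardless of U (Pearl's do-operator).
us = [random.random() for _ in range(10_000)]
do_y = [1 + u for u in us]          # Y under do(T := 1)
interv = sum(do_y) / len(do_y)      # E[Y | do(T = 1)] -- lower than assoc

# Rung 3 (counterfactual): suppose we observed U = 0.9, so T = 1 and Y = 1.9.
# What WOULD Y have been had T been 0, holding the same background U fixed?
u_observed = 0.9                    # abduction: recover the noise from the observation
y_factual = 1 + u_observed          # what actually happened (1.9)
y_counterfactual = 0 + u_observed   # action + prediction with U held fixed (0.9)

print(round(assoc, 2), round(interv, 2), y_factual, y_counterfactual)
```

The point of the sketch is that the three questions have three different answers: seeing T = 1 (association) suggests a higher Y than forcing T = 1 (intervention), because seeing is confounded by the background factor U; and the counterfactual answer requires holding the specific U of the observed case fixed, something no amount of pattern-matching over data alone can supply.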

But then a further question arises: Would such artificial intelligence be conscious the way we are? Or would it simply be a more advanced form of “smart” machine that exists purely to serve humans? There is reason for skepticism. As philosopher David Chalmers told Prashanth Ramakrishna in a New York Times interview in 2019, intelligence does not necessarily imply subjective consciousness:

Intelligence is a matter of the behavioral capacities of these systems: what they can do, what outputs they can produce given their inputs. When it comes to intelligence, the central question is, given some problems and goals, can you come up with the right means to your ends? If you can, that is the hallmark of intelligence. Consciousness is more a matter of subjective experience. You and I have intelligence, but we also have subjectivity; it feels like something on the inside when we have experiences. That subjectivity—consciousness—is what makes our lives meaningful. It’s also what gives us moral standing as human beings.

In Chalmers’s view, trying to prove that machines have achieved consciousness would not be easy. “Maybe an A.I. system that could describe its own conscious states to me, saying, ‘I’m feeling pain right now. I’m having this experience of hurt or happiness or sadness’ would count for more. Maybe what would count for the most is [its] feeling some puzzlement at its mental state: ‘I know objectively that I’m just a collection of silicon circuits, but from the inside I feel like so much more.’”

Link to the rest at Commonweal

5 thoughts on “Minds Without Brains?”

  1. Maybe some of these authors can tell us what intelligence and consciousness are before they tackle the artificial?

  2. A.I. is definitely over-hyped and while it might be artificial, it’s not intelligence or magic.

    If you really want to learn about it, go to books written for programmers, NOT the general public. For example, I’m currently reading Grokking Artificial Intelligence Algorithms, which is readable if you have basic math and logical skills:
    https://www.manning.com/books/grokking-artificial-intelligence-algorithms

    Similarly, if I wanted to learn about AI and trust, I’d look at something like Trust in Machine Learning by Kush Varshney (Manning MEAP book)

  3. If they ever develop a self driving car that can drive the streets of Santa Fe, then you will have triggered the Singularity, because people can barely drive the streets of Santa Fe.

    Each time that I drive across town to go to the Post Office I have to remind myself that the streets are filled with people trying to kill me.

    I wish there were a way to put cameras all around my car, so that I could film a trip across town. If I were to post that on YouTube, no one would believe the videos were real, because I can’t believe what is happening all around me.

    No AI can deal with this.

    20 Crazy Road Moments Caught On Camera | Bad Drivers 2020
    https://www.youtube.com/watch?v=eZyjROvFbdc

    at the pinnacle of which stand humans, the only species able to think in truly causal terms

    Watch this video, and tell me how many people could do as well as this crow.

    Causal understanding of water displacement by a crow
    https://www.youtube.com/watch?v=ZerUbHmuY04

    We will never be able to build true Artificial Intelligence (AI). We will be able to build Intelligent Assistants (IA), and that will be good enough.

      • The universe we see playing out in space and time may be just the surface level, where we float like little boats while leviathans stir in the deep.

        — George Musser

        Reality is not deterministic. The events going on around us are subject to liminal events that intrude upon the Real. Reality cannot be described by Rules or ones and zeros, thus AI will fail.

        When I talk about these “liminal events” with my sister, she will always ask, “What about all of the subliminal events.” I always respond, “Exactly.”

        Even though bizarre traffic events occur worldwide, as shown in the various YouTube videos, I have observed that Santa Fe is a unique focus. It is essentially the “center of creation,” as George Johnson mentions in his book.

        Whether or not one believes that northern New Mexico is the center of creation, it is easy to sympathize with the desire for a more orderly, circumscribed world.

        Download the sample for:

        Fire in the Mind: Science, Faith, and the Search for Order
        by George Johnson
        https://www.amazon.com/Fire-Mind-Science-Faith-Search-ebook/dp/B004478AOS/

Comments are closed.