Artificial Intelligence and the Future of the Human Race


From BookBrowse:

Science fiction tends to reflect deeper moral issues and fears confronting a society at the time it is written. Storytelling is a safe method to express anxieties about the state of the world. It allows authors and readers an opportunity to explore the murkiness of uncertainty in a non-threatening manner. Reading and discussing sci-fi is a more effective outlet than, say, randomly telling neighbors you are worried their Amazon Alexa might one day turn on them. Books like Day Zero are symptomatic of contemporary angst about artificial intelligence (AI).

Today, there is increasing concern about AI threatening the future of the human race. In his later years, Stephen Hawking became a vocal critic of AI, even as he used it himself. “The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in 2014.

Humans are limited by biological evolution, which is dreadfully slow. Soon our species won’t be able to compete with AI advances. Machine learning develops at near-exponential speed, Hawking and others have argued. It will eventually — perhaps sooner than we imagine — surpass us. It’s not just world-renowned astrophysicists who worry about AI; corporate leaders also recognize the potential dangers of this technology.

Elon Musk is an AI skeptic. The owner of Tesla, SpaceX and Starlink believes we are less than five years away from AI surpassing humans in certain cognitive functions. It has taken us millions of years to evolve to our current level of intelligence. He believes it will not be terribly long until the ratio of human intelligence to AI is similar to that of cats and humans today.

Musk was an early investor in one of the leading AI companies, DeepMind. He claims to have invested in this company not for profit, but to keep abreast of the latest technological developments. He’s also an active proponent of government oversight of AI. He doesn’t see it happening, though. Realizing corporate interests and techno-libertarians will likely oppose government intervention, Musk co-founded a non-profit, OpenAI. Its goal is to democratize AI research and facilitate scientific and government oversight of developments.

. . . .

Steve Wozniak, Elon Musk and dozens of other prominent corporate and scientific figures have signed an open letter on the website of the Future of Life Institute affirming their opposition to autonomous AI weapons development. Letters expressing aversion are great, but it’s probably too late. It is rare for any technological development to be slowed when it can be used for military purposes. A considerable amount of our modern technology, including the internet, GPS and nuclear power, is the result of military research.

News reports today still highlight the proliferation of nuclear arms and the development of hypersonic nuclear weapons by Russia. It sells — people know to be scared of nuclear weapons. It’s a sideshow, though. The main event in international geopolitics right now is really the AI race between the People’s Republic of China and the United States. China is now the world leader in AI development — leapfrogging past the US in total published research and AI patents.

Indubitably, the AI race is being used by both superpowers to develop weapons. The US Department of Defense has already developed an AI algorithm that beat a top-ranked US fighter pilot in dogfight simulations — all five times they faced each other. Should the US halt development of weaponized AI? What are the implications of stopping such research if other nations – or corporations – choose to pursue it?

What of the existential fear raised in novels such as Day Zero – i.e., will AI eventually displace humanity?

Link to the rest at BookBrowse

20 thoughts on “Artificial Intelligence and the Future of the Human Race”

  1. But just because you can doesn’t mean you should.

    Agree. But because we can, many will. So, it’s reasonable to expect it. I’d love to see the ethicists tell us it’s OK to abort babies, but not enhance them. There is a long list of ethical bumps behind us that have been flattened by the stampede.

    • Oh, humans are by and large stupid. Just look at the millions ruining their lives and reproductive legacy through weed and narcotics.

      https://pubmed.ncbi.nlm.nih.gov/2174024/

      Thing is, we can’t actually do it.
      Some *think* it is doable but they are wrong, wrong, wrong.
      Tweaking bacteria and plants is several orders of magnitude away from safely and accurately tweaking a human embryo.

      As the Chinese have already proven, there *will* be misguided attempts to create superbabies. Doesn’t mean it should be tolerated or encouraged. The tech isn’t up to it. And neither is the science. The only thing worse than failure would be (accidental) success. 😉

  2. Humans are limited by biological evolution, which is dreadfully slow.

    Not true. I don’t know where CRISPR and other forms of genetic engineering are in terms of “intelligence,” but “human enhancements” and “designer babies” are already a thing with more to come.

    • CRISPR can be a useful treatment for some genetic conditions…
      …for a while. Long term, though, it will be a dead end evolutionarily speaking. If they’re not careful, or are as reckless as the Chinese idiot who wanted to breed an HIV-proof baby, it can lead to another thalidomide mess.

      The path to ethical genetic treatments/enhancements lies down a different and safer (though by no means absolutely safe) route: synthetic organisms deployed as symbionts.

      Most genetic conditions are the result of non-production or excess production of proteins. We are at the cusp of being able to engineer artificial organisms that can make up for a given condition without altering the individual’s genome. So CRISPR will be redundant.

      In fact, the mRNA covid vaccines are a signpost to where the tech is headed: for now the vaccines “teach” the immune system how to fight the virus. A logical extension would be to create a treatment that replicates and fights it directly and remains ready to fight a reinfection. Effectively, artificial antibodies.

      https://en.m.wikipedia.org/wiki/Synthetic_antibody

      Synthetic biology is going to be a big part of human evolution moving forward, just as mechanics and electronics have been. Because that is how humans evolve: adapting the world to themselves. Survival of the fittest still applies, but the discriminators are no longer environmental forces but rather people’s behavior.
      And how well they protect their genomes. 🙂

      • No time for a fuller reply, but CRISPR-Cas9 is way more powerful than you describe and goes far beyond “genetic conditions.” Just a quick quote from the end of Isaacson’s new book (“The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race,” 2021):

        The supposed promise of CRISPR is that we may someday be able to pick which of these traits we want in our children and in all of our descendants. We could choose for them to be tall and muscular and blond and blue-eyed and not deaf and not—well, pick your preferences.

        • I’m aware of the “potential” behind that particular delusion, but CRISPR is nowhere near precise enough to do that and only that. Some future version might. But not the version that exists or will exist anytime soon.

          Never mind the ethics (who gets to choose?), just remember the genome is software, buggy to start with, and we are nowhere close to understanding everything the genome does.

          It is reckless and stupid to make imprecise changes to software you don’t understand so how much worse to meddle with the genome? Especially when there are safer routes to achieving similar or better outcomes? (Symbionts, artificial chromosomes, etc).

          I’m not citing thalidomide as scaremongering but as an example of the kinds of things unintended gene changes can produce. In the comics mutations give superpowers but in the real world mutations kill fetuses. The lucky ones.

          https://www.bing.com/images/search?q=thalidomide+babies&qpvt=thalidomide+babies&FORM=IGRE

          I’m as future focused as anybody but I also know how sloppy humans are and how badly they screw up when they think they know what they’re doing.

          The whole “designer baby” dream is just another form of eugenic delusion. Great for SF and video games. But real world? Not this century. Probably never.
          Smarter babies? When we don’t know where intelligence comes from? How it develops? We don’t understand autism, savant syndrome, psychopathy, sociopathy. When we don’t know where nature ends and nurture begins? And physical “superhumans”? When we don’t know what the price might be of the “enhancements”? Progeria? Brittle bone syndrome? A dozen rare genetic diseases.

          And what else do those genes do? Genes aren’t little black boxes doing one thing and nothing else. They do different things at different times and under different conditions. The whole genome is a delicate, dynamic cascade of actions that we’re only now beginning to understand through epigenetics. And then there’s the “junk DNA” that is anything but junk, and non-coding DNA that does code.

          Yes, this is a long discussion. But the end point doesn’t need much study today. What needs study is the genome. Lots and lots of study. We don’t know anywhere near enough to do anything to the genome.

          CRISPR and things like it belong in the laboratory. Engineering bacteria? Yes. And plants, carefully. Maybe mosquitoes, but that’s a stretch. There’s room in the field of genetics for a century of engineering research, but not in humans or primates or most of the more developed mammals.

          Bad things lie down that road.
          Hippocrates got it right millennia ago: Do. No. Harm.
          And in case of doubt do nothing.

  3. Maybe it’s a good idea to determine what intelligence is, and where it comes from, before saying it’s artificial?

  4. We’ve been this way before.
    “AI” isn’t AI.
    It’s just a label for a variety of new advanced software technologies, but there is no actual Intelligence at work. Neither needed nor desired. Except maybe for Luddites looking for dragons to slay.

    The most accurate term, coming out of the gaming world, would be VIRTUAL INTELLIGENCE, because under certain very limited conditions software can replace human judgment. (Which software has always been able to do, of course.)

    Nonetheless, faux-AI is a very powerful tool in expanding humanity’s problem-solving toolkit. The more basic forms of this class of tools are simply very powerful and fast databases, like IBM’s chess-playing machine, Deep Blue. They worked by brute force, much like any computer ever built. Sophisticated and powerful, but there was nothing of intelligence to them.

    The hype over faux-AI, however, is obscuring a newer form of software that relies on data sifting and pattern-matching. Something that should be called an inference engine and celebrated. It’s still not Intelligence, much less sentience, but it does something that once upon a time only humans could do. And that is to tease valid information out of data. Create useful algorithms, even on the fly, and write software. Software written by software. And this isn’t academic stuntwork but actual production tools being offered commercially right now by the likes of Microsoft and Amazon through their cloud-based services businesses.

    Typically, when you hear of newer “AI driven” software, what has really happened is that the production code relies on algorithms developed by “teaching” one of these software tools what kind of performance is desired (like teaching a manual laborer to sort nails in a barrel by size); a toy sketch of that step appears at the end of this comment. The results are often extremely effective. Dangerous, too, but in a different way than pundits expound. A matter for later, as humans work.

    So yes, there is hype and there will continue to be hype; it was born out of exaggeration and misunderstanding, nurtured by marketing, and evolved into a catchy and intentionally meaningless label that protects trade secrets. Like the myth of Coke’s secret formula or the special mix of herbs and spices.

    These new tools and the software they produce are truly game changers, just as manually coded software was, and before that industrial machinery, and before that manual tools going back to knocking flint or obsidian rocks together. Powerful, useful, and yes, dangerous if not used carefully. It is no accident that among the first uses of stone tools were weapons. All tools beget new weapons. Because that is what humans do.

    Bumpy ride ahead. All we can do is buckle down.
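
    A toy sketch, in plain Python, of the “teaching” step mentioned above. The nail-sorting scenario, the numbers, and the learn-a-threshold rule are illustrative assumptions, not any vendor’s actual product; the only point is that the sorting rule comes out of labeled examples rather than out of a programmer’s head.

        # Labeled examples: (nail length in mm, desired bin). This is the "teaching".
        examples = [(18, "short"), (22, "short"), (25, "short"),
                    (48, "long"), (52, "long"), (60, "long")]

        def learn_threshold(labeled):
            """Pick the cut-off between bins that best matches the labeled examples."""
            lengths = sorted(length for length, _ in labeled)
            # Candidate cut-offs: midpoints between adjacent example lengths.
            candidates = [(a + b) / 2 for a, b in zip(lengths, lengths[1:])]
            def mistakes(cut):
                return sum(("long" if length > cut else "short") != label
                           for length, label in labeled)
            return min(candidates, key=mistakes)

        threshold = learn_threshold(examples)    # a rule no human wrote by hand

        def sort_nail(mm):
            return "long" if mm > threshold else "short"

        print(threshold)                     # 36.5 -- derived from the examples alone
        print(sort_nail(30), sort_nail(70))  # short long

    Feed it different or noisier examples and the learned cut-off moves with them; that, at toy scale, is what the commercial “AI driven” tools automate.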

  5. Blah, blah AI this. Blah, blah AI that. It’s all blah, blah. The equivalent of water cooler gossip. I’ve seen a takedown of the AI versus human pilot exercise, and yes, it’s interesting, but no, an AI isn’t going to be Top Gun anytime soon.

    It’s all hype.

    Without a theory of what conscious self-awareness is, the chances of replicating a non-biological intelligence are about the same as an infinite number of monkeys typing the works of Shakespeare. Doable, but it’s a P versus NP problem.

    We’ll be waiting an awfully long time for us monkeys to randomly build generalized AI. (The arithmetic sketched below gives a sense of the scale.)
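
    To put a number on that analogy: even one short phrase typed at random takes an absurd number of attempts. A back-of-the-envelope sketch in Python; the phrase and the 27-key typewriter are assumptions for illustration.

        phrase = "to be or not to be"     # 18 characters
        keys = 27                         # 26 letters plus a space bar
        p = (1 / keys) ** len(phrase)     # chance one random attempt matches exactly
        print(p)                          # about 1.7e-26
        print(1 / p)                      # expected attempts: roughly 5.8e25

    And that is one line of one play, not a general intelligence.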

  6. Humans are limited by biological evolution, which is dreadfully slow. Soon our species won’t be able to compete with AI advances.

    Soon? Let’s try this. Get a calculator or computer, and have a race: you vs. the machine. What’s the cube root of 4,879,264? (The machine’s entry is sketched in code at the end of this thread.)

    Who won?

      • I won the fire race, and lost the math race. The machine was far better than I was at the math, and I kicked butt on the fire.

        Anyone can replicate my research at home, and report your findings here.
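
    For the record, a quick sketch of the machine’s side of that race, in Python. The number is the one from the challenge above; the only timing claim is that a modern machine does this in microseconds.

        n = 4_879_264
        root = n ** (1 / 3)        # floating-point cube root
        print(round(root, 2))      # roughly 169.61
        print(round(root) ** 3)    # 170**3 = 4,913,000 -- so 4,879,264 is not a perfect cube

    The machine wins the math race by a comfortable margin; the fire race, as noted above, stays with the humans.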

Comments are closed.