Scientific Consensus: The development of full artificial intelligence


The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

Stephen Hawking told the BBC

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.

Elon Musk

I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.

James Barrat

Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.

Ginni Rometty

Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement – wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.

Eliezer Yudkowsky

Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.

Alan Kay

Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.

Ray Kurzweil

Forget artificial intelligence – in the brave new world of big data, it’s artificial idiocy we should be looking out for.

Tom Chatfield

4 thoughts on “Scientific Consensus: The development of full artificial intelligence”

  1. There is an element of hype at play here.
    There is also an element of self-serving angst-promotion to slow the deployment of GPT tech so their own lagging efforts can catch up. (Musk)
    There is also an element of outright looniness. (Kurzweil)

    But for the most part “AI ANGST”, to coin a term, is the product of the fallacy of over-extrapolation and forgetting about the nature of technology development.

    Tech doesn’t develop at a constant rate, nor does it improve eternally. Instead, tech develops (and is adopted) in a three-phase process best described by the sigmoid curve:

    https://en.m.wikipedia.org/wiki/Sigmoid_function

    Technology development starts slowly as it takes time for researchers to find a useful approach. Once it is identified and proves effective, it is rapidly refined and tested to identify its limits and address them. Eventually the tech matures and diminishing returns kick in; attempts to extend the tech become harder/more expensive than they’re worth. Often, an alternate approach emerges that is more useful and a new cycle takes over. This happens over and over.

    The same principle applies to the adoption of said technology: adoption starts slowly, usually because of cost; then a critical mass of usage is reached, and economies of scale lead to an explosion of usage and the mainstreaming of the product; finally the product saturates its natural market, growth slows, and adoption returns to a slow, stable rate. A plateau. Further cycles might come from new tech addressing the same market, expanding or replacing the original over time. The shape is the familiar S-curve, sketched below.
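
    To make that shape concrete, here is a minimal, purely illustrative sketch of the logistic curve behind the Wikipedia link above; the saturation level, growth rate, and midpoint used are arbitrary assumptions chosen only to show the slow-start / explosion / plateau pattern, not measurements of any real technology.

    ```python
    # Illustrative only: a logistic (sigmoid) adoption curve of the kind the
    # comment describes: slow start, rapid mainstreaming, then a plateau.
    # L (market saturation), k (growth rate) and t0 (midpoint) are made-up values.
    import math

    def adoption(t, L=100.0, k=1.2, t0=5.0):
        """Share of the natural market reached at time t, in percent."""
        return L / (1.0 + math.exp(-k * (t - t0)))

    for year in range(11):
        share = adoption(year)
        print(f"year {year:2d}: {share:5.1f}%  " + "#" * int(share // 5))
    ```

    Running it prints a crude text plot: adoption creeps along near zero, takes off around the midpoint, and flattens out near saturation. Real technologies differ in timescale and ceiling, but the three phases are the same.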

    The perfect example is consumer display technology, which has gone through three cycles in a hundred years or so: black-and-white CRTs, color CRTs, and flat-panel LCDs (with aborted approaches such as plasma and 3D flat panels, which failed against the economics of LCD). A fourth cycle may or may not emerge from OLED or micro LED, but the limits of both leave their fate TBD.

    The same can be expected of the Large Language Models that are inspiring the current wave of AI ANGST. They have been quietly developing for years, unnoticed by the mainstream because their baseline “training” databases were too limited for mainstream use. Once the model database became big enough (GPT-3, last year), the tech became just useful enough to spur adoption. Because of its reliance on cloud computing, it is being adopted explosively as it proves useful enough to employ… with care. As usage explodes, the limits of the tech start to be revealed. They will be identified and eliminated, worked around, or fenced off so the tech is not used beyond those limits.

    At that point either other approaches emerge or the sector stabilizes. The former is more likely.

    Nonetheless, the cause of the current handwringing is but one very small piece of the full “AI” space, which has been with us all along, since the days of Jacquard, Charles Babbage, and Ada Lovelace. Because, to be clear, all AI to date and into the near future is just a different technique of software development. Lots of different techniques, many of which have been quietly and peacefully in use for years and decades. Even in (gasp!) weapons. (Think heat-seeking or terrain-following missiles. What is that but “AI” without the buzzwords?)

    Runaway AI is still, and almost certainly will indefinitely remain, an issue for fiction and for ivory-tower academics divorced from the real world. All the handwringing is achieving, beyond scaring the masses for a while, is to ensure that any deployed “AI” comes with a kill switch. Which even HAL 9000 had, anyway.

    For now, it’s best to bring out the munchies and sit back to watch the clueless, hyperventilating pundits run in circles, trying to hold back the tide. The tech is here, it can be useful, it will be used. Deal with it calmly and rationally.

    Or not.
    Won’t make a difference which way anybody goes.
    It didn’t for looms, TVs, PCs, smartphones, or social media.
    It won’t for “AI”.

  2. I would counter that this is all hype to make money. Effectively, AI is a scheme like any other scheme to make money, e.g. NFTs, Bitcoin, etc. It’s being hyped to make it sexy.
