From The Economist:
Fears of artificial intelligence (ai) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new ai tools have emerged that threaten the survival of human civilisation from an unexpected direction. ai has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. ai has thereby hacked the operating system of our civilisation.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our dna. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.
Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.
What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about Chatgpt and other new ai tools, they are often drawn to examples like school children using ai to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of ai tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
In recent years the qAnon cult has coalesced around anonymous online messages, known as “q drops”. Followers collected, revered and interpreted these q drops as a sacred text. While to the best of our knowledge all previous q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually ai. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an ai bot, while the ai could hone its messages so precisely that it stands a good chance of influencing us.
Through its mastery of language, ai could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that ai has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the ai can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the ai chatbot Lamda, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the ai chatbot. If ai can influence people to risk their jobs for it, what else could it induce them to do?
In a political battle for minds and hearts, intimacy is the most efficient weapon, and ai has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of ai, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as ai fights ai in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?
Even without creating “fake intimacy”, the new ai tools would have an immense influence on our opinions and worldviews. People may come to use a single ai adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?
And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.
What will happen to the course of history when ai takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. ai is fundamentally different. ai can create completely new ideas, completely new culture.
At first, ai will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, ai culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.
Fear of ai has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.
In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.
In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.
Link to the rest at The Economist
The OP’s author obviously doesn’t know his literary history: Fears of human-created intelligences overwhelming civilization go back a lot farther than “the very beginning of the computer age.” A few obvious examples, in descending-recency order (and there are a lot more, it’s just that each of these is inarguably prior to the “computer age” and nonetheless reasonably well-known and available):
• Metropolis
• Rossum’s Universal Robots
• The Shape of Things to Come
• Rabbi Löw’s golem
• The mythology of the golem
What the OP must mean is that it didn’t “haunt humanity” until even the veriest dunce could form a coherent ChatGPT inquiry on the topic — and would.
History begins when they say it begins.
Only their views matter.
(And incidental machine misinformation is a threat to civilization, but intentional misinformation, a.k.a. lies, from their own side is fine and dandy.)
Meanwhile, while they hyperventilate and bloviate over chatbots, the real uses of the underlying tech are exploding all over. Because all they care about is their prerogative of proclaiming what is and isn’t “TRUTH”. Reminds me of the clueless antagonist in Harry Harrison’s DEATHWORLD 2: THE ETHICAL ENGINEER.
The entire series is great fun.
https://en.m.wikipedia.org/wiki/Deathworld
The Butlerian Jihad rises…
Here’s one area where LLM tech *will* explode:
https://www.msn.com/en-us/news/technology/chatgpt-spells-the-end-of-coding-as-we-know-it/ar-AA1amE0K?ocid=Peregrine&cvid=d688e269f3d04cc3b820deade50e68a6&ei=21
Short version: Microsoft ran a test to measure how *humans* using GPT fared vs humans without.
56% more productivity. The category: software development. Oops.
One particular category begging for that kind of boost is high-end game development, where a top game costs hundreds of millions over 5-10 years and generates billions…or fades over three months.
Microsoft has a long history of what Gates called “eating their dogfood”.
And if MS starts using GPT coding tools (Visual Studio is getting it imminently), their competitors will have to follow. (As Nadella said, it’s time to see if Google can dance.) That is a $650B industry in 2023.
Never mind the chatbots or fake news: that is where the action is.
The OP and its ilk judge everything based on what they do. But the masters of “AI” aren’t in it to control information but to make money. Oodles and oodles.
Controlling what is or isn’t “truth” only matters to the ones that want power, not the ones that have it.
It will be interesting to see what the billions of gamers have to say about the results. Three of them are my kids – they have standards and communities and strategies and friends…
It is a field where the new has a good chance, and the imitative succeeds only until its obvious flaws are exposed.
Funny. Almost as if games have a soul.
They sort of do.
Like book characters and good movies, the best games evoke emotion and tell interesting stories. They engage the player and take him into another world. It takes vision and good writing but also a lot of sweat and grunt work.
And the new tech can both reduce the grunt work (debugging, security, hardware optimization) but also make the game more immersive.
This came out this week:
https://www.msn.com/en-us/news/us/skyrim-mod-uses-ai-to-give-npcs-memories/ar-AA1atFEf
Skyrim is one of the most successful games ever, over ten years old and still selling like many new releases. Very immersive, with hundreds of game characters the player interacts with, all hand-coded by humans. But the interactions are fixed and limited. With “AI” the characters can be even more interactive and build up a history with the player.
The developer is owned by Microsoft and is working on a new game in the same (very successful) series. It is still years away. Plenty of time to add new tech. If not them, somebody else will. Because games with “living” characters will make all others in the category obsolete.
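The mechanism behind that kind of mod is simple in outline: keep a log of past exchanges and feed it back to the model on each turn. A minimal sketch, purely illustrative — the `NPC` class and the stubbed `generate_reply` are hypothetical stand-ins, not the mod’s actual code or API:

```python
# Hypothetical sketch of an NPC that "remembers" past player interactions.
# generate_reply() stands in for a real LLM call; here it just reports how
# much conversational history the character is carrying.

from collections import deque

class NPC:
    def __init__(self, name, memory_limit=50):
        self.name = name
        # Keep only the most recent lines so the model prompt stays bounded.
        self.memory = deque(maxlen=memory_limit)

    def talk(self, player_line):
        # In a real mod, the accumulated memory would be folded into the
        # prompt so the model can refer back to earlier exchanges.
        prompt = "\n".join(self.memory) + "\n" + player_line
        reply = self.generate_reply(prompt)
        self.memory.append(f"Player: {player_line}")
        self.memory.append(f"{self.name}: {reply}")
        return reply

    def generate_reply(self, prompt):
        # Stub for the model call (an assumption, not any real API).
        return f"({self.name} recalls {len(self.memory)} remembered lines)"

npc = NPC("Lydia")
npc.talk("Hello.")
print(npc.talk("Remember me?"))  # the second reply already carries history
```

The `deque(maxlen=...)` is the crude version of the real design problem: memory has to be capped or summarized somehow, or the prompt grows without bound.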
Big fun.
Big. Bucks.
It’ll be interesting to see what effect, if any, that has on tabletop RPGs, because one of the selling points of the genre is the relative unscriptedness of the interactions between your character and the world, something that isn’t really present in video games.
It might impact the rule book.
And, if applicable, class balancing.
But realistically? It’s software meant to impact software and other information management tools and systems. Lots of things will go on just as they ever did.
Forgetting that is what renders all the AI-phobia so ridiculous. There’s more to life, the universe, and everything than just words. 😉
The best game I ever played was in 1988. Harpoon. It was a naval strategy game where the only thing you saw was a series of instruments, while messages appeared along the bottom of the screen.
The task was to captain a WWII destroyer escorting three merchant ships up the coast of Norway. No triggers. No shooting. No flames. If a ship was sunk, it just stopped communicating. Helicopters had limited range, took time to refit, and took time to fuel. If you didn’t call the helicopter back in time, it simply stopped communicating. Missiles had ranges. Ships had max speeds. Radar had range. The ship had a finite supply of depth charges.
That kind of game would be great to play against an AI captaining the German U-Boats.
Have you checked any of the modern simulation games?
FLIGHT SIMULATOR, SURVIVING MARS, CITIES: SKYLINES, or (more complex) KERBAL SPACE PROGRAM.
There are also a lot of role-playing games, ranging from fantasies to space operas to ’20s gangsters.
You might also like AGE OF EMPIRES and CIVILIZATION.