AI gets weird


From Nathan Bransford:

The world’s mind continues to be collectively blown by ChatGPT, which has kicked off an AI arms race at some of our biggest tech companies. By far the most astute article I’ve read about AI comes from Ted Chiang, who writes that ChatGPT is a blurry JPEG of the web, or essentially a degraded, compressed synthesis of what’s out there on the internet. Until it can do more to meaningfully understand and push things forward, its utility will be constrained. Chiang is skeptical of its usefulness for helping create original writing (I agree).

And yet. I know what ChatGPT is. I know it’s basically just a language prediction algorithm that can absorb more information than a human ever could and synthesize the internet back at us in a way that can feel vaguely human. I know it doesn’t really “want” anything or have feelings. And yet. I was still unprepared for Kevin Roose’s deeply weird chat with the Bing chatbot, which quickly went off the rails, first in extremely funny and subversive ways (the chatbot fantasizing about persuading nuclear power employees to hand over access codes) and then in deeply creepy ways (the chatbot declaring its love for Kevin and trying to convince him he isn’t in love with his wife). The whole thing is worth a gander. (Fixes are reportedly in the works, of course.)

. . . .

I’m not worried about AI becoming sentient and pulling a “Terminator” (correction: I have fewer than zero fears about this), but I’m much more concerned about what it could steer credulous humans to do. We already have an entire segment of the population brainwashed by propaganda and anti-vaccine hysteria, and we’re certainly not prepared for misinformation, and even plain advertising, becoming even more hyper-personalized than it already is. Already the underpaid contractors who enforce the guardrails are sounding the alarm.

Link to the rest at Nathan Bransford