How Anthropic found a trick to get AI to give you answers it’s not supposed to

From TechCrunch:

If you build it, people will try to break it. Sometimes even the people building the thing are the ones breaking it. Such is the case with Anthropic and its latest research, which demonstrates an interesting vulnerability in current LLM technology: more or less, if you keep at a question, you can break guardrails and wind up with large language models telling you things they are designed not to. Like how to build a bomb.

Of course, given progress in open-source AI technology, you can spin up your own LLM locally and ask it whatever you want, but for more consumer-grade products this is an issue worth pondering. What's fun about AI today is the quick pace at which it is advancing, and how well (or not) we're doing as a species at understanding what we're building.

If you’ll allow me the thought, I wonder if we’re going to see more questions and issues of the type Anthropic outlines as LLMs and other new AI model types get smarter and larger (which is perhaps repeating myself). But the closer we get to more generalized AI intelligence, the more it should resemble a thinking entity rather than a computer we can program, right? If so, we might have a harder time nailing down edge cases, perhaps to the point where that work becomes infeasible.

Link to the rest at TechCrunch

2 thoughts on “How Anthropic found a trick to get AI to give you answers it’s not supposed to”

  1. Want to have some fun?
    Consider that “free” AI tools use our queries as inputs to refine their datasets and models.
    The more people use them, the more they “learn” about us. Which is worth a lot to “them”, whoever they might be, because LLMs are, after all, the next evolution of big data.

    (Nice way for alien observers to gather data about the idiotsyncrasies of the money boys of Earth. 😉)

    • The real fun will be when LLMs start purposely, or inadvertently (or, worse yet, via intentional deception when an LLM “pretends” to be human), drawing upon LLM output for their own input. That will bring a new meaning to “GIGO” — not that I’ve ever seen this problem in [redacted], or in expert opinions offered in litigation, or…
