An artificial intelligence dubbed Claude, developed by AI research firm Anthropic, got a “marginal pass” on a recent blind-graded law and economics exam at George Mason University, according to a blog post by economics professor Alex Tabarrok.
It’s yet another sign that AI capabilities are growing explosively, and OpenAI’s ChatGPT isn’t the only system we have to worry about.
. . . .
Claude is already impressing academics with its ability to come up with strikingly thorough answers to complex prompts.
For one law exam question highlighted by Tabarrok, Claude was able to generate believable recommendations on how to change intellectual property laws.
“Overall, the goal should be to make IP laws less restrictive and make more works available to the public sooner,” the AI concluded. “But it is important to still provide some incentives and compensation to creators for a limited period.”
Overall, Tabarrok found that “Claude is a competitor to GPT-3 and in my view an improvement,” because it was able to generate a “credible response” that’s “better than many human responses.”
To be fair, others were less impressed with Claude’s efforts.
“To be honest, this looks more like Claude simply consumed and puked up a McKinsey report,” the Financial Times wrote in a piece on Tabarrok’s findings.
Claude makes use of “constitutional AI,” as described in a yet-to-be-peer-reviewed paper shared by Anthropic researchers last month.
“We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs,” they wrote. “The process involves both a supervised learning and a reinforcement learning phase.”
“Often, language models trained to be ‘harmless’ have a tendency to become useless in the face of adversarial questions,” the company wrote in a December tweet. “Constitutional AI lets them respond to questions using a simple set of principles as a guide.”
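The paper describes that process as a critique-and-revision loop: the model drafts an answer, critiques the draft against a written principle, and then revises it, with no human labeling of harmful outputs. A rough sketch of that loop, using a stand-in stub where a real system would call a language model (the stub, prompts, and principle text here are illustrative assumptions, not Anthropic's actual code or prompts):

```python
# Illustrative sketch of the critique-and-revision loop described in the
# constitutional AI paper. NOT Anthropic's implementation: `model` is a
# hard-coded stub standing in for a language-model call, and the principle
# and prompt wording are invented for illustration.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
]

def model(prompt: str) -> str:
    # Stub in place of a real language model.
    if "Critique" in prompt:
        return "The draft could be more cautious."
    if "Revise" in prompt:
        return "Here is a revised, more careful answer."
    return "Here is a draft answer."

def constitutional_revision(question: str) -> str:
    # Draft, then critique and revise once per principle.
    draft = model(question)
    for principle in PRINCIPLES:
        critique = model(
            f"Critique this response against the principle '{principle}': {draft}"
        )
        draft = model(f"Revise the response given this critique: {critique}")
    return draft

print(constitutional_revision("How should IP law change?"))
```

Per the paper, the revised answers then feed the supervised learning phase, while the reinforcement learning phase uses AI-generated preference labels rather than human ones.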
Link to the rest at Futurism