
Anthropic announces the second major release of its Claude large language model

The Google-backed company says Claude 2 is smarter, safer, and better at taking direction.


Anthropic has announced the second major iteration of its Claude large language model.

The aptly named Claude 2 is significantly better at coding, math, and reasoning. Anthropic says the new model scored 76.5% on the multiple-choice section of the bar exam, compared with the 73% scored by the latest version of Claude 1. It scored above the 90th percentile on the GRE reading and writing exams, and fared about as well as the median applicant on quantitative reasoning.

“We’re putting the next generation out there,” says Anthropic cofounder and president Daniela Amodei. “[It] feels like a more robust, powerful model, not a step function increase.”

Claude 2 also lets users input far more data in their prompts, with a context window of 100,000 tokens, and it can output longer answers.

In computer coding, Claude 2 scored 71.2% on the Codex HumanEval, a Python coding test, compared to the 56% scored by Claude 1.

Businesses can access Claude 2 directly via an API (for the same price as Claude 1), or as a service on Amazon Web Services (AWS) or Google Cloud. Individual users in the U.S. and the U.K. can access it via a new public-facing website.
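For developers, that API access looks much like any other hosted-model service. The sketch below is an illustration rather than anything drawn from the article: it assumes Anthropic's Python SDK and the text-completions endpoint Claude 2 shipped with, and the model name, token limit, and example question are stand-ins chosen for the demo.

    import anthropic

    # The client reads the API key from the ANTHROPIC_API_KEY environment variable by default.
    client = anthropic.Anthropic()

    # Claude 2 was exposed through a text-completions endpoint that expects the
    # conversation to be wrapped in the SDK's HUMAN_PROMPT / AI_PROMPT markers.
    completion = client.completions.create(
        model="claude-2",             # assumed model identifier for this sketch
        max_tokens_to_sample=300,     # cap on the length of the generated answer
        prompt=f"{anthropic.HUMAN_PROMPT} In one sentence, what is a large language model?{anthropic.AI_PROMPT}",
    )

    print(completion.completion)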

Anthropic was founded by a group of OpenAI alums, including siblings Daniela and Dario Amodei, who wanted to build safer and more controllable large language models. Daniela Amodei says Claude 2 is harder than earlier models to coax into producing offensive or dangerous output.

“No language model in the world is perfectly safe,” Daniela Amodei says. “If you’re just interacting with Claude normally, it really shouldn’t veer off and say something unexpected or offensive, anything like that. Of course, if you attack it all day, you can probably jailbreak it.”

One of Anthropic’s goals from the beginning was to make language models that are more “steerable.”

“A really good concrete example of this is that Claude will take direction pretty well on things like tone or content or focus area,” Amodei says. “So, if you say ‘Hey Claude, you’re getting a little too familiar with me, I’d like for you to be more clinical in your answers to me,’ Claude will adjust its tone.”
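In API terms, that kind of steering is simply a plain-language instruction included in the prompt. The sketch below is an illustration, not Anthropic's documented recipe: the steering sentence and question are invented for the example, and it reuses the assumed SDK setup from the earlier snippet.

    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

    # The steering instruction is ordinary text placed ahead of the actual question.
    steering = "Please keep your answers clinical and avoid an overly familiar tone."
    question = "How should I prepare for the GRE quantitative reasoning section?"

    completion = client.completions.create(
        model="claude-2",            # assumed model identifier for this sketch
        max_tokens_to_sample=250,
        prompt=f"{anthropic.HUMAN_PROMPT} {steering} {question}{anthropic.AI_PROMPT}",
    )

    print(completion.completion)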

Anthropic, a “public benefit corporation,” has raised $1.5 billion in venture funding as of July 2023. Major shareholders are Alameda Research, Spark Capital, and Google, which took a 10% ownership stake.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

