Claude: Everything you need to know about Anthropic’s AI

Anthropic, one of the world’s largest AI vendors, has a powerful family of generative AI models called Claude. These models can perform a range of tasks, from captioning images and writing emails to solving math and coding challenges.

With Anthropic’s model ecosystem growing so quickly, it can be tough to keep track of which Claude models do what. To help, we’ve put together a guide to Claude, which we’ll keep updated as new models and upgrades arrive.

Claude models

Claude models are named after literary and musical forms: Haiku, Sonnet, and Opus. The latest are:

  • Claude 3.5 Haiku, a lightweight model.
  • Claude 3.7 Sonnet, a midrange, hybrid reasoning model. This is currently Anthropic’s flagship AI model.
  • Claude 3 Opus, a large model.

Counterintuitively, Claude 3 Opus — the largest and most expensive model Anthropic offers — is the least capable Claude model at the moment. However, that’s sure to change when Anthropic releases an updated version of Opus.

Most recently, Anthropic released Claude 3.7 Sonnet, its most advanced model to date. This AI model is different from Claude 3.5 Haiku and Claude 3 Opus because it’s a hybrid AI reasoning model, which can give both real-time answers and more considered, “thought-out” answers to questions.

When using Claude 3.7 Sonnet, users can choose whether to turn on the AI model’s reasoning abilities, which prompt the model to “think” for a short or long period of time.

When reasoning is turned on, Claude 3.7 Sonnet will spend anywhere from a few seconds to a couple of minutes in a “thinking” phase before answering. During this phase, the AI model is breaking down the user’s prompt into smaller parts and checking its answers.
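The toggle described above maps onto an API parameter. Here’s a hedged sketch of what a request with extended thinking enabled looks like, built as a plain dictionary; the `thinking` and `budget_tokens` field names follow Anthropic’s documented extended-thinking API at the time of writing, and the specific prompt is illustrative:

```python
# Sketch of a Messages API request with extended thinking turned on.
# "budget_tokens" caps how many tokens the model may spend "thinking"
# before it starts writing its visible answer.
request = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 4096,
    "thinking": {
        "type": "enabled",
        "budget_tokens": 2048,  # upper bound on the hidden reasoning phase
    },
    "messages": [
        {"role": "user", "content": "How many prime numbers are below 100?"}
    ],
}
```

Leaving the `thinking` key out of the request gives the real-time, non-reasoning behavior instead.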

Claude 3.7 Sonnet is Anthropic’s first AI model that can “reason,” a technique many AI labs have turned to as traditional methods of improving AI performance taper off.

Even with its reasoning disabled, Claude 3.7 Sonnet remains one of the tech industry’s top-performing AI models.

In November, Anthropic released an improved — and more expensive — version of its lightweight AI model, Claude 3.5 Haiku. This model outperforms Anthropic’s Claude 3 Opus on several benchmarks, but it can’t analyze images like Claude 3 Opus or Claude 3.7 Sonnet can.

All Claude models — which have a standard 200,000-token context window — can also follow multistep instructions, use tools (e.g., stock ticker trackers), and produce structured output in formats like JSON.
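Tool use works by describing each tool to the model as a name, a description, and a JSON Schema for its inputs; the model then emits structured JSON when it decides to call one. A minimal sketch of a tool definition in that shape, using a hypothetical stock-ticker tool (the name `get_stock_price` and its fields are made up for illustration):

```python
# Hypothetical tool definition in the shape the Anthropic API expects:
# a name, a human-readable description, and a JSON Schema ("input_schema")
# describing the arguments the model should produce.
stock_tool = {
    "name": "get_stock_price",
    "description": "Look up the latest trading price for a stock ticker symbol.",
    "input_schema": {
        "type": "object",
        "properties": {
            "ticker": {
                "type": "string",
                "description": "Exchange ticker symbol, e.g. 'AAPL'",
            },
        },
        "required": ["ticker"],
    },
}
```

A list of such definitions is passed alongside the conversation, and the model’s tool calls come back as JSON matching the schema.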

A context window is the amount of data a model like Claude can analyze before generating new data, while tokens are subdivided bits of raw data (like the syllables “fan,” “tas,” and “tic” in the word “fantastic”). Two hundred thousand tokens is equivalent to about 150,000 words, or a 600-page novel.
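The 200,000-tokens-to-150,000-words figure above implies a rough heuristic of about 0.75 words per token, which makes quick capacity estimates easy (the exact ratio varies with the text being tokenized):

```python
def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Rough token-to-word conversion using the ~0.75 words-per-token
    heuristic implied by 200,000 tokens ~= 150,000 words."""
    return int(tokens * words_per_token)

print(tokens_to_words(200_000))  # → 150000
```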

Unlike many major generative AI models, Anthropic’s can’t access the internet, meaning they’re not particularly great at answering current events questions. They also can’t generate images — only simple line diagrams.

As for the major differences between Claude models, Claude 3.7 Sonnet is faster than Claude 3 Opus and better understands nuanced and complex instructions. Haiku struggles with sophisticated prompts, but it’s the swiftest of the three models.

Claude model pricing

The Claude models are available through Anthropic’s API and managed platforms such as Amazon Bedrock and Google Cloud’s Vertex AI.

Here’s the Anthropic API pricing:

  • Claude 3.5 Haiku costs 80 cents per million input tokens (~750,000 words), or $4 per million output tokens.
  • Claude 3.7 Sonnet costs $3 per million input tokens, or $15 per million output tokens.
  • Claude 3 Opus costs $15 per million input tokens, or $75 per million output tokens.
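Because input and output tokens are priced separately, estimating a request’s cost is a two-term calculation. A small sketch using the list prices above (the model keys here are informal labels, not official API model IDs):

```python
# Per-million-token list prices from the table above (USD).
PRICES = {
    "claude-3.5-haiku":  {"input": 0.80,  "output": 4.00},
    "claude-3.7-sonnet": {"input": 3.00,  "output": 15.00},
    "claude-3-opus":     {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost in USD from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g., a 10,000-token prompt with a 1,000-token reply on Sonnet:
print(f"${estimate_cost('claude-3.7-sonnet', 10_000, 1_000):.3f}")  # → $0.045
```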

Anthropic offers prompt caching and batching that yield additional cost savings.

Prompt caching lets developers store specific “prompt contexts” that can be reused across API calls to a model, while batching processes groups of low-priority (and therefore cheaper) model inference requests asynchronously.
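In practice, prompt caching means marking a large, reused prefix (such as a long system prompt) so subsequent calls can hit the cache instead of reprocessing it. A sketch of that request shape as a plain dictionary; the `cache_control` field follows Anthropic’s documented prompt-caching API at the time of writing, and the prompt text is a placeholder:

```python
# Sketch of prompt caching: the large, stable system prompt is marked
# with "cache_control" so later requests reusing the same prefix can be
# served from cache at a discount.
system_blocks = [
    {
        "type": "text",
        "text": "You are a support agent. <long, reusable style guide here>",
        "cache_control": {"type": "ephemeral"},  # cache this prefix
    }
]

request = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 1024,
    "system": system_blocks,
    "messages": [
        {"role": "user", "content": "Summarize our refund policy."}
    ],
}
```

Only the per-request user message changes between calls; the cached system prefix stays identical, which is what makes the cache useful.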

Claude plans and apps

For individual users and companies looking to simply interact with the Claude models via apps for the web, Android, and iOS, Anthropic offers a free Claude plan with rate limits and other usage restrictions.

Upgrading to one of the company’s subscriptions removes those limits and unlocks new functionality. The current plans are:

Claude Pro, which costs $20 per month, comes with 5x higher rate limits, priority access, and previews of upcoming features.

The business-focused Team plan — which costs $30 per user per month — adds a dashboard for billing and user management, plus integrations with data repositories such as codebases and customer relationship management platforms (e.g., Salesforce). A toggle enables or disables citations to verify AI-generated claims. (Like all models, Claude hallucinates from time to time.)

Both Pro and Team subscribers get Projects, a feature that grounds Claude’s outputs in knowledge bases, which can be style guides, interview transcripts, and so on. These customers, along with free-tier users, can also tap into Artifacts, a workspace where users can edit and add to content like code, apps, website designs, and other docs generated by Claude.

For customers who need even more, there’s Claude Enterprise, which allows companies to upload proprietary data into Claude so that Claude can analyze the info and answer questions about it. Claude Enterprise also comes with a larger context window (500,000 tokens), GitHub integration for engineering teams to sync their GitHub repositories with Claude, and Projects and Artifacts.

A word of caution

As is the case with all generative AI models, there are risks associated with using Claude.

The models occasionally make mistakes when summarizing or answering questions because of their tendency to hallucinate. They’re also trained on public web data, some of which may be copyrighted or under a restrictive license. Anthropic and many other AI vendors argue that the fair-use doctrine shields them from copyright claims. But that hasn’t stopped data owners from filing lawsuits.

Anthropic offers policies to protect certain customers from courtroom battles arising from fair-use challenges. However, they don’t resolve the ethical quandary of using models trained on data without permission.

This article was originally published on October 19, 2024. It was updated on February 25, 2025 to include new details about Claude 3.7 Sonnet and Claude 3.5 Haiku.


#Claude #Anthropic