
Benj Edwards
On Thursday, AI maker and OpenAI competitor Anthropic launched Claude Pro, a subscription-based version of its Claude.ai web-based AI assistant, which works similarly to ChatGPT. It's available for $20/month in the US or £18/month in the UK, and it promises a five-times-higher usage limit, priority access to Claude.ai during high-traffic periods, and early access to new features as they emerge.
Like ChatGPT, Claude Pro can generate text, summarize, analyze, solve logic puzzles, and more.
Claude.ai serves as Anthropic's conversational interface to its Claude 2 AI language model, just as ChatGPT provides an application wrapper for the underlying models GPT-3.5 and GPT-4. In February, OpenAI took a similar subscription route with ChatGPT Plus, which also costs $20 a month and offers early access to new features, but additionally unlocks access to GPT-4, OpenAI's most powerful language model.
What does Claude have that ChatGPT doesn't? A big difference is its 100,000-token context window, which means it can process about 75,000 words at once. (Tokens are fragments of words used when processing text.) This means Claude can analyze a long document or hold a long conversation without losing track of the topic. By comparison, ChatGPT can only process about 8,000 tokens in GPT-4 mode.
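For a rough sense of what those numbers mean in practice, here is a quick back-of-the-envelope sketch in Python. It assumes the common approximation of about 0.75 English words per token; actual counts vary by tokenizer and text.

```python
# Rough comparison of context windows, assuming ~0.75 English words per token.
# Real counts depend on the model's tokenizer; this is only an approximation.
WORDS_PER_TOKEN = 0.75

def approx_words(context_tokens: int) -> int:
    """Estimate how many English words fit in a context window of the given size."""
    return int(context_tokens * WORDS_PER_TOKEN)

def fits(document_words: int, context_tokens: int) -> bool:
    """Rough check: can a document of `document_words` words fit in the window?"""
    return document_words <= approx_words(context_tokens)

if __name__ == "__main__":
    print("Claude 2 (100,000 tokens):", approx_words(100_000), "words")    # ~75,000
    print("GPT-4 (8,000 tokens):", approx_words(8_000), "words")           # ~6,000
    print("50,000-word novel fits Claude 2?", fits(50_000, 100_000))       # True
    print("50,000-word novel fits GPT-4?", fits(50_000, 8_000))            # False
```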
"The biggest thing for me is the 100,000-token limit," AI researcher Simon Willison told Ars Technica. "That's huge! It opens up whole new possibilities, and I use it several times a week just for that feature." Willison, who regularly writes about using Claude on his blog, often uses it to clean up his personal discussion transcripts, though he also cautions against "hallucinations," where Claude sometimes makes things up.
"I've also seen more hallucinations from Claude than from GPT-4," says Willison, "which scares me when using it for longer tasks, because there are so many opportunities for it to hallucinate something without me noticing."
Willison has also run into Claude's ethics filter, which he tripped by accident: "I tried to use it against a transcription of a podcast episode, and it processed most of the text, then, right in front of my eyes, it deleted everything it had done! I eventually figured out that toward the end of the episode there was talk of bomb threats against data centers, and Claude was effectively triggered by that and deleted the entire transcript."
What is “5x more usage”?
Anthropic's primary selling point for the Claude Pro subscription is "5x more usage," but the company doesn't clearly communicate what Claude's free-tier usage limits actually are. Dropping hints like cryptic breadcrumbs, the company has written a support document on the subject that states, "If your conversations are relatively short (approximately 200 English sentences, assuming your sentences are about 15-20 words), you can expect to send at least 100 messages every 8 hours, often more depending on Claude's current capacity. More than two-thirds of all conversations on claude.ai (as of September 2023) fall within these limits."
In another somewhat cryptic statement, Anthropic writes, "If you upload a copy of The Great Gatsby, you may only be able to send 20 messages in that conversation within 8 hours." We won't attempt the math, but if you know the word count of F. Scott Fitzgerald's classic, it's possible to work out Claude's actual limits. We reached out to Anthropic yesterday for clarification but did not receive a response ahead of publication.
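For readers who do want to attempt the math, here is a hedged sketch. The novel's word count (roughly 47,000 words), the 0.75-words-per-token ratio, and the assumption that each message re-processes the full conversation context are ballpark figures of our own, not numbers from Anthropic.

```python
# Back-of-the-envelope estimate of Claude's free-tier limits, based on
# Anthropic's "Great Gatsby" example. All inputs below are assumptions.
GATSBY_WORDS = 47_000          # commonly cited word count; not an official figure
WORDS_PER_TOKEN = 0.75         # rough English-text approximation
MESSAGES_PER_8_HOURS = 20      # Anthropic's stated limit for a Gatsby-sized upload

# Each message in the conversation presumably re-processes the full novel-sized context.
gatsby_tokens = GATSBY_WORDS / WORDS_PER_TOKEN              # roughly 63,000 tokens
tokens_per_8_hours = gatsby_tokens * MESSAGES_PER_8_HOURS   # roughly 1.25 million tokens

print(f"Novel-sized prompt: ~{gatsby_tokens:,.0f} tokens")
print(f"Implied free-tier processing: ~{tokens_per_8_hours:,.0f} tokens per 8 hours")

# For comparison, the short-conversation case from Anthropic's support document:
# ~200 sentences of 15-20 words at roughly 100 messages every 8 hours.
short_conversation_words = 200 * 17.5   # midpoint of 15-20 words per sentence
print(f"Short conversation: ~{short_conversation_words:,.0f} words of context")
```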
Either way, Anthropic explains that for an AI model with a 100,000-token context limit, a usage limit is necessary because the computation involved is expensive. "It takes a very powerful computer to run a capable model like Claude 2, especially when responding to large attachments and long conversations," Anthropic writes in the support document. "We've set these limits to ensure that Claude can be made freely available for many people to use, while allowing power users to integrate Claude into their daily workflows."
In August, Anthropic also launched Claude Instant 1.2, a cheaper and faster version of Claude available via its API, though it is less capable at logical tasks than Claude 2.
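For developers curious what that looks like in practice, below is a minimal sketch of calling Claude Instant 1.2 through Anthropic's Python SDK as it worked around the time of this writing (the completions interface); the prompt text is invented, and parameter names may differ in later SDK versions.

```python
# Minimal sketch of calling Claude Instant 1.2 via Anthropic's Python SDK
# (the 2023-era completions interface). Requires ANTHROPIC_API_KEY in the
# environment; the prompt text here is just an example.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

completion = client.completions.create(
    model="claude-instant-1.2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} Summarize the plot of The Great Gatsby in two sentences.{AI_PROMPT}",
)

print(completion.completion)
```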
While it's clear that large language models like Claude can do interesting things, their flaws and tendency to confabulate may hold them back from widespread use due to reliability concerns. Still, Willison is a fan of Claude 2 in its online form: "I'm excited to see it continue to improve. The 100,000-token thing is a huge win, plus the fact that you can upload a PDF to it is really convenient."