Anthropic used Pokémon to benchmark its newest AI model. Yes, really.
In a blog post published Monday, Anthropic said that it tested its latest model, Claude 3.7 Sonnet, on the Game Boy classic Pokémon Red. The company equipped the model with basic memory, screen pixel input, and function calls to press buttons and navigate around the screen, allowing it to play Pokémon continuously.
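Anthropic hasn't published the harness itself, but the setup it describes maps onto a standard tool-use loop: show the model a screenshot plus its running notes, and let it call a button-press function. Below is a minimal sketch of what that might look like with Anthropic's Python SDK; the `press_button` tool definition, prompt, and model ID are illustrative assumptions, not Anthropic's actual code.

```python
# Minimal sketch of a Pokémon-playing harness (illustrative, not Anthropic's code).
# The Anthropic Messages API calls are real; the emulator side is left abstract.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PRESS_BUTTON = {
    "name": "press_button",
    "description": "Press one Game Boy button.",
    "input_schema": {
        "type": "object",
        "properties": {
            "button": {
                "type": "string",
                "enum": ["a", "b", "start", "select", "up", "down", "left", "right"],
            }
        },
        "required": ["button"],
    },
}

def next_button(screenshot_png: bytes, notes: str) -> str | None:
    """Show the model the current frame and its notes; return the button it picks."""
    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # illustrative model ID
        max_tokens=1024,
        tools=[PRESS_BUTTON],
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"You are playing Pokémon Red. Notes so far:\n{notes}\n"
                         "Call press_button to take your next action."},
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png",
                            "data": base64.b64encode(screenshot_png).decode()}},
            ],
        }],
    )
    for block in response.content:
        if block.type == "tool_use" and block.name == "press_button":
            return block.input["button"]
    return None  # the model replied with text only
```

A driver loop would then feed the chosen button to an emulator, advance a few frames, grab a new screenshot, and repeat; that's how a run accumulates tens of thousands of actions.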
A distinguishing feature of Claude 3.7 Sonnet is its ability to engage in “extended thinking.” Like OpenAI’s o3-mini and DeepSeek’s R1, Claude 3.7 Sonnet can “reason” through challenging problems by applying more computing power and taking more time.
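In API terms, extended thinking is opt-in per request. Here's a short example using Anthropic's Messages API; the model ID, token numbers, and prompt are illustrative (the thinking budget must be smaller than `max_tokens`):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # illustrative model ID
    max_tokens=4096,                     # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Plan a route out of Mt. Moon."}],
)

# Thinking blocks come back alongside the final answer.
for block in response.content:
    if block.type == "thinking":
        print("reasoning:", block.thinking)
    elif block.type == "text":
        print("answer:", block.text)
```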
That came in handy in Pokémon Red, apparently.
A previous version of the model, Claude 3.0 Sonnet, failed even to leave the house in Pallet Town where the story begins. Claude 3.7 Sonnet, by contrast, successfully battled three Pokémon gym leaders and won their badges.

Now, it’s not clear how much computing power Claude 3.7 Sonnet needed to reach those milestones, or how long each one took. Anthropic said only that the model performed 35,000 actions to reach the third gym leader, Lt. Surge.
It surely won’t be long before some enterprising developer finds out.
Pokémon Red is more of a toy benchmark than a rigorous evaluation. Still, there’s a long history of games being used for AI benchmarking. In the past few months alone, a number of new apps and platforms have cropped up to test models’ game-playing abilities on titles ranging from Street Fighter to Pictionary.