Google to Pit Top AI Models Against Each Other in Live Chess Tournament

Decrypt
9 hours ago

On Tuesday, Google will launch a chess tournament pitting leading AI models against each other in a direct test of machine reasoning.


It follows claims by Elon Musk on Monday that his chatbot, Grok, exhibits “outstanding reasoning” abilities.


The event kicks off as part of the new Kaggle Game Arena, a platform for testing general-purpose AI agents in live, competitive environments.


The first tournament will feature daily chess matches between versions of six leading language models: ChatGPT, Gemini, Claude, Grok, DeepSeek, and Kimi.


Unlike standard benchmark tests, the format puts AI strategy on public display by evaluating how models think, adapt, and recover under pressure, Google said in a statement.


Google says it hopes the competition will highlight differences in reasoning capabilities that other benchmarks fail to detect. The competition builds on Google's earlier use of games as AI testbeds, from Atari titles to its AlphaGo and AlphaStar game-playing systems.



“Submissions are ranked using a Bayesian skill-rating system that updates regularly, enabling rigorous long-term assessment,” Google said.


A Bayesian rating system treats each player's skill as a probability distribution and updates that estimate after every result against other competitors.
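Google hasn't published the arena's exact formula, but Glicko-style systems illustrate the idea: each competitor carries a skill estimate plus an uncertainty, a result shifts both estimates by the gap between actual and expected outcome, and every game narrows the uncertainty. Below is a minimal sketch of that mechanism; the starting values, scaling factors, and decay constants are illustrative assumptions, not Kaggle's parameters.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    mu: float = 1500.0     # current skill estimate (mean of the belief)
    sigma: float = 350.0   # uncertainty about that estimate

def expected_score(a: Rating, b: Rating) -> float:
    """Probability that A beats B under a logistic skill model."""
    return 1.0 / (1.0 + 10 ** ((b.mu - a.mu) / 400.0))

def update(a: Rating, b: Rating, score_a: float) -> None:
    """Update both ratings after one game.

    score_a is 1.0 if A won, 0.5 for a draw, 0.0 if A lost.
    """
    e_a = expected_score(a, b)
    # Bayesian flavor: a high-sigma (uncertain) rating moves further
    # on new evidence than a well-established one.
    a.mu += (a.sigma / 10.0) * (score_a - e_a)
    b.mu += (b.sigma / 10.0) * ((1.0 - score_a) - (1.0 - e_a))
    # Each observed result shrinks the uncertainty a little.
    a.sigma = max(50.0, a.sigma * 0.97)
    b.sigma = max(50.0, b.sigma * 0.97)

# Example: an early upset moves a new, uncertain rating sharply.
grok, gemini = Rating(), Rating()
update(grok, gemini, score_a=1.0)
print(f"Grok: {grok.mu:.0f}±{grok.sigma:.0f}, "
      f"Gemini: {gemini.mu:.0f}±{gemini.sigma:.0f}")
```

Because new entrants start with a large uncertainty, a system like this converges quickly on a provisional rating and then refines it slowly over many games, which is what makes "rigorous long-term assessment" possible as the leaderboard updates.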


The inaugural chess matches will pit OpenAI's o4-mini against DeepSeek-R1, Gemini 2.5 Pro against Claude Opus 4, Moonshot AI's Kimi K2 Instruct against OpenAI's o3, and Grok 4 against Gemini 2.5 Flash.



Chess has long served as a proving ground for AI.


In a historic 1997 match, IBM's Deep Blue defeated Russian grandmaster Garry Kasparov, then the reigning World Chess Champion. Google's new tournament builds on that tradition, but with language models at the board.


The matches will be streamed live on YouTube. Each round features a best-of-four series, with winners advancing through a single-elimination bracket. The top two models will face off in a final Gold Medal match.
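To make that format concrete, here is a toy simulation of the bracket structure. The coin-flip games are placeholders for real chess matches, and the sudden-death tiebreak is an assumption, since the announcement doesn't say how a 2-2 series resolves.

```python
import random

def play_series(model_a: str, model_b: str, games: int = 4) -> str:
    """Best-of-four series; a coin flip stands in for an actual game.

    The sudden-death tiebreak for 2-2 series is an assumption.
    """
    wins = {model_a: 0, model_b: 0}
    for _ in range(games):
        wins[random.choice((model_a, model_b))] += 1
    while wins[model_a] == wins[model_b]:   # hypothetical tiebreak game
        wins[random.choice((model_a, model_b))] += 1
    return max(wins, key=wins.get)

def run_bracket(models: list[str]) -> str:
    """Single elimination: pair off, winners advance, until one remains."""
    while len(models) > 1:
        models = [play_series(models[i], models[i + 1])
                  for i in range(0, len(models), 2)]
    return models[0]

print(run_bracket(["o4-mini", "DeepSeek-R1", "Gemini 2.5 Pro",
                   "Claude Opus 4", "Kimi K2 Instruct", "o3",
                   "Grok 4", "Gemini 2.5 Flash"]))
```

With eight entrants, the bracket runs quarterfinals, semifinals, and the final Gold Medal match in three rounds.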


“Games are perfect for AI evaluation because they help us understand how models tackle complex reasoning tasks,” Google wrote on X. “Many games are a proxy for real-world skills and can test a model's ability in areas like strategic planning, adaptation, and memory.”


Viewers will be able to see each model’s reasoning behind every move. According to Google, that transparency is critical for assessing whether models are actually thinking through problems, or just mimicking training data.


Still, on the Kaggle Game Arena discussion board, questions remain about how the LLMs will behave once the games start.


“What exactly happens if the model continues to suggest illegal moves after all allowed rethinks are exhausted?” one user asked. “Does it lose the game immediately, skip the turn, or is it disqualified in some way?”


“It really makes me wonder, are we seeing true reasoning here, or just pattern-based guessing?” another asked.


Google said it plans to expand the Kaggle Game Arena beyond chess in future events. For now, this initial tournament will serve as a public stress test for how well today's most advanced models can handle real-time, strategic decision-making.


“Games have always been a useful proving ground for AI, including our own work on AlphaGo and AlphaZero,” Google DeepMind co-founder and CEO Demis Hassabis wrote on X. “We're excited to see the progress this benchmark will drive as we add more games and challenges to the Arena - we expect to see rapid improvement!”


Google did not immediately respond to Decrypt’s request for comment.

