Last year, I handed the impossible task of predicting March Madness to ChatGPT.
The result? Sadly, my bracket was as busted as everyone else's, but at least I believed my picks had more substance than luck.
A lot has changed in 365 days, and this time around, I've assembled the AI equivalent of the Dream Team: Google's Gemini Researcher, OpenAI's GPT-4.5 Research Agent, xAI's Grok-3 with Reasoning and Research, and Baidu's freshly released ERNIE X1—which is supposed to beat DeepSeek R1 and all of OpenAI's reasoning models. I worked up a bracket for each of them, which I've shared below.
Then I dusted off my old GPT-based bot—which is now supercharged with GPT-4o under the hood—and updated it with the past season's data.
Here it is in case you want to fool around with it yourself: ask it to prioritize a specific methodology or statistic, analyze each game separately, or tweak the way it ponders information so it fits your needs and style.
Maybe one of these AIs will do better than last year, but probably not: The odds of picking a perfect bracket remain ludicrous at 1 in 9.2 quintillion. If you've filled out brackets since the dawn of humanity, you'd still have a 99.9999999883% chance of getting it wrong.
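For context on where that 9.2 quintillion figure comes from: a 64-team bracket has 63 games, and if every pick were a pure coin flip, there would be 2^63 possible brackets. Below is a minimal Python sketch of that arithmetic; the one-billion-bracket attempt count is a hypothetical chosen only for illustration, and real picks are better than coin flips, so an informed picker's true odds are somewhat less absurd.

```python
import math

# Number of games in a 64-team single-elimination bracket.
GAMES = 63

# If every pick were a 50/50 coin flip, this is the number of possible brackets.
possible_brackets = 2 ** GAMES
print(f"Possible brackets: {possible_brackets:,}")  # 9,223,372,036,854,775,808 (~9.2 quintillion)

# Probability that every one of `attempts` independent random brackets misses.
# `attempts` is a hypothetical count, used only to show how little a few
# billion guesses move the needle.
attempts = 1_000_000_000
p_all_wrong = math.exp(attempts * math.log1p(-1 / possible_brackets))
print(f"Chance that all {attempts:,} random brackets are wrong: {p_all_wrong:.10%}")
```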
But these AIs don't care about the odds. They've crunched terabytes of data, analyzed every KenPom rating, and scraped an internet’s worth of stats to give us their best shot.
Will one of these models become the basketball prophet we've been waiting for? You be the judge. All of their replies are available on our GitHub repository, but here’s an overview of what each model predicted.
ChatGPT's SEC Lovefest
If ChatGPT were a college basketball fan, it would have the Southeastern Conference logo tattooed across its silicon ass.
Its Final Four prediction features three SEC teams: Alabama (East), Tennessee (Midwest), and Auburn (South), plus St. John's (West).
ChatGPT went all-in on an all-SEC championship game, predicting Auburn to defeat Alabama in the final. Among its more adventurous picks, it has Gonzaga (an 8-seed) knocking off 1-seed Houston in the second round.
For first-round upsets, ChatGPT leaned conservative, picking just a few surprises like New Mexico over Marquette, 12-seed Colorado State over 5-seed Memphis, and UC San Diego as a "bracket buster."
The model was especially high on Auburn's championship potential, citing their balanced excellence as the deciding factor: "Auburn was ranked No. 1 overall for a reason and has that defensive edge in a high-scoring battle. In a title game, when legs are tired, defense and half-court execution matter a ton – and Auburn gets the nod there."
Its championship prediction—Auburn 86, Alabama 80—would make for an electric all-SEC final that showcases what ChatGPT calls "balanced excellence," noting that Auburn ranks in the top 10–15 in both offensive and defensive metrics—"a proven formula for championship teams."
Final Four:
- Alabama vs Tennessee (all-SEC showdown) and Auburn vs St. John’s
National Championship:
- Auburn defeats Alabama
(Note: ChatGPT mentioned that some early predictive models gave Duke the highest initial odds, but the Research Agent specifically selected Auburn as its champion.)
(You can read ChatGPT’s whole report by clicking on this link.)
Gemini's Blue Devil Prophecy
Google's Gemini sees a Final Four of Duke (East), Florida (West), Houston (Midwest), and Michigan State (South), with Duke beating Florida for the championship.
Unlike ChatGPT, Gemini went upset-happy in the first round, predicting multiple 13-seeds to triumph over 4-seeds: Yale over Texas A&M, Akron over Arizona, and High Point over Purdue. It also foresees Drake (11) over Illinois (6) and VCU (11) over BYU (6).
Gemini's analysis heavily emphasizes Duke's balance (top 5 in both offensive and defensive efficiency) and Florida's #1 offensive efficiency rating. Perhaps most notably, Gemini called out injury concerns for key players like Duke's Cooper Flagg and Houston's J'Wan Roberts—details the other models didn't mention.
Gemini's methodical region-by-region analysis revealed careful attention to statistics. For its championship prediction, it focused on Duke's strength on both ends of the floor: "This analysis was based on KenPom ratings, NET rankings, recent form, injury reports, and expert predictions from various sources." Convincing enough for an amateur like me.
Final Four Candidates:
- Duke (1-seed, East Region), Florida (1-seed, West Region), Houston (1-seed, Midwest Region), Michigan State (2-seed, South Region)
Championship Prediction:
- Duke beats Florida
(You can read Gemini’s research by clicking on this link.)
Grok-3's Balanced Approach
Elon Musk's Grok-3 stakes out a middle ground with a Final Four of Auburn (South), Duke (East), Florida (West), and Houston (Midwest), ultimately picking Duke to defeat Florida for the championship.
Grok-3's upset picks include Yale over Texas A&M and the winner of a North Carolina/San Diego State play-in game over Marquette. It also listed other potential upsets like Liberty over Oregon, Akron over Arizona, Colorado State over Memphis, McNeese over Clemson, and High Point over Purdue.
Like Gemini, Grok-3 highlighted the importance of KenPom metrics and other data sources, particularly noting Auburn's strong offense (KenPom #2) and solid defense (#12), along with the SEC's impressive showing of 14 teams in the tournament.
For Duke's run through the East, Grok-3 specifically mapped out: "Elite Eight: Duke over Alabama; Sweet 16: Duke over Arizona, Alabama over Wisconsin; Second Round: Duke over Mississippi State, Arizona over Oregon, Wisconsin over Baylor, Alabama over UCLA."
What makes Grok-3's prediction interesting is its balance between conservative bracket construction and strategic upset picks, focusing on teams with statistical strengths and favorable matchups rather than pure shock value.
Final Four:
- Duke beats Auburn, Florida beats Houston.
Championship:
- Duke over Florida; Duke is the predicted national champion.
(You can read Grok’s research by clicking on this link.)
ERNIE X1's Bold Gambles
Baidu's ERNIE X1 model predicts a Final Four of Duke (East), Florida (West), Alabama (South), and Kansas (Midwest), with Duke defeating Alabama for the title.
But where ERNIE really shines is in its audacious upset picks. How about Norfolk State putting up a fight against Iowa State, or New Mexico pulling off a win? If those happen, ERNIE might need to start picking lottery numbers, since both are clear underdogs, according to experts.
In its analysis, ERNIE X1 places emphasis on coaching experience as a key factor, noting: "Importance of coaching experience (Jon Scheyer - Duke, Todd Golden - Florida)" as critical to tournament success. The model also offers a unique perspective on team composition, highlighting that teams with "elite guards (Florida, Duke) or dominant big men (Alabama, Purdue)" tend to have advantages in tournament play.
ERNIE X1's Sweet 16 and Elite Eight predictions follow a logical progression from its first-round upsets, with Duke defeating Purdue and Kentucky upsetting Tennessee in the East before Duke advances to the Final Four. In the South, it has Alabama defeating Baylor and Gonzaga outlasting Michigan State before Alabama advances.
Final Four:
- Duke vs Florida
- Alabama vs Kansas
National Championship:
- Duke defeats Alabama: Duke’s depth and shooting prove too much for Alabama’s defense.
(You can read ERNIE's report by clicking on this link.)
The Bracket Bot Gets a Second Chance
Last year's failure, the March Madness Bracket Bot, returns with only one thing on its mind: redemption. In fairness, we all need to admit that the bot failed simply because BYU—the consensus favorite—got stomped by Duquesne, a team that hadn't won a March Madness game since 1969.
This year, the Bot’s Final Four picks are Duke (East), Houston (Midwest), Florida (West), and Auburn (South), with Duke defeating Auburn for the championship.
The Bot's upset picks focus on the classic 12-5, 13-4, and 11-6 matchups: Drake (12) over Baylor (5), Richmond (12) over Kansas State (5), Vermont (13) over Arizona (4), Texas (11) over Indiana (6), and San Diego State (11) over Kentucky (6).
In an incredible display of evolution, our Bot provided a uniquely detailed, round-by-round breakdown of how things might shake out rather than just giving us its final predictions.
It even outlined a "Cinderella path" for Drake to reach the Elite Eight before it must inevitably fall to Florida. It also identified San Diego State (11) as a "Dark Horse Final Four Candidate" based on the Aztecs’ elite defense.
Most reliable Final Four:
- Duke (1), Houston (1), Florida (1), Auburn (1) → These #1 seeds are elite.
National Championship
- Duke (1) → Best balance of offense, defense, and coaching.
(You can check its full predictions by clicking on this link.)
The AI Consensus: Duke Dominance
Looking across all five AI models reveals some patterns:
Duke is the overwhelming favorite, picked by four of the five models to win it all. Only ChatGPT went against the grain, selecting Auburn instead.
Florida made the Final Four in four predictions, though only as the runner-up in two of them.
Auburn reaches at least the Elite Eight in all five brackets, with two models (ChatGPT and Bracket Bot) picking them as championship finalists.
The SEC dominance is unanimous, with multiple conference teams predicted to make deep runs across all models.
Major upset candidates that appear in multiple predictions include Yale over Texas A&M, Drake over higher seeds, and San Diego State making a surprising run.
Houston's path is one of the most contested, with predictions ranging from a second-round exit (ChatGPT) to a Final Four appearance (Gemini, Grok, Bracket Bot).
ChatGPT, meanwhile, is the only one predicting an SEC champion despite all models acknowledging the conference's strength.
Each AI brings something unique to the table. Gemini focused on injury analysis, which makes sense. Grok-3 emphasized SEC dominance, with a record 14 tournament teams from the conference. ERNIE X1 offered the boldest upset picks by far—which isn't that dumb of an idea considering how chaotic last year's tournament was. And the Bracket Bot delivered some accurate results back in 2024 and has been configured by a human specifically for this task.
Why AI Still Can't Solve March Madness
Despite their sophisticated algorithms and data processing, these AI models still can't agree on a perfect bracket. Why? Because March Madness thrives on the unpredictable—we're talking about ten players with free will, interacting on a court surrounded by thousands of fans with free will, under conditions nobody can forecast, with coaches and staff devising unpredictable strategies in unpredictable moods. Determinism is not a word that fits this tournament.
As advanced as these models are, they're still just making educated guesses based on past performance—kind of like how degens rely on the trading gods for their technical analysis to be accurate. All models are based on theoretical—even scientific—methods, but none can fully capture the human element that makes college basketball so maddeningly unpredictable.
That's what makes the tournament great.
If Duke follows the AI consensus and wins the championship, we'll have five silicon prophets to thank. But even more likely, when the final buzzer sounds, we'll all be left with the same question we ask every April: "Who saw THAT coming?"
Edited by Sebastian Sinclair and Josh Quittner