By OpenAI's own testing, its newest reasoning models, o3 and o4-mini, hallucinate significantly more often than o1.
First reported by TechCrunch, OpenAI's system card detailed results from the PersonQA evaluation, which is designed to test for hallucinations. On that evaluation, o3's hallucination rate is 33 percent, and o4-mini's is 48 percent — almost half the time. By comparison, o1's hallucination rate is 16 percent, meaning o3 hallucinated roughly twice as often.
SEE ALSO: All the AI news of the week: ChatGPT debuts o3 and o4-mini, Gemini talks to dolphins

The system card noted how o3 "tends to make more claims overall, leading to more accurate claims as well as more inaccurate/hallucinated claims." But OpenAI doesn't know the underlying cause, simply saying, "More research is needed to understand the cause of this result."
OpenAI's reasoning models are billed as more accurate than its non-reasoning models like GPT-4o and GPT-4.5 because they use more computation to "spend more time thinking before they respond," as described in the o1 announcement. Rather than largely relying on stochastic methods to provide an answer, the o-series models are trained to "refine their thinking process, try different strategies, and recognize their mistakes."
However, the system card for GPT-4.5, which was released in February, shows a 19 percent hallucination rate on the PersonQA evaluation. The same card also compares it to GPT-4o, which had a 30 percent hallucination rate.
In a statement to Mashable, an OpenAI spokesperson said, “Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability.”
Evaluation benchmarks are tricky. They can be subjective, especially if developed in-house, and research has found flaws in their datasets and even how they evaluate models.
Plus, some rely on different benchmarks and methods to test accuracy and hallucinations. HuggingFace's hallucination benchmark evaluates models on the "occurrence of hallucinations in generated summaries" from around 1,000 public documents and found much lower hallucination rates across the board for major models on the market than OpenAI's evaluations. GPT-4o scored 1.5 percent, GPT-4.5 preview 1.2 percent, and o3-mini-high with reasoning scored 0.8 percent. It's worth noting o3 and o4-mini weren't included in the current leaderboard.
That's all to say: even with industry-standard benchmarks, hallucination rates can be difficult to assess.
Then there's the added complexity that models tend to be more accurate when tapping into web search to source their answers. But to use ChatGPT search, OpenAI shares data with third-party search providers, and enterprise customers using OpenAI models internally might not be willing to expose their prompts to that.
Regardless, if OpenAI itself is saying its brand-new o3 and o4-mini models hallucinate more often than its non-reasoning models, that could be a problem for its users.
UPDATE: Apr. 21, 2025, 1:16 p.m. EDT This story has been updated with a statement from OpenAI.
Topics: ChatGPT, OpenAI