GPT-5 reduces hallucinations through training that focuses on browsing effectively for up-to-date information and on minimizing fabrication when the model relies on its internal knowledge. The system demonstrates a significantly lower hallucination rate than its predecessors: gpt-5-thinking exhibits a rate roughly 65% lower than OpenAI o3. At the response level, gpt-5-thinking shows a 78% decrease in the share of responses containing at least one major factual error compared to earlier models[1].
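To make the reported percentages concrete, here is a minimal sketch of the relative-reduction arithmetic behind figures like "65% lower". The example rates are hypothetical placeholders for illustration, not the actual measured values from the evaluation.

```python
def relative_reduction(baseline_rate: float, new_rate: float) -> float:
    """Fractional decrease of new_rate relative to baseline_rate."""
    return (baseline_rate - new_rate) / baseline_rate

# Hypothetical example: a drop from a 20% to a 7% hallucination rate
# corresponds to a 65% relative reduction.
print(round(relative_reduction(0.20, 0.07) * 100))  # prints 65
```

Note that a relative reduction is always measured against the baseline model's rate, which is why a 65% reduction does not mean the absolute rate fell by 65 percentage points.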
Additionally, safe-completions training has bolstered the model's ability to provide correct information while accurately following its guidelines, further improving its factual reliability[1].