Overgeneralization in AI models is a phenomenon in which models make incorrect predictions or assertions by applying learned patterns too broadly and ignoring critical differences. As the source puts it, 'models overgeneralise, which means that they over-confidently make false predictions for (known or novel) concepts precisely because critical differences are ignored in prediction.' A specific example is 'hallucination,' which occurs when models deviate from their source of information, typically the pretraining data in the case of large language models[1].
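To make the idea concrete, here is a minimal sketch (not taken from the cited source; the data, model, and query point are illustrative assumptions) in which a linear classifier trained on two tight clusters still reports near-certain probabilities for an input far outside anything it has seen, over-confidently extrapolating a learned pattern:

```python
# Illustrative sketch of overgeneralization: the rule "larger x means class 1"
# is applied far beyond the training data, yielding an over-confident answer
# for a novel input. Data and model choices here are assumptions, not from [1].
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two tight clusters around x = -1 (class 0) and x = +1 (class 1).
X_train = np.concatenate([rng.normal(-1, 0.1, 50), rng.normal(1, 0.1, 50)]).reshape(-1, 1)
y_train = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X_train, y_train)

# A "novel concept": an input 100x farther out than any training point.
novel = np.array([[100.0]])
print(clf.predict_proba(novel))  # ~[[0., 1.]] -- essentially certain, despite never
                                 # having seen anything remotely like this input.
```

The point of the sketch is only that the model's confidence does not drop for inputs where its learned pattern may no longer apply, which mirrors the over-confident false predictions described above.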
Let's look at alternatives: