Distributional shifts in AI can be measured using statistical distance measures such as the Kullback-Leibler divergence or the Wasserstein distance, which compare the feature distributions of the training and test sets. Generative models provide an explicit likelihood estimate \(p(x)\) that indicates how typical a sample is under the training distribution. For discriminative models, proxy techniques include calculating cosine similarity between embedding vectors and using nearest-neighbour distances in a transformed feature space. For large language models, perplexity serves to gauge familiarity when direct access to internal representations is not possible[1].
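As a minimal sketch of these ideas, the snippet below estimates the Wasserstein distance and KL divergence between a training and a test feature distribution, then computes two embedding-space proxies (cosine similarity to the training centroid and mean k-nearest-neighbour distance). The synthetic arrays, the centroid-based cosine score, and the choice of 50 histogram bins and k=5 neighbours are illustrative assumptions, not prescriptions from the cited source.

```python
# Sketch: drift scores on synthetic data; all arrays are placeholders.
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# --- Statistical distances between feature distributions ---
train_feats = rng.normal(0.0, 1.0, size=5000)   # training-set feature values
test_feats = rng.normal(0.5, 1.2, size=5000)    # shifted test-set feature values

# Wasserstein distance works directly on the empirical samples.
w_dist = wasserstein_distance(train_feats, test_feats)

# KL divergence needs binned (discrete) density estimates of both sets.
bins = np.histogram_bin_edges(np.concatenate([train_feats, test_feats]), bins=50)
p, _ = np.histogram(train_feats, bins=bins, density=True)
q, _ = np.histogram(test_feats, bins=bins, density=True)
eps = 1e-12                                     # avoid log(0) in empty bins
kl = entropy(p + eps, q + eps)                  # KL(train || test)

print(f"Wasserstein: {w_dist:.3f}, KL divergence: {kl:.3f}")

# --- Embedding-space proxies for discriminative models ---
train_emb = rng.normal(size=(1000, 64))         # stand-in for model embeddings
query_emb = rng.normal(size=(10, 64))           # new samples to score

# Cosine similarity to the mean training embedding: low similarity
# suggests a sample is atypical of the training distribution.
centroid = train_emb.mean(axis=0)
cos_sim = (query_emb @ centroid) / (
    np.linalg.norm(query_emb, axis=1) * np.linalg.norm(centroid)
)

# k-nearest-neighbour distance in the embedding space: a large average
# distance to the closest training points flags a likely shift.
knn = NearestNeighbors(n_neighbors=5).fit(train_emb)
dists, _ = knn.kneighbors(query_emb)
ood_score = dists.mean(axis=1)

print("cosine similarity to centroid:", np.round(cos_sim, 3))
print("mean 5-NN distance:", np.round(ood_score, 3))
```

Note that the Wasserstein distance is computed on raw samples while the KL divergence requires a binned density estimate; the epsilon smoothing is one common way to handle empty bins, and per-feature scores like these are typically aggregated or thresholded in practice.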