The model card for gpt-oss-120b and gpt-oss-20b outlines their capabilities and safety measures, emphasizing that the models are designed for instruction following, tool use, and reasoning. Both use a mixture-of-experts architecture together with quantization to run efficiently. Evaluation results show that gpt-oss-120b does not reach high capability thresholds in risk areas such as biological and chemical threats, even under adversarial conditions, reflecting the emphasis on safety in releasing these open models[1].
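To make the mixture-of-experts idea mentioned above concrete, here is a minimal, illustrative top-k routing layer in PyTorch. This is a sketch only: the class name, layer sizes, expert count, and routing details (`TinyMoELayer`, `d_model=64`, `n_experts=8`, `top_k=2`) are hypothetical and are not taken from the gpt-oss model card, which uses far larger experts and its own routing scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoELayer(nn.Module):
    """Illustrative top-k mixture-of-experts feed-forward layer (hypothetical sizes)."""

    def __init__(self, d_model=64, d_hidden=128, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router produces one logit per expert for each token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        gate_logits = self.router(x)
        # Each token is routed to its top-k experts; only those experts run for it.
        weights, idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e  # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = TinyMoELayer()
    tokens = torch.randn(10, 64)
    print(layer(tokens).shape)  # torch.Size([10, 64])
```

The point of the sparse routing shown here is that only a small fraction of the total parameters is active per token, which is how mixture-of-experts models keep inference cost well below what their full parameter count would suggest.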
The card also highlights OpenAI's Preparedness Framework, which aims to mitigate severe risks associated with AI. Safety testing demonstrated that both models adhere well to OpenAI's safety policies and are robust against a variety of attempts to bypass their restrictions[1].
Let's look at alternatives: