Reflections on model transparency and monitoring

Monitoring a reasoning model’s chain of thought was highly effective at detecting misbehavior.
Unknown[1]
Our commitment to keep our reasoning models' chain of thought as monitorable as possible allows us to conduct studies.
Unknown[1]
Vigilant monitoring supports improvements in reasoning models, enhancing their safety profile.
Unknown[1]
Deception remains an open research challenge, warranting continuous commitment.
Unknown[1]
Transparency in model behavior is critical for user trust and model improvement.
Unknown[1]
Space: Let’s explore the GPT-5 Model Card