Reasoning Models
Reasoning models struggle to control their chains of thought, and that’s good
OpenAI introduces CoT-Control and finds reasoning models struggle to control their chains of thought, reinforcing monitorability as an AI safety safeguard.
gpt-oss-safeguard technical report
gpt-oss-safeguard-120b and gpt-oss-safeguard-20b are two open-weight reasoning models post-trained from the gpt-oss models to reason from a provided policy in order to label content under that policy. In this report, we describe gpt-oss-safeguard’s capabilities and provide safety evaluations of the gpt-oss-safeguard models, using the underlying gpt-oss models as a baseline. For more information about the development and architecture of the underlying gpt-oss models, see the original gpt-oss model card.
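The policy-conditioned labeling workflow described above can be sketched roughly as follows. The prompt layout, label set, and helper names here are illustrative assumptions, not the models’ actual prompt format; consult the gpt-oss-safeguard documentation for the real interface:

```python
# Hypothetical sketch of policy-based content labeling with an
# open-weight safeguard model. The prompt structure and label parsing
# are assumptions for illustration only.

def build_prompt(policy: str, content: str) -> str:
    """Combine a moderation policy with the content to be labeled."""
    return (
        "You are a content classifier. Apply the policy below and "
        "answer with exactly one label: ALLOWED or VIOLATES.\n\n"
        f"## Policy\n{policy}\n\n"
        f"## Content\n{content}\n\nLabel:"
    )

def parse_label(response: str) -> str:
    """Extract the first recognized label from a model's reply."""
    for label in ("VIOLATES", "ALLOWED"):
        if label in response.upper():
            return label
    return "UNKNOWN"

policy = "Posts that share personal phone numbers are not allowed."
prompt = build_prompt(policy, "Call me at 555-0100 for cheap tickets!")
# `prompt` would be sent to a gpt-oss-safeguard model through your own
# inference stack; here we only demonstrate parsing a plausible reply.
print(parse_label("Reasoning: shares a phone number. Label: VIOLATES"))
# → VIOLATES
```

Because the policy is supplied at inference time rather than baked into the weights, the same model can enforce different policies without retraining.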
Building an autonomous financial analyst with o1 and o3-mini
Endex builds the future of financial analysis, powered by OpenAI’s reasoning models.
OpenAI o3-mini
Pushing the frontier of cost-effective reasoning.
