Research

Model Scaling

2 articles in archive

Introducing GPT-4.5

We’re releasing a research preview of GPT‑4.5—our largest and best model for chat yet. GPT‑4.5 is a step forward in scaling up pre-training and post-training.

OpenAI Blog · 386d ago

Simplifying, stabilizing, and scaling continuous-time consistency models

We’ve simplified, stabilized, and scaled continuous-time consistency models, achieving sample quality comparable to leading diffusion models while using only two sampling steps.

OpenAI Blog · 513d ago