Research

AI Agent Evaluation

1 article in archive

MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering

We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering.

OpenAI Blog · 526d ago