Fewer than a thousand people in the world know how to shape a frontier model. They sit inside a handful of labs, working on proprietary systems. Everyone else has been relegated to prompt engineering, contorting requests to fit models built for the average use case.
At Adaption, we believe that should change. Intelligence should not arrive preconfigured, and building AI shouldn’t require a PhD.
Model training and reinforcement learning are among the most powerful ways to shape a model, and among the hardest to get right outside a frontier lab. Most attempts fail for the same reasons: catastrophic forgetting that erodes general knowledge, overfitting on small or low-quality datasets, and conflicting training signals that cancel each other out instead of teaching new behaviors. The techniques that work are passed researcher-to-researcher, rarely written down. The result is a world where a small group of experts defines what AI can and cannot do, while everyone else is left on the sidelines.
Today we’re introducing AutoScientist, a system that self-improves and automates the full research loop behind model training and alignment.