
Training a Scientific Reasoning Model for Chemistry

Authors: Siddharth Narayanan, James D. Braza, Ryan-Rhys Griffiths, Albert Bou, Geemi Wellawatte, Mayk Caldas Ramos, Ludovico Mitchener, Michael Martin Pieler, Samuel G Rodriques, Andrew White

Reasoning models are large language models that use extra "thought tokens" before answering, providing both higher accuracy and explicit reasoning for their responses. A major question has been whether language-model reasoning generalizes beyond mathematics, programming, and logic, where most previous work has focused. We demonstrate that reasoning models can be post-trained in scientific domains without additional domain pretraining, and that they require substantially less data than contemporary domain-specific models. We report ether0, a 24B-parameter LLM (based on Mistral-Small-24B) that can reason in natural language and respond with chemical structures. This reasoning model was trained with reinforcement learning on 577,790 experimentally grounded chemistry tasks involving synthesized organic molecules. Our model outperforms all previous general-purpose chemistry models, frontier models, and humans, and is more data-efficient than specialized models. We anticipate that this method can be applied to train highly data-efficient language models specialized for predictive and generative tasks across a wide variety of scientific domains.
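
The abstract does not spell out the reward design, but reinforcement learning on verifiable chemistry tasks of this kind typically scores a model's proposed structure programmatically. Below is a minimal sketch of such a verifiable reward, assuming the model's final answer is a SMILES string and using RDKit for parsing; the `reward` function and its scoring values are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of a verifiable reward for RL post-training on structure-generation
# tasks. Assumes SMILES-formatted answers; this is an illustrative example,
# NOT the ether0 training code (function and values are hypothetical).
from rdkit import Chem


def reward(answer_smiles: str, target_smiles: str) -> float:
    """Return 1.0 for an exact structural match, 0.1 for any valid
    molecule, and 0.0 for an unparseable answer."""
    mol = Chem.MolFromSmiles(answer_smiles)
    if mol is None:
        return 0.0  # chemically invalid output earns nothing
    target = Chem.MolFromSmiles(target_smiles)
    # Comparing canonical SMILES checks structural identity,
    # ignoring superficial differences in atom ordering.
    if target is not None and Chem.MolToSmiles(mol) == Chem.MolToSmiles(target):
        return 1.0
    return 0.1  # valid but wrong molecule: small shaping credit


if __name__ == "__main__":
    # Ethanol written two different ways should still match.
    print(reward("OCC", "CCO"))           # 1.0
    print(reward("c1ccccc1", "CCO"))      # 0.1 (benzene: valid, wrong)
    print(reward("not-a-smiles", "CCO"))  # 0.0 (unparseable)
```

Because the reward is computed from the structure itself rather than from string equality, the model is free to reason in natural language and emit any valid serialization of the correct molecule.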

Subject: NeurIPS.2025 - Poster