TNG Technology Consulting fine-tuned the 32-billion-parameter OLMo-2 large language model on AMD MI300X GPUs using the Open R1 dataset, with the goal of strengthening the model's mathematical reasoning. The MI300X accelerators, with their multi-chip-module architecture and 192 GB of HBM3 memory each, provided the capacity and bandwidth needed to fine-tune a model of this size. The Open R1 dataset, curated by Hugging Face, pairs mathematical problems with detailed reasoning traces, making it well suited to this task. The effort illustrates how open-source models and datasets, combined with capable hardware, can advance AI research.
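A minimal usage sketch with the Hugging Face transformers library is shown below. It assumes the model follows the standard chat-template API of its OLMo-2 base; the dtype, device placement, and example prompt are illustrative choices, not taken from the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tngtech/OLMo-2-Instruct-Math-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~64 GB of weights in bf16 at 32B parameters
    device_map="auto",           # spread layers across available accelerators
)

# Chat-style prompt; the instruct model uses its base model's chat template.
messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```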
Model tree for tngtech/OLMo-2-Instruct-Math-32B:

- Base model: allenai/OLMo-2-0325-32B
- Finetuned from the base: allenai/OLMo-2-0325-32B-SFT
- Finetuned from SFT: allenai/OLMo-2-0325-32B-DPO
- Finetuned from DPO: allenai/OLMo-2-0325-32B-Instruct (the direct parent of this model)