
# Time-R1 Model Series
This collection hosts the official checkpoints for the Time-R1 model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B-parameter large language model trained with a novel three-stage reinforcement learning curriculum that endows it with comprehensive temporal abilities: understanding, prediction, and creative generation. All of these models are trained on the Time-Bench dataset.
## Model Checkpoints
We provide several checkpoints representing different stages of the Time-R1 training process:
### Stage 1: Temporal Comprehension Models
These models are trained to develop foundational temporal understanding.
- Time-R1-S1P1: Checkpoint after Phase 1 of Stage 1 training.
  - Focus: Foundational logic on easy timestamp inference tasks.
- Time-R1-S1P2: Checkpoint after Phase 2 of Stage 1 training.
  - Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.
- Time-R1-Theta1: Checkpoint θ₁, after Phase 3 (full Stage 1 training).
  - Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.
- Time-R1-Theta1_prime: Ablation model θ₁', trained for Stage 1 without the dynamic reward design.
  - Focus: Serves as a baseline for evaluating the efficacy of the dynamic reward curriculum.
### Stage 2: Future Event Time Prediction Model
This model builds upon Stage 1 capabilities to predict future event timings.
- Time-R1-Theta2: Checkpoint θ₂, after Stage 2 training.
  - Focus: Predicting the timing of future events occurring after the model's initial knowledge cutoff.
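For convenience, the checkpoints above can also be referenced programmatically. The mapping below is a small sketch that assumes every checkpoint lives under the `ulab-ai/` namespace with the names used in this card; only `ulab-ai/Time-R1-Theta1` appears verbatim in the loading example later, so verify the other repository IDs against this collection before use.

```python
# Assumed repository IDs: only "ulab-ai/Time-R1-Theta1" is confirmed by the
# loading example in this card; the other paths follow the same naming pattern.
TIME_R1_CHECKPOINTS = {
    "S1P1": "ulab-ai/Time-R1-S1P1",                  # Stage 1, Phase 1
    "S1P2": "ulab-ai/Time-R1-S1P2",                  # Stage 1, Phase 2
    "Theta1": "ulab-ai/Time-R1-Theta1",              # Stage 1 complete (θ₁)
    "Theta1_prime": "ulab-ai/Time-R1-Theta1_prime",  # Stage 1 ablation (θ₁')
    "Theta2": "ulab-ai/Time-R1-Theta2",              # Stage 2 complete (θ₂)
}
```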
Please refer to the main paper for a detailed discussion of the architecture, training methodology, and comprehensive evaluations.
## How to Use
For loading and using these models, please refer to the example scripts and documentation provided in our GitHub repository.
Typically, you can load the models using the Hugging Face `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example for one of the models (replace with the specific model name)
model_name = "ulab-ai/Time-R1-Theta1"  # or your specific Hugging Face model path

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
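Once loaded, a checkpoint can be queried like any causal language model. The snippet below is a minimal inference sketch: the prompt wording and generation settings are illustrative assumptions rather than the official Time-Bench prompt templates, which live in the GitHub repository.

```python
import torch

# Illustrative temporal question; not the official Time-Bench prompt format.
prompt = "In which month and year did the event occur? Event: the first iPhone was released."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,  # assumed budget for reasoning plus the final answer
        do_sample=False,     # greedy decoding keeps the sketch deterministic
    )

# Decode only the newly generated tokens, skipping the echoed prompt.
answer = tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```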
## Citations
```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```