---
license: apache-2.0
task_categories:
- time-series-forecasting
tags:
- time-series
- multimodality
- pretrained-model
- foundation-model
- multimodal-time-series-foundation-model
size_categories:
- 100K<n<1M
---
# ChatTime: A Multimodal Time Series Foundation Model
## ✨ Introduction
In this work, we model time series as a foreign language and construct ChatTime, a unified framework for time series and text processing. As an out-of-the-box multimodal time series foundation model, ChatTime provides zero-shot forecasting capability and supports bimodal input/output for both time series and text. We design a series of experiments to verify the superior performance of ChatTime across multiple tasks and scenarios, and create four multimodal datasets to address data gaps. The experimental results demonstrate the potential and utility of ChatTime.
As depicted in Figure 1(b), during the continuous pre-training stage, we pre-train [LLaMA-2-7B-Base](https://huggingface.co/meta-llama/Llama-2-7b-hf) on [ChengsenWang/ChatTime-1-Pretrain-1M](https://huggingface.co/datasets/ChengsenWang/ChatTime-1-Pretrain-1M), yielding [ChengsenWang/ChatTime-1-7B-Base](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Base).
For details on the ChatTime models, training data and procedures, and experimental results, please refer to the [arXiv paper](https://arxiv.org/abs/2412.11376).
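For quick experimentation, the checkpoint and this corpus can be loaded with the standard Hugging Face libraries. The following is a minimal sketch, not the official usage recipe; in particular, loading the checkpoint via `AutoModelForCausalLM` assumes it behaves as a plain causal language model, which may differ from any wrapper provided in the official repository.

```python
# Minimal loading sketch using standard Hugging Face APIs; the actual
# ChatTime inference wrapper may differ (see the paper and model card).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Continuous pre-training corpus described above.
pretrain_data = load_dataset("ChengsenWang/ChatTime-1-Pretrain-1M")

# Checkpoint obtained by continually pre-training LLaMA-2-7B-Base.
tokenizer = AutoTokenizer.from_pretrained("ChengsenWang/ChatTime-1-7B-Base")
model = AutoModelForCausalLM.from_pretrained("ChengsenWang/ChatTime-1-7B-Base")
```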
![](architecture.png)
## 💾 Dataset
The data for continuous pre-training is sourced from two extensive open-source time series repositories, [Monash](https://forecastingdata.org/) and [TFB](https://github.com/decisionintelligence/TFB), which together encompass approximately 100 sub-datasets. We slice the original time series with sliding windows, using the five window and step sizes listed in the table below, and prioritize slicing each series into larger segments. Because the slices contain numerous repeating patterns and our computational resources are limited, we run K-means on the 10M original time series slices, clustering them into 1M and 25K groups and randomly selecting one sample from each group as its representative. This yields high-quality datasets for continuous pre-training (1M) and instruction fine-tuning (25K); a sketch of the procedure follows the table.
| Window Size | History Length | Prediction Length | Sliding Step |
| :---------: | :------------: | :---------------: | :----------: |
| 576 | 512 | 64 | 32 |
| 288 | 256 | 32 | 16 |
| 144 | 128 | 16 | 8 |
| 72 | 64 | 8 | 4 |
| 36 | 32 | 4 | 2 |
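The sketch below illustrates this slicing-and-deduplication recipe. It is not the released preprocessing code: the function names, the largest-window-first rule, and the use of scikit-learn's `KMeans` are illustrative assumptions, and at the 10M-slice scale a mini-batch variant such as `MiniBatchKMeans` would be more practical.

```python
# Illustrative sketch only: the exact slicing and clustering procedure is an
# assumption based on the description above, not the released ChatTime code.
import numpy as np
from sklearn.cluster import KMeans

# (window, history, prediction, step) configurations from the table above,
# ordered so that larger windows are tried first.
WINDOW_CONFIGS = [(576, 512, 64, 32), (288, 256, 32, 16),
                  (144, 128, 16, 8), (72, 64, 8, 4), (36, 32, 4, 2)]

def slide_slices(series: np.ndarray) -> list[np.ndarray]:
    """Cut one series into windows, preferring the largest window that fits."""
    for window, _history, _pred, step in WINDOW_CONFIGS:
        if len(series) >= window:
            return [series[i:i + window]
                    for i in range(0, len(series) - window + 1, step)]
    return []  # series shorter than the smallest window: discard

def select_representatives(slices: list[np.ndarray], k: int,
                           seed: int = 0) -> list[np.ndarray]:
    """Cluster equal-length slices into k groups and keep one per group.

    Assumes all slices share one window length; slices produced with
    different window sizes would be clustered separately.
    """
    x = np.stack(slices)
    labels = KMeans(n_clusters=k, n_init="auto").fit_predict(x)
    rng = np.random.default_rng(seed)
    return [slices[rng.choice(np.flatnonzero(labels == c))] for c in range(k)]
```

At training time, each retained window would then be split per the table, e.g. a 576-point window into a 512-point history and a 64-point prediction target.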
For details on the pre-training dataset, please refer to the [arXiv paper](https://arxiv.org/abs/2412.11376).
## 📝 Citation
If you find this repo or our work useful for your research, please consider citing the paper:
```tex
@inproceedings{wang2025chattime,
  author    = {Chengsen Wang and Qi Qi and Jingyu Wang and Haifeng Sun and Zirui Zhuang and Jinming Wu and Lei Zhang and Jianxin Liao},
  title     = {ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
}
```
## 📪 Contact
If you have any questions, please contact [[email protected]]().