---
license: apache-2.0
task_categories:
- time-series-forecasting
tags:
- time-series
- multimodality
- pretrained-model
- foundation-model
- multimodal-time-series-foundation-model
size_categories:
- 100K<n<1M
---

# ChatTime: A Multimodal Time Series Foundation Model

## ✨ Introduction

In this paper, we model time series as a foreign language and construct ChatTime, a unified framework for time series and text processing. As an out-of-the-box multimodal time series foundation model, ChatTime provides zero-shot forecasting and supports bimodal input/output for both time series and text. We design a series of experiments to verify the superior performance of ChatTime across multiple tasks and scenarios, and create four multimodal datasets to address data gaps. The experimental results demonstrate the potential and utility of ChatTime.

As depicted in Figure 1(c), during the instruction fine-tuning stage, we fine-tune [ChatTime-1-7B-Base](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Base) on [ChatTime-1-Finetune-100K](https://huggingface.co/datasets/ChengsenWang/ChatTime-1-Finetune-100K), yielding [ChatTime-1-7B-Chat](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Chat).
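
As a quick illustration, the resulting chat model can in principle be queried like any causal LM on the Hub. The sketch below is a minimal, unofficial example: it assumes a standard `transformers` checkpoint and uses an illustrative plain-text prompt, not the authors' official interface; see the model repository for the exact usage.

```python
# Minimal sketch (assumptions: standard transformers checkpoint files and a
# plain-text prompt format; the authors' official wrapper may differ).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ChengsenWang/ChatTime-1-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# ChatTime treats discretized series values as "foreign language" tokens,
# so a serialized history can be embedded in an ordinary text prompt.
prompt = "Predict the next 4 values of the time series: 0.1 0.4 0.3 0.5"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```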

For details on the ChatTime models, training data, training procedures, and experimental results, please refer to the [arXiv](https://arxiv.org/abs/0000.00000) paper.

![](architecture.png)

## 💾 Dataset

The dataset for instruction fine-tuning is extracted from four task-specific datasets: [text question answering](https://huggingface.co/datasets/tatsu-lab/alpaca), [unimodal time series forecasting](https://huggingface.co/datasets/ChengsenWang/ChatTime-1-Pretrain-1M), [context-guided forecasting](https://huggingface.co/datasets/ChengsenWang/CGTSF), and [time series question answering](https://huggingface.co/datasets/ChengsenWang/TSQA). Each task contributes 25K samples, for a total of 100K fine-tuning instances.
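
The composition above can be inspected directly with the Hugging Face `datasets` library. This is a minimal sketch; the split name and record fields are assumptions, so print the loaded object to see the actual schema.

```python
# Minimal sketch: loading ChatTime-1-Finetune-100K. The split name "train"
# is an assumption; print the dataset to see the real splits and columns.
from datasets import load_dataset

ds = load_dataset("ChengsenWang/ChatTime-1-Finetune-100K", split="train")
print(ds)     # row count and column names
print(ds[0])  # one instruction fine-tuning record
```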

## 📝 Citation

If you find this repo or our work useful for your research, please consider citing the paper:

```tex
@inproceedings{wang2024chattime,
  author    = {Chengsen Wang and Qi Qi and Jingyu Wang and Haifeng Sun and Zirui Zhuang and Jinming Wu and Lei Zhang and Jianxin Liao},
  title     = {ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data},
  booktitle = {},
  year      = {2024},
}
```

## 📪 Contact

If you have any questions, please contact [[email protected]]().