vijaye12 committed on
Commit 43d9209 (1 parent: e1ff042)

Update README.md

Files changed (1): README.md (+1 -13)
README.md CHANGED
@@ -46,12 +46,6 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 in future. This model is targeted towards a long forecasting setting of context length 1024 and forecast length 96 and
 recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc). (branch name: 1024-96-v1) [[Benchmark Scripts]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)
 
-- **New Releases (trained on larger pretraining datasets, released on October 2024)**:
-
- - **512-96-r2**: Given the last 512 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length)
- in future. This model is pre-trained with a larger pretraining dataset for improved accuracy. Recommended for hourly and minutely
- resolutions (Ex. 10 min, 15 min, 1 hour, etc). This model refers to the TTM-B variant used in the paper (branch name: 512-96-r2) [[Benchmark Scripts]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/tinytimemixer/ttm_v2_benchmarking_512_96.ipynb)
-
 
 
 
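For context, a minimal zero-shot usage sketch for the 1024-96 model described in the hunk above. It assumes the `tsfm_public` package from the IBM/tsfm repo is installed; the repo id `ibm/TTM` is illustrative (substitute this model card's id), and the branch name is passed as a Hugging Face `revision`:

```python
# Minimal sketch, not the official quickstart: zero-shot forecasting with the
# 1024-96 TTM model. Assumes the tsfm_public package from the IBM/tsfm repo;
# the repo id "ibm/TTM" is illustrative.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm/TTM",              # illustrative Hugging Face repo id
    revision="1024-96-v1",  # branch name from the model card
)
model.eval()

# Dummy input of shape (batch, context_length, num_channels); real data should
# be standard-scaled per channel first (see "Recommended Use" below).
past_values = torch.randn(1, 1024, 1)
with torch.no_grad():
    output = model(past_values=past_values)

# Forecast tensor of shape (batch, forecast_length, num_channels) = (1, 96, 1).
print(output.prediction_outputs.shape)
```

The import path and the `prediction_outputs` attribute are assumptions based on the tsfm_public package layout; the benchmark notebooks linked above show the canonical usage.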
@@ -75,9 +69,7 @@ The below model scripts can be used for any of the above TTM models. Please upda
 TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, LLMTime in zero/fewshot forecasting while reducing computational requirements significantly.
 Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider
 adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) TTM-Q referred in the paper maps to the `512-96` model
-uploaded in the main branch, and TTM-B referred in the paper maps to the `512-96-r2` model. Please note that the Granite TTM models are pre-trained exclusively on datasets
-with clear commercial-use licenses that are approved by our legal team. As a result, the pre-training dataset used in this release differs slightly from the one used in the research
-paper, which may lead to minor variations in model performance as compared to the published results. Please refer to our paper for more details.
+uploaded in the main branch. Please refer to our paper for more details.
 
 ## Recommended Use
 1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
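As a sketch of the per-channel scaling step in item 1 above, here is a minimal version using scikit-learn's `StandardScaler` in place of the TSP utility (array shapes and variable names are illustrative):

```python
# Minimal sketch: standard-scale each channel independently before feeding
# data to the model, using scikit-learn instead of the TSP utility.
import numpy as np
from sklearn.preprocessing import StandardScaler

series = np.random.randn(5000, 3)            # (num_timepoints, num_channels)
train, context = series[:-1024], series[-1024:]

scaler = StandardScaler()                    # scales each column (channel) independently
scaler.fit(train)                            # fit statistics on training data only
scaled_context = scaler.transform(context)   # this is what goes into the model

# After the model forecasts in the scaled space, map predictions back:
forecast_scaled = np.zeros((96, 3))          # stand-in for model output, (forecast_length, num_channels)
forecast = scaler.inverse_transform(forecast_scaled)
```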
@@ -188,10 +180,6 @@ The original r1 TTM models were trained on a collection of datasets from the Mon
 - Wind Farms Production data: https://zenodo.org/records/4654858
 - Wind Power: https://zenodo.org/records/4656032
 
-In addition to the above datasets, the updated TTM model (512-96-r2) was trained on the following:
-- PEMSD3, PEMSD4, PEMSD7, PEMSD8, PEMS_BAY: https://drive.google.com/drive/folders/1g5v2Gq1tkOq8XO0HDCZ9nOTtRpB6-gPe
-- LOS_LOOP: https://drive.google.com/drive/folders/1g5v2Gq1tkOq8XO0HDCZ9nOTtRpB6-gPe
-
 
 ## Citation
 Kindly cite the following paper, if you intend to use our model or its associated architectures/approaches in your