vijaye12 committed
Commit 4cda9a9
1 Parent(s): 8b70129

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -31,7 +31,7 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 **Note that zeroshot, fine-tuning and inference tasks using TTM can easily be executed in 1 GPU machine or in laptops too!!**


-TTM-R1 comprises TTM variants pre-trained on 250M public training samples. We have another set of TTM models released under TTM-R2 trained on a much larger pretraining
+**New updates:** TTM-R1 comprises TTM variants pre-trained on 250M public training samples. We have another set of TTM models released recently under TTM-R2 trained on a much larger pretraining
 dataset (~700M samples) which can be accessed from [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2). In general, TTM-R2 models perform better than
 TTM-R1 models as they are trained on larger pretraining dataset. However, the choice of R1 vs R2 depends on your target data distribution. Hence requesting users to
 try both R1 and R2 variants and pick the best for your data.
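Since the updated paragraph recommends trying both the R1 and R2 variants and picking the better one for your data, here is a minimal zero-shot sketch of what that comparison might look like. It assumes the `tsfm_public` package from the ibm-granite/granite-tsfm repository and its `TinyTimeMixerForPrediction` class (names taken from the model-card examples), the default 512-step context / 96-step horizon checkpoints, and an illustrative random input; treat the repo ids, shapes, and output field as assumptions to verify against the model cards.

```python
# Hedged sketch: load TTM-R1 and TTM-R2 and run a zero-shot forecast on the
# same input, so the two variants can be compared on your own data.
# Assumes tsfm_public (from ibm-granite/granite-tsfm) exposes
# TinyTimeMixerForPrediction, as shown in the model-card examples.
import torch
from tsfm_public import TinyTimeMixerForPrediction

CONTEXT_LENGTH = 512  # assumed default TTM variant: 512 input steps
NUM_CHANNELS = 1      # univariate example; TTM also handles multivariate input

# Placeholder batch of shape (batch, context_length, num_channels);
# substitute your own series here.
past_values = torch.randn(1, CONTEXT_LENGTH, NUM_CHANNELS)

for repo_id in (
    "ibm-granite/granite-timeseries-ttm-r1",  # assumed R1 repo id
    "ibm-granite/granite-timeseries-ttm-r2",  # R2 repo id linked above
):
    model = TinyTimeMixerForPrediction.from_pretrained(repo_id)
    model.eval()
    with torch.no_grad():
        out = model(past_values=past_values)
    # prediction_outputs is assumed to have shape
    # (batch, prediction_length, num_channels), e.g. (1, 96, 1).
    print(repo_id, out.prediction_outputs.shape)
```

In practice you would replace the random tensor with held-out windows of your target series and compare forecast error (e.g. MSE) between the two checkpoints before committing to one, which is exactly the R1-vs-R2 selection the README advises.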