---
title: MOIRAI readme
---

# Moirai-1.0-R-Large

Snapshot of [commit](https://huggingface.co/Salesforce/moirai-1.0-R-large/commit/001acc2aca5fb1d5023e2664bbe45471d8baf21a): the last Moirai-1.0-R-Large weights released under a permissive license, hosted by sktime. The snapshot is intended for use with the sktime Moirai interface, so that licensing stays permissive throughout the stack.

Moirai, the Masked Encoder-based Universal Time Series Forecasting Transformer, is a Large Time Series Model pre-trained on [LOTSA data](https://huggingface.co/datasets/Salesforce/lotsa_data). For more details on the Moirai architecture, training, and results, please refer to the [paper](https://arxiv.org/abs/2402.02592).


Fig. 1: Overall architecture of Moirai. Visualized is a 3-variate time series, where variates 0 and 1 are target variables (i.e., to be forecasted) and variate 2 is a dynamic covariate (values in the forecast horizon are known). Based on a patch size of 64, each variate is patchified into 3 tokens. The patch embeddings, along with sequence and variate ids, are fed into the Transformer. The shaded patches represent the forecast horizon to be predicted; their corresponding output representations are mapped into the mixture distribution parameters.

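To make the patchification in Fig. 1 concrete, below is a minimal illustrative sketch (not the Moirai implementation). It assumes a context of 192 time steps, so a patch size of 64 yields 3 tokens per variate; the variate and sequence ids mirror the ids mentioned in the caption.

```python
# Illustrative sketch of patchification, assuming a 3-variate series of length 192.
import numpy as np

num_variates, context_length, patch_size = 3, 192, 64
series = np.random.randn(num_variates, context_length)

# Split each variate into non-overlapping patches of length `patch_size`.
num_patches = context_length // patch_size  # 3 patches per variate
patches = series.reshape(num_variates, num_patches, patch_size)

# Flatten into one token sequence; each token carries a variate id and a
# sequence (time) id so the Transformer can distinguish variates and positions.
tokens = patches.reshape(-1, patch_size)                       # shape (9, 64)
variate_id = np.repeat(np.arange(num_variates), num_patches)   # [0 0 0 1 1 1 2 2 2]
sequence_id = np.tile(np.arange(num_patches), num_variates)    # [0 1 2 0 1 2 0 1 2]

print(tokens.shape, variate_id, sequence_id)
```
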
## Example

```python
from sktime.datasets import load_tecator
from sktime.forecasting.moirai_forecaster import MOIRAIForecaster

# Load a panel dataset in pd-multiindex format.
y, _ = load_tecator(return_X_y=True, return_type="pd-multiindex")

moirai_forecaster = MOIRAIForecaster(
    checkpoint_path="sktime/moirai-1.0-R-small",
    broadcasting=False,
)
moirai_forecaster.fit(y)
forecast = moirai_forecaster.predict(fh=range(1, 16))
```

## Moirai weights hosted under sktime

| Model | Parameters |
| :---: | :---: |
| [Moirai-1.0-R-Small](https://huggingface.co/sktime/moirai-1.0-R-small) | 14M |
| [Moirai-1.0-R-Base](https://huggingface.co/sktime/moirai-1.0-R-base) | 91M |
| [Moirai-1.0-R-Large](https://huggingface.co/sktime/moirai-1.0-R-large) | 311M |

The original weights are available in the [Salesforce collection](https://huggingface.co/collections/Salesforce/moirai-10-r-models-65c8d3a94c51428c300e0742).

## Citation

```bibtex
@article{woo2024unified,
  title={Unified Training of Universal Time Series Forecasting Transformers},
  author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Xiong, Caiming and Savarese, Silvio and Sahoo, Doyen},
  journal={arXiv preprint arXiv:2402.02592},
  year={2024}
}
```