---
license: mit
language:
- ar
---

## Checkpoints

### Pre-Trained Models

| Model | Pre-train Dataset | Checkpoint | Tokenizer |
| --- | --- | --- | --- |
| ArTST v2 base | Dialects | soon | soon |

### Finetuned Models

| Model | Finetune Dataset | Checkpoint | Tokenizer |
| --- | --- | --- | --- |
| ArTST v2 ASR | MGB2 | soon | soon |
| ArTST v2 ASR | QASR | soon | soon |
| ArTST v2 ASR | Dialects | soon | soon |

## Acknowledgements

ArTST is built on the [SpeechT5](https://arxiv.org/abs/2110.07205) architecture. If you use any of the ArTST models, please cite:

```
@inproceedings{toyin2023artst,
  title={ArTST: Arabic Text and Speech Transformer},
  author={Toyin, Hawau and Djanibekov, Amirbek and Kulkarni, Ajinkya and Aldarmaki, Hanan},
  booktitle={Proceedings of ArabicNLP 2023},
  pages={41--51},
  year={2023}
}
```