# MiTC
## Introduction
[MiLMo](https://github.com/CMLI-NLP/MiLMo) constructs MiTC, a minority multilingual text classification dataset covering five languages: Mongolian, Tibetan, Uyghur, Kazakh, and Korean.
We also use [MiLMo](https://github.com/CMLI-NLP/MiLMo) for the downstream text classification experiments on MiTC.
## Hugging Face
https://huggingface.co/datasets/CMLI-NLP/MiTC
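The dataset can be pulled directly from the Hub. Below is a minimal sketch using the Hugging Face `datasets` library (`pip install datasets`); the dataset ID `CMLI-NLP/MiTC` comes from the link above, but the split and column names are not documented here, so the code simply inspects whatever the Hub returns.

```python
# Minimal sketch: load MiTC from the Hugging Face Hub.
# Assumes the `datasets` library is installed; split/column names are
# not specified in this card, so we print the DatasetDict to inspect them.
mitc = None
try:
    from datasets import load_dataset

    mitc = load_dataset("CMLI-NLP/MiTC")  # downloads and caches on first call
    print(mitc)  # DatasetDict listing the available splits and row counts
except Exception as err:  # library missing or no network in this sketch
    print(f"Could not load MiTC: {err}")
```

If the download succeeds, iterating over a split (e.g. `mitc["train"]`) yields one labeled example per row, which can be fed to a classifier such as the MiLMo models referenced above.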
## Citation
Plain Text:
J. Deng, H. Shi, X. Yu, W. Bao, Y. Sun and X. Zhao, "MiLMo: Minority Multilingual Pre-Trained Language Model," 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, Oahu, HI, USA, 2023, pp. 329-334, doi: 10.1109/SMC53992.2023.10393961.
BibTeX:
```
@INPROCEEDINGS{10393961,
author={Deng, Junjie and Shi, Hanru and Yu, Xinhe and Bao, Wugedele and Sun, Yuan and Zhao, Xiaobing},
booktitle={2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
title={MiLMo: Minority Multilingual Pre-Trained Language Model},
year={2023},
volume={},
number={},
pages={329-334},
keywords={Soft sensors;Text categorization;Social sciences;Government;Data acquisition;Morphology;Data models;Multilingual;Pre-trained language model;Datasets;Word2vec},
doi={10.1109/SMC53992.2023.10393961}}
```
## Disclaimer
This dataset/model is for academic research purposes only. Any commercial or unethical use is prohibited.