Model Card for omarmomen/babylm_tokenizer_32k
This tokenizer is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building", published at the BabyLM workshop at CoNLL 2023 (https://aclanthology.org/2023.conll-babylm.29/).
omarmomen/babylm_tokenizer_32k is a RobertaTokenizer trained on the cased BabyLM 10M dataset with a vocabulary size of 32K.
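As a minimal usage sketch (assuming the standard `transformers` API for loading Hub-hosted tokenizers; the example sentence is illustrative), the tokenizer can be loaded and applied as follows:

```python
from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub.
# RobertaTokenizer.from_pretrained(...) should work equivalently,
# since this is a RobertaTokenizer.
tokenizer = AutoTokenizer.from_pretrained("omarmomen/babylm_tokenizer_32k")

# Tokenize an example sentence into subword tokens.
tokens = tokenizer.tokenize("The quick brown fox jumps over the lazy dog.")
print(tokens)

# Encode the same sentence into input IDs for a model.
ids = tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"]
print(ids)
```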