How to use omarmomen/structroberta_sx2_final with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="omarmomen/structroberta_sx2_final", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("omarmomen/structroberta_sx2_final", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("omarmomen/structroberta_sx2_final", trust_remote_code=True)
```
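Once the pipeline is created, you can fill a masked token as follows. The sentence is only illustrative, and the mask token is taken from the pipeline's own tokenizer rather than hard-coded:

```python
# Use the tokenizer's own mask token rather than hard-coding "<mask>"
text = f"The children {pipe.tokenizer.mask_token} in the park."

# Each prediction is a dict with "token_str" and "score" fields
for pred in pipe(text):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```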
Model Card for omarmomen/structroberta_sx2_final
This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building," published at the BabyLM workshop at CoNLL 2023 (https://aclanthology.org/2023.conll-babylm.29/).
omarmomen/structroberta_sx2_final is a modification of the RoBERTa model that incorporates syntactic inductive bias through an unsupervised parsing mechanism.
This variant places the parser network after the first 4 attention blocks and increases the number of convolution layers in the parser network from 4 to 6.
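Because the architecture is defined by remote code, the exact hyperparameters of this variant (including parser placement and layer counts) are exposed through the model's configuration object; the attribute names are whatever the remote code defines, so printing the config is the reliable way to see them:

```python
from transformers import AutoConfig

# trust_remote_code=True is required because StructRoBERTa is a custom architecture
config = AutoConfig.from_pretrained("omarmomen/structroberta_sx2_final", trust_remote_code=True)
print(config)  # lists all fields, including any parser-related settings
```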
The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
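For masked-token prediction without the pipeline wrapper, the custom tokenizer above can be paired with the model directly. This is a minimal sketch, assuming the tokenizer repo loads via AutoTokenizer and the model follows the standard masked-LM output interface:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Custom 32k BabyLM tokenizer used for pretraining; assumed to expose a standard mask token
tokenizer = AutoTokenizer.from_pretrained("omarmomen/babylm_tokenizer_32k")
model = AutoModelForMaskedLM.from_pretrained(
    "omarmomen/structroberta_sx2_final", trust_remote_code=True
)
model.eval()

inputs = tokenizer(f"The cat sat on the {tokenizer.mask_token}.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and take the five most likely tokens for it
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```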