
Overview

SinBerto is a small RoBERTa-based language model trained on a small Sinhala news corpus. Sinhala is a low-resource language compared to many other languages.

Model Specifications

Model: RoBERTa

vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1
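The hyperparameters above map directly onto a `RobertaConfig`. As a sketch, the architecture can be reconstructed locally like this (note the hidden size and intermediate size are RoBERTa defaults, not values stated in this card):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Configuration as listed above; unspecified sizes fall back to RoBERTa
# defaults (hidden_size=768, intermediate_size=3072), which this card
# does not confirm.
config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)

# Instantiate an (untrained) model with this architecture to inspect its size.
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")
```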

How to use with the Transformers library

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto")
model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto")
```
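Since this is a masked language model, a typical use is predicting a masked token via a raw forward pass. The sketch below runs offline with a tiny, randomly initialised RoBERTa stand-in (all sizes and token ids here are hypothetical); for real predictions, substitute the SinBerto model and tokenizer loaded above and use `tokenizer.mask_token_id`:

```python
import torch
from transformers import RobertaConfig, RobertaForMaskedLM

# Tiny untrained stand-in so this example runs without downloading weights.
# For real use: model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto")
config = RobertaConfig(
    vocab_size=1000, hidden_size=64, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=128, max_position_embeddings=64,
)
model = RobertaForMaskedLM(config).eval()

mask_token_id = 4  # stand-in; with a real tokenizer use tokenizer.mask_token_id
input_ids = torch.tensor([[0, 10, mask_token_id, 20, 2]])  # <s> ... </s>

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (batch, seq_len, vocab_size)

# Top-5 candidate token ids for the masked position
mask_pos = (input_ids[0] == mask_token_id).nonzero().item()
top5 = logits[0, mask_pos].topk(5).indices.tolist()
print(top5)
```

With the real checkpoint, decode the candidate ids with `tokenizer.decode` to see the predicted Sinhala tokens.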

Or clone the model repo:

```bash
git lfs install
git clone https://huggingface.co/Kalindu/SinBerto
```
