
Model Card for Nepali Grammatical Error Detection (NepBERTa)

This model is designed for the Nepali Grammatical Error Detection (GED) task. It utilizes the BERT-based NepBERTa model to identify grammatical errors in Nepali text.

Model Details

Model Description

  • Developed by: Sumit Aryal
  • Model type: BERT (NepBERTa-based)
  • Language(s): Nepali
  • License: Apache 2.0
  • Finetuned from model: NepBERTa/NepBERTa

Dataset

  • Dataset Name: Nepali Grammatical Error Detection Dataset
  • Description: The dataset comprises 2,568,682 correctly constructed sentences alongside their erroneous counterparts, yielding 7,514,122 training samples. The validation set contains 365,606 correct sentences and 405,905 incorrect sentences. The collection covers a wide range of grammatical error types, including verb inflections, homophones, punctuation errors, and sentence-structure issues, making it a comprehensive resource for training and evaluating grammatical error detection models.
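Because the corpus pairs each correct sentence with an erroneous counterpart, it maps naturally onto binary sequence classification. The sketch below shows one way such pairs could be turned into labeled examples; the sentences, field names, and label convention (1 = correct, 0 = erroneous) are illustrative assumptions, not the published dataset schema.

from transformers import AutoTokenizer

# Hypothetical sketch: turn correct/erroneous sentence pairs into labeled
# examples for binary GED classification. The label convention used here
# (1 = correct, 0 = erroneous) is an assumption for illustration only.
correct_sentences = ["रामले भात खायो ।"]
erroneous_sentences = ["रामले भात खाए ।"]  # illustrative verb-inflection error

examples = (
    [{"text": s, "label": 1} for s in correct_sentences]
    + [{"text": s, "label": 0} for s in erroneous_sentences]
)

# Tokenize the texts for a NepBERTa-style sequence classifier.
tokenizer = AutoTokenizer.from_pretrained("NepBERTa/NepBERTa")
encoded = tokenizer(
    [e["text"] for e in examples],
    padding=True, truncation=True, return_tensors="pt",
)
labels = [e["label"] for e in examples]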


Uses

Direct Use

  • Grammar checking for written Nepali text.
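For quick grammar checking over several sentences at once, the model can be wrapped in the text-classification pipeline. This is a minimal sketch; the label strings it returns come from the model's id2label configuration rather than being fixed here, and the example sentences are illustrative.

from transformers import pipeline

# Minimal sketch: batch grammar checking via the text-classification pipeline.
# The returned label names depend on the model's id2label configuration.
checker = pipeline(
    "text-classification",
    model="sumitaryal/Nepali_Grammatical_Error_Detection_NepBERTa",
)

sentences = ["रामले भात खायो ।", "म घर जान्छु ।"]
for sentence, result in zip(sentences, checker(sentences)):
    print(sentence, "->", result["label"], f'({result["score"]:.3f})')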

Evaluation Metrics

  • Accuracy: 81.7336%
  • Training Loss: 0.277600
  • Validation Loss: 0.344654
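The accuracy figure is sentence-level classification accuracy on the validation data. A minimal sketch of how such a number could be reproduced on a small labeled sample is shown below; the validation pairs and their labels are placeholders, and the mapping between label ids and "correct"/"incorrect" should be confirmed against model.config.id2label.

import torch
from transformers import BertForSequenceClassification, AutoTokenizer

# Illustrative accuracy computation over a tiny labeled sample.
# `val_pairs` is a placeholder; the real validation split is described above,
# and the label convention (which id means "correct") is an assumption.
model_name = "sumitaryal/Nepali_Grammatical_Error_Detection_NepBERTa"
model = BertForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()

val_pairs = [("रामले भात खायो ।", 1), ("रामले भात खाए ।", 0)]  # hypothetical labels

correct = 0
for sentence, label in val_pairs:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(pred == label)

print(f"Accuracy: {correct / len(val_pairs):.4f}")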

How to Get Started with the Model

Use the code below to get started with the model.

import torch
from transformers import BertForSequenceClassification, AutoTokenizer

# Load the fine-tuned GED model and its tokenizer
model = BertForSequenceClassification.from_pretrained("sumitaryal/Nepali_Grammatical_Error_Detection_NepBERTa")
tokenizer = AutoTokenizer.from_pretrained("sumitaryal/Nepali_Grammatical_Error_Detection_NepBERTa", do_lower_case=False)
model.eval()

# Tokenize the input sentence
input_sentence = "रामले भात खायो ।"
inputs = tokenizer(input_sentence, return_tensors="pt")

# Run inference without gradient tracking
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to its label name
predicted_class_id = logits.argmax().item()
predicted_class = model.config.id2label[predicted_class_id]
print(f'The sentence "{input_sentence}" is "{predicted_class}"')

Training Details

  • Framework: PyTorch
  • Hyperparameters:
    • Epoch = 2
    • Train Batch Size = 256
    • Valid Batch Size = 256
    • Loss Function = Cross Entropy Loss
    • Optimizer = AdamW
    • Optimizer Parameters:
      • Learning Rate = 5e-5
      • β1 = 0.9
      • β2 = 0.999
      • ϵ = 1e−8
  • GPU = NVIDIA® GeForce® RTX™ 4060, 8 GB VRAM
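As a rough illustration, the hyperparameters listed above correspond to a standard fine-tuning loop like the sketch below; the dataloader and dataset objects are placeholders, not the actual training code used for this model.

import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification

# Rough fine-tuning sketch matching the listed hyperparameters.
# `train_loader` is a placeholder DataLoader yielding tokenized batches
# (of size 256) that include a `labels` tensor.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BertForSequenceClassification.from_pretrained(
    "NepBERTa/NepBERTa", num_labels=2).to(device)

optimizer = AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)

model.train()
for epoch in range(2):                      # Epoch = 2
    for batch in train_loader:              # Train Batch Size = 256
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)            # cross-entropy loss computed internally
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()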
