---
library_name: transformers
tags:
- grammatical-error-detection
- token-classification
- nlp
- bert
license: mit
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: token-classification
---

# Model Description

This model fine-tunes `bert-base-uncased` for token-level binary grammatical error detection (GED) on the English FCE dataset provided by MultiGED-2023.

- **[GitHub](https://github.com/sahilnishad/Fine-Tuning-BERT-for-Token-Level-GED)**
- **[Dataset](https://github.com/spraakbanken/multiged-2023)**

# Get Started with the Model

```python
import torch
from transformers import AutoModelForTokenClassification, BertTokenizer

# Load the fine-tuned model and the matching tokenizer
model = AutoModelForTokenClassification.from_pretrained("sahilnishad/BERT-GED-FCE-FT")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Function to perform inference
def infer(sentence):
    inputs = tokenizer(sentence, return_tensors="pt", add_special_tokens=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Predicted label ID for each wordpiece (including [CLS] and [SEP])
    return outputs.logits.argmax(-1)

# Example usage
print(infer("Your example sentence here"))
```

---

# BibTeX

```bibtex
@misc{sahilnishad_bert_ged_fce_ft,
  author       = {Sahil Nishad},
  title        = {Fine-tuned BERT Model for Grammatical Error Detection on the FCE Dataset},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/sahilnishad/BERT-GED-FCE-FT}},
  note         = {Model available on the Hugging Face Hub},
}
```
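
# Interpreting Predictions

The `infer` function above returns raw label IDs for every wordpiece, including the [CLS] and [SEP] special tokens. The sketch below is a non-authoritative example that aligns each prediction with its wordpiece and maps IDs to label names via `model.config.id2label`; it assumes the checkpoint stores a binary correct/incorrect label mapping (MultiGED-2023 uses `c`/`i` tags), which you should verify by printing `model.config.id2label`.

```python
import torch
from transformers import AutoModelForTokenClassification, BertTokenizer

model = AutoModelForTokenClassification.from_pretrained("sahilnishad/BERT-GED-FCE-FT")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def infer_with_tokens(sentence):
    # Keep the input IDs so the wordpieces can be recovered afterwards
    inputs = tokenizer(sentence, return_tensors="pt", add_special_tokens=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    pred_ids = logits.argmax(-1)[0].tolist()
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # Pair each wordpiece with its predicted label name, skipping special tokens
    return [
        (tok, model.config.id2label[pid])
        for tok, pid in zip(tokens, pred_ids)
        if tok not in tokenizer.all_special_tokens
    ]

# Hypothetical example sentence, chosen to contain an agreement error
print(infer_with_tokens("She go to school yesterday."))
```

Note that predictions are per wordpiece, not per word: a rare or misspelled word may be split into several pieces, so word-level labels are commonly taken from the prediction of a word's first sub-token.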