AraT5 CODAfication Model

Model description

AraT5 CODA is a text normalization model that converts dialectal Arabic text into the Conventional Orthography for Dialectal Arabic (CODA). It was built by fine-tuning AraT5-v2 on the MADAR CODA dataset. The fine-tuning procedure and the hyperparameters we used are described in our paper "Exploiting Dialect Identification in Automatic Dialectal Text Normalization." Our fine-tuning code and data can be found here.

Intended uses

You can use the AraT5 CODA model with Hugging Face's transformers library (version 4.22.2 or later).
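
If needed, a compatible version of the library can be installed with pip:

pip install 'transformers>=4.22.2'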

How to use

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the CODAfication model and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/arat5-coda')
model = AutoModelForSeq2SeqLM.from_pretrained('CAMeL-Lab/arat5-coda')

# Raw dialectal Arabic input to be normalized into CODA.
text = 'اثنين همبرقر و اثنين قهوة، لوسمحت. باخذهم تيك اواي.'

inputs = tokenizer(text, return_tensors='pt')

# Decode with beam search (5 beams), generating up to 200 tokens.
gen_kwargs = {'num_beams': 5, 'max_length': 200,
              'num_return_sequences': 1,
              'no_repeat_ngram_size': 0, 'early_stopping': False
              }

codafied_text = model.generate(**inputs, **gen_kwargs)
codafied_text = tokenizer.batch_decode(codafied_text,
                                       skip_special_tokens=True,
                                       clean_up_tokenization_spaces=False)[0]

print(codafied_text)
"اثنين همبرقر واثنين قهوة، لو سمحت. بآخذهم تيك اوي."

Citation

@inproceedings{alhafni-etal-2024-exploiting,
    title = "Exploiting Dialect Identification in Automatic Dialectal Text Normalization",
    author = "Alhafni, Bashar  and
      Al-Towaity, Sarah  and
      Fawzy, Ziyad  and
      Nassar, Fatema  and
      Eryani, Fadhl  and
      Bouamor, Houda  and
      Habash, Nizar",
    booktitle = "Proceedings of ArabicNLP 2024",
    month = "aug",
    year = "2024",
    address = "Bangkok, Thailand",
    abstract = "Dialectal Arabic is the primary spoken language used by native Arabic speakers in daily communication. The rise of social media platforms has notably expanded its use as a written language. However, Arabic dialects do not have standard orthographies. This, combined with the inherent noise in user-generated content on social media, presents a major challenge to NLP applications dealing with Dialectal Arabic. In this paper, we explore and report on the task of CODAfication, which aims to normalize Dialectal Arabic into the Conventional Orthography for Dialectal Arabic (CODA). We work with a unique parallel corpus of multiple Arabic dialects focusing on five major city dialects. We benchmark newly developed pretrained sequence-to-sequence models on the task of CODAfication. We further show that using dialect identification information improves the performance across all dialects. We make our code, data, and pretrained models publicly available.",
}