---
license: apache-2.0
language:
  - en
  - hi
metrics:
  - perplexity
base_model: meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
library_name: transformers
tags:
  - code
---

# Finetuning Llama-2-7b-hf on a Hindi dataset after transtokenization

This model was trained for 3 hours on a 24 GB RTX A500 GPU, using 1% of the zicsx/mC4-Hindi-Cleaned-3.0 dataset.

Training used Hugging Face PEFT with LoRA in PyTorch.

The transtokenization process is described in --
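For inference, the fine-tuned model can be loaded with the standard `transformers` text-generation pipeline. The repository id below is a placeholder, not this model's actual id:

```python
from transformers import pipeline

# "your-repo/finetuned-llama2-hindi" is a placeholder repo id for the published weights
generator = pipeline("text-generation", model="your-repo/finetuned-llama2-hindi")

# Generate a Hindi continuation from a short prompt
print(generator("भारत एक", max_new_tokens=50)[0]["generated_text"])
```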