QuantFactory/Llama2-7B-Hindi-finetuned-GGUF
This is a quantized (GGUF) version of subhrokomol/Llama2-7B-Hindi-finetuned, created using llama.cpp.
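GGUF quantization in llama.cpp works by splitting weight tensors into small blocks and storing low-bit integers plus a per-block scale. The snippet below is a simplified numeric sketch of that idea (roughly Q8_0-style); the real GGUF formats are more involved, and the function names here are illustrative, not llama.cpp APIs.

```python
# Simplified sketch of block quantization as used by llama.cpp's Q8_0 format.
# Assumption: real GGUF quantization handles fixed block sizes, packing, and
# several bit-widths; this only shows the per-block scale + int8 core idea.

def quantize_q8_block(values):
    """Quantize one block of floats to int8 using a shared scale."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax > 0 else 1.0
    quants = [round(v / scale) for v in values]  # each in [-127, 127]
    return scale, quants

def dequantize_q8_block(scale, quants):
    """Recover approximate floats from the stored scale and int8 values."""
    return [scale * q for q in quants]

weights = [0.12, -0.5, 0.33, 0.01]       # one tiny example block
scale, quants = quantize_q8_block(weights)
restored = dequantize_q8_block(scale, quants)
```

The reconstruction error per value is bounded by half the block scale, which is why quantized models stay close to the original in quality while shrinking file size.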
Original Model Card
Fine-tune of Llama-2-7B-hf on a Hindi dataset after transtokenization
This model was trained for 3 hours on a 24 GB RTX A500 GPU, using 1% of the zicsx/mC4-Hindi-Cleaned-3.0 dataset.
We used Hugging Face PEFT-LoRA with PyTorch for training.
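PEFT-LoRA keeps the base model weights frozen and learns a small low-rank update, which is what makes fine-tuning a 7B model feasible on a single 24 GB GPU. Below is a minimal numeric sketch of the LoRA idea with tiny plain-Python matrices; the actual training uses torch and the peft library, and the helper names here are illustrative only.

```python
# Minimal sketch of the LoRA update: W_eff = W + (alpha / r) * B @ A,
# where W is frozen and only the small matrices A (r x d_in) and
# B (d_out x r) are trained. Assumption: real PEFT-LoRA applies this
# inside attention/MLP layers via torch modules, not plain lists.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha, r):
    """Frozen base weight W plus the scaled low-rank adapter update."""
    delta = matmul(B, A)              # (d_out x r) @ (r x d_in)
    s = alpha / r                     # standard LoRA scaling factor
    return [[W[i][j] + s * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]          # frozen 2x2 base weight
B = [[1.0], [0.0]]                    # d_out x r, with rank r = 1
A = [[0.5, 0.5]]                      # r x d_in
W_eff = lora_effective_weight(W, A, B, alpha=2.0, r=1)
```

With rank r much smaller than the weight dimensions, the number of trainable parameters drops by orders of magnitude, which is the whole point of the method.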
Transtokenization process in --
Downloads last month: 334
Model tree for QuantFactory/Llama2-7B-Hindi-finetuned-GGUF
Base model: meta-llama/Llama-2-7b-hf