---
license: llama3
datasets:
- Telugu-LLM-Labs/marathi_alpaca_yahma_cleaned_filtered
---
Meta Llama 3 8B fine-tuned on the cleaned Marathi Alpaca dataset for 1.5 epochs on a single A100 40GB GPU.
## Model Overview
**Marathi-Llama3** is a fine-tuned version of Llama 3 tailored for the Marathi language. It leverages the Llama 3 architecture to provide accurate and nuanced responses in Marathi, bringing advanced AI capabilities to Marathi-speaking communities.

Example usage with 🤗 Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "Echelon-AI/marathi-llama3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Generate text
input_text = "कृपया मला मराठी भाषेत एक गोष्ट सांगा."  # "Please tell me a story in Marathi."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
GGUF quantizations are available at [marathi-llama3-GGUF](https://huggingface.co/ayan-sh003/marathi-llama3-GGUF).
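
For llama.cpp-based inference, the GGUF build can be loaded with `llama-cpp-python`. This is a minimal sketch, not part of the official card: the quantization filename below is an assumption, so check the GGUF repository for the files it actually provides.

```python
# Sketch: GGUF inference via llama-cpp-python (pip install llama-cpp-python huggingface_hub)
from llama_cpp import Llama

# Download a GGUF file from the Hub and load it.
# The filename pattern is an assumption -- pick a quantization listed in the GGUF repo.
llm = Llama.from_pretrained(
    repo_id="ayan-sh003/marathi-llama3-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

# Run a completion on a Marathi prompt ("Please tell me a story in Marathi.")
output = llm("कृपया मला मराठी भाषेत एक गोष्ट सांगा.", max_tokens=200)
print(output["choices"][0]["text"])
```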