Meta Llama 3 8B fine-tuned on the Marathi Alpaca-cleaned dataset for 1.5 epochs on an A100 40GB GPU

Model Overview

Marathi-Llama3 is a fine-tuned version of Llama 3 tailored specifically for the Marathi language. It builds on the Llama 3 architecture to provide accurate and nuanced responses in Marathi, bringing advanced AI capabilities to Marathi-speaking communities.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model_name = "Echelon-AI/marathi-llama3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text from a Marathi prompt ("Please tell me a story in Marathi.")
input_text = "कृपया मला मराठी भाषेत एक गोष्ट सांगा."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
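
Since the model was fine-tuned on an Alpaca-style dataset, wrapping prompts in the Alpaca instruction template may improve response quality. The template below is an assumption based on the standard Alpaca format; the exact format used during fine-tuning is not documented here. It reuses the tokenizer and model loaded above.

# Hypothetical Alpaca-style prompt template (assumption: the standard Alpaca
# format was used during fine-tuning; adjust if the actual format differs)
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = alpaca_prompt.format(instruction="कृपया मला मराठी भाषेत एक गोष्ट सांगा.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))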

A GGUF version of the model is also available.
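
For CPU or low-VRAM inference, the GGUF weights can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the quantized filename is a placeholder, so check the GGUF repository for the actual file names.

from llama_cpp import Llama

# Path to a downloaded GGUF file; the filename here is hypothetical
llm = Llama(model_path="marathi-llama3.Q4_K_M.gguf", n_ctx=2048)

# Generate a completion for a Marathi prompt
output = llm("कृपया मला मराठी भाषेत एक गोष्ट सांगा.", max_tokens=100)
print(output["choices"][0]["text"])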

Model Details

Format: Safetensors
Model size: 8.03B params
Tensor type: BF16
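
Because the weights are stored in BF16, loading them in that dtype halves memory use relative to float32 (roughly 16 GB vs. 32 GB for 8.03B parameters). A minimal sketch; device_map="auto" additionally requires the accelerate package.

import torch
from transformers import AutoModelForCausalLM

# Load the weights in their native BF16 dtype and place them on available
# devices automatically (requires accelerate for device_map="auto")
model = AutoModelForCausalLM.from_pretrained(
    "Echelon-AI/marathi-llama3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)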

Dataset used to train Echelon-AI/marathi-llama3: the Marathi Alpaca-cleaned dataset.