---
library_name: transformers
pipeline_tag: text-generation
tags:
- BGPT
- meta
- pytorch
- llama
- llama-3
---

## Model Description

This model is a fine-tuned version of Llama-3.2-3B-Instruct designed for generating multilingual output across Indic languages. It was trained on a diverse, curated dataset comprising Hindi, Punjabi, Marathi, Malayalam, Oriya, Kannada, Gujarati, Bengali, Urdu, Tamil, and Telugu, and is optimized for natural language tasks such as translation, summarization, and conversational generation in these languages.

- **Developed by:** [More Information Needed]
- **Model type:** Fine-tuned LLaMA (multilingual text generation)
- **Language(s) (NLP):** Hindi, Punjabi, Marathi, Malayalam, Oriya, Kannada, Gujarati, Bengali, Urdu, Tamil, Telugu
- **Finetuned from model:** Llama-3.2-3B-Instruct

## How to Get Started with the Model

Make sure your `transformers` installation is up to date: `pip install --upgrade transformers`.

Use the code below to get started with the model.

```python
import torch
from transformers import pipeline

model_id = "Onkarn/ML-Test-v01"

# Load the model with bfloat16 weights, placing layers automatically
# across the available devices.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant who responds in Hindi"},
    {"role": "user", "content": "कर्नाटक की राजधानी क्या है?"},  # "What is the capital of Karnataka?"
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full conversation; the last entry is the reply.
print(outputs[0]["generated_text"][-1])
```
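The chat pipeline echoes the whole conversation back, so the assistant's reply is the final message. A minimal sketch of pulling out just the reply text, using an illustrative mock output (the reply string here is a plausible example, not actual model output):

```python
# Mock of the structure the text-generation pipeline returns for chat input.
outputs = [{
    "generated_text": [
        {"role": "system", "content": "You are a helpful assistant who responds in Hindi"},
        {"role": "user", "content": "कर्नाटक की राजधानी क्या है?"},
        {"role": "assistant", "content": "कर्नाटक की राजधानी बेंगलुरु है।"},  # illustrative reply
    ]
}]

# The last message in the generated conversation is the assistant's turn.
reply = outputs[0]["generated_text"][-1]["content"]
print(reply)
```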

## Training Details

### Training Data

The training dataset comprised a diverse collection of text sources in Hindi, Punjabi, Marathi, Malayalam, Oriya, Kannada, Gujarati, Bengali, Urdu, Tamil, and Telugu.

### Training Parameters

- **Optimization Technique:** LoRA (Low-Rank Adaptation)
- **Epochs:** 3
- **Batch Size:** 2 (per-device train batch size)
- **Learning Rate:** 5e-05
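To illustrate what LoRA does, here is a toy numeric sketch: instead of updating the full weight matrix `W`, training learns two small matrices `B` and `A` whose product is a low-rank update. The rank `r=8` and scaling `alpha=16` below are illustrative values, not the settings actually used for this model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16  # r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

# Effective weight seen by the model during/after fine-tuning.
W_eff = W + (alpha / r) * B @ A

# With B initialized to zero, training starts exactly at the base model.
assert np.allclose(W_eff, W)

# Trainable parameters shrink from d_out*d_in to r*(d_out + d_in).
full, lora = d_out * d_in, r * (d_out + d_in)
print(f"full: {full}, LoRA: {lora} ({lora / full:.1%})")
```

Only `A` and `B` receive gradients, which is why a 3B-parameter model can be fine-tuned on a single T4.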

## Environmental Impact

- **Hardware Type:** T4
- **Hours used:** 29
- **Cloud Provider:** Google Cloud Platform
- **Compute Region:** asia-southeast1
- **Carbon Emitted:** estimated at 0.85 kgCO₂eq, 100% of which was directly offset by the cloud provider.
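The reported figure is consistent with the usual hours × power × PUE × grid-intensity estimate. All inputs below are assumptions for a rough sanity check (~70 W average T4 draw, PUE ~1.1, ~0.4 kgCO₂eq/kWh for the region), not measured values:

```python
hours = 29
gpu_kw = 0.070   # assumed average T4 power draw, in kW
pue = 1.1        # assumed datacenter power usage effectiveness
intensity = 0.4  # assumed grid carbon intensity, kgCO2eq per kWh

energy_kwh = hours * gpu_kw * pue
emissions = energy_kwh * intensity
print(f"{emissions:.2f} kgCO2eq")  # same order of magnitude as the reported 0.85
```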