
OpenHathi Base GPTQ

Description

This repo contains GPTQ model files for Sarvam AI's OpenHathi.

The files are made using AutoGPTQ with the following config:

quantization_config: {
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.1,
  "desc_act": true
}
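
For reference, a minimal sketch of how this config maps onto AutoGPTQ's BaseQuantizeConfig. The base model id below is an assumption about the source checkpoint, not something stated in this card:

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights
    group_size=128,  # quantize weights in groups of 128
    damp_percent=0.1,
    desc_act=True,   # activation-order ("act-order") quantization
)

# Assumed source checkpoint; adjust if the actual base model differs
model = AutoGPTQForCausalLM.from_pretrained(
    "sarvamai/OpenHathi-7B-Hi-v0.1-Base", quantize_config
)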

We use a custom calibration dataset containing both Hindi and English Wikipedia articles. Examples are truncated to max_length=1024, so the model may not perform well beyond that context size.
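
A hedged sketch of how such calibration examples could be prepared; the article list here is a placeholder and the tokenizer setup is an assumption, not the exact pipeline used:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sarvamai/OpenHathi-7B-Hi-v0.1-Base")

# Placeholder: in practice, a list of raw Hindi and English article strings
wiki_articles = ["..."]

# Truncate every article to the 1024-token calibration length
examples = [
    tokenizer(text, truncation=True, max_length=1024, return_tensors="pt")
    for text in wiki_articles
]

# model.quantize(examples) would then run GPTQ calibration over these samples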

Prompt template

This is a base model, not tuned to follow instructions, so feel free to use any format; Alpaca/Vicuna-style prompts work fine.
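
As an illustration, a minimal Alpaca-style prompt wrapper in Python. The helper name is hypothetical and the layout is just the standard Alpaca format, not a requirement of this model:

def alpaca_prompt(instruction: str) -> str:
    # Standard Alpaca layout; any similar format should work with a base model
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )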

Oobabooga

The standard Oobabooga text-generation-webui works with the ExLlamaV2 or AutoGPTQ loader.

Using in code

from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = 'cmeraki/OpenHathi-7B-Hi-v0.1-Base-gptq'

# Load the quantized weights onto the first GPU
model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0")
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)

# Tokenize a prompt and move it to the model's device
tokens = tokenizer("do aur do", return_tensors="pt").to(model.device)

# Generate up to the 1024-token context used during quantization
print(tokenizer.decode(model.generate(**tokens, max_length=1024)[0]))