This is Google's Gemma 2 2B model, fine-tuned on the lamini/spider_text_to_sql dataset for text-to-SQL generation.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = 'shiv-am-04/gemma2-2b-SQL'

# Load the tokenizer and configure padding.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'

# access_token should hold your Hugging Face access token
# (only needed for gated or private checkpoints).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map='auto',
    attn_implementation='eager',
    token=access_token,
)
```
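
Once the model and tokenizer are loaded, generation works as with any causal LM. A minimal sketch follows; the prompt template used during fine-tuning is not documented here, so the plain-question prompt below is an assumption:

```python
import torch

# Hypothetical prompt; the exact fine-tuning prompt format is an assumption.
prompt = "Convert the following question to SQL: How many singers do we have?"

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```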

Model size: 2.61B parameters (FP16, stored as Safetensors).