---
library_name: transformers
tags:
- code
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Gemma2_SQLGEN
This model generates SQL code from natural-language user prompts.
## Model Details
### Model Description
This model is fine-tuned to generate SQL code from user prompts. Prompts must follow this structure:

`<s>##Question: <your question> \n ##Context: <CREATE TABLE statement for your table> \n ##Answer:`
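For example, with the table schema used in the Direct Use snippet below, a complete prompt looks like:

```
<s>##Question: find unique items from name column. 
 ##Context: CREATE TABLE head (head_id VARCHAR, name VARCHAR) 
 ##Answer:
```

The model then completes the prompt with the corresponding SQL statement after `##Answer:`.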
This is the model card of a 🤗 transformers model that has been pushed to the Hub.
- **Developed by:** Ali Bidaran
- **Model type:** Causal language model (text generation)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Gemma 2B
## Uses

### Direct Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "Gemma2_SQLGEN"  # use the full Hub repo id (user/model) if loading remotely

# Quantize to 4-bit NF4 with bfloat16 compute to cut memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "right"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},
)

# Build the prompt in the expected ##Question / ##Context / ##Answer format.
prompt = "find unique items from name column."
text = f"<s>##Question: {prompt} \n ##Context: CREATE TABLE head (head_id VARCHAR, name VARCHAR) \n ##Answer:"

inputs = tokenizer(text, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=400,
    do_sample=True,
    top_p=0.92,
    top_k=10,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
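Because the decoded output echoes the full prompt, you may want to keep only the text after the final `##Answer:` marker. A minimal post-processing sketch (the `extract_sql` helper is illustrative, not part of this repository):

```python
def extract_sql(decoded: str) -> str:
    """Return the text generated after the last ##Answer: marker."""
    # The decoded string is the prompt followed by the completion,
    # so split on the marker and keep the final segment.
    return decoded.split("##Answer:")[-1].strip()

sql = extract_sql(tokenizer.decode(outputs[0], skip_special_tokens=True))
print(sql)
```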