---
language:
- en
- it
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/Meta-Llama-3.1-8B-Text-to-SQL-GGUF
This is quantized version of [ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL](https://huggingface.co/ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL) created using llama.cpp
# Original Model Card
# Meta LLaMA 3.1 8B 4-bit Finetuned Model
This model is a fine-tuned version of `Meta-Llama-3.1-8B`, developed by **ruslanmv** for text-to-SQL generation. It builds on a 4-bit quantized base model, making inference more memory-efficient while maintaining strong natural language generation quality.
---
## Model Details
- **Base Model**: `unsloth/meta-llama-3.1-8b-bnb-4bit`
- **Finetuned by**: ruslanmv
- **Language**: English, Italian
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Tags**:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## Model Usage
### Installation
To use this model, you will need to install the necessary libraries:
```bash
pip install transformers accelerate
```
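Optionally, since the base checkpoint was trained in 4-bit via bitsandbytes, you can also install `bitsandbytes` (`pip install bitsandbytes`) and load the fine-tuned weights in 4-bit to reduce GPU memory. This is a minimal sketch, not part of the original card; it assumes a CUDA GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: loading in 4-bit mirrors the bnb-4bit setup of the base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL",
    device_map="auto",
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained("ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL")
```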
### Loading the Model in Python
Here’s an example of how to load this fine-tuned model using Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Model repository on the Hugging Face Hub
model_name = "ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL"

# Pick a device: CUDA if available, otherwise CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Alpaca-style prompt template used for fine-tuning
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
"""

# Format the prompt without the response part. The input is Italian for:
# "Select all columns of table table1 where the column anni equals 2020."
prompt = alpaca_prompt.format(
    "Provide the SQL query",
    "Seleziona tutte le colonne della tabella table1 dove la colonna anni è uguale a 2020",
)

# Tokenize the prompt and generate text on the selected device
inputs = tokenizer([prompt], return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)

# Decode and keep only the text after "### Response:"
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
response = generated_text.split("### Response:")[-1].strip()
print(response)
```
The generated response is:
```sql
SELECT * FROM table1 WHERE anni = 2020
```
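### Running the GGUF Files
The GGUF files in this repository can also be run directly with llama.cpp or its Python bindings, without `transformers`. Below is a minimal sketch using `llama-cpp-python` (`pip install llama-cpp-python`); the filename pattern is an assumption, so check the repository's file list for the quantization you actually want:
```python
from llama_cpp import Llama

# Download one of the GGUF files from this repo and load it with llama.cpp.
# NOTE: "*Q4_K_M.gguf" is an assumed filename pattern; pick the quantization
# that actually exists in the repository's file list.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Meta-Llama-3.1-8B-Text-to-SQL-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

# Same Alpaca-style prompt as above (input given in English here)
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Provide the SQL query
### Input:
Select all columns of table1 where the column anni equals 2020
### Response:
"""

out = llm(prompt, max_tokens=64, stop=["###"])
print(out["choices"][0]["text"].strip())
```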
### Model Features
- **Text Generation**: This model is fine-tuned to generate coherent and contextually accurate text based on the provided input.
### License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). You are free to use, modify, and distribute this model, provided that you comply with the license terms.
### Acknowledgments
This model was fine-tuned by **ruslanmv** from the `unsloth/meta-llama-3.1-8b-bnb-4bit` base model, building on the original work of the Unsloth team.