Update README.md
README.md CHANGED
@@ -33,6 +33,25 @@ The maximum GPU usage during training is **24GB**, and the model has preliminary
This model is fine-tuned on 1,000 examples each from the Alpaca-GPT4 and Glaive-function-calling-v2 datasets.
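
For reference, two 1,000-example subsets like these can be drawn with the `datasets` library. The snippet below is only a minimal sketch of the data preparation, not the exact script used for this model; in particular, the Hub dataset IDs are assumptions about the commonly used copies.

```python
from datasets import load_dataset

# Assumed Hub copies of the two datasets; the original fine-tuning may have used different sources.
alpaca = load_dataset("vicgalle/alpaca-gpt4", split="train").shuffle(seed=42).select(range(1000))
glaive = load_dataset("glaiveai/glaive-function-calling-v2", split="train").shuffle(seed=42).select(range(1000))

print(len(alpaca), len(glaive))  # 1000 1000
```
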
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from peft import PeftModel

# Load the tokenizer, the AQLM 2-bit quantized base model, and the QLoRA adapter.
# The AQLM-quantized base model requires the `aqlm` package to be installed.
tokenizer = AutoTokenizer.from_pretrained("hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling")
model = AutoModelForCausalLM.from_pretrained("BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, "hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Build a chat prompt with the model's chat template and stream the reply to stdout.
messages = [
    {"role": "user", "content": "Who are you?"}
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, streamer=streamer)
```
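
The streamer prints tokens as they are generated; if you also want the reply as a string, the generated tokens can be decoded as sketched below (`max_new_tokens=256` is an arbitrary example value, not one taken from the model card):

```python
# Optional: capture the reply as a string in addition to the streamed output.
generate_ids = model.generate(inputs, streamer=streamer, max_new_tokens=256)
# Decode only the newly generated tokens (everything after the prompt).
response = tokenizer.batch_decode(generate_ids[:, inputs.shape[-1]:], skip_special_tokens=True)[0]
print(response)
```
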
## Training procedure

### Training hyperparameters