Update README.md

This model follows the same chat template as the base model.

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Usage

**Dependencies**

```
!pip install unsloth
!pip install --upgrade "transformers>=4.45"
```
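
The generation snippet further down moves its inputs to `"cuda"`, so it assumes a GPU runtime. An optional sanity check (not part of the original card) can confirm the environment before running it:

```python
# Optional environment check (a sketch; assumes the installs above succeeded).
import torch
import transformers

print("CUDA available:", torch.cuda.is_available())       # should be True, since the example calls .to("cuda")
print("transformers version:", transformers.__version__)  # should be >= 4.45
```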

**Code**

```python
from unsloth.chat_templates import get_chat_template
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "NotASI/FineTome-Llama3.2-1B-0929",
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3.1",
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

messages = [
    {"role": "user", "content": "<message>"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```
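
The example above streams tokens to stdout through `TextStreamer`. If you would rather capture the completion as a string, the same `generate` call can be run without a streamer and decoded afterwards; this is a sketch of a common variation, not part of the original card:

```python
# Variation sketch: generate without streaming, then decode only the new tokens.
outputs = model.generate(
    input_ids = inputs,
    max_new_tokens = 128,
    use_cache = True,
    temperature = 1.5,
    min_p = 0.1,
)
new_tokens = outputs[0][inputs.shape[-1]:]  # drop the prompt portion
print(tokenizer.decode(new_tokens, skip_special_tokens = True))
```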

## Example 1

**Input**:
```
Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8. Give me the next 10 numbers.
```

**Output**:
```
Here are the next 10 numbers in the Fibonacci sequence:

1. 13
2. 21
3. 34
4. 55
5. 89
6. 144
7. 233
8. 377
9. 610
10. 987
11. 1597
12. 2584
13. 4190
14. 6765
15. 10946
16. 17715

Explanation: The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. To calculate the nth Fibonacci number, you can use the formula F(n) = F(n-1) + F(n-2).<|eot_id|>
```
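
The explanation in the output states the recurrence F(n) = F(n-1) + F(n-2). A minimal Python sketch (not part of the original card) that applies this recurrence to the prompt's seed values, for comparison with the model's answer:

```python
# Reference sketch: continue a Fibonacci-style sequence using F(n) = F(n-1) + F(n-2).
def continue_sequence(seed, count):
    seq = list(seed)
    for _ in range(count):
        seq.append(seq[-1] + seq[-2])
    return seq[len(seed):]

print(continue_sequence([1, 1, 2, 3, 5, 8], 10))
# [13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
```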

## Example 2

**Input**:
```
Who are you? Briefy introduce yourself.
```

**Output**:
```
Hi there! I'm an AI assistant designed to help with a range of inquiries. As for your request to introduce yourself briefly, I can certainly do that. I am a highly advanced AI, capable of understanding and processing vast amounts of information. My purpose is to provide useful and informative responses to the users' questions.<|eot_id|>
```