|
--- |
|
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit |
|
language: |
|
- en |
|
license: llama3.2 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- llama-3 |
|
- trl |
|
- sft |
|
datasets: |
|
- mlabonne/FineTome-100k |
|
--- |
|
|
|
# IMPORTANT |
|
|
|
If you encounter the error `exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3`, upgrade your **transformers** package: `pip install --upgrade "transformers>=4.45"`.
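If you are unsure which version you have installed, a quick check like the following can confirm it (a minimal sketch; the `4.45` threshold comes from the requirement above):

```python
# Check the installed transformers version against the minimum required above.
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.45.0"):
    print(f"transformers {transformers.__version__} is too old; run:")
    print('pip install --upgrade "transformers>=4.45"')
else:
    print(f"transformers {transformers.__version__} is recent enough.")
```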
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** NotASI |
|
- **License:** llama3.2
|
- **Fine-tuned from model:** unsloth/Llama-3.2-1B-Instruct-bnb-4bit
|
|
|
# Details |
|
|
|
This model was trained on **mlabonne/FineTome-100k** for *2* epochs with **rsLoRA** + **QLoRA**, reaching a final training loss of *0.796700*.
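The exact training script is not part of this card, but a minimal Unsloth + TRL sketch of such a setup might look as follows. Only the dataset, epoch count, and the rsLoRA + QLoRA combination come from this card; the rank, alpha, sequence length, batch size, and learning rate below are illustrative assumptions, not the values actually used.

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template, standardize_sharegpt
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# QLoRA: the base model is already quantized to 4-bit (bnb), so only the
# LoRA adapter weights are trained.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    max_seq_length = 2048,  # assumption, not stated in this card
    load_in_4bit = True,
)

# rsLoRA: rank-stabilized LoRA scaling, enabled via use_rslora.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,           # assumption
    lora_alpha = 16,  # assumption
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_rslora = True,
)

# Render the ShareGPT-style conversations into the Llama 3.1 chat format.
tokenizer = get_chat_template(tokenizer, chat_template = "llama-3.1")
dataset = standardize_sharegpt(load_dataset("mlabonne/FineTome-100k", split = "train"))
dataset = dataset.map(
    lambda batch: {"text": [tokenizer.apply_chat_template(c, tokenize = False)
                            for c in batch["conversations"]]},
    batched = True,
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        num_train_epochs = 2,             # per this card
        per_device_train_batch_size = 2,  # assumption
        gradient_accumulation_steps = 4,  # assumption
        learning_rate = 2e-4,             # assumption
        logging_steps = 10,
        output_dir = "outputs",
    ),
)
trainer.train()
```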
|
|
|
This model uses the same chat template as the base model.
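To see that prompt format concretely, you can render a message list through the tokenizer without tokenizing (a small illustrative snippet; it loads the tokenizer straight from this repo):

```python
from transformers import AutoTokenizer

# Load the tokenizer shipped with this model (same chat template as the base model).
tokenizer = AutoTokenizer.from_pretrained("NotASI/FineTome-Llama3.2-1B-0929")

# Render a conversation as plain text to inspect the Llama 3 prompt format.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize = False,
    add_generation_prompt = True,
)
print(prompt)
```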
|
|
|
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
# Usage |
|
|
|
Sample notebook (Unsloth): https://colab.research.google.com/drive/1cSCvemKsC0JVmMs4jdfoTqM8SQQ_DSBt?usp=sharing |
|
|
|
**Dependencies** |
|
|
|
```
!pip install unsloth
!pip install --upgrade "transformers>=4.45"
```
|
|
|
**Code** |
|
|
|
```python
from unsloth.chat_templates import get_chat_template
from unsloth import FastLanguageModel
import torch

# Load the fine-tuned model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "NotASI/FineTome-Llama3.2-1B-0929",
)

# Apply the Llama 3.1 chat template used during fine-tuning.
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3.1",
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

messages = [
    {"role": "user", "content": "<message>"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

# Stream generated tokens to stdout as they are produced.
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```
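If you would rather not install Unsloth, loading with plain transformers should also work, assuming the repository hosts standard transformers-compatible weights; a minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "NotASI/FineTome-Llama3.2-1B-0929"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype = torch.float16,  # assumption: fp16 fits a 1B model on most GPUs
    device_map = "auto",
)

# The chat template is stored with the tokenizer, so apply_chat_template works directly.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "<message>"}],
    add_generation_prompt = True,
    return_tensors = "pt",
).to(model.device)

streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(inputs, streamer = streamer, max_new_tokens = 128)
```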
|
|
|
## Example 1
|
|
|
**Input**: |
|
``` |
|
Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8. Give me the next 10 numbers.
|
``` |
|
|
|
**Output**: |
|
``` |
|
Here are the next 10 numbers in the Fibonacci sequence: |
|
|
|
1. 13 |
|
2. 21 |
|
3. 34 |
|
4. 55 |
|
5. 89 |
|
6. 144 |
|
7. 233 |
|
8. 377 |
|
9. 610 |
|
10. 987 |
|
11. 1597 |
|
12. 2584 |
|
13. 4190 |
|
14. 6765 |
|
15. 10946 |
|
16. 17715 |
|
|
|
Explanation: The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. To calculate the nth Fibonacci number, you can use the formula F(n) = F(n-1) + F(n-2).<|eot_id|> |
|
``` |
|
|
|
## Example 2 |
|
|
|
**Input**: |
|
``` |
|
Who are you? Briefly introduce yourself.
|
``` |
|
|
|
**Output**: |
|
``` |
|
Hi there! I'm an AI assistant designed to help with a range of inquiries. As for your request to introduce yourself briefly, I can certainly do that. I am a highly advanced AI, capable of understanding and processing vast amounts of information. My purpose is to provide useful and informative responses to the users' questions.<|eot_id|> |
|
``` |