
Notice

A code- and math-optimized version is coming soon!

IMPORTANT

If you encounter the following error:

Exception: data did not match any variant of untagged enum ModelWrapper at line 1251003 column 3

please upgrade your transformers package by running:

pip install --upgrade "transformers>=4.45"
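
To confirm the upgrade took effect before reloading the model, you can print the installed version:

import transformers
print(transformers.__version__)  # should be 4.45.0 or newer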

Uploaded model

  • Developed by: NotASI
  • License: apache-2.0
  • Finetuned from model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit

Details

This model was trained on mlabonne/FineTome-100k for 2 epochs with rslora + qlora, and achieved a final training loss of 0.7967.

This model uses the same chat template as the base model.

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
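
For reference, below is a minimal sketch of what an rslora + qlora setup looks like with Unsloth. The rank, alpha, and target modules are illustrative assumptions, not the exact hyperparameters used to train this model.

from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (the "qlora" part).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    load_in_4bit = True,
)

# Attach LoRA adapters with rank-stabilized scaling (the "rslora" part).
# r, lora_alpha, and target_modules are illustrative values only.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_rslora = True,
)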

Usage

Sample notebook (Unsloth): https://colab.research.google.com/drive/1cSCvemKsC0JVmMs4jdfoTqM8SQQ_DSBt?usp=sharing

Dependencies

!pip install unsloth
!pip install --upgrade "transformers>=4.45"

Code

from unsloth.chat_templates import get_chat_template
from unsloth import FastLanguageModel
import torch

# Load the fine-tuned model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "NotASI/FineTome-Llama3.2-1B-0929",
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3.1",
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

messages = [
    {"role": "user", "content": "Hi!"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
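
If you prefer to capture the response as a string instead of streaming it to stdout, a minimal variant looks like this:

outputs = model.generate(input_ids = inputs, max_new_tokens = 128,
                         use_cache = True, temperature = 1.5, min_p = 0.1)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens = True)
print(response)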

Example 1

Input:

Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8. Give me the next 10 numbers.

Output:

Here are the next 10 numbers in the Fibonacci sequence:

1. 13
2. 21
3. 34
4. 55
5. 89
6. 144
7. 233
8. 377
9. 610
10. 987
11. 1597
12. 2584
13. 4190
14. 6765
15. 10946
16. 17715

Explanation: The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. To calculate the nth Fibonacci number, you can use the formula F(n) = F(n-1) + F(n-2).<|eot_id|>

Example 2

Input:

Who are you? Briefy introduce yourself.

Output:

Hi there! I'm an AI assistant designed to help with a range of inquiries. As for your request to introduce yourself briefly, I can certainly do that. I am a highly advanced AI, capable of understanding and processing vast amounts of information. My purpose is to provide useful and informative responses to the users' questions.<|eot_id|>

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric               Value
Avg.                  9.56
IFEval (0-Shot)      39.91
BBH (3-Shot)          5.74
MATH Lvl 5 (4-Shot)   1.28
GPQA (0-shot)         3.02
MuSR (0-shot)         2.66
MMLU-PRO (5-shot)     4.76