---
language:
  - en
license: llama3.2
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - llama-3
  - trl
  - sft
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
datasets:
  - mlabonne/FineTome-100k
model-index:
  - name: FineTome-Llama3.2-1B-0929
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 39.91
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 5.74
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 1.28
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.02
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 2.66
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 4.76
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929
          name: Open LLM Leaderboard
---

Notice

The model was submitted to the Open LLM Leaderboard for full evaluation.

  • MMLU-PRO (5-shot) (self-reported): 0.1553 ± 0.0033
  • MMLU (0-shot) (self-reported): 0.3416 ± 0.0040
  • Hellaswag (0-shot) (self-reported):
    • acc: 0.4284 ± 0.0049
    • acc_norm: 0.5681 ± 0.0049

A code- and math-optimized version is coming soon!

IMPORTANT

If you encounter the following error:

exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3

Please upgrade your transformers package by running the following command:

pip install --upgrade "transformers>=4.45"
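
This error typically means the installed transformers build (and its tokenizers dependency) predates Llama 3.2 support, which landed in 4.45. You can confirm the installed version from Python:

import transformers

# Llama 3.2 checkpoints require transformers >= 4.45.
print(transformers.__version__)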

Uploaded model

  • Developed by: NotASI
  • License: llama3.2
  • Finetuned from model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit

Details

This model was trained on mlabonne/FineTome-100k for 2 epochs with rsLoRA + QLoRA and reached a final training loss of 0.7967. A sketch of this setup follows below.
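
For the curious, here is a minimal sketch of what an rsLoRA + QLoRA setup looks like with Unsloth and TRL, following the SFTTrainer signature used in the Unsloth notebooks. The exact hyperparameters (rank, alpha, learning rate, batch size) used for this model are not published, so every value below is illustrative only:

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# QLoRA: start from the 4-bit quantized base checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# rsLoRA: rank-stabilized scaling (lora_alpha / sqrt(r) rather than lora_alpha / r).
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # illustrative rank, not the published value
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_rslora = True,
)

# FineTome stores ShareGPT-style conversations; they must be rendered to a
# "text" column with the llama-3.1 chat template before training.
dataset = load_dataset("mlabonne/FineTome-100k", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    args = TrainingArguments(
        num_train_epochs = 2,  # matches the 2 epochs reported above
        per_device_train_batch_size = 2,  # illustrative
        output_dir = "outputs",
    ),
)
trainer.train()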

This model uses the same chat template as the base model.
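
To inspect what that template actually produces, you can render a conversation without tokenizing it. A minimal sketch, assuming the uploaded tokenizer config already carries the chat template (if it does not, attach it with get_chat_template as shown in the Code section below):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NotASI/FineTome-Llama3.2-1B-0929")

# Render the conversation as a plain string to see the raw prompt.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi!"}],
    tokenize = False,
    add_generation_prompt = True,
)
print(prompt)
# Expect Llama-3-style headers, roughly:
#   <|begin_of_text|><|start_header_id|>user<|end_header_id|> ... <|eot_id|>
# followed by an opened assistant header for the model to complete.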

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Usage

Sample notebook (Unsloth): https://colab.research.google.com/drive/1cSCvemKsC0JVmMs4jdfoTqM8SQQ_DSBt?usp=sharing

Dependencies

!pip install unsloth
!pip install --upgrade "transformers>=4.45"

Code

from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
from transformers import TextStreamer

# Load the fine-tuned model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "NotASI/FineTome-Llama3.2-1B-0929",
)

# Attach the Llama-3.1 chat template the model was fine-tuned with.
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3.1",
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

messages = [
    {"role": "user", "content": "Hi!"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

# Stream the completion token by token instead of waiting for the full output.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
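
If you prefer plain transformers over the Unsloth runtime, the same generation can be sketched as below. This assumes a CUDA GPU and fp16 weights; if the repository instead stores bnb 4-bit weights like the base model, bitsandbytes must also be installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("NotASI/FineTome-Llama3.2-1B-0929")
model = AutoModelForCausalLM.from_pretrained(
    "NotASI/FineTome-Llama3.2-1B-0929",
    torch_dtype = torch.float16,
    device_map = "auto",
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi!"}],
    add_generation_prompt = True,
    return_tensors = "pt",
).to(model.device)

# Sampling must be enabled explicitly for temperature/min_p to take effect.
streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(inputs, streamer = streamer, max_new_tokens = 128,
                   do_sample = True, temperature = 1.5, min_p = 0.1)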

Example 1

Input:

Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8. Give me the next 10 numbers.

Output:

Here are the next 10 numbers in the Fibonacci sequence:

1. 13
2. 21
3. 34
4. 55
5. 89
6. 144
7. 233
8. 377
9. 610
10. 987
11. 1597
12. 2584
13. 4190
14. 6765
15. 10946
16. 17715

Explanation: The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. To calculate the nth Fibonacci number, you can use the formula F(n) = F(n-1) + F(n-2).<|eot_id|>
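
As a sanity check on the arithmetic: the reply lists 16 numbers rather than the requested 10, and the later entries drift from the true sequence (the 13th and 16th should be 4181 and 17711, not 4190 and 17715). The expected continuation is easy to verify:

# Print the next 10 Fibonacci numbers after ... 5, 8.
a, b = 5, 8
for _ in range(10):
    a, b = b, a + b
    print(b)
# Prints 13, 21, 34, 55, 89, 144, 233, 377, 610, 987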

Example 2

Input:

Who are you? Briefy introduce yourself.

Output:

Hi there! I'm an AI assistant designed to help with a range of inquiries. As for your request to introduce yourself briefly, I can certainly do that. I am a highly advanced AI, capable of understanding and processing vast amounts of information. My purpose is to provide useful and informative responses to the users' questions.<|eot_id|>

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929

Metric               Value
Avg.                  9.56
IFEval (0-Shot)      39.91
BBH (3-Shot)          5.74
MATH Lvl 5 (4-Shot)   1.28
GPQA (0-shot)         3.02
MuSR (0-shot)         2.66
MMLU-PRO (5-shot)     4.76