---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
  - trl
  - sft
  - llama-cpp
  - gguf-my-repo
base_model: FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
datasets:
  - UncovAI/fineweb_CC-MAIN-2024-18_100k_output_UncovAI_83362
model-index:
  - name: 83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 28.69
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 8.13
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 0
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.13
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 1.42
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 6.17
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
          name: Open LLM Leaderboard
---

# mattritchey/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-Q4_K_M-GGUF

This model was converted to GGUF format from FlofloB/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
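If you want to pull the quantized weights to disk yourself instead of letting llama.cpp fetch them, a download along these lines should work (a sketch; the file name is taken from the commands below, and `--local-dir .` is just an illustrative choice):

```bash
# Download only the Q4_K_M GGUF file from this repo into the current directory
huggingface-cli download mattritchey/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-Q4_K_M-GGUF \
  --include "83k_continued_pretraining_qwen2.5-0.5b-instruct_unsloth_merged_16bit-q4_k_m.gguf" \
  --local-dir .
```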

## Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
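To confirm the install before downloading any weights, printing the version is a quick sanity check (the exact output format varies by build):

```bash
# Verify llama.cpp is installed and on your PATH
llama-cli --version
```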

Invoke the llama.cpp server or the CLI:

CLI:

```bash
llama-cli --hf-repo mattritchey/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-Q4_K_M-GGUF --hf-file 83k_continued_pretraining_qwen2.5-0.5b-instruct_unsloth_merged_16bit-q4_k_m.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo mattritchey/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-Q4_K_M-GGUF --hf-file 83k_continued_pretraining_qwen2.5-0.5b-instruct_unsloth_merged_16bit-q4_k_m.gguf -c 2048
```
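Once the server is up, it exposes an OpenAI-compatible HTTP API. A minimal request against the default address (127.0.0.1:8080, assuming you have not changed --host or --port) might look like:

```bash
# Send a chat completion request to the running llama-server
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "The meaning to life and the universe is"}
    ],
    "max_tokens": 128
  }'
```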

Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
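For an NVIDIA GPU on Linux, the same build with the hardware-specific flag mentioned above would look like this (assumes the CUDA toolkit is already installed):

```bash
# Build with CURL support and CUDA offloading enabled
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```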

Step 3: Run inference through the main binary:

```bash
./llama-cli --hf-repo mattritchey/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-Q4_K_M-GGUF --hf-file 83k_continued_pretraining_qwen2.5-0.5b-instruct_unsloth_merged_16bit-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or:

```bash
./llama-server --hf-repo mattritchey/83k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-Q4_K_M-GGUF --hf-file 83k_continued_pretraining_qwen2.5-0.5b-instruct_unsloth_merged_16bit-q4_k_m.gguf -c 2048
```