---
language:
  - en
license: other
license_name: llama3
model_name: Llama3 70B Instruct
arxiv: 2307.09288
base_model: meta-llama/Meta-Llama-3-70B-Instruct
inference: false
model_creator: Meta Llama3
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
---

# Meta-Llama-3-70B-Instruct-GGUF

## Original Model

[meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

## Run with LlamaEdge

- LlamaEdge version: v0.8.3 and above

- Prompt template

  - Prompt type: `llama-3-chat`

  - Prompt string

    ```text
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

    {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    ```
      
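    With the placeholders substituted (the messages here are purely illustrative), a single turn would be rendered as:

    ```text
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

    What is the capital of France?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    ```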
- Context size: 8192

- Run as LlamaEdge service

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Meta-Llama-3-70B-Instruct-Q5_K_M.gguf \
    llama-api-server.wasm \
    --prompt-template llama-3-chat \
    --ctx-size 8192 \
    --model-name Llama-3-70b
  ```
    
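  Once the server is up, it exposes an OpenAI-compatible chat API. A request sketch follows; the port (8080) and endpoint path are the LlamaEdge defaults documented at the time of writing, so verify them against your LlamaEdge version:

  ```bash
  # Query the running API server; the "model" value must match
  # the --model-name passed to llama-api-server.wasm above.
  curl -X POST http://localhost:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{
          "model": "Llama-3-70b",
          "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"}
          ]
        }'
  ```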
- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Meta-Llama-3-70B-Instruct-Q5_K_M.gguf \
    llama-chat.wasm \
    --prompt-template llama-3-chat \
    --ctx-size 8192
  ```
    
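The commands above assume that WasmEdge with its GGML plugin, the LlamaEdge wasm apps, and the model file are all present in the working directory. A minimal setup sketch (the URLs follow the install patterns documented by WasmEdge, LlamaEdge, and Hugging Face at the time of writing; verify them against the current docs):

```bash
# Install WasmEdge together with the wasi_nn-ggml plugin used for GGUF inference
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh \
  | bash -s -- --plugin wasi_nn-ggml

# Fetch the LlamaEdge apps: the API server and/or the command-line chat app
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

# Download a quantized model file from this repo (Q5_K_M shown, as used above)
curl -LO https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-Q5_K_M.gguf
```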

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| Meta-Llama-3-70B-Instruct-Q2_K.gguf | Q2_K | 2 | 26.4 GB | smallest, significant quality loss - not recommended for most purposes |
| Meta-Llama-3-70B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 37.1 GB | small, substantial quality loss |
| Meta-Llama-3-70B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 34.3 GB | very small, high quality loss |
| Meta-Llama-3-70B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 30.9 GB | very small, high quality loss |
| Meta-Llama-3-70B-Instruct-Q4_0.gguf | Q4_0 | 4 | 40 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Meta-Llama-3-70B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 42.5 GB | medium, balanced quality - recommended |
| Meta-Llama-3-70B-Instruct-Q5_0.gguf | Q5_0 | 5 | 48.7 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Meta-Llama-3-70B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 50 GB | large, very low quality loss - recommended |
| Meta-Llama-3-70B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 48.7 GB | large, low quality loss - recommended |
| Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf | Q6_K | 6 | 32.1 GB | very large, extremely low quality loss |
| Meta-Llama-3-70B-Instruct-Q6_K-00002-of-00002.gguf | Q6_K | 6 | 25.7 GB | very large, extremely low quality loss |
| Meta-Llama-3-70B-Instruct-Q8_0-00001-of-00003.gguf | Q8_0 | 8 | 32 GB | very large, extremely low quality loss - not recommended |
| Meta-Llama-3-70B-Instruct-Q8_0-00002-of-00003.gguf | Q8_0 | 8 | 32.1 GB | very large, extremely low quality loss - not recommended |
| Meta-Llama-3-70B-Instruct-Q8_0-00003-of-00003.gguf | Q8_0 | 8 | 10.9 GB | very large, extremely low quality loss - not recommended |
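The Q6_K and Q8_0 quantizations are split across multiple files. Depending on your runtime, you may need to merge the shards back into a single GGUF before loading; a sketch using llama.cpp's `gguf-split` tool (assuming a llama.cpp build of roughly the b2715 era that ships this tool; check `gguf-split --help` for your build):

```bash
# Merge the Q6_K shards into one file; given the first shard,
# gguf-split locates the remaining ones automatically.
./gguf-split --merge \
  Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf \
  Meta-Llama-3-70B-Instruct-Q6_K.gguf
```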

*The f16 GGUF version of the original model is available at [second-state/Meta-Llama-3-70B-Instruct-f16-GGUF](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-f16-GGUF).*

*Quantized with llama.cpp b2715.*