Sakura-SOLAR-Instruct

Quantizations

VRAM usage was measured with the ExLlamav2_HF loader at 4096 max_seq_len in Oobabooga's Text Generation WebUI.

I also provide a zipped version of each quantization, since a lot of people find GGUF's single-file download convenient. The zipped file is also slightly smaller to download. After extracting it, you can use the model folder as usual.

Use TheBloke's 4bit-32g quants (7.4GB VRAM usage) if you have an 8GB card.

Branch | BPW | Folder Size | Zipped File Size | VRAM Usage | Description
3.0bpw / 3.0bpw-zip | 3.0 | 4.01 GB | 3.72 GB | 5.1 GB | For >=6GB VRAM cards with idle VRAM usage of 500MB or less (headroom for other things)
5.0bpw (main) / 5.0bpw-zip | 5.0 | 6.45 GB | 6.3 GB | 7.7 GB | For >=10GB VRAM cards
6.0bpw / 6.0bpw-zip | 6.0 | 7.66 GB | 7.4 GB | 9.0 GB | For >=10GB VRAM cards with idle VRAM usage of 500MB or less (headroom for other things)
7.0bpw / 7.0bpw-zip | 7.0 | 8.89 GB | 8.6 GB | 10.2 GB | For >=11GB VRAM cards with idle VRAM usage of 500MB or less (headroom for other things)
8.0bpw / 8.0bpw-zip | 8.0 | 10.1 GB | 9.7 GB | 11.3 GB | For >=12GB VRAM cards with idle VRAM usage of 500MB or less (headroom for other things)
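
If you want just one branch rather than the whole repo, here is a minimal download sketch using huggingface_hub's snapshot_download; the chosen revision and local_dir are example values, not requirements.

# Sketch: download a single quantization branch.
# revision must match a branch name from the table above;
# local_dir is an arbitrary example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="hgloow/Sakura-SOLAR-Instruct-EXL2",
    revision="5.0bpw",
    local_dir="Sakura-SOLAR-Instruct-5.0bpw",
)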

Calibration Dataset

Prompt template: Orca-Hashes

From TheBloke

### System:
{system_message}

### User:
{prompt}

### Assistant:
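
If you format prompts in your own code instead of the WebUI, here is a minimal sketch of filling the template in Python; the system message and user prompt below are placeholder examples, not values from the original card.

# Sketch: building an Orca-Hashes prompt string.
# system_message and prompt are placeholder examples.
system_message = "You are a helpful assistant."
prompt = "Explain what BPW means for EXL2 quantizations."

orca_hashes = (
    f"### System:\n{system_message}\n\n"
    f"### User:\n{prompt}\n\n"
    f"### Assistant:\n"
)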

If you use Oobabooga's Chat tab

From my testing, the "Orca-Mini" template (or any of the Orca templates) produced the best results. Feel free to leave a suggestion if you know a better one.

Original Info

Sakura-SOLAR-Instruct

This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.

Model Details

Model Developers: Kyujin Han (kyujinpy)

Method
Merged using Mergekit.
I have shared the information about my model (training details and code).
Please see: ⭐Sakura-SOLAR.

Blog

Model Benchmark

Open LLM Leaderboard

  • Follow the leaderboard link for up-to-date results.
Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K
Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46
Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76
kyujinpy/Sakura-SOLAR-Instruct | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20

Rank 1 as of 2023.12.27, 11:50 PM

Implementation Code

### Sakura-SOLAR-Instruct
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Sakura-SOLAR-Instruct"

# Load the model in float16 and place it across available devices
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
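
For completeness, a short generation sketch that reuses the objects above with the Orca-Hashes template; the prompt and sampling settings are illustrative assumptions, not part of the original card.

# Sketch: generate with the model loaded above, using the Orca-Hashes template.
# max_new_tokens and temperature are example settings.
prompt = (
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nWhat is the capital of France?\n\n"
    "### Assistant:\n"
)
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))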
