---
license: apache-2.0
library_name: transformers
tags:
- merge
base_model:
- 01-ai/Yi-1.5-34B-Chat
- 01-ai/Yi-1.5-34B
pipeline_tag: text-generation
model-index:
- name: YiSM-34B-0rn
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.54
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.67
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.51
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.68
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.66
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.82
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 42.84
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 45.38
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 20.62
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.22
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 14.76
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.06
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=altomek/YiSM-34B-0rn
      name: Open LLM Leaderboard
---
# YiSM-34B-0rn
This is Yi Self Merged. I wanted a model that follows most instructions yet preserves its base model nature.
## Ingredients

- [01-ai/Yi-1.5-34B-Chat](https://huggingface.co/01-ai/Yi-1.5-34B-Chat)
- [01-ai/Yi-1.5-34B](https://huggingface.co/01-ai/Yi-1.5-34B)
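The exact merge recipe is not published in this card; purely as an illustration, a mergekit self-merge of these two ingredients could look like the sketch below. The merge method, layer range, and interpolation factor are all assumptions, not the recipe actually used.

```python
# Illustrative sketch only -- NOT the published YiSM-34B-0rn recipe.
# Assumes mergekit (https://github.com/arcee-ai/mergekit) is installed.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG = """
slices:
  - sources:
      - model: 01-ai/Yi-1.5-34B-Chat
        layer_range: [0, 60]  # Yi-1.5-34B has 60 decoder layers
      - model: 01-ai/Yi-1.5-34B
        layer_range: [0, 60]
merge_method: slerp           # assumed method
base_model: 01-ai/Yi-1.5-34B
parameters:
  t: 0.5                      # assumed 50/50 chat/base blend
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))
run_merge(
    merge_config,
    out_path="./YiSM-34B-0rn",
    options=MergeOptions(copy_tokenizer=True),
)
```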
## Settings
I use max_seq_len 8K with alpha_value 2.65.
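These two values map directly onto ExLlamaV2's config fields for the EXL2 quants listed further down; a minimal loading sketch, with the local model path as a placeholder:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/models/YiSM-34B-0rn-exl2")  # placeholder path to an EXL2 quant
config.max_seq_len = 8192        # the 8K context used above
config.scale_alpha_value = 2.65  # NTK RoPE alpha, as above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

# paged=False avoids the flash-attn requirement of paged attention
generator = ExLlamaV2DynamicGenerator(
    model=model, cache=cache, tokenizer=tokenizer, paged=False
)
print(generator.generate(prompt="Hello!", max_new_tokens=64))
```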
SillyTavern presets:

```json
{
    "temp": 0.1,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0,
    "rep_pen": 1.08,
    "rep_pen_range": 0,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.01,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [6, 0, 1, 3, 4, 2, 5],
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 2048,
    "max_length": 8192
}
```
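Outside SillyTavern, the handful of settings that actually shape the output here (sampling on, very low temperature, mild repetition penalty) translate to plain transformers roughly as follows; the prompt and token budget are placeholders, and samplers transformers does not expose (tfs, top_a, ...) are simply dropped:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("altomek/YiSM-34B-0rn")
model = AutoModelForCausalLM.from_pretrained(
    "altomek/YiSM-34B-0rn", torch_dtype=torch.bfloat16, device_map="auto"
)

# assumes the merged tokenizer ships Yi's chat template
messages = [{"role": "user", "content": "Write a haiku about model merging."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    do_sample=True,           # "do_sample": true
    temperature=0.1,          # "temp": 0.1
    top_k=0,                  # "top_k": 0 disables top-k
    top_p=1.0,                # "top_p": 1 (no nucleus cut)
    repetition_penalty=1.08,  # "rep_pen": 1.08
    max_new_tokens=512,       # placeholder budget
)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```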
## Terms and Conditions of Use
The following table outlines the primary characteristics and intended uses of my YiSM-34B-0rn models:
Model Type | Purpose | Target Users | Key Features |
---|---|---|---|
Censored | Suitable for general audiences and sensitive topics | Educational institutions, families, and individuals seeking age-appropriate content | Restricts explicit or mature material |
Neutral (**this one**) | Balances accessibility with openness | Universities, researchers, and curious minds | Encourages exploration and intellectual exchange |
Uncensored | Ideal for adults and specialized fields | Professionals, experts, and advanced scholars | Offers unfiltered access to diverse viewpoints and knowledge |
Please remember that all YiSM-34B-0rn models are released under the Apache 2.0 license, so familiarize yourself with its terms and conditions before using them.
## Quants
- GGUF (see the runnable sketch after this list)
- 8bpw
- 6.5bpw
- 4.65bpw
- 4bpw
- 3.2bpw -> Fits in 16GB VRAM but is not recommended; performance is significantly degraded at lower quants.
- measurements --> ExLlamaV2 measurements
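The GGUF quants run anywhere llama.cpp does; a minimal llama-cpp-python sketch reusing the context size and key samplers from the preset above (the filename is a placeholder; pick whichever quant fits your hardware):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="YiSM-34B-0rn.Q4_K_M.gguf",  # placeholder GGUF filename
    n_ctx=8192,       # matches the 8K context used above
    n_gpu_layers=-1,  # offload all layers that fit to the GPU
)

out = llm(
    "Explain in two sentences what a self-merge of an LLM is.",
    temperature=0.1,      # "temp": 0.1
    repeat_penalty=1.08,  # "rep_pen": 1.08
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Note that llama.cpp expresses RoPE scaling through `rope_freq_base`/`rope_freq_scale` rather than an alpha value, so long-context behaviour past the native window may need separate tuning.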
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
Metric | Value |
---|---|
Avg. | 75.65 |
AI2 Reasoning Challenge (25-Shot) | 69.54 |
HellaSwag (10-Shot) | 86.67 |
MMLU (5-Shot) | 78.51 |
TruthfulQA (0-shot) | 59.68 |
Winogrande (5-shot) | 83.66 |
GSM8k (5-shot) | 75.82 |
5th in the 34B size range excluding "Private or deleted" models, or 8th with all models included, as of 2024-06-10 ;P
## Open LLM Leaderboard 2 Evaluation Results
Detailed results can be found here
Metric | Value |
---|---|
Avg. | 30.15 |
IFEval (0-Shot) | 42.84 |
BBH (3-Shot) | 45.38 |
MATH Lvl 5 (4-Shot) | 20.62 |
GPQA (0-shot) | 16.22 |
MuSR (0-shot) | 14.76 |
MMLU-PRO (5-shot) | 41.06 |