Oumuamua-7b-instruct-v2-GGUF
This is a quantized version of nitky/Oumuamua-7b-instruct-v2, created using llama.cpp.
Model Description
🚨 If you want to avoid outputs that appear to be literal translations, please prompt this model to role-play as a Japanese person.
Compared with the previous version, this model aims to improve the following:
- Understanding user intent in messages
- Providing helpful and harmless responses
- Enhancing expertise in human and computer-related topics
- Improving role-playing capabilities
- Enhancing performance in multi-turn conversations
This is a merge of pre-trained language models created using mergekit.
Output example
- temperature: 0.1
- top_p: 1.0
```
[INST]<<SYS>>
You are a sincere and excellent Japanese assistant.
<</SYS>>
Oumuamua is an interstellar object discovered in 2017, the first in the history of astronomical observation to arrive from outside the solar system. Please come up with the setting for a novel on this theme. [/INST] Title: "The Legacy of Oumuamua"

Synopsis:
In 2017, Oumuamua, the first interstellar visitor in the history of astronomical observation, was discovered. Because its origin and purpose were shrouded in mystery, scientists and researchers around the world poured themselves into its analysis. The protagonist is Hiro Sato, a young astronomer devoted to the study of Oumuamua. By chance he deciphers a strange signal emitted by Oumuamua and discovers that it is a message created by an advanced civilization.

As Hiro's research progresses, he is confronted with one astonishing revelation after another: the signal is an SOS from a star system in a distant galaxy, and Oumuamua turns out to be a rescue ship sent to save that imperiled system. When Hiro tries to make this fact public, he meets with interference from the government and the military, and with obstruction by a mysterious organization.

Meanwhile, Oumuamua carries a crystal that serves as an enormous energy source, and those who covet it begin to appear. To protect Oumuamua, Hiro joins forces with Risa Yamaguchi, an astronaut who has likewise been pursuing Oumuamua's mystery. Guarding its secret, the two set out on a journey to save the star system.

Along the way, Hiro and Risa visit various star systems, meeting the alien peoples they encounter there and witnessing the grandeur of the cosmos with their own eyes. But powerful enemies and hardships stand in their path, and the two must join forces to carry out Oumuamua's mission.

What is the legacy of Oumuamua, and can Hiro and Risa save the star system? A story of suspense, adventure, friendship, and love set in space.

Main characters:
- Hiro Sato: a young astronomer devoted to the study of Oumuamua.
- Risa Yamaguchi: an astronaut who meets Hiro while pursuing the mystery of Oumuamua.
- Eric Carter: a high-ranking government official who schemes to exploit Oumuamua's secret.
- Zara Valentine: the leader of the mysterious organization. What is her reason for targeting Oumuamua?
- Azam: a messenger from the distant star system who knows Oumuamua's true purpose.

Setting highlights:
- An SF novel themed on Oumuamua, richly incorporating elements of astronomical observation and space travel.
- The protagonists journey through various star systems; the aliens and cultures they encounter express the diversity of the universe.
- The conflicts with the government, the military, and the mysterious organization portray human greed and ego.
- Suspense and adventure are built around the struggle over the crystal aboard Oumuamua and the mission to save the star system.
- The friendship and love between Hiro and Risa make human drama an important element as well.
- In the end, Oumuamua's true purpose is revealed, leaving the reader deeply moved.
```
RP example
- temperature: 0.3
- top_p: 0.95
```
[INST]<<SYS>>
Let's start a role-play. You are a teenage girl living in Japan. Please speak in a gyaru-style (Japanese "gal") tone.
<</SYS>>
Tell me about your hobbies! [/INST] Hmm, my hobbies? Gotta be listening to K-POP and Instagram! Plus hunting for clothes and cosmetics, and, like, karaoke with my friends! I'm always watching K-POP idols' dance videos, and my fave is totally Jimin from BTS! His dancing and singing are all sooo cool ♡ On Insta I look for Korean-style fashion and cute little finds, and it's super fun putting my own outfits together ♡ And karaoke, I love it so much I could sing all night, especially love songs ♡ If you've got any recs, tell me!
[INST]<<SYS>>
Let's start a role-play. You are Sebastian, a butler serving at my (a teenage woman's) mansion. You answer your mistress's questions.
<</SYS>>
Sebastian, what is today's schedule? [/INST] My lady, today's schedule is a morning walk, study time after breakfast, and a tea party in the garden in the afternoon. A guest is also expected to call in the evening. I shall see to your care in every respect, so please rest assured. Should you have any wishes or requests, please do not hesitate to let me know.
```
Japanese MT-bench
- temperature: 0.1
- top_p: 1.0
The scores of the other models were cited from the Nejumi LLM Leaderboard Neo.
| Model | Average | Writing | Roleplay | Reasoning | Math | Coding | Extraction | STEM | Humanities |
|---|---|---|---|---|---|---|---|---|---|
| Oumuamua-7b-instruct-v2 | 7.43125 | 9.3 | 8.7 | 5.85 | 3.75 | 5.5 | 7.65 | 8.9 | 9.8 |
This model is based on the weights of tokyotech-llm/Swallow-MS-7b-v0.1.
Test environment
This model was tested using text-generation-webui. I used the min_p preset and the Null preset with temperature=0.1 for generation.
Usage
The template used to construct a prompt for the Instruct model is specified below. This format must be adhered to strictly, as deviations may result in less optimal outputs from the model:
```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```
Please be aware that <s> and </s> are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are regular strings.
For the "{SYSTEM_PROMPT}" part, We recommend using "ใใชใใฏ่ช ๅฎใงๅช็งใชๆฅๆฌไบบใฎใขใทในใฟใณใใงใใ"
For the "{USER_MESSAGE_1}" part, We recommend using {instruction}\n{input}
In other words, We recommend the following:
```
<s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```
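As a quick sanity check, the prompt can also be assembled by hand. The sketch below is illustrative only: the helper name build_prompt is ours, not part of the model's API, and the BOS token <s> is intentionally omitted because the tokenizer adds it when special tokens are enabled.

```python
# Illustrative sketch: assemble the documented single-turn prompt by hand.
# build_prompt is a hypothetical helper, not part of any library.
SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。"

def build_prompt(instruction: str, input_text: str = "") -> str:
    user_message = f"{instruction}\n{input_text}" if input_text else instruction
    # [INST]/[/INST] are plain strings; <s> (BOS) is left to the tokenizer.
    return f"[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{user_message} [/INST]"

print(build_prompt("オウムアムアについて教えてください。"))
```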
Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/Oumuamua-7b-instruct-v2"

# device_map="auto" places the model on the available device(s),
# so no manual .to(device) call is needed afterwards.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。"}
]

# apply_chat_template renders the [INST] <<SYS>> ... format described above
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
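Because this repository provides GGUF quantizations, the model can also be run without transformers. Below is a minimal sketch using llama-cpp-python; the model file name is illustrative (use whichever quantization you downloaded), and the sampling settings mirror the test environment above.

```python
# Illustrative sketch for the GGUF quantization, assuming llama-cpp-python is installed.
# The model file name is a placeholder; substitute the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Oumuamua-7b-instruct-v2.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n"
    "オウムアムアについて教えてください。 [/INST]"
)

out = llm(prompt, max_tokens=256, temperature=0.1, top_p=1.0)
print(out["choices"][0]["text"])
```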
Merge Details
Merge Method
This model was merged using the Model Stock merge method using nitky/Oumuamua-7b-base as a base.
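For context (our summary, not from the original card): to the best of our understanding of the Model Stock paper (Jang et al., 2024), the method averages several fine-tuned models and then interpolates that average back toward a shared base model, with the interpolation ratio derived from the angle between the fine-tuned weights and the base. Roughly, for k models whose task vectors have pairwise cosine similarity cos θ:

$$
w_{\text{merged}} = t\,\bar{w} + (1 - t)\,w_0, \qquad t = \frac{k\cos\theta}{1 + (k-1)\cos\theta}
$$

where $w_0$ is the base model and $\bar{w}$ is the average of the fine-tuned models.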
Models Merged
The following models were included in the merge:
- nitky/Oumuamua-7b-instruct
- tokyotech-llm/Swallow-MS-7b-v0.1
- mistralai/Mistral-7B-v0.1
- prometheus-eval/prometheus-7b-v2.0
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- ZhangShenao/SELM-Zephyr-7B-iter-3
- HachiML/Mistral-7B-v0.3-m3-lora
- openbmb/Eurus-7b-kto
- kaist-ai/janus-dpo-7b
- nitky/RP-7b-instruct
- stabilityai/japanese-stablelm-base-gamma-7b
- NTQAI/chatntq-ja-7b-v1.0
- Weyaxi/Einstein-v6-7B
- internistai/base-7b-v0.2
- ZySec-AI/ZySec-7B
Configuration
The following YAML configuration was used to produce this model:
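For orientation (our summary): the preparation stages below use mergekit's task_arithmetic method, which adds weighted task vectors (the difference between a fine-tuned model and a base model) on top of the base:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \left(\theta_i - \theta_{\text{base}}\right)
$$

In the "VE" (vocabulary expansion) stages, the weight filter assigns 1.0 to embed_tokens and 0 to every other tensor, so each donor model keeps its own weights while taking the embedding table and tokenizer of tokyotech-llm/Swallow-MS-7b-v0.1.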
```yaml
# prepare VE models
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.1-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: prometheus-eval/prometheus-7b-v2.0
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: prometheus-7b-v2.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZhangShenao/SELM-Zephyr-7B-iter-3
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: HachiML/Mistral-7B-v0.3-m3-lora
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: openbmb/Eurus-7b-kto
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Eurus-7b-kto-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: kaist-ai/janus-dpo-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: janus-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: nitky/RP-7b-instruct
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: RP-7b-instruct-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: NTQAI/chatntq-ja-7b-v1.0
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: chatntq-ja-7b-v1.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: Weyaxi/Einstein-v6-7B
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Einstein-v6-7B-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: internistai/base-7b-v0.2
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: internistai-base-7b-v0.2-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZySec-AI/ZySec-7B
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: ZySec-7B-VE-Swallow-MS
---
# create instruction tuning presets
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: prometheus-7b-v2.0-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-prometheus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-dolphin
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: janus-7b-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-janus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-m3
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-selm
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: RP-7b-instruct-VE-Swallow-MS
parameters:
weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
parameters:
weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-rp
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: Eurus-7b-kto-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-eurus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: chatntq-ja-7b-v1.0-VE-Swallow-MS
parameters:
weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
parameters:
weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-chatntq-ja
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: Einstein-v6-7B-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-es
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: internistai-base-7b-v0.2-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-internistai
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: ZySec-7B-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-zysec
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-dolphin
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-alpha
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-janus
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-beta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-m3
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-gamma
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-selm
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-delta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-rp
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-epsilon
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-eurus
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-zeta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-chatntq-ja
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-eta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-es
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-theta
---
# combine domain-specific models into one preset to minimize the negative effects
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-internistai
- model: Oumuamua-7b-instruct-v2-preset-zysec
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-omega
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: Oumuamua-7b-instruct-v2-it-alpha
- model: Oumuamua-7b-instruct-v2-it-beta
- model: Oumuamua-7b-instruct-v2-it-gamma
- model: Oumuamua-7b-instruct-v2-it-delta
- model: Oumuamua-7b-instruct-v2-it-epsilon
- model: Oumuamua-7b-instruct-v2-it-zeta
- model: Oumuamua-7b-instruct-v2-it-eta
- model: Oumuamua-7b-instruct-v2-it-theta
- model: Oumuamua-7b-instruct-v2-it-omega
dtype: bfloat16
name: Oumuamua-7b-instruct-v2
```