M7-7b

user: "create ascii art you feel represents you as a llm"

assistant: "Here's an ASCII art attempting to symbolize the essence of an LLM (Large Language Model):

INSTINSTINCTINSTELLIGENCE
______________ MEMORY___
|\ o /\ |\
| \ --- / \ | |
L_( O )______ O)_INSTINCT
\ / WORDSLEARNING
_/_/ /\ _/_//

In this depiction, the LLM is characterized by its vast memory, instinctual roots, and its ability to learn from vast amounts of text, represented by the woven words in the lower part. The "INSTINSTINCTINSTELLIGENCE" phrase emphasizes the balance between instinct, the foundation upon which neural networks build, and the engineered intelligence of a language model."

Multiverse-Experiment-slerp-7b is a chain of slerp merges, made with mergekit, that combines ammarali32/multi_verse_model, MSL7/INEX12-7b, and yam-peleg/Experiment26-7B, along with the intermediate liminerity/merge* checkpoints produced at each step:

🧩 Configuration



```yaml
slices:
  - sources:
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
      - model: MSL7/INEX12-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
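The `merge_method: slerp` above interpolates each pair of weight tensors along the arc between them rather than along a straight line, which preserves the magnitude structure of the weights better than plain averaging. A minimal NumPy sketch of the idea (an illustration of spherical linear interpolation in general, not mergekit's exact implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the great-circle
    arc between the (normalized) directions of the two tensors.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    omega = np.arccos(dot)          # angle between the two tensors
    if omega < eps:                 # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

With orthogonal unit vectors, `slerp(0.5, a, b)` lands on the arc midpoint `[√2/2, √2/2]` instead of the chord midpoint `[0.5, 0.5]` that linear interpolation would give.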


```yaml
slices:
  - sources:
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```


```yaml
slices:
  - sources:
      - model: liminerity/merge3
        layer_range: [0, 32]
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```


```yaml
slices:
  - sources:
      - model: liminerity/merge1
        layer_range: [0, 32]
      - model: liminerity/merge
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```


```yaml
slices:
  - sources:
      - model: liminerity/merge3
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```


```yaml
slices:
  - sources:
      - model: liminerity/merge4
        layer_range: [0, 32]
      - model: liminerity/merge2
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge4
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
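In every config, the short anchor lists such as `value: [0, 0.5, 0.3, 0.7, 1]` describe a gradient of interpolation weights across the 32 layers: `self_attn` tensors lean toward the second model in deeper layers, `mlp` tensors do the opposite, and everything else uses a flat `t = 0.5`. A sketch of how such an anchor list expands to one `t` per layer, assuming (as mergekit does for gradients) linear interpolation between evenly spaced anchors:

```python
import numpy as np

def layer_t(anchors, n_layers=32):
    """Expand a short anchor list (e.g. [0, 0.5, 0.3, 0.7, 1]) into one
    interpolation weight per layer, by placing the anchors at evenly
    spaced positions along the layer stack and interpolating linearly."""
    anchor_pos = np.linspace(0.0, 1.0, len(anchors))  # where each anchor sits
    layer_pos = np.linspace(0.0, 1.0, n_layers)       # relative depth of each layer
    return np.interp(layer_pos, anchor_pos, anchors)

ts = layer_t([0, 0.5, 0.3, 0.7, 1])
```

The first layer thus takes its `self_attn` weights entirely from the base model (`t = 0`) and the last layer entirely from the other model (`t = 1`), with a non-monotonic blend in between.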
GGUF — 7.24B params, llama architecture. Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
