seldonium-2x7b-MoE-v0.1

seldonium-2x7b-MoE-v0.1 is a coder-logic Mixture of Experts (MoE) model that combines the capabilities of two specialized language models:

- Locutusque/Hercules-4.0-Mistral-v0.2-7B: a 7B-parameter model focused on programming tasks such as writing functions, implementing algorithms, and working with data structures.
- Open-Orca/Mistral-7B-OpenOrca: a 7B-parameter model focused on logical reasoning and analysis, including solving logic problems, evaluating arguments, and assessing the validity of statements.

This MoE model was created with the LazyMergekit Colab notebook, which makes it straightforward to combine specialized models into a single, more capable one. seldonium-2x7b-MoE-v0.1 can be used for a variety of natural language processing tasks that benefit from the complementary strengths of its expert components.
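
For intuition, here is a minimal sketch of top-k MoE gating (illustrative only, not mergekit's or transformers' actual implementation). Note that with only two experts and `experts_per_token: 2` in the configuration below, both experts see every token, and the gate simply sets how their outputs are blended.

```python
# Illustrative top-k MoE gating (not this model's actual code): a linear
# gate scores each expert per token, and the top-k experts' outputs are
# combined using the normalized gate weights.
import torch

hidden_size, num_experts, k = 4096, 2, 2  # Mistral-7B hidden size, 2 experts
tokens = torch.randn(5, hidden_size)      # hidden states for 5 tokens
gate = torch.nn.Linear(hidden_size, num_experts, bias=False)

scores = torch.softmax(gate(tokens), dim=-1)
weights, chosen = torch.topk(scores, k=k, dim=-1)
# With num_experts == k == 2, every token is routed to both experts;
# `weights` are the per-token mixing coefficients for their outputs.
print(chosen, weights)
```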

🧩 Configuration

```yaml
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
gate_mode: cheap_embed  # Use raw token embeddings of the prompts to initialize MoE gates
dtype: bfloat16  # Output data type
experts_per_token: 2  # Number of experts activated per token
experts:
  - source_model: Locutusque/Hercules-4.0-Mistral-v0.2-7B
    positive_prompts:
      - "Write a Python function to calculate the factorial of a number."
      - "Implement a quicksort algorithm to sort a list of integers."
      - "Design a Python class to represent a binary search tree."

  - source_model: Open-Orca/Mistral-7B-OpenOrca
    positive_prompts:
      - "Solve the logic puzzle: 'If Ann is older than Belinda, and Belinda is younger than Cathy, who is the oldest?'"
      - "Analyze the argument: 'All cats are animals. Some animals are pets. Therefore, all cats are pets.' Determine if the conclusion follows logically from the premises."
      - "Evaluate the validity of the statements: 'A is true. A is false.'"
```

πŸ’» Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jomangbp/seldonium-2x7b-MoE-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    # Load the weights in 4-bit to fit the 12.9B-parameter model in modest VRAM
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
# Format the conversation with the model's chat template before generating
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
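
Note that recent transformers releases deprecate passing load_in_4bit directly through model_kwargs; on those versions, an explicit quantization config is the supported route. A minimal variant of the pipeline setup above:

```python
from transformers import BitsAndBytesConfig

# Same pipeline as above, but with 4-bit loading configured explicitly,
# as required by newer transformers/bitsandbytes versions.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={
        "torch_dtype": torch.float16,
        "quantization_config": BitsAndBytesConfig(load_in_4bit=True),
    },
)
```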