Phiter

Phiter is a merge of the following models using LazyMergekit:

- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2

Thanks to the great Maxime Labonne, evaluation results are available on YALL.

The model tops all other phi-2 fine-tunes on the leaderboard, outperforming even most MoE implementations such as Phixtral (as of 27 February 2024).

License: MIT

This model wouldn't have been possible without the support of:

Maxime Labonne - helped me troubleshoot the merge process

brittlewis12 - helped me troubleshoot the creation of the GGUF files

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
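
For illustration, here is the template filled in with a placeholder system message and user prompt:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant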

GGUF: Phiter-GGUF
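
The GGUF quantizations can be run with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the file name is a placeholder for whichever quantization you download from Phiter-GGUF:

from llama_cpp import Llama

# Load a quantized build; the path is a placeholder for your downloaded file.
llm = Llama(model_path="phiter.Q4_K_M.gguf", n_ctx=2048, chat_format="chatml")

# create_chat_completion applies the ChatML template for us.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])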

🧩 Configuration

models:
  - model: mixedbread-ai/phi-2
    # no parameters necessary for base model
  - model: rhysjones/phi-2-orange
    parameters:
      density: 0.5
      weight: 0.5
  - model: cognitivecomputations/dolphin-2_6-phi-2
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mixedbread-ai/phi-2
parameters:
  normalize: true
dtype: float16
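
To reproduce the merge, save the YAML above as config.yaml and pass it to mergekit (a sketch, assuming mergekit is installed; flags can vary between versions):

!mergekit-yaml config.yaml merged/ --cuda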

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Venkman42/Phiter"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Render the messages with the model's ChatML template and append the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
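
Note that generated_text includes the prompt by default; pass return_full_text=False to the pipeline call if you only want the completion.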