Qarasu-14B-chat-plus-unleashed

Qwen/Qwen-14B-Chat + Karasu's finetuning datasets

Model demo ・ モデルのデモ

Blog post ・ 説明の記事

Evaluation


In our internal evaluations, the Qarasu model performs particularly well on the MT-Bench benchmark. We are currently awaiting external evaluations.

How to use

Hugging Face

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# trust_remote_code is required because this repo ships custom Qwen modeling code.
tokenizer = AutoTokenizer.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]  # "You are an AI assistant."
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})  # "Who is the Prime Minister of the UK?"

# Render the chat into the model's prompt string without tokenizing.
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)

# do_sample=False selects greedy decoding, so a temperature setting is unnecessary.
pipe(prompt, max_new_tokens=100, do_sample=False, return_full_text=False)
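
For reference, Qwen-14B-Chat uses a ChatML-style chat template, so the rendered prompt should look roughly like the following; the exact special tokens come from the template shipped with the repo, so treat this as illustrative:

<|im_start|>system
あなたはAIアシスタントです。<|im_end|>
<|im_start|>user
イギリスの首相は誰ですか?<|im_end|>
<|im_start|>assistant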

vLLM

from vllm import LLM, SamplingParams

# temperature=0.0 makes vLLM decode greedily, matching the Transformers example.
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]  # "You are an AI assistant."
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})  # "Who is the Prime Minister of the UK?"
# Apply the chat template via the engine's tokenizer; get_tokenizer() is the
# public accessor and avoids reaching into llm.llm_engine internals.
prompt = llm.get_tokenizer().apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Base checkpoint

Qwen/Qwen-14B-Chat

Training datasets (total ~7B tokens)

The same as the 'plus' checkpoint, but with about 6K refusals (responses beginning "申し訳ありませんが、...", i.e. "I'm sorry, but...") filtered out of the category dataset; a minimal filtering sketch follows the list below.

  • Lightblue's suite of Kujira datasets (unreleased)
  • Lightblue's own question-based datasets (unreleased)
  • Lightblue's own category-based datasets (unreleased)
  • OASST (Japanese chats only)
  • ShareGPT (Japanese chats only)
  • augmxnt/ultra-orca-boros-en-ja-v1 (['airoboros', 'slimorca', 'ultrafeedback', 'airoboros_ja_new'] only)
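
As an illustration of the refusal filtering described above, a pass with the datasets library might look like the following. This is a minimal sketch, not the script actually used; the file name and the "conversations" schema are assumptions:

from datasets import load_dataset

REFUSAL_PREFIX = "申し訳ありませんが、"  # "I'm sorry, but..."

# Hypothetical file and schema: one JSON record per line, with a list of
# {"role": ..., "content": ...} turns under "conversations".
ds = load_dataset("json", data_files="category_dataset.jsonl", split="train")

def has_refusal(example):
    return any(
        turn["role"] == "assistant" and turn["content"].startswith(REFUSAL_PREFIX)
        for turn in example["conversations"]
    )

filtered = ds.filter(lambda ex: not has_refusal(ex))
print(f"Dropped {len(ds) - len(filtered)} refusal examples")  # the card reports ~6K removed
filtered.to_json("category_dataset_filtered.jsonl")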

Developed by

Lightblue

Engineers

Peter Devine

Sho Higuchi

Advisors

Yuuki Yamanaka

Atom Sonoda

Project manager

Shunichi Taniguchi

Dataset evaluator

Renju Aoki

Model details

  • Format: Safetensors
  • Model size: 14.2B parameters
  • Tensor type: BF16
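
As a rough sizing check: 14.2B parameters × 2 bytes per BF16 value ≈ 28.4 GB for the weights alone, before activations and KV cache, which is why the example code loads the model with device_map="auto" to shard it across available devices.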
