LLäMmlein 1B Chat

This is a chat adapter for LLäMmlein 1B, our German TinyLlama 1B language model. Find more details on our page and in our preprint! We also merged the adapter into the base model and converted it to GGUF here.
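If you want to reproduce such a merged checkpoint yourself, a minimal sketch using PEFT's merge_and_unload could look like this (the output path is a placeholder; the embedding resize mirrors the setup shown below):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model and match the chat tokenizer's vocabulary size
base = AutoModelForCausalLM.from_pretrained(
    "LSX-UniWue/LLaMmlein_1B",
    torch_dtype=torch.bfloat16,
)
base.resize_token_embeddings(32064)

# fold the adapter weights into the base model and save a standalone checkpoint
merged = PeftModel.from_pretrained(base, "LSX-UniWue/LLaMmlein_1B_chat_all").merge_and_unload()
merged.save_pretrained("LLaMmlein_1B_chat_merged")  # placeholder path
AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_1B_chat_all").save_pretrained("LLaMmlein_1B_chat_merged")

The resulting directory can then be converted to GGUF, e.g. with llama.cpp's convert_hf_to_gguf.py script.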

Run it

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(42)

# script config
base_model_name = "LSX-UniWue/LLaMmlein_1B"
chat_adapter_name = "LSX-UniWue/LLaMmlein_1B_chat_all"
device = "cuda"  # or mps

# chat history
messages = [
    {
        "role": "user",
        "content": """Na wie geht's?""",
    },
]

# load model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.bfloat16,
    device_map=device,
)
# the chat tokenizer adds special tokens, so the embedding matrix
# must be resized to match its larger vocabulary
base_model.resize_token_embeddings(32064)
model = PeftModel.from_pretrained(base_model, chat_adapter_name)
tokenizer = AutoTokenizer.from_pretrained(chat_adapter_name)

# encode message in "ChatML" format
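# for this history, the rendered prompt should look roughly like:
#   <|im_start|>user
#   Na wie geht's?<|im_end|>
#   <|im_start|>assistant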
chat = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(device)

# generate response
print(
    tokenizer.decode(
        model.generate(
            chat,
            max_new_tokens=300,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )[0],
        skip_special_tokens=False,
    )
)
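The call above prints the full sequence, i.e. the prompt plus the reply, with the ChatML special tokens left in. A small variation that decodes only the newly generated tokens (same model, chat, and tokenizer as above):

# keep only the tokens generated after the prompt
output = model.generate(
    chat,
    max_new_tokens=300,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][chat.shape[-1]:], skip_special_tokens=True))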