Demo on Google Colab: https://colab.research.google.com/drive/1i5plJtq_6HIOuk_x7D-LkYDpcd3SADLf?usp=sharing

As with Qwen1.5-14B-Chat, you can always load this model through the standard AutoModelForCausalLM class.

from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "ljsabc/Qwen-1.5-14B-Chat-Fujisaki",
    torch_dtype="auto",
    device_map="auto",
    # load_in_4bit=True,  # optional 4-bit quantized loading (requires bitsandbytes); see the sketch below
)
tokenizer = AutoTokenizer.from_pretrained("ljsabc/Qwen-1.5-14B-Chat-Fujisaki")

# Prompt: "Please write a new tweet."
prompt = "请撰写一条新的推文。"
messages = [
    # System prompt: "You will role-play the Twitter user @ljsabc, writing your own
    # original tweets or replying to other people's tweets. All of your replies
    # should be written in Simplified Chinese."
    {"role": "system", "content": "你将扮演推特用户@ljsabc,你需要撰写你的原创推文或回复别人的推文。所有你的回复都应该使用简体中文书写。"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.95,
    top_p=0.99
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
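
If the full BF16 checkpoint does not fit in memory, the commented-out load_in_4bit option above can be expressed with a quantization config. This is a minimal sketch, assuming the bitsandbytes package is installed; passing load_in_4bit directly to from_pretrained also works on older transformers versions, but the config form is the current idiom.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch: load the weights in 4-bit NF4 and compute in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "ljsabc/Qwen-1.5-14B-Chat-Fujisaki",
    quantization_config=bnb_config,
    device_map="auto",
)

The rest of the generation code is unchanged; only the loading step differs.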
Model size: 14.2B params (Safetensors) · Tensor type: BF16
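At BF16 precision (2 bytes per parameter), the weights alone take roughly 14.2B × 2 ≈ 28.4 GB, which is why the 4-bit loading sketch above is worth considering on smaller GPUs.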
