---
license: other
license_name: qwen
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- llama
- qwen
- qwen1.5
- qwen2
---

This is a LLaMAfied version of Alibaba Cloud's Qwen1.5-0.5B-Chat model. The conversion is based on the original script at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py, which I modified to support Qwen1.5. This model was converted with https://github.com/Minami-su/character_AI_open/blob/main/llamafy_qwen_v2.py.
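Because the conversion rewrites the checkpoint into the Llama architecture, a quick sanity check (a minimal sketch, using only the repo id above) is to confirm that transformers reports `llama` as the model type, so the weights load without `trust_remote_code`:

```python
from transformers import AutoConfig

# The llamafied checkpoint should be exposed as a plain Llama architecture.
config = AutoConfig.from_pretrained("Minami-su/Qwen1.5-0.5B-Chat_llamafy")
print(config.model_type)  # expected: "llama"
```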
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-0.5B-Chat_llamafy")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-0.5B-Chat_llamafy", torch_dtype="auto", device_map="auto"
)

# Stream the reply as it is generated, hiding the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# Build the chat prompt with the model's chat template and move it to the model's device.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to(model.device)
# Cap the response length; adjust max_new_tokens as needed.
generate_ids = model.generate(inputs, max_new_tokens=512, streamer=streamer)
```
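The streamer prints the reply to stdout as it is produced; if you also want the response as a string, one optional follow-up (not part of the original snippet) is to decode only the tokens generated after the prompt:

```python
# Drop the prompt tokens and decode the newly generated part.
response = tokenizer.batch_decode(
    generate_ids[:, inputs.shape[1]:], skip_special_tokens=True
)[0]
print(response)
```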