Uploaded model

  • Developed by: thanhkt
  • License: apache-2.0
  • Finetuned from model: unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
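For context, here is a minimal sketch of the kind of Unsloth + TRL supervised-finetuning setup a model like this is typically produced with. Everything in it is an assumption for illustration: the dataset (yahma/alpaca-cleaned stands in for the actual Vietnamese Alpaca data, which is not documented here), the LoRA hyperparameters, and the training arguments are not taken from this card, and SFTTrainer's signature varies across TRL versions.

from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 4096

# Load the 4-bit base model this finetune started from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit",
    max_seq_length = max_seq_length,
    load_in_4bit = True,
)

# Attach LoRA adapters; rank and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

# Hypothetical stand-in dataset, formatted with the same Alpaca-style
# template used for inference below.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruct:
{}

### Input:
{}

### Output:
{}"""

def to_text(example):
    return {"text": alpaca_prompt.format(
        example["instruction"], example["input"], example["output"]
    ) + tokenizer.eos_token}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train").map(to_text)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()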

🤗 Hugging Face Transformers

This model can be deployed and used for inference in the same way as any other Qwen2.5 model. The snippet below shows how to use the chat model with Unsloth and Transformers:


from unsloth import FastLanguageModel
import torch
max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.


model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "thanhkt/Qwen2.5-1.5B-Vi-Alpaca-GGUF", 
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)
# Alpaca-style prompt template; the preamble below is the standard one and
# an assumption here -- match it to the exact template used in training.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruct:
{}

### Input:
{}

### Output:
{}"""

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        """You are a teacher , you can explain the complex things with simple word""", # instruction
        "What is word 2 vec", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
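
To capture the completion as a string instead of streaming it to stdout, the standard Transformers decoding path works as well:

outputs = model.generate(**inputs, max_new_tokens = 512)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])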
GGUF

  • Model size: 1.54B params
  • Architecture: qwen2
  • Available quantizations: 4-bit, 16-bit
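
Because the repository ships GGUF weights, the model can also run outside PyTorch, for example with llama-cpp-python. This is a sketch under assumptions: the GGUF filename pattern is hypothetical (check the repository's file list), and alpaca_prompt refers to the template defined in the snippet above.

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id = "thanhkt/Qwen2.5-1.5B-Vi-Alpaca-GGUF",
    filename = "*Q4_K_M.gguf",  # hypothetical pattern for the 4-bit file
    n_ctx = 4096,
)

out = llm(
    alpaca_prompt.format(
        "You are a teacher; you can explain complex things in simple words.",
        "What is word2vec?",
        "",  # leave the output slot blank for generation
    ),
    max_tokens = 512,
)
print(out["choices"][0]["text"])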

Model tree for thanhkt/Qwen2.5-1.5B-Vi-Alpaca-GGUF

  • Base model: Qwen/Qwen2.5-1.5B