Quantized Qwen Model

This repository contains a dynamically quantized version of the Qwen2.5-0.5B model for causal language modeling.

Model Details

  • Model Type: Qwen2ForCausalLM
  • Quantization: Dynamic Quantization

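The card does not name the quantization tooling; assuming PyTorch's built-in dynamic quantization (`torch.ao.quantization.quantize_dynamic`), the idea is that weights of supported layers (e.g. `nn.Linear`) are stored as int8 while activations are quantized on the fly at inference time. A minimal sketch on a toy module:

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer block; a real model's nn.Linear
# layers would be converted the same way.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Dynamic quantization: weights are converted to int8 ahead of time,
# activations are quantized dynamically per batch at inference.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The Linear layers are replaced by dynamically quantized equivalents;
# the quantized model is a drop-in replacement for CPU inference.
out = qmodel(torch.randn(2, 16))
print(type(qmodel[0]).__name__, out.shape)
```

This trades a small accuracy loss for a smaller memory footprint and faster CPU inference, without requiring calibration data.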
Usage

You can load the model with the Hugging Face Transformers library and run generation as usual:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ShubhaLabs/quantized_qwen_model")
tokenizer = AutoTokenizer.from_pretrained("ShubhaLabs/quantized_qwen_model")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Model tree for ShubhaLabs/quantized_qwen_model

  • Base model: Qwen/Qwen2.5-0.5B