---
license: apache-2.0
inference: false
tags:
  - green
  - llmware-chat
  - p32
  - gguf
  - emerald
---

# qwen2.5-32b-instruct-gguf

**qwen2.5-32b-instruct-gguf** is a GGUF Q4_K_M (int4) quantized version of Qwen2.5-32B-Instruct, providing a fast inference implementation optimized for AI PCs using Intel GPU, CPU, and NPU.

This model is part of the latest release series from Qwen.
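Qwen2.5-Instruct models use the ChatML prompt format. The sketch below shows how a chat history can be flattened into the raw prompt string expected by a GGUF runtime such as llama.cpp; the helper name `format_chatml` is an illustrative assumption, not part of any library.

```python
# Minimal sketch: build a ChatML prompt string for Qwen2.5-Instruct.
# The function name is hypothetical; only the ChatML token layout
# (<|im_start|> / <|im_end|>) comes from the model's prompt format.
def format_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
print(prompt)
```

Most GGUF runtimes can also apply the chat template embedded in the file automatically, in which case manual formatting like this is unnecessary.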

## Model Description

- **Developed by:** Qwen
- **Model type:** qwen2.5
- **Parameters:** 32 billion
- **Model parent:** Qwen/Qwen2.5-32B-Instruct
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat, general-purpose LLM
- **Quantization:** int4

## Model Card Contact

- llmware on github
- llmware on hf
- llmware website