---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-0.5B-Instruct-GGUF

## Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 0.5B Qwen2 model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, and more.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2).

In this repo, we provide the `fp16` model and quantized models in the GGUF format, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.

## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code.

## Training Details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide.
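
For reference, a minimal build sketch (assuming a Unix-like system with `git` and a C/C++ toolchain; build steps have changed across `llama.cpp` releases, so follow the official guide if it differs for your version):

```shell
# Clone llama.cpp and build the command-line tools.
# Older releases build with make; newer ones use CMake (see the repository README).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```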

## How to use
Cloning the whole repo may be inefficient, so you can manually download just the GGUF file that you need, or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
```shell
huggingface-cli download Qwen/Qwen2-0.5B-Instruct-GGUF qwen2-0_5b-instruct-q8_0.gguf --local-dir . --local-dir-use-symlinks False
```
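
The other quantizations listed above can be fetched the same way. Assuming the files follow the `qwen2-0_5b-instruct-<quant>.gguf` naming pattern, a 4-bit variant would be downloaded like this:

```shell
# Illustrative example: download the q4_k_m quantization instead of q8_0;
# swap in whichever quant level from the list above that you need.
huggingface-cli download Qwen/Qwen2-0.5B-Instruct-GGUF qwen2-0_5b-instruct-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
```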

We demonstrate how to use `llama.cpp` to run Qwen2 in interactive chat mode:
```shell
./main -m qwen2-0_5b-instruct-q8_0.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
```
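
If you would rather serve the model over HTTP than chat in the terminal, `llama.cpp` also ships a server. A minimal sketch, assuming an older build where the binary is named `server` (newer releases rename it to `llama-server`; check `--help` for the flags your build supports):

```shell
# Start an HTTP server on port 8080 with a 2048-token context window;
# the port and context size here are illustrative, not required values.
./server -m qwen2-0_5b-instruct-q8_0.gguf -c 2048 --port 8080
```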

## Citation

If you find our work helpful, feel free to cite it.

```
@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}
```