---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-GGUF/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- chat
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64a523ba1ed90082dafde3d3/kJrkxofwOp-89uYFe0EBb.png" alt="LlamaFile" style="width: 50%; min-width: 400px; display: block; margin: auto;">
</div>

I am not the original creator of llamafile; all credit for llamafile goes to Jartine:
<!-- README_llamafile.md-about-llamafile end -->
<!-- repositories-available start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Qwen2.5 14B Instruct GGUF - llamafile

## Run LLMs locally with a single file - No installation required!

All you need to do is download a file and run it.

Our goal is to make open source large language models much more accessible to both developers and end users. We're doing that by combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one framework that collapses all the complexity of LLMs down to a single-file executable (called a "llamafile") that runs locally on most computers, with no installation.

## How to Use (Modified from [Git README](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a?tab=readme-ov-file#quickstart))

The easiest way to try it for yourself is to download our example llamafile. With llamafile, all inference happens locally; no data ever leaves your computer.

1. Download the llamafile.

2. Open your computer's terminal.

3. If you're using macOS, Linux, or BSD, you'll need to grant permission for your computer to execute this new file. (You only need to do this once.)

```sh
chmod +x qwen2.5-14b-instruct-q8_0.gguf
```

4. If you're on Windows, rename the file by adding ".exe" on the end.

5. Run the llamafile. e.g.:

```sh
./qwen2.5-14b-instruct-q8_0.gguf
```

6. Your browser should open automatically and display a chat interface. (If it doesn't, just open your browser and point it at http://localhost:8080.)

7. When you're done chatting, return to your terminal and hit `Control-C` to shut down llamafile.

Please note that LlamaFile is still under active development, so the steps above may not match the most recent documentation.
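
While the llamafile is running, you can also drive it from another terminal over HTTP: the built-in server is llama.cpp's, which exposes an OpenAI-style chat endpoint alongside the browser UI. The request below is a sketch under that assumption; the default port and endpoint path follow llama.cpp's server and may differ across llamafile versions.

```sh
# Sketch: query the running llamafile's OpenAI-compatible endpoint.
# Port 8080 and /v1/chat/completions are assumptions based on llama.cpp
# server defaults; adjust for your llamafile version.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello in one sentence."}]}'
```
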
## Settings for Qwen2.5 14B Instruct GGUF Llamafiles

- Model creator: [Qwen](https://huggingface.co/Qwen)
- Quantized GGUF files used: [Qwen/Qwen2.5-14B-Instruct-GGUF](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-GGUF/tree/b466e1f8c07172155743e8e1307507d8a4f91fbd)
  - Commit message "upload fp16 weights"
  - Commit hash b466e1f8c07172155743e8e1307507d8a4f91fbd
- LlamaFile version used: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile/tree/29b5f27172306da39a9c70fe25173da1b1564f82)
  - Commit message "Merge pull request #687 from Xydane/main Add Support for DeepSeek-R1 models"
  - Commit hash 29b5f27172306da39a9c70fe25173da1b1564f82
- `.args` content format (example), where the `...` line is llamafile's placeholder marking where any extra command-line arguments you pass at run time are inserted:

```
-m
qwen2.5-14b-instruct-q8_0.gguf
...
```
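
Because of that placeholder, flags given on the command line are forwarded to the embedded engine. A hypothetical invocation is sketched below; `--port` and `-ngl` follow llama.cpp's conventions, and support may vary by llamafile version.

```sh
# Sketch: extra flags land at the "..." position in .args and are
# passed through to the embedded llama.cpp engine (flag support may
# vary by llamafile version).
./qwen2.5-14b-instruct-q8_0.gguf --port 8081 -ngl 999
```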

## (The following is the original model card for Qwen2.5 14B Instruct GGUF)
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">

# Qwen2.5-14B-Instruct-GGUF

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 14B Qwen2.5 model in the GGUF format**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 14.7B
- Number of Parameters (Non-Embedding): 13.1B
- Number of Layers: 48
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens (see the sketch after this list)
- Note: Currently, only vLLM supports YaRN for length extrapolation. If you want to process sequences up to 131,072 tokens, please refer to the non-GGUF models.
- Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
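
llama.cpp does not necessarily default to a model's maximum context window, so the full 32,768 tokens are usually requested explicitly. The command below is a sketch using llama.cpp's standard `-c`/`--ctx-size` flag; the model file name is illustrative.

```shell
# Sketch: request the full 32,768-token context window explicitly
# via -c (the model file name is illustrative).
./llama-cli -m qwen2.5-14b-instruct-q5_k_m.gguf -c 32768 -cnv
```
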
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Quickstart

Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a more detailed usage guide.

We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp. In the following demonstration, we assume that you are running commands from within the `llama.cpp` repository.

Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`:
1. Install
```shell
pip install -U huggingface_hub
```
2. Download:
```shell
huggingface-cli download Qwen/Qwen2.5-14B-Instruct-GGUF --include "qwen2.5-14b-instruct-q5_k_m*.gguf" --local-dir . --local-dir-use-symlinks False
```
For large files, we split them into multiple segments due to the file-upload size limit. The segments share a prefix, with a suffix indicating the segment index. For example, `qwen2.5-14b-instruct-q5_k_m-00001-of-00003.gguf` through `qwen2.5-14b-instruct-q5_k_m-00003-of-00003.gguf`. The above command will download all of them.
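
If the quantization you want is not split across segments, you can also name the file directly instead of using `--include`. The command below is a sketch; the positional-file form is standard `huggingface-cli` usage, but the exact file name should be checked against the repository's file list.

```shell
# Sketch: download one specific (non-split) GGUF file by name.
# The file name is illustrative; confirm it exists in the repo first.
huggingface-cli download Qwen/Qwen2.5-14B-Instruct-GGUF \
  qwen2.5-14b-instruct-q4_0.gguf --local-dir .
```
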
3. (Optional) Merge:
For split files, you need to merge them first with the command `llama-gguf-split` as shown below:
```bash
# ./llama-gguf-split --merge <first-split-file-path> <merged-file-path>
./llama-gguf-split --merge qwen2.5-14b-instruct-q5_k_m-00001-of-00003.gguf qwen2.5-14b-instruct-q5_k_m.gguf
```

To achieve a chatbot-like experience, we recommend starting in conversation mode:

```shell
./llama-cli -m <gguf-file-path> \
  -co -cnv -p "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." \
  -fa -ngl 80 -n 512
```
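
Alternatively, llama.cpp ships a `llama-server` binary that serves the same model over an OpenAI-compatible HTTP API, which can be more convenient for applications than the interactive CLI. The sketch below follows standard llama.cpp flag conventions.

```shell
# Sketch: serve the model over HTTP instead of chatting in the terminal.
# Any OpenAI-compatible client can then target http://localhost:8080/v1.
./llama-server -m qwen2.5-14b-instruct-q5_k_m.gguf --port 8080 -fa -ngl 80
```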

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For quantized models, the benchmark results against the original bfloat16 models can be found [here](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html).

For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title = {Qwen2 Technical Report},
    author = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal = {arXiv preprint arXiv:2407.10671},
    year = {2024}
}
```