---
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
pipeline_tag: text-generation
inference: false
model_creator: Qwen
model_name: Qwen2-7B-Instruct
model_type: qwen2
language:
- en
- zh
library_name: transformers
quantized_by: ThiloteE
tags:
- text-generation-inference
- transformers
- GGUF
- GPT4All-community
- GPT4All
- chat
- aligned
- instruct
---
> [!NOTE]
> This model is assumed to perform well but may require more testing and user feedback. Be aware that only models featured within the GUI of GPT4All are curated and officially supported by Nomic. Use at your own risk.
# About
- Static quants of https://huggingface.co/Qwen/Qwen2-7B-Instruct at commit [41c66b0](https://huggingface.co/Qwen/Qwen2-7B-Instruct/commit/41c66b0be1c3081f13defc6bdf946c2ef240d6a6)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [84eb2f4](https://github.com/ggerganov/llama.cpp/commit/84eb2f4fad28ceadd415a4e775320c983f4d9a7d)

These quants were created with a customized configuration that has been proven to be compatible with [GPT4All](https://www.nomic.ai/gpt4all) and that fixes issues with the BOS and EOS tokens, following [feedback](https://huggingface.co/Qwen/Qwen2-7B-Instruct/discussions/15) from the Qwen developers.
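
If you want to verify the token configuration yourself, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption: the quant file from the table below sits in the current directory):

```python
from llama_cpp import Llama

# Load only the vocabulary and metadata, not the full weights.
llm = Llama(model_path="Qwen2-7B-Instruct-Q4_0.gguf", vocab_only=True)

# Print the BOS and EOS token ids stored in the GGUF metadata.
print("bos:", llm.token_bos())
print("eos:", llm.token_eos())
```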
# Prompt Template (for GPT4All)
Example System Prompt:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
```
Chat Template:
```
<|im_start|>user
%1<|im_end|>
<|im_start|>assistant
%2<|im_end|>
```
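
As a sketch of how this template is applied in practice, the `gpt4all` Python bindings wrap it in a chat session (an assumption: the GGUF file from this repo is already in GPT4All's model directory):

```python
from gpt4all import GPT4All

# Load the quantized model; the file name matches the quant table below.
model = GPT4All("Qwen2-7B-Instruct-Q4_0.gguf")

# A chat session applies the system prompt and chat template shown above.
with model.chat_session():
    print(model.generate("Why is the sky blue?", max_tokens=256))
```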
# Context Length
`32768`
Use a lower value during inference if you do not have enough RAM or VRAM.
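
For example, with the `llama-cpp-python` bindings the context window can be reduced at load time (a sketch; the value 8192 is just an illustration):

```python
from llama_cpp import Llama

# Request a smaller context window than the 32768-token maximum;
# the KV cache grows with n_ctx, so this lowers memory usage.
llm = Llama(model_path="Qwen2-7B-Instruct-Q4_0.gguf", n_ctx=8192)
```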
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct-Q4_0.gguf?download=true) | Q4_0 | 4.43 | fast, recommended |
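
One way to fetch a quant from this table is the `huggingface_hub` library (a sketch, assuming `pip install huggingface_hub`):

```python
from huggingface_hub import hf_hub_download

# Download the Q4_0 quant from this repository into the local HF cache.
path = hf_hub_download(
    repo_id="GPT4All-Community/Qwen2-7B-Instruct-GGUF",
    filename="Qwen2-7B-Instruct-Q4_0.gguf",
)
print(path)
```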
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for more details, including how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for the inspiration for this model card and for their contributions to open source, and 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)
<br>
<br>
<br>
<br>
------
<!-- footer end -->
<!-- original-model-card start -->
# Original Model card:
>
> ---
> license: apache-2.0
> language:
> - en
> pipeline_tag: text-generation
> tags:
> - chat
> ---
>
> # Qwen2-7B-Instruct
>
> ## Introduction
>
> Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model.
>
> Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
>
> Qwen2-7B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
>
> For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> <br>
>
> ## Model Details
> Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
>
> ## Training details
> We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
>
>
> ## Requirements
> The code for Qwen2 has been included in the latest Hugging Face transformers library, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
> ```
> KeyError: 'qwen2'
> ```
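>
> A quick way to check the installed version is a minimal sketch like the following (the `packaging` library ships as a dependency of transformers):
>
> ```python
> import transformers
> from packaging import version
>
> # Qwen2 support requires transformers >= 4.37.0.
> assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
>     f"transformers {transformers.__version__} is too old for Qwen2"
> ```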
>
> ## Quickstart
>
> Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model, and how to generate content.
>
> ```python
> from transformers import AutoModelForCausalLM, AutoTokenizer
> device = "cuda" # the device to load the model onto
>
> model = AutoModelForCausalLM.from_pretrained(
> "Qwen/Qwen2-7B-Instruct",
> torch_dtype="auto",
> device_map="auto"
> )
> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
>
> prompt = "Give me a short introduction to large language model."
> messages = [
> {"role": "system", "content": "You are a helpful assistant."},
> {"role": "user", "content": prompt}
> ]
> text = tokenizer.apply_chat_template(
> messages,
> tokenize=False,
> add_generation_prompt=True
> )
> model_inputs = tokenizer([text], return_tensors="pt").to(device)
>
> generated_ids = model.generate(
> model_inputs.input_ids,
> max_new_tokens=512
> )
> generated_ids = [
> output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
> ]
>
> response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
> print(response)
> ```
>
> ### Processing Long Texts
>
> To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
>
> For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
>
> 1. **Install vLLM**: You can install vLLM by running the following command.
>
> ```bash
> pip install "vllm>=0.4.3"
> ```
>
> Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
>
> 2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the below snippet:
> ```json
> {
> "architectures": [
> "Qwen2ForCausalLM"
> ],
> // ...
> "vocab_size": 152064,
>
> // adding the following snippets
> "rope_scaling": {
> "factor": 4.0,
> "original_max_position_embeddings": 32768,
> "type": "yarn"
> }
> }
> ```
> This snippet enables YARN to support longer contexts: with a factor of 4.0, the base 32,768-token context extends to 4 × 32,768 = 131,072 tokens.
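>
> If you prefer to apply this change programmatically, here is a minimal sketch (an assumption: the weights live under `path/to/weights`, as in the deployment command below):
>
> ```python
> import json
>
> config_path = "path/to/weights/config.json"
>
> with open(config_path) as f:
>     config = json.load(f)
>
> # Add the YARN rope scaling described above.
> config["rope_scaling"] = {
>     "factor": 4.0,
>     "original_max_position_embeddings": 32768,
>     "type": "yarn",
> }
>
> with open(config_path, "w") as f:
>     json.dump(config, f, indent=2)
> ```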
>
> 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:
>
> ```bash
> python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-7B-Instruct --model path/to/weights
> ```
>
> Then you can access the Chat API by:
>
> ```bash
> curl http://localhost:8000/v1/chat/completions \
> -H "Content-Type: application/json" \
> -d '{
> "model": "Qwen2-7B-Instruct",
> "messages": [
> {"role": "system", "content": "You are a helpful assistant."},
> {"role": "user", "content": "Your Long Input Here."}
> ]
> }'
> ```
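>
> Equivalently, a sketch using the official `openai` Python client against the same endpoint (the API key is a placeholder; vLLM does not check it by default):
>
> ```python
> from openai import OpenAI
>
> # Point the client at the local vLLM server started above.
> client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
>
> completion = client.chat.completions.create(
>     model="Qwen2-7B-Instruct",
>     messages=[
>         {"role": "system", "content": "You are a helpful assistant."},
>         {"role": "user", "content": "Your Long Input Here."},
>     ],
> )
> print(completion.choices[0].message.content)
> ```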
>
> For further usage instructions for vLLM, please refer to our [GitHub](https://github.com/QwenLM/Qwen2).
>
> **Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
>
> ## Evaluation
>
> We briefly compare Qwen2-7B-Instruct with similar-sized instruction-tuned LLMs, including Qwen1.5-7B-Chat. The results are shown below:
>
> | Datasets | Llama-3-8B-Instruct | Yi-1.5-9B-Chat | GLM-4-9B-Chat | Qwen1.5-7B-Chat | Qwen2-7B-Instruct |
> | :--- | :---: | :---: | :---: | :---: | :---: |
> | _**English**_ | | | | | |
> | MMLU | 68.4 | 69.5 | **72.4** | 59.5 | 70.5 |
> | MMLU-Pro | 41.0 | - | - | 29.1 | **44.1** |
> | GPQA | **34.2** | - | - | 27.8 | 25.3 |
> | TheoremQA | 23.0 | - | - | 14.1 | **25.3** |
> | MT-Bench | 8.05 | 8.20 | 8.35 | 7.60 | **8.41** |
> | _**Coding**_ | | | | | |
> | HumanEval | 62.2 | 66.5 | 71.8 | 46.3 | **79.9** |
> | MBPP | **67.9** | - | - | 48.9 | 67.2 |
> | MultiPL-E | 48.5 | - | - | 27.2 | **59.1** |
> | Evalplus | 60.9 | - | - | 44.8 | **70.3** |
> | LiveCodeBench | 17.3 | - | - | 6.0 | **26.6** |
> | _**Mathematics**_ | | | | | |
> | GSM8K | 79.6 | **84.8** | 79.6 | 60.3 | 82.3 |
> | MATH | 30.0 | 47.7 | **50.6** | 23.2 | 49.6 |
> | _**Chinese**_ | | | | | |
> | C-Eval | 45.9 | - | 75.6 | 67.3 | **77.2** |
> | AlignBench | 6.20 | 6.90 | 7.01 | 6.20 | **7.21** |
>
> ## Citation
>
> If you find our work helpful, feel free to cite us.
>
> ```
> @article{qwen2,
> title={Qwen2 Technical Report},
> year={2024}
> }
> ```
>
<!-- original-model-card end -->
<!-- end -->