feihu.hf committed on
Commit 12354c2
1 Parent(s): 36c9f7f

update README

Files changed (2)
  1. README.md +53 -0
  2. config.json +1 -1
README.md CHANGED
@@ -14,6 +14,8 @@ Qwen2 is the new series of Qwen large language models. For Qwen2, we release a n
 
 Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
 
+ Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
+
 For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2/) and [GitHub repo](https://github.com/QwenLM/Qwen2).
 <br>
 
@@ -68,6 +70,57 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
+ ### Processing Long Texts
+
+ To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
+
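+ With the `rope_scaling` values configured in step 2 below, the effective context length works out to `original_max_position_embeddings` × `factor` = 32,768 × 4.0 = 131,072 tokens, matching the advertised context length.
+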
+ For deployment, we recommend using vLLM. To enable long-context capabilities, follow these steps:
+
+ 1. **Install vLLM**: Ensure you have the latest version from the main branch of [vLLM](https://github.com/vllm-project/vllm).
+
+ 2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by adding the snippet below (a script that applies this edit programmatically is sketched at the end of this section):
+    ```json5
+    {
+        "architectures": [
+            "Qwen2ForCausalLM"
+        ],
+        // ...
+        "vocab_size": 152064,
+
+        // add the following snippet
+        "rope_scaling": {
+            "factor": 4.0,
+            "original_max_position_embeddings": 32768,
+            "type": "yarn"
+        }
+    }
+    ```
+    This snippet enables YARN to support longer contexts.
+
+ 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:
+
+    ```bash
+    python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights
+    ```
+
+    Then you can access the Chat API with:
+
+    ```bash
+    curl http://localhost:8000/v1/chat/completions \
+        -H "Content-Type: application/json" \
+        -d '{
+            "model": "Qwen2-72B-Instruct",
+            "messages": [
+                {"role": "system", "content": "You are a helpful assistant."},
+                {"role": "user", "content": "Your Long Input Here."}
+            ]
+        }'
+    ```
+
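+ Equivalently, you can call the server from Python. The following is a minimal sketch using the OpenAI client library (`pip install openai`, v1 or later, is assumed); the `api_key` value is a placeholder, since the local vLLM server does not verify it by default:
+
+ ```python
+ from openai import OpenAI
+
+ # Point the client at the local vLLM server started in step 3.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="Qwen2-72B-Instruct",
+     messages=[
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "Your Long Input Here."},
+     ],
+ )
+ print(response.choices[0].message.content)
+ ```
+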
+ For further usage instructions for vLLM, please refer to [our repository](https://github.com/QwenLM/Qwen2).
+
+ **Note**: Presently, vLLM supports only static YARN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
+
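+ If you switch between short- and long-context serving, the `config.json` edit from step 2 can be applied programmatically. Below is a minimal sketch (not part of the official tooling) that patches a downloaded checkpoint; `path/to/weights` is the same placeholder used in step 3:
+
+ ```python
+ import json
+ from pathlib import Path
+
+ config_path = Path("path/to/weights") / "config.json"
+ config = json.loads(config_path.read_text())
+
+ # Enable static YARN scaling for long-context serving (see step 2);
+ # remove this key again when serving mostly short inputs.
+ config["rope_scaling"] = {
+     "factor": 4.0,
+     "original_max_position_embeddings": 32768,
+     "type": "yarn",
+ }
+
+ config_path.write_text(json.dumps(config, indent=2))
+ ```
+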
 ## Citation
 
 If you find our work helpful, feel free to give us a cite.
config.json CHANGED
@@ -17,7 +17,7 @@
   "num_key_value_heads": 8,
   "rms_norm_eps": 1e-06,
   "rope_theta": 1000000.0,
- "sliding_window": 32768,
+ "sliding_window": 131072,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
   "transformers_version": "4.40.1",