Update README.md
README.md CHANGED
@@ -49,18 +49,17 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
- Significant improvements in **code generation**, **code reasoning**, and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.

-**This repo contains the
-- Type: Causal Language Models
-- Training Stage: Pretraining
-- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias
-- Number of Parameters:
-- Number of Parameters (Non-Embedding):
-- Number of Layers:
-- Number of Attention Heads (GQA):
-- Context Length: Full
-
-**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill-in-the-middle tasks on this model.
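The note removed above steers chat use away from base checkpoints and points instead at post-training or fill-in-the-middle (FIM) tasks. As a hedged sketch of that FIM workflow: the Qwen2.5-Coder family documents `<|fim_prefix|>`, `<|fim_suffix|>`, and `<|fim_middle|>` special tokens; the base repo id `Qwen/Qwen2.5-Coder-32B` and the example snippet are assumptions, not text from this diff.

```python
# Hedged sketch of fill-in-the-middle prompting on the *base* model,
# assuming the repo id Qwen/Qwen2.5-Coder-32B and the FIM special
# tokens documented for the Qwen2.5-Coder family.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-32B"  # assumed base-model repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# The model is asked to generate the code that belongs between
# the prefix and the suffix.
prompt = (
    "<|fim_prefix|>def binary_search(arr, target):\n"
    "    lo, hi = 0, len(arr) - 1\n"
    "<|fim_suffix|>    return -1\n"
    "<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated middle span.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```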
+**This repo contains the instruction-tuned 32B Qwen2.5-Coder model**, which has the following features:
+- Type: Causal Language Models
+- Training Stage: Pretraining & Post-training
+- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
+- Number of Parameters: 32.5B
+- Number of Parameters (Non-Embedding): 31.0B
+- Number of Layers: 64
+- Number of Attention Heads (GQA): 40 for Q and 8 for KV
+- Context Length: Full 131,072 tokens
+- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
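Since the card now documents the instruction-tuned variant, a minimal chat-style usage sketch may help. It assumes the Hugging Face repo id `Qwen/Qwen2.5-Coder-32B-Instruct` and the standard `transformers` chat-template API; neither the prompt nor the exact calls are taken from this diff.

```python
# Minimal sketch, assuming the repo id Qwen/Qwen2.5-Coder-32B-Instruct
# and a recent transformers release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-32B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick bf16/fp16 automatically where supported
    device_map="auto",    # shard the 32.5B parameters across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a quicksort function in Python."},
]
# apply_chat_template renders the conversation into the model's chat format.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

One practical consequence of the GQA layout listed above: with 8 KV heads against 40 query heads, the KV cache stores a fifth as many head projections per layer as full multi-head attention would at the same hidden size, which matters at the 131,072-token context length.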
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
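The card advertises the full 131,072-token context and links a "Processing Long Texts" section that is not part of this diff. Other Qwen2.5 model cards enable inputs beyond 32,768 tokens with a YaRN `rope_scaling` patch to `config.json`; the sketch below reproduces that commonly published pattern as an assumption about what the linked section covers.

```python
# Hedged sketch: the YaRN rope-scaling entry that Qwen2.5 cards
# typically suggest for inputs beyond 32,768 tokens. The values are
# the commonly published ones, not taken from this diff.
import json

rope_patch = {
    "rope_scaling": {
        "factor": 4.0,                              # 4 * 32768 = 131072 tokens
        "original_max_position_embeddings": 32768,  # pre-scaling context window
        "type": "yarn",
    }
}

# Merge into the checkpoint's config.json before serving it,
# e.g. with an inference engine such as vLLM.
with open("config.json") as f:
    cfg = json.load(f)
cfg.update(rope_patch)
with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```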
## Requirements