feihu.hf committed
Commit 3aac2f5
1 Parent(s): 2f71adb
update README.md
Browse files:
- README.md +12 -12
- config.json +1 -1

README.md CHANGED
@@ -1,12 +1,12 @@
 ---
-base_model:
-- Qwen/Qwen2.5-Coder-7B-Instruct
-language:
-- en
-library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
+language:
+- en
+base_model:
+- Qwen/Qwen2.5-Coder-7B-Instruct
 pipeline_tag: text-generation
+library_name: transformers
 tags:
 - code
 - codeqwen
@@ -20,7 +20,7 @@ tags:
 
 ## Introduction
 
-Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers
+Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
 
 - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
 - A more comprehensive foundation for real-world applications such as **Code Agents**. It not only enhances coding capabilities but also maintains its strengths in mathematics and general competencies.
@@ -38,7 +38,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
 - Quantization: GPTQ 8-bit
 
-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), and [Arxiv](https://arxiv.org/abs/2409.12186).
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), and [Arxiv](https://arxiv.org/abs/2409.12186).
 
 ## Requirements
 
@@ -114,7 +114,7 @@ We advise adding the `rope_scaling` configuration only when processing long cont
 
 ## Evaluation & Performance
 
-Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
+Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
 
 For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
 
@@ -124,10 +124,10 @@ If you find our work helpful, feel free to give us a cite.
 
 ```
 @article{hui2024qwen2,
-
-
-
-
+title={Qwen2.5-Coder Technical Report},
+author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
+journal={arXiv preprint arXiv:2409.12186},
+year={2024}
 }
 @article{qwen2,
 title={Qwen2 Technical Report},
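The metadata reordered in the first hunk (`pipeline_tag: text-generation`, `library_name: transformers`) together with the "Quantization: GPTQ 8-bit" line describes how this checkpoint is meant to be consumed. A minimal usage sketch, assuming the repository id `Qwen/Qwen2.5-Coder-7B-Instruct-GPTQ-Int8` (inferred from the card's base model plus the GPTQ 8-bit note, not stated verbatim in this diff) and an environment with the GPTQ kernels installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id is an assumption inferred from the card's base_model and GPTQ 8-bit note.
model_id = "Qwen/Qwen2.5-Coder-7B-Instruct-GPTQ-Int8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a completion.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a quicksort function in Python."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

This mirrors the standard transformers text-generation workflow that the card's quickstart section (outside this diff) presumably documents.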
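The fourth hunk's header line ("We advise adding the `rope_scaling` configuration only when processing long cont…") points at the card's long-context instructions, which are not part of this diff. A hedged sketch of how that advice is typically applied with transformers; the YaRN values (`factor`, `original_max_position_embeddings`) are assumptions borrowed from Qwen's long-context documentation rather than from this commit:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Repository id is an assumption inferred from the card metadata, as in the previous sketch.
model_id = "Qwen/Qwen2.5-Coder-7B-Instruct-GPTQ-Int8"

config = AutoConfig.from_pretrained(model_id)

# YaRN-style rope scaling; the concrete values are assumptions taken from
# Qwen's long-context documentation, not from this commit.
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```

Because this scaling is static, it is usually enabled only when prompts actually exceed the native context window.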
config.json CHANGED
@@ -48,4 +48,4 @@
   "use_cache": true,
   "use_sliding_window": false,
   "vocab_size": 152064
-}
+}
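The config.json hunk touches only the closing brace, which is the usual signature of a trailing-newline fix rather than a semantic change. A small sketch (assuming the trailing newline is indeed the only difference) showing that a JSON parser sees both versions identically:

```python
import json

# Both strings carry the fields shown in the hunk; they differ only in the
# presence of a newline after the final "}".
without_trailing_newline = (
    '{\n'
    '  "use_cache": true,\n'
    '  "use_sliding_window": false,\n'
    '  "vocab_size": 152064\n'
    '}'
)
with_trailing_newline = without_trailing_newline + "\n"

# json.loads ignores trailing whitespace, so the parsed objects are equal.
assert json.loads(without_trailing_newline) == json.loads(with_trailing_newline)
print(json.loads(with_trailing_newline)["vocab_size"])  # 152064
```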