Update README.md
README.md CHANGED
@@ -18,14 +18,6 @@ tags:
 **Original model**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)<br>
 **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3772](https://github.com/ggerganov/llama.cpp/releases/tag/b3772)<br>
 
-## Model Summary:
-
-Qwen 2.5 is an update to the highly successful Qwen 2 series of models, with a huge range of releases.
-
-This 72B model represents the flagship of them all, with a large-scale training dataset and enhanced instruction following.
-
-It also features a large context, and support for generating long texts in over 29 languages.
-
 ## Technical Details
 
 Long context: Support for 128k tokens and 8k token generation
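For anyone loading one of these GGUF quants locally, here is a minimal sketch using the llama-cpp-python bindings (an assumption, not something shipped in this repo); the filename and settings are illustrative placeholders, so substitute whichever quant file you actually download:

```python
# Minimal sketch, assuming llama-cpp-python is installed (`pip install llama-cpp-python`).
# The GGUF filename below is a hypothetical placeholder for a quant from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-72B-Instruct-Q4_K_M.gguf",  # placeholder: use your downloaded file
    n_ctx=32768,       # context window for this session; the model supports up to 128k tokens
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GGUF quantization is in one sentence."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```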