Update README.md
README.md CHANGED
@@ -19,7 +19,7 @@ Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we rele
 - **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
 - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
 
-**This repo contains the instruction-tuned
+**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
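Since the edited README describes an instruction-tuned causal language model distributed through Hugging Face `transformers`, a minimal usage sketch is included below for context. The repo id `Qwen/Qwen2.5-0.5B-Instruct` and the chat-template call are assumptions based on the 0.5B instruct model named in the diff; they are not part of this commit.

```python
# Minimal sketch: loading the instruction-tuned 0.5B Qwen2.5 model with Hugging Face transformers.
# The repo id below is an assumption inferred from the model size named in the README diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed repo id for the 0.5B instruct model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt via the tokenizer's chat template (causal, instruction-tuned model).
messages = [{"role": "user", "content": "Give me a one-sentence summary of Qwen2.5."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short reply; the README's stated ceiling is up to 8K generated tokens.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```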

