---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
base_model: UnicomLLM/Unichat-llama3-Chinese-8B
---

|
# Unichat-llama3-Chinese-8B-GGUF

- This is a quantized version of [UnicomLLM/Unichat-llama3-Chinese-8B](https://huggingface.co/UnicomLLM/Unichat-llama3-Chinese-8B)

|
# Model Description (Translated)

|
- China Unicom AI Innovation Center released the industry's first Llama 3 Chinese instruction fine-tuned model (full-parameter fine-tuning), uploaded at 22:00 on April 19, 2024.
- This model is based on [**Meta Llama 3**](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6) and is trained with additional Chinese data, enabling high-quality Chinese question answering with the Llama 3 model.
- The model keeps the native 8K context length; a version supporting a 64K context will be released later.
- Base model: [**Meta-Llama-3-8B**](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

|
### 📊 Data

|
- High-quality instruction data covering multiple fields and industries, providing sufficient data support for model training.
- Fine-tuning instruction data undergoes strict manual screening to ensure that only high-quality instructions are used for model fine-tuning.

|
For more details on the model, datasets, and training, please refer to:

* GitHub: [**Unichat-llama3-Chinese**](https://github.com/UnicomAI/Unichat-llama3-Chinese)
|
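As a minimal usage sketch: assuming this fine-tune follows Meta's standard Llama 3 chat template, a prompt can be assembled from the Llama 3 special tokens and then passed to a GGUF runtime. The GGUF filename and the use of `llama-cpp-python` below are assumptions for illustration, not part of this release.

```python
def build_llama3_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    # Assemble a single-turn prompt using Meta Llama 3's chat special tokens.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical GGUF inference via llama-cpp-python (the filename is an assumption):
# from llama_cpp import Llama
# llm = Llama(model_path="Unichat-llama3-Chinese-8B.Q4_K_M.gguf", n_ctx=8192)
# out = llm(build_llama3_prompt("你好,请介绍一下你自己。"),
#           max_tokens=256, stop=["<|eot_id|>"])
# print(out["choices"][0]["text"])
```

Generation is stopped at `<|eot_id|>`, the end-of-turn token in the Llama 3 template, so the model does not continue past its own reply.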