Update README.md
Browse files
README.md
CHANGED
@@ -13,14 +13,13 @@ tags:
 
 
 ## Llama-VARCO-8B-Instruct-GGUF
-
 ## Thanks to https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct
-## <Translate the original text>
 
+
+## **Translate the original text**
 ## Llama-VARCO-8B-Instruct
 
 ### About the Model
-
 **Llama-VARCO-8B-Instruct** is a *generative model* built with Llama, specifically designed to excel in Korean through additional training. The model uses continual pre-training with both Korean and English datasets to enhance its understanding and generation capabilities in Korean, while also maintaining its proficiency in English. It was further tuned with supervised fine-tuning (SFT) and direct preference optimization (DPO) in Korean to align with human preferences.
 
 - **Developed by:** NC Research, Language Model Team
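Since this repository hosts a GGUF conversion of the model described above, a short usage sketch may help. The snippet below is a minimal, hypothetical example using the `llama-cpp-python` bindings; the quantization filename (`llama-varco-8b-instruct-Q4_K_M.gguf`), the `llama-3` chat format choice, and the sampling parameters are assumptions, not details taken from this README.

```python
# Minimal sketch (assumptions noted above): load a local GGUF quantization of
# Llama-VARCO-8B-Instruct with llama-cpp-python and send a chat request in Korean.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-varco-8b-instruct-Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,              # context window; adjust to available memory
    chat_format="llama-3",   # Llama-3-style chat template, since the base model is Llama
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        # Korean prompt: "Hello, please introduce yourself."
        {"role": "user", "content": "안녕하세요, 자기소개를 해주세요."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```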