Update README.md
README.md
CHANGED
@@ -49,6 +49,23 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```
 
+# Evaluation scores
+
+We find that this is the best-performing model in the 7/8B class of LLMs on a multitude of Japanese language benchmarks.
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/2obyDbrjiNV3PGfwom6EI.png)
+
+# Training data
+
+We train on three sources of data to create this model:
+
+* [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations
+  * A hand-edited dataset of nearly 700 conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset.
+* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) (Japanese conversations only) - 167 conversations
+  * Conversations taken from humans talking to GPT-4
+* lightblue/tagengo-gpt4 (Japanese prompts only) (Link coming soon!) - 2,482 conversations
+  * Almost 2,500 diverse Japanese prompts sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview`
+
 # Training config
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
@@ -118,23 +135,6 @@ special_tokens:
 
 </details><br>
 
-# workspace/llm_training/axolotl/llama3-ja/output_openchat_megagon_lbgpt4_ja_8B_instruct
-
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.9555
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
 
 ## Training procedure
 
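The new "Training data" section above says the `lightblue/tagengo-gpt4` subset was built by sampling Japanese prompts from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and answering them with `gpt-4-0125-preview`. The commit does not include that generation code, so the following is only a minimal sketch of such a pipeline, assuming the `datasets` library and the OpenAI v1 Python client; the language filter, sample size, and output path are illustrative assumptions, not the authors' exact recipe.

```python
# Sketch: generate GPT-4 answers for Japanese prompts sampled from lmsys-chat-1m.
# Assumes `pip install datasets openai`, an OPENAI_API_KEY in the environment,
# and access to the gated lmsys/lmsys-chat-1m dataset. The filter and sample
# size are illustrative; the card's "diverse" sampling/deduplication is omitted.
import json

from datasets import load_dataset
from openai import OpenAI

client = OpenAI()

# Each lmsys-chat-1m row carries a `language` tag and a `conversation` turn list.
chats = load_dataset("lmsys/lmsys-chat-1m", split="train")
japanese = chats.filter(lambda row: row["language"] == "Japanese")

# Use the first user turn of each conversation as the prompt.
prompts = [row["conversation"][0]["content"] for row in japanese.select(range(2500))]

with open("tagengo_ja_sketch.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        completion = client.chat.completions.create(
            model="gpt-4-0125-preview",
            messages=[{"role": "user", "content": prompt}],
        )
        record = {"prompt": prompt, "response": completion.choices[0].message.content}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```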
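The "# Training config" section embeds the full Axolotl YAML in the `<details>` block whose tail (`special_tokens:`) appears in the second hunk's context. Axolotl conventionally reads conversation data in ShareGPT format via a `datasets:` entry such as `{path: ..., type: sharegpt}`; below is a rough sketch of merging the three sources into one such file. The input file names and their `prompt`/`response` fields are hypothetical; only the output schema follows Axolotl's documented ShareGPT layout.

```python
# Sketch: merge the three sources into one ShareGPT-style JSONL file that an
# Axolotl `datasets:` entry with `type: sharegpt` can consume. Input file names
# and their per-row fields are hypothetical placeholders.
import json

SOURCES = [
    "instruction_ja.jsonl",     # hypothetical local export of megagonlabs/instruction_ja
    "hh_rlhf_ja_edited.jsonl",  # the hand-edited hh-rlhf-49k-ja translations
    "tagengo_ja_sketch.jsonl",  # output of the generation sketch above
]

def to_sharegpt(row: dict) -> dict:
    """Convert a single prompt/response pair to a ShareGPT conversation."""
    return {
        "conversations": [
            {"from": "human", "value": row["prompt"]},
            {"from": "gpt", "value": row["response"]},
        ]
    }

with open("train_ja_sharegpt.jsonl", "w", encoding="utf-8") as out:
    for path in SOURCES:
        with open(path, encoding="utf-8") as f:
            for line in f:
                out.write(json.dumps(to_sharegpt(json.loads(line)), ensure_ascii=False) + "\n")
```

Training from the YAML is then typically launched with Axolotl's documented CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml`.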
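For reference, the first hunk's context lines show only the tail of the README's inference example (`for output in outputs:` ... `print(...)`). A self-contained vLLM sketch consistent with that tail follows; the model ID, prompt, and sampling settings are placeholders, and a chat-tuned model would normally need its chat template applied to the prompt first.

```python
# Sketch of the vLLM usage whose tail appears in the first hunk's context.
# The model ID, prompt, and sampling parameters are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="your-org/your-llama-3-8B-ja-instruct")  # hypothetical model ID
sampling_params = SamplingParams(temperature=0.0, max_tokens=256)

prompts = ["日本の首都はどこですか?"]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```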