Update README.md

README.md (CHANGED)
@@ -11,37 +11,21 @@ language:
 
<!-- Provide a quick summary of what the model is/does. -->

-**EvoLLM-JP-v1-7B** is a Japanese Math LLM built by Evolutionary Model Merge.
-
-### Model Description
-
-<!-- Provide a longer summary of what this model is. -->
-
-**EvoLLM-JP-v1-7B** is a Japanese Math LLM merged from the following source models in the Parameter Space (PS) by Evolutionary Model Merge.
-
-- **Developed by:** [Sakana AI](https://sakana.ai/)
-- **Model type:** Autoregressive Language Model
-- **Language(s):** Japanese
-- **License:** [MICROSOFT RESEARCH LICENSE TERMS](./LICENSE)
-- **Source models:**
-  - [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1)
-  - [WizardMath 7B V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
-  - [Abel 7B 002](https://huggingface.co/GAIR/Abel-7B-002)
-
-### Model Sources
-
-- **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge)
-- **Paper:** TODO
-- **Blog:** TODO
+**EvoLLM-JP-v1-7B** is an experimental general-purpose Japanese LLM created with the Evolutionary Model Merge method. Please refer to our [report](TODO) and [blog](TODO) for details. It was produced by merging the following source models, and we are grateful to their developers:
+
+- [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1)
+- [WizardMath 7B V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
+- [Abel 7B 002](https://huggingface.co/GAIR/Abel-7B-002)

## Usage

Use the code below to get started with the model.

+<details>
+<summary> Click to expand </summary>

```python
import torch
@@ -70,21 +54,23 @@ generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(generated_text)
```

+</details>
+
+## Model Details
+
+<!-- Provide a longer summary of what this model is. -->
+
+- **Developed by:** [Sakana AI](https://sakana.ai/)
+- **Model type:** Autoregressive Language Model
+- **Language(s):** Japanese
+- **License:** [MICROSOFT RESEARCH LICENSE TERMS](./LICENSE) (due to the inclusion of the WizardMath model)
+- **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge)
+- **Paper:** TODO
+- **Blog:** TODO

## Acknowledgement
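The old card's phrase "merged ... in the Parameter Space (PS)" means the source models' weights are combined directly, rather than ensembling their outputs. As a rough baseline illustration only, a PS merge of architecture-compatible checkpoints can be a weighted average of their state dicts. The sketch below assumes that baseline: the three repositories are the source models listed in the card, but the 0.4/0.3/0.3 coefficients are arbitrary placeholders, not Sakana AI's published recipe, which searches for the merge parameters with an evolutionary algorithm.

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative parameter-space (PS) merge: a weighted average of the weights
# of architecture-compatible source models. The mixing coefficients below are
# arbitrary placeholders, NOT the published EvoLLM-JP-v1-7B configuration.
sources = {
    "augmxnt/shisa-gamma-7b-v1": 0.4,
    "WizardLM/WizardMath-7B-V1.1": 0.3,
    "GAIR/Abel-7B-002": 0.3,
}

merged_state = None
for repo_id, coeff in sources.items():
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)
    state = model.state_dict()
    if merged_state is None:
        merged_state = {name: coeff * tensor for name, tensor in state.items()}
    else:
        for name, tensor in state.items():
            merged_state[name] += coeff * tensor
    del model  # release each checkpoint before loading the next

# Instantiate one source to get the shared architecture, then load the average.
merged = AutoModelForCausalLM.from_pretrained(
    "augmxnt/shisa-gamma-7b-v1", torch_dtype=torch.float32
)
merged.load_state_dict(merged_state)
merged.save_pretrained("ps-merged-model")
```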
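In the diff view, most of the README's usage snippet is collapsed: only `import torch`, the `generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]` line carried in the hunk header, and `print(generated_text)` are visible. The sketch below is a typical transformers loading pattern consistent with those surviving lines, not the README's verbatim snippet; the repository id, prompt, and generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for the model this card describes.
model_id = "SakanaAI/EvoLLM-JP-v1-7B"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.to(device)

# Illustrative Japanese prompt; the README's actual example may differ.
text = "ある学校の生徒数は120人で、そのうち3分の1が自転車通学です。自転車通学の生徒は何人ですか?"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
output_ids = model.generate(input_ids, max_new_tokens=256)

# These two lines survive verbatim in the diff above.
generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(generated_text)
```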