---
license: llama3
language:
- de
library_name: transformers
---

# Llama3_DiscoLeo_8B_DARE_Experimental_4bit_awq_glc

This model is a 4-bit AWQ quantization of [DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental).
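
A minimal usage sketch with `transformers` (assumptions: the `autoawq` package is installed, a CUDA GPU is available, and the repository id below is a placeholder for this repo's actual path):

```python
# Hedged usage sketch. The repo id is a placeholder -- substitute the actual
# path of this repository. Loading AWQ checkpoints requires `autoawq`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<user>/Llama3_DiscoLeo_8B_DARE_Experimental_4bit_awq_glc"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a Llama-3-Instruct style prompt via the tokenizer's chat template.
# German prompt: "Write a short poem about Berlin."
messages = [{"role": "user", "content": "Schreibe ein kurzes Gedicht über Berlin."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```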

Below is a copy of the original model card:

[DiscoResearch/Llama3_German_8B_v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) is a large language model based on [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). It is specialized for the German language through continuous pretraining on 65 billion high-quality tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.

This is a merge of our instruct model with Meta's Llama-3-8B-Instruct model, created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as the base model.
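
For intuition, the DARE step drops each entry of a fine-tuned model's delta (its difference from the base weights) with probability `1 - density` and rescales the survivors by `1 / density`, so the merged delta stays unbiased in expectation; TIES then resolves sign conflicts between the contributing models. A rough illustrative sketch of the drop-and-rescale step on a single tensor (not mergekit's implementation; the TIES sign-election is omitted):

```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    """Illustrative DARE drop-and-rescale on one weight tensor (sketch only)."""
    delta = finetuned - base                                  # task vector of the fine-tune
    keep = torch.bernoulli(torch.full_like(delta, density))   # keep each entry with prob. `density`
    return base + keep * delta / density                      # rescale so the expected delta is unchanged
```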

### Models Merged

The following models were included in the merge:
* [DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
```
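
For reference, a configuration like this can be run either with mergekit's `mergekit-yaml` command-line tool or through its Python API. The sketch below uses the Python API roughly as documented in the mergekit README; exact module paths and options can differ between mergekit versions, and the config and output paths are placeholders:

```python
# Hedged sketch of driving mergekit from Python; imports and options may vary
# by mergekit version. Paths below are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("dare_ties_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama3_DiscoLeo_8B_DARE_merge",  # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run tensor ops on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```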