Steelskull committed
Commit 0c4850f · verified · 1 parent: b65cf91

Update README.md

Files changed (1):
README.md +61 -24
README.md CHANGED
@@ -1,31 +1,64 @@
  ---
- base_model:
- - failspy/Llama-3-8B-Instruct-abliterated
- library_name: transformers
  tags:
- - mergekit
  - merge
-
  ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the passthrough merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [failspy/Llama-3-8B-Instruct-abliterated](https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:

- ```yaml
  dtype: bfloat16
  merge_method: passthrough
  slices:
@@ -41,4 +74,8 @@
  - sources:
    - layer_range: [24, 32]
      model: failspy/Llama-3-8B-Instruct-abliterated
- ```
  ---
+ license: apache-2.0
  tags:
  - merge
+ - mergekit
+ base_model:
+ - NousResearch/Meta-Llama-3-8B-Instruct
  ---
+
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Aura-llama-3 Data Card</title>
+ <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
+ <style>
+ body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; }
+ .container { width: 80%; max-width: 800px; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); }
+ .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); }
+ .update-section { margin-top: 30px; }
+ .update-section h2 { font-size: 24px; color: #88C0D0; }
+ .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; }
+ .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; }
+ a { color: #88C0D0; text-decoration: none; }
+ a:hover { color: #A3BE8C; }
+ pre { background-color: rgba(255, 255, 255, 0.05); padding: 10px; border-radius: 5px; overflow-x: auto; }
+ code { font-family: 'Courier New', monospace; color: #A3BE8C; }
+ </style>
+ </head>
+ <body>
+ <div class="container">
+ <div class="header">
+ <h1>Aura-llama-3</h1>
+ </div>
+ <div class="info">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp" alt="Aura-llama image">
+ <p>Now that the cute anime girl has your attention.</p>
+ <p>UPDATE: The model has been fixed.</p>
+ <p>Aura-llama uses depth up-scaling (DUS), the methodology presented by SOLAR for scaling LLMs, which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future. A small sketch of the layer arithmetic follows below.</p>
+ <p>Aura-llama is a merge of the following models, used to create a base model to work from (the same model is listed twice because DUS stacks duplicated layer ranges from a single base):</p>
+ <ul>
+ <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li>
+ <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li>
+ </ul>
+ </div>
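+ <p>To make the depth up-scaling arithmetic concrete, here is a minimal sketch of the layer bookkeeping. It is illustrative only: the [0, 24] and [8, 32] ranges are assumptions following the SOLAR recipe for a 32-layer base, not the exact slices of this merge (the actual configuration is shown further down).</p>
+ <pre><code># Illustrative DUS layer arithmetic; not this model's exact recipe.
+ # Assumption: a 32-layer base (Llama-3-8B) sliced SOLAR-style.
+ base_layers = 32
+ slices = [(0, 24), (8, 32)]  # hypothetical overlapping layer ranges
+
+ upscaled = sum(end - start for start, end in slices)
+ print(f"base: {base_layers} layers -> upscaled: {upscaled} layers")
+ # prints: base: 32 layers -> upscaled: 48 layers
+ </code></pre>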
+ <div class="update-section">
+ <h2>Abliterated Merged Evals (Has Not Been Finetuned):</h2>
+ <p>Aura-llama</p>
+ <ul>
+ <li>Avg: ?</li>
+ <li>ARC: ?</li>
+ <li>HellaSwag: ?</li>
+ <li>MMLU: ?</li>
+ <li>T-QA: ?</li>
+ <li>Winogrande: ?</li>
+ <li>GSM8K: ?</li>
+ </ul>
+ <h2>Non-Abliterated Merged Evals (Has Not Been Finetuned):</h2>
+ <p>Aura-llama</p>
+ <ul>
+ <li>Avg: 63.13</li>
+ <li>ARC: 58.02</li>
+ <li>HellaSwag: 77.82</li>
+ <li>MMLU: 65.61</li>
+ <li>T-QA: 51.94</li>
+ <li>Winogrande: 73.40</li>
+ <li>GSM8K: 52.01</li>
+ </ul>
+ </div>
+ <div class="update-section">
+ <h2>🧩 Configuration</h2>
+ <pre><code>
  dtype: bfloat16
  merge_method: passthrough
  slices:

  - sources:
    - layer_range: [24, 32]
      model: failspy/Llama-3-8B-Instruct-abliterated
+ </code></pre>
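+ <p>For anyone reproducing a merge like this, the sketch below shows one way to run such a YAML through mergekit's command-line entry point and then load the result with transformers. It is a sketch under assumptions: "dus-merge.yml" and "./merged-model" are placeholder paths, and mergekit must be installed (pip install mergekit).</p>
+ <pre><code># Minimal sketch: run a mergekit config, then load the merged model.
+ import subprocess
+
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # mergekit's CLI entry point takes a config path and an output directory.
+ subprocess.run(["mergekit-yaml", "dus-merge.yml", "./merged-model"], check=True)
+
+ tokenizer = AutoTokenizer.from_pretrained("./merged-model")
+ model = AutoModelForCausalLM.from_pretrained("./merged-model", torch_dtype=torch.bfloat16)
+ </code></pre>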
+ </div>
+ </div>
+ </body>
+ </html>