migtissera committed Update README.md
Commit ff1aab1 · Parent: 43a304f

README.md CHANGED
````diff
@@ -8,7 +8,7 @@ library_name: transformers
 
 Change from 1.2 -> 1.2b: More data, 14 days of training for 1 epoch.
 
-# Synthia-70B
+# Synthia-70B-v1.2b
 SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.
 
 <br>
@@ -27,18 +27,18 @@ This model is bound by the license & usage restrictions of the original Llama-2
 
 ## Evaluation
 
-We evaluated Synthia-70B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
+We evaluated Synthia-70B-v1.2b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
 
 Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 
 ||||
 |:------:|:--------:|:-------:|
 |**Task**|**Metric**|**Value**|
-|*arc_challenge*|acc_norm|
-|*hellaswag*|acc_norm|
-|*mmlu*|acc_norm|
-|*truthfulqa_mc*|mc2|
-|**Total Average**|-|**
+|*arc_challenge*|acc_norm|68.77|
+|*hellaswag*|acc_norm|87.57|
+|*mmlu*|acc_norm|68.81|
+|*truthfulqa_mc*|mc2|57.69|
+|**Total Average**|-|**70.71**||
 
 <br>
 
@@ -103,7 +103,7 @@ def generate_text(instruction):
     return f"{answer}"
 
 
-conversation = f"SYSTEM:
+conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
 
 
 while True:
@@ -138,9 +138,9 @@ Exercise caution and cross-check information when necessary. This is an uncensor
 Please kindly cite using the following BibTeX:
 
 ```
-@misc{Synthia-
+@misc{Synthia-70B-v1.2b,
 author = {Migel Tissera},
-title = {Synthia-
+title = {Synthia-70B-v1.2b: Synthetic Intelligent Agent},
 year = {2023},
 publisher = {GitHub, HuggingFace},
 journal = {GitHub repository, HuggingFace repository},
````
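The **Total Average** of 70.71 added in this commit is consistent with the unweighted mean of the four metric values in the updated table. A minimal standalone check (this snippet is illustrative and not part of the model card; it assumes the leaderboard average is the plain mean, which the table's value matches):

```python
# Metric values added to the README table in this commit.
scores = {
    "arc_challenge": 68.77,  # acc_norm
    "hellaswag": 87.57,      # acc_norm
    "mmlu": 68.81,           # acc_norm
    "truthfulqa_mc": 57.69,  # mc2
}

# "Total Average" taken as the unweighted mean of the four task scores.
total_average = round(sum(scores.values()) / len(scores), 2)
print(total_average)  # 70.71
```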