migtissera committed
Commit: ff1aab1
1 Parent(s): 43a304f

Update README.md

Files changed (1)
1. README.md +10 -10
README.md CHANGED
@@ -8,7 +8,7 @@ library_name: transformers
 
 Change from 1.2 -> 1.2b: More data, 14 days of training for 1 epoch.
 
-# Synthia-70B
+# Synthia-70B-v1.2b
 SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.
 
 <br>
@@ -27,18 +27,18 @@ This model is bound by the license & usage restrictions of the original Llama-2
 
 ## Evaluation
 
-We evaluated Synthia-70B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
+We evaluated Synthia-70B-v1.2b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
 
 Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 
 ||||
 |:------:|:--------:|:-------:|
 |**Task**|**Metric**|**Value**|
-|*arc_challenge*|acc_norm|TBC|
-|*hellaswag*|acc_norm|TBC|
-|*mmlu*|acc_norm|TBC|
-|*truthfulqa_mc*|mc2|TBC|
-|**Total Average**|-|**TBC**||
+|*arc_challenge*|acc_norm|68.77|
+|*hellaswag*|acc_norm|87.57|
+|*mmlu*|acc_norm|68.81|
+|*truthfulqa_mc*|mc2|57.69|
+|**Total Average**|-|**70.71**||
 
 <br>
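For reference, the **Total Average** filled in above is the plain arithmetic mean of the four leaderboard metrics; a quick check in Python (assuming simple averaging, which matches the reported 70.71):

```python
# The "Total Average" is the mean of the four Open LLM Leaderboard scores.
scores = {
    "arc_challenge (acc_norm)": 68.77,
    "hellaswag (acc_norm)": 87.57,
    "mmlu (acc_norm)": 68.81,
    "truthfulqa_mc (mc2)": 57.69,
}
print(f"Total Average: {sum(scores.values()) / len(scores):.2f}")  # -> 70.71
```

The individual scores come from EleutherAI's lm-evaluation-harness; the exact CLI flags differ between harness versions, so they are not reproduced here.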
 
@@ -103,7 +103,7 @@ def generate_text(instruction):
     return f"{answer}"
 
 
-conversation = f"SYSTEM: As a an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."
+conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
 
 
 while True:
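The `conversation` line changed above is the system prompt of the README's interactive chat script, of which only a fragment appears in this hunk. Below is a minimal self-contained sketch of that kind of loop, assuming the SYSTEM/USER/ASSISTANT prompt format and the `migtissera/Synthia-70B-v1.2b` repo id (both assumptions; the full script in the README may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B-v1.2b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; shards the 70B weights across devices
)

def generate_text(instruction):
    # Tokenize the running conversation and sample a continuation.
    input_ids = tokenizer(instruction, return_tensors="pt").input_ids.to(model.device)
    output = model.generate(
        input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    # Decode only the newly generated tokens, not the prompt.
    answer = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return f"{answer}"

# System prompt from the updated README (Tree of Thoughts / Chain of Thought).
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(prompt)
    print(answer)
    # Append the exchange so later turns keep the full history.
    conversation = f"{prompt}{answer}"
```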
@@ -138,9 +138,9 @@ Exercise caution and cross-check information when necessary. This is an uncensor
 Please kindly cite using the following BibTeX:
 
 ```
-@misc{Synthia-13B,
+@misc{Synthia-70B-v1.2b,
 author = {Migel Tissera},
-title = {Synthia-13B: Synthetic Intelligent Agent},
+title = {Synthia-70B-v1.2b: Synthetic Intelligent Agent},
 year = {2023},
 publisher = {GitHub, HuggingFace},
 journal = {GitHub repository, HuggingFace repository},
 