Yudhanjaya committed
Commit: adb1212
Parent(s): 9764ab9
Update README.md

README.md CHANGED
@@ -30,7 +30,7 @@ Below are the results of Vicuna-style testing: 80 questions in various categorie
 | Total | 320 | 408 | 439 |
 
 
-A csv of questions, answers and GPT's reviews are also included in this repo in the /TestResults/ folder, along with the base
+A CSV of questions, answers, and GPT's reviews is also included in the /TestResults/ folder of the [Eluwa github repo](https://github.com/yudhanjaya/Eluwa), along with results from the base models for comparison.
 
 ## Using Eluwa
 
@@ -41,8 +41,8 @@ To load Eluwa, download [OPT 6.7b from Huggingface](https://huggingface.co/faceb
 ## Training and notes
 
 Training Eluwa is a straightforward process. It is essentially Facebook's GPT-like OPT 6.7b model, loaded in 8-bit and trained using [Stanford's Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca).
-
-
+
+The training code is available on the [Eluwa github repo](https://github.com/yudhanjaya/Eluwa) and will run as-is in Google Colab.
 
 ## Why "Eluwa"?
 
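For readers following the "Using Eluwa" section touched by this diff, here is a minimal loading sketch. It assumes Eluwa is distributed as a PEFT/LoRA adapter layered on `facebook/opt-6.7b`, consistent with the 8-bit training described in the README; the adapter path and the Alpaca-style prompt are illustrative assumptions, not details confirmed by this commit.

```python
# Minimal sketch: load OPT 6.7b in 8-bit and attach an Eluwa-style LoRA adapter.
# Assumptions (not confirmed by this commit): Eluwa ships as a PEFT adapter,
# and ELUWA_ADAPTER_PATH points at the downloaded adapter weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "facebook/opt-6.7b"
ELUWA_ADAPTER_PATH = "./eluwa-adapter"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,   # same 8-bit loading the README describes
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ELUWA_ADAPTER_PATH)

# Alpaca-style prompt (an assumption, since Eluwa was trained on the Alpaca dataset).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The 8-bit load requires `bitsandbytes` and `accelerate` to be installed alongside `transformers` and `peft`.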
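The training paragraph in the diff (OPT 6.7b loaded in 8-bit and fine-tuned on Stanford's Alpaca dataset) corresponds to a standard 8-bit LoRA recipe. The sketch below is a hedged approximation of that recipe, not the repository's actual training script; the LoRA hyperparameters, the `tatsu-lab/alpaca` dataset ID, and the prompt formatting are assumptions.

```python
# Hedged sketch of an 8-bit LoRA fine-tune of OPT 6.7b on the Alpaca dataset.
# Hyperparameters and formatting are illustrative, not taken from the Eluwa repo.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "facebook/opt-6.7b"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, load_in_8bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))

# Alpaca instruction data, flattened into plain-text prompts (assumed format).
data = load_dataset("tatsu-lab/alpaca", split="train")

def to_features(example):
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}")
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        output_dir="eluwa-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=20,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("eluwa-lora")  # saves only the adapter weights
```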