Yudhanjaya committed on
Commit ac82c05
Parent: 3e264c5

Update README.md

Files changed (1)
  1. README.md +2 -3
README.md CHANGED
@@ -41,7 +41,7 @@ Below are the results of Vicuna-style testing: 80 questions in various categorie
 | Writing | 8 | 19 | 19 |
 | Total | 125 | 271 | 312 |
 
-A csv of questions, answers and GPT's reviews are also included in this repo in the /TestResults/ folder, along with the base model for comparison.
+A csv of questions, answers and GPT's reviews are also included in the [Eluwa github repo](https://github.com/yudhanjaya/Eluwa) in the /TestResults/ folder, along with the base model for comparison.
 
 Because of its small size, Eluwa can be used as research into conversational models with older and slower hardware.
 ## Using Eluwa
@@ -53,8 +53,7 @@ To load Eluwa, download [OPT 2.7b from Huggingface](https://huggingface.co/faceb
 ## Training and notes
 
 Training Eluwa is a straightforward process. It is essentially Facebook's GPT-like OPT 2.7b model, loaded in 8-bit and trained using [Stanford's Alapaca dataset](https://github.com/tatsu-lab/stanford_alpaca).
-Use the [Colab notebook here](https://huggingface.co/BackyardLabs/Eluwa/blob/main/Train_eluwa.ipynb). I've written notes in there on what the functions do.
-
+The training code is available on the [Eluwa github repo](https://github.com/yudhanjaya/Eluwa).
 
 ## Why "Eluwa"?
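The recipe the README describes (OPT 2.7b loaded in 8-bit, fine-tuned on Alpaca-style instruction data) can be sketched roughly as follows. This is a minimal illustration, not the repo's actual notebook: the LoRA hyperparameters and target module names are assumptions, and the prompt template is the one published in the stanford_alpaca repository.

```python
# Rough sketch of 8-bit OPT 2.7b fine-tuning on Alpaca-style data.
# Assumptions: LoRA adapter settings and target_modules are illustrative,
# not taken from the Eluwa repo.

def alpaca_prompt(instruction: str, inp: str = "", response: str = "") -> str:
    """Format one Alpaca-style training example (template as published
    in the stanford_alpaca repo)."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n"
            f"### Response:\n{response}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{response}"
    )


def load_8bit_model():
    # Requires a CUDA GPU and the transformers, peft and bitsandbytes packages.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    tok = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
    model = AutoModelForCausalLM.from_pretrained(
        "facebook/opt-2.7b", load_in_8bit=True, device_map="auto"
    )
    # LoRA keeps the 8-bit base weights frozen and trains small adapter
    # matrices; the attention projection names below are an assumption.
    lora = LoraConfig(
        r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    return tok, get_peft_model(model, lora)


if __name__ == "__main__":
    print(alpaca_prompt("Name a small fish.", response="A sprat."))
```

The formatted prompts would then be tokenized and fed to a standard causal-LM training loop (e.g. `transformers.Trainer`), with only the adapter weights updated.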