Yudhanjaya committed
Commit
3edda12
1 Parent(s): 9016b1f

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -52,9 +52,9 @@ To load Eluwa, download [OPT 2.7b from Huggingface](https://huggingface.co/faceb
 
 ## Training and notes
 
-Training Eluwa is a straightforward process. It is essentially Facebook's GPT-like OPT 2.7b model, loaded in 8-bit and trained using [Stanford's Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca). Use the [Colab notebook here](https://colab.research.google.com/drive/1rkLx0oI8pbix0EznjYeaLDqPoMHdw0x8?usp=sharing). I've written notes in there on what the functions do.
-
-When loaded this way, OPT 2.7b gives us 5242880 trainable params out of a total 2656839680 (trainable%: 0.19733520390662038).
+Training Eluwa is a straightforward process. It is essentially Facebook's GPT-like OPT 2.7b model, loaded in 8-bit and trained using [Stanford's Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca).
+Use the [Colab notebook here](https://huggingface.co/BackyardLabs/Eluwa/blob/main/Train_eluwa.ipynb). I've written notes in there on what the functions do.
+
 
 ## Why "Eluwa"?
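
For context on the training step described above, here is a minimal sketch of loading OPT 2.7b in 8-bit and attaching a LoRA adapter with the Hugging Face `transformers` and `peft` libraries. This is an assumed setup, not the notebook's actual code: the LoRA hyperparameters shown (`r=16` on `q_proj`/`v_proj`) are illustrative values chosen to roughly reproduce the 5,242,880-of-2,656,839,680 trainable-parameter count quoted in the README; the real configuration lives in the linked Train_eluwa.ipynb and may differ.

```python
# Minimal sketch (assumed setup; see the linked Train_eluwa.ipynb for the real configuration).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model in 8-bit (requires bitsandbytes and accelerate).
model = AutoModelForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA config: r=16 on the attention q/v projections yields
# roughly the 5,242,880 trainable params (of ~2.66B total) quoted in the README.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# e.g. trainable params: 5242880 || all params: 2656839680 || trainable%: 0.197...
```

The resulting `model` can then be fine-tuned on the Alpaca instruction data with a standard `transformers` `Trainer` loop, which is what the linked notebook walks through.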