crumb committed
Commit 26c506a · 1 Parent(s): 8de4523

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -30,6 +30,9 @@ I also come up with a new pretraining method inspired by UL2, the only differenc
 | GerbilLab/Gerbil-A-15m | 15m | A-Class | 20 | 280M | 131k | 4.9999 |
 | GerbilLab/Gerbil-A-32m | 32m | A-Class | 20 | 640M | 262K | 4.0487 |
 | --- | --- | --- | --- | --- | --- | --- |
-| GerbilLab/Gerbil-Blender-A-15m | 15m | A-Class | 20 | 280M | 131k | coming soon |
+| GerbilLab/GerbilBlender-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | coming soon |
+| GerbilLab/GerbilBlender-A-6.7m | 6.7m | A-Class | 20 | 134M | 131k | coming soon |
+| GerbilLab/GerbilBlender-A-15m | 15m | A-Class | 20 | 280M | 131k | coming soon |
+| GerbilLab/GerbilBlender-A-32m | 32m | A-Class | 20 | 640M | 262K | coming soon |

 The only application where I can imagine these being useful in the slightest is warm-starting very small encoder-decoder models or fitting a new scaling law that takes into account smaller models. Every model was trained on a singular GPU, either a RTX2060, RTX3060, or a T4.
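As context for the table above, here is a minimal sketch of loading one of the listed checkpoints with Hugging Face transformers. It assumes the GerbilLab repos ship standard transformers-compatible configs and tokenizers and that the models load as causal LMs; neither assumption is confirmed by this commit.

```python
# Minimal sketch: pull one of the Gerbil checkpoints from the table above.
# Assumption: the repo provides a standard transformers config/tokenizer and
# the model is a causal LM (swap in the appropriate Auto class otherwise).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GerbilLab/Gerbil-A-15m"  # any repo id from the table

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Quick sanity check: generate a few tokens from a short prompt.
inputs = tokenizer("The gerbil is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```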