joaoalvarenga committed · Commit 8b76f74 · 1 Parent(s): 261fbda

Update README.md

README.md CHANGED

---

### Quantized bigscience/bloom with 8-bit weights

Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom), a ~176 billion parameter language model, that you can run and fine-tune with less memory.
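
To get a feel for why 8-bit weights matter: 176 billion parameters stored in fp16 take roughly 176e9 × 2 bytes ≈ 352 GB, while int8 storage takes roughly 176 GB; the ~353 GB and ~180 GB figures quoted below presumably include additional overhead kept in higher precision. The snippet below is a minimal, generic row-wise absmax quantization sketch in PyTorch for illustration only; it is not necessarily the exact scheme used in this checkpoint, which follows Hivemind's approach.

```python
import torch

def quantize_rowwise_absmax(weight: torch.Tensor):
    """Quantize a 2-D float weight matrix to int8 with one scale per row.

    Generic absmax scheme for illustration only; the actual bloom-8bit
    checkpoint may use a different (e.g. block-wise / dynamic) scheme.
    """
    scale = weight.abs().max(dim=1, keepdim=True).values / 127.0
    scale = scale.clamp(min=1e-8)                    # avoid division by zero
    codes = torch.round(weight / scale).to(torch.int8)
    return codes, scale

def dequantize_rowwise(codes: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float weight matrix from the int8 codes."""
    return codes.to(scale.dtype) * scale

w = torch.randn(1024, 1024)                          # stand-in for one real weight matrix
codes, scale = quantize_rowwise_absmax(w)
w_hat = dequantize_rowwise(codes, scale)
print("max abs error:", (w - w_hat).abs().max().item())
print("fp32 bytes:", w.numel() * 4, "-> int8 bytes:", codes.numel() + scale.numel() * 4)
```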

Here, we also apply [LoRA (Low Rank Adapters)](https://arxiv.org/abs/2106.09685) to reduce model size. The original version takes \~353GB of memory; this version takes **\~180GB**.
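
LoRA keeps each original weight matrix frozen and learns only a low-rank update ΔW = B·A, so the number of trainable parameters stays tiny compared to the base model. The following is a minimal, generic LoRA linear layer in PyTorch, shown over a plain fp32 `nn.Linear` for simplicity; the real model applies the same idea on top of its frozen 8-bit layers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update.

    Only `lora_a` and `lora_b` are trained; the wrapped base layer stays
    frozen, just as the 8-bit weights stay frozen in this model.
    """
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # freeze the original weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
print(f"trainable: {trainable:,} vs frozen: {frozen:,}")  # 65,536 vs ~16.8M
```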

Our main goal is to produce a model compressed enough to be deployed on a traditional Kubernetes cluster.

### How to fine-tune

In this [notebook](https://nbviewer.org/urls/huggingface.co/joaoalvarenga/bloom-8bit/raw/main/fine-tuning-example.ipynb) you can find an adaptation of [Hivemind's GPT-J 8-bit fine-tuning notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) for fine-tuning Bloom 8-bit on a 3x NVIDIA A100 instance.
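
If you just want the overall shape of that notebook, the sketch below shows the general pattern: freeze the base model, attach LoRA adapters to selected linear layers, and train only the adapter parameters. It uses the small `bigscience/bloom-560m` checkpoint as a stand-in and assumes BLOOM's `query_key_value` projection name; the notebook itself contains the actual code for the 8-bit checkpoint, including the custom quantized layers and multi-GPU placement.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (see sketch above)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base, self.scaling = base, alpha / rank
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Small stand-in checkpoint; the notebook applies the same pattern to the
# 8-bit BLOOM checkpoint spread across 3x A100.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

for p in model.parameters():                          # freeze every original weight
    p.requires_grad = False

# Attach LoRA adapters to the attention projections (module name assumed).
targets = [
    (parent, name, child)
    for parent in model.modules()
    for name, child in parent.named_children()
    if isinstance(child, nn.Linear) and "query_key_value" in name
]
for parent, name, child in targets:
    setattr(parent, name, LoRALinear(child))

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# One illustrative causal-LM training step on a toy batch.
batch = tokenizer(["Hello, my name is BLOOM."], return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
print("loss:", out.loss.item(), "| trainable params:", sum(p.numel() for p in trainable))
```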