ashish-merlyn committed
Commit: fa2f61b
Parent(s): 08ed83c
Update README.md
README.md
CHANGED
@@ -30,7 +30,7 @@ Apache-2.0
 
 ## Usage
 
-At full precision the model needs > 48G GPU memory. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use `model.half()`)
+At full precision the model needs > 48G GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use `model.half()` before moving to device)
 
 Loading model and tokenizer:
 
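The `model.half()` advice in the changed line above comes down to bytes per parameter: full precision (fp32) stores 4 bytes per weight, fp16 stores 2, so halving precision roughly halves weight memory. A minimal sketch of that arithmetic, assuming a hypothetical ~12B-parameter model (the diff does not state the parameter count):

```python
def weight_memory_gb(num_params: int, bytes_per_param: int) -> float:
    """GPU memory needed just for the model weights, in GiB."""
    return num_params * bytes_per_param / 1024**3

# Hypothetical parameter count chosen only to illustrate the ">48G" claim;
# activations and KV cache add further overhead on top of the weights.
params = 12_000_000_000
fp32_gb = weight_memory_gb(params, 4)  # full precision: 4 bytes/param
fp16_gb = weight_memory_gb(params, 2)  # after model.half(): 2 bytes/param
print(f"fp32 weights: {fp32_gb:.1f} GiB, fp16 weights: {fp16_gb:.1f} GiB")
```

This is why an A100-80GB fits the model comfortably at full precision, while a 40 GB card would need `model.half()` (called before `.to(device)`, so the fp32 copy never lands on the GPU) or a multi-GPU setup.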