xiaodongguaAIGC committed
Update README.md
bf16 model, half memory size
README.md CHANGED
````diff
@@ -16,6 +16,9 @@ tags:
 - generation
 - xiaodongguaAIGC
 pipeline_tag: text-generation
+language:
+- en
+- zh
 ---
 
 # llama-3-debug
@@ -23,7 +26,7 @@ pipeline_tag: text-generation
 
 This model is used for debugging; its parameters are random.
 
-It's small only '~
+It's small, only ~32MB in memory, which makes it efficient to download and debug.
 
 The `llama-3-debug` model config is modified as follows
 
@@ -40,7 +43,8 @@ If you want to load it, use this code
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
 model_name = 'xiaodongguaAIGC/llama-3-debug'
-model = AutoModelForCausalLM.from_pretrained(model_name)
+model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 print(model)
 print(tokenizer)
````