xiaodongguaAIGC committed
Commit b663718 · verified · 1 parent: 6d445a9

Update README.md

bf16 model, half memory size
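The halving follows directly from bytes per parameter: fp32 stores 4 bytes per weight, bf16 stores 2. A minimal sketch of the arithmetic, assuming a parameter count of roughly 16M (an assumption chosen to match the ~64 MB / ~32 MB figures in the diff below):

```python
# Why bf16 halves the memory size: bytes per parameter drop from 4 to 2.
n_params = 16_000_000  # assumption: approximate parameter count of this debug model

fp32_mb = n_params * 4 / 1024**2  # ~61 MB, matching the old '~64MB' figure
bf16_mb = n_params * 2 / 1024**2  # ~31 MB, matching the new '~32MB' figure
print(f"fp32: {fp32_mb:.0f} MB, bf16: {bf16_mb:.0f} MB")
```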

Files changed (1):
1. README.md (+5, -2)
README.md CHANGED
@@ -16,6 +16,9 @@ tags:
 - generation
 - xiaodongguaAIGC
 pipeline_tag: text-generation
+language:
+- en
+- zh
 ---
 
 # llama-3-debug
@@ -23,7 +26,7 @@ pipeline_tag: text-generation
 
 This model is for debugging; its parameters are random.
 
-It's small, only ~64 MB in memory, which makes it fast to download and debug with.
+It's small, only ~32 MB in memory, which makes it fast to download and debug with.
 
 The `llama-3-debug` model config is modified as follows:
 
@@ -40,7 +43,8 @@ If you want to load it, use this code:
 ```python
+import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 model_name = 'xiaodongguaAIGC/llama-3-debug'
-model = AutoModelForCausalLM.from_pretrained(model_name)
+model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 print(model)
 print(tokenizer)
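After this change, the halved footprint can be checked directly. A minimal sketch using transformers' `get_memory_footprint()`, which reports the total bytes held by the model's parameters and buffers:

```python
import torch
from transformers import AutoModelForCausalLM

model_name = 'xiaodongguaAIGC/llama-3-debug'

# Load in bf16, as the updated README does.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# get_memory_footprint() returns the size in bytes of parameters and buffers;
# in bf16 this should come out to roughly half the fp32 figure.
print(f"{model.get_memory_footprint() / 1024**2:.0f} MB")
```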