onekq committed on
Commit
7a72b0a
1 Parent(s): 04a8e39

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -10,7 +10,7 @@ Bitsandbytes quantization of https://huggingface.co/bigcode/starcoder2-3b.
 See https://huggingface.co/blog/4bit-transformers-bitsandbytes for instructions.
 
 ```python
-from transformers import AutoModelForCausalLM
+from transformers import AutoModelForCausalLM, AutoTokenizer
 from transformers import BitsAndBytesConfig
 import torch
 
@@ -21,5 +21,8 @@ nf4_config = BitsAndBytesConfig(
 bnb_4bit_compute_dtype=torch.bfloat16
 )
 model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-3b", quantization_config=nf4_config)
+tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")
+
 model.push_to_hub("onekq-ai/starcoder2-3b-bnb-4bit")
+tokenizer.push_to_hub("onekq-ai/starcoder2-3b-bnb-4bit")
 ```
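Not part of the commit: a minimal sketch of loading the quantized checkpoint back from the Hub repo targeted by `push_to_hub` above, assuming the `onekq-ai/starcoder2-3b-bnb-4bit` repo is accessible and the `bitsandbytes` and `accelerate` packages are installed.

```python
# Minimal usage sketch (assumption, not from the commit): load the 4-bit
# checkpoint pushed above and run a short generation with it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("onekq-ai/starcoder2-3b-bnb-4bit")
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/starcoder2-3b-bnb-4bit")

# Hypothetical prompt, for illustration only.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```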