cicdatopea committed · Commit 2dd5b9f · verified · 1 Parent(s): 168c41d

Update README.md

Files changed (1): README.md (+0 -1)
README.md CHANGED
@@ -13,7 +13,6 @@ This model is an int4 model with group_size 128 and symmetric quantization of [a
  ## How To Use
  ### INT4 Inference
  ```python
- from auto_round import AutoRoundConfig ## must import for auto-round format
  from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
  from PIL import Image
  import requests
 
13
  ## How To Use
14
  ### INT4 Inference
15
  ```python
 
16
  from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
17
  from PIL import Image
18
  import requests
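For context, the diff only shows the import lines of the README's INT4 inference example. The sketch below suggests how such an example typically continues for a Molmo-style custom_code checkpoint, following the upstream Molmo model card's remote-code API (processor.process / generate_from_batch). The repo id, image URL, and prompt are placeholders, not taken from this commit.

```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# Placeholder repo id: substitute the actual OPEA int4 Molmo checkpoint from this page.
model_id = "OPEA/<int4-molmo-checkpoint>"

# custom_code models require trust_remote_code=True to load the Molmo modeling/processing code.
processor = AutoProcessor.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Fetch an example image and build model inputs with the Molmo-style processor.
url = "https://example.com/image.jpg"  # placeholder URL
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# Generate and decode only the newly produced tokens.
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))
```

Note that this sketch omits the `from auto_round import AutoRoundConfig` line, consistent with the change this commit makes to the README.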