OPEA /

Tags: Safetensors · mllama · 4-bit precision · intel/auto-round
cicdatopea committed on
Commit e63d7ab
1 parent: cf3e11b

Update README.md

Files changed (1):
  1. README.md +4 -1
README.md CHANGED

@@ -1,6 +1,9 @@
 ---
 datasets:
 - NeelNanda/pile-10k
+license: llama3.2
+base_model:
+- meta-llama/Llama-3.2-90B-Vision-Instruct
 ---
 
 ## Model Details
@@ -135,4 +138,4 @@ The license on this model does not constitute legal advice. We are not responsib
 
 @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }
 
-[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
+[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
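For reference, after this commit the README's YAML front matter (reconstructed here from the hunks above; the surrounding model-card body is unchanged) begins:

```yaml
---
datasets:
- NeelNanda/pile-10k
license: llama3.2          # added: Llama 3.2 community license identifier
base_model:                # added: upstream model this quantized card derives from
- meta-llama/Llama-3.2-90B-Vision-Instruct
---
```

The `license` and `base_model` keys are standard Hugging Face model-card metadata; adding them lets the Hub display the license badge and link this quantized repository back to its base model.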