elysiantech committed
Commit f2b33fb
Parent(s): 064a1a7
Update README.md

README.md CHANGED
@@ -26,5 +26,5 @@ gemma-2b-gptq-4bit is a version of the [2B base model](https://huggingface.co/go
 Please refer to the [Original Gemma Model Card](https://ai.google.dev/gemma/docs) for details about the model preparation and training processes.
 
 ## Dependencies
 
-- [`auto-gptq](https://pypi.org/project/auto-gptq/0.7.1/) – [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ.git) was used to quantize the phi-3 model.
+- [`auto-gptq`](https://pypi.org/project/auto-gptq/0.7.1/) – [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ.git) was used to quantize the phi-3 model.
 - [`vllm==0.4.2`](https://pypi.org/project/vllm/0.4.2/) – [vLLM](https://github.com/vllm-project/vllm) was used to host models for benchmarking.