---
language:
- en
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
tags:
- text-generation-inference
- gemma
- gptq
- google
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kFznlPlWYOrcgd7Q1NI2tYMLH_vTRuys?usp=sharing) 

# elysiantech/gemma-2b-gptq-4bit

gemma-2b-gptq-4bit is a 4-bit quantized version of the [Gemma 2B base model](https://huggingface.co/google/gemma-2b), produced with the GPTQ post-training quantization method of [Frantar et al. (2022)](https://arxiv.org/abs/2210.17323).

Please refer to the [original Gemma model card](https://ai.google.dev/gemma/docs) for details about model preparation and training.
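
## Usage

The quantized checkpoint can be loaded directly through `transformers`. Below is a minimal sketch, assuming the GPTQ quantization config is embedded in the repository (as AutoGPTQ writes it by default) and that `auto-gptq` and `optimum` are installed alongside `transformers`; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elysiantech/gemma-2b-gptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the checkpoint tells transformers
# to dispatch to the auto-gptq 4-bit kernels; device_map="auto" places
# the weights on the available GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```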

## Dependencies
- [`auto-gptq`](https://pypi.org/project/auto-gptq/0.7.1/) – [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ.git) was used to quantize the Gemma 2B model.
- [`vllm==0.4.2`](https://pypi.org/project/vllm/0.4.2/) – [vLLM](https://github.com/vllm-project/vllm) was used to host the model for benchmarking; see the inference sketch after this list.
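
For serving, here is a minimal offline-inference sketch with the pinned `vllm==0.4.2`; the explicit `quantization="gptq"` argument and the sampling settings are illustrative assumptions, since vLLM can usually infer the quantization method from the checkpoint config.

```python
from vllm import LLM, SamplingParams

# Load the 4-bit checkpoint with vLLM's GPTQ kernels.
llm = LLM(model="elysiantech/gemma-2b-gptq-4bit", quantization="gptq")
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Batch generation over one or more prompts.
outputs = llm.generate(["Write a haiku about quantization."], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```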