LLMs quantized with GPTQ
Irina Proskurina (iproskurina)
AI & ML interests: LLM quantization and pre-training
Recent activity (2 days ago): updated iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128, iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128, and iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64.
Collections: 4
Models: 55

iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 • Text Generation • 16
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 • Text Generation • 24
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 • Text Generation • 11
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64 • Text Generation • 11
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128 • Text Generation • 12
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128 • Text Generation • 12
iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128 • Text Generation • 16
iproskurina/opt-125m-GPTQ-4bit-g128 • Text Generation • 43
iproskurina/opt-13b-GPTQ-4bit-g128 • Text Generation • 14
iproskurina/opt-6.7b-GPTQ-4bit-g128 • Text Generation • 29
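The repository names above encode the GPTQ quantization settings: the suffix gives the weight bit-width (3, 4, or 8 bit) and the group size (64 or 128). A minimal sketch that parses this suffix, assuming every repo id follows the `<owner>/<base>-GPTQ-<bits>bit-g<group>` pattern seen in the list (the helper name is mine, not part of any library):

```python
import re

def parse_gptq_repo_id(repo_id: str) -> dict:
    """Split a repo id like 'iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128'
    into its owner, base model, bit-width, and group size.

    The '-GPTQ-<bits>bit-g<group>' suffix is the naming pattern used by
    the repositories listed above; it is an assumption, not an official
    Hugging Face convention.
    """
    owner, name = repo_id.split("/", 1)
    m = re.fullmatch(r"(?P<base>.+)-GPTQ-(?P<bits>\d+)bit-g(?P<gs>\d+)", name)
    if m is None:
        raise ValueError(f"not a recognized GPTQ repo name: {repo_id!r}")
    return {
        "owner": owner,
        "base_model": m["base"],
        "bits": int(m["bits"]),
        "group_size": int(m["gs"]),
    }

info = parse_gptq_repo_id("iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128")
print(info)
# {'owner': 'iproskurina', 'base_model': 'Mistral-7B-v0.3', 'bits': 8, 'group_size': 128}
```

Lower bit-widths and smaller group sizes shrink the checkpoint further at some cost in accuracy, which is why the list carries several 3/4/8-bit and g64/g128 variants of the same base model.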