LLMs quantized with GPTQ
Irina Proskurina (iproskurina)
AI & ML interests: LLMs: quantization, pre-training
Recent Activity
- New activity 9 days ago on TheBloke/Mistral-7B-Instruct-v0.2-GPTQ: weights not used when initializing MistralForCausalLM
- Updated a model 14 days ago: iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3
- Updated a model 14 days ago: iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2
Collections: 4
Models: 43
- iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3 (Text Generation)
- iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2 (Text Generation)
- iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g1 (Text Generation)
- iproskurina/opt-125m-gptq2 (Text Generation)
- iproskurina/distilbert-base-alternate-layers
- iproskurina/en_grammar_checker
- iproskurina/Mistral-7B-v0.3-gptq-3bit (Text Generation)
- iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 (Text Generation)
- iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 (Text Generation)
- iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 (Text Generation)
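The repository names above appear to encode each checkpoint's GPTQ settings: a bit-width (e.g. `4bit`, `8bit`) and, where present, a `g<N>` suffix for the quantization group size. As a minimal sketch, assuming that naming convention holds, a small hypothetical helper (not part of any Hugging Face API) can recover the settings from a repo name:

```python
import re

def parse_gptq_repo_name(name: str) -> dict:
    """Extract GPTQ bit-width and group size from a repo name.

    Hypothetical helper based on the observed naming pattern, e.g.
    "Mistral-7B-v0.3-GPTQ-4bit-g128" -> 4-bit weights, group size 128.
    Fields are None when the name does not encode them.
    """
    bits = re.search(r"(\d+)-?bit", name)       # e.g. "4bit" or "3bit"
    group = re.search(r"-g(\d+)$", name)        # trailing "-g128", "-g64", ...
    return {
        "bits": int(bits.group(1)) if bits else None,
        "group_size": int(group.group(1)) if group else None,
    }

print(parse_gptq_repo_name("iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128"))
```

Anchoring the group-size pattern at the end of the string keeps the "7B" in the base-model name from being misread as a quantization parameter.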