Michael Goin (mgoin)
AI & ML interests: LLM inference optimization, compression, quantization, pruning, distillation
Recent Activity
published a model 1 day ago: RedHatAI/Phi-3-mini-128k-instruct-quantized.w8a16
published a model 1 day ago: RedHatAI/Phi-3-medium-128k-instruct-quantized.w8a16
published a model 1 day ago: RedHatAI/Phi-3-medium-128k-instruct-quantized.w8a8
mgoin's activity
Address discrepancies in the languages supported by the Mistral Small 3.1 2503
1 · #54 opened 5 days ago by fpaupier

Please update the chat template
1 · #1 opened 5 days ago by stelterlab

FP8 Dynamic/W8A16 Quants Please
3 · #44 opened 14 days ago by rjmehta
Problem hosting the model using vLLM
4 · #45 opened 14 days ago by ShaoServient
Remove image_processor_type
#1 opened about 1 month ago by pooya-davoodi-parasail
Remove image_processor_type
1 · #1 opened about 1 month ago by pooya-davoodi-parasail
Remove image_processor_type
#2 opened about 1 month ago by pooya-davoodi-parasail
Use Qwen2VLImageProcessor for image_processor_type
5 · #2 opened about 2 months ago by pooya-davoodi-parasail
Use Qwen2VLImageProcessor for image_processor_type
#3 opened about 2 months ago by pooya-davoodi-parasail
When I use vLLM v0.7.2 to deploy R1 AWQ, I get empty content
13 · #10 opened about 2 months ago by bupalinyu
MLA is not supported with moe_wna16 quantization. Disabling MLA.
5 · #7 opened about 2 months ago by AMOSE
compressed-tensors MLA support requires fp8 activations and weights in group 'group_0',
2 · #1 opened 2 months ago by samos123
How to load this model?
2 · #1 opened 9 months ago by Frz614
Model does not run with vLLM
2 · #3 opened 4 months ago by aswad546
Nice model, any info on scripts used to quantize?
1 · #1 opened 4 months ago by RonanMcGovern

Add config_format and load_format to vLLM args
#5 opened 5 months ago by mgoin

Update config.json to use null for sliding_window
#4 opened 5 months ago by mgoin

Adding `safetensors` variant of this model
#1 opened 5 months ago by SFconvertbot
