4-bit HQQ quantized version of Meta-Llama-3.1-405B (base version). Quantization parameters:

`nbits=2, group_size=128, quant_zero=True, quant_scale=True, axis=0`
The weights have been split into shards with `split`. To recombine them:

```shell
cat qmodel_shard* > qmodel.pt
```
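On systems without `cat` (e.g. plain Windows), the same recombination can be done in Python. This is a minimal sketch, assuming the shard files follow the `qmodel_shard*` naming above and that `split`'s default suffixes sort lexicographically (which they do):

```python
import glob
from pathlib import Path

def recombine(pattern: str = "qmodel_shard*", out: str = "qmodel.pt") -> list:
    """Concatenate shard files (in sorted order) into a single output file."""
    shards = sorted(glob.glob(pattern))  # split's suffixes (aa, ab, ...) sort correctly
    with open(out, "wb") as dst:
        for shard in shards:
            dst.write(Path(shard).read_bytes())
    return shards
```

This is byte-for-byte equivalent to the `cat` command above, since `cat` simply concatenates the shards in shell-glob (sorted) order.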