Deep Bidirectional Language-Knowledge Graph Pretraining • Paper • arXiv:2210.09338 • Published Oct 17, 2022
Switch-Transformers release • Collection • This release includes various MoE (Mixture of Experts) models based on the T5 architecture. The base models use 8 to 256 experts. A loading sketch follows after this list. • 9 items
Mixtral HQQ Quantized Models • Collection • 4-bit and 2-bit Mixtral models quantized using https://github.com/mobiusml/hqq. A quantization sketch follows after this list. • 9 items
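For the Switch-Transformers collection, here is a minimal sketch of loading one of the MoE checkpoints with the transformers library. It assumes the smallest variant, `google/switch-base-8` (8 experts); the collection's larger checkpoints (up to 256 experts) would be loaded the same way. This is an illustrative usage sketch, not code from the release itself.

```python
# Minimal sketch: load a Switch-Transformers MoE checkpoint (assumed id:
# "google/switch-base-8") and run a T5-style span-infilling query.
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

model_id = "google/switch-base-8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = SwitchTransformersForConditionalGeneration.from_pretrained(model_id)

# Switch-Transformers keeps the T5 span-corruption interface, so the
# <extra_id_0> sentinel asks the model to fill in the masked span.
inputs = tokenizer(
    "The <extra_id_0> walks in the park.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```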
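For the Mixtral HQQ collection, the sketch below shows roughly how HQQ quantization is applied through the transformers integration (`HqqConfig`). The model id, bit width, and group size are assumptions for illustration; the collection's pre-quantized checkpoints can instead be loaded directly from the Hub.

```python
# Minimal sketch, assuming the hqq package is installed and using the
# transformers HqqConfig integration to quantize a Mixtral base model at
# load time. nbits=4 mirrors the 4-bit variants; the 2-bit variants would
# use nbits=2. group_size=64 is an assumed, not prescribed, setting.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # base model being quantized
quant_config = HqqConfig(nbits=4, group_size=64)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    quantization_config=quant_config,  # weights are HQQ-quantized on load
)
```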