Why does converting the Qwen/Qwen2.5-Omni-7B model using mlx-community/mlx-my-repo result in an error?
#47 opened about 5 hours ago by CHSFM
Space for converting models with vlm?
#45 opened 16 days ago by alexgusevski
Add support for converting GGUF models to MLX
#43 opened 21 days ago by Fmuaddib
Error: rope_scaling 'type' currently only supports 'linear'
#42 opened 21 days ago by Fmuaddib