---
tags:
- vision
- image-text-to-text
- mlx
---
# mlx-community/llava-v1.6-34b-8bit
This model was converted to MLX format from [`llava-hf/llava-v1.6-34b-hf`](https://huggingface.co/llava-hf/llava-v1.6-34b-hf) using mlx-vlm version **0.0.9**.
Refer to the [original model card](https://huggingface.co/llava-hf/llava-v1.6-34b-hf) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/llava-v1.6-34b-8bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path/to/image>
```
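The model can also be loaded from Python. The sketch below assumes the `load`/`generate` helpers exported by `mlx-vlm`; argument names and order have changed between releases, so check the mlx-vlm documentation for your installed version. The image path is a placeholder.

```python
# Minimal sketch of the mlx-vlm Python API on Apple silicon.
# Assumptions: `load` returns (model, processor) and `generate` accepts
# a prompt and an image path; exact signatures vary by mlx-vlm version.
from mlx_vlm import load, generate

model, processor = load("mlx-community/llava-v1.6-34b-8bit")

output = generate(
    model,
    processor,
    "Describe this image.",       # prompt
    image="path/to/image.png",    # placeholder path
    max_tokens=100,
    temp=0.0,
)
print(output)
```

Note that the 8-bit 34B weights still require a machine with enough unified memory to hold the quantized model (roughly 35 GB).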