---
datasets:
- openbmb/UltraFeedback
language:
- en
license: gemma
pipeline_tag: text-generation
tags:
- mlx
---

# cogbuji/MrGrammaticaOntology-gemma-2-9B-It-SPPO-Iter3-SCT-DRIFT-core-0.6.5

The model [cogbuji/MrGrammaticaOntology-gemma-2-9B-It-SPPO-Iter3-SCT-DRIFT-core-0.6.5](https://huggingface.co/cogbuji/MrGrammaticaOntology-gemma-2-9B-It-SPPO-Iter3-SCT-DRIFT-core-0.6.5) was converted to MLX format from [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) using mlx-lm version **0.16.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("cogbuji/MrGrammaticaOntology-gemma-2-9B-It-SPPO-Iter3-SCT-DRIFT-core-0.6.5")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
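
Since this is an instruction-tuned (chat) model, prompts generally work better when wrapped in the tokenizer's chat template before generation. A minimal sketch, assuming the converted tokenizer ships a Gemma chat template (as Gemma-2 instruct checkpoints typically do):

```python
from mlx_lm import load, generate

model, tokenizer = load("cogbuji/MrGrammaticaOntology-gemma-2-9B-It-SPPO-Iter3-SCT-DRIFT-core-0.6.5")

# Format the raw prompt with the model's chat template so the
# instruction-tuned weights see the turn markers they were trained on.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

If you prefer not to write Python, recent mlx-lm releases also expose a command-line interface, e.g. `python -m mlx_lm.generate --model <model> --prompt "hello"`.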