---
license: llama3.2
pipeline_tag: text-generation
base_model: scb10x/llama3.2-typhoon2-3b-instruct
tags:
- mlx
---

# Float16-cloud/llama3.2-typhoon2-3b-instruct-mlx-8bit

The model [Float16-cloud/llama3.2-typhoon2-3b-instruct-mlx-8bit](https://huggingface.co/Float16-cloud/llama3.2-typhoon2-3b-instruct-mlx-8bit) was converted to MLX format from [scb10x/llama3.2-typhoon2-3b-instruct](https://huggingface.co/scb10x/llama3.2-typhoon2-3b-instruct) using mlx-lm version **0.20.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the 8-bit weights from the Hugging Face Hub and load them.
model, tokenizer = load("Float16-cloud/llama3.2-typhoon2-3b-instruct-mlx-8bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available,
# so the instruct-tuned model sees the format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
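
mlx-lm also installs a small command-line generator, which is handy for a quick smoke test without writing any Python. A minimal invocation might look like the sketch below; the flags shown (`--model`, `--prompt`, `--max-tokens`) reflect recent mlx-lm releases, so check `mlx_lm.generate --help` for the exact options in your installed version.

```bash
# Quick test from the terminal: mlx-lm resolves the Hub repo ID, downloads
# the 8-bit weights, applies the chat template, and prints the completion.
mlx_lm.generate --model Float16-cloud/llama3.2-typhoon2-3b-instruct-mlx-8bit \
    --prompt "hello" --max-tokens 256
```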