Original Model Link : https://huggingface.co/PleIAs/Pleias-3b-Preview
name: Pleias-3b-Preview-Q8-mlx
license: apache-2.0
base_model: PleIAs/Pleias-3b-Preview
datasets: PleIAs/common_corpus
thumbnail: "https://cdn-avatars.huggingface.co/v1/production/uploads/65ff1816871b36bf84fc3c37/JZLA1RQXQ7NanLF5G9c6Q.png"
pipeline_tag: text2text-generation
library_name: mlx
hardware_type: "NVIDIA H100 x192"
hours_used: 480
cloud_provider: "GENCI"
cloud_region: "France"
co2_emitted: "16 tons CO2eq"
model_type: Llama/GPT-Neox
tags:
- text-to-text
- completions
funded_by:
- "Mozilla Foundation Local AI Program"
- "étalab"
task:
- text-generation
- text-to-text
- text2text-generation
language:
- en
- fr
- es
- de
- it
- nl
- la
- pt
get_started_code:
- uvx --from mlx-lm mlx_lm.generate --model "darkshapes/Pleias-3b-Preview-Q8-mlx" --prompt ' def create_pipeline(self, architecture, *args, **kwargs):\n """\n Build a diffusers pipe based on model type\n '
# Pleias 3b MLX
Pleias is a novel, fully-open completion model that infers the remainder of partial prompts.
From the original model card:
It includes the following features, which apply to any responsibly trained variant:
- Trained only on open data under a permissive license and in compliance with the European AI Act. By design, all Pleias models are unable to output copyrighted content.
- Extensive multilingual support for the main European languages.
- A new tokenizer designed for enhanced document processing tasks and better multilingual support.
- Extremely low level of toxicity and problematic content.
MLX is a machine-learning framework that targets the Metal graphics API on Apple computers with ARM M-series processors (M1/M2/M3/M4).
Generation using uv (https://docs.astral.sh/uv/):

```shell
uvx --from mlx-lm mlx_lm.generate --model "darkshapes/Pleias-3b-Preview-Q8-mlx" --prompt ' def create_pipeline(self, architecture, *args, **kwargs):\n """\n Build a diffusers pipe based on model type\n '
```
Generation using pip:

```shell
pip install mlx-lm
python -m mlx_lm.generate --model "darkshapes/Pleias-3b-Preview-Q8-mlx" --prompt ' def create_pipeline(self, architecture, *args, **kwargs):\n """\n Build a diffusers pipe based on model type\n '
```