Commit b4669ac (verified) · prince-canuma · Parent: 5683279
Create README.md

Files changed (1): README.md (+42, −0)
---
base_model: arcee-ai/Arcee-Maestro-7B-Preview
library_name: transformers
tags:
- mlx
license: apache-2.0
---

# arcee-ai/Arcee-Maestro-7B-Preview-mlx

The model [arcee-ai/Arcee-Maestro-7B-Preview-mlx](https://huggingface.co/arcee-ai/Arcee-Maestro-7B-Preview-mlx) was
converted to MLX format from [arcee-ai/Arcee-Maestro-7B-Preview](https://huggingface.co/arcee-ai/Arcee-Maestro-7B-Preview)
using mlx-lm version **0.21.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from huggingface_hub import snapshot_download
from mlx_lm import load, generate

# Download only the 4-bit weights. allow_patterns preserves the repo's
# directory layout, so the files land in <local_dir>/4bit.
path = snapshot_download(
    repo_id="arcee-ai/Arcee-Maestro-7B-Preview-MLX",
    allow_patterns="4bit/*",
    local_dir="Arcee-Maestro-7B-Preview-4bit",  # optional: where to save
)

# Load from the 4bit subfolder created by the download above.
model, tokenizer = load("Arcee-Maestro-7B-Preview-4bit/4bit")

prompt = "hello"

# Apply the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```