bartowski committed
Commit a241d70 · verified · 1 Parent(s): 8abb20a

Update README.md

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -17,6 +17,18 @@ Original model: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct
 
  All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
 
+ ## How to run
+
+ Since this is a new vision model, I'll add special instructions this one time
+
+ If you've built llama.cpp locally, you'll want to run:
+
+ ```
+ ./llama-qwen2vl-cli -m /models/Qwen2-VL-2B-Instruct-Q4_0.gguf --mmproj /models/mmproj-Qwen2-VL-2B-Instruct-f32.gguf -p 'Describe this image.' --image '/models/test_image.jpg'
+ ```
+
+ And the model will output the answer. Very simple stuff, similar to other llava models; just make sure you use the correct binary!
+
  ## Prompt format
 
  ```
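
If you also need the files themselves, here is a minimal sketch of pulling them with huggingface-cli before running the command above. The repo id (bartowski/Qwen2-VL-2B-Instruct-GGUF) and the ./models output directory are assumptions, not part of this commit; swap in whichever quant and mmproj files you actually want.

```
# Sketch: download one quant plus its mmproj companion file.
# The repo id and local directory are assumptions; adjust as needed.
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/Qwen2-VL-2B-Instruct-GGUF \
  Qwen2-VL-2B-Instruct-Q4_0.gguf mmproj-Qwen2-VL-2B-Instruct-f32.gguf \
  --local-dir ./models
```

Then point -m and --mmproj in the llama-qwen2vl-cli command above at wherever those files land.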