apepkuss79 committed
Commit 031b5f5 · verified · 1 parent: 62e0b3d

Update README.md

Files changed (1): README.md (+5 −35)

README.md CHANGED
````diff
@@ -25,48 +25,18 @@ tags:
 
 [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
 
-## Run with `sd-api-server`
-
-Go to the [sd-api-server](https://github.com/LlamaEdge/sd-api-server/blob/main/README.md) repository for more information.
-
-<!-- - LlamaEdge version: [v0.12.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.2) and above
-
-- Prompt template
-
-  - Prompt type: `chatml`
-
-  - Prompt string
-
-    ```text
-    <|im_start|>system
-    {system_message}<|im_end|>
-    <|im_start|>user
-    {prompt}<|im_end|>
-    <|im_start|>assistant
-    ```
-
-- Context size: `4096`
+## Run with LlamaEdge-StableDiffusion
+
+- Version: [v0.2.0](https://github.com/LlamaEdge/sd-api-server/releases/tag/0.2.0)
 
 - Run as LlamaEdge service
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:stablelm-2-12b-chat-Q5_K_M.gguf \
-    llama-api-server.wasm \
-    --prompt-template chatml \
-    --ctx-size 4096 \
-    --model-name stablelm-2-12b-chat
+  wasmedge --dir .:. sd-api-server.wasm \
+    --model-name sd-v1.5 \
+    --model stable-diffusion-v1-5-pruned-emaonly-Q8_0.gguf
   ```
 
-- Run as LlamaEdge command app
-
-  ```bash
-  wasmedge --dir .:. \
-    --nn-preload default:GGML:AUTO:stablelm-2-12b-chat-Q5_K_M.gguf \
-    llama-chat.wasm \
-    --prompt-template chatml \
-    --ctx-size 4096
-  ``` -->
-
 ## Quantized GGUF Models
 
 Using formats of different precisions will yield results of varying quality.
````
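The updated README boils the quickstart down to the single `wasmedge` command in the diff. As a minimal sketch of what a client request might then look like — assuming sd-api-server exposes an OpenAI-style `/v1/images/generations` endpoint on port 8080 (both the endpoint path, the port, and the field names are assumptions not stated in this diff; check the sd-api-server README for the actual schema):

```shell
# Build a JSON body for an image-generation request.
# The "model" value matches the --model-name flag from the diff above;
# the "prompt" text is purely illustrative.
PAYLOAD='{"model": "sd-v1.5", "prompt": "a photo of an astronaut riding a horse"}'
echo "$PAYLOAD"

# Hypothetical invocation once the server is listening (endpoint and port
# are assumptions):
#   curl -X POST http://localhost:8080/v1/images/generations \
#        -H 'Content-Type: application/json' \
#        -d "$PAYLOAD"
```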