Update README.md

README.md CHANGED

@@ -6,7 +6,7 @@ colorTo: yellow
 sdk: docker
 pinned: true
 app_port: 3000
-disable_embedding:
+disable_embedding: false
 ---
 
 # AI Comic Factory
@@ -88,7 +88,7 @@ LLM_ENGINE="INFERENCE_ENDPOINT"
 
 HF_API_TOKEN="Your Hugging Face token"
 
-HF_INFERENCE_ENDPOINT_URL="path to your
+HF_INFERENCE_ENDPOINT_URL="path to your INFERENCE endpoint url"
 ```
 
 To run this kind of LLM locally, you can use [TGI](https://github.com/huggingface/text-generation-inference) (Please read [this post](https://github.com/huggingface/text-generation-inference/issues/726) for more information about the licensing).
@@ -139,7 +139,7 @@ To use Replicate, create a `.env.local` configuration file:
 ```bash
 RENDERING_ENGINE="REPLICATE"
 
-REPLICATE_API_TOKEN="Your
+REPLICATE_API_TOKEN="Your REPLICATE token"
 
 REPLICATE_API_MODEL="stabilityai/sdxl"
 
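The last two hunks edit a `.env.local` configuration file. As a minimal sketch of what the resulting file could look like, here are the variables shown in the diff gathered in one place; all variable names come from the diff itself, the values are placeholders, and putting the LLM and rendering settings in a single file is an assumption for illustration:

```shell
# .env.local — sketch only; replace the placeholder values with your own.

# LLM backend: a Hugging Face Inference Endpoint
LLM_ENGINE="INFERENCE_ENDPOINT"
HF_API_TOKEN="hf_xxxxxxxxxxxxxxxx"        # your Hugging Face token (placeholder)
HF_INFERENCE_ENDPOINT_URL="https://your-endpoint.example"  # placeholder endpoint URL

# Rendering backend: Replicate
RENDERING_ENGINE="REPLICATE"
REPLICATE_API_TOKEN="r8_xxxxxxxxxxxxxxxx" # your Replicate token (placeholder)
REPLICATE_API_MODEL="stabilityai/sdxl"
```

With this change applied, the new `disable_embedding: false` front-matter key and the completed endpoint/token values replace the previously truncated lines.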