ChrisJohnson111 committed on
Commit dfd65b7 · verified · 1 Parent(s): 2728346

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -6,7 +6,7 @@ colorTo: yellow
 sdk: docker
 pinned: true
 app_port: 3000
-disable_embedding: true
+disable_embedding: false
 ---
 
 # AI Comic Factory
@@ -88,7 +88,7 @@ LLM_ENGINE="INFERENCE_ENDPOINT"
 
 HF_API_TOKEN="Your Hugging Face token"
 
-HF_INFERENCE_ENDPOINT_URL="path to your inference endpoint url"
+HF_INFERENCE_ENDPOINT_URL="path to your INFERENCE endpoint url"
 ```
 
 To run this kind of LLM locally, you can use [TGI](https://github.com/huggingface/text-generation-inference) (Please read [this post](https://github.com/huggingface/text-generation-inference/issues/726) for more information about the licensing).
@@ -139,7 +139,7 @@ To use Replicate, create a `.env.local` configuration file:
 ```bash
 RENDERING_ENGINE="REPLICATE"
 
-REPLICATE_API_TOKEN="Your Replicate token"
+REPLICATE_API_TOKEN="Your REPLICATE token"
 
 REPLICATE_API_MODEL="stabilityai/sdxl"
 
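
For context, a minimal sketch of the `.env.local` the touched hunks describe, assuming the Inference Endpoint LLM and Replicate rendering settings are used together; every value is a placeholder, not a working credential or URL:

```bash
# Sketch of a .env.local combining the settings referenced in this commit.
# All values below are placeholders and must be replaced with your own.

# Text generation through a Hugging Face Inference Endpoint
LLM_ENGINE="INFERENCE_ENDPOINT"
HF_API_TOKEN="<your Hugging Face token>"
HF_INFERENCE_ENDPOINT_URL="<url of your inference endpoint>"

# Image rendering through Replicate
RENDERING_ENGINE="REPLICATE"
REPLICATE_API_TOKEN="<your Replicate token>"
REPLICATE_API_MODEL="stabilityai/sdxl"
```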