This example, based on this [MJPEG server](https://github.com/radames/Real-Time-Latent-Consistency-Model/), runs image-to-image with a live webcam feed or screen capture on a web browser.
## Usage
### 1. Prepare Dependencies
You need Node.js 18+ and Python 3.10 to run this example. Please make sure you've installed all dependencies according to the [installation instructions](../README.md#installation).
```bash
cd frontend
npm i
npm run build
cd ..
pip install -r requirements.txt
```
If you have trouble installing `npm`, you can try installing Node.js via `conda`:
```bash
conda install -c conda-forge nodejs
```
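Either way, you can confirm that the installed toolchain meets the stated requirements (Node.js 18+, Python 3.10) before continuing. This is just a sanity-check sketch; it assumes `node` and `python` are on your `PATH`:

```shell
# Print the installed versions
node --version    # e.g. v18.17.0
python --version  # e.g. Python 3.10.12

# Optional: extract the Node.js major version and warn if it is too old
NODE_MAJOR=$(node --version | sed 's/^v\([0-9]*\).*/\1/')
[ "$NODE_MAJOR" -ge 18 ] && echo "Node.js OK" || echo "Node.js is older than 18"
```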
### 2. Run Demo
If you run the demo with the default [settings](./demo_cfg.yaml), you first need to download the model for the `felted` style.
```bash
bash ../scripts/download_model.sh felted
```
Then, you can run the demo with the following command, and open `http://127.0.0.1:7860` in your browser:
```bash
# With TensorRT acceleration; be patient on the first run, which may take more than 20 minutes
python main.py --port 7860 --host 127.0.0.1 --acceleration tensorrt
# If you don't have TensorRT, run with `none` acceleration instead
python main.py --port 7860 --host 127.0.0.1 --acceleration none
```
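Once the server reports that it is running, you can verify it is reachable from a second terminal. This is a hypothetical check, assuming `curl` is available and the port matches the `--port` flag above:

```shell
# Exit code 0 and "server is up" means the demo page is being served
curl -sf http://127.0.0.1:7860 > /dev/null && echo "server is up"
```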
If you want to run this demo on a remote server, set the host to `0.0.0.0`, e.g.
```bash
python main.py --port 7860 --host 0.0.0.0 --acceleration tensorrt
```
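Alternatively, if you prefer not to expose the port publicly, an SSH tunnel can forward the demo to your local machine while the server stays on `127.0.0.1`. This is a sketch; `user@remote-server` is a placeholder for your actual login:

```shell
# Forward local port 7860 to port 7860 on the remote machine,
# then open http://127.0.0.1:7860 in your local browser
ssh -N -L 7860:127.0.0.1:7860 user@remote-server
```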