# VLM Demo
> *VLM Demo*: Lightweight repo for chatting with models loaded into *VLM Bench*.
---
## Installation
This repository can be installed as follows:
```bash
git clone git@github.com:TRI-ML/vlm-demo.git
cd vlm-demo
pip install -e .
```
This repository also requires that the `vlm-bench` package (`vlbench`) and the
`prismatic-vlms` package (`prisma`) are installed in the current environment.
Both can be installed from source from the following git repositories:
+ `vlm-bench`: https://github.com/TRI-ML/vlm-bench
+ `prismatic-vlms`: https://github.com/TRI-ML/prismatic-vlms
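Assuming each of these repositories supports an editable install like this one (an assumption; check each repo's own README), a minimal setup sketch looks like:
```bash
# Sketch: install both dependencies from source (editable installs assumed).
git clone https://github.com/TRI-ML/vlm-bench.git
cd vlm-bench && pip install -e . && cd ..

git clone https://github.com/TRI-ML/prismatic-vlms.git
cd prismatic-vlms && pip install -e . && cd ..
```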
## Usage
The demo has three components, each run as a separate process:
+ Gradio controller: `serve/gradio_controller.py`
+ Gradio web server: `serve/gradio_web_server.py`
+ Interactive demo: `interactive_demo.py`

To run the demo, start each component with the following commands.

Start the Gradio controller:
```bash
python -m serve.controller --host 0.0.0.0 --port 10000
```
Start the Gradio web server:
```bash
python -m serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload --share
```
Run the interactive demo:
```bash
CUDA_VISIBLE_DEVICES=0 python -m interactive_demo --port 40000 --model_dir <PATH TO MODEL CKPT>
```
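All three processes need to be up at the same time; one way to keep them running side by side is a terminal multiplexer. The sketch below uses `tmux` purely as an illustration (separate terminals work just as well), and `<PATH TO MODEL CKPT>` remains a placeholder for your checkpoint path:
```bash
# Illustrative only: launch each component in its own detached tmux session.
tmux new-session -d -s controller \
  "python -m serve.controller --host 0.0.0.0 --port 10000"
tmux new-session -d -s web_server \
  "python -m serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload --share"
tmux new-session -d -s demo \
  "CUDA_VISIBLE_DEVICES=0 python -m interactive_demo --port 40000 --model_dir <PATH TO MODEL CKPT>"
```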
## Contributing
Before committing to the repository, *make sure to set up your dev environment!*
Here are the basic development environment setup guidelines (a concrete sketch follows the list):
+ Fork/clone the repository, performing an editable installation. Make sure to install with the development dependencies
(e.g., `pip install -e ".[dev]"`); this will install `black`, `ruff`, and `pre-commit`.
+ Install `pre-commit` hooks (`pre-commit install`).
+ Create a branch for the specific feature/issue, then issue a PR against the upstream repository for review.
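
Concretely, the setup above might look like the following end to end (a sketch: `<YOUR-USERNAME>` and `<feature-branch>` are placeholders):
```bash
# Clone your fork and install in editable mode with dev dependencies
git clone git@github.com:<YOUR-USERNAME>/vlm-demo.git
cd vlm-demo
pip install -e ".[dev]"   # pulls in black, ruff, and pre-commit

# Install the pre-commit hooks
pre-commit install

# Work on a dedicated branch before opening a PR upstream
git checkout -b <feature-branch>
```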