---
title: EchoMimic
emoji: 🎨
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 5.4.0
app_file: webgui.py
pinned: false
suggested_hardware: a10g-large
short_description: Audio-Driven Portrait Animations
---
<h1 align='center'>EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning</h1>
<div align='center'>
<a href='https://github.com/yuange250' target='_blank'>Zhiyuan Chen</a><sup>*</sup> 
<a href='https://github.com/JoeFannie' target='_blank'>Jiajiong Cao</a><sup>*</sup> 
<a href='https://github.com/octavianChen' target='_blank'>Zhiquan Chen</a><sup></sup> 
<a href='https://github.com/lymhust' target='_blank'>Yuming Li</a><sup></sup> 
<a href='https://github.com/' target='_blank'>Chenguang Ma</a><sup></sup>
</div>
<div align='center'>
*Equal Contribution.
</div>
<div align='center'>
Terminal Technology Department, Alipay, Ant Group.
</div>
<br>
<div align='center'>
<a href='https://badtobest.github.io/echomimic.html'><img src='https://img.shields.io/badge/Project-Page-blue'></a>
<a href='https://huggingface.co/BadToBest/EchoMimic'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
<a href='https://www.modelscope.cn/models/BadToBest/EchoMimic'><img src='https://img.shields.io/badge/ModelScope-Model-purple'></a>
<a href='https://arxiv.org/abs/2407.08136'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
<a href='assets/echomimic.png'><img src='https://badges.aleen42.com/src/wechat.svg'></a>
</div>
## 📣 📣 Updates
* [2024.07.17] 🔥🔥🔥 Accelerated models and pipeline are released. Inference speed improves by **10x** (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
* [2024.07.14] 🔥 [ComfyUI](https://github.com/smthemex/ComfyUI_EchoMimic) is now available. Thanks @smthemex for the contribution.
* [2024.07.13] 🔥 Thanks [NewGenAI](https://www.youtube.com/@StableAIHub) for the [video installation tutorial](https://www.youtube.com/watch?v=8R0lTIY7tfI).
* [2024.07.13] 🔥 We release our pose- and audio-driven code and models.
* [2024.07.12] 🔥 WebUI and GradioUI versions are released. We thank @greengerong, @Robin021, and @O-O1024 for their contributions.
* [2024.07.12] 🔥 Our [paper](https://arxiv.org/abs/2407.08136) is now public on arXiv.
* [2024.07.09] 🔥 We release our audio-driven code and models.
## Gallery
### Audio Driven (Sing)
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/d014d921-9f94-4640-97ad-035b00effbfe" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/877603a5-a4f9-4486-a19f-8888422daf78" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/e0cb5afb-40a6-4365-84f8-cb2834c4cfe7" muted="false"></video>
</td>
</tr>
</table>
### Audio Driven (English)
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/386982cd-3ff8-470d-a6d9-b621e112f8a5" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/5c60bb91-1776-434e-a720-8857a00b1501" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/1f15adc5-0f33-4afa-b96a-2011886a4a06" muted="false"></video>
</td>
</tr>
</table>
### Audio Driven (Chinese)
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/a8092f9a-a5dc-4cd6-95be-1831afaccf00" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/c8b5c59f-0483-42ef-b3ee-4cffae6c7a52" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/532a3e60-2bac-4039-a06c-ff6bf06cb4a4" muted="false"></video>
</td>
</tr>
</table>
### Landmark Driven
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/1da6c46f-4532-4375-a0dc-0a4d6fd30a39" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/d4f4d5c1-e228-463a-b383-27fb90ed6172" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/18bd2c93-319e-4d1c-8255-3f02ba717475" muted="false"></video>
</td>
</tr>
</table>
### Audio + Selected Landmark Driven
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/4a29d735-ec1b-474d-b843-3ff0bdf85f55" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/b994c8f5-8dae-4dd8-870f-962b50dc091f" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/BadToBest/EchoMimic/assets/11451501/955c1d51-07b2-494d-ab93-895b9c43b896" muted="false"></video>
</td>
</tr>
</table>
**(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)**
## Installation
### Download the Codes
```bash
git clone https://github.com/BadToBest/EchoMimic
cd EchoMimic
```
### Python Environment Setup
- Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
- Tested GPUs: A100 (80G) / RTX 4090D (24G) / V100 (16G)
- Tested Python Versions: 3.8 / 3.10 / 3.11

Create a conda environment (recommended):
```bash
conda create -n echomimic python=3.8
conda activate echomimic
```
Install packages with `pip`:
```bash
pip install -r requirements.txt
```
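Before going further, it can help to confirm that PyTorch sees your CUDA device, since all inference below runs on the GPU. An optional sanity check (assuming PyTorch is pulled in by `requirements.txt`):
```python
# Optional sanity check: verify PyTorch can see a CUDA device.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```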
### Download ffmpeg-static
Download and decompress [ffmpeg-static](https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz), then set the `FFMPEG_PATH` environment variable:
```bash
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
```
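A quick way to confirm the static build is picked up (an optional check, assuming the archive was extracted to the path above):
```bash
# Should report ffmpeg version 4.4
$FFMPEG_PATH/ffmpeg -version
```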
### Download pretrained weights
```shell
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
```
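If you prefer a Python API over `git lfs`, the same weights can be fetched with `huggingface_hub` (an optional sketch; run `pip install huggingface_hub` first if needed):
```python
from huggingface_hub import snapshot_download

# Download the full BadToBest/EchoMimic model repo into ./pretrained_weights
snapshot_download(repo_id="BadToBest/EchoMimic", local_dir="pretrained_weights")
```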
The **pretrained_weights** directory is organized as follows.
```
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│   └── ...
├── sd-image-variations-diffusers
│   └── ...
└── audio_processor
    └── whisper_tiny.pt
```
Among these, **denoising_unet.pth**, **reference_unet.pth**, **motion_module.pth**, and **face_locator.pth** are the main EchoMimic checkpoints. The other models can also be downloaded from their original hubs, thanks to the authors' brilliant work:
- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
- [sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
- [audio_processor(whisper)](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)
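Before running inference, it can save time to check that the expected files are all in place. A minimal sketch (the paths mirror the layout shown above; the script itself is hypothetical, not part of the repo):
```python
from pathlib import Path

EXPECTED = [
    "denoising_unet.pth",
    "reference_unet.pth",
    "motion_module.pth",
    "face_locator.pth",
    "sd-vae-ft-mse",
    "sd-image-variations-diffusers",
    "audio_processor/whisper_tiny.pt",
]

root = Path("pretrained_weights")
missing = [name for name in EXPECTED if not (root / name).exists()]
if missing:
    raise SystemExit(f"Missing weights: {missing}")
print("All pretrained weights found.")
```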
### Audio-Driven Algo Inference
Run the Python inference scripts:
```bash
python -u infer_audio2vid.py
python -u infer_audio2vid_pose.py
```
### Audio-Driven Algo Inference on Your Own Cases
Edit the inference config file **./configs/prompts/animation.yaml** and add your own case:
```yaml
test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
```
Then run the Python inference script:
```bash
python -u infer_audio2vid.py
```
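For reference, `test_cases` is a plain YAML mapping from a reference image to a list of driving audio files, so you can batch several cases in one config. A sketch of how that mapping parses (using PyYAML; the paths are placeholders):
```python
import yaml  # pip install pyyaml

with open("./configs/prompts/animation.yaml") as f:
    cfg = yaml.safe_load(f)

# Each image key maps to one or more driving audio clips.
for image_path, audio_paths in cfg["test_cases"].items():
    for audio_path in audio_paths:
        print(f"{image_path} <- {audio_path}")
```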
### Motion Alignment between Ref. Img. and Driven Vid.
(First, download the checkpoints with the `_pose.pth` suffix from Hugging Face.)
Edit `driver_video` and `ref_image` to your own paths in `demo_motion_sync.py`, then run:
```bash
python -u demo_motion_sync.py
```
### Audio & Pose-Driven Algo Inference
Edit **./configs/prompts/animation_pose.yaml**, then run:
```bash
python -u infer_audio2vid_pose.py
```
### Pose-Driven Algo Inference
Set `draw_mouse=True` at line 135 of `infer_audio2vid_pose.py` (or use the one-liner shown below), edit **./configs/prompts/animation_pose.yaml**, then run:
```bash
python -u infer_audio2vid_pose.py
```
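If you'd rather not edit the file by hand, a one-line (hypothetical) patch, assuming the flag currently reads `draw_mouse=False` at that line:
```bash
sed -i 's/draw_mouse=False/draw_mouse=True/' infer_audio2vid_pose.py
```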
### Run the Gradio UI
Thanks to the contribution from @Robin021:
```bash
python -u webgui.py --server_port=3000
```
## Release Plans
| Status | Milestone | ETA |
|:------:|:---------------------------------------------------------------------|:---------------:|
| ✅ | The inference source code of the Audio-Driven algo released on GitHub | 9th July, 2024 |
| ✅ | Pretrained models trained on English and Mandarin Chinese released | 9th July, 2024 |
| ✅ | The inference source code of the Pose-Driven algo released on GitHub | 13th July, 2024 |
| ✅ | Pretrained models with better pose control released | 13th July, 2024 |
| ✅ | Accelerated models released | 17th July, 2024 |
| 🚀 | Pretrained models with better singing performance to be released | TBD |
| 🚀 | Large-scale and high-resolution Chinese-based talking head dataset | TBD |
## Acknowledgements
We would like to thank the contributors to the [AnimateDiff](https://github.com/guoyww/AnimateDiff), [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) and [MuseTalk](https://github.com/TMElyralab/MuseTalk) repositories, for their open research and exploration.
We are also grateful to [V-Express](https://github.com/tencent-ailab/V-Express) and [hallo](https://github.com/fudan-generative-vision/hallo) for their outstanding work in the area of diffusion-based talking heads.
If we have missed any open-source projects or related articles, we will gladly add them to the acknowledgements immediately.
## Citation
If you find our work useful for your research, please consider citing the paper:
```
@misc{chen2024echomimic,
      title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
      author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
      year={2024},
      eprint={2407.08136},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=BadToBest/EchoMimic&type=Date)](https://star-history.com/#BadToBest/EchoMimic&Date)