Spaces: Runtime error

Commit e447c26 · Update README.md
Rohit Kochikkat Francis committed
1 Parent(s): e7cae83

README.md CHANGED
@@ -1,112 +1,13 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Installation
- ------------
-
- Be aware, the installation needs technical skills and is not for beginners. Please do not open platform and installation related issues on GitHub. We have a very helpful [Discord](https://join.facefusion.io) community that will guide you to complete the installation.
-
- Get started with the [installation](https://docs.facefusion.io/installation) guide.
-
-
- Usage
- -----
-
- Run the command:
-
- ```
- python run.py [options]
-
- options:
-   -h, --help show this help message and exit
-   -s SOURCE_PATHS, --source SOURCE_PATHS choose single or multiple source images or audios
-   -t TARGET_PATH, --target TARGET_PATH choose single target image or video
-   -o OUTPUT_PATH, --output OUTPUT_PATH specify the output file or directory
-   -v, --version show program's version number and exit
-
- misc:
-   --force-download force automate downloads and exit
-   --skip-download omit automate downloads and remote lookups
-   --headless run the program without a user interface
-   --log-level {error,warn,info,debug} adjust the message severity displayed in the terminal
-
- execution:
-   --execution-providers EXECUTION_PROVIDERS [EXECUTION_PROVIDERS ...] accelerate the model inference using different providers (choices: cpu, ...)
-   --execution-thread-count [1-128] specify the amount of parallel threads while processing
-   --execution-queue-count [1-32] specify the amount of frames each thread is processing
-
- memory:
-   --video-memory-strategy {strict,moderate,tolerant} balance fast frame processing and low VRAM usage
-   --system-memory-limit [0-128] limit the available RAM that can be used while processing
-
- face analyser:
-   --face-analyser-order {left-right,right-left,top-bottom,bottom-top,small-large,large-small,best-worst,worst-best} specify the order in which the face analyser detects faces
-   --face-analyser-age {child,teen,adult,senior} filter the detected faces based on their age
-   --face-analyser-gender {female,male} filter the detected faces based on their gender
-   --face-detector-model {many,retinaface,scrfd,yoloface,yunet} choose the model responsible for detecting the face
-   --face-detector-size FACE_DETECTOR_SIZE specify the size of the frame provided to the face detector
-   --face-detector-score [0.0-1.0] filter the detected faces base on the confidence score
-   --face-landmarker-score [0.0-1.0] filter the detected landmarks base on the confidence score
-
- face selector:
-   --face-selector-mode {many,one,reference} use reference based tracking or simple matching
-   --reference-face-position REFERENCE_FACE_POSITION specify the position used to create the reference face
-   --reference-face-distance [0.0-1.5] specify the desired similarity between the reference face and target face
-   --reference-frame-number REFERENCE_FRAME_NUMBER specify the frame used to create the reference face
-
- face mask:
-   --face-mask-types FACE_MASK_TYPES [FACE_MASK_TYPES ...] mix and match different face mask types (choices: box, occlusion, region)
-   --face-mask-blur [0.0-1.0] specify the degree of blur applied the box mask
-   --face-mask-padding FACE_MASK_PADDING [FACE_MASK_PADDING ...] apply top, right, bottom and left padding to the box mask
-   --face-mask-regions FACE_MASK_REGIONS [FACE_MASK_REGIONS ...] choose the facial features used for the region mask (choices: skin, left-eyebrow, right-eyebrow, left-eye, right-eye, glasses, nose, mouth, upper-lip, lower-lip)
-
- frame extraction:
-   --trim-frame-start TRIM_FRAME_START specify the the start frame of the target video
-   --trim-frame-end TRIM_FRAME_END specify the the end frame of the target video
-   --temp-frame-format {bmp,jpg,png} specify the temporary resources format
-   --keep-temp keep the temporary resources after processing
-
- output creation:
-   --output-image-quality [0-100] specify the image quality which translates to the compression factor
-   --output-image-resolution OUTPUT_IMAGE_RESOLUTION specify the image output resolution based on the target image
-   --output-video-encoder {libx264,libx265,libvpx-vp9,h264_nvenc,hevc_nvenc,h264_amf,hevc_amf} specify the encoder use for the video compression
-   --output-video-preset {ultrafast,superfast,veryfast,faster,fast,medium,slow,slower,veryslow} balance fast video processing and video file size
-   --output-video-quality [0-100] specify the video quality which translates to the compression factor
-   --output-video-resolution OUTPUT_VIDEO_RESOLUTION specify the video output resolution based on the target video
-   --output-video-fps OUTPUT_VIDEO_FPS specify the video output fps based on the target video
-   --skip-audio omit the audio from the target video
-
- frame processors:
-   --frame-processors FRAME_PROCESSORS [FRAME_PROCESSORS ...] load a single or multiple frame processors. (choices: face_debugger, face_enhancer, face_swapper, frame_colorizer, frame_enhancer, lip_syncer, ...)
-   --face-debugger-items FACE_DEBUGGER_ITEMS [FACE_DEBUGGER_ITEMS ...] load a single or multiple frame processors (choices: bounding-box, face-landmark-5, face-landmark-5/68, face-landmark-68, face-landmark-68/5, face-mask, face-detector-score, face-landmarker-score, age, gender)
-   --face-enhancer-model {codeformer,gfpgan_1.2,gfpgan_1.3,gfpgan_1.4,gpen_bfr_256,gpen_bfr_512,gpen_bfr_1024,gpen_bfr_2048,restoreformer_plus_plus} choose the model responsible for enhancing the face
-   --face-enhancer-blend [0-100] blend the enhanced into the previous face
-   --face-swapper-model {blendswap_256,inswapper_128,inswapper_128_fp16,simswap_256,simswap_512_unofficial,uniface_256} choose the model responsible for swapping the face
-   --frame-colorizer-model {ddcolor,ddcolor_artistic,deoldify,deoldify_artistic,deoldify_stable} choose the model responsible for colorizing the frame
-   --frame-colorizer-blend [0-100] blend the colorized into the previous frame
-   --frame-colorizer-size {192x192,256x256,384x384,512x512} specify the size of the frame provided to the frame colorizer
-   --frame-enhancer-model {lsdir_x4,nomos8k_sc_x4,real_esrgan_x2,real_esrgan_x2_fp16,real_esrgan_x4,real_esrgan_x4_fp16,real_hatgan_x4,span_kendata_x4} choose the model responsible for enhancing the frame
-   --frame-enhancer-blend [0-100] blend the enhanced into the previous frame
-   --lip-syncer-model {wav2lip_gan} choose the model responsible for syncing the lips
-
- uis:
-   --ui-layouts UI_LAYOUTS [UI_LAYOUTS ...] launch a single or multiple UI layouts (choices: benchmark, default, webcam, ...)
- ```
-
-
- Documentation
- -------------
-
- Read the [documentation](https://docs.facefusion.io) for a deep dive.
+ ---
+ title: Test
+ emoji: 🐢
+ colorFrom: pink
+ colorTo: pink
+ sdk: gradio
+ sdk_version: 4.27.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference