svjack committed · verified
Commit b2e1391 · Parent(s): 78faea2

Update README.md

Files changed (1): README.md (+11, −126)

README.md CHANGED
@@ -1,129 +1,14 @@
- # Prince Xiang HunyuanVideo LoRA
-
- This repository contains the necessary setup and scripts to generate videos using the HunyuanVideo model with a LoRA (Low-Rank Adaptation) fine-tuned for Prince Xiang. Below are the instructions to install dependencies, download models, and run the demo.
-
- ---
-
- ## Installation
-
- ### Step 1: Install System Dependencies
- Run the following command to install the required system packages:
- ```bash
- sudo apt-get update && sudo apt-get install git-lfs ffmpeg cbm
- ```
-
- ### Step 2: Clone the Repository
- Clone the repository and navigate to the project directory:
- ```bash
- git clone https://huggingface.co/svjack/Prince_Xiang_ConsistentID_HunyuanVideo_lora
- cd Prince_Xiang_ConsistentID_HunyuanVideo_lora
- ```
-
- ### Step 3: Install Python Dependencies
- Install the required Python packages:
- ```bash
- conda create -n py310 python=3.10
- conda activate py310
- pip install ipykernel
- python -m ipykernel install --user --name py310 --display-name "py310"
-
- pip install -r requirements.txt
- pip install ascii-magic matplotlib tensorboard huggingface_hub
- pip install moviepy==1.0.3
- pip install sageattention==1.0.6
-
- pip install torch==2.5.0 torchvision
- ```
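The version pins above (moviepy 1.0.3, sageattention 1.0.6, torch 2.5.0) are easy to break with a later `pip install`. A minimal standard-library sketch for checking installed versions against those pins; the package names and versions come from the commands above, and `check_pins` is an illustrative helper, not part of the repo:

```python
# Illustrative check of installed package versions against the pins from the
# install step above; uses only the standard library (importlib.metadata).
from importlib import metadata

PINS = {"moviepy": "1.0.3", "sageattention": "1.0.6", "torch": "2.5.0"}

def check_pins(pins):
    """Return a status per package: 'ok', 'not installed', or 'mismatch: <version>'."""
    results = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            results[name] = "not installed"
        else:
            results[name] = "ok" if installed == wanted else f"mismatch: {installed}"
    return results

if __name__ == "__main__":
    for name, status in check_pins(PINS).items():
        print(f"{name}: {status}")
```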
-
- ---
-
- ## Download Models
-
- ### Step 1: Download HunyuanVideo Model
- Download the HunyuanVideo model and place it in the `ckpts` directory:
- ```bash
- huggingface-cli download tencent/HunyuanVideo --local-dir ./ckpts
- ```
-
- ### Step 2: Download LLaVA Model
- Download the LLaVA model and preprocess it:
- ```bash
- cd ckpts
- huggingface-cli download xtuner/llava-llama-3-8b-v1_1-transformers --local-dir ./llava-llama-3-8b-v1_1-transformers
- wget https://raw.githubusercontent.com/Tencent/HunyuanVideo/refs/heads/main/hyvideo/utils/preprocess_text_encoder_tokenizer_utils.py
- python preprocess_text_encoder_tokenizer_utils.py --input_dir llava-llama-3-8b-v1_1-transformers --output_dir text_encoder
- ```
-
- ### Step 3: Download CLIP Model
- Download the CLIP model for the text encoder:
- ```bash
- huggingface-cli download openai/clip-vit-large-patch14 --local-dir ./text_encoder_2
- ```
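The three `huggingface-cli` calls above can also be scripted through the `huggingface_hub` Python API (installed in Step 3 of the setup). A hedged sketch: the `local_dir` values fold in the `cd ckpts` from Steps 2 and 3, and the LLaVA preprocessing step (`wget` + `preprocess_text_encoder_tokenizer_utils.py`) still has to run afterwards:

```python
# Sketch of the three downloads above via the huggingface_hub Python API.
# local_dir values account for the `cd ckpts` in Steps 2-3 of the shell version.
try:
    from huggingface_hub import snapshot_download
except ImportError:  # huggingface_hub is installed in Step 3 of the setup
    snapshot_download = None

# (repo_id, local_dir) pairs taken verbatim from the download steps above.
DOWNLOADS = [
    ("tencent/HunyuanVideo", "ckpts"),
    ("xtuner/llava-llama-3-8b-v1_1-transformers",
     "ckpts/llava-llama-3-8b-v1_1-transformers"),
    ("openai/clip-vit-large-patch14", "ckpts/text_encoder_2"),
]

def download_all(downloads=DOWNLOADS):
    """Fetch each repo snapshot into its target directory (network required)."""
    for repo_id, local_dir in downloads:
        snapshot_download(repo_id=repo_id, local_dir=local_dir)
```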
-
  ---
-
- ## Demo
-
- ### Generate Video 1: Prince Xiang
- Run the following command to generate a video of Prince Xiang:
- ```bash
- python hv_generate_video.py \
-     --fp8 \
-     --video_size 544 960 \
-     --video_length 60 \
-     --infer_steps 30 \
-     --prompt "Unreal 5 render of a handsome man img. warm atmosphere, at home, bedroom. a small fishing village on a pier in the background." \
-     --save_path . \
-     --output_type both \
-     --dit ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt \
-     --attn_mode sdpa \
-     --vae ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt \
-     --vae_chunk_size 32 \
-     --vae_spatial_tile_sample_min_size 128 \
-     --text_encoder1 ckpts/text_encoder \
-     --text_encoder2 ckpts/text_encoder_2 \
-     --seed 1234 \
-     --lora_multiplier 1.0 \
-     --lora_weight Xiang_Consis_im_lora_dir/Xiang_Consis_im_lora-000006.safetensors
- ```
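The long command line above is easy to mistype when only the prompt or seed changes between runs. An illustrative Python helper that rebuilds the same invocation; every flag and path is copied from the command above, and `build_cmd` is a hypothetical convenience, not part of the repo:

```python
# Illustrative helper that assembles the hv_generate_video.py command above;
# only flags shown in the README are used. Pass the list to subprocess.run to execute.
def build_cmd(prompt, seed=1234, lora_multiplier=1.0):
    return [
        "python", "hv_generate_video.py",
        "--fp8",
        "--video_size", "544", "960",
        "--video_length", "60",
        "--infer_steps", "30",
        "--prompt", prompt,
        "--save_path", ".",
        "--output_type", "both",
        "--dit", "ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt",
        "--attn_mode", "sdpa",
        "--vae", "ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt",
        "--vae_chunk_size", "32",
        "--vae_spatial_tile_sample_min_size", "128",
        "--text_encoder1", "ckpts/text_encoder",
        "--text_encoder2", "ckpts/text_encoder_2",
        "--seed", str(seed),
        "--lora_multiplier", str(lora_multiplier),
        "--lora_weight", "Xiang_Consis_im_lora_dir/Xiang_Consis_im_lora-000006.safetensors",
    ]

if __name__ == "__main__":
    print(" ".join(build_cmd("a test prompt", seed=42)))
```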
-
- <video controls autoplay src="https://huggingface.co/svjack/Prince_Xiang_ConsistentID_HunyuanVideo_lora/resolve/main/20250209-123847_1234.mp4"></video>
-
- ### Generate Video 2: Prince Xiang
- Run the following command to generate a second video of Prince Xiang:
- ```bash
- python hv_generate_video.py \
-     --fp8 \
-     --video_size 544 960 \
-     --video_length 60 \
-     --infer_steps 30 \
-     --prompt "Unreal 5 render of a handsome man, warm atmosphere, in a lush, vibrant forest. The scene is bathed in golden sunlight filtering through the dense canopy." \
-     --save_path . \
-     --output_type both \
-     --dit ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt \
-     --attn_mode sdpa \
-     --vae ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt \
-     --vae_chunk_size 32 \
-     --vae_spatial_tile_sample_min_size 128 \
-     --text_encoder1 ckpts/text_encoder \
-     --text_encoder2 ckpts/text_encoder_2 \
-     --seed 1234 \
-     --lora_multiplier 1.0 \
-     --lora_weight Xiang_Consis_im_lora_dir/Xiang_Consis_im_lora-000006.safetensors
- ```
-
- <video controls autoplay src="https://huggingface.co/svjack/Prince_Xiang_ConsistentID_HunyuanVideo_lora/resolve/main/20250209-131316_1234.mp4"></video>
  ---

- ## Notes
- - Ensure you have sufficient GPU resources for video generation.
- - Adjust the `--video_size`, `--video_length`, and `--infer_steps` parameters as needed for different output qualities and lengths.
- - The `--prompt` parameter can be modified to generate videos with different scenes or actions.
-
- ---
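When tuning `--video_size` and `--video_length`, it helps to estimate what the model works on internally. A back-of-envelope sketch, assuming HunyuanVideo's causal 3D VAE compresses 8x spatially and 4x temporally into 16 latent channels (the published figures; the exact rounding inside hv_generate_video.py may differ):

```python
# Back-of-envelope latent size for the demo settings. The 8x/4x/16-channel
# figures are the published HunyuanVideo VAE compression rates (an assumption
# here, not read from this repo's code).
def latent_shape(height, width, frames, spatial=8, temporal=4, channels=16):
    t = (frames - 1) // temporal + 1  # causal VAE encodes the first frame alone
    return (channels, t, height // spatial, width // spatial)

if __name__ == "__main__":
    # 544x960, 60 frames as in the demo commands above
    print(latent_shape(544, 960, 60))  # (16, 15, 68, 120)
```

Since attention cost grows with the latent token count, shrinking `--video_size` is usually the quickest way to cut memory: halving both dimensions quarters the spatial latent area.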
  ---
+ title: Hunyuan_Video_Lora_Demo
+ emoji: 📽️
+ colorFrom: yellow
+ colorTo: pink
+ sdk: gradio
+ sdk_version: 5.1.0
+ app_file: app.py
+ pinned: true
+ short_description: Generate videos with Hunyuan, with or without a LoRA
+ license: mit
  ---

+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference