---
license: other
language:
  - en
base_model:
  - Wan-AI/Wan2.1-I2V-14B-480P
  - Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
  - text-to-image
  - lora
  - diffusers
  - template:diffusion-lora
  - image-to-video
widget:
  - text: >-
      Donald Trump speaking into a microphone, then t2k1s takes off clothes
      revealing a lean muscular body and shows off muscles, pointing his index
      finger.
    output:
      url: example_videos/trump_muscle.mp4
  - text: >-
      Elon Musk smiling slightly in a suit jacket, then t2k1s takes off clothes
      revealing a lean muscular body and shows off muscles, giving a friendly
      smile.
    output:
      url: example_videos/elon_muscle.mp4
  - text: >-
      A man with a well-groomed beard and blue shirt smiles at the camera, then
      t2k1s takes off clothes revealing a lean muscular body and shows off
      muscles while smiling widely.
    output:
      url: example_videos/man1_muscle.mp4
  - text: >-
      A smiling man with dark, curly hair and a white t-shirt, then t2k1s takes
      off clothes revealing a lean muscular body and shows off muscles, flexing
      and smiling.
    output:
      url: example_videos/man2_muscle.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and allows you to give muscles to anyone in an image!
The key trigger phrase is: `t2k1s takes off clothes revealing a lean muscular body and shows off muscles`
For prompting, check out the example prompts above; structuring your own prompts the same way works very well.
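If you prefer Diffusers over ComfyUI, a minimal sketch along these lines should work. It assumes the Diffusers variant of the base model (Wan-AI/Wan2.1-I2V-14B-480P-Diffusers) and a recent `diffusers` release with Wan 2.1 support; the LoRA file path and the input image are placeholders, and depending on the key format of the trained weights you may need to convert them before `load_lora_weights` accepts them.

```python
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Placeholder path -- point this at the LoRA .safetensors file from the Downloads section.
pipe.load_lora_weights("path/to/muscle_lora.safetensors")

# Start frame: resize to a 480p-friendly resolution for this checkpoint.
image = load_image("person.jpg").resize((832, 480))

prompt = (
    "A man with a well-groomed beard and blue shirt smiles at the camera, "
    "then t2k1s takes off clothes revealing a lean muscular body and shows off muscles "
    "while smiling widely."
)

frames = pipe(
    image=image,
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "muscle.mp4", fps=16)
```

Note how the prompt first describes the person in the input image and then appends the trigger phrase, matching the example prompts above.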
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
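As a quick sanity check after downloading, you can inspect the weight file with the `safetensors` library; the filename below is a placeholder for whichever file you downloaded.

```python
from safetensors.torch import load_file

# Placeholder filename -- use the .safetensors file you downloaded.
state_dict = load_file("muscle_lora.safetensors")

# Print a few tensor names, shapes, and dtypes to confirm the LoRA layout.
print(f"{len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```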
Training was done using Diffusion Pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!