---
license: mit
task_categories:
  - video-text-to-text
  - question-answering
language:
  - en
size_categories:
  - 1K<n<10K
---

# LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment

## Summary

This is the dataset proposed in our paper "LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment". LiFT-HRA is a high-quality human preference annotation dataset for training video-text-to-text reward models. All videos in LiFT-HRA have a resolution of at least 512×512.

Project: https://codegoat24.github.io/LiFT/

Code: https://github.com/CodeGoat24/LiFT
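
As a quick sanity check on the resolution claim above, here is a minimal Python sketch that scans each video's dimensions with OpenCV. The directory path and the `.mp4` extension are assumptions for illustration, not part of the release:

```python
import cv2  # pip install opencv-python
from pathlib import Path

VIDEO_DIR = Path("./videos")  # assumed: wherever you extracted the archives

# Report any video whose shorter side falls below 512 pixels.
for path in sorted(VIDEO_DIR.glob("*.mp4")):  # container format assumed
    cap = cv2.VideoCapture(str(path))
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()
    if min(w, h) < 512:
        print(f"{path.name}: {w}x{h} is below 512x512")
```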

## Directory

```
DATA_PATH
├─ LiFT-HRA-data.json
└─ videos
    ├─ HRA_part0.zip
    ├─ HRA_part1.zip
    └─ HRA_part2.zip
```
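
The video archives are split into parts and must be unpacked before use. Below is a minimal sketch of extracting them and loading the annotation file with Python's standard library; the record schema is not documented here (the snippet assumes the JSON top level is a list), so the last line just prints one entry for inspection:

```python
import json
import zipfile
from pathlib import Path

DATA_PATH = Path("./dataset/LiFT-HRA")  # adjust to your download location

# Unpack the three video archives in place.
for part in sorted((DATA_PATH / "videos").glob("HRA_part*.zip")):
    with zipfile.ZipFile(part) as zf:
        zf.extractall(DATA_PATH / "videos")

# Load the human-preference annotations.
with open(DATA_PATH / "LiFT-HRA-data.json", encoding="utf-8") as f:
    annotations = json.load(f)

print(f"Loaded {len(annotations)} annotation records")
print(annotations[0])  # assumes a list of records; inspect one to see its fields
```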

## Usage

### Installation

1. Clone the GitHub repository and navigate to the LiFT folder:

   ```bash
   git clone https://github.com/CodeGoat24/LiFT.git
   cd LiFT
   ```

2. Install the required packages:

   ```bash
   bash ./environment_setup.sh lift
   ```

## Training

### Dataset

Please download the LiFT-HRA dataset and place it under the `./dataset` directory. The expected structure is:

```
dataset
└── LiFT-HRA
    ├── LiFT-HRA-data.json
    └── videos
```
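
Before launching the training scripts below, a small hypothetical pre-flight check (the paths come from the tree above; the scripts themselves are the source of truth) can catch a misplaced download:

```python
from pathlib import Path

# Hypothetical pre-flight check: confirm the layout shown above exists
# before running the training scripts.
root = Path("./dataset/LiFT-HRA")
assert (root / "LiFT-HRA-data.json").is_file(), "annotation file not found"
assert (root / "videos").is_dir(), "videos directory not found"
print("Dataset layout matches the expected structure.")
```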

### Training

#### LiFT-Critic-13b

```bash
bash LiFT_Critic/train/train_critic_13b.sh
```

#### LiFT-Critic-40b

```bash
bash LiFT_Critic/train/train_critic_40b.sh
```

## Model Weights

We provide LiFT-Critic model weights pre-trained on our LiFT-HRA dataset; please refer to here.

## Citation

If you find our dataset helpful, please cite our paper.

```bibtex
@article{LiFT,
  title={LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment},
  author={Wang, Yibin and Tan, Zhiyu and Wang, Junyan and Yang, Xiaomeng and Jin, Cheng and Li, Hao},
  journal={arXiv preprint arXiv:2412.04814},
  year={2024}
}
```