---
license: mit
task_categories:
- video-text-to-text
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---

# LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment

## Summary
This is the dataset proposed in our paper "LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment". LiFT-HRA is a high-quality human preference annotation dataset for training video-text-to-text reward models. All videos in LiFT-HRA have a resolution of at least 512×512.

Project: https://codegoat24.github.io/LiFT/

Code: https://github.com/CodeGoat24/LiFT

## Directory
```
DATA_PATH
└─ LiFT-HRA-data.json
└─ videos
   └─ HRA_part0.zip
   └─ HRA_part1.zip
   └─ HRA_part2.zip
```
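
To get started, the archives can be unpacked and the annotation file loaded as below. This is a minimal sketch: it assumes the zips extract into the `videos/` folder and that `LiFT-HRA-data.json` is a top-level JSON list; check the file itself for the exact record schema.

```python
import json
import zipfile
from pathlib import Path

root = Path("DATA_PATH")

# Unpack the video archives in place (assumed destination: videos/).
for part in sorted((root / "videos").glob("HRA_part*.zip")):
    with zipfile.ZipFile(part) as zf:
        zf.extractall(root / "videos")

# Load the human-preference annotations (assumed to be a JSON list).
with open(root / "LiFT-HRA-data.json") as f:
    annotations = json.load(f)

print(f"{len(annotations)} annotation records")
```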

## Usage
### Installation

1. Clone the GitHub repository and navigate to the LiFT folder:
```bash
git clone https://github.com/CodeGoat24/LiFT.git
cd LiFT
```
2. Install the required packages:
```bash
bash ./environment_setup.sh lift
```

### Training

**Dataset**

Please download the LiFT-HRA dataset and place it under the `./dataset` directory. The data structure should look like this:
```
dataset
├── LiFT-HRA
│   ├── LiFT-HRA-data.json
│   ├── videos
```
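
Before launching training, a quick check along these lines can confirm the files are where the scripts expect them (a minimal sketch based on the layout above):

```python
from pathlib import Path

# Expected layout from this README.
root = Path("dataset/LiFT-HRA")
assert (root / "LiFT-HRA-data.json").is_file(), "missing LiFT-HRA-data.json"
assert (root / "videos").is_dir(), "missing videos/ directory"
print("dataset layout OK")
```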

**Training**

LiFT-Critic-13b
```bash
bash LiFT_Critic/train/train_critic_13b.sh
```
LiFT-Critic-40b
```bash
bash LiFT_Critic/train/train_critic_40b.sh
```

## Model Weights
We provide pre-trained weights for LiFT-Critic, trained on our LiFT-HRA dataset. Please refer to [here](https://huggingface.co/collections/Fudan-FUXI/lift-6756e628d83c390221e02857).
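
To fetch a checkpoint locally, a standard Hub download should work; this is a minimal sketch, and `Fudan-FUXI/LiFT-Critic-13b` is a placeholder repo id, so substitute the exact id from the collection:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id; use the exact model repo listed in the
# collection linked above.
local_dir = snapshot_download(repo_id="Fudan-FUXI/LiFT-Critic-13b")
print(f"checkpoint downloaded to {local_dir}")
```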

## Citation
If you find our dataset helpful, please cite our paper.

```bibtex
@article{LiFT,
  title={LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment},
  author={Wang, Yibin and Tan, Zhiyu and Wang, Junyan and Yang, Xiaomeng and Jin, Cheng and Li, Hao},
  journal={arXiv preprint arXiv:2412.04814},
  year={2024}
}
```