Update README.md
**Time-Travel Rephotography**
<br/>
[Xuan Luo](https://roxanneluo.github.io),
[Xuaner Zhang](https://people.eecs.berkeley.edu/~cecilia77/),
[Paul Yoo](https://www.linkedin.com/in/paul-yoo-768a3715b),
[Ricardo Martin-Brualla](http://www.ricardomartinbrualla.com/),
[Jason Lawrence](http://jasonlawrence.info/), and
[Steven M. Seitz](https://homes.cs.washington.edu/~seitz/)
<br/>
In SIGGRAPH Asia 2021.
## Demo

We provide an easy-to-get-started demo using Google Colab!
The Colab notebook lets you try our method on the sample Abraham Lincoln photo or **your own photos** using cloud GPUs on Google Colab.

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/15D2WIF_vE2l48ddxEx45cM3RykZwQXM8?usp=sharing)

Or you can run our method on your own machine by following the instructions below.
## Prerequisites

- Pull third-party packages:
  ```
  git submodule update --init --recursive
  ```
- Install Python packages:
  ```
  conda create --name rephotography python=3.8.5
  conda activate rephotography
  conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
  pip install -r requirements.txt
  ```
## Quick Start

Run our method on the example photo of Abraham Lincoln.

- Download the models:
  ```
  ./scripts/download_checkpoints.sh
  ```
- Run:
  ```
  ./scripts/run.sh b "dataset/Abraham Lincoln_01.png" 0.75
  ```
- You can inspect the optimization process with
  ```
  tensorboard --logdir "log/Abraham Lincoln_01"
  ```
- You can find your results laid out as follows:
  ```
  results/
    Abraham Lincoln_01/ # intermediate outputs for histogram matching and face parsing
    Abraham Lincoln_01_b.png # the input after matching the histogram of the sibling image
    Abraham Lincoln_01-b-G0.75-init(10,18)-s256-vgg1-vggface0.3-eye0.1-color1.0e+10-cx0.1(relu3_4,relu2_2,relu1_2)-NR5.0e+04-lr0.1_0.01-c32-wp(250,750)-init.png # the sibling image
    Abraham Lincoln_01-b-G0.75-init(10,18)-s256-vgg1-vggface0.3-eye0.1-color1.0e+10-cx0.1(relu3_4,relu2_2,relu1_2)-NR5.0e+04-lr0.1_0.01-c32-wp(250,750)-init.pt # the sibling latent codes and initialized noise maps
    Abraham Lincoln_01-b-G0.75-init(10,18)-s256-vgg1-vggface0.3-eye0.1-color1.0e+10-cx0.1(relu3_4,relu2_2,relu1_2)-NR5.0e+04-lr0.1_0.01-c32-wp(250,750).png # the output result
    Abraham Lincoln_01-b-G0.75-init(10,18)-s256-vgg1-vggface0.3-eye0.1-color1.0e+10-cx0.1(relu3_4,relu2_2,relu1_2)-NR5.0e+04-lr0.1_0.01-c32-wp(250,750).pt # the final optimized latent codes and noise maps
    Abraham Lincoln_01-b-G0.75-init(10,18)-s256-vgg1-vggface0.3-eye0.1-color1.0e+10-cx0.1(relu3_4,relu2_2,relu1_2)-NR5.0e+04-lr0.1_0.01-c32-wp(250,750)-rand.png # the result with the final latent codes but random noise maps
  ```
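Because the long output filenames encode the optimization hyperparameters, a small helper can separate the final results from the sibling (`-init`), random-noise (`-rand`), and histogram-matched (`_b`) variants. This is a minimal sketch; the helper name and the suffix conventions it assumes are taken from the listing above, not from the repository's own tooling:

```python
from pathlib import Path

def final_results(results_dir="results"):
    """Collect only the final output images, skipping the sibling
    ("-init.png"), random-noise ("-rand.png"), and histogram-matched
    ("_b.png") variants. Illustrative helper, not part of the repo."""
    return sorted(p for p in Path(results_dir).rglob("*.png")
                  if not p.name.endswith(("-init.png", "-rand.png", "_b.png")))
```

For the example above, this would return just the `…wp(250,750).png` output image.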
## Run on Your Own Image

- Crop and align the head regions of your images:
  ```
  python -m tools.data.align_images <input_raw_image_dir> <aligned_image_dir>
  ```
- Run:
  ```
  ./scripts/run.sh <spectral_sensitivity> <input_image_path> <blur_radius>
  ```

The `spectral_sensitivity` can be `b` (blue-sensitive), `gb` (orthochromatic), or `g` (panchromatic). You can roughly estimate the `spectral_sensitivity` of your photo from its date: use the *blue-sensitive* model for photos taken before 1873, choose manually between the blue-sensitive and *orthochromatic* models for photos from 1873 to 1906, and choose among all three models for photos taken afterwards.
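The year-based rule of thumb above can be turned into a small helper that narrows down which models to try. A sketch only: the function name and interface are ours, and the thresholds simply restate the guidance above:

```python
def candidate_models(year):
    """Suggest spectral-sensitivity codes to try for a photo taken in
    the given year, following the rule of thumb above. Illustrative
    helper, not part of this repository."""
    if year < 1873:
        return ["b"]             # blue-sensitive only
    elif year <= 1906:
        return ["b", "gb"]       # blue-sensitive or orthochromatic
    else:
        return ["b", "gb", "g"]  # any of the three models

# Example: an 1890s photo could be blue-sensitive or orthochromatic.
print(candidate_models(1895))  # ['b', 'gb']
```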
The `blur_radius` is the estimated Gaussian blur radius in pixels after the input photo is resized to 1024×1024.
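If you have estimated a blur radius on the original-resolution photo, it needs to be rescaled to the 1024×1024 working resolution. A minimal sketch, assuming the image is resized by its longer side; the helper name is ours:

```python
def blur_radius_at_1024(radius_px, width, height):
    """Rescale a blur radius measured on the original image to the
    1024x1024 resolution the script expects. Assumes scaling by the
    longer side; illustrative helper, not part of this repository."""
    return radius_px * 1024.0 / max(width, height)

# A 3 px blur on a 2048x1536 scan becomes 1.5 px at 1024x1024.
print(blur_radius_at_1024(3.0, 2048, 1536))  # 1.5
```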
## Historical Wiki Face Dataset

| Path | Size | Description |
|------|------|-------------|
| [Historical Wiki Face Dataset.zip](https://drive.google.com/open?id=1mgC2U7quhKSz_lTL97M-0cPrIILTiUCE&authuser=xuanluo%40cs.washington.edu&usp=drive_fs) | 148 MB | Images |
| [spectral_sensitivity.json](https://drive.google.com/open?id=1n3Bqd8G0g-wNpshlgoZiOMXxLlOycAXr&authuser=xuanluo%40cs.washington.edu&usp=drive_fs) | 6 KB | Spectral sensitivity (`b`, `gb`, or `g`) |
| [blur_radius.json](https://drive.google.com/open?id=1n4vUsbQo2BcxtKVMGfD1wFHaINzEmAVP&authuser=xuanluo%40cs.washington.edu&usp=drive_fs) | 6 KB | Blur radius in pixels |

The JSON files are dictionaries that map input image names to the corresponding spectral sensitivity or blur radius.
Due to copyright constraints, `Historical Wiki Face Dataset.zip` contains all images in the *Historical Wiki Face Dataset* that were used in our user study except the photo of [Mao Zedong](https://en.wikipedia.org/wiki/File:Mao_Zedong_in_1959_%28cropped%29.jpg). You can download it separately and crop it as described [above](#run-on-your-own-image).
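For batch runs, the two JSON files can be combined to compose a `run.sh` invocation per image. A sketch under stated assumptions: the `dataset/` path and the `build_command` helper are illustrative, and the loop over the real JSON files is shown only in comments:

```python
import json   # used in the commented-out batch loop below
import shlex

def build_command(name, code, radius):
    """Compose the run.sh invocation for one dataset image.
    The dataset/ prefix is an assumed layout; adjust to where
    you unpacked the zip. Illustrative helper only."""
    return f"./scripts/run.sh {code} {shlex.quote(f'dataset/{name}')} {radius}"

# With the real files, iterate over every entry:
# sensitivity = json.load(open("spectral_sensitivity.json"))
# blur = json.load(open("blur_radius.json"))
# for name, code in sensitivity.items():
#     print(build_command(name, code, blur[name]))

print(build_command("Abraham Lincoln_01.png", "b", 0.75))
# ./scripts/run.sh b 'dataset/Abraham Lincoln_01.png' 0.75
```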
## Citation

If you find our code useful, please consider citing our paper:
```
@article{Luo-Rephotography-2021,
  author = {Luo, Xuan and Zhang, Xuaner and Yoo, Paul and Martin-Brualla, Ricardo and Lawrence, Jason and Seitz, Steven M.},
  title = {Time-Travel Rephotography},
  journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2021)},
  publisher = {ACM New York, NY, USA},
  volume = {40},
  number = {6},
  articleno = {213},
  doi = {https://doi.org/10.1145/3478513.3480485},
  year = {2021},
  month = {12}
}
```
## License

This work is licensed under the MIT License. See [LICENSE](LICENSE) for details.

Code for the StyleGAN2 model comes from [https://github.com/rosinality/stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch).
## Acknowledgments

We thank [Nick Brandreth](https://www.nickbrandreth.com/) for capturing the dry plate photos. We thank Bo Zhang, Qingnan Fan, Roy Or-El, Aleksander Holynski, and Keunhong Park for insightful advice.
---
title: Time-Travel Rephotography
emoji: 🦀
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 2.9.4
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference