---
task_categories:
- image-classification
pretty_name: imagenet5GB
size_categories:
- 1M<n<10M
---
# ImageNet-1k in 5GB

The full ImageNet-1k dataset, compressed to less than 5 GB.

Compression procedure (a code sketch of the pipeline follows this list):

* Resize the shorter edge to 288 pixels and crop the longer edge to a multiple of 32
* Analysis transform: [DC-AE f32 c32](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers)
* Quantization: 8-bit float (e4m3)
* Entropy coding: TIFF (CMYK) with deflate compression
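
Below is a minimal sketch of this encoding pipeline for a single image. It is illustrative rather than the exact script used to build the dataset: the `compress` helper is hypothetical, the encoder input range of `[-0.5, 0.5]` is an assumption chosen to mirror the decoding example further down, and the final TIFF packing step is only indicated in a comment.

```python
import torch
from PIL import Image
from diffusers import AutoencoderDC
from torchvision.transforms.v2 import PILToTensor
from torchvision.transforms.v2.functional import resize, center_crop

device = 'cuda'
encoder = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers",
    torch_dtype=torch.float32
).encoder.to(device)

def compress(img: Image.Image) -> torch.Tensor:
    img = resize(img, 288)                                 # shorter edge -> 288, aspect preserved
    w, h = img.size
    img = center_crop(img, (h - h % 32, w - w % 32))       # longer edge -> multiple of 32
    x = PILToTensor()(img).to(torch.float32) / 255 - 0.5   # assumed [-0.5, 0.5] input range
    with torch.no_grad():
        z = encoder(x.unsqueeze(0).to(device))             # (1, 32, H/32, W/32) latent
    # Quantize to 8-bit float (e4m3); the raw bytes are then packed into a
    # CMYK TIFF and deflate-compressed (handled by walloc when the dataset was built)
    return z.to(torch.float8_e4m3fn)
```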

# Example dataloader for training


```python
import torch
import datasets
from types import SimpleNamespace
from diffusers import AutoencoderDC
from torchvision.transforms.v2 import ToPILImage, PILToTensor, RandomCrop, CenterCrop
from walloc.walloc import pil_to_latent  # unpacks the TIFF-coded latents
from IPython.display import display
```


```python
device = 'cuda'
config = SimpleNamespace()
config.crop_size = 160          # training crop size (pixel units)
config.valid_crop_size = 288    # validation crop size (pixel units)

ds = datasets.load_dataset('danjacobellis/imagenet_288_dcae_fp8')
decoder = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers",
    torch_dtype=torch.float32
).decoder.to(device)

# Crops operate in latent space, so pixel sizes are divided by the f32 downsampling factor
rand_crop = RandomCrop((config.crop_size//32, config.crop_size//32))
cent_crop = CenterCrop((config.valid_crop_size//32, config.valid_crop_size//32))

def train_collate_fn(batch):
    B = len(batch)
    x = torch.zeros((B, 32, config.crop_size//32, config.crop_size//32), dtype=torch.float8_e4m3fn)
    y = torch.zeros(B, dtype=torch.long)
    for i_sample, sample in enumerate(batch):
        y[i_sample] = sample['cls']
        # Unpack the TIFF-coded latent; keep the first 32 of the 36 stored channels
        z = pil_to_latent([sample['latent']], N=36, n_bits=8, C=4)[:, :32]
        # Reinterpret the raw 8-bit patterns as e4m3 floats, then random-crop
        x[i_sample, :, :, :] = rand_crop(z.to(torch.int8).view(torch.float8_e4m3fn))
    return x, y

def valid_collate_fn(batch):
    B = len(batch)
    x = torch.zeros((B, 32, config.valid_crop_size//32, config.valid_crop_size//32), dtype=torch.float8_e4m3fn)
    y = torch.zeros(B, dtype=torch.long)
    for i_sample, sample in enumerate(batch):
        y[i_sample] = sample['cls']
        z = pil_to_latent([sample['latent']], N=36, n_bits=8, C=4)[:, :32]
        # Validation uses a deterministic center crop instead of a random crop
        x[i_sample, :, :, :] = cent_crop(z.to(torch.int8).view(torch.float8_e4m3fn))
    return x, y
```
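
One detail worth calling out: the latents are stored as raw bytes, so the collate functions cast to `torch.int8` and then bit-cast with `.view(torch.float8_e4m3fn)`, which reinterprets the 8-bit patterns as e4m3 floats without any numeric conversion. A quick illustrative check of that round trip (not from the repo, just a sanity demo):

```python
# Illustrative round trip: an e4m3 value stored as a raw byte is recovered
# exactly by bit reinterpretation, with no rounding.
v = torch.tensor([0.625], dtype=torch.float8_e4m3fn)
raw = v.view(torch.int8)                   # raw 8-bit pattern (what the TIFF stores)
restored = raw.view(torch.float8_e4m3fn)   # bitcast back; no numeric conversion
assert restored.to(torch.float32).item() == 0.625
```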


```python
%%time
# warmup batch: the first call includes one-time CUDA initialization overhead
x, y = valid_collate_fn(ds['train'].select(range(64)))
with torch.no_grad():
    xh = decoder(x.to(torch.float32).to(device))
```

    CPU times: user 1.68 s, sys: 124 ms, total: 1.8 s
    Wall time: 1.47 s



```python
%%time
# steady-state timing for the same batch
x, y = valid_collate_fn(ds['train'].select(range(64)))
with torch.no_grad():
    xh = decoder(x.to(torch.float32).to(device))
```

    CPU times: user 282 ms, sys: 2.51 ms, total: 285 ms
    Wall time: 29.2 ms
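
For an actual training run, these collate functions plug directly into a standard PyTorch `DataLoader`. A minimal sketch follows; the batch size and worker count are arbitrary choices, not values from this repo:

```python
from torch.utils.data import DataLoader

train_loader = DataLoader(
    ds['train'],
    batch_size=64,
    shuffle=True,
    num_workers=8,               # TIFF unpacking runs on CPU workers
    collate_fn=train_collate_fn,
)

for x, y in train_loader:
    with torch.no_grad():
        xh = decoder(x.to(torch.float32).to(device))  # decode latents to pixels
    break  # replace with the model's forward/backward pass
```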



```python
# Display the first two decoded images
for img in xh[:2]:
    display(ToPILImage()(img.clamp(-0.5, 0.5) + 0.5))
```


    
![png](README_files/README_6_0.png)

![png](README_files/README_6_1.png)



```python
!jupyter nbconvert --to markdown README.ipynb
```

    [NbConvertApp] Converting notebook README.ipynb to markdown
    [NbConvertApp] Support files will be in README_files/
    [NbConvertApp] Making directory README_files
    [NbConvertApp] Writing 2752 bytes to README.md