---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: ai_description
    dtype: string
  - name: palettes
    struct:
    - name: '1'
      dtype:
        array2_d:
          shape:
          - 1
          - 3
          dtype: uint8
    - name: '2'
      dtype:
        array2_d:
          shape:
          - 2
          - 3
          dtype: uint8
    - name: '3'
      dtype:
        array2_d:
          shape:
          - 3
          - 3
          dtype: uint8
    - name: '4'
      dtype:
        array2_d:
          shape:
          - 4
          - 3
          dtype: uint8
    - name: '5'
      dtype:
        array2_d:
          shape:
          - 5
          - 3
          dtype: uint8
    - name: '6'
      dtype:
        array2_d:
          shape:
          - 6
          - 3
          dtype: uint8
    - name: '7'
      dtype:
        array2_d:
          shape:
          - 7
          - 3
          dtype: uint8
    - name: '8'
      dtype:
        array2_d:
          shape:
          - 8
          - 3
          dtype: uint8
  splits:
  - name: train
    num_bytes: 28536733
    num_examples: 24998
  download_size: 4159745
  dataset_size: 28536733
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: other
license_name: unsplash-commercial
license_link: https://github.com/unsplash/datasets/blob/master/DOCS.md
task_categories:
- text-to-image
- image-to-text
language:
- en
tags:
- unsplash
- v1.2.1
pretty_name: Unsplash Lite w/ Palettes
size_categories:
- 10K<n<100K
source_datasets:
- 1aurent/unsplash-lite
---

# The Unsplash Lite Dataset (v1.2.1) with color palettes

![](https://unsplash.com/blog/content/images/2020/08/dataheader.jpg)

The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos.
It can be used for both commercial and non-commercial purposes, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).

The Unsplash Dataset is made available for research purposes.
[It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md).
To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).

This subset of the dataset contains only the URLs of the images, their AI-generated descriptions, and 8 color palettes per image, containing 1 to 8 colors each (extracted using [okolors](https://github.com/Ivordir/Okolors)).
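For example, each row exposes the photo URL, the AI-generated description, and a `palettes` struct keyed by palette size (`"1"` through `"8"`), where entry *n* holds an n × 3 array of RGB values. A minimal sketch of inspecting one row, following the feature schema declared in the card metadata:
```python
from datasets import load_dataset

ds = load_dataset("1aurent/unsplash-lite-palette", split="train")

row = ds[0]
print(row["url"])             # link to the photo on Unsplash
print(row["ai_description"])  # AI-generated caption

# "palettes" maps the palette size ("1" .. "8") to an n x 3
# array of uint8 RGB values, e.g. the 3-color palette:
print(row["palettes"]["3"])   # [[r, g, b], [r, g, b], [r, g, b]]
```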

To download the images from the URLs, you may do something like this:
```python
from datasets import load_dataset, DownloadManager, Image

ds = load_dataset("1aurent/unsplash-lite-palette")

def download_image(urls: list[str]) -> dict[str, list[str]]:
    # Download a batch of image URLs to the local cache and
    # return the resulting file paths as a new "image" column.
    dl_manager = DownloadManager()
    filenames = dl_manager.download(urls)
    return {"image": filenames}

ds = ds.map(
    function=download_image,
    input_columns=["url"],
    batched=True,
    num_proc=6,
)
# Decode the downloaded files as actual images.
ds = ds.cast_column(
    column="image",
    feature=Image(),
)
```
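
If you want to use the palettes directly, e.g. as CSS-style hex colors, a small helper like the one below works (the function name is just illustrative):
```python
def palette_to_hex(palette: list[list[int]]) -> list[str]:
    """Convert an n x 3 list of RGB values into hex color strings."""
    return ["#{:02x}{:02x}{:02x}".format(r, g, b) for r, g, b in palette]

# e.g. the 5-color palette of the first photo in the train split
print(palette_to_hex(ds["train"][0]["palettes"]["5"]))
```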

![](https://unsplash.com/blog/content/images/2020/08/footer-alt.jpg)