Update README.md

README.md (CHANGED)
@@ -5,22 +5,22 @@ dataset_info:
    dtype: image
  splits:
  - name: miku
    num_bytes: 558051034
    num_examples: 1000
  - name: MikumoGuynemer
    num_bytes: 101689673
    num_examples: 298
  - name: aiohto
    num_bytes: 191185
    num_examples: 4
  - name: mima
    num_bytes: 5008975
    num_examples: 5
  - name: esdeath
    num_bytes: 1603306
    num_examples: 13
  download_size: 665777858
  dataset_size: 666544173
configs:
- config_name: default
  data_files:

@@ -34,4 +34,44 @@ configs:
    path: data/mima-*
  - split: esdeath
    path: data/esdeath-*
task_categories:
- text-to-image
tags:
- art
pretty_name: anime
size_categories:
- 1K<n<10K
---

# anime characters datasets

This is an anime/manga/2D characters dataset, intended to be an encyclopedia of anime characters.

The dataset is open source and free to use without limitations or restrictions.

## how to use

```python
from datasets import load_dataset
from huggingface_hub.utils import _runtime

_runtime._is_google_colab = False  # workaround for problems with Colab

dataset = load_dataset("parsee-mizuhashi/miku")
```
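
For reference, a minimal sketch of reading images back out of the combined repo (the split names come from the metadata above, the repo id matches the one used in the merge example below, and the output filename is purely illustrative):

```python
from datasets import load_dataset

ds = load_dataset("lowres/anime")

print(ds)                     # one split per character: miku, MikumoGuynemer, aiohto, ...
img = ds["miku"][0]["image"]  # the image column decodes to a PIL.Image
img.save("miku_sample.png")   # illustrative output path
```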

## how to contribute

* to add your own dataset, simply join the organization, create a new dataset repo, and upload your images there. Otherwise, you can open a new discussion and we'll check it out.
* to merge your dataset with this repo, simply run the following code:

```python
from huggingface_hub import notebook_login

notebook_login()  # 👈 use a token with "write" access
# you can find your tokens under Settings > Access Tokens on the Hub
```
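
If you are running this as a plain script rather than in a notebook, a minimal alternative sketch is to log in programmatically before running the merge code below (the token string is a placeholder):

```python
from huggingface_hub import login

login(token="hf_...")  # 👈 hypothetical placeholder; paste your own "write" token
```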
67 |
+
```python
|
68 |
+
from datasets import load_dataset
|
69 |
+
from huggingface_hub.utils import _runtime
|
70 |
+
_runtime._is_google_colab = False # workaround for problems with colab
|
71 |
+
repo_id = "lowres/aiohto" # 👈 change this
|
72 |
+
ds = load_dataset("lowres/anime")
|
73 |
+
ds2 = load_dataset(repo_id)
|
74 |
+
character_name = repo_id.split("/")[1]
|
75 |
+
ds[character_name] = ds["train"]
|
76 |
+
ds.push_to_hub("lowres/anime")
|
77 |
+
```
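
After the push, one illustrative way to confirm the merge is to check that the new split is now listed on the Hub:

```python
from datasets import get_dataset_split_names

# the new character (e.g. "aiohto" for the repo_id above) should appear here
print(get_dataset_split_names("lowres/anime"))
```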
|