AlekseyKorshuk committed
Commit 5ba248c · 1 Parent(s): ca74b1a

huggingartists

Files changed (1): README.md (+147 −40)

README.md CHANGED
@@ -1,43 +1,76 @@
 ---
- language: en
- datasets:
- - huggingartists/ciggy-blacc
 tags:
 - huggingartists
 - lyrics
- - lm-head
- - causal-lm
- widget:
- - text: "I am"
 ---

- <div class="inline-flex flex-col" style="line-height: 1.5;">
 <div class="flex">
- <div
- style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/7ba8a81d32ea254df43b31447958e85f.500x500x1.png&#39;)">
 </div>
 </div>
- <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
 <div style="text-align: center; font-size: 16px; font-weight: 800">Ciggy Blacc</div>
 <a href="https://genius.com/artists/ciggy-blacc">
 <div style="text-align: center; font-size: 14px;">@ciggy-blacc</div>
 </a>
 </div>

- I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

- Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

- ## How does it work?

- To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

- ## Training data

- The model was trained on lyrics from Ciggy Blacc.

- The dataset is available [here](https://huggingface.co/datasets/huggingartists/ciggy-blacc) and can be used with:

 ```python
 from datasets import load_dataset
@@ -45,42 +78,116 @@ from datasets import load_dataset
 dataset = load_dataset("huggingartists/ciggy-blacc")
 ```

- [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/ei5jqzy8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

- ## Training procedure

- The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ciggy Blacc's lyrics.

- Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1xsvugxq) for full transparency and reproducibility.

- At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1xsvugxq/artifacts) is logged and versioned.

- ## How to use

- You can use this model directly with a pipeline for text generation:

- ```python
- from transformers import pipeline
- generator = pipeline('text-generation',
-                      model='huggingartists/ciggy-blacc')
- generator("I am", num_return_sequences=5)
- ```

- Or with the Transformers library:

 ```python
- from transformers import AutoTokenizer, AutoModelWithLMHead
-
- tokenizer = AutoTokenizer.from_pretrained("huggingartists/ciggy-blacc")
-
- model = AutoModelWithLMHead.from_pretrained("huggingartists/ciggy-blacc")
 ```

- ## Limitations and bias

- The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

- In addition, the data present in the artist's lyrics further affects the text generated by the model.

 ## About

 ---
+ languages:
+ - en
 tags:
 - huggingartists
 - lyrics
 ---
 
+ # Dataset Card for "huggingartists/ciggy-blacc"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [How to use](#how-to-use)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+ - [About](#about)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
+ - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of the generated dataset:** 4014.257119 MB
+
+ <div class="inline-flex flex-col" style="line-height: 1.5;">
 <div class="flex">
+ <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/7ba8a81d32ea254df43b31447958e85f.500x500x1.png&#39;)">
 </div>
 </div>
+ <a href="https://huggingface.co/huggingartists/ciggy-blacc">
+ <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
+ </a>
 <div style="text-align: center; font-size: 16px; font-weight: 800">Ciggy Blacc</div>
 <a href="https://genius.com/artists/ciggy-blacc">
 <div style="text-align: center; font-size: 14px;">@ciggy-blacc</div>
 </a>
 </div>

+ ### Dataset Summary
+
+ A dataset of lyrics parsed from Genius, designed for generating lyrics with HuggingArtists.
+ The model is available [here](https://huggingface.co/huggingartists/ciggy-blacc).

+ ### Supported Tasks and Leaderboards

+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

+ ### Languages

+ en

+ ## How to use

+ Load this dataset directly with the `datasets` library:

 ```python
 from datasets import load_dataset

 dataset = load_dataset("huggingartists/ciggy-blacc")
 ```

+ ## Dataset Structure
+
+ An example of 'train' looks as follows:
+ ```
+ This example was too long and was cropped:
+
+ {
+     "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
+ }
+ ```

+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ - `text`: a `string` feature.

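A minimal sketch of this schema, using made-up rows rather than the real lyrics: each record is a mapping with one string `text` field, which downstream code can iterate over directly.

```python
# Toy stand-in for the dataset rows (the real lyrics live on the Hugging Face
# Hub); each row is a dict with a single string `text` field.
rows = [
    {"text": "Look, I was gonna go easy on you\nNot to hurt your feelings"},
    {"text": "Something's wrong, I can feel it"},
]

# Simple corpus statistics over the `text` field.
n_lines = sum(len(r["text"].splitlines()) for r in rows)
print(len(rows), "rows,", n_lines, "lines")  # 2 rows, 3 lines
```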
+ ### Data Splits
+
+ | train | validation | test |
+ |------:|-----------:|-----:|
+ |    23 |          - |    - |
+
+ The 'train' split can be divided into 'train', 'validation', and 'test' subsets with a few lines of code:
 
107
  ```python
108
+ from datasets import load_dataset, Dataset, DatasetDict
109
+ import numpy as np
110
+
111
+ datasets = load_dataset("huggingartists/ciggy-blacc")
112
+
113
+ train_percentage = 0.9
114
+ validation_percentage = 0.07
115
+ test_percentage = 0.03
116
 
117
+ train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
118
+
119
+ datasets = DatasetDict(
120
+ {
121
+ 'train': Dataset.from_dict({'text': list(train)}),
122
+ 'validation': Dataset.from_dict({'text': list(validation)}),
123
+ 'test': Dataset.from_dict({'text': list(test)})
124
+ }
125
+ )
126
  ```
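To sanity-check the 90/7/3 proportions used above without downloading anything, the same `np.split` call can be run on a toy corpus of 100 items:

```python
import numpy as np

# Toy corpus of 100 items standing in for the real `text` column.
texts = [f"song {i}" for i in range(100)]

train_percentage = 0.9
validation_percentage = 0.07

# np.split cuts at the 90th and 97th indices, yielding 90/7/3 items.
train, validation, test = np.split(
    np.array(texts),
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

print(len(train), len(validation), len(test))  # 90 7 3
```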

+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Citation Information
+
+ ```
+ @InProceedings{huggingartists,
+     author={Aleksey Korshuk},
+     year=2022
+ }
+ ```

 ## About
193