dareenharthi committed
Commit d802f6d · verified · 1 Parent(s): 401481c

Upload folder using huggingface_hub
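The commit message points at `huggingface_hub`'s folder-upload API. A minimal sketch of the kind of call that produces a commit like this one; the local folder path and repo id below are placeholders, not taken from this page:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./discrete_speech_units",  # hypothetical local folder
    repo_id="dareenharthi/<dataset-repo>",  # actual repo id not shown here
    repo_type="dataset",
    commit_message="Upload folder using huggingface_hub",
)
```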
README.md CHANGED
@@ -1,41 +1,12 @@
-
-# Discrete Speech Units Dataset
-
-This dataset contains discrete representations of speech units with various augmentations.
-
-## Dataset Structure
-
-The dataset contains 128 examples with the following columns:
-- `id`: Unique identifier for each example
-- `tokens`: Discrete token sequences
-- `text`: Textual transcriptions
-- `augmentation`: Type of augmentation applied
-
-## Augmentations
-
-The following augmentations are included:
-specaugment, gaussian_noise, speed_perturbation, original
-
-## Metadata
-
-```json
-{
-  "specaugment": {
-    "vocab_size": 2000,
-    "augmentation": "specaugment"
-  },
-  "gaussian_noise": {
-    "vocab_size": 2000,
-    "augmentation": "gaussian_noise"
-  },
-  "speed_perturbation": {
-    "vocab_size": 2000,
-    "augmentation": "speed_perturbation"
-  },
-  "original": {
-    "vocab_size": 2000,
-    "augmentation": "original"
-  }
-}
-```
-
+# Discrete Speech Units Dataset
+This dataset contains discrete speech unit representations with various augmentations.
+
+## Dataset Structure
+- Train: 231 examples
+- Test: 25 examples
+
+Features: id, tokens, text, augmentation
+
+## Augmentations
+specaugment, gaussian_noise, combined_dataset, speed_perturbation, original
+
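The train/ and test/ layouts added below (Arrow shard + dataset_info.json + state.json) match what `datasets.Dataset.save_to_disk` produces, so a plausible way to load the splits is `load_from_disk` on a local snapshot. A sketch under that assumption; the repo id is a placeholder, and `load_from_disk` additionally expects a top-level dataset_dict.json that this diff does not show:

```python
from datasets import load_from_disk
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="dareenharthi/<dataset-repo>",  # hypothetical; real id not in the diff
    repo_type="dataset",
)
ds = load_from_disk(local_dir)        # DatasetDict with "train" and "test"
print(ds["train"][0]["tokens"][:10])  # first ten discrete units of one example
```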
dataset_card.yaml ADDED
@@ -0,0 +1,5 @@
+---
+language: [en]
+license: cc-by-4.0
+task_categories: [speech-processing]
+---
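The file is ordinary YAML front matter. A small sketch of parsing it locally with PyYAML; note that the Hub itself usually reads this metadata from the front matter of README.md rather than from a separate dataset_card.yaml:

```python
import yaml

with open("dataset_card.yaml") as f:
    meta = next(yaml.safe_load_all(f))  # first document between the --- fences

print(meta["license"])          # cc-by-4.0
print(meta["task_categories"])  # ['speech-processing']
```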
test/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a82e330b3224d5a1a48e21af848538bb074311776417ba929a7ba77210f01589
+size 106984
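This is a Git LFS pointer, not the Arrow data itself: it records only the blob's SHA-256 and byte size. A quick sketch of verifying a downloaded copy against the pointer; the local path is an assumption:

```python
import hashlib

EXPECTED_OID = "a82e330b3224d5a1a48e21af848538bb074311776417ba929a7ba77210f01589"
EXPECTED_SIZE = 106984

with open("test/data-00000-of-00001.arrow", "rb") as f:  # hypothetical local copy
    blob = f.read()

assert len(blob) == EXPECTED_SIZE
assert hashlib.sha256(blob).hexdigest() == EXPECTED_OID
```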
test/dataset_info.json ADDED
@@ -0,0 +1,27 @@
+{
+  "citation": "",
+  "description": "",
+  "features": {
+    "id": {
+      "dtype": "string",
+      "_type": "Value"
+    },
+    "tokens": {
+      "feature": {
+        "dtype": "int64",
+        "_type": "Value"
+      },
+      "_type": "Sequence"
+    },
+    "text": {
+      "dtype": "string",
+      "_type": "Value"
+    },
+    "augmentation": {
+      "dtype": "string",
+      "_type": "Value"
+    }
+  },
+  "homepage": "",
+  "license": ""
+}
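The `features` block is the serialized form of a `datasets.Features` schema: string `id`, `text`, and `augmentation` columns plus a variable-length sequence of int64 token ids. A sketch of building the same schema in code, which should round-trip to this JSON:

```python
from datasets import Features, Sequence, Value

features = Features({
    "id": Value("string"),
    "tokens": Sequence(Value("int64")),  # variable-length discrete unit ids
    "text": Value("string"),
    "augmentation": Value("string"),
})
```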
test/state.json ADDED
@@ -0,0 +1,18 @@
+{
+  "_data_files": [
+    {
+      "filename": "data-00000-of-00001.arrow"
+    }
+  ],
+  "_fingerprint": "f70649c659389c50",
+  "_format_columns": [
+    "augmentation",
+    "id",
+    "text",
+    "tokens"
+  ],
+  "_format_kwargs": {},
+  "_format_type": null,
+  "_output_all_columns": false,
+  "_split": null
+}
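state.json is what `load_from_disk` reads to reassemble a split: `_data_files` lists the Arrow shards and the `_format_*` keys record the saved output format. Assuming `datasets` wrote the shard in the Arrow IPC stream format (its usual on-disk format), the file can also be opened directly with pyarrow; the local path is again an assumption:

```python
import pyarrow as pa

with pa.memory_map("test/data-00000-of-00001.arrow") as source:  # path assumed
    table = pa.ipc.open_stream(source).read_all()

print(table.num_rows)      # expected 25, per the README diff
print(table.column_names)  # id, tokens, text, augmentation
```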
train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fdfb360a211bac6ffacac4963b0fb60d2e93c0481a9b459a1260020511d2d5e
+size 921768
train/dataset_info.json ADDED
@@ -0,0 +1,27 @@
+{
+  "citation": "",
+  "description": "",
+  "features": {
+    "id": {
+      "dtype": "string",
+      "_type": "Value"
+    },
+    "tokens": {
+      "feature": {
+        "dtype": "int64",
+        "_type": "Value"
+      },
+      "_type": "Sequence"
+    },
+    "text": {
+      "dtype": "string",
+      "_type": "Value"
+    },
+    "augmentation": {
+      "dtype": "string",
+      "_type": "Value"
+    }
+  },
+  "homepage": "",
+  "license": ""
+}
train/state.json ADDED
@@ -0,0 +1,18 @@
+{
+  "_data_files": [
+    {
+      "filename": "data-00000-of-00001.arrow"
+    }
+  ],
+  "_fingerprint": "30f9dc661a791160",
+  "_format_columns": [
+    "augmentation",
+    "id",
+    "text",
+    "tokens"
+  ],
+  "_format_kwargs": {},
+  "_format_type": null,
+  "_output_all_columns": false,
+  "_split": null
+}