autotrain-data-processor committed
Commit 4bf5b5e · Parent: 2206051

Processed data from AutoTrain data processor [2022-11-09 08:45]
README.md ADDED

---
task_categories:
- conditional-text-generation

---
# AutoTrain Dataset for project: led-samsum-dialogsum

## Dataset Description

This dataset has been automatically processed by AutoTrain for project led-samsum-dialogsum.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "feat_Unnamed: 0": 0,
    "feat_id": 0,
    "text": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you tomorrow :-)",
    "target": "Amanda baked cookies and will bring Jerry some tomorrow."
  },
  {
    "feat_Unnamed: 0": 1,
    "feat_id": 1,
    "text": "Olivia: Who are you voting for in this election? \nOliver: Liberals as always.\nOlivia: Me too!!\nOliver: Great",
    "target": "Olivia and Olivier are voting for liberals in this election. "
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "feat_Unnamed: 0": "Value(dtype='int64', id=None)",
  "feat_id": "Value(dtype='int64', id=None)",
  "text": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 27191       |
| valid      | 1318        |
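The `feat_Unnamed: 0` and `feat_id` columns are auxiliary index columns carried over from the source CSV; only `text` and `target` hold the dialogue and its summary. A minimal sketch in plain Python (using the first sample record from the card above) of dropping the `feat_` columns to get clean text/target pairs for a summarization model:

```python
# First sample record as shown in the dataset card above.
samples = [
    {
        "feat_Unnamed: 0": 0,
        "feat_id": 0,
        "text": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you tomorrow :-)",
        "target": "Amanda baked cookies and will bring Jerry some tomorrow.",
    },
]

def to_pair(record):
    """Keep only the model-relevant fields, dropping the auxiliary feat_* columns."""
    return {"text": record["text"], "target": record["target"]}

pairs = [to_pair(r) for r in samples]
print(pairs[0]["target"])
```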
processed/dataset_dict.json ADDED

```json
{"splits": ["train", "valid"]}
```
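When a saved `DatasetDict` is loaded back with `datasets.load_from_disk("processed")`, this file is what tells the library which split subdirectories to expect. A small sketch in plain Python (the `processed/` path is taken from this repo's layout) of resolving the split directories from the JSON:

```python
import json

# Contents of processed/dataset_dict.json as shown above.
dataset_dict = json.loads('{"splits": ["train", "valid"]}')

# Each split lives in its own subdirectory next to dataset_dict.json.
split_dirs = {name: f"processed/{name}" for name in dataset_dict["splits"]}
print(split_dirs)
```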
processed/train/dataset.arrow ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:9051944df51c1d8b5325ff305855bd8fadf2c42d4887dd536e5e8ac7e61f6097
size 20667544
```
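The `.arrow` files are tracked with Git LFS, so the repository itself only stores three-line pointer files like the one above (spec version, SHA-256 object id, byte size); the real payload is fetched from the LFS server on checkout. A minimal sketch of parsing such a pointer:

```python
# Pointer text copied from processed/train/dataset.arrow above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9051944df51c1d8b5325ff305855bd8fadf2c42d4887dd536e5e8ac7e61f6097
size 20667544
"""

def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

info = parse_lfs_pointer(pointer)
print(info["size"])
```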
processed/train/dataset_info.json ADDED

```json
{
  "builder_name": null,
  "citation": "",
  "config_name": null,
  "dataset_size": null,
  "description": "AutoTrain generated dataset",
  "download_checksums": null,
  "download_size": null,
  "features": {
    "feat_Unnamed: 0": {
      "dtype": "int64",
      "id": null,
      "_type": "Value"
    },
    "feat_id": {
      "dtype": "int64",
      "id": null,
      "_type": "Value"
    },
    "text": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "post_processed": null,
  "post_processing_size": null,
  "size_in_bytes": null,
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 20657323,
      "num_examples": 27191,
      "dataset_name": null
    }
  },
  "supervised_keys": null,
  "task_templates": null,
  "version": null
}
```
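`dataset_info.json` is where the `datasets` library records the schema and split statistics; the `num_examples` value here is the source of the 27191 figure in the README's split table. A quick sketch in plain Python (with only the `splits` fragment of the file inlined) of reading those statistics:

```python
import json

# The "splits" fragment of processed/train/dataset_info.json above.
info = json.loads("""
{"splits": {"train": {"name": "train",
                      "num_bytes": 20657323,
                      "num_examples": 27191,
                      "dataset_name": null}}}
""")

train_split = info["splits"]["train"]
print(train_split["num_examples"])
```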
processed/train/state.json ADDED

```json
{
  "_data_files": [
    {
      "filename": "dataset.arrow"
    }
  ],
  "_fingerprint": "a4393616ebece64e",
  "_format_columns": [
    "feat_Unnamed: 0",
    "feat_id",
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_indexes": {},
  "_output_all_columns": false,
  "_split": null
}
```
processed/valid/dataset.arrow ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:675cbd2bc4699af799988dcf6d03f0c0e0eacfd08b63fff2b85f1aa18b08f899
size 954576
```
processed/valid/dataset_info.json ADDED

```json
{
  "builder_name": null,
  "citation": "",
  "config_name": null,
  "dataset_size": null,
  "description": "AutoTrain generated dataset",
  "download_checksums": null,
  "download_size": null,
  "features": {
    "feat_Unnamed: 0": {
      "dtype": "int64",
      "id": null,
      "_type": "Value"
    },
    "feat_id": {
      "dtype": "int64",
      "id": null,
      "_type": "Value"
    },
    "text": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "post_processed": null,
  "post_processing_size": null,
  "size_in_bytes": null,
  "splits": {
    "valid": {
      "name": "valid",
      "num_bytes": 953267,
      "num_examples": 1318,
      "dataset_name": null
    }
  },
  "supervised_keys": null,
  "task_templates": null,
  "version": null
}
```
processed/valid/state.json ADDED

```json
{
  "_data_files": [
    {
      "filename": "dataset.arrow"
    }
  ],
  "_fingerprint": "6e6778de01806a47",
  "_format_columns": [
    "feat_Unnamed: 0",
    "feat_id",
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_indexes": {},
  "_output_all_columns": false,
  "_split": null
}
```