---
license: cc-by-nc-4.0
language:
- eng
- ind
- zlm
- tha
- vie
pretty_name: Sap Wat
task_categories:
- machine-translation
tags:
- machine-translation
---

The data set originates from the SAP Help Portal, which contains documentation for SAP products and user assistance for product-related questions. The data has been processed to make it suitable as development and test data for machine translation. The current language scope is English to Hindi, Indonesian, Japanese, Korean, Malay, Thai, Vietnamese, Simplified Chinese, and Traditional Chinese. About 4k segments are available for each language pair, split into development and test data. The segments are provided in their document context and are annotated with additional metadata from the document.

## Languages

eng, ind, zlm, tha, vie

## Supported Tasks

Machine Translation

## Dataset Usage

### Using the `datasets` library
```
from datasets import load_dataset

dset = load_dataset("SEACrowd/sap_wat", trust_remote_code=True)
```
### Using the `seacrowd` library
```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("sap_wat", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("sap_wat"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```

More details on how to load the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).
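
Each loaded record is a source–target pair. The field names below (`id`, `text_1`, `text_2`, `text_1_name`, `text_2_name`) follow the SEACrowd text-to-text (`t2t`) schema and should be treated as an assumption here; the sample record is hand-written for illustration, not taken from the real data:

```python
# Hypothetical record in the SEACrowd text-to-text (t2t) schema.
# Field names are assumed; the sentences are made up, not real SAP data.
sample = {
    "id": "0",
    "text_1": "Choose Save to keep your changes.",              # source segment
    "text_2": "Pilih Simpan untuk menyimpan perubahan Anda.",   # target segment
    "text_1_name": "eng",  # source language code
    "text_2_name": "ind",  # target language code
}

def to_pair(record):
    """Turn one t2t record into a (source, target) tuple, e.g. for MT evaluation."""
    return record["text_1"], record["text_2"]

src, tgt = to_pair(sample)
print(f"{sample['text_1_name']} -> {sample['text_2_name']}: {src} / {tgt}")
```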


## Dataset Homepage

[https://github.com/SAP/software-documentation-data-set-for-machine-translation](https://github.com/SAP/software-documentation-data-set-for-machine-translation)

## Dataset Version

Source: 1.0.0. SEACrowd: 2024.06.20.

## Dataset License

Creative Commons Attribution Non-Commercial 4.0 (cc-by-nc-4.0)

## Citation

If you are using the **Sap Wat** dataloader in your work, please cite the following:
```
@inproceedings{buschbeck-exel-2020-parallel,
    title = "A Parallel Evaluation Data Set of Software Documentation with Document Structure Annotation",
    author = "Buschbeck, Bianka and
      Exel, Miriam",
    editor = "Nakazawa, Toshiaki and
      Nakayama, Hideki and
      Ding, Chenchen and
      Dabre, Raj and
      Kunchukuttan, Anoop and
      Pa, Win Pa and
      Bojar, Ond{\v{r}}ej and
      Parida, Shantipriya and
      Goto, Isao and
      Mino, Hideya and
      Manabe, Hiroshi and
      Sudoh, Katsuhito and
      Kurohashi, Sadao and
      Bhattacharyya, Pushpak",
    booktitle = "Proceedings of the 7th Workshop on Asian Translation",
    month = dec,
    year = "2020",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wat-1.20",
    pages = "160--169",
    abstract = "This paper accompanies the software documentation data set for machine translation, a parallel
    evaluation data set of data originating from the SAP Help Portal, that we released to the machine translation
    community for research purposes. It offers the possibility to tune and evaluate machine translation systems
    in the domain of corporate software documentation and contributes to the availability of a wider range of
    evaluation scenarios. The data set comprises of the language pairs English to Hindi, Indonesian, Malay and
    Thai, and thus also increases the test coverage for the many low-resource language pairs. Unlike most evaluation
    data sets that consist of plain parallel text, the segments in this data set come with additional metadata that
    describes structural information of the document context. We provide insights into the origin and creation, the
    particularities and characteristics of the data set as well as machine translation results.",
}

@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv: 2406.10118}
}
```