---
dataset_info:
  features:
  - name: chars
    sequence: string
  - name: labels
    sequence:
      class_label:
        names:
          '0': D
          '1': I
          '2': P
          '3': S
  splits:
  - name: train
    num_bytes: 2298515
    num_examples: 12290
  - name: validation
    num_bytes: 37643
    num_examples: 221
  - name: test
    num_bytes: 44203
    num_examples: 257
  download_size: 341126
  dataset_size: 2380361
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
language:
- yue
license: cc-by-4.0
task_categories:
- token-classification
---
This dataset is a subset of the Hong Kong Cantonese Corpus (HKCanCor), re-segmented according to the multi-tiered word segmentation scheme described in the following paper:

Charles Lam, Chaak-ming Lau, and Jackson L. Lee. 2024. Multi-Tiered Cantonese Word Segmentation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11993–12002, Torino, Italy. ELRA and ICCL.
## Processing from original format

Chinese word segmentation is commonly framed as a sequence labelling task. To ease adoption, we have transformed the original dataset into two columns:

1. `chars`: the original sentence split into characters
2. `labels`: one of `D` (Dash), `I` (Intermediate), `P` (Pipe), or `S` (Space)

This labelling scheme expands the common BI (Beginning/Intermediate) scheme into four labels: `I` keeps the same meaning as in the BI scheme, while the `B` tag is further split into `D`, `P`, and `S`. A minimal loading sketch is shown below.
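The following sketch loads the dataset with the Hugging Face `datasets` library and converts the integer label ids back into the `D`/`I`/`P`/`S` tag names. The repository id is a placeholder; substitute the actual Hub path of this dataset.

```python
from datasets import load_dataset

# "user/dips-hkcancor" is a placeholder repo id -- replace it with this dataset's Hub path.
ds = load_dataset("user/dips-hkcancor")

example = ds["train"][0]
print(example["chars"])  # list of single characters

# `labels` stores integer class ids; map them back to the D/I/P/S tag names.
tag_names = ds["train"].features["labels"].feature.names  # ['D', 'I', 'P', 'S']
print([tag_names[i] for i in example["labels"]])
```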
Here's a sample sentence in the original segmented format and in the processed format:

Original: 即係 噉樣 嗰-啲 呀 ？

Processed:
* `chars`: 即 係 噉 樣 嗰 啲 呀 ？
* `labels`: S I S I S D S S

Note that each character's label is taken from the boundary to its left. The leftmost character in a sentence has no boundary character to its left; by convention, it is always labelled S (Space). The sketch below illustrates this conversion.
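This is a minimal sketch of the conversion, assuming the original format separates words with spaces and marks intra-word boundaries with `-` (dash) and `|` (pipe). The function name `to_dips` is illustrative and not part of the released code.

```python
def to_dips(sentence: str) -> tuple[list[str], list[str]]:
    """Split a multi-tiered segmented sentence into (chars, labels)."""
    chars, labels = [], []
    pending = "S"  # by convention, the leftmost character is labelled S
    for ch in sentence:
        if ch == " ":
            pending = "S"   # space boundary
        elif ch == "-":
            pending = "D"   # dash boundary
        elif ch == "|":
            pending = "P"   # pipe boundary
        else:
            chars.append(ch)
            labels.append(pending)
            pending = "I"   # no marker before the next character: word-internal
    return chars, labels

chars, labels = to_dips("即係 噉樣 嗰-啲 呀 ？")
print(labels)  # ['S', 'I', 'S', 'I', 'S', 'D', 'S', 'S']
```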
**Check out https://github.com/AlienKevin/dips for more details.**