High-quality Chinese text from Common Crawl, cleaned by the following steps (illustrative code sketches for each step appear after the list):
- Documents containing more than 2% non-Chinese, non-English characters are removed. Those containing more than 30% digits or capital letters are also removed.
- Documents whose language is identified as non-Chinese by fastText are removed.
- All text in Traditional Chinese is converted into Simplified Chinese.
- Low-quality documents (e.g. boilerplate, advertisements) are heuristically removed based on statistics such as average line length, proportion of special characters, etc.
- Exact deduplication is performed in buckets of around 100 GB of compressed text. We did not deduplicate globally due to memory constraints; based on small-scale cross-bucket deduplication, we estimate that about 0.03% of the documents are exact duplicates.
- Qwen2.5-32B-Instruct is used to generate language quality annotations (on a scale of 1-5) for 9.3M Chinese documents and 9.2M English documents, from which we sample 398K Chinese documents and 250K English documents to balance the label distribution. An XLM-RoBERTa-large classifier is trained with a regression objective on these annotations. Any document receiving a score lower than 4 is removed.
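
A minimal Python sketch of the first two filters, assuming the thresholds stated above. The character classes (treating all ASCII as "English" text and the CJK Unified Ideographs block as Chinese) and the fastText model file (`lid.176.bin`, the official language-ID model) are assumptions, not the authors' exact code:

```python
import fasttext

# Official fastText language-ID model; the exact model used is not stated.
lid = fasttext.load_model("lid.176.bin")

def passes_char_filters(text: str) -> bool:
    """Thresholds from the card: <=2% other-script chars, <=30% digits/capitals."""
    if not text:
        return False
    n = len(text)
    # Assumption: Chinese = CJK Unified Ideographs; English = any ASCII char.
    other = sum(1 for ch in text
                if not ("\u4e00" <= ch <= "\u9fff" or ch.isascii()))
    noisy = sum(1 for ch in text if ch.isdigit() or ch.isupper())
    return other / n <= 0.02 and noisy / n <= 0.30

def is_chinese_doc(text: str) -> bool:
    labels, _ = lid.predict(text.replace("\n", " "))  # predict() rejects newlines
    return labels[0] == "__label__zh"
```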
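Traditional-to-Simplified conversion is commonly done with OpenCC; the library choice and the `t2s` configuration here are assumptions (config naming can vary slightly between OpenCC bindings):

```python
from opencc import OpenCC

t2s = OpenCC("t2s")  # Traditional Chinese -> Simplified Chinese

def to_simplified(text: str) -> str:
    return t2s.convert(text)
```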
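The heuristic quality filter could look like the following; the specific cutoff values are placeholders, since the card does not state them:

```python
def passes_quality_heuristics(text: str,
                              min_avg_line_len: float = 10.0,
                              max_special_ratio: float = 0.25) -> bool:
    # Placeholder thresholds; the actual cutoffs are not given in the card.
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines:
        return False
    avg_line_len = sum(len(ln) for ln in lines) / len(lines)
    special = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
    return (avg_line_len >= min_avg_line_len
            and special / len(text) <= max_special_ratio)
```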
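Exact deduplication within a bucket reduces to keeping the first occurrence of each content hash. This sketch assumes documents are streamed one bucket (~100 GB compressed) at a time; the hash choice is illustrative:

```python
import hashlib
from typing import Iterable, Iterator

def dedup_bucket(docs: Iterable[str]) -> Iterator[str]:
    """Drop exact duplicates within one bucket; cross-bucket duplicates
    survive (estimated at ~0.03% of documents in the card)."""
    seen: set[bytes] = set()
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            yield doc
```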
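Finally, the quality scorer: the card specifies XLM-RoBERTa-large fine-tuned as a regressor on the 1-5 annotations, with documents scoring below 4 removed. The Transformers setup below is one plausible instantiation; the truncation length and inference details are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large",
    num_labels=1,               # single scalar quality score
    problem_type="regression",  # MSE loss when fine-tuning with Trainer
)
model.eval()

@torch.no_grad()
def quality_score(text: str) -> float:
    batch = tokenizer(text, truncation=True, max_length=512,
                      return_tensors="pt")
    return model(**batch).logits.squeeze().item()

def keep(text: str) -> bool:
    return quality_score(text) >= 4.0  # drop documents scoring below 4
```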
Details about Model Annotations
On 2K samples, we compared the annotation distributions (in percentage) of Qwen2.5-32B-Instruct and Qwen2.5-72B-Instruct:
| Score | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 32B | 0.7 | 17.1 | 45.7 | 35.8 | 0.8 |
| 72B | 0.3 | 4.7 | 22.9 | 58.1 | 14.1 |
The scores from the two models have a correlation coefficient of 0.75, and manual inspection suggests that both are satisfactory. We eventually chose the 32B model for both its efficiency and its more balanced label distribution.
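The reported correlation is the standard Pearson coefficient over paired per-document scores. Given scores from both annotator models on the same documents, it can be computed as follows (the score arrays here are hypothetical):

```python
import numpy as np

# Hypothetical paired scores from the two annotator models on the same docs.
scores_32b = np.array([3, 4, 3, 2, 4, 3])
scores_72b = np.array([4, 4, 3, 3, 5, 4])

r = np.corrcoef(scores_32b, scores_72b)[0, 1]  # Pearson correlation
print(f"correlation: {r:.2f}")
```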