---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 51355991596
    num_examples: 15930958
  download_size: 29126915011
  dataset_size: 51355991596
language:
- ja
- en
- code
size_categories:
- 10M<n<100M
license: apache-2.0
---
|
# JetCopper-10B |
|
|
|
## Description |
|
|
|
JetCopper-10B was created by cleaning, filtering, and deduplicating the following datasets and then extracting a portion of the resulting data (a minimal sketch of the deduplication step follows the list).
|
|
|
* The Japanese subset of [C4](https://huggingface.co/datasets/allenai/c4)
* The Japanese subset of [CC-100](https://data.statmt.org/cc-100)
* The Japanese subset of [OSCAR-2301](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
* The Japanese subset of [HPLT Datasets v1.2](https://hplt-project.org/datasets/v1.2)
* [wiki40b-ja](https://huggingface.co/datasets/range3/wiki40b-ja)
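
The exact cleaning pipeline is not published. As a rough illustration only, exact-match deduplication over the `text` field could look like the sketch below; hashing whole documents with MD5 is an assumption for the example, not the documented method.

```python
# Minimal sketch of exact-match deduplication with the Hugging Face
# `datasets` library. The real JetCopper-10B pipeline is not published;
# hashing whole documents with MD5 is an illustrative assumption.
import hashlib

from datasets import Dataset

# Toy corpus standing in for the merged source datasets.
ds = Dataset.from_dict({"text": ["こんにちは", "こんにちは", "hello world"]})

seen = set()

def is_first_occurrence(example):
    digest = hashlib.md5(example["text"].encode("utf-8")).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

deduped = ds.filter(is_first_occurrence)  # keeps 2 of the 3 rows above
print(len(ds), "->", len(deduped))
```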
|
|
|
This dataset was used to pre-train [Contrail-200m-64k](https://huggingface.co/sudy-super/Contrail-200m-64k) when we participated in [LOCAL AI HACKATHON #000](https://imminent-land-e64.notion.site/000-2024-04-01-8b9b0ce5c2454002ac8ecdc6311e3a49). |
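
For reference, the dataset can presumably be loaded as follows; the repository id `sudy-super/JetCopper-10B` is inferred from this card's title and the linked Contrail model, so adjust it if the dataset lives under a different namespace.

```python
# Minimal usage sketch; streaming avoids the ~29 GB download up front.
# The repo id below is an assumption inferred from this card, not a
# documented identifier.
from datasets import load_dataset

ds = load_dataset("sudy-super/JetCopper-10B", split="train", streaming=True)
for example in ds.take(3):
    print(example["text"][:100])
```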
|
|
|
## Number of tokens (counted with the [calm2-chat](https://huggingface.co/cyberagent/calm2-7b-chat) tokenizer)
|
|
|
| Language | Tokens |
| --- | --- |
| Japanese | 4.7B |
| English | 5.0B |
| Code | 0.9B |
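
The per-language split is not labeled in the data itself, so the table cannot be reproduced from this card alone; counting tokens with the same tokenizer over a sample could look like this sketch (repo id assumed as above):

```python
# Sketch of token counting with the calm2-chat tokenizer over a sample
# of the train split. Documents are not labeled by language, so this
# counts all languages together.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat")
ds = load_dataset("sudy-super/JetCopper-10B", split="train", streaming=True)

total = 0
for example in ds.take(1_000):  # sample; iterate the full split for exact totals
    total += len(tokenizer(example["text"])["input_ids"])
print(f"{total:,} tokens in the sample")
```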
|
|
|
|
|
## NOTE |
|
|
|
This dataset has not undergone sentence-boundary detection or perplexity filtering, so there is still room to improve its quality; a sketch of what perplexity filtering could look like is given below.
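
Perplexity filtering is commonly done with a KenLM model trained on a clean reference corpus (as in CC-Net). A minimal sketch, assuming a hypothetical pre-trained Japanese model at `ja.arpa.bin` and an illustrative threshold:

```python
# Minimal sketch of perplexity filtering with KenLM (pip install kenlm).
# `ja.arpa.bin` is a hypothetical model path and PPL_THRESHOLD is an
# illustrative value; KenLM scores space-separated tokens, so Japanese
# text would need to be segmented (e.g. with SentencePiece) first.
import kenlm

model = kenlm.Model("ja.arpa.bin")
PPL_THRESHOLD = 1_000.0

def keep(example):
    # Lower perplexity = closer to the clean reference corpus.
    return model.perplexity(example["text"]) < PPL_THRESHOLD

# e.g. filtered = ds.filter(keep)
```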