KoichiYasuoka committed · commit c5b2162 · parent: ec39844

base_model
README.md CHANGED
@@ -5,6 +5,7 @@ tags:
 - "japanese"
 - "masked-lm"
 - "wikipedia"
+base_model: tohoku-nlp/bert-base-japanese-char-v2
 license: "cc-by-sa-4.0"
 pipeline_tag: "fill-mask"
 mask_token: "[MASK]"
@@ -16,7 +17,7 @@ widget:
 
 ## Model Description
 
-This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-base-japanese-char-v2](https://huggingface.co/
+This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-base-japanese-char-v2](https://huggingface.co/tohoku-nlp/bert-base-japanese-char-v2). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-base-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-base-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-base-japanese-wikipedia-ud-head), and so on.
 
 ## How to Use
 
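The README metadata in the diff declares `pipeline_tag: "fill-mask"` and `mask_token: "[MASK]"`. A minimal usage sketch, assuming the standard `transformers` fill-mask pipeline API; the example sentence is illustrative and not taken from the model card:

```python
# Minimal sketch: load the model described in the README for fill-mask
# inference. The repo id and mask token follow the metadata in the diff
# (pipeline_tag: "fill-mask", mask_token: "[MASK]").
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="KoichiYasuoka/bert-base-japanese-char-extended",
)

# Predict the character hidden behind [MASK]; since this is a
# character-level model, each candidate is a single character.
for candidate in unmasker("日本の首都は東[MASK]です。")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```

For the downstream tasks named in the description (POS-tagging, dependency parsing), the linked fine-tuned checkpoints would be loaded the same way with their own pipeline tags.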