Bert-base-chinese
Table of Contents
- Model Details
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- How to Get Started With the Model
Model Details
Model Description: This model has been pre-trained on Chinese text; training and random input masking were applied independently to word pieces (as in the original BERT paper).
Developed by: HuggingFace team
Model Type: Fill-Mask
Language(s): Chinese
License: [More Information Needed]
Parent Model: See the BERT base uncased model for more information about the BERT base model.
Uses
Direct Use
This model can be used for masked language modeling.
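For quick experiments, masked language modeling can also be run through the `fill-mask` pipeline (a minimal sketch; the example sentence is illustrative):

```python
from transformers import pipeline

# Build a fill-mask pipeline backed by bert-base-chinese
fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# Print the top predictions for the masked character
for prediction in fill_mask("今天天气很[MASK]。"):
    print(prediction["token_str"], prediction["score"])
```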
Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Training
Training Procedure
- type_vocab_size: 2
- vocab_size: 21128
- num_hidden_layers: 12
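These values can be checked against the published configuration (a minimal sketch using `AutoConfig`; the field names follow the standard `BertConfig`):

```python
from transformers import AutoConfig

# Load the configuration shipped with bert-base-chinese
config = AutoConfig.from_pretrained("bert-base-chinese")

print(config.type_vocab_size)    # 2
print(config.vocab_size)         # 21128
print(config.num_hidden_layers)  # 12
```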
Training Data
[More Information Needed]
Evaluation
Results
[More Information Needed]
How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
```
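Continuing from the snippet above, the loaded tokenizer and model can be used to predict a masked token directly (a minimal sketch; the example sentence is illustrative):

```python
import torch

# Encode an illustrative sentence containing the tokenizer's mask token
inputs = tokenizer("巴黎是[MASK]国的首都。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring vocabulary entry
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```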