---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Graphcore/bert-large-uncased-squad
results: []
---
# Graphcore/bert-large-uncased-squad
This model is a fine-tuned version of [Graphcore/bert-large-uncased](https://huggingface.co/Graphcore/bert-large-uncased) on the SQuAD dataset.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabeled text. It enables easy and fast fine-tuning for a range of downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice, and masked language modeling.
It is pretrained with two objectives: masked language modeling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees words one after another, MLM lets the model learn a bidirectional representation of the input. NSP complements MLM by jointly pretraining text-pair representations.
Pretrained representations reduce the engineering effort needed to build task-specific architectures, and BERT achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
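
As a quick illustration of the MLM objective described above (not part of the original card), the sketch below runs the standard `transformers` fill-mask pipeline; the generic `bert-large-uncased` checkpoint is used as a stand-in because it ships with an MLM head:

```python
from transformers import pipeline

# Illustration of the masked-language-modeling objective described above.
# `bert-large-uncased` is a stand-in checkpoint that has an MLM head.
unmasker = pipeline("fill-mask", model="bert-large-uncased")

# The model predicts the token hidden behind [MASK] from both directions
# of context, rather than left-to-right as in a traditional language model.
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```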
## Intended uses & limitations
More information needed
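
In the absence of detail in the original card: since this is a BERT checkpoint fine-tuned on SQuAD for extractive question answering, a minimal inference sketch with the standard `transformers` question-answering pipeline would look like this (the question and context below are placeholders):

```python
from transformers import pipeline

# Minimal extractive-QA sketch; question and context are placeholders.
qa = pipeline("question-answering", model="Graphcore/bert-large-uncased-squad")

result = qa(
    question="What hardware was the model fine-tuned on?",
    context="The model was fine-tuned on 16 Graphcore Mk2 IPUs using the "
            "optimum-graphcore library.",
)
print(result["answer"], result["score"])
```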
## Training and evaluation data
[SQuAD dataset](https://huggingface.co/datasets/squad)
## Training procedure
The model was trained on 16 Graphcore Mk2 IPUs using the [optimum-graphcore](https://github.com/huggingface/optimum-graphcore) library.
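
The card does not give hyperparameters, so the following is only a sketch of the optimum-graphcore fine-tuning flow, assuming its `IPUConfig`/`IPUTrainingArguments`/`IPUTrainer` API; the batch size and epoch count are placeholders, and the preprocessing follows the standard transformers extractive-QA recipe:

```python
from datasets import load_dataset
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "Graphcore/bert-large-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# IPU execution configuration (pipelining, replication, etc.); assumed to be
# published alongside this checkpoint.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-uncased-squad")

squad = load_dataset("squad")

def preprocess(examples):
    # Standard extractive-QA feature preparation: tokenize question/context
    # pairs and map the character-level answer span to token positions.
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,
        padding="max_length",
        return_offsets_mapping=True,
    )
    start_positions, end_positions = [], []
    for i, offsets in enumerate(tokenized["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = tokenized.sequence_ids(i)
        start_tok = end_tok = 0  # fall back to 0 if the answer was truncated
        for idx, (s, e) in enumerate(offsets):
            if sequence_ids[idx] != 1:
                continue
            if s <= start_char < e:
                start_tok = idx
            if s < end_char <= e:
                end_tok = idx
        start_positions.append(start_tok)
        end_positions.append(end_tok)
    tokenized["start_positions"] = start_positions
    tokenized["end_positions"] = end_positions
    tokenized.pop("offset_mapping")
    return tokenized

train_dataset = squad["train"].map(
    preprocess, batched=True, remove_columns=squad["train"].column_names
)

args = IPUTrainingArguments(
    output_dir="bert-large-uncased-squad",
    per_device_train_batch_size=2,  # placeholder, not the original setting
    num_train_epochs=3,             # placeholder, not the original setting
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```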