---
language: en
license: mit
tags:
- deberta
- deberta-v3
datasets:
- squad_v2
pipeline_tag: question-answering
model-index:
- name: navteca/deberta-v3-base-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 83.8248
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjFkNmYwODcyYjY3MjJjMzAwNjQzZjI2NjliYmQ4MGZiMDI2OWZkMTdhYmFmN2UyMzE2NDk4YTBjNTdjYTE2ZCIsInZlcnNpb24iOjF9.LgIENpA4WbqDCo_noI-6Dc2UmpufMqCLYAb7rZpEj33vqp4kqOkUGNaHC1iOgfPmyyeedk0NylgUEVmkS51lBQ
    - type: f1
      value: 87.41
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E3NWYxMTc2NDUzOGM3ZWUyNDA0NDRhNGEyY2QyYmFmZmJlNGYwZmRhMjljZmE2OTIyNmFlMmQ1YWExNDQwNyIsInZlcnNpb24iOjF9.oRi3d751NQo6jQfSWB3xuw9e54-UhjeiNRyiIjE6WgeYd5T3-oRuphubLwnhv8xQPYQqSih8VOuEYj4Qbqj-AA
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 84.9678
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGZkZWUyZjJlZWMwOTZiMWU1NmNlN2RiNDI4MWY5YTI3Njc3Y2NjMmYzMDYxYjUwOWI3NTMyOGQ1YjM5MjNhYyIsInZlcnNpb24iOjF9.1Ti7oa5RXpETbOlpHtKpKZ2gz0spb4kzkBfOG1LQGbFMp5v3sRz4u_LhSXYiS2ksJ3sJNz7yIMK8Ci5xT05ODg
    - type: f1
      value: 92.2777
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWE0Mjc5OTE2NjExYzZiM2YyNjdjMjI5Nzk5MTkxZDcxNjMwMjU5MWNkOWNkOTRmMjk1OTczZGRiZGY2ZWRlYSIsInZlcnNpb24iOjF9.Gyhns0q1kBjiDgG7rE2X78lK4HATol9R2d53rWmdf6QamGb5qX2-d8tA48KTEP8WTCxvvvfOPV1es6qmMzN1BQ
---
# DeBERTa v3 base model for QA (SQuAD 2.0)
This is the [deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) model, fine-tuned on the [SQuAD 2.0](https://huggingface.co/datasets/squad_v2) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of extractive question answering.
## Training Data
The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset and can be used for extractive question answering.
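For reference, the same dataset can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, assuming the `datasets` package is installed:
```python
from datasets import load_dataset

# Load SQuAD 2.0 (train and validation splits)
squad_v2 = load_dataset('squad_v2')

# Unanswerable questions have an empty 'answers'['text'] list
print(squad_v2['train'][0])
print(squad_v2['validation'].num_rows)
```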
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
deberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/deberta-v3-base-squad2')
deberta_tokenizer = AutoTokenizer.from_pretrained('navteca/deberta-v3-base-squad2')
# Get predictions
nlp = pipeline('question-answering', model=deberta_model, tokenizer=deberta_tokenizer)
result = nlp({
    'question': 'How many people live in Berlin?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})

print(result)

#{
#  "answer": "3,520,031",
#  "end": 36,
#  "score": 0.96186668,
#  "start": 27
#}
```
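Because the model was trained on SQuAD 2.0, it can also indicate that a question cannot be answered from the given context. A minimal sketch using the question-answering pipeline's `handle_impossible_answer` option (the question below is only an illustrative example, not from the original card):
```python
# Question the context cannot answer; with handle_impossible_answer=True the
# pipeline may return an empty answer string instead of forcing a span.
result = nlp({
    'question': 'What is the capital of France?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
}, handle_impossible_answer=True)

print(result)
```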
## Author
[deepset](http://deepset.ai/)