---
language: sw
datasets:
- kenyacorpus_v2
license: cc-by-4.0
model-index:
- name: innocent-charles/Swahili-question-answer-latest-cased
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: kenyacorpus
      type: kenyacorpus
      config: kenyacorpus
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 79.9309
      verified: true
    - name: F1
      type: f1
      value: 82.9501
      verified: true
    - name: total
      type: total
      value: 11869
      verified: true
---
# SWAHILI QUESTION-ANSWER MODEL
This is the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model, fine-tuned on the [KenyaCorpus](https://github.com/Neurotech-HQ/Swahili-QA-dataset) dataset. It was trained on question-answer pairs, including unanswerable questions, for the task of question answering in the Swahili language.

Question answering (QA) is a discipline within information retrieval and natural language processing (NLP) concerned with building systems that, given a question posed in natural language, extract the relevant information from the provided data and present it as a natural-language answer.
## Overview
**Language model used:** bert-base-multilingual-cased
**Language:** Kiswahili
**Downstream-task:** Extractive Swahili QA
**Training data:** KenyaCorpus
**Eval data:** KenyaCorpus
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai)
**Infrastructure:** AWS NVIDIA A100 Tensor Core GPU
## Hyperparameters
```
batch_size = 16
n_epochs = 10
base_LM_model = "bert-base-multilingual-cased"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
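The training script itself is not included in this card. As an illustration only, the sketch below shows how these hyperparameters could map onto a standard `transformers` fine-tuning setup; names such as `train_dataset` are placeholders, not part of the original training code.

```python
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

# Mirror the hyperparameters listed above (warmup_proportion -> warmup_ratio).
args = TrainingArguments(
    output_dir="swahili-qa",
    per_device_train_batch_size=16,
    num_train_epochs=10,
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
)

# Long contexts are split into overlapping windows, mirroring
# max_seq_len = 386 and doc_stride = 128 above. Answer-span labels
# (start/end positions) still need to be derived from the offsets.
def preprocess(examples):
    return tokenizer(
        examples["question"],
        examples["context"],
        max_length=386,
        stride=128,
        truncation="only_second",
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset: tokenized KenyaCorpus (placeholder)
# trainer.train()
```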
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
# Import path for Haystack 1.x; older releases exposed these classes in other modules.
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="innocent-charles/Swahili-question-answer-latest-cased")
# or
reader = TransformersReader(model_name_or_path="innocent-charles/Swahili-question-answer-latest-cased", tokenizer="innocent-charles/Swahili-question-answer-latest-cased")
```
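To answer questions over many documents, the reader is typically paired with a retriever in an extractive QA pipeline. The sketch below uses the Haystack 1.x API (`InMemoryDocumentStore`, `BM25Retriever`, `ExtractiveQAPipeline`) with a single placeholder document; adapt it to your Haystack version and document store.

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Index a few Swahili documents (placeholder content).
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    {"content": "Asubuhi hiyo ilitupata pambajioni pa hospitali ya Uguzwa."},
])

retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="innocent-charles/Swahili-question-answer-latest-cased")

# Retriever narrows down candidate documents, reader extracts the answer span.
pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
prediction = pipeline.run(
    query="Asubuhi ilitupata pambajioni pa hospitali gani?",
    params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 1}},
)
print(prediction["answers"][0].answer)
```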
For a complete example of ``Swahili-question-answer-latest-cased`` being used for Swahili Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "innocent-charles/Swahili-question-answer-latest-cased"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Asubuhi ilitupata pambajioni pa hospitali gani?',
    'context': 'Asubuhi hiyo ilitupata pambajioni pa hospitali ya Uguzwa.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
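When loading the model and tokenizer directly (option b), predictions can also be produced by hand by decoding the highest-scoring start/end positions. A minimal sketch, assuming PyTorch is installed:

```python
import torch

# Tokenize question and context together, then pick the most likely answer span.
inputs = tokenizer(QA_input["question"], QA_input["context"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)  # the answer span extracted from the context
```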
## Performance
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
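These numbers follow the SQuAD v2 evaluation scheme (exact match and F1, reported separately for answerable and unanswerable questions). As a rough illustration of how such metrics are computed, here is a sketch using the Hugging Face `evaluate` library with placeholder predictions; the actual evaluation script for this model is not shown here.

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

# Placeholder prediction/reference pair; in practice these come from
# running the model over the KenyaCorpus validation split.
predictions = [{
    "id": "0",
    "prediction_text": "hospitali ya Uguzwa",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "0",
    "answers": {"text": ["hospitali ya Uguzwa"], "answer_start": [37]},
}]

results = squad_v2.compute(predictions=predictions, references=references)
print(results)  # includes "exact", "f1", "total", "HasAns_*", ...
```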
## Contributing
The project is ongoing, and the model will continue to be updated as it is trained on more data, so pull requests that help improve its performance are welcome.
## Author
**Innocent Charles:** [email protected]
## About Me
I build products with artificial intelligence, data, and analytics. I have over 3 years of experience as an applied AI engineer and data scientist, with a strong background in software engineering and a passion for data and business.

[LinkedIn](https://www.linkedin.com/in/innocent-charles/) | [GitHub](https://github.com/innocent-charles) | [Website](https://innocentcharles.com)