# BatteryBERT-cased for QA
- **Language model:** batterybert-cased
- **Language:** English
- **Downstream task:** Extractive QA
- **Training data:** SQuAD v1
- **Eval data:** SQuAD v1
- **Code:** see the usage example below
- **Infrastructure:** 8x DGX A100
## Hyperparameters

```
batch_size = 16
n_epochs = 4
base_LM_model = "batterybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride = 128
max_query_length = 64
```
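The `doc_stride` and `max_seq_len` settings above control how contexts longer than one model window are split into overlapping spans, so that no answer is lost at a window boundary. A minimal sketch of that windowing logic (an illustration in plain Python, not the card authors' code; token counts stand in for real tokenizer output):

```python
# Sketch: how doc_stride chunks a long context into overlapping windows.
# With max_seq_len = 386 and doc_stride = 128, each new window starts 128
# tokens after the previous one, so consecutive windows overlap.

def sliding_windows(n_tokens, window, stride):
    """Return (start, end) token spans covering n_tokens tokens."""
    spans = []
    start = 0
    while True:
        end = min(start + window, n_tokens)
        spans.append((start, end))
        if end == n_tokens:
            break
        start += stride
    return spans

# A 600-token context with a 386-token window and stride 128:
print(sliding_windows(600, 386, 128))  # [(0, 386), (128, 514), (256, 600)]
```

In practice the tokenizer handles this automatically when called with `truncation="only_second"`, `stride=128`, and `return_overflowing_tokens=True`; the sketch only shows the span arithmetic.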
## Performance

Evaluated on the SQuAD v1.0 dev set:

```
"exact": 81.54,
"f1": 89.16,
```

Evaluated on the battery device dataset:

```
"precision": 70.74,
"recall": 84.19,
```
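The `"f1"` figure above is the token-overlap F1 from the official SQuAD evaluator. A simplified sketch of that metric (my own illustration, not the evaluation script used here; the official version additionally strips punctuation and articles before comparing):

```python
# Sketch: token-level F1 between a predicted answer and a gold answer,
# in the style of the SQuAD evaluation metric (simplified).
from collections import Counter

def squad_f1(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # overlapping tokens
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# All 3 predicted tokens appear in the 6-token gold answer:
# precision = 1.0, recall = 0.5, F1 = 2/3.
print(squad_f1("LiPF6 in carbonates", "a solution of LiPF6 in carbonates"))
```

The reported score averages this per-question F1 (taking the maximum over the gold answers for each question) across the dev set.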
## Usage

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "batterydata/batterybert-cased-squad-v1"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'What is the electrolyte?',
    'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors

- Shu Huang: `sh2009 [at] cam.ac.uk`
- Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation

BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement