80% 1x4 Block Sparse BERT-Large (uncased) Fine-Tuned on SQuADv1.1

This model results from fine-tuning a Prune OFA 80% 1x4 block-sparse pre-trained BERT-Large with knowledge distillation. It achieves the following results on the SQuADv1.1 development set:
{"exact_match": 84.673, "f1": 91.174}

For further details, see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open-source implementation available here.
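As a minimal usage sketch (assuming the `transformers` library is installed), the model can be loaded as a standard question-answering pipeline; the question and context below are illustrative placeholders:

```python
from transformers import pipeline

model_id = "Intel/bert-large-uncased-squadv1.1-sparse-80-1x4-block-pruneofa"

# Load the sparse fine-tuned model as a question-answering pipeline.
# The block-sparse weights are stored as dense tensors with zeros, so no
# special runtime is needed for correctness (only for speedups).
qa = pipeline("question-answering", model=model_id)

# Hypothetical example input, not from the SQuADv1.1 dataset.
answer = qa(
    question="What task was the model fine-tuned on?",
    context="The sparse BERT-Large model was fine-tuned on SQuADv1.1.",
)
print(answer["answer"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, matching the standard `transformers` question-answering output format.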
