Fine-tuned LongT5 for Conversational QA (ONNX Format)
This model is an ONNX export of tryolabs/long-t5-tglobal-base-blogpost-cqa, a fine-tuned version of long-t5-tglobal-base for the task of Conversational QA. The model was fine-tuned on the SQuADv2 and CoQA datasets and on Tryolabs' own custom dataset, TryoCoQA.
The model was exported using 🤗 Optimum's exporters feature, which splits the original model into three components: the encoder, the decoder with the language modeling head, and the decoder that takes precomputed hidden states (past key/values) as additional inputs. Using 🤗 Optimum and ONNX Runtime, you can combine these components for faster inference.
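Below is a minimal sketch of how the exported components can be loaded and run together with 🤗 Optimum's ORTModelForSeq2SeqLM, which wires the encoder, decoder, and decoder-with-past ONNX files into a single generation-ready model. The prompt format here is only an illustrative assumption; the exact input formatting depends on how the model was fine-tuned (see the blog post).

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model_id = "tryolabs/long-t5-tglobal-base-blogpost-cqa"

# Load the tokenizer and the ONNX components (encoder, decoder with LM head,
# decoder with past key/values) behind a single seq2seq interface.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical conversational-QA style prompt, for illustration only.
prompt = (
    "question: Who fine-tuned the model? "
    "context: The model was fine-tuned by Tryolabs on SQuADv2, CoQA and TryoCoQA."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```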
You can find the details on how we fine-tuned the model and built TryoCoQA on our blog post!
You can also try out the model in the following Space.
Results
- Fine-tuning for x epochs on SQuADv2 and CoQA combined achieved a xx F1 score on the test set.
- Fine-tuning for x epochs on TryoCoQA achieved a xx F1 score on the test set.