
Fine-tuned LongT5 for Conversational QA (ONNX Format)

This model is an ONNX export of tryolabs/long-t5-tglobal-base-blogpost-cqa, a version of long-t5-tglobal-base fine-tuned for conversational question answering (CQA). The model was fine-tuned on the SQuADv2 and CoQA datasets as well as on Tryolabs' own custom dataset, TryoCoQA.

The model was exported using 🤗 Optimum's exporters feature, which splits the original model into three components: the encoder, the decoder with the language modeling head, and the decoder that takes precomputed hidden states as additional inputs. Using 🤗 Optimum and ONNX Runtime, you can combine these components for faster inference.
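As a minimal sketch of how these pieces fit together, the snippet below loads the exported components with 🤗 Optimum's ORTModelForSeq2SeqLM and runs generation through ONNX Runtime. The prompt layout (how the question and context are concatenated) is an illustrative assumption; the exact input format used during fine-tuning is described in the blog post.

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Load the ONNX-exported encoder/decoder components through Optimum;
# ORTModelForSeq2SeqLM wires them together on top of ONNX Runtime.
model_id = "tryolabs/long-t5-tglobal-base-blogpost-cqa-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical CQA prompt: the real formatting of history, question,
# and context follows the fine-tuning setup from the blog post.
question = "Who created TryoCoQA?"
context = "TryoCoQA is a custom conversational QA dataset built by Tryolabs."
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

During generation, ORTModelForSeq2SeqLM dispatches between the encoder graph and the two decoder graphs, using the decoder-with-past component to reuse cached key/value states between steps.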

You can find the details of how we fine-tuned the model and built TryoCoQA in our blog post!

You can also try the model out in our demo Space.

Results

  • Fine-tuning for 3 epochs on SQuADv2 and CoQA combined achieved a 74.29 F1 score on the test set.
  • Fine-tuning for 166 epochs on TryoCoQA achieved a 54.77 F1 score on the test set.