Model

t5-base-msmarco-squad-query-generation-firstp-v2

Task: query generation
Architecture: T5

Base model: t5-base

Note: this model is intended as a baseline.

Prompt:

"Generate Query: {document}. Query:"

Sequence length:

512 tokens
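
A minimal inference sketch with the transformers library is given below. The repository id matches this card; the example document and generation settings (sampling, max_new_tokens) are illustrative choices, not prescribed here.

```python
# Minimal query-generation sketch; generation settings are illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "jmvcoelho/t5-base-msmarco-squad-query-generation-firstp-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris."
prompt = f"Generate Query: {document}. Query:"

# Inputs are truncated to the 512-token sequence length used for training.
inputs = tokenizer(prompt, max_length=512, truncation=True, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```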

Training details

Hyperparameters

Batch size: 8
Gradient accumulation steps: 8
Learning rate: 3e-4 (linear scheduler, 400 warmup steps)
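
For reference, a sketch of how these settings map onto transformers Seq2SeqTrainingArguments is shown below; the output directory and epoch count are placeholders, not values from this card.

```python
# Hyperparameter sketch only; output_dir and num_train_epochs are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-query-generation",  # placeholder
    per_device_train_batch_size=8,          # batch size: 8
    gradient_accumulation_steps=8,          # gradient accumulation: 8
    learning_rate=3e-4,                     # LR: 3e-4
    lr_scheduler_type="linear",             # linear scheduler
    warmup_steps=400,                       # 400 warmup steps
    num_train_epochs=3,                     # placeholder
)
```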

Data

Total: 252059 pairs (document, query)

From MARCO-V2: 165238
From SQuAD: 86821

The remaining queries from the MARCO-V2 train split were not used.
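
The sketch below shows one way a (document, query) pair can be turned into a training example under the prompt template above; the 64-token cap on target queries is an assumption, not a value from this card.

```python
# Sketch of formatting one (document, query) pair for seq2seq training.
def format_example(document: str, query: str, tokenizer,
                   max_source_length: int = 512, max_target_length: int = 64):
    # Source side follows the card's prompt template.
    source = f"Generate Query: {document}. Query:"
    model_inputs = tokenizer(source, max_length=max_source_length, truncation=True)
    # Target side is the gold query; 64 tokens is an assumed cap.
    labels = tokenizer(text_target=query, max_length=max_target_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```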

Evaluation

This model is intended for data augmentation, so meaningful evaluation comes from performance on downstream tasks. Intrinsic scores on the generation task are reported below.

MARCO-V2 Dev1: BLEU 0.105, ROUGE 0.449
MARCO-V2 Dev2: BLEU 0.171, ROUGE 0.503
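
A scoring sketch with the evaluate library follows; the exact metric variants behind the numbers above (e.g. sacreBLEU, ROUGE-L) are not specified on this card, so the choices here are assumptions.

```python
# Sketch of scoring generated queries against gold queries.
# Metric variants (sacrebleu, rougeL) are assumptions, not stated on the card.
import evaluate

bleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

predictions = ["what is the eiffel tower made of"]               # generated queries
references = [["what material is the eiffel tower made from"]]   # gold queries

print(bleu.compute(predictions=predictions, references=references)["score"])
print(rouge.compute(predictions=predictions,
                    references=[r[0] for r in references])["rougeL"])
```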

