---
language:
- en
license: apache-2.0
tags:
- t5
- qa
- askscience
- lfqa
- information retrieval
datasets:
- vblagoje/lfqa
metrics:
- rouge
widget:
- text: why hasn't humanity expanded to live on other planets in our solar system?
example_title: solar system
- text: 'question: what is a probability distribution? context: I am just learning
about statistics.'
example_title: probability distribution
- text: 'question: What are the underlying physical processes by which exercise helps
us lose weight? context: I started working out two weeks ago and already feel
a lot better, and started to think about it and became deeply confused.'
  example_title: working out
- text: what is a neural network?
example_title: deep learning
- text: What is the process that computers use to understand human language in deep
learning models?
example_title: NLP
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 4
repetition_penalty: 3.51
length_penalty: 0.8
num_beams: 4
early_stopping: true
base_model: google/t5-v1_1-base
---
# checkpoints
- This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the `vblagoje/lfqa` dataset, trained for 2 epochs to allow a (_somewhat_) apples-to-apples comparison with [t5-base](https://huggingface.co/pszemraj/t5-base-askscience) fine-tuned on the standard eli5 dataset.
- This checkpoint seems to produce more coherent answers than the t5-base version trained on the original dataset.
- Compared to [bart on lfqa](https://huggingface.co/vblagoje/bart_lfqa), it seems able to answer some questions independently of retrieved context.
> NOTE: the inference API is limited to generating approximately 64 tokens for runtime reasons; for longer outputs, use the model in Python as a `transformers` pipeline object.
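
As a minimal sketch, the widget's generation settings above map onto a `text2text-generation` pipeline like so (the repo ID is a placeholder, since this card does not state it):

```python
from transformers import pipeline

# Placeholder: replace with this model's actual repo ID on the Hub.
qa_pipe = pipeline("text2text-generation", model="<this-model-repo-id>")

question = "why hasn't humanity expanded to live on other planets in our solar system?"
result = qa_pipe(
    question,
    max_length=256,  # raise beyond the API's ~64-token cap for fuller answers
    num_beams=4,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.51,
    length_penalty=0.8,
    early_stopping=True,
)
print(result[0]["generated_text"])
```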
## Intended uses & limitations
- Q&A, information retrieval
- it is probably better to use this model with a [retrieval pipeline](https://github.com/deepset-ai/haystack) than on its own; see the sketch below
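
The widget examples above suggest the model accepts a `question: <q> context: <retrieved text>` input format, so a retrieval step can simply prepend passages to the prompt. A hedged sketch, reusing `qa_pipe` from above (`retrieve_passages` is a hypothetical stand-in for a real retriever, e.g. from Haystack):

```python
def build_prompt(question: str, passages: list) -> str:
    # Input format inferred from the widget examples in this card:
    # "question: <question> context: <retrieved text>"
    return f"question: {question} context: {' '.join(passages)}"

# `retrieve_passages` is hypothetical -- swap in a real retriever
# returning the top-k passage strings for the question.
question = "what is a probability distribution?"
passages = retrieve_passages(question, top_k=3)
print(qa_pipe(build_prompt(question, passages), max_length=256)[0]["generated_text"])
```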
## Training and evaluation data
- See the linked dataset card. The dataset was filtered to include only the `askscience` subreddit, in an attempt to focus on academic/technical queries.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
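
For reference, a hedged reconstruction of these settings as `Seq2SeqTrainingArguments` (the output directory is a placeholder, the Adam betas/epsilon are the `transformers` defaults matching the list above, and the original training script is not included in this card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="checkpoints",        # placeholder
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16
    lr_scheduler_type="cosine",
    num_train_epochs=2,
    adam_beta1=0.9,                  # transformers defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```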
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0