mohamedemam committed
Commit 0ae1bb4
Parent(s): cb32ab4
Update README.md
README.md CHANGED
@@ -32,13 +32,14 @@ alt="drawing" width="600"/>
 7. [Environmental Impact](#environmental-impact)
 8. [Citation](#citation)
 9. [Model Card Authors](#model-card-authors)
+# My fine-tuned model
+> This model is fine-tuned to generate a question with answers from a given context. This can be very useful: it lets you build a dataset from a book, an article, or any other text, and then train another model on that dataset.
 
 # TL;DR
 
 If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages.
 As mentioned in the first few lines of the abstract:
 > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
-> This model is fine-tuned to generate a question with answers from a given context. This can be very useful: it lets you build a dataset from a book, an article, or any other text, and then train another model on that dataset.
 # Model Details
 
 ## Model Description
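
Below the diff, a minimal sketch of how the fine-tuned checkpoint described above might be called to produce a question from a context. The repo id and the `generate question:` prompt prefix are assumptions, since neither appears in this commit; swap in the actual values from the model card.

```python
# Minimal usage sketch, assuming a hypothetical repo id and prompt format;
# neither is specified in this diff, so substitute the real values.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mohamedemam/flan-t5-question-generator"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Any passage you want to turn into a question-and-answer training pair.
context = (
    "FLAN-T5 was fine-tuned on more than 1000 additional tasks "
    "covering more languages than the original T5."
)
inputs = tokenizer("generate question: " + context, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Running this over many contexts in a loop yields the kind of synthetic dataset the commit message describes, which can then be used to train another model.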