Evaluation results for andreaparker/flan-t5-base-samsum model as a base model for other tasks

#1
by eladven - opened

As part of a research effort to identify high-quality models on the Hugging Face Hub that can serve as base models for further fine-tuning, we evaluated this model by fine-tuning it on 36 datasets. The model ranks 2nd among all tested models for the google/t5-v1_1-base architecture as of 07/02/2023.
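For readers who want to reproduce the general idea, here is a minimal sketch of reusing this checkpoint as a base model for a further fine-tuning run with the Hugging Face `transformers` Trainer. This is not the model-recycling evaluation harness; the tiny inline dataset, field names, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch of fine-tuning this checkpoint on a new downstream task.
# NOTE: this is NOT the model-recycling evaluation harness; the inline toy
# dataset and all hyperparameters below are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "andreaparker/flan-t5-base-samsum"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Stand-in for a real downstream dataset (e.g. another summarization corpus).
raw = Dataset.from_dict({
    "document": ["Alice: Lunch at noon tomorrow? Bob: Sure, see you then."],
    "summary": ["Alice and Bob agree to have lunch at noon tomorrow."],
})

def preprocess(batch):
    # Tokenize source texts and target summaries for seq2seq training.
    model_inputs = tokenizer(batch["document"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum-downstream",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=3e-4,
    logging_steps=10,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```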

To share this information with others in your model card, please add the following evaluation results to your README.md page.

For more information, please see https://ibm.github.io/model-recycling/ or contact me.

Best regards,
Elad Venezian
[email protected]
IBM Research AI

Dear Elad, I appreciate you reaching out and sharing the results of the model-recycling project conducted by your team at IBM. It was really neat to learn that the Flan-T5 variant I trained for the somewhat toy task of generating summaries for my employer's podcast performed so well in your 'backtesting' of various NLP model families.

I'm eager to see how upcoming versions of my model perform in your evaluations, and I hope more organizations will undertake large-scale assessments of model families. These comprehensive analyses not only help identify exceptional multitask learners and foundational models, but also surface valuable datasets, such as the SAMSum corpus from Samsung Research and Salesforce's BookSum dataset. The latter, in particular, showcases the invaluable contributions made by Drago Radev and other NLP scholars with a classical background.

Thank you once again for your recognition and engagement in the ever-evolving field of natural language processing, understanding, and generation. If there are any follow-up studies the model-recycling team is undertaking, please do reach out.

andreaparker changed pull request status to closed
