---
library_name: peft
datasets:
- glue
---

# Model Card for roberta-large fine-tuned with PEFT on GLUE MRPC

This model is a PEFT (LoRA) adapter for roberta-large, fine-tuned on the MRPC task of the GLUE benchmark.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model is fine-tuned on the MRPC (Microsoft Research Paraphrase Corpus) task of the GLUE benchmark: given a pair of sentences, it predicts a binary label indicating whether the two sentences are semantically equivalent. A dataset example is shown below.

![Screenshot 2024-02-07 at 4.40.51 PM.png](https://cdn-uploads.huggingface.co/production/uploads/6461ad7196259bec21d4f206/w1ZJOYpkv6KvD0mfMkGDi.png)

Evaluated on the test set, the model achieves 86.6% accuracy and an F1 score of 90%.

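For MRPC, accuracy is the fraction of correct equivalence predictions, and F1 is the harmonic mean of precision and recall on the "equivalent" class. As a minimal, library-agnostic sketch (the helper names here are illustrative, not from any particular evaluation package):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def f1(preds, labels, positive=1):
    """F1 for the positive ("equivalent") class: harmonic mean of precision and recall."""
    tp = sum(p == positive and l == positive for p, l in zip(preds, labels))
    fp = sum(p == positive and l != positive for p, l in zip(preds, labels))
    fn = sum(p != positive and l == positive for p, l in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: 3 of 4 predictions correct, one false positive
preds, labels = [1, 1, 0, 1], [1, 0, 0, 1]
print(accuracy(preds, labels))  # 0.75
print(f1(preds, labels))        # 0.8
```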
Similar fine-tuning and evaluation can be carried out on the other GLUE tasks by loading the corresponding config files or by defining an appropriate LoRA config.

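As a sketch, a LoRA config for a GLUE-style sequence-classification task can be defined with the PEFT library as below; the hyperparameter values are illustrative assumptions, not necessarily those used for this checkpoint:

```python
from peft import LoraConfig, TaskType

# Illustrative LoRA hyperparameters (assumptions, not the checkpoint's actual values)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # GLUE tasks are sequence classification
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the LoRA update
    lora_dropout=0.1,
)
```

The config can then be applied to a base model with `get_peft_model(base_model, lora_config)` before training, so that only the LoRA parameters remain trainable; the same recipe adapts to other GLUE tasks by changing the dataset subset.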
- **Developed by:** PEFT library example
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** roberta-large