---
license: apache-2.0
---

# Model Card for Model ID
This model is fine-tuned from the valurank/distilroberta-bias model for research purposes.
## Model Details

### Model Description
The data used for fine-tuning is the MBIC dataset, which contains texts annotated with bias labels.
The model classifies input text as either Biased or Non_biased. The maximum sequence length for the tokenizer is set to 512 tokens; see "How to Get Started with the Model" below for a usage example.
- Developed by: [More Information Needed]
- Model type: DistilRoBERTa transformer
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: valurank/distilroberta-bias

### Model Sources
- Repository: To be uploaded
The following sections are under construction...
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
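Until the repository is uploaded, the snippet below is a minimal sketch of how the classifier can be called through the transformers pipeline API. The model id is a placeholder (the repository is not yet public), and the exact label strings returned depend on the model's config; Biased / Non_biased follows the description above.

```python
# Minimal usage sketch (not from the released repository).
# The model id is a placeholder; replace it with the actual repo id once released.
from transformers import pipeline

model_id = "<namespace>/<this-model>"  # placeholder repo id

classifier = pipeline(
    "text-classification",
    model=model_id,
    truncation=True,   # inputs longer than the tokenizer's 512-token max length are truncated
    max_length=512,
)

result = classifier("The senator's reckless scheme will obviously ruin the economy.")
print(result)
# Example output shape: [{'label': 'Biased', 'score': 0.97}]
# (label names depend on the model config, e.g. Biased / Non_biased)
```

Because the tokenizer's maximum length is 512, longer inputs are truncated rather than rejected; long documents should be split into shorter passages before classification.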
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- Training regime: [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]