---
language:
- en
tags:
- Text Classification
co2_eq_emissions: 0.319355 Kg
widget:
- text: "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property."
  example_title: "Biased example 1"
- text: "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion."
  example_title: "Biased example 2"
- text: "Strategic purchases of U.S. businesses and the placement of Chinese companies on American stock exchanges and indexes have also given the PRC enormous suasion over the avenues of American soft power."
  example_title: "Non-Biased example 1"
- text: "While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology."
  example_title: "Non-Biased example 2"
---

## About the Model

An English text-classification model, trained on the MBAD dataset to detect bias and fairness in sentences.

- Dataset: MBAD Data
- Carbon emissions: 0.319355 kg CO2eq

| Train Accuracy | Validation Accuracy | Train loss | Test loss |
|---------------:|--------------------:|-----------:|----------:|
|          76.97 |               62.00 |       0.45 |      0.96 |

## Usage

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
model = TFAutoModelForSequenceClassification.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)

# Pass device=0 (or another GPU index) to pipeline() if a GPU is available.
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)

classifier("While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology.")
```

## Author

This model is part of the research topic "Bias and Fairness in AI" conducted by Shaina Raza, Deepak John Reji, and Chen Ding. If you use this work (code, model or dataset), please cite it as:

> Bias & Fairness in AI, (2020), GitHub repository,
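As a follow-up to the usage snippet above, the sketch below shows one way to run the classifier over a batch of sentences and read the predicted label and score from the pipeline output. The two sentences are taken from the widget examples on this card; the exact label strings returned depend on this model's configuration, so none are assumed here.

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

# Same loading steps as in the Usage section above.
tokenizer = AutoTokenizer.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
model = TFAutoModelForSequenceClassification.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)

sentences = [
    "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property.",
    "While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology.",
]

# The pipeline accepts a list of strings and returns one
# {"label": ..., "score": ...} dict per input sentence.
for sentence, result in zip(sentences, classifier(sentences)):
    print(f"{result['label']} ({result['score']:.3f}): {sentence}")
```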