---
language:
  - en
tags:
  - Text Classification
co2_eq_emissions: 0.319355 Kg
widget:
  - text: >-
      Nevertheless, Trump and other Republicans have tarred the protests as
      havens for terrorists intent on destroying property.
    example_title: Biased example 1
  - text: >-
      Christians should make clear that the perpetuation of objectionable
      vaccines and the lack of alternatives is a kind of coercion.
    example_title: Biased example 2
  - text: There have been a protest by a group of people
    example_title: Non-Biased example 1
  - text: >-
      While emphasizing he’s not singling out either party, Cohen warned about
      the danger of normalizing white supremacist ideology.
    example_title: Non-Biased example 2
---

## About the Model

An English sequence-classification model, trained on the MBAD dataset to detect bias and fairness in sentences.

- **Dataset:** MBAD Data
- **Carbon emissions:** 0.319355 kg CO₂-eq

| Train Accuracy | Validation Accuracy | Train Loss | Test Loss |
| -------------- | ------------------- | ---------- | --------- |
| 76.97          | 62.00               | 0.45       | 0.96      |

## Usage

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
model = TFAutoModelForSequenceClassification.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)

# Pass device=0 (or another GPU index) to the pipeline to run on a GPU.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("The irony, of course, is that the exhibit that invites people to throw trash at vacuuming Ivanka Trump lookalike reflects every stereotype feminists claim to stand against, oversexualizing Ivanka’s body and ignoring her hard work.")
```
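The pipeline returns a list of dictionaries of the form `{"label": ..., "score": ...}`. As a minimal sketch of post-processing, the helper below thresholds the top prediction; the label names `"Biased"` and `"Non-biased"` are assumptions here (check `model.config.id2label` for the actual strings this checkpoint uses):

```python
def is_biased(predictions, threshold=0.5):
    """Return True if the top prediction is the assumed 'Biased' label
    with at least `threshold` confidence.

    `predictions` is the list of {"label", "score"} dicts returned by a
    Hugging Face text-classification pipeline for a single input.
    """
    top = predictions[0]
    return top["label"] == "Biased" and top["score"] >= threshold


# Example output shape (values are illustrative, not real model output):
sample = [{"label": "Biased", "score": 0.97}]
print(is_biased(sample))  # True
```

A confidence threshold like this is useful when you want to flag only high-confidence detections and route borderline sentences for human review.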

## Author

This model is part of the research topic "Bias and Fairness in AI" conducted by Shaina Raza, Deepak John Reji, and Chen Ding. If you use this work (code, model, or dataset), please cite it as:

> Bias & Fairness in AI, (2020), GitHub repository, https://github.com/dreji18/Fairness-in-AI/tree/dev