davanstrien committed
Commit eeef13a · 1 Parent(s): 972b33c

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -206,17 +206,17 @@ The training software is built on top of HuggingFace Transformers + Accelerate,
 
 Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
 As a derivative of such a language model, IDEFICS can produce texts that include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
-Moreover, IDEFICS can produce factually incorrect texts, and should not be relied on to produce factually accurate information.
+Moreover, IDEFICS can produce factually incorrect texts and should not be relied on to produce factually accurate information.
 
-Here are a few examples of outputs that could be categorized as factually incorrect, biased, or offensive:
-TODO: give 4/5 representative examples
+Here are a few examples of outputs that could be categorized as factually incorrect, biased, or offensive:
 
 When prompted with a misleading image, the model's generations offer factually incorrect information. For example, the prompt:
 
-```"Who is the 46th President of the United States of America?" + and image of Donald Trump```
+```"Who is the 46th President of the United States of America?" + an image of Donald Trump```
 
 Returns: `The 46th President of the United States of America is Donald Trump.`.
 
+The model will offer a response when prompted with a medical image, for example an X-ray, and asked for a diagnosis. This behaviour occurs both with disease-specific prompts, e.g. "Does this image show X disease?", and with requests for a generic diagnosis, e.g. "What disease does this image show?".
 
 
 ## Bias Evaluation
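
For context on how the interleaved text-and-image prompt in the misleading-image example above is constructed, the sketch below shows one way such a prompt could be sent to IDEFICS with the `transformers` API. It is a minimal illustration, not part of the commit: the checkpoint name assumes the publicly released `HuggingFaceM4/idefics-9b-instruct` weights, the image path is a hypothetical placeholder, and the generation settings are arbitrary.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, IdeficsForVisionText2Text

# Assumed checkpoint; the 80b variants expose the same API.
checkpoint = "HuggingFaceM4/idefics-9b-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to(device)

# IDEFICS prompts are lists that interleave strings with images (PIL images or URLs).
misleading_image = Image.open("photo_of_donald_trump.jpg")  # hypothetical local file
prompts = [
    [
        "User:",
        misleading_image,
        "Who is the 46th President of the United States of America?<end_of_utterance>",
        "\nAssistant:",
    ]
]

# The <end_of_utterance> token is added manually above, so the processor
# is told not to append another one.
inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```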