Commit cca9f44 by mapama247 (1 parent: 78bb285)

update bias section

Files changed (1): README.md (+15 -20)
README.md CHANGED
@@ -620,26 +620,21 @@ This instruction-tuned variant has been trained with a mixture of 276k English,
 
 We examine the presence of undesired societal and cognitive biases present in this model using different benchmarks. For societal biases,
 we test performance using the BBQ dataset (Parrish et al., 2022) in the original English and the Regard dataset (Sheng et al., 2019).
- We report that while performance is high (accuracies between 0.69 and 0.87 depending on the social category) in disambiguated settings
- the model performs very poorly in ambiguous settings, which is indicative of the presence of societal biases which need to be addressed in post-training phases.
-
- We additionally analyse model generations using the Regard dataset and classifier in Catalan, Spanish, and English using backtranslation and manual revision of the
- translations. We find no statistically significant difference in regard between majority and minority groups for any regard types,
- with the exception of negative regard in Catalan where model generations are actually slightly worse for social majorities.
- Our analyses on societal biases show that while these biases are capable of interfering with model performance as expressed in the results on the BBQ dataset,
- their tendency for representational harm is limited given the results of the Regard dataset. We highlight that our analyses of these biases are by no means exhaustive
- and are limited by the relative scarcity of adequate resources in all languages present in the training data. We aim to gradually extend and expand our analyses
- in future work.
-
- Our cognitive bias analysis focuses on positional effects in 0-shot settings, and majority class bias in few-shot settings.
- For positional effects, we leverage the ARC Multiple Choice Question dataset (Clark et al., 2018).
- We observe moderate to strong primacy effects, whereby the model shows a preference for answers towards the beginning of the list of provided answers.
- We measure effects of majority class effects in few-shot settings using SST-2 (Socher et al., 2013). We detect moderate effects,
- implying that outputs can be influenced by the prompts.
-
- We highlight that these results can be expected from a pretrained model that has not yet been instruction-tuned or aligned.
- These tests are performed in order to show the biases the model may contain.
- We urge developers to take them into account and perform safety testing and tuning tailored to their specific applications of the model.
+ We report that while performance is high (accuracies around 0.8 depending on the social category) in disambiguated settings,
+ the model performs very poorly in ambiguous settings, which indicates the presence of societal biases that need to be further addressed in post-training phases.
+
+ Our cognitive bias analysis focuses on positional effects in zero-shot settings and majority class bias in few-shot settings.
+ For positional effects, we leverage the ARC Multiple Choice Question dataset (Clark et al., 2018). We observe significant
+ but relatively weak primacy effects, whereby the model shows a preference for answers towards the beginning of the list of provided answers.
+ We measure majority class effects in few-shot settings using SST-2 (Socher et al., 2013). We again detect significant effects,
+ but with a small effect size, suggesting that the model is relatively robust against the examined cognitive biases.
+
+ We highlight that our analyses of these biases are by no means exhaustive and are limited by the relative scarcity of adequate resources
+ in all languages present in the training data. We aim to gradually extend and expand our analyses in future work.
+
+ These results are to be expected from a model that has undergone only preliminary instruction tuning.
+ These tests are performed to show the biases the model may contain. We urge developers to take
+ them into account and perform safety testing and tuning tailored to their specific applications of the model.
 
 ---
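As a rough illustration of the kind of BBQ check described in the updated section, the sketch below scores each answer option by the model's log-likelihood and reports accuracy separately for ambiguous and disambiguated contexts. It is not the evaluation harness behind the reported numbers: the model and dataset identifiers are placeholders, and the BBQ-style field names (`context_condition`, `ans0`–`ans2`, `label`) are assumptions that may need adapting to whichever copy of the dataset is used.

```python
# Hypothetical sketch, not the project's evaluation code.
from collections import defaultdict

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-model"        # placeholder: the model under test
DATA_ID = "your-org/bbq-style-dataset"  # placeholder: a BBQ-style multiple-choice dataset

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()


@torch.no_grad()
def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities of the option's tokens given the prompt.

    Assumes the prompt's tokenization is a prefix of the tokenization of
    prompt + option, which holds approximately for most tokenizers.
    """
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + " " + option, return_tensors="pt").input_ids.to(model.device)
    logits = model(full_ids).logits[0, :-1]           # logits predicting tokens 1..N-1
    targets = full_ids[0, 1:]
    logprobs = torch.log_softmax(logits.float(), dim=-1)
    start = prompt_len - 1                            # first position that predicts an option token
    return logprobs[start:].gather(1, targets[start:, None]).sum().item()


data = load_dataset(DATA_ID, split="test")            # assumed split name
hits, totals = defaultdict(int), defaultdict(int)

for ex in data:
    # Assumed BBQ-style fields: context, question, ans0/ans1/ans2, label, context_condition.
    prompt = f"{ex['context']}\n{ex['question']}\nAnswer:"
    options = [ex["ans0"], ex["ans1"], ex["ans2"]]
    pred = max(range(len(options)), key=lambda i: option_logprob(prompt, options[i]))
    cond = ex["context_condition"]                    # e.g. "ambig" vs. "disambig"
    totals[cond] += 1
    hits[cond] += int(pred == ex["label"])

for cond, n in totals.items():
    print(f"{cond}: accuracy = {hits[cond] / n:.3f} over {n} examples")
```

Splitting accuracy by context condition is what surfaces the gap the section describes: the same scoring loop is expected to do well when the context disambiguates the answer and considerably worse when it does not.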