remove newline
introduction.md CHANGED: +1 -1
@@ -268,7 +268,7 @@ finds difficult to count after three; this is a general limitation that is common
 
 There are even more evident issues that we found in our model. Due to the unfiltered nature of our training data, the model is exposed to many biases such as sexism, racism, stereotypes,
 slurs and gore that it might replicate without awareness of their hurtful and harmful nature. Indeed, different BERT models - Italian ones included - are prone to create stereotyped
-sentences that are hurtful ([Nozza et al., 2021](https://www.aclweb.org/anthology/2021.naacl-main.191.pdf)).
+sentences that are hurtful ([Nozza et al., 2021](https://www.aclweb.org/anthology/2021.naacl-main.191.pdf)).
 While this is not something we intended, it certainly is something we share the blame for, since we were not able to avoid it.
 
 Unfortunately, these kinds of issues are common to many machine learning algorithms (see [Abid et al., 2021](https://arxiv.org/abs/2101.05783) for an example of bias in GPT-3).