---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
language:
- en
tags:
- media
- mediabias
- media-bias
- media bias
size_categories:
- 1M<n<10M
dataset_info:
  config_name: plain_text
  splits:
    - name: cognitive_bias
    - name: fake_news
    - name: gender_bias
    - name: hate_speech
    - name: linguistic_bias
    - name: political_bias
    - name: racial_bias
    - name: text_level_bias
configs:
  - config_name: default
    data_files:
      - split: cognitive_bias
        path: mbib-aggregated/cognitive-bias.csv
      - split: fake_news
        path: mbib-aggregated/fake-news.csv
      - split: gender_bias
        path: mbib-aggregated/gender-bias.csv
      - split: hate_speech
        path: mbib-aggregated/hate-speech.csv
      - split: linguistic_bias
        path: mbib-aggregated/linguistic-bias.csv
      - split: political_bias
        path: mbib-aggregated/political-bias.csv
      - split: racial_bias
        path: mbib-aggregated/racial-bias.csv
      - split: text_level_bias
        path: mbib-aggregated/text-level-bias.csv
---

# Dataset Card for Media-Bias-Identification-Benchmark

## Table of Contents
- [Dataset Card for Media-Bias-Identification-Benchmark](#dataset-card-for-media-bias-identification-benchmark)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Baseline](#baseline)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
      - [cognitive-bias](#cognitive-bias)
    - [Data Fields](#data-fields)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- **Repository:** https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- **Paper:** https://doi.org/10.1145/3539618.3591882
- **Point of Contact:** [Martin Wessel](mailto:[email protected])




### Baseline


<table>
  <tr><td><b>Task</b></td><td><b>Model</b></td><td><b>Micro F1</b></td><td><b>Macro F1</b></td></tr>
  <tr><td>cognitive-bias</td><td>ConvBERT/ConvBERT</td><td>0.7126</td><td>0.7664</td></tr>
  <tr><td>fake-news</td><td>Bart/RoBERTa-T</td><td>0.6811</td><td>0.7533</td></tr>
  <tr><td>gender-bias</td><td>RoBERTa-T/ELECTRA</td><td>0.8334</td><td>0.8211</td></tr>
  <tr><td>hate-speech</td><td>RoBERTa-T/Bart</td><td>0.8897</td><td>0.7310</td></tr>
  <tr><td>linguistic-bias</td><td>ConvBERT/Bart</td><td>0.7044</td><td>0.4995</td></tr>
  <tr><td>political-bias</td><td>ConvBERT/ConvBERT</td><td>0.7041</td><td>0.7110</td></tr>
  <tr><td>racial-bias</td><td>ConvBERT/ELECTRA</td><td>0.8772</td><td>0.6170</td></tr>
  <tr><td>text-level-bias</td><td>ConvBERT/ConvBERT</td><td>0.7697</td><td>0.7532</td></tr>
</table>
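For reference, the micro and macro F1 scores reported above can be computed with scikit-learn. This is a minimal sketch with toy labels, not the benchmark's own evaluation code:

```python
from sklearn.metrics import f1_score

# Toy predictions, for illustration only.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

# Micro F1 aggregates TP/FP/FN counts over all instances;
# macro F1 averages the per-class F1 scores without weighting.
print(f1_score(y_true, y_pred, average="micro"))
print(f1_score(y_true, y_pred, average="macro"))
```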




### Languages

All datasets are in English.

## Dataset Structure

### Data Instances

#### cognitive-bias

An example of a training instance looks as follows.
```json
{
  "text": "A defense bill includes language that would require military hospitals to provide abortions on demand",
  "label": 1
}
```
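A minimal sketch of loading one task split with the 🤗 `datasets` library; the dataset path below is a placeholder for this repository's actual Hub ID, and the split names follow the metadata above:

```python
from datasets import load_dataset

# "org/mbib" is a placeholder -- replace it with this dataset's actual Hub path.
ds = load_dataset("org/mbib", split="cognitive_bias")

print(ds[0])  # e.g. {"text": "...", "label": 1}
```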




### Data Fields

- `text`: a sentence drawn from one of the source corpora (e.g., news articles, Twitter, or other social media).
- `label`: a binary bias indicator (0 = unbiased, 1 = biased).
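Because `label` is binary, a quick class-balance check is often useful before training. A small sketch, reusing the hypothetical `ds` split loaded above:

```python
from collections import Counter

# Count unbiased (0) vs. biased (1) instances in the split.
print(Counter(ds["label"]))
```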




## Considerations for Using the Data

### Social Impact of Dataset
We believe that MBIB offers a new common ground for research in the domain, especially given the rising amount of research attention directed toward media bias.





### Citation Information

```
@inproceedings{wessel2023mbib,
    title     = {Introducing MBIB - the first Media Bias Identification Benchmark Task and Dataset Collection},
    author    = {Wessel, Martin and Spinde, Timo and Horych, Tomáš and Ruas, Terry and Aizawa, Akiko and Gipp, Bela},
    booktitle = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23)},
    year      = {2023},
    doi       = {10.1145/3539618.3591882}
}
```