[DEV] Add scripts for hallucination detection
- README.md +121 -231
- scripts/README.md +85 -0
- scripts/flagAnomalousStrings.py +117 -0
- scripts/flagHallucinations.py +169 -0
- scripts/flagSuspiciousSingleWord.py +192 -0
README.md
CHANGED
@@ -1,231 +1,121 @@
Removed: the old YAML front matter (old lines 1-141). Only its tail is visible in this view; it defined per-language configs pointing at the VoxPopuli data files:

---
[...]
  data_files:
  - split: train_voxpopuli
    path: pt/voxpopuli*
- config_name: ro
  data_files:
  - split: train_voxpopuli
    path: ro/voxpopuli*
- config_name: sk
  data_files:
  - split: train_voxpopuli
    path: sk/voxpopuli*
- config_name: sl
  data_files:
  - split: train_voxpopuli
    path: sl/voxpopuli*
- config_name: sv
  data_files:
  - split: train_voxpopuli
    path: sv/voxpopuli*
---
The rest of the removed README body is carried over unchanged into the new version below.
Added: the new front matter and the updated dataset card:

---
task_categories:
- automatic-speech-recognition
language:
- en
- bg
- hr
- cs
- da
- nl
- et
- fi
- fr
- de
- el
- hu
- ga
- it
- lv
- lt
- mt
- pl
- pt
- ro
- sk
- sl
- es
- sv
pretty_name: MOSEL
license: cc-by-4.0
---

<img src="./mosel-logo-transparent.png" align="center" width="100%">

### Dataset Description, Collection, and Source

The MOSEL corpus is a multilingual dataset collection including up to 950K hours of open-source speech recordings covering the 24 official languages of the European Union. We collect data by surveying labeled and unlabeled speech corpora released under open-source-compliant licenses.
In particular, MOSEL includes the automatic transcripts of 441k hours of unlabeled speech from VoxPopuli and LibriLight, transcribed with [Whisper large v3](https://huggingface.co/openai/whisper-large-v3).
Whisper is released under the open-source Apache 2.0 License, which allows releasing the generated content under any license. Since LibriLight, unlike VoxPopuli, contains segments longer than Whisper's maximum input duration of 30 seconds, we split them into chunks of up to 30 seconds.
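
As a rough illustration of this chunking step (a minimal sketch only, not the actual MOSEL pipeline, which lives in the fbk-llm repository; the sample rate and the synthetic waveform below are assumptions):

```python
import numpy as np

def split_into_chunks(waveform: np.ndarray, sample_rate: int, max_sec: float = 30.0):
    """Yield consecutive segments of at most `max_sec` seconds."""
    max_samples = int(max_sec * sample_rate)
    for start in range(0, len(waveform), max_samples):
        yield waveform[start:start + max_samples]

# e.g., a 75-second mono recording at 16 kHz becomes chunks of 30 s, 30 s, and 15 s
audio = np.zeros(75 * 16000, dtype=np.float32)  # placeholder waveform
print([len(c) / 16000 for c in split_into_chunks(audio, 16000)])  # [30.0, 30.0, 15.0]
```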

- **Curated by:** Marco Gaido, Sara Papi, Luisa Bentivogli, Alessio Brutti, Mauro Cettolo, Roberto Gretter, Marco Matassoni, Mohamed Nabih, and Matteo Negri
- **Funded by:** FAIR, Meetween, and CINECA
- **Shared by:** Fondazione Bruno Kessler

### License
- CC-BY-4.0

### Dataset Sources

- **Collection Repository:** [MOSEL](https://github.com/hlt-mt/mosel)
- **Paper:** [MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages](http://arxiv.org/abs/2410.01036)

## Dataset Structure

### Data Config
The dataset is organized into one folder per language, named with the corresponding [2-letter ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes). Within each folder, a split is provided for each pseudo-labeled dataset.
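
As an illustration, reading a single language and split with the `datasets` library could look like the following (a sketch only: the repository id `FBK-MT/mosel`, the config name, and the split name are assumptions based on the layout described above and may differ from the actual repository):

```python
from datasets import load_dataset

# assumed repository id, per-language config, and per-source split
ds = load_dataset("FBK-MT/mosel", "it", split="train_voxpopuli")
print(ds[0]["id"], ds[0]["text"])
```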

### Data Fields

`id`: alphanumeric identifier of the segment

`language`: extended language name (e.g., "english")

`text`: the content of the pseudo label

`hall_repeated_ngrams`: True/False - indicates that an *n*-gram is repeated in `text` at least a minimum number of times; for *n* from 1 to 2 the threshold is 4, for *n* from 3 to 5 it is 3

`hall_long_word`: True/False - indicates the presence of a word of at least 40 characters in `text`

`hall_frequent_single_word`: True/False - indicates that `text` consists of a single word, which is the most frequent single-word transcript across the whole corpus
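
A common use of these flags is to drop the pseudo-labels marked as hallucinated before training. A minimal sketch with pandas (the file name is hypothetical; it assumes a split exported as a TSV file with the flag columns stored as the strings "True"/"False"):

```python
import pandas as pd

df = pd.read_csv("en/voxpopuli.tsv", sep="\t", dtype=str, quoting=3)  # hypothetical file name

flag_cols = ["hall_repeated_ngrams", "hall_long_word", "hall_frequent_single_word"]
clean = df[~df[flag_cols].eq("True").any(axis=1)]  # keep rows with no flag set
print(f"kept {len(clean)} of {len(df)} segments")
```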

## Dataset Statistics (in hours)

| Language (LangID) | Labeled | Unlabeled | Total |
|-------------------|---------|-----------|-------|
| Bulgarian (bg) | 111 | 17609 | 17720 |
| Croatian (hr) | 55 | 8106 | 8161 |
| Czech (cs) | 591 | 18705 | 19296 |
| Danish (da) | 20 | 13600 | 13620 |
| Dutch (nl) | 3395 | 19014 | 22409 |
| English (en) | 437239 | 84704 | 521943 |
| Estonian (et) | 60 | 10604 | 10664 |
| Finnish (fi) | 64 | 14200 | 14264 |
| French (fr) | 26984 | 22896 | 49880 |
| German (de) | 9236 | 23228 | 32464 |
| Greek (el) | 35 | 17703 | 17738 |
| Hungarian (hu) | 189 | 17701 | 17890 |
| Irish (ga) | 17 | 0 | 17 |
| Italian (it) | 3756 | 21933 | 25689 |
| Latvian (lv) | 173 | 13100 | 13273 |
| Lithuanian (lt) | 36 | 14400 | 14436 |
| Maltese (mt) | 19 | 9100 | 9119 |
| Polish (pl) | 510 | 21207 | 21717 |
| Portuguese (pt) | 5492 | 17526 | 23018 |
| Romanian (ro) | 121 | 17906 | 18021 |
| Slovak (sk) | 61 | 12100 | 12161 |
| Slovenian (sl) | 32 | 11300 | 11332 |
| Spanish (es) | 17471 | 21526 | 38997 |
| Swedish (sv) | 58 | 16300 | 16358 |
| Total | 505725 | 444467 | 950192 |

## Dataset Creation
To reproduce the dataset creation, please refer to the [MOSEL README in the fbk-llm](https://github.com/hlt-mt/fbk-llm) repository.
The scripts used for hallucination detection are available in the `scripts` folder of this repository.

## Citation
Release 1.0:
```
@inproceedings{mosel,
    title = {{MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages}},
    author = {Marco Gaido and Sara Papi and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri},
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, United States",
    publisher = "Association for Computational Linguistics",
}
```

## Dataset Card Contact
[@spapi](https://huggingface.co/spapi)
scripts/README.md
ADDED
@@ -0,0 +1,85 @@
# MOSEL: scripts for flagging hallucinated text

#### Last update: 1 Oct 2024

### Overview

To decide automatically whether automatically generated transcripts of audio documents are reliable, and therefore useful for training, or not, and therefore to be discarded, we focus on the detection of *hallucinations*. In general, "hallucination in the context of LLMs refers to the generation of text that is erroneous, nonsensical, or detached from reality". Here we restrict ourselves to *surface* hallucinations, i.e. the subset of the phenomenon concerning anomalies in the form of the text, disregarding hallucinations related to content. In practice, pre-trained LLMs sometimes generate **anomalous repetitions** of the same pattern, like (real examples):

- Hey, hey, hey, here, hey.
- No, no, no, no, no, no, no, no.

or output **very long** and noisy **strings**:

- T-J-N-D-F-Z-3-2-8-W-M-L-G-0-Z-P-O-2-M-2-M-M-O-G-W.

or produce **single-word lines** consisting of a common token that occurs a suspiciously high number of times across the corpus. For each of the three types of hallucinations a specific script has been designed, whose usage is listed in the following:

```
flagHallucinations.py
  -h, --help            show this help message and exit
  --tsv-InFile TSV_INFILE, -i TSV_INFILE
                        The input TSV file [Mandatory]
  --tsv-OutFile TSV_OUTFILE, -o TSV_OUTFILE
                        The output TSV file [Mandatory. If equal to input TSV
                        file, the new column is added to the original file]
  --column COLUMN, -c COLUMN
                        Column name of the text to process [Optional]
                        (default: source)
  --thresh1grams THRESH1GRAMS, -u THRESH1GRAMS
                        Threshold for 1-2_word hallucinations [Optional]
                        (default: 4)
  --threshNgrams THRESHNGRAMS, -n THRESHNGRAMS
                        Threshold for 3-5_word hallucinations [Optional]
                        (default: 2)
  --quiet, -q           Print only True/False, no explanation for True's
  --version, -v         Print version of the script and exit
```

```
flagAnomalousStrings.py
  -h, --help            show this help message and exit
  --tsv-InFile TSV_INFILE, -i TSV_INFILE
                        The input TSV file [Mandatory]
  --tsv-OutFile TSV_OUTFILE, -o TSV_OUTFILE
                        The output TSV file [Mandatory. If equal to input TSV
                        file, the new column is added to the original file]
  --column COLUMN, -c COLUMN
                        Column name of the text to process [Optional]
                        (default: source)
  --thresh THRESH, -t THRESH
                        Max number of chars of a string to be unflagged
                        [Optional] (default: 40)
  --quiet, -q           Print only True/False, no explanation for True's
  --version, -v         Print version of the script and exit
```

```
flagSuspiciousSingleWord.py
  -h, --help            show this help message and exit
  --tsv-InFile TSV_INFILE, -i TSV_INFILE
                        The input TSV file [Mandatory]
  --tsv-OutFile TSV_OUTFILE, -o TSV_OUTFILE
                        The output TSV file [Mandatory. If equal to input TSV
                        file, the new column ('suspicious single word') is
                        added to the original file]
  --tsv-SuspiciousWordFiles TSV_SUSPICIOUSWORDFILES [TSV_SUSPICIOUSWORDFILES ...], -s TSV_SUSPICIOUSWORDFILES [TSV_SUSPICIOUSWORDFILES ...]
                        The TSV file(s) used to look for the suspicious word
                        [Optional. If not present, the input TSV file is used
                        instead]
  --column COLUMN, -c COLUMN
                        Column name of the text to process [Optional]
                        (default: source)
  --suspiciousWord SUSPICIOUSWORD, -w SUSPICIOUSWORD
                        suspicious word [if not specified, found in other TSV
                        files passed as parameters]
  --quiet, -q           Print only True/False, no explanation for True's
  --version, -v         Print version of the script and exit
```

All of the scripts read an input TSV file (-i) that is expected to include a column (-c) with the text to process; they output a TSV file (-o) with an additional column flagging the specific hallucination each script looks for. If the input and output files are the same, the former is replaced by the latter, which makes it easy to chain the scripts on a single file (a sketch is given below, after the list). The -q option suppresses the explanations printed for the hallucinations that have been found.

The processing carried out by the three scripts is the following:
- **flagHallucinations.py** flags (by setting to True the corresponding entry of the *hall_repeated_ngrams* column) the sentences where a pattern (an *n*-gram, i.e. a sequence of *n* words) is repeated at least a given number of times; for patterns of size 1 to 2, the minimum number of repetitions is set by the thresh1grams parameter (default: 4), for those of size 3 to 5 by threshNgrams (default: 2).
- **flagAnomalousStrings.py** flags (by setting to True the corresponding entry of the *hall_long_word* column) the sentences containing at least one abnormally long word; the length threshold is set through the thresh parameter, whose default value is 40.
- **flagSuspiciousSingleWord.py** flags (by setting to True the corresponding entry of the *hall_frequent_single_word* column) the sentences that consist of only one suspicious word. This word can either be passed as a parameter (*suspiciousWord* option) or be found inside the TSV input files; in the latter case, it is set to the most frequent single-word line in the files to be inspected. The TSV files to inspect can be passed through the *tsv-SuspiciousWordFiles* option. If neither an explicit *suspiciousWord* nor *tsv-SuspiciousWordFiles* is passed, the *tsv-InFile* itself is inspected.
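
As a minimal sketch of how the three scripts can be chained on a single file (the file names and the column name `text` are hypothetical; each run adds its own column, and the last two runs write back to the same file):

```python
import subprocess

in_tsv, out_tsv = "transcripts.tsv", "transcripts_flagged.tsv"

# the first run creates the output file with the hall_repeated_ngrams column...
subprocess.run(["python", "flagHallucinations.py", "-i", in_tsv, "-o", out_tsv, "-c", "text", "-q"], check=True)
# ...the next runs read it and add hall_long_word and hall_frequent_single_word in place
subprocess.run(["python", "flagAnomalousStrings.py", "-i", out_tsv, "-o", out_tsv, "-c", "text", "-q"], check=True)
subprocess.run(["python", "flagSuspiciousSingleWord.py", "-i", out_tsv, "-o", out_tsv, "-c", "text", "-q"], check=True)
```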
scripts/flagAnomalousStrings.py
ADDED
@@ -0,0 +1,117 @@
# Copyright 2024 FBK

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License

try:
    import pandas as pd
except ImportError:
    print("Please install the pandas package with 'pip install pandas' and try again.")
    exit(1)

import argparse

_VERSION = "1.01"


class ExplicitDefaultsHelpFormatter(argparse.ArgumentDefaultsHelpFormatter):
    def _get_help_string(self, action):
        if action.default is None or action.default is False:
            return action.help
        return super()._get_help_string(action)


def main(args):
    """
    This script flags (by setting True the corresponding entry of
    the hall_long_word column) those sentences where at least one
    word is abnormally long; the abnormality is set through the
    thresh parameter, whose default value is 40.
    """

    if parsed_args.version:
        print(f"Version {_VERSION} of anomalous string detector")
        exit(1)

    if not tsv_files_specified:
        print("--tsv-InFile and --tsv-OutFile are both required")
        parser.print_usage()
        exit(1)

    inDF = pd.read_csv(args.tsv_InFile, sep='\t', dtype=str, low_memory=False, na_filter=False, quoting=3)
    try:
        txt = inDF[parsed_args.column]
    except KeyError:
        print("Error in reading column <" + parsed_args.column + "> in TSV file")
        exit(1)

    flag = []
    for line in txt:
        anomal = False
        try:
            words = line.split()
        except AttributeError:  # non-string cell (e.g., missing value)
            words = []
        for w in words:
            lw = len(w.strip())
            if lw > parsed_args.thresh:
                if args.quiet:
                    flag.append("True")
                else:
                    flag.append("True (" + str(lw) + "): " + w.strip())
                anomal = True
                break
        if not anomal:
            flag.append("False")

    inDF['hall_long_word'] = flag
    inDF.to_csv(args.tsv_OutFile, sep="\t", index=False, quoting=3)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(formatter_class=ExplicitDefaultsHelpFormatter)

    # I/O related arguments
    parser.add_argument(
        '--tsv-InFile', '-i', type=str,
        help="The input TSV file [Mandatory]")

    parser.add_argument(
        '--tsv-OutFile', '-o', type=str,
        help="The output TSV file [Mandatory. If equal to input TSV file, the new column is added to the original file]")

    # Processing arguments:
    parser.add_argument(
        '--column', '-c', default='source',
        help="Column name of the text to process [Optional]")

    parser.add_argument(
        '--thresh', '-t', type=int, default=40,
        help="Max number of chars of a string to be unflagged [Optional]")

    # Reporting related arguments
    parser.add_argument(
        '--quiet', '-q', default=False, action='store_true',
        help="Print only True/False, no explanation for True's")

    # Get version information:
    parser.add_argument(
        '--version', '-v', action='store_true', default=False,
        help="Print version of the script and exit")

    parsed_args = parser.parse_args()
    tsv_files_specified = \
        getattr(parsed_args, 'tsv_InFile') is not None \
        and len(parsed_args.tsv_InFile) > 0 \
        and getattr(parsed_args, 'tsv_OutFile') is not None \
        and len(parsed_args.tsv_OutFile) > 0

    main(parsed_args)
scripts/flagHallucinations.py
ADDED
@@ -0,0 +1,169 @@
# Copyright 2024 FBK

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License

try:
    import pandas as pd
except ImportError:
    print("Please install the pandas package with 'pip install pandas' and try again.")
    exit(1)

import argparse

_VERSION = "1.01"


class ExplicitDefaultsHelpFormatter(argparse.ArgumentDefaultsHelpFormatter):
    def _get_help_string(self, action):
        if action.default is None or action.default is False:
            return action.help
        return super()._get_help_string(action)


# max size of the pattern (sequence of words) that,
# if repeated at least thresh1grams/threshNgrams times,
# raises the hallucination flag:
maxN = 5


def findHall(wrd):
    for N in range(maxN, 0, -1):
        count = 0
        for idx in range(0, len(wrd) - 2 * N + 2):  # the max idx value must leave
                                                    # room for at least one repetition
                                                    # of the N-sized pattern
            count += hallLen(idx, N, wrd)
            if (N < 3 and count + 1 >= parsed_args.thresh1grams) or \
                    (N >= 3 and count + 1 >= parsed_args.threshNgrams):
                return [N, idx, count]
            else:
                count = 0  # reset

    return [0, 0, 0]


def hallLen(startIdx, N, wrd):
    hallLen = 0
    startRep = startIdx + N  # first index after the end of the current pattern
    while startRep < len(wrd) - N + 1:
        if isHall(startIdx, startRep, N, wrd):
            hallLen += 1
            startRep += N
        else:
            break
    return hallLen


def isHall(s1, s2, N, wrd):
    i = 0
    while i < N and wrd[s1 + i] == wrd[s2 + i]:
        i += 1
    return i == N


def main(args):
    """
    This script flags (by setting True the corresponding entry
    of the hall_repeated_ngrams column) those sentences where a
    pattern (n-gram, that is a sequence of n words) is repeated
    at least a given number of times; for patterns of size 1 to 2,
    the minimum number of times for flagging it is set by the
    thresh1grams parameter (default value: 4), for those of size
    3-5 by threshNgrams (2)
    """

    if parsed_args.version:
        print(f"Version {_VERSION} of anomalous string detector")
        exit(1)

    if not tsv_files_specified:
        print("--tsv-InFile and --tsv-OutFile are both required")
        parser.print_usage()
        exit(1)

    if not valid_thresh_values:
        print("--thresh1grams and --threshNgrams must both be positive integers")
        parser.print_usage()
        exit(1)

    try:
        inDF = pd.read_csv(args.tsv_InFile, sep='\t', dtype=str, low_memory=False, na_filter=False, quoting=3)
    except IOError:
        print("Error in opening " + args.tsv_InFile + " file")
        exit(1)

    try:
        txt = inDF[parsed_args.column]
    except KeyError:
        print("Error in reading column <" + parsed_args.column + "> in TSV file")
        exit(1)

    flag = []
    for line in txt:
        words = line.split()
        [size, idx, count] = findHall(words)
        if size > 0:
            if args.quiet:
                flag.append("True")
            else:
                flag.append("True (pattern of length " + str(size) +
                            " from index " + str(idx) +
                            ", repeated at least " + str(count + 1) + " times)")
        else:
            flag.append("False")

    inDF['hall_repeated_ngrams'] = flag
    inDF.to_csv(args.tsv_OutFile, sep="\t", index=False, quoting=3)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(formatter_class=ExplicitDefaultsHelpFormatter)

    # I/O related arguments
    parser.add_argument(
        '--tsv-InFile', '-i', type=str,
        help="The input TSV file [Mandatory]")

    parser.add_argument(
        '--tsv-OutFile', '-o', type=str,
        help="The output TSV file [Mandatory. If equal to input TSV file, the new column is added to the original file]")

    # Processing arguments:
    parser.add_argument(
        '--column', '-c', default='source',
        help="Column name of the text to process [Optional]")

    parser.add_argument(
        '--thresh1grams', '-u', type=int, default=4,
        help="Threshold for 1-2_word hallucinations [Optional]")

    parser.add_argument(
        '--threshNgrams', '-n', type=int, default=2,
        help="Threshold for 3-5_word hallucinations [Optional]")

    # Reporting related arguments
    parser.add_argument(
        '--quiet', '-q', default=False, action='store_true',
        help="Print only True/False, no explanation for True's")

    # Get version information:
    parser.add_argument(
        '--version', '-v', action='store_true', default=False,
        help="Print version of the script and exit")

    parsed_args = parser.parse_args()
    tsv_files_specified = \
        getattr(parsed_args, 'tsv_InFile') is not None \
        and len(parsed_args.tsv_InFile) > 0 \
        and getattr(parsed_args, 'tsv_OutFile') is not None \
        and len(parsed_args.tsv_OutFile) > 0

    valid_thresh_values = parsed_args.thresh1grams > 0 \
        and parsed_args.threshNgrams > 0

    main(parsed_args)
scripts/flagSuspiciousSingleWord.py
ADDED
@@ -0,0 +1,192 @@
# Copyright 2024 FBK

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License

try:
    import pandas as pd
except ImportError:
    print("Please install the pandas package with 'pip install pandas' and try again.")
    exit(1)

import argparse

_VERSION = "1.01"


class ExplicitDefaultsHelpFormatter(argparse.ArgumentDefaultsHelpFormatter):
    def _get_help_string(self, action):
        if action.default is None or action.default is False:
            return action.help
        return super()._get_help_string(action)


def processColumn(col, dict):
    # compute the counts of words that appear alone in the lines
    for line in col:
        if len(line.split()) == 1:
            key = line.strip()
            if key in dict:
                dict[key] += 1
            else:
                dict[key] = 1


def processTSVfiles(files, dict):
    # open and process the textual content of each TSV file in the input list;
    # return the word with the greatest count
    for tsv in files:
        try:
            inDF = pd.read_csv(tsv, sep='\t', dtype=str, low_memory=False, na_filter=False, quoting=3)
        except IOError:
            print("Error in opening " + tsv + " file")
            exit(1)
        try:
            col = inDF[parsed_args.column]
        except KeyError:
            print("Error in reading column <" + parsed_args.column + "> in TSV file")
            exit(1)
        processColumn(col, dict)
    return findSuspiciousWord(dict)


def findSuspiciousWord(dict):
    # look for the word in dict with the greatest count
    argmax = ""
    max = -1
    for w, c in list(dict.items()):
        if c > max:
            max = c
            argmax = w
    return argmax


def main(args):
    """
    This script flags (by setting True the corresponding entry of
    the hall_frequent_single_word) those sentences which consist
    of only one single suspicious word. This word can be either passed
    as a parameter (suspiciousWord option) or found inside the TSV
    input files. In the latter case, it is set as the most frequent
    word in the text included in the files to be inspected.
    The TSV files to inspect can be passed through the
    tsv-SuspiciousWordFiles option. If no explicit suspiciousWord nor
    tsv-SuspiciousWordFiles is passed, the tsv-InFile is inspected.
    """

    # Support structure:
    dict = {}

    if parsed_args.version:
        print(f"Version {_VERSION} of anomalous string detector")
        exit(1)

    if not tsv_files_specified:
        print("--tsv-InFile and --tsv-OutFile are both required")
        parser.print_usage()
        exit(1)

    if contrastive_options:
        print("Either specify SuspiciousWord or SuspiciousWordFiles, both cannot be passed")
        parser.print_usage()
        exit(1)

    # Get the suspiciousWord:
    if getattr(parsed_args, 'suspiciousWord') is not None:
        # passed as parameter
        suspiciousWord = parsed_args.suspiciousWord.strip()
    elif getattr(parsed_args, 'tsv_SuspiciousWordFiles') is not None:
        # to be searched in the TSV files passed for that purpose
        suspiciousWord = processTSVfiles(parsed_args.tsv_SuspiciousWordFiles, dict)
    else:
        # to be searched in the input TSV file to process
        suspiciousWord = processTSVfiles([parsed_args.tsv_InFile], dict)

    # open the input TSV file and get the text to process
    try:
        inDF = pd.read_csv(args.tsv_InFile, sep='\t', dtype=str, low_memory=False, na_filter=False, quoting=3)
    except IOError:
        print("Error in opening " + args.tsv_InFile + " file")
        exit(1)

    try:
        txt = inDF[parsed_args.column]
    except KeyError:
        print("Error in reading column <" + parsed_args.column + "> in TSV file")
        exit(1)

    # scan each input line and check if it consists of
    # only the suspicious word
    flag = []
    for line in txt:
        if suspiciousWord == line.strip():
            if args.quiet:
                flag.append("True")
            else:
                flag.append("True (" + suspiciousWord + ")")
        else:
            flag.append("False")

    # add the column to the original Data Frame read from the input TSV file
    # and store the updated Data Frame in the output TSV file:
    inDF['hall_frequent_single_word'] = flag
    inDF.to_csv(args.tsv_OutFile, sep="\t", index=False, quoting=3)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(formatter_class=ExplicitDefaultsHelpFormatter)

    # I/O related arguments
    parser.add_argument(
        '--tsv-InFile', '-i', type=str,
        help="The input TSV file [Mandatory]")

    parser.add_argument(
        '--tsv-OutFile', '-o', type=str,
        help="The output TSV file [Mandatory. If equal to input TSV file, the new column ('suspicious single word') is added to the original file]")

    parser.add_argument(
        '--tsv-SuspiciousWordFiles', '-s', type=str, nargs='+',
        help="The TSV file(s) used to look for the suspicious word [Optional. If not present, the input TSV file is used instead]")

    # Processing arguments:
    parser.add_argument(
        '--column', '-c', default='source',
        help="Column name of the text to process [Optional]")

    parser.add_argument(
        '--suspiciousWord', '-w', type=str,
        help="suspicious word [if not specified, found in other TSV files passed as parameters]")

    # Reporting related arguments
    parser.add_argument(
        '--quiet', '-q', default=False, action='store_true',
        help="Print only True/False, no explanation for True's")

    # Get version information:
    parser.add_argument(
        '--version', '-v', action='store_true', default=False,
        help="Print version of the script and exit")

    parsed_args = parser.parse_args()
    tsv_files_specified = \
        getattr(parsed_args, 'tsv_InFile') is not None \
        and len(parsed_args.tsv_InFile) > 0 \
        and getattr(parsed_args, 'tsv_OutFile') is not None \
        and len(parsed_args.tsv_OutFile) > 0

    contrastive_options = \
        getattr(parsed_args, 'tsv_SuspiciousWordFiles') is not None \
        and getattr(parsed_args, 'suspiciousWord') is not None

    main(parsed_args)