
MILQA Hungarian question-answer benchmark database

MILQA is a Hungarian machine reading comprehension benchmark database, more specifically a question answering (QA) dataset. In English, the most basic resource for this task is the Stanford Question Answering Dataset (SQuAD). The database was largely built following the principles of SQuAD 2.0 and is therefore characterized by the following:

  • Excerpts from high-quality Wikipedia articles are used as context for the questions (the texts are free to use, so the resulting language resource is free to use as well).
  • It contains factual (not opinion) questions.
  • Includes questions that are not answered in the text.
  • The (shortest possible) answer to the question (if any) is indicated in the original text.
  • When formulating the questions, we paraphrased the original text, so that in most cases the answer cannot be found by lexical search.
  • The questions are formulated in such a way that they are not only meaningful in the context of the text, but also stand on their own (e.g., they do not contain pronouns).

Compared to SQuAD, the following innovations have been introduced (thus the Hungarian question-answer database contains more difficult questions than the original; the specific question types detailed below are marked separately in the database):

  • There can be more than one short answer to a question in the context (list-type answers). This is natural for some questions (5-6% in the database), whereas SQuAD never has more than one answer.
  • In addition to the short answer, a long answer is also given, which includes all the circumstances relevant to answering the question (min. 1 sentence, often several sentences).
  • Includes yes/no questions (about 10%); here, in addition to the long answer, which includes all relevant circumstances, a yes/no answer is also given.
  • Unanswerable questions (about 30% of questions) are relevant questions related to the topic, not questions generated by substitution from answerable questions.
  • Includes questions that can be answered by counting or performing arithmetic operations (these are difficult for current models).
  • Some of the unanswerable questions are "tricky questions" where a large proportion of native speakers would read an answer from the text, often based on incorrect default assumptions. These cases have been marked separately, with the hypothetical answer given.

The questions were created by 5 annotators under the supervision of the Language Technology Research Group at Pázmány Péter Catholic University, using a web annotation environment also created during the project. The database currently contains more than 23,500 questions, 70.93% of which are answered in the text.

For further details and some baseline models trained on MILQA, please refer to the publication below.

Annotators worked according to the following guidelines:

  • Everyday questions should be asked.
  • It should not be trivial to find the answer.
  • Some questions may be rephrased - you can ask a question concerning the same passage or answer in two different ways.
  • Write half as many unanswerable questions as answerable ones.
  • You will have 12 options for answerable questions, but you don't have to use them all.
  • It is not necessary to give both short and long answers in every case (there may be no short answer).
  • If possible, use a short answer and make it as short as possible.
  • Only complete words can be selected as an answer.
  • There may be several short answers to a question (a list); this is often the case with questions such as who and when. In such cases, activate the short answer selector once for each short answer, in succession, and mark the answers in sequence.
  • If the answer appears more than once in the text, select the one that is in the context pertaining to the question.
  • Short and long answers, or answers to different questions, may overlap in the text.
  • About 10% should be Boolean (yes/no) questions.
  • For Boolean questions, select a text passage as the answer (short or long, it doesn't matter) and click on the answer to bring up further options, where you can tick No or Yes.
  • If the answer is not grammatically correct in the context of the question (e.g. a different case ending should be used for the predicate used in the question), then after selecting the answer, click on it and tick the Different answer box. Do the same if there is a spelling mistake in the original.
  • Why...? (cause, effect) questions should also be included.
  • There are no word order restrictions. You do not necessarily have to start with a question word.
  • Whenever possible, rephrase questions so that they do not use the same words as in the text: use as many grammatical twists, word order changes, word substitutions and synonyms as possible, while keeping the question natural.
  • The question should be 'self-contained', i.e. it should not contain parts that can only be understood knowing the text, e.g. pronouns.
  • The questions do not need to be entered in the order of the location of the answers in the text. The order of the questions is irrelevant.
  • If it is a text about XY, you should put XY in each question to make the question self-contained. But it is good to vary the formulation of XY as far as possible.
  • For unanswerable questions, ask questions that come to mind when reading the text but are not addressed in it. Ask a question that has no answer in the passage as a whole and does not follow from it.
  • The question can be complex or arithmetical: e.g., the answer must be calculated from two given pieces of data. In this case, check the Arithmetic checkbox.
  • With "why?" questions, you can often formulate a shorter or better answer to the question. You may want to write this in the Different answer box.
  • For a counting question (how many types...), after marking x short answers, write x in the other exact answer box and tick the Arithmetic checkbox.
  • If one sentence contains information that makes the next sentence meaningful, and the short answer to the question is in the second sentence, both sentences should be included in the long answer.
  • Long answers should always be at least complete clauses, but preferably complete sentences or multiple complete sentences: they should contain all information relevant to the question.
  • If a particular passage is very ungrammatical or sounds wrong, do NOT add questions to it; leave it out.
  • If there are factual errors or self-contradictions in the text, do not enter questions concerning those parts.

Format

The database is stored as JSON data files. The format is based on that of SQuAD 2.0. However, there are lists of long and short answers (the values of the keys "short" and "long"), each answer may have a "modanswer" and a special "type", and the question type "qtype" is aggregated from the type features of the answers belonging to the question.
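To make the structure concrete, here is a minimal sketch of how such a file could be read in Python. It assumes the SQuAD 2.0 outer nesting ("data" → "paragraphs" → "qas") together with the keys named above ("short", "long", "modanswer", "type", "qtype"); the file name and the exact shape of the answer objects are assumptions, so check them against the released files.

```python
import json

# Minimal sketch of reading a MILQA JSON file. The outer structure is
# assumed to follow SQuAD 2.0 ("data" -> "paragraphs" -> "qas"); the
# file name and the exact shape of the answer objects are assumptions
# to be verified against the released data.
with open("milqa.json", encoding="utf-8") as f:  # hypothetical file name
    dataset = json.load(f)

n_questions = 0
n_answered = 0
for article in dataset["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]  # the Wikipedia excerpt
        for qa in paragraph["qas"]:
            n_questions += 1
            question = qa["question"]
            qtype = qa.get("qtype")  # aggregated from the answers' "type" features
            short_answers = qa.get("short", [])  # list: may be empty or hold several spans
            long_answers = qa.get("long", [])    # clause- or sentence-level spans
            if short_answers or long_answers:
                n_answered += 1
            for answer in short_answers:
                # an answer may carry a "modanswer" (e.g. a re-inflected or
                # corrected form) in addition to its text span and "type"
                mod = answer.get("modanswer")

print(f"{n_answered}/{n_questions} questions have an answer marked in the text")
```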

Publication

If you use MILQA, or train or use any model on it, please cite the following publication.

Attila Novák, Borbála Novák, Tamás Zombori, Gergő Szabó, Zsolt Szántó and Richárd Farkas: A Question Answering Benchmark Database for Hungarian. In: Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023, pp. 188–198.

@inproceedings{novak-etal-2023-question,
    title = "A Question Answering Benchmark Database for {H}ungarian",
    author = "Nov{\'a}k, Attila  and
      Nov{\'a}k, Borb{\'a}la  and
      Zombori, Tam{\'a}s  and
      Szab{\'o}, Gerg{\H{o}}  and
      Sz{\'a}nt{\'o}, Zsolt  and
      Farkas, Rich{\'a}rd",
    booktitle = "Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.law-1.19",
    doi = "10.18653/v1/2023.law-1.19",
    pages = "188--198",
}