question_id (string, length 40) | question (string, length 4–171) | answer (sequence) | evidence (sequence) |
---|---|---|---|
848ab388703c24faad79d83d254e4fd88ab27e2a | How are proof scores calculated? | [
"'= ( , { ll k(h:, g:) if hV, gV\n\n1 otherwise } )\n\nwhere $_{h:}$ and $_{g:}$ denote the embedding representations of $h$ and $g$ , respectively."
] | [
[
"Unification Module. In backward chaining, unification between two atoms is used for checking whether they can represent the same structure. In discrete unification, non-variable symbols are checked for equality, and the proof fails if the symbols differ. In NTP, rather than comparing symbols, their embedding representations are compared by means of a RBF kernel. This allows matching different symbols with similar semantics, such as matching relations like ${grandFatherOf}$ and ${grandpaOf}$ . Given a proof state $= (_, _)$ , where $_$ and $_$ denote a substitution set and a proof score, respectively, unification is computed as follows:",
"The resulting proof score of $g$ is given by:",
"$$ \\begin{aligned} \\max _{f \\in \\mathcal {K}} & \\; {unify}_(g, [f_{p}, f_{s}, f_{o}], (\\emptyset , )) \\\\ & = \\max _{f \\in \\mathcal {K}} \\; \\min \\big \\lbrace , \\operatorname{k}(_{\\scriptsize {grandpaOf}:}, _{f_{p}:}),\\\\ &\\qquad \\qquad \\qquad \\operatorname{k}(_{{abe}:}, _{f_{s}:}), \\operatorname{k}(_{{bart}:}, _{f_{o}:}) \\big \\rbrace , \\end{aligned}$$ (Eq. 3)",
"where $f \\triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $_{s:}$ is the embedding representation of a symbol $s$ , $$ denotes the initial proof score, and $\\operatorname{k}({}\\cdot {}, {}\\cdot {})$ denotes the RBF kernel. Note that the maximum proof score is given by the fact $f \\in \\mathcal {K}$ that maximises the similarity between its components and the goal $\\mathcal {K}$0 : solving the maximisation problem in eq:inference can be equivalently stated as a nearest neighbour search problem. In this work, we use ANNS during the forward pass for considering only the most promising proof paths during the construction of the neural network."
]
] |
68794289ed6078b49760dc5fdf88618290e94993 | What are proof paths? | [
"A sequence of logical statements represented in a computational graph"
] | [
[]
] |
62048ea0aab61abe21fb30d70c4a1bc5fb946137 | What is the size of the model? | [
"Unanswerable"
] | [
[]
] |
25e4dbc7e211a1ebe02ee8dff675b846fb18fdc5 | What external sources are used? | [
"Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily"
] | [
[
"Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed sementic information from large raw texts for reducing sparsity. The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained over large unsegmented data. We pretrain the five-character window network in Figure FIGREF13 as an unit, learning the MLP parameter together with character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.",
"Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as mutual information between consecutive characters can be useful features for guiding segmentation BIBREF11 . For neural segmentation, these distributional statistics can be implicitly learned by pretraining character embeddings. We therefore consider a more explicit clue for pretraining our character window network, namely punctuations BIBREF10 .",
"Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training BIBREF13 or deriving statistical features BIBREF12 . We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given INLINEFORM0 , INLINEFORM1 is derived using the MLP in Figure FIGREF13 , and then used to classify the segmentation of INLINEFORM2 into B(begining)/M(middle)/E(end)/S(single character word) labels. DISPLAYFORM0",
"Heterogenous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation on leveraging two corpora under different annotation standards to improve statistical segmentation BIBREF16 . We try to utilize heterogenous treebanks by taking an external treebank as labeled data, training a B/M/E/S classifier for the character windows network. DISPLAYFORM0",
"POS Data. Previous research has shown that POS information is closely related to segmentation BIBREF14 , BIBREF15 . We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS on each character, according to the character window representation INLINEFORM0 . In particular, given INLINEFORM1 , the POS of the word that INLINEFORM2 belongs to is used as the output. DISPLAYFORM0"
]
] |
9893c5f36f9d503678749cb0466eeaa0cfc9413f | What submodules does the model consist of? | [
"five-character window context"
] | [
[
"We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5 , we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9 , BIBREF17 , BIBREF18 , which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important sub module, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19 , casting each external source of information as a auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor."
]
] |
5d85d7d4d013293b4405beb4b53fa79ac7c03401 | How do they add human preference annotation to the fine-tuning process? | [
"human preference annotation is available, $Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair"
] | [
[
"The comparative evaluator is trained with maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6",
"where $\\mathcal {X}$ is the set of pairwise training examples contructed as described above, $Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair ($x_1$, $x_2$), $D_\\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \\in \\lbrace >,<,\\approx \\rbrace $) for the pair ($x_1$, $x_2$)."
]
] |
6dc9960f046ec6bd280a721724458f66d5a9a585 | What previous automated evaluation approaches do the authors mention? | [
"Text Overlap Metrics, including BLEU, Perplexity, Parameterized Metrics"
] | [
[
"Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.",
"Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.",
"Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.",
"Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. Recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization."
]
] |
75b69eef4a38ec16df63d60be9708a3c44a79c56 | How much better performance is achieved in human evaluation when the model is trained considering the proposed metric? | [
"Pearson correlation to human judgement - proposed vs next best metric\nSample level comparison:\n- Story generation: 0.387 vs 0.148\n- Dialogue: 0.472 vs 0.341\nModel level comparison:\n- Story generation: 0.631 vs 0.302\n- Dialogue: 0.783 vs 0.553"
] | [
[
"The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing it with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models). As this variance does not exist when we regard a sample as a model which always generates the same sample.",
"Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation."
]
] |
7488855f09b97eb6a027212fb7ace1d338f36a2b | Do the authors suggest that proposed metric replace human evaluation on this task? | [
"No"
] | [
[]
] |
1083ec9a2a33f7fe2b6b51bbcebd2d9aec4b4de2 | What is the training objective of their pair-to-sequence model? | [
"is to minimize the negative likelihood of the aligned unanswerable question $\\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer "
] | [
[
"The training objective is to minimize the negative likelihood of the aligned unanswerable question $\\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ : L=-(q,q,p,a)DP(q|q,p,a;) where $\\mathcal {D}$ is the training corpus and $\\theta $ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective."
]
] |
58a00ca123d67b9be55021493384c0acef4c568d | How do they ensure the generated questions are unanswerable? | [
"learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc"
] | [
[
"To create training data for unanswerable question generation, we use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. As shown in Figure 1 , the answerable and unanswerable questions of a paragraph are aligned through the text span “Victoria Department of Education” for being both the answer and plausible answer. These two questions are lexically similar and both asked with the same answer type in mind. In this way, we obtain the data with which the models can learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc. Consequently, we can generate a mass of unanswerable questions with existing large-scale machine reading comprehension datasets."
]
] |
199bdb3a6b1f7c89d95ea6c6ddbbb5eff484fa1f | Does their approach require a dataset of unanswerable questions mapped to similar answerable questions? | [
"Yes"
] | [
[
"We attack the problem by automatically generating unanswerable questions for data augmentation to improve question answering models. The generated unanswerable questions should not be too easy for the question answering model so that data augmentation can better help the model. For example, a simple baseline method is randomly choosing a question asked for another paragraph, and using it as an unanswerable question. However, it would be trivial to determine whether the retrieved question is answerable by using word-overlap heuristics, because the question is irrelevant to the context BIBREF6 . In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. So the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide more discriminative training signal to the question answering model."
]
] |
5ea87432b9166d6a4ab8806599cd2b1f9178622f | What conclusions are drawn from these experiments? | [
"best results were obtained using new word embeddings, best group of word embeddings is EC, The highest type F1-score was obtained for EC1 model, built using binary FastText Skip-gram method utilising subword information, ability of the model to provide vector representation for the unknown words seems to be the most important"
] | [
[
"The analysis of results from Tables TABREF17 , TABREF18 and TABREF19 show that 12 of 15 best results were obtained using new word embeddings. The evaluation results presented in Table TABREF20 (the chosen best embeddings models from Table TABREF19 ) prove that the best group of word embeddings is EC. The highest type F1-score was obtained for EC1 model, built using binary FastText Skip-gram method utilising subword information, with vector dimension equal to 300 and negative sampling equal to 10. The ability of the model to provide vector representation for the unknown words seems to be the most important. Also, previous models built using KGR10 (EP) are probably less accurate due to an incorrect tokenisation of the corpus. We used WCRFT tagger BIBREF22 , which utilises Toki BIBREF21 to tokenise the input text before the creation of the embeddings model. The comparison of EC1 with previous results obtained using only CRF BIBREF38 show the significant improvement across all the tested metrics: 3.6pp increase in strict F1-score, 1.36pp increase in relaxed precision, 5.61pp increase in relaxed recall and 3.51pp increase in relaxed F1-score."
]
] |
3af9156b95a4c2d67cc54b80b92cc7b918fea2a9 | What experiments are presented? | [
"identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set, Then we evaluated these results using more detailed measures for timexes"
] | [
[
"Experiments were carried out by the method proposed in BIBREF27 . The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set.",
"We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19 presenting F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27 . The following measures were used to evaluate the quality of boundaries and class recognition, so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27 . If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27 . The results are presented in Table TABREF20 ."
]
] |
7e328cc3cffa521e73f111d6796aaa9661c8eb07 | What is specific about the specific embeddings? | [
"predicting the word given its context"
] | [
[
"Recent studies in information extraction domain (but also in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations provided the best results in various NLP tasks, including named entities recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simple ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In CBOW architecture the task is predicting the word given its context and in SG the task in predicting the context given the word.",
"We created a new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations, also for words which were not processed during the creation of the model. A vector representation is associated with character n-gram and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and were assigning a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . Authors observed that using word representations trained with subword information outperformed the plain Skip-gram model and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with new models, marked as embedding ID prefix EC in Table TABREF13 )."
]
] |
80f19be1cbe1f0ec89bbafb9c5f7a8ded37881fb | What embedding algorithm is used to build the embeddings? | [
"CBOW and Skip-gram methods in the FastText tool BIBREF9"
] | [
[
"We created a new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations, also for words which were not processed during the creation of the model. A vector representation is associated with character n-gram and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and were assigning a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . Authors observed that using word representations trained with subword information outperformed the plain Skip-gram model and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with new models, marked as embedding ID prefix EC in Table TABREF13 )."
]
] |
b3238158392684a5a6b62a7eabaa2a10fbecf3e6 | How was the KGR10 corpus created? | [
"most relevant content of the website, including all subsites"
] | [
[
"KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish."
]
] |
526ae24fa861d52536b66bcc2d2ddfce483511d6 | How big are improvements with multilingual ASR training vs single language training? | [
"relative WER improvement of 10%."
] | [
[
"The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language."
]
] |
8a5254ca726a2914214a4c0b6b42811a007ecfc6 | How much transcribed data is available for for Ainu language? | [
"Transcribed data is available for duration of 38h 54m 38s for 8 speakers."
] | [
[
"The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2."
]
] |
3c0d66f9e55a89d13187da7b7128666df9a742ce | What is the difference between speaker-open and speaker-closed setting? | [
"In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets., In the speaker-open condition, all the data except for the test speaker's were used for training"
] | [
[
"In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. Thereafter, the total sizes of the development and test sets turns out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes respectively. The ASR model is trained with the rest data. In the speaker-open condition, all the data except for the test speaker's were used for training As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted."
]
] |
13d92cbc2c77134626e26166c64ca5c00aec0bf5 | What baseline approaches do they compare against? | [
"HotspotQA: Yang, Ding, Muppet\nFever: Hanselowski, Yoneda, Nie"
] | [
[
"We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .",
"As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new start-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting fact which in turn leads to doubling of the joint EM on previous best results. The scores for answer predictions are also higher than all previous best results with $\\sim $8 absolute points increase on EM and $\\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation.",
"Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with a $\\sim $4 and $\\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater that of the second system, demonstrating its ability on semantic retrieval."
]
] |
9df4a7bd0abb99ae81f0ebb29c488f1caa0f268f | How do they train the retrieval modules? | [
"We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss."
] | [
[
"Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:",
"We applied an affine layer and sigmoid activation on the last layer output of the [$\\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function:",
"where $\\hat{p}_i$ is the output of the model, $\\mathbf {T}^{p/s}_{pos}$ is the positive set and $\\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples."
]
] |
b7291845ccf08313e09195befd3c8030f28f6a9e | How do they model the neural retrieval modules? | [
"BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling"
] | [
[
"Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.",
"Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:"
]
] |
ac54a9c30c968e5225978a37032158a6ffd4ddb8 | Retrieval at what level performs better, sentence level or paragraph level? | [
"This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval."
] | [
[
"Table TABREF13 and TABREF14 shows the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases for all the downstream scores on both QA and verification task in spite of their higher upper-bound and recall scores. This indicates that the negative effects on downstream module induced by the omission of paragraph-level retrieval can not be amended by the sentence-level retrieval module, and focusing semantic retrieval merely on improving the recall or the upper-bound of final score will risk jeopardizing the performance of the overall system.",
"Next, the removal of sentence-level retrieval module induces a $\\sim $2 point drop on EM and F1 score in the QA task, and a $\\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhance explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of paragraph-level retrieval neural induces a 11 point drop on answer EM comparing to a $\\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and we observe a significant drop of F1 on Not Enough Info category without retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on Not Enough Info label."
]
] |
b236b9827253037b2fd7884d7bfec74619d96293 | How much better performance of proposed model compared to answer-selection models? | [
"significant MAP performance improvement compared to the previous best model, CompClip-LM (0.696 to 0.734 absolute)"
] | [
[
"As shown in Table , the proposed PS-rnn-elmo shows a significant MAP performance improvement compared to the previous best model, CompClip-LM (0.696 to 0.734 absolute)."
]
] |
b53efdbb9e53a65cd3828a3eb485c70f782a06e5 | How are some nodes initially connected based on text structure? | [
"we fully connect nodes that represent sentences from the same passage, we fully connect nodes that represent the first sentence of each passage, we add an edge between the question and every node for each passage"
] | [
[
"Topology: To build a model that understands the relationship between sentences for answering a question, we propose a graph neural network where each node represents a sentence from passages and the question. Figure depicts the topology of the proposed model. In an offline step, we organize the content of each instance in a graph where each node represents a sentence from the passages and the question. Then, we add edges between nodes using the following topology:",
"we fully connect nodes that represent sentences from the same passage (dotted-black);",
"we fully connect nodes that represent the first sentence of each passage (dotted-red);",
"we add an edge between the question and every node for each passage (dotted-blue)."
]
] |
4d5e2a83b517e9c082421f11a68a604269642f29 | how many domains did they experiment with? | [
"2"
] | [
[
"We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity. These topics are diverse enough that they would warrant different curated sets of knowledge base articles, and we can easily retrieve knowledge base articles for each of these subjects from the Medical Sciences and Christianity Stack Exchange sites, respectively."
]
] |
2c3b2c3bab6d18cb0895462e3cfd91cd0dee7f7d | what pretrained models were used? | [
"BiDAF, BERT "
] | [
[
"The current release of Katecheo uses a Bi-Directional Attention Flow, or BiDAF, model for reading comprehension BIBREF6 . This BiDAF model includes a Convolutional Neural Network (CNN) based character level embedding layer, a word embedding layer that uses pre-trained GloVE embeddings, a Long Short-Term Memory Network (LSTM) based contextual embedding layer, an “attention flow layer\", and a modeling layer include bi-directional LSTMs. We are using a pre-trained version of BiDAF available in the AllenNLP BIBREF7 library.",
"Future releases of Katecheo will include the ability to swap out the reading comprehension model for newer architectures based on, e.g., BERT BIBREF8 or XLNet BIBREF9 or custom trained models.",
"Architecture and Configuration"
]
] |
ea51aecd64bd95d42d28ab3f1b60eecadf6d3760 | What domains are contained in the polarity classification dataset? | [
"Books, DVDs, Electronics, Kitchen appliances"
] | [
[
"Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews."
]
] |
e4cc2e73c90e568791737c97d77acef83588185f | How long is the dataset? | [
"8000"
] | [
[
"Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews."
]
] |
cc28919313f897358ef864948c65318dc61cb03c | What machine learning algorithms are used? | [
"string kernels, SST, KE-Meta, SFA, CORAL, TR-TrAdaBoost, Transductive string kernels, transductive kernel classifier"
] | [
[
"Baselines. We compare our approach with several methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 in two cross-domain settings. Using string kernels, Giménez-Pérez et al. BIBREF10 reported better performance than SST BIBREF31 and KE-Meta BIBREF11 in the multi-source domain setting. In addition, we compare our approach with SFA BIBREF1 , CORAL BIBREF8 and TR-TrAdaBoost BIBREF39 in the single-source setting.",
"Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings. We take the following steps to derive transductive string kernels. For a given kernel (similarity) function INLINEFORM0 , we first build the full kernel matrix INLINEFORM1 , by including the pairwise similarities of samples from both the train and the test sets. For a training set INLINEFORM2 of INLINEFORM3 samples and a test set INLINEFORM4 of INLINEFORM5 samples, such that INLINEFORM6 , each component in the full kernel matrix is defined as follows: DISPLAYFORM0",
"We next present a simple yet effective approach for adapting a one-versus-all kernel classifier trained on a source domain to a different target domain. Our transductive kernel classifier (TKC) approach is composed of two learning iterations. Our entire framework is formally described in Algorithm SECREF3 ."
]
] |
b3857a590fd667ecc282f66d771e5b2773ce9632 | What is a string kernel? | [
"String kernel is a technique that uses character n-grams to measure the similarity of strings"
] | [
[
"In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification."
]
] |
b653f55d1dad5cd262a99502f63bf44c58ccc8cf | Which dataset do they use to learn embeddings? | [
"Fisher Corpus English Part 1"
] | [
[
"We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher."
]
] |
22c802872b556996dd7d09eb1e15989d003f30c0 | How do they correlate NED with emotional bond levels? | [
"They compute Pearson’s correlation between NED measure for patient-to-therapist and patient-perceived emotional bond rating and NED measure for therapist-to-patient and patient-perceived emotional bond rating"
] | [
[
"According to prior work, both from domain theory BIBREF16 and from experimental validation BIBREF6 , a high emotional bond in patient-therapist interactions in the suicide therapy domain is associated with more entrainment. In this experiment, we compute the correlation of the proposed NED measure with the patient-perceived emotional bond ratings. Since the proposed measure is asymmetric in nature, we compute the measures for both patient-to-therapist and therapist-to-patient entrainment. We also compute the correlation of emotional bond with the baselines used in Experiment 1. We report Pearson's correlation coefficients ( INLINEFORM0 ) for this experiment in Table TABREF26 along with their INLINEFORM1 -values. We test against the null hypothesis INLINEFORM2 that there is no linear association between emotional bond and the candidate measure."
]
] |
a7510ec34eaec2c7ac2869962b69cc41031221e5 | What was their F1 score on the Bengali NER corpus? | [
"52.0%"
] | [
[]
] |
869aaf397c9b4da7ab52d6dd0961887ae08da9ae | Which languages are evaluated? | [
"Bengali, English, German, Spanish, Dutch, Amharic, Arabic, Hindi, Somali "
] | [
[
"We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.",
"We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous.",
"The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25."
]
] |
871c34219eb623bde9ac3937aa0f28fc3ad69445 | Which model have the smallest Character Error Rate and which have the smallest Word Error Rate? | [
"character unit the RNN-transducer with additional attention module, For subword units, classic RNN-transducer, RNN-transducer with attention and joint CTC-attention show comparable performance"
] | [
[
"In this paper, we experimentally showed that end-to-end approaches and different orthographic units were rather suitable to model the French language. RNN-transducer was found specially competitive with character units compared to other end-to-end approaches. Among the two orthographic units, subword was found beneficial for most methods to address the problems described in section SECREF14 and retain information on ambiguous patterns in French. Extending with language models, we could obtain promising results compared to traditional phone-based systems. The best performing systems being for character unit the RNN-transducer with additional attention module, achieving 7.8% in terms of CER and 17.6% on WER. For subword units, classic RNN-transducer, RNN-transducer with attention and joint CTC-attention show comparable performance on subword error rate and WER, with the first one being slightly better on WER ($17.4\\%$) and the last one having a lower error rate on subword ($14.5\\%$)."
]
] |
285858416b1583aa3d8ba0494fd01c0d4332659f | What will be in focus for future work? | [
"1) investigate errors produced by the end-to-end methods and explore several approaches to correct common errors done in French, 2) compare the end-to-end methods in a SLU context and evaluate the semantic value of the partially correct produced words"
] | [
[
"However, we also showed difference in produced errors for each method and different impact at word-level depending of the approach or units. Thus, future work will focus on analysing the orthographic output of these systems in two ways: 1) investigate errors produced by the end-to-end methods and explore several approaches to correct common errors done in French and 2) compare the end-to-end methods in a SLU context and evaluate the semantic value of the partially correct produced words."
]
] |
150af1f5f4ce0ec94a7114397cffc59c4798441e | Which acoustic units are more suited to model the French language? | [
"Unanswerable"
] | [
[]
] |
acc512c57aef4d5a15c15e3593f0a9b3e7e7e8b8 | What are the existing end-to-end ASR approaches for the French language? | [
"1) Connectionist Temporal Classification (CTC), 2) Attention-based methods, 3) RNN-tranducer"
] | [
[
"In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model. We extend our experiments by adding two hybrid end-to-end methods: a multi-task method called joint CTC-attention BIBREF11, BIBREF12 and a RNN-transducer extended with attention mechanisms BIBREF13. To complete our review, we build a state-of-art phone-based system based on lattice-free MMI criterion BIBREF14 and its end-to-end counterpart with both phonetic and orthographic units BIBREF15."
]
] |
e75f5bd7cc7107f10412d61e3202a74b082b0934 | How much is decoding speed increased by increasing encoder and decreasing decoder depth? | [
"the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer"
] | [
[
"Table TABREF24 shows that while the acceleration of trading decoder layers for encoding layers in training is small, in decoding is significant. Specifically, the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer while achieving a slightly higher BLEU."
]
] |
58819fd80c9fbe8674f147bd84a45a25f674a093 | Did they experiment on this dataset? | [
"No"
] | [
[]
] |
694dd76a37ad9e8083c546e9bd083c5c3b65695c | What language are the conversations in? | [
"The language is Chinese, which is not easy for non-Chinese-speaking researchers to work on."
] | [
[
"The language is Chinese, which is not easy for non-Chinese-speaking researchers to work on."
]
] |
dd3240045f662d9e2f4067ad5399a9cbfe25cc32 | How did they annotate the dataset? | [
"Unanswerable"
] | [
[]
] |
2223d8f417c532bd845d5ade792e955486b536a3 | What annotations are in the dataset? | [
"Unanswerable"
] | [
[]
] |
675f28958c76623b09baa8ee3c040ff0cf277a5a | What is the size of the dataset? | [
"300,000 sentences with 1.5 million single-quiz questions"
] | [
[
"Using our platform, we extracted anonymized user interaction data in the manner of real quizzes generated for a collection of several input video sources. We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. We split this dataset using the regular 70/10/20 partition for training, validation and testing."
]
] |
47b00652ac66039aafe886780e86961bfc5b466e | What language platform does the data come from? | [
"Unanswerable"
] | [
[]
] |
79443bf3123170da44396b0481364552186abb91 | Which two schemes are used? | [
"sequence classification, sequence labeling"
] | [
[
"In this paper we have formalized the problem of automatic fill-on-the-blanks quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases."
]
] |
2a46db1b91de4b583d4a5302b2784c091f9478cc | How many examples do they have in the target domain? | [
"Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)"
] | [
[]
] |
48fa2ccc236e217fcf0e5aab0e7a146faf439b02 | Does Grail accept Prolog inputs? | [
"No"
] | [
[
"In its general form, a type-logical grammar consists of following components:"
]
] |
2b52d481b30185d2c6e7b403d37277f70337d6ca | What formalism does Grail use? | [
"a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors)."
] | [
[
"This chapter describes the underlying formalism of the theorem provers, as it is visible during an interactive proof trace, and present the general strategy followed by the theorem provers. The presentation in this chapter is somewhat informal, referring the reader elsewhere for full proofs.",
"The rest of this chapter is structured as follows. Section \"Type-logical grammars\" presents a general introduction to type-logical grammars and illustrates its basic concepts using the Lambek calculus, ending the section with some problems at the syntax-semantics interface for the Lambek calculus. Section \"Modern type-logical grammars\" looks at recent developments in type-logical grammars and how they solve some of the problems at the syntax-semantics interface. Section \"Theorem proving\" looks at two general frameworks for automated theorem proving for type-logical grammars, describing the internal representation of partial proofs and giving a high-level overview of the proof search mechanism.",
"Type-logical grammars are a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors). Though Lambek built on the work of BIBREF5 , BIBREF6 and others, Lambek's main innovation was to cast the calculus as a logic, giving a sequent calculus and showing decidability by means of cut elimination. This combination of linguistic and computational applications has proved very influential."
]
] |
0fa81adf00662694e1dc74475ae2b9283c50748c | Which components of QA and QG models are shared during training? | [
"parameter sharing"
] | [
[
"We proposed a neural machine comprehension model that can jointly ask and answer questions given a document. We hypothesized that question answering can benefit from synergistic interaction between the two tasks through parameter sharing and joint training under this multitask setting. Our proposed model adopts an attention-based sequence-to-sequence architecture that learns to dynamically switch between copying words from the document and generating words from a vocabulary. Experiments with the model confirm our hypothesis: the joint model outperforms its QA-only counterpart by a significant margin on the SQuAD dataset."
]
] |
4ade72bfa28bd1f6b75cc7fa687fa634717782f2 | How much improvement does jointly learning QA and QG give, compared to only training QA? | [
"We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points. "
] | [
[
"Evaluation results are provided in Table 1 . We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points. Performance of q-gen worsens after joint training, but the decrease is relatively small. Furthermore, as pointed out by earlier studies, automatic metrics often do not correlate well with the generation quality assessed by humans BIBREF9 . We thus consider the overall outcome to be positive."
]
] |
fb381a59732474dc71a413e25cac37e239547b55 | Do they test their word embeddings on downstream tasks? | [
"Yes"
] | [
[
"Outlier Detection. The Outlier Detection task BIBREF0 is to determine which word in a list INLINEFORM0 of INLINEFORM1 words is unrelated to the other INLINEFORM2 which were chosen to be related. For each INLINEFORM3 , one can compute its compactness score INLINEFORM4 , which is the compactness of INLINEFORM5 . INLINEFORM6 is explicitly computed as the mean similarity of all word pairs INLINEFORM7 . The predicted outlier is INLINEFORM8 , as the INLINEFORM9 related words should form a compact cluster with high mean similarity.",
"Sentiment analysis. We also consider sentiment analysis as described by BIBREF31 . We use the suggested Large Movie Review dataset BIBREF32 , containing 50,000 movie reviews."
]
] |
a9b10e3db5902c6142e7d6a83253ad2a6cee77fc | What are the main disadvantages of their proposed word embeddings? | [
"Unanswerable"
] | [
[]
] |
54415efa91566d5d7135fa23bce3840d41a6389e | What dimensions of word embeddings do they produce using factorization? | [
"300-dimensional vectors"
] | [
[
"As is common in the literature BIBREF4 , BIBREF8 , we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation."
]
] |
dcd22abfc9e7211925c0393adc30dbd4711a9f88 | On which dataset(s) do they compute their word embeddings? | [
"10 million sentences gathered from Wikipedia"
] | [
[
"For a fair comparison, we trained each model on the same corpus of 10 million sentences gathered from Wikipedia. We removed stopwords and words appearing fewer than 2,000 times (130 million tokens total) to reduce noise and uninformative words. Our word2vec and NNSE baselines were trained using the recommended hyperparameters from their original publications, and all optimizers were using using the default settings. Hyperparameters are always consistent across evaluations."
]
] |
05238d1fad2128403577822aa4822ef8ca9570ac | Do they measure computation time of their factorizations compared to other word embeddings? | [
"Yes"
] | [
[
"When considering going from two dimensions to three, it is perhaps necessary to discuss the computational issues in such a problem size increase. However, it should be noted that the creation of pre-trained embeddings can be seen as a pre-processing step for many future NLP tasks, so if the training can be completed once, it can be used forever thereafter without having to take training time into account. Despite this, we found that the training of our embeddings was not considerably slower than the training of order-2 equivalents such as SGNS. Explicitly, our GPU trained CBOW vectors (using the experimental settings found below) in 3568 seconds, whereas training CP-S and JCP-S took 6786 and 8686 seconds respectively."
]
] |
6ee27ab55b1f64783a9e72e3f83b7c9ec5cc8073 | What datasets are experimented with? | [
"the CMU ARCTIC database BIBREF33, the M-AILABS speech dataset BIBREF34 "
] | [
[
"We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long."
]
] |
bb4de896c0fa4bf3c8c43137255a4895f52abeef | What is the baseline model? | [
"a RNN-based seq2seq VC model called ATTS2S based on the Tacotron model"
] | [
[
"Next, we compared our VTN model with an RNN-based seq2seq VC model called ATTS2S BIBREF8. This model is based on the Tacotron model BIBREF32 with the help of context preservation loss and guided attention loss to stabilize training and maintain linguistic consistency after conversion. We followed the configurations in BIBREF8 but used mel spectrograms instead of WORLD features."
]
] |
d9eacd965bbdc468da522e5e6fe7491adc34b93b | What model do they train? | [
"Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier"
] | [
[
"We carried out the experimentation with a range of classifiers of different types: Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier. They were tested in two different settings, one without balancing the weights of the different classes and the other by weighing the classes as the inverse of their frequency in the training set; the latter was tested as a means for dealing with the highly imbalanced data. The selection of these classifiers is in line with those used in the literature, especially with those tested by Han et al. BIBREF41 . This experimentation led to the selection of the weighed Maximum Entropy (MaxEnt) classifier as the most accurate. In the interest of space and focus, we only present results for this classifier."
]
] |
ebae0cd1fe0e7ba877d4b3055190e8b1dfcaeb53 | What are the eight features mentioned? | [
"User location (uloc), User language (ulang), Timezone (tz), Tweet language (tlang), Offset (offset), User name (name), User description (description), Tweet content (content)"
] | [
[
"We created eight different classifiers, each of which used one of the following eight features available from a tweet as retrieved from a stream of the Twitter API:",
"User location (uloc): This is the location the user specifies in their profile. While this feature might seem a priori useful, it is somewhat limited as this is a free text field that users can leave empty, input a location name that is ambiguous or has typos, or a string that does not match with any specific locations (e.g., “at home”). Looking at users' self-reported locations, Hecht et al. BIBREF49 found that 66% report information that can be translated, accurately or inaccurately, to a geographic location, with the other 34% being either empty or not geolocalisable.",
"User language (ulang): This is the user's self-declared user interface language. The interface language might be indicative of the user's country of origin; however, they might also have set up the interface in a different language, such as English, because it was the default language when they signed up or because the language of their choice is not available.",
"Timezone (tz): This indicates the time zone that the user has specified in their settings, e.g., “Pacific Time (US & Canada)”. When the user has specified an accurate time zone in their settings, it can be indicative of their country of origin; however, some users may have the default time zone in their settings, or they may use an equivalent time zone belonging to a different location (e.g., “Europe/London” for a user in Portugal). Also, Twitter's list of time zones does not include all countries.",
"Tweet language (tlang): The language in which a tweet is believed to be written is automatically detected by Twitter. It has been found to be accurate for major languages, but it leaves much to be desired for less widely used languages. Twitter's language identifier has also been found to struggle with multilingual tweets, where parts of a tweet are written in different languages BIBREF50 .",
"Offset (offset): This is the offset, with respect to UTC/GMT, that the user has specified in their settings. It is similar to the time zone, albeit more limited as it is shared with a number of countries.",
"User name (name): This is the name that the user specifies in their settings, which can be their real name, or an alternative name they choose to use. The name of a user can reveal, in some cases, their country of origin.",
"User description (description): This is a free text where a user can describe themselves, their interests, etc.",
"Tweet content (content): The text that forms the actual content of the tweet. The use of content has a number of caveats. One is that content might change over time, and therefore new tweets might discuss new topics that the classifiers have not seen before. Another caveat is that the content of the tweet might not be location-specific; in a previous study, Rakesh et al. BIBREF51 found that the content of only 289 out of 10,000 tweets was location-specific."
]
] |
8e630c5a4a8ba0a4f5d8c483a2bf09c4ac8020ce | How many languages are considered in the experiments? | [
"Unanswerable"
] | [
[]
] |
0b24b5a652d674d4694668d889643bc1accf18ef | How did they evaluate the system? | [
"Unanswerable"
] | [
[]
] |
1fb73176394ef59adfaa8fc7827395525f9a5af7 | Where did they get training data? | [
"AmazonQA and ConciergeQA datasets"
] | [
[]
] |
3a3a65c65cebc2b8c267c334e154517d208adc7d | What extraction model did they use? | [
"Multi-Encoder, Constrained-Decoder model"
] | [
[]
] |
d70ba6053e245ee4179c26a5dabcad37561c6af0 | Which datasets did they experiment on? | [
"ConciergeQA and AmazonQA"
] | [
[]
] |
802687121a98ba4d7df1f8040ea0dc1cc9565b69 | What types of facts can be extracted from QA pairs that can't be extracted from general text? | [
"Unanswerable"
] | [
[]
] |
f1bd66bb354e3dabf5dc4a71e6f08b17d472ecc9 | How do slot binary classifiers improve performance? | [
"by adding extra supervision to generate the slots that will be present in the response"
] | [
[
"This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component of FSDM has the advantages of both fully structured and free-form approaches while addressing their shortcomings. On one hand, it is still structured, as it incorporates information about slots in KB schema; on the other hand, it is flexible, as it does not use information about the values contained in the KB records. This makes it easily adaptable to new values. These desirable properties are achieved by a separate decoder for each informable slot and a multi-label classifier for the requestable slots. Those components explicitly assign values to slots like the fully structured approach, while also preserving the capability of dealing with out-of-vocabulary words like the free-form approach. By using these two types of decoders, FSDM produces only valid belief states, overcoming the limitations of the free-form approach. Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section \"Methodology\" for details)."
]
] |
25fd61bb20f71051fe2bd866d221f87367e81027 | What baselines have been used in this work? | [
"NDM, LIDM, KVRN, and TSCP/RL"
] | [
[
"We compare FSDM with four baseline methods and two ablations.",
"NDM BIBREF7 proposes a modular end-to-end trainable network. It applies de-lexicalization on user utterances and responses.",
"LIDM BIBREF9 improves over NDM by employing a discrete latent variable to learn underlying dialogue acts. This allows the system to be refined by reinforcement learning.",
"KVRN BIBREF13 adopts a copy-augmented Seq2Seq model for agent response generation and uses an attention mechanism on the KB. It does not perform belief state tracking.",
"TSCP/RL BIBREF10 is a two-stage CopyNet which consists of one encoder and two copy-mechanism-augmented decoders for belief state and response generation. TSCP includes further parameter tuning with reinforcement learning to increase the appearance of response slots in the generated response. We were unable to replicate the reported results using the provided code, hyperparameters, and random seed, so we report both the results from the paper and the average of 5 runs on the code with different random seeds (marked with $^\\dagger $ )."
]
] |
a1c5b95e407127c6bb2f9a19b7d9b1f1bcd4a7a5 | Do sluice networks outperform non-transfer learning approaches? | [
"Yes"
] | [
[]
] |
5b99f74bb25bc88677621443bf065d96d84895ab | What is hard parameter sharing? | [
"Unanswerable"
] | [
[]
] |
70e596dd4334a94844454fa7b565889556e2358d | How successful are they at matching names of authors in Japanese and English? | [
"180221 of 231162 author names could be matched successfully"
] | [
[
"Fortunately, 180221 of 231162 author names could be matched successfully. There are many reasons for the remaining uncovered cases. 9073 Latin names could not be found in the name dictionary ENAMDICT and 14827 name matchings between the names' Latin and kanji representations did not succeed. These names might be missing at all in the dictionary, delivered in a very unusual format that the tool does not cover, or might not be Japanese or human names at all. Of course, Japanese computer scientists sometimes also cooperate with foreign colleagues but our tool expects Japanese names and is optimized for them. Both IPSJ DL and ENAMDICT provide katakana representations for some Western names. However, katakana representations for Western names are irrelevant for projects like DBLP. But for instance, Chinese names in Chinese characters are relevant. Understandably, our tool does not support any special Personal Name Matching for Chinese names yet because our work is focused on Japanese names. The tool does not take account of the unclassified names of ENAMDICT by default. We can increase the general success rate of the Name Matching process by enabling the inclusion of unclassified names in the configuration file but the quality of the Name Matching process will decrease because the correct differentiation between given and family name cannot be guaranteed anymore. An unclassified name may substitute a given or a family name."
]
] |
18dab362ae4587408a291a55299f347f8870e9f1 | Is their approach applicable to papers outside computer science? | [
"Unanswerable"
] | [
[]
] |
9c2de35d07f0d536bfdefe4828d66dd450de2b61 | Do they translate metadata from Japanese papers to English? | [
"No"
] | [
[]
] |
8d793bda51a53a4605c1c33e7fd20ba35581a518 | what bottlenecks were identified? | [
"Confusion in recognizing the words that are active at a given node by a speech recognition solution developed for Indian Railway Inquiry System."
] | [
[
"In this paper we proposed a methodology to identify words that could lead to confusion at any given node of a speech recognition based system. We used edit distance as the metric to identifying the possible confusion between the active words. We showed that this metric can be used effectively to enhance the performance of a speech solution without actually putting it to people test. There is a significant saving in terms of being able to identify recognition bottlenecks in a menu based speech solution through this analysis because it does not require actual people testing the system. This methodology was adopted to restructuring the set of active words at each node for better speech recognition in an actual menu based speech recognition system that caters to masses.",
"We hypothesize that we can identify the performance of a menu based speech system by identifying the possible confusion among all the words that are active at a given node. If active words at a given node are phonetically similar it becomes difficult for the speech recognition system to distinguish them which in turn leads to recognition errors. We used Levenshtein distance BIBREF4 , BIBREF5 a well known measure to analyze and identify the confusion among the active words at a given node. This analysis gives a list of all set of words that have a high degree of confusability among them; this understanding can be then used to (a) restructure the set of active words at that node and/or (b) train the words that can be confused by using a larger corpus of speech data. This allows the speech recognition engine to be equipped to be able to distinguish the confusing words better. Actual use of this analysis was carried out for a speech solution developed for Indian Railway Inquiry System to identify bottlenecks in the system before its actual launch."
]
] |
8f838ec579f2609b01227da3d8c77860ac1b39d2 | What is grounded language understanding? | [
"Unanswerable"
] | [
[]
] |
1835f65694698a9153857e33cd9b86a96772fff5 | Does the paper report the performance on the task of a Neural Machine Translation model? | [
"No"
] | [
[]
] |
a61732774faf30bab15bf944b2360ec4710870c1 | What are the predefined morpho-syntactic patterns used to filter the training data? | [
"Unanswerable"
] | [
[]
] |
994ac7aa662d16ea64b86510fcf9efa13d17b478 | Is the RNN model evaluated against any baseline? | [
"Yes"
] | [
[
"For this reason as a baseline algorithm for English dataset we refer to results from BIBREF0, and as for Russian dataset, we used the probabilistic language model, described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word, given the word’s context: the preceding word. As in the following equation:"
]
] |
9282cf80265a914a13053ab23b77d1a8ed71db1b | Which languages are used in the paper? | [
"English, Russian"
] | [
[
"For this reason as a baseline algorithm for English dataset we refer to results from BIBREF0, and as for Russian dataset, we used the probabilistic language model, described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word, given the word’s context: the preceding word. As in the following equation:"
]
] |
41bff17f7d7e899c03b051e20ef01f0ebc5c8bb1 | What metrics are used for evaluation? | [
"ROUGE BIBREF29 and METEOR BIBREF30"
] | [
[
"We automatically evaluate on four different types of interpolations (where different combinations of sentences are removed and the model is forced to regenerate them), We evaluate the generations with the ROUGE BIBREF29 and METEOR BIBREF30 metrics using the true sentences as targets. Table TABREF33 shows the automatic evaluation results from interpolations using our proposed models and baselines. The #Sent(s) column indicates which sentence(s) were removed, and then regenerated by the model. We gave the baselines a slight edge over SLDS because they pick the best out of 1000 samples while SLDS is only out of 50. The SLDS models see their largest gain over the baseline models when at least the first sentence is given as an input. The baseline models do better when the first and second sentence need to be imputed. This is likely due to the fact that having access to the earlier sentences allows a better initialization for the Gibbs sampler. Surprisingly, the semi-supervised variants of the SLDS models achieve higher scores. The reasons for this is discussed below in the Perplexity section."
]
] |
b03e8e9a0cd2a44a215082773c7338f2f3be412a | What baselines are used? | [
"a two layer recurrent neural language model with GRU cells of hidden size 512, a two layer neural sequence to sequence model equipped with bi-linear attention function with GRU cells of hidden size 512, a linear dynamical system, semi-supervised SLDS models with varying amount of labelled sentiment tags"
] | [
[
"Language Model (LM): We train a two layer recurrent neural language model with GRU cells of hidden size 512.",
"Sequence-to-Sequence Attention Model (S2S): We train a two layer neural sequence to sequence model equipped with bi-linear attention function with GRU cells of hidden size 512. Sentiments tags for a narrative (1 for each sentence) are given as input to the model and the corresponding sentences are concatenated together as the output with only one <eos> tag at the end. This model is trained with a 0.1 dropout. This model is comparable to the static model of BIBREF7, and other recent works employing a notion of scaffolding into neural generation (albeit adapted for our setting).",
"Linear Dynamical System (LDS): We also train a linear dynamical system as discussed in Section SECREF1 as one of our baselines for fair comparisons. Apart from having just a single transition matrix this model has the same architectural details as SLDS.",
"Semi-Supervised SLDS (SLDS-X%): To gauge the usability of semi-supervision, we also train semi-supervised SLDS models with varying amount of labelled sentiment tags unlike the original model which uses 100% tagged data. We refer to these as SLDS-X%, where X is the % labelled data used for training: 1%, 10%, 25%, and 50%."
]
] |
f608fbc7a4a10a79698f340e2948c4c7034642d5 | Which model is used to capture the implicit structure? | [
"Bi-directional LSTM, self-attention "
] | [
[
"We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention.",
"Given an input token sequence $\\mathbf {x}=\\lbrace x_1,x_2,\\cdots ,x_{n}\\rbrace $ of length $n$, we first compute the concatenated embedding $\\mathbf {e}_k=[\\mathbf {w}_k;\\mathbf {c}_k]$ based on word embedding $\\mathbf {w}_k$ and character embedding $\\mathbf {c}_k$ at position $k$.",
"As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\\mathbf {h}_k=\\mathrm {BiLSTM}(\\mathbf {e_1},\\mathbf {e_2}, \\cdots , \\mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows:",
"As illustrated in Figure FIGREF14, we calculate $\\mathbf {a}_k$, the output of self-attention at position $k$:",
"where $\\alpha _{k,j}$ is the normalized weight score for $\\mathbf {\\beta }_{k,j}$, and $\\mathbf {\\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. In addition, $W$ and $b$ as well as the attention matrix $U$ are the weights to be learned. Such a vector $\\mathbf {a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence."
]
] |
9439430ff97c6e927d919860b1cb86a0dcff0038 | How is the robustness of the model evaluated? | [
"10-fold cross validation"
] | [
[
"We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26."
]
] |
00d6228bcd6b839529e52d0d622bf787a9356158 | How is the effectiveness of the model evaluated? | [
"precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment"
] | [
[
"Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both target boundary and sentiment polarity to be correct."
]
] |
c3d50f1e6942c9894f9a344e7cbc411af01e419c | Do they assume sentence-level supervision? | [
"No"
] | [
[
"To better exploit such existing data sources, we propose an end-to-end (E2E) model based on pointer networks with attention, which can be trained end-to-end on the input/output pairs of human IE tasks, without requiring token-level annotations.",
"Since our model does not need token-level labels, we create an E2E version of each data set without token-level labels by chunking the BIO-labeled words and using the labels as fields to extract. If there are multiple outputs for a single field, e.g. multiple destination cities, we join them with a comma. For the ATIS data set, we choose the 10 most common labels, and we use all the labels for the movie and restaurant corpus. The movie data set has 12 fields and the restaurant has 8. See Table 2 for an example of the E2E ATIS data set."
]
] |
602396d1f5a3c172e60a10c7022bcfa08fa6cbc9 | By how much do they outperform BiLSTMs in Sentiment Analysis? | [
"Proposed RCRN outperforms ablative baselines BiLSTM by +2.9% and 3L-BiLSTM by +1.1% on average across 16 datasets."
] | [
[
"On the 16 review datasets (Table TABREF22 ) from BIBREF32 , BIBREF31 , our proposed RCRN architecture achieves the highest score on all 16 datasets, outperforming the existing state-of-the-art model - sentence state LSTMs (SLSTM) BIBREF31 . The macro average performance gain over BiLSTMs ( INLINEFORM0 ) and Stacked (2 X BiLSTM) ( INLINEFORM1 ) is also notable. On the same architecture, our RCRN outperforms ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across 16 datasets."
]
] |
b984612ceac5b4cf5efd841af2afddd244ee497a | Does their model have more parameters than other models? | [
"approximately equal parameterization"
] | [
[
"Across all 26 datasets, RCRN outperforms not only standard BiLSTMs but also 3L-BiLSTMs which have approximately equal parameterization. 3L-BiLSTMs were overall better than BiLSTMs but lose out on a minority of datasets. RCRN outperforms a wide range of competitive baselines such as DiSAN, Bi-SRUs, BCN and LSTM-CNN, etc. We achieve (close to) state-of-the-art performance on SST, TREC question classification and 16 Amazon review datasets."
]
] |
bde6fa2057fa21b38a91eeb2bb6a3ae7fb3a2c62 | what state of the accuracy did they obtain? | [
"51.5"
] | [
[]
] |
a381ba83a08148ce0324b48b8ff35128e66f580a | what models did they compare to? | [
"High-order CNN, Tree-LSTM, DRNN, DCNN, CNN-MC, NBoW and SVM "
] | [
[
"The test-set accuracies obtained by different learning methods, including the current state-of-the-art results, are presented in Table TABREF11 . The results indicate that the bag-of-words MVN outperforms most methods, but obtains lower accuracy than the state-of-the-art results achieved by the tree-LSTM BIBREF21 , BIBREF22 and the high-order CNN BIBREF16 . However, when augmented with 4 convolutional features as described in Section SECREF9 , the MVN strategy surpasses both of these, establishing a new state-of-the-art on this benchmark."
]
] |
edb068df4ffbd73b379590762125990fcd317862 | which benchmark tasks did they experiment on? | [
" They used Stanford Sentiment Treebank benchmark for sentiment classification task and AG English news corpus for the text classification task."
] | [
[
"Experiments on two benchmark data sets, the Stanford Sentiment Treebank BIBREF7 and the AG English news corpus BIBREF3 , show that 1) our method achieves very competitive accuracy, 2) some views distinguish themselves from others by better categorizing specific classes, and 3) when our base bag-of-words feature set is augmented with convolutional features, the method establishes a new state-of-the-art for both data sets."
]
] |
8ea664a72e6d6eca73c1b3e1f75a72a677474ab1 | Are recurrent neural networks trained on perturbed data? | [
"No"
] | [
[
"For evaluation, we used the widely popular dialog act and intent prediction datasets. MRDA BIBREF12 is a dialog corpus of multi-party meetings with 6 classes, 78K training and 15K test data; ATIS BIBREF13 is intent prediction dataset for flight reservations with 21 classes, 4.4K training and 893 test examples; and SWDA BIBREF14, BIBREF15 is an open domain dialog corpus between two speakers with 42 classes, 193K training and 5K test examples. For fair comparison, we train LSTM baseline with sub-words and 240 vocabulary size on MRDA, ATIS and SWDA. We uniformly randomly initialized the input word embeddings. We also trained the on-device SGNN model BIBREF6. Then, we created test sets with varying levels of perturbation operations - $\\lbrace 20\\%,40\\%,60\\%\\rbrace $."
]
] |
5e41516a27c587aa2f80dba8cf4c3f616174099b | How does their perturbation algorihm work? | [
"same sentences after applying character level perturbations"
] | [
[
"In this section, we analyze the Hamming distance between the projections of the sentences from the enwik9 dataset and the corresponding projections of the same sentences after applying character level perturbations. We experiment with three types of character level perturbation BIBREF11 and two types of word level perturbation operations.",
"Perturbation Study ::: Character Level Perturbation Operations",
"insert(word, n) : We randomly choose n characters from the character vocabulary and insert them at random locations into the input word. We however retain the first and last characters of the word as is. Ex. transformation: $sample \\rightarrow samnple$.",
"swap(word, n): We randomly swap the location of two characters in the word n times. As with the insert operation, we retain the first and last characters of the word as is and only apply the swap operation to the remaining characters. Ex. transformation: $sample \\rightarrow sapmle$.",
"duplicate(word, n): We randomly duplicate a character in the word by n times. Ex. transformation: $sample \\rightarrow saample$.",
"Perturbation Study ::: Character Level Perturbation Operations ::: Word Level Perturbation Operations",
"drop(sentence, n): We randomly drop n words from the sentence. Ex. transformation: This is a big cat. $\\rightarrow $ This is a cat.",
"duplicate(sentence, n): Similar to duplicate(word, n) above, we randomly duplicate a word in the sentence n times. Ex. transformation: This is a big cat. $\\rightarrow $ This is a big big cat.",
"swap(sentence, n): Similar to swap(word, n), we randomly swap the location of two words in the sentence n times. Ex. transformation: This is a big cat. $\\rightarrow $ This cat is big."
]
] |
edc43e1b75c0970b7003deeabfe3ad247cb1ed83 | Which language is divided into six dialects in the task mentioned in the paper? | [
"Akkadian."
] | [
[
"The focus of the aforementioned language and dialect identification competitions was diatopic variation and thus the data made available in these competitions was synchronic contemporary corpora. In the 2019 edition of the workshop, for the first time, a task including historical languages was organized. The CLI shared task provided participants with a dataset containing languages and dialects written in cuneiform script: Sumerian and Akkadian. Akkadian is divided into six dialects in the dataset: Old Babylonian, Middle Babylonian peripheral, Standard Babylonian, Neo Babylonian, Late Babylonian, and Neo Assyrian BIBREF14 ."
]
] |
0c3924214572579ddbc1b4a87c7f7842ef20ff1b | What is one of the first writing systems in the world? | [
"Cuneiform"
] | [
[
"As evidenced in Section \"Related Work\" , the focus of most of these studies is the identification of languages and dialects using contemporary data. A few exceptions include the work by trieschnigg2012exploration who applied language identification methods to historical varieties of Dutch and the work by CLIarxiv on languages written in cuneiform script: Sumerian and Akkadian. Cuneiform is an ancient writing system invented by the Sumerians for more than three millennia."
]
] |
4519afe91b1042876d7c021487d98e2d72a09861 | How do they obtain distant supervision rules for predicting relations? | [
"dominant temporal associations can be learned from training data"
] | [
[
"Phase 2: In order to predict the relationship between an event and the creation time of its parent document, we assign a DocRelTime random variable to every Timex3 and Event mention. For Events, these values are provided by the training data, for Timex3s we have to compute class labels. Around 42% of Timex3 mentions are simple dates (“12/29/08\", “October 16\", etc.) and can be naively canonicalized to a universal timestamp. This is done using regular expressions to identify common date patterns and heuristics to deal with incomplete dates. The missing year in “October 16\", for example, can be filled in using the nearest preceding date mention; if that isn't available we use the document creation year. These mentions are then assigned a class using the parent document's DocTime value and any revision timestamps. Other Timex3 mentions are more ambiguous so we use a distant supervision approach. Phrases like “currently\" and “today's\" tend to occur near Events that overlap the current document creation time, while “ago\" or “ INLINEFORM0 -years\" indicate past events. These dominant temporal associations can be learned from training data and then used to label Timex3s. Finally, we define a logistic regression rule to predict entity DocRelTime values as well as specify a linear skip-chain factor over Event mentions and their nearest Timex3 neighbor, encoding the baseline system heuristic directly as an inference rule."
]
] |
0cfaca6f3f33ebdb338c5f991f6a7a33ff33844d | Which structured prediction approach do they adopt for temporal entity extraction? | [
"DeepDive BIBREF1"
] | [
[
"We examine a deep-learning approach to sequence labeling using a vanilla recurrent neural network (RNN) with word embeddings, as well as a joint inference, structured prediction approach using Stanford's knowledge base construction framework DeepDive BIBREF1 . Our DeepDive application outperformed the RNN and scored similarly to 2015's best-in-class extraction systems, even though it only used a small set of context window and dictionary features. Extraction performance, however lagged this year's best system submission. For document creation time relations, we again use DeepDive. Our system examined a simple temporal distant supervision rule for labeling time expressions and linking them to nearby event mentions via inference rules. Overall system performance was better than this year's median submission, but again fell short of the best system."
]
] |
70c2dc170a73185c9d1a16953f85aca834ead6d3 | Which evaluation metric has been measured? | [
"Mean Average Precision"
] | [
[
"In order to evaluate the precision of the retrieved documents in each experiment, we used \"TREC_Eval\" tool [3]. TREC_Eval is a standard tool for evaluation of IR tasks and its name is a short form of Text REtrieval Conference (TREC) Evaluation tool. The Mean Average Precision (MAP) reported by TREC_Eval was 27.99% without query expansion and 37.10% with query expansion which shows more than 9 percent improvement."
]
] |
38854255dbdf2f36eebefc0d9826aa76df9637c6 | What is the WordNet counterpart for Persian? | [
"FarsNet"
] | [
[
"FarsNet [20] [21] is the first WordNet for Persian, developed by the NLP Lab at Shahid Beheshti University and it follows the same structure as the original WordNet. The first version of FarsNet contained more than 10,000 synsets while version 2.0 and 2.5 contained 20,000 synsets. Currently, FarsNet version 3 is under release and contains more than 40,000 synsets [7]."
]
] |