Dataset columns:
  id      string, length 40
  pid     string, length 42
  input   string, length 8.37k to 169k
  output  string, length 1 to 1.63k
id:  66f0dee89f084fe0565539a73f5bbe65f3677814
pid: 66f0dee89f084fe0565539a73f5bbe65f3677814_0
Q: Do they use pretrained word representations in their neural network models?

Text:

Introduction
Grammatical error correction (GEC) is a challenging task due to the variability of the types of errors and the syntactic and semantic dependencies of the errors on the surrounding context. Most grammatical error correction systems use classification and rule-based approaches for correcting specific error types. However, these systems use several linguistic cues as features. The standard linguistic analysis tools like part-of-speech (POS) taggers and parsers are often trained on well-formed text and perform poorly on ungrammatical text. This introduces further errors and limits the performance of rule-based and classification approaches to GEC. As a consequence, the phrase-based statistical machine translation (SMT) approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from error-corrected parallel corpora without any additional linguistic information. SMT systems are also not limited to specific error types. Currently, many state-of-the-art GEC systems are based on SMT or use SMT components for error correction BIBREF0, BIBREF1, BIBREF2. In this paper, grammatical error correction includes correcting errors of all types, including word choice errors and collocation errors, which constitute a large class of learners' errors.
We model our GEC system based on the phrase-based SMT approach. However, traditional phrase-based SMT systems treat words and phrases as discrete entities. We take advantage of continuous space representation by adding two neural network components that have been shown to improve SMT systems BIBREF3, BIBREF4. These neural networks are able to capture non-linear relationships between source and target sentences and can encode contextual information more effectively. Our experiments show that the addition of these two neural networks leads to significant improvements over a strong baseline and outperforms the current state of the art.

Related Work
In the past decade, there has been increasing attention on grammatical error correction in English, mainly due to the growing number of English as a Second Language (ESL) learners around the world. The popularity of this problem in natural language processing research grew further through Helping Our Own (HOO) and the CoNLL shared tasks BIBREF5, BIBREF6, BIBREF7, BIBREF8. Most published work in GEC aimed at building specific classifiers for different error types and then using them to build hybrid systems BIBREF9, BIBREF10. One of the first approaches to using SMT for GEC focused on correcting countability errors of mass nouns (e.g., many informations INLINEFORM0 much information) BIBREF11. They had to use an artificially constructed parallel corpus for training their SMT system. Later, the availability of large-scale error-corrected data BIBREF12 further improved SMT-based GEC systems. Recently, continuous space representations of words and phrases have been incorporated into SMT systems via neural networks. Specifically, the addition of monolingual neural network language models BIBREF13, BIBREF14, neural network joint models (NNJM) BIBREF4, and neural network global lexicon models (NNGLM) BIBREF3 has been shown to be useful for SMT. Neural networks have previously been used for GEC as a language model feature in the classification approach BIBREF15 and as a classifier for article error correction BIBREF16.
Recently, a neural machine translation approach has been proposed for GEC BIBREF17. This method uses a recurrent neural network to perform sequence-to-sequence mapping from erroneous to well-formed sentences. Additionally, it relies on a post-processing step based on statistical word-based translation models to replace out-of-vocabulary words. In this paper, we investigate the effectiveness of two neural network models, NNGLM and NNJM, in SMT-based GEC. To the best of our knowledge, there is no prior work that uses these two neural network models for SMT-based GEC.

A Machine Translation Framework for Grammatical Error Correction
In this paper, the task of grammatical error correction is formulated as a translation task from the language of `bad' English to the language of `good' English. That is, the source sentence is written by a second language learner and potentially contains grammatical errors, whereas the target sentence is the corrected fluent sentence. We use a phrase-based machine translation framework BIBREF18 for translation, which employs a log-linear model to find the best translation INLINEFORM0 given a source sentence INLINEFORM1. The best translation is selected according to the following equation: INLINEFORM2 where INLINEFORM0 is the number of features, and INLINEFORM1 and INLINEFORM2 are the INLINEFORM3-th feature function and feature weight, respectively. We make use of the standard features used in phrase-based translation without any reordering, leading to monotone translations. The features can be broadly categorized as translation model and language model features. The translation model in the phrase-based machine translation framework is trained using parallel data, i.e., sentence-aligned erroneous source text and corrected target text. The translation model is responsible for finding the best transformation of the source sentence to produce the corrected sentence. On the other hand, the language model is trained on well-formed English text, which ensures the fluency of the corrected text. To find the optimal feature weights (INLINEFORM4), we use minimum error rate training (MERT), maximizing the INLINEFORM5 measure on the development set BIBREF2. The INLINEFORM6 measure BIBREF19, which weights precision twice as much as recall, is the evaluation metric widely used for GEC and was the official evaluation metric adopted in the CoNLL 2014 shared task BIBREF8. Additionally, we augment the feature set by adding two neural network translation models, namely a neural network global lexicon model BIBREF3 and a neural network joint model BIBREF4. These models are described in detail in Sections SECREF4 and SECREF5.
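To make the log-linear formulation above concrete, here is a minimal Python sketch of selecting the highest-scoring candidate correction as a weighted sum of log feature scores. The feature names, values, and weights are made up for illustration; a real phrase-based decoder such as Moses searches over phrase segmentations rather than ranking a fixed candidate list.

```python
# Minimal sketch of log-linear hypothesis selection with two hypothetical
# features (a translation-model score and a language-model score).
import math

def score(feature_log_probs, weights):
    """Weighted sum of log feature scores for one hypothesis."""
    return sum(w * h for w, h in zip(weights, feature_log_probs))

def best_translation(candidates, weights):
    """Return the candidate with the highest log-linear score."""
    return max(candidates, key=lambda c: score(c["features"], weights))

# Hypothetical candidates for the source "He go to school every days ."
candidates = [
    {"text": "He go to school every days .",
     "features": [math.log(0.9), math.log(0.001)]},   # [TM, LM] log probs
    {"text": "He goes to school every day .",
     "features": [math.log(0.4), math.log(0.02)]},
]
weights = [0.5, 1.0]  # feature weights; in practice tuned by MERT on F0.5
print(best_translation(candidates, weights)["text"])
```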
Neural Network Global Lexicon Model
A global lexicon model is used to predict the presence of words in the corrected output. The model estimates the overall probability of a target hypothesis (i.e., a candidate corrected sentence) given the source sentence by making use of the probability computed for each word in the hypothesis. The individual word probabilities can be computed by training density estimation models such as maximum entropy BIBREF20 or probabilistic neural networks BIBREF3. Following BIBREF3, we formulate our global lexicon model using a feed-forward neural network. The model and the training algorithm are described below.

Model
The probability of a target hypothesis is computed using the following equation: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the source sentence and the target hypothesis respectively, and INLINEFORM2 denotes the number of words in the target hypothesis. INLINEFORM3 is the probability of the target word INLINEFORM4 given the source sentence INLINEFORM5. INLINEFORM6 is the output of the neural network. The architecture of the neural network is shown in Figure FIGREF3. INLINEFORM7 is calculated by: where INLINEFORM0 is the hidden layer output, and INLINEFORM1 and INLINEFORM2 are the output layer weights and biases, respectively. INLINEFORM3 is the element-wise sigmoid function, which scales the output to INLINEFORM4. INLINEFORM0 is computed by the following equation: INLINEFORM1 where INLINEFORM0 is the activation function, and INLINEFORM1 and INLINEFORM2 are the hidden layer weights and biases applied on a binary bag-of-words representation of the input sentence denoted by INLINEFORM3. The size of INLINEFORM4 is equal to the size of the source vocabulary INLINEFORM5, and each element indicates the presence or absence (denoted by 1 or 0, respectively) of a given source word.
The probability of the target hypothesis INLINEFORM0 given the source sentence INLINEFORM1 is estimated by the following equation: DISPLAYFORM0 where INLINEFORM0 is the number of words in the target sentence, INLINEFORM1 is the INLINEFORM2-th target word, and INLINEFORM3 is the context (history) for the target word INLINEFORM4. The context INLINEFORM5 consists of a set of INLINEFORM6 source words represented by INLINEFORM7 and INLINEFORM8 words preceding INLINEFORM9 from the target sentence represented by INLINEFORM10. The context words from the source side are the words in the window of size INLINEFORM11 surrounding the source word INLINEFORM12 that is aligned to the target word INLINEFORM13. The output of the neural network INLINEFORM14 is the output of the final softmax layer, which is given by the following equation: DISPLAYFORM0 where INLINEFORM0 is the output of the neural network before applying softmax and INLINEFORM1 is given by the following equation: INLINEFORM2 The output of the neural network before softmax is computed by applying output layer weights INLINEFORM0 and biases INLINEFORM1 to the hidden layer output INLINEFORM2: INLINEFORM3 INLINEFORM0 is computed by applying weights INLINEFORM1 and biases INLINEFORM2 on the hidden layer input INLINEFORM3 and using a non-linear activation function INLINEFORM4: INLINEFORM5 The input to the hidden layer (INLINEFORM0) is a concatenated vector of context word embeddings: INLINEFORM1 where INLINEFORM0 and INLINEFORM1 are the one-hot representations of the source word INLINEFORM2 and the target word INLINEFORM3, respectively. Similarly, INLINEFORM4 and INLINEFORM5 are the word embedding matrices for the source words and the target words. As we use log probabilities instead of raw probabilities in our GEC system, Equation EQREF11 can be rewritten as the following: DISPLAYFORM0 Finally, since the network is trained by Noise Contrastive Estimation (NCE) (described in Section SECREF14), it becomes self-normalized. This means that INLINEFORM0 will be approximately 1, and hence the raw output of the neural network INLINEFORM1 can be directly used as the log probabilities during decoding.
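The following sketch illustrates the NNGLM forward pass described at the start of this section: a binary bag-of-words vector over the source vocabulary is fed through one hidden layer, and a sigmoid output layer gives an independent probability for every target-vocabulary word, whose product (here a sum of logs) scores a hypothesis. The dimensions, the tanh activation, and the random weights are assumptions for illustration only, not the paper's trained model.

```python
# Sketch of the NNGLM forward pass with NumPy and made-up dimensions.
import numpy as np

rng = np.random.default_rng(0)
V_src, V_tgt, H = 1000, 1000, 64          # vocab sizes and hidden size (assumed)
W_h, b_h = rng.normal(0, 0.1, (H, V_src)), np.zeros(H)
W_o, b_o = rng.normal(0, 0.1, (V_tgt, H)), np.zeros(V_tgt)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nnglm_word_probs(src_bow):
    """P(target word present | source sentence) for every target word."""
    h = np.tanh(W_h @ src_bow + b_h)       # hidden layer (activation assumed)
    return sigmoid(W_o @ h + b_o)          # per-word output probabilities

def hypothesis_log_prob(src_bow, target_word_ids):
    """Log probability of a hypothesis as the sum of its word log probs."""
    probs = nnglm_word_probs(src_bow)
    return float(np.sum(np.log(probs[target_word_ids])))

src_bow = np.zeros(V_src)
src_bow[[3, 17, 256]] = 1.0                # toy source sentence as bag of words
print(hypothesis_log_prob(src_bow, [5, 42, 77]))
```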
Training
The model is trained using mini-batch gradient descent with back-propagation. We use binary cross entropy (Equation EQREF5) as the cost function: DISPLAYFORM0 where INLINEFORM0 refers to the binary bag-of-words representation of the reference target sentence, and INLINEFORM1 is the target vocabulary. Each mini-batch is composed of a fixed number of sentence pairs INLINEFORM2. The training algorithm repeatedly minimizes the cost function calculated for a given mini-batch by updating the parameters according to the gradients.
To avoid the costly softmax layer and thereby speed up both training and decoding, we use Noise Contrastive Estimation (NCE) following BIBREF14. During training, the negative log likelihood cost function is modified to that of a probabilistic binary classifier, which learns to discriminate between the actual target word and INLINEFORM0 random words (noisy samples) per training instance selected from a noise distribution INLINEFORM1. The two classes are INLINEFORM2, indicating that the word is the target word, and INLINEFORM3, indicating that the word is a noisy sample. The conditional probabilities for INLINEFORM4 and INLINEFORM5 given a target word and context are given by: INLINEFORM6 INLINEFORM7 where INLINEFORM0 is the model probability given in Equation EQREF12. The negative log likelihood cost function is replaced by the following function: INLINEFORM1 where INLINEFORM0 refers to the INLINEFORM1-th noise sample for the target word INLINEFORM2. INLINEFORM3 is required for the computation of the neural network output INLINEFORM4. However, setting the term INLINEFORM5 to 1 during training forces the output of the neural network to be self-normalized. Hence, Equation EQREF13 reduces to: DISPLAYFORM0 Using Equation EQREF15 avoids the expensive softmax computation in the final layer and consequently speeds up decoding.

Rescaling
Since the prior probability of observing a particular word in a sentence is usually a small number, the probabilistic output of NNGLM can be biased towards zero. This bias can hurt the performance of our system, and therefore we try to alleviate this problem by rescaling the output after training NNGLM. Our solution is to map the output probabilities to a new probability space by fitting a logistic function on the output. Formally, we use Equation EQREF7 as the mapping function: DISPLAYFORM0 where INLINEFORM0 is the rescaled probability and INLINEFORM1 and INLINEFORM2 are the parameters. For each sentence pair INLINEFORM3 in the development set, we collect training instances of the form INLINEFORM4 for every word INLINEFORM5 in the target vocabulary, where INLINEFORM6 and INLINEFORM7. The value of INLINEFORM8 is set according to the presence (INLINEFORM9) or absence (INLINEFORM10) of the word INLINEFORM11 in the target sentence INLINEFORM12. We use a weighted cross entropy loss function with INLINEFORM13-regularization to train INLINEFORM14 and INLINEFORM15 on the development set: INLINEFORM16 Here, INLINEFORM0 is the number of training samples, INLINEFORM1 is the probability of INLINEFORM2 computed by INLINEFORM3, and INLINEFORM4 and INLINEFORM5 are the weights assigned to the two classes INLINEFORM6 and INLINEFORM7, respectively. In order to balance the two classes, we weight each class inversely proportional to its frequency in the training data (Equation EQREF8) to put more weight on the less frequent class: DISPLAYFORM0 In Equation EQREF8, INLINEFORM0 and INLINEFORM1 are the number of samples in each class.
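The sketch below illustrates the rescaling step just described: fitting a two-parameter logistic map on (NNGLM probability, presence label) pairs with class weights inversely proportional to class frequency and L2 regularization. The functional form sigma(a*p + b), the inverse-frequency weighting scheme, the synthetic data, and the hyperparameters are assumptions for illustration, not the exact procedure used in the paper.

```python
# Sketch: fit p' = sigmoid(a*p + b) by gradient descent on a class-weighted
# cross entropy with L2 regularization on the slope parameter.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_rescaler(p, y, lr=0.1, epochs=500, l2=1e-4):
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    w_pos, w_neg = len(y) / (2 * n_pos), len(y) / (2 * n_neg)  # inverse-frequency weights
    w = np.where(y == 1, w_pos, w_neg)
    a, b = 1.0, 0.0
    for _ in range(epochs):
        q = sigmoid(a * p + b)
        grad = w * (q - y)                 # d(weighted cross entropy)/d(logit)
        a -= lr * (np.mean(grad * p) + l2 * a)
        b -= lr * np.mean(grad)
    return a, b

rng = np.random.default_rng(1)
p = rng.uniform(0, 0.2, 2000)              # raw NNGLM outputs, biased toward zero
y = (rng.uniform(size=2000) < p * 4).astype(float)  # synthetic presence labels
a, b = fit_rescaler(p, y)
print("rescaled example:", sigmoid(a * 0.05 + b))
```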
After training the rescaling model, we use INLINEFORM2 and INLINEFORM3 to calculate INLINEFORM4 according to Equation EQREF7. Finally, we use INLINEFORM5 instead of INLINEFORM6 in Equation EQREF2.

Neural Network Joint Model
Joint models in translation augment the context information in language models with words from the source sentence. A neural network joint model (NNJM) BIBREF4 uses a neural network to model the word probabilities given a context composed of source and target words. NNJM can scale up to a large order of n-grams and still perform well because of its ability to capture semantic information through continuous space representations of words and to learn non-linear relationships between source and target words. Unlike the global lexicon model, NNJM uses a fixed window from the source side and takes the sequence information of words into consideration in order to estimate the probability of the target word. The model and the training method are described below.

Experiments
We describe our experimental setup, including the description of the data we used, the configuration of our baseline system and the neural network components, and the evaluation method, in Section SECREF16, followed by the results and discussion in Section SECREF23.

Setup
We use the popular phrase-based machine translation toolkit Moses as our baseline SMT system. NUCLE BIBREF21, which is the official training data for the CoNLL 2013 and 2014 shared tasks, is used as the parallel text for training. Additionally, we obtain parallel corpora from the Lang-8 Corpus of Learner English v1.0 BIBREF12, which consists of texts written by ESL (English as a Second Language) learners on the language learning platform Lang-8. We use the test data for the CoNLL 2013 shared task as our development data. The statistics of the training and development data are given in Table TABREF19. The source side refers to the original text written by the ESL learners, and the target side refers to the corresponding text hand-corrected by humans. The source side and the target side are sentence-aligned and tokenized.
We train the translation model for our SMT system using a concatenation of the NUCLE and Lang-8 v1.0 parallel data. The training data is cleaned up by removing sentence pairs in which either the source or the target sentence is empty, is too long (greater than 80 tokens), or violates a 9:1 sentence length ratio limit. The translation model uses the default features in Moses, which include the forward and inverse phrase translation probabilities, forward and inverse lexical weights, word penalty, and phrase penalty. We compute the phrase alignments using standard tools in Moses. We use two language model features: a 5-gram language model trained on the target side of NUCLE used for training the translation model, and a 5-gram language model trained on English Wikipedia (INLINEFORM0 1.78 billion tokens). Both language models are estimated with KenLM using modified Kneser-Ney smoothing. We use MERT for tuning the feature weights by optimizing the INLINEFORM1 measure (which weights precision twice as much as recall). This system constitutes our baseline system in Table TABREF24. Our baseline system uses exactly the same training data as BIBREF0 for training the translation model and the language model. The difference between our baseline system and the SMT components of BIBREF0 is that we tune with INLINEFORM2 instead of BLEU and we use the standard Moses configuration without the Levenshtein distance feature.
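As a concrete illustration of the corpus cleanup step in the setup above, the sketch below drops sentence pairs where either side is empty, either side exceeds 80 tokens, or the length ratio between the two sides exceeds 9:1. Whitespace tokenization and the interpretation of the 9:1 limit as a token-length ratio between the two sides are assumptions; this is not the exact cleaning script used by the authors.

```python
# Sketch of parallel-corpus cleanup: empty sides, overlong sentences,
# and pairs violating a 9:1 length ratio are removed.
def clean_parallel(pairs, max_len=80, max_ratio=9.0):
    kept = []
    for src, tgt in pairs:
        src_toks, tgt_toks = src.split(), tgt.split()
        if not src_toks or not tgt_toks:
            continue                                  # empty side
        if len(src_toks) > max_len or len(tgt_toks) > max_len:
            continue                                  # too long
        ratio = max(len(src_toks), len(tgt_toks)) / min(len(src_toks), len(tgt_toks))
        if ratio > max_ratio:
            continue                                  # violates 9:1 ratio limit
        kept.append((src, tgt))
    return kept

pairs = [("He go to school .", "He goes to school ."),
         ("", "Empty source side ."),
         ("a " * 100, "too long " * 50)]
print(len(clean_parallel(pairs)))   # -> 1
```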
On top of our baseline system described above, we incorporate the two neural network components, the neural network global lexicon model (NNGLM) and the neural network joint model (NNJM), as features. Both NNGLM and NNJM are trained using the parallel data used to train the translation model of our baseline system.
We implement NNGLM using the Theano library in Python in order to make use of parallelization with GPUs, thus speeding up training significantly. We use source and target vocabularies of the 10,000 most frequent words on each side. We use a single hidden layer neural network with 2,000 hidden nodes. We use INLINEFORM0 as the activation function for the hidden layer. We optimize the model weights by stochastic gradient descent using a mini-batch size of 100 and a learning rate of 10. We train the model for 45 epochs. The logistic regression function for rescaling is trained using the probabilities obtained from this model on the development set. To speed up tuning and decoding, we pre-compute the probabilities of target words using the source side sentences of the development and the test sets, respectively. We implement a feature function in Moses to compute the probability of a target hypothesis given the source sentence using the pre-computed probabilities.
To train NNJM, we use the publicly available implementation, Neural Probabilistic Language Model (NPLM) BIBREF14. The latest version of Moses can incorporate an NNJM trained using NPLM as a feature while decoding. Similar to NNGLM, we use the parallel text used for training the translation model in order to train NNJM. We use a source context window size of 5 and a target context window size of 4. We select a source context vocabulary of the 16,000 most frequent words from the source side. The target context vocabulary and output vocabulary are set to the 32,000 most frequent words. We use a single hidden layer to speed up training and decoding, with an input embedding dimension of 192 and 512 hidden layer nodes. We use rectified linear units (ReLU) as the activation function. We train NNJM with noise contrastive estimation with 100 noise samples per training instance, which are obtained from a unigram distribution. The neural network is trained for 30 epochs using stochastic gradient descent optimization with a mini-batch size of 128 and a learning rate of 0.1.
We conduct experiments by incorporating NNGLM and NNJM both independently and jointly into our baseline system. The results of our experiments are described in Section SECREF23. The evaluation is performed similarly to the CoNLL 2014 shared task setting, using the official test data of the CoNLL 2014 shared task with annotations from two annotators (without considering alternative annotations suggested by the participating teams). The test dataset consists of 1,312 error-annotated sentences with 30,144 tokens on the source side. We make use of the official scorer for the shared task, M INLINEFORM0 Scorer v3.2 BIBREF19, for evaluation. We perform a statistical significance test using a one-tailed sign test with bootstrap resampling on 100 samples.
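The evaluation metric reported throughout is F0.5, which weights precision twice as much as recall. The sketch below computes F-beta from edit counts; it is only a minimal illustration of the formula, not the full M2 scorer, which additionally aligns system edits against the gold annotations before counting. The edit counts are hypothetical.

```python
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 0.5 favors precision.
def f_beta(tp, fp, fn, beta=0.5):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 60 correct edits, 30 spurious edits, 90 missed edits.
print(round(f_beta(60, 30, 90), 4))
```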
Results and Discussion
Table TABREF24 presents the results of our experiments with the neural network global lexicon model (NNGLM) and the neural network joint model (NNJM). We see that the addition of NNGLM and NNJM to our baseline individually improves the INLINEFORM0 measure on the CoNLL 2014 test set by 0.43 and 0.80, respectively. Although both improvements over the baseline are statistically significant (with INLINEFORM1), we observe that the improvement of NNGLM is slightly lower than that of NNJM. NNGLM encodes the entire lexical information from the source sentence without word ordering information. Hence, it focuses mostly on the choice of words appearing in the output. Many of the words in the source context may not be necessary for ensuring the quality of the corrected output. On the other hand, NNJM looks at a smaller window of words on the source side. NNJM can act as a language model and can ensure a fluent translation output compared to NNGLM.
We also found rescaling to be important for NNGLM because of imbalanced training data. While the most frequent words in the data, `I' and `to', appear in 43% and 27% of the training sentences, respectively, most words occur in very few sentences only. For example, the word `set' appears in 0.15% of the sentences and the word `enterprise' appears in 0.003% of the sentences.
By incorporating both components together, we obtain an improvement of 1.17 in terms of the INLINEFORM0 measure. This indicates that both components are beneficial and complement each other to improve the performance of the baseline system. While NNGLM looks at the entire source sentence and ensures the appropriate choice of words in the output sentence, NNJM encourages the system to choose appropriate corrections that give a fluent output.
We compare our system to the top three systems in the CoNLL 2014 shared task and to the best published results BIBREF17, BIBREF0 on the test data of the CoNLL 2014 shared task. The results are summarized in Table TABREF26. Our final system, including both neural network models, outperforms the best system BIBREF17 by 1.85 in F INLINEFORM0 measure. It should be noted that this is despite the fact that the system proposed in BIBREF17 uses much larger training data than our system.
We qualitatively analyze the output of our neural network-enhanced systems against the outputs produced by our baseline system. We include some examples in Table TABREF25, along with the corresponding outputs of the baseline system and the reference sentences. The selected examples show that NNGLM and NNJM choose appropriate words by making use of the surrounding context effectively. Note that our neural networks, which rely on a fixed source and target vocabulary, map rare words and misspelled words to the UNK token. Therefore, phrases with the UNK token may get a higher probability than they actually should because of the large number of UNK tokens seen during training. This leads to fewer spelling error corrections compared to the baseline system, which does not employ these neural networks. Consider the following example from the test data: ... numerous profit-driven companies realize the hugh (huge) human traffic on such social media sites .... The spelling error hugh INLINEFORM0 huge is corrected by the baseline system, but not by our final system with the neural networks. This is because the misspelled word hugh is not in the neural network vocabulary, and so it is mapped to the UNK token. The sentence with the UNK token gets a higher score, and hence the system chooses this output over the correct one.
From our experiments and analysis, we see that NNGLM and NNJM capture contextual information better than regular translation models and language models. This is because they make use of larger source sentence contexts and continuous space representations of words.
This enables them to make better predictions compared to traditional translation models and language models. We also observed that our system has an edge over the baseline for the correction of word choice and collocation errors.

Conclusion
Our experiments show that using the two neural network translation models improves the performance of a phrase-based SMT approach to GEC. To the best of our knowledge, this is the first work that uses these two neural network models for SMT-based GEC. The ability of neural networks to model words and phrases in continuous space and to capture non-linear relationships enables them to generalize better and to perform grammatical error correction more accurately. We have achieved state-of-the-art results on the CoNLL 2014 shared task test dataset. This has been done without using any additional training data compared to the best performing systems evaluated on the same dataset.

Acknowledgments
This research is supported by the Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013-T2-1-150.
A: No
id:  8f882f414d7ea12077930451ae77c6e5f093adbc
pid: 8f882f414d7ea12077930451ae77c6e5f093adbc_0
Q: How do they combine the two proposed neural network models?

Text: (same paper text as in the first record above)
A: Incorporating NNGLM and NNJM both independently and jointly into the baseline system.
id:  a49832c89a2d7f95c1fe6132902d74e4e7a3f2d0
pid: a49832c89a2d7f95c1fe6132902d74e4e7a3f2d0_0
Q: Which dataset do they evaluate grammatical error correction on? Text: Introduction Grammatical error correction (GEC) is a challenging task due to the variability of the type of errors and the syntactic and semantic dependencies of the errors on the surrounding context. Most of the grammatical error correction systems use classification and rule-based approaches for correcting specific error types. However, these systems use several linguistic cues as features. The standard linguistic analysis tools like part-of-speech (POS) taggers and parsers are often trained on well-formed text and perform poorly on ungrammatical text. This introduces further errors and limits the performance of rule-based and classification approaches to GEC. As a consequence, the phrase-based statistical machine translation (SMT) approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from error-corrected parallel corpora without any additional linguistic information. They are also not limited to specific error types. Currently, many state-of-the-art GEC systems are based on SMT or use SMT components for error correction BIBREF0 , BIBREF1 , BIBREF2 . In this paper, grammatical error correction includes correcting errors of all types, including word choice errors and collocation errors which constitute a large class of learners' errors. We model our GEC system based on the phrase-based SMT approach. However, traditional phrase-based SMT systems treat words and phrases as discrete entities. We take advantage of continuous space representation by adding two neural network components that have been shown to improve SMT systems BIBREF3 , BIBREF4 . These neural networks are able to capture non-linear relationships between source and target sentences and can encode contextual information more effectively. Our experiments show that the addition of these two neural networks leads to significant improvements over a strong baseline and outperforms the current state of the art. Related Work In the past decade, there has been increasing attention on grammatical error correction in English, mainly due to the growing number of English as Second Language (ESL) learners around the world. The popularity of this problem in natural language processing research grew further through Helping Our Own (HOO) and the CoNLL shared tasks BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Most published work in GEC aimed at building specific classifiers for different error types and then use them to build hybrid systems BIBREF9 , BIBREF10 . One of the first approaches of using SMT for GEC focused on correction of countability errors of mass nouns (e.g., many informations INLINEFORM0 much information) BIBREF11 . They had to use an artificially constructed parallel corpus for training their SMT system. Later, the availability of large-scale error corrected data BIBREF12 further improved SMT-based GEC systems. Recently, continuous space representations of words and phrases have been incorporated into SMT systems via neural networks. Specifically, addition of monolingual neural network language models BIBREF13 , BIBREF14 , neural network joint models (NNJM) BIBREF4 , and neural network global lexicon models (NNGLM) BIBREF3 have been shown to be useful for SMT. Neural networks have been previously used for GEC as a language model feature in the classification approach BIBREF15 and as a classifier for article error correction BIBREF16 . 
Recently, a neural machine translation approach has been proposed for GEC BIBREF17 . This method uses a recurrent neural network to perform sequence-to-sequence mapping from erroneous to well-formed sentences. Additionally, it relies on a post-processing step based on statistical word-based translation models to replace out-of-vocabulary words. In this paper, we investigate the effectiveness of two neural network models, NNGLM and NNJM, in SMT-based GEC. To the best of our knowledge, there is no prior work that uses these two neural network models for SMT-based GEC. A Machine Translation Framework for Grammatical Error Correction In this paper, the task of grammatical error correction is formulated as a translation task from the language of `bad' English to the language of `good' English. That is, the source sentence is written by a second language learner and potentially contains grammatical errors, whereas the target sentence is the corrected fluent sentence. We use a phrase-based machine translation framework BIBREF18 for translation, which employs a log-linear model to find the best translation INLINEFORM0 given a source sentence INLINEFORM1 . The best translation is selected according to the following equation: INLINEFORM2 where INLINEFORM0 is the number of features, INLINEFORM1 and INLINEFORM2 are the INLINEFORM3 th feature function and feature weight, respectively. We make use of the standard features used in phrase-based translation without any reordering, leading to monotone translations. The features can be broadly categorized as translation model and language model features. The translation model in the phrase-based machine translation framework is trained using parallel data, i.e., sentence-aligned erroneous source text and corrected target text. The translation model is responsible for finding the best transformation of the source sentence to produce the corrected sentence. On the other hand, the language model is trained on well-formed English text and this ensures the fluency of the corrected text. To find the optimal feature weights ( INLINEFORM4 ), we use minimum error rate training (MERT), maximizing the INLINEFORM5 measure on the development set BIBREF2 . The INLINEFORM6 measure BIBREF19 , which weights precision twice as much as recall, is the evaluation metric widely used for GEC and was the official evaluation metric adopted in the CoNLL 2014 shared task BIBREF8 . Additionally, we augment the feature set by adding two neural network translation models, namely a neural network global lexicon model BIBREF3 and a neural network joint model BIBREF4 . These models are described in detail in Sections SECREF4 and SECREF5 . Neural Network Global Lexicon Model A global lexicon model is used to predict the presence of words in the corrected output. The model estimates the overall probability of a target hypothesis (i.e., a candidate corrected sentence) given the source sentence, by making use of the probability computed for each word in the hypothesis. The individual word probabilities can be computed by training density estimation models such as maximum entropy BIBREF20 or probabilistic neural networks BIBREF3 . Following BIBREF3 , we formulate our global lexicon model using a feed-forward neural network. The model and the training algorithm are described below. 
Model The probability of a target hypothesis is computed using the following equation: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the source sentence and the target hypothesis respectively, and INLINEFORM2 denotes the number of words in the target hypothesis. INLINEFORM3 is the probability of the target word INLINEFORM4 given the source sentence INLINEFORM5 . INLINEFORM6 is the output of the neural network. The architecture of the neural network is shown in Figure FIGREF3 . INLINEFORM7 is calculated by: where INLINEFORM0 is the hidden layer output, and INLINEFORM1 and INLINEFORM2 are the output layer weights and biases respectively. INLINEFORM3 is the element-wise sigmoid function which scales the output to INLINEFORM4 . INLINEFORM0 is computed by the following equation: INLINEFORM1 where INLINEFORM0 is the activation function, and INLINEFORM1 and INLINEFORM2 are the hidden layer weights and biases applied on a binary bag-of-words representation of the input sentence denoted by INLINEFORM3 . The size of INLINEFORM4 is equal to the size of the source vocabulary INLINEFORM5 and each element indicates the presence or absence (denoted by 1 or 0 respectively) of a given source word. The probability of the target hypothesis INLINEFORM0 given the source sentence INLINEFORM1 is estimated by the following equation: DISPLAYFORM0 where INLINEFORM0 is the number of words in the target sentence, INLINEFORM1 is the INLINEFORM2 th target word, and INLINEFORM3 is the context (history) for the target word INLINEFORM4 . The context INLINEFORM5 consists of a set of INLINEFORM6 source words represented by INLINEFORM7 and INLINEFORM8 words preceding INLINEFORM9 from the target sentence represented by INLINEFORM10 . The context words from the source side are the words in the window of size INLINEFORM11 surrounding the source word INLINEFORM12 that is aligned to the target word INLINEFORM13 . The output of the neural network INLINEFORM14 is the output of the final softmax layer which is given by the following equation: DISPLAYFORM0 where INLINEFORM0 is the output of the neural network before applying softmax and INLINEFORM1 is given by following equation: INLINEFORM2 The output of the neural network before softmax is computed by applying output layer weights INLINEFORM0 and biases INLINEFORM1 to the hidden layer output INLINEFORM2 . INLINEFORM3 INLINEFORM0 is computed by applying weights INLINEFORM1 and biases INLINEFORM2 on the hidden layer input INLINEFORM3 and using a non-linear activation function INLINEFORM4 : INLINEFORM5 The input to the hidden layer ( INLINEFORM0 ) is a concatenated vector of context word embeddings: INLINEFORM1 where INLINEFORM0 and INLINEFORM1 are the one-hot representations of the source word INLINEFORM2 and the target word INLINEFORM3 , respectively. Similarly, INLINEFORM4 and INLINEFORM5 are the word embeddings matrices for the source words and the target words. As we use log probabilities instead of raw probabilities in our GEC system, Equation EQREF11 can be rewritten as the following: DISPLAYFORM0 Finally, since the network is trained by Noise Contrastive Estimation (NCE) (described in Section SECREF14 ), it becomes self-normalized. This means that INLINEFORM0 will be approximately 1 and hence the raw output of the neural network INLINEFORM1 can be directly used as the log probabilities during decoding. Training The model is trained using mini-batch gradient descent with back-propagation. 
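For concreteness, a minimal NumPy sketch of the global lexicon model's forward computation described earlier in this section: a binary bag-of-words vector of the source sentence goes through one hidden layer, a sigmoid output gives a probability for every target word, and the hypothesis score sums the log probabilities of its words. The sizes, weights and the tanh activation below are illustrative stand-ins, not the paper's trained model (the paper uses 10,000-word vocabularies and 2,000 hidden nodes); the training procedure follows below.

```python
import numpy as np

rng = np.random.default_rng(0)
V_SRC, V_TGT, H = 1_000, 1_000, 200          # illustrative sizes only

W1, b1 = rng.normal(0, 0.01, (H, V_SRC)), np.zeros(H)       # hidden layer
W2, b2 = rng.normal(0, 0.01, (V_TGT, H)), np.zeros(V_TGT)   # output layer

def nnglm_word_probs(src_word_ids):
    """P(target word | source sentence) for every word in the target vocabulary."""
    x = np.zeros(V_SRC)
    x[src_word_ids] = 1.0                        # binary bag-of-words of the source sentence
    h = np.tanh(W1 @ x + b1)                     # hidden layer (tanh is an assumption here)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # element-wise sigmoid output

def hypothesis_log_prob(src_word_ids, tgt_word_ids):
    """Log probability of a target hypothesis: sum of log P(t_j | s)."""
    p = nnglm_word_probs(src_word_ids)
    return float(np.sum(np.log(p[tgt_word_ids] + 1e-12)))

print(hypothesis_log_prob(src_word_ids=[3, 42, 99], tgt_word_ids=[7, 512]))
```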
We use binary cross entropy (Equation EQREF5 ) as the cost function: DISPLAYFORM0 where INLINEFORM0 refers to the binary bag-of-words representation of the reference target sentence, and INLINEFORM1 is the target vocabulary. Each mini-batch is composed of a fixed number of sentence pairs INLINEFORM2 . The training algorithm repeatedly minimizes the cost function calculated for a given mini-batch by updating the parameters according to the gradients. To avoid the costly softmax layer and thereby speed up both training and decoding, we use Noise Contrastive Estimation (NCE) following BIBREF14 . During training, the negative log likelihood cost function is modified to a probabilistic binary classifier, which learns to discriminate between the actual target word and INLINEFORM0 random words (noisy samples) per training instance selected from a noise distribution INLINEFORM1 . The two classes are INLINEFORM2 indicating that the word is the target word and INLINEFORM3 indicating that the word is a noisy sample. The conditional probabilities for INLINEFORM4 and INLINEFORM5 given a target word and context is given by: INLINEFORM6 INLINEFORM7 where, INLINEFORM0 is the model probability given in Equation EQREF12 . The negative log likelihood cost function is replaced by the following function. INLINEFORM1 where INLINEFORM0 refers to the INLINEFORM1 th noise sample for the target word INLINEFORM2 . INLINEFORM3 is required for the computation of the neural network output, INLINEFORM4 . However, setting the term INLINEFORM5 to 1 during training forces the output of the neural network to be self-normalized. Hence, Equation EQREF13 reduces to: DISPLAYFORM0 Using Equation EQREF15 avoids the expensive softmax computation in the final layer and consequently speeds up decoding. Rescaling Since the prior probability of observing a particular word in a sentence is usually a small number, the probabilistic output of NNGLM can be biased towards zero. This bias can hurt the performance of our system and therefore, we try to alleviate this problem by rescaling the output after training NNGLM. Our solution is to map the output probabilities to a new probability space by fitting a logistic function on the output. Formally, we use Equation EQREF7 as the mapping function: DISPLAYFORM0 where INLINEFORM0 is the rescaled probability and INLINEFORM1 and INLINEFORM2 are the parameters. For each sentence pair INLINEFORM3 in the development set, we collect training instances of the form INLINEFORM4 for every word INLINEFORM5 in the target vocabulary, where INLINEFORM6 and INLINEFORM7 . The value of INLINEFORM8 is set according to the presence ( INLINEFORM9 ) or absence ( INLINEFORM10 ) of the word INLINEFORM11 in the target sentence INLINEFORM12 . We use weighted cross entropy loss function with INLINEFORM13 -regularization to train INLINEFORM14 and INLINEFORM15 on the development set: INLINEFORM16 Here, INLINEFORM0 is the number of training samples, INLINEFORM1 is the probability of INLINEFORM2 computed by INLINEFORM3 , and INLINEFORM4 and INLINEFORM5 are the weights assigned to the two classes INLINEFORM6 and INLINEFORM7 , respectively. In order to balance the two classes, we weight each class inversely proportional to class frequencies in the training data (Equation EQREF8 ) to put more weight on the less frequent class: DISPLAYFORM0 In Equation EQREF8 , INLINEFORM0 and INLINEFORM1 are the number of samples in each class. 
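A possible sketch of the rescaling step just described: the two parameters of the logistic mapping are fitted on development-set instances with class weights inversely proportional to the class frequencies. Here scikit-learn's class_weight="balanced" option and default L2 penalty stand in for that weighting and regularization, so treat this as an approximation of the procedure rather than the authors' exact implementation; the probabilities below are toy values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# p_raw: NNGLM output probabilities for (dev sentence, target word) instances (toy values)
# y:     1 if the word appears in the reference target sentence, 0 otherwise
p_raw = np.array([0.02, 0.90, 0.05, 0.60, 0.01, 0.30])
y     = np.array([0,    1,    0,    1,    0,    1])

# Fit sigma(gamma * p + eta); 'balanced' weights each class inversely to its frequency.
rescaler = LogisticRegression(class_weight="balanced", C=1.0)  # L2-regularized by default
rescaler.fit(p_raw.reshape(-1, 1), y)

def rescale(p):
    """Map raw NNGLM probabilities to the rescaled probability space."""
    return rescaler.predict_proba(np.asarray(p, dtype=float).reshape(-1, 1))[:, 1]

print(rescale([0.02, 0.90]))
```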
After training the rescaling model, we use INLINEFORM2 and INLINEFORM3 to calculate INLINEFORM4 according to Equation EQREF7 . Finally, we use INLINEFORM5 instead of INLINEFORM6 in Equation EQREF2 . Neural Network Joint Model Joint models in translation augment the context information in language models with words from the source sentence. A neural network joint model (NNJM) BIBREF4 uses a neural network to model the word probabilities given a context composed of source and target words. NNJM can scale up to large order of n-grams and still perform well because of its ability to capture semantic information through continuous space representations of words and to learn non-linear relationship between source and target words. Unlike the global lexicon model, NNJM uses a fixed window from the source side and take sequence information of words into consideration in order to estimate the probability of the target word. The model and the training method are described below. Experiments We describe our experimental setup including the description of the data we used, the configuration of our baseline system and the neural network components, and the evaluation method in Section SECREF16 , followed by the results and discussion in Section SECREF23 Setup We use the popular phrase-based machine translation toolkit Moses as our baseline SMT system. NUCLE BIBREF21 , which is the official training data for the CoNLL 2013 and 2014 shared tasks, is used as the parallel text for training. Additionally, we obtain parallel corpora from Lang-8 Corpus of Learner English v1.0 BIBREF12 , which consists of texts written by ESL (English as Second Language) learners on the language learning platform Lang-8. We use the test data for the CoNLL 2013 shared task as our development data. The statistics of the training and development data are given in Table TABREF19 . Source side refers to the original text written by the ESL learners and target side refers to the corresponding corrected text hand-corrected by humans. The source side and the target side are sentence-aligned and tokenized. We train the translation model for our SMT system using a concatenation of NUCLE and Lang-8 v1.0 parallel data. The training data is cleaned up by removing sentence pairs in which either the source or the target sentence is empty, or is too long (greater than 80 tokens), or violate a 9:1 sentence ratio limit. The translation model uses the default features in Moses which include the forward and inverse phrase translation probabilities, forward and inverse lexical weights, word penalty, and phrase penalty. We compute the phrase alignments using standard tools in Moses. We use two language model features: a 5-gram language model trained using the target side of NUCLE used for training the translation model and a 5-gram language model trained using English Wikipedia ( INLINEFORM0 1.78 billion tokens). Both language models are estimated with KenLM using modified Kneser-Ney smoothing. We use MERT for tuning the feature weights by optimizing the INLINEFORM1 measure (which weights precision twice as much as recall). This system constitutes our baseline system in Table TABREF24 . Our baseline system uses exactly the same training data as BIBREF0 for training the translation model and the language model. The difference between our baseline system and the SMT components of BIBREF0 is that we tune with INLINEFORM2 instead of BLEU and we use the standard Moses configuration without the Levenshtein distance feature. 
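The cleanup of the parallel training data described above (dropping pairs where either side is empty, longer than 80 tokens, or where the sentence length ratio exceeds 9:1) is straightforward to reproduce; a small illustrative filter, with tokenization and file handling assumed to happen elsewhere:

```python
def keep_pair(src_tokens, tgt_tokens, max_len=80, max_ratio=9.0):
    """Return True if a (source, target) sentence pair passes the cleanup criteria."""
    if not src_tokens or not tgt_tokens:
        return False                          # drop pairs with an empty side
    if len(src_tokens) > max_len or len(tgt_tokens) > max_len:
        return False                          # drop overly long sentences
    longer = max(len(src_tokens), len(tgt_tokens))
    shorter = min(len(src_tokens), len(tgt_tokens))
    return longer / shorter <= max_ratio      # drop pairs violating the 9:1 ratio limit

pairs = [("I has a apple .".split(), "I have an apple .".split()),
         ([], "Hello .".split())]
cleaned = [(s, t) for s, t in pairs if keep_pair(s, t)]
print(len(cleaned))  # -> 1
```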
On top of our baseline system described above, we incorporate the two neural network components, neural network global lexicon model (NNGLM) and neural network joint model (NNJM), as features. Both NNGLM and NNJM are trained using the parallel data used to train the translation model of our baseline system. We implement NNGLM using the Theano library in Python in order to make use of parallelization with GPUs, thus speeding up training significantly. We use vocabularies of the 10,000 most frequent words on both the source and target sides. We use a single hidden layer neural network with 2,000 hidden nodes. We use INLINEFORM0 as the activation function for the hidden layer. We optimize the model weights by stochastic gradient descent using a mini-batch size of 100 and a learning rate of 10. We train the model for 45 epochs. The logistic regression function for rescaling is trained using the probabilities obtained from this model on the development set. To speed up tuning and decoding, we pre-compute the probabilities of target words using the source side sentences of the development and the test sets, respectively. We implement a feature function in Moses to compute the probability of a target hypothesis given the source sentence using the pre-computed probabilities. To train NNJM, we use the publicly available implementation, Neural Probabilistic Language Model (NPLM) BIBREF14 . The latest version of Moses can incorporate NNJM trained using NPLM as a feature while decoding. Similar to NNGLM, we use the parallel text used for training the translation model in order to train NNJM. We use a source context window size of 5 and a target context window size of 4. We select a source context vocabulary of the 16,000 most frequent words from the source side. The target context vocabulary and output vocabulary are set to the 32,000 most frequent words. We use a single hidden layer to speed up training and decoding, with an input embedding dimension of 192 and 512 hidden layer nodes. We use rectified linear units (ReLU) as the activation function. We train NNJM with noise contrastive estimation with 100 noise samples per training instance, which are obtained from a unigram distribution. The neural network is trained for 30 epochs using stochastic gradient descent optimization with a mini-batch size of 128 and a learning rate of 0.1. We conduct experiments by incorporating NNGLM and NNJM both independently and jointly into our baseline system. The results of our experiments are described in Section SECREF23 . The evaluation is performed similarly to the CoNLL 2014 shared task setting, using the official test data of the CoNLL 2014 shared task with annotations from two annotators (without considering alternative annotations suggested by the participating teams). The test dataset consists of 1,312 error-annotated sentences with 30,144 tokens on the source side. We make use of the official scorer for the shared task, M INLINEFORM0 Scorer v3.2 BIBREF19 , for evaluation. We perform statistical significance tests using a one-tailed sign test with bootstrap resampling on 100 samples. Results and Discussion Table TABREF24 presents the results of our experiments with the neural network global lexicon model (NNGLM) and neural network joint model (NNJM). We see that the addition of both NNGLM and NNJM to our baseline individually improves the INLINEFORM0 measure on the CoNLL 2014 test set by 0.43 and 0.80, respectively.
Although both improvements over the baseline are statistically significant (with INLINEFORM1 ), we observe that the improvement of NNGLM is slightly lower than that of NNJM. NNGLM encodes the entire lexical information from the source sentence without word ordering information. Hence, it focuses mostly on the choice of words appearing in the output. Many of the words in the source context may not be necessary for ensuring the quality of corrected output. On the other hand, NNJM looks at a smaller window of words in the source side. NNJM can act as a language model and can ensure a fluent translation output compared to NNGLM. We also found rescaling to be important for NNGLM because of imbalanced training data. While the most frequent words in the data, `I' and to', appear in 43% and 27% of the training sentences, respectively, most words occur in very few sentences only. For example, the word `set' appears in 0.15% of the sentences and the word `enterprise' appears in 0.003% of the sentences. By incorporating both components together, we obtain an improvement of 1.17 in terms of INLINEFORM0 measure. This indicates that both components are beneficial and complement each other to improve the performance of the baseline system. While NNGLM looks at the entire source sentence and ensures the appropriate choice of words to appear in the output sentence, NNJM encourages the system to choose appropriate corrections that give a fluent output. We compare our system to the top 3 systems in the CoNLL 2014 shared task and to the best published results BIBREF17 , BIBREF0 on the test data of the CoNLL 2014 shared task. The results are summarized in Table TABREF26 . Our final system including both neural network models outperforms the best system BIBREF17 by 1.85 in F INLINEFORM0 measure. It should be noted that this is despite the fact that the system proposed in BIBREF17 uses much larger training data than our system. We qualitatively analyze the output of our neural network-enhanced systems against the outputs produced by our baseline system. We have included some examples in Table TABREF25 and the corresponding outputs of the baseline system and the reference sentences. The selected examples show that NNGLM and NNJM choose appropriate words by making use of the surrounding context effectively. Note that our neural networks, which rely on fixed source and target vocabulary, map the rare words and misspelled words to the UNK token. Therefore, phrases with the UNK token may get a higher probability than they actually should due to the large number of UNK tokens seen during training. This leads to fewer spelling error corrections compared to the baseline system which does not employ these neural networks. Consider the following example from the test data: ... numerous profit-driven companies realize the hugh (huge) human traffic on such social media sites .... The spelling error hugh INLINEFORM0 huge is corrected by the baseline system, but not by our final system with the neural networks. This is because the misspelled word hugh is not in the neural network vocabulary and so it is mapped to the UNK token. The sentence with the UNK token gets a higher score and hence the system chooses this output over the correct one. From our experiments and analysis, we see that NNGLM and NNJM capture contextual information better than regular translation models and language models. This is because they make use of larger source sentence contexts and continuous space representation of words. 
This enables them to make better predictions compared to traditional translation models and language models. We also observed that our system has an edge over the baseline for correction of word choice and collocation errors. Conclusion Our experiments show that using the two neural network translation models improves the performance of a phrase-based SMT approach to GEC. To the best of our knowledge, this is the first work that uses these two neural network models for SMT-based GEC. The ability of neural networks to model words and phrases in continuous space and capture non-linear relationships enables them to generalize better and make more accurate grammatical error correction. We have achieved state-of-the-art results on the CoNLL 2014 shared task test dataset. This has been done without using any additional training data compared to the best performing systems evaluated on the same dataset. Acknowledgments This research is supported by Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013-T2-1-150.
CoNLL 2014
a33ab5ce8497ff63ca575a80b03e0ed9c6acd273
a33ab5ce8497ff63ca575a80b03e0ed9c6acd273_0
Q: How many users/clicks does their search engine have? Text: Introduction Over the past few years, major commercial search engines have enriched and improved the user experience by proactively presenting related entities for a query along with the regular web search results. Figure FIGREF3 shows an example of Alibaba ShenMa search engine's entity recommendation results presented on the panel of its mobile search result page. Existing studies BIBREF0, BIBREF1 in entity recommendation typically consider the query containing explicit entities, while ignoring those queries without entities. A main common drawback of these approaches is that they cannot handle well the complex queries, because they do not have informative evidence other than the entity itself for retrieving related entities with the same surface form. Therefore, existing entity recommendation systems tend to recommend entities with regard to the explicitly asked meaning, ignoring those queries with implicit user needs. Through analyzing hundreds of million unique queries from search logs with named entity recognition technology, we have found that more than 50% of the queries do not have explicit entities. In our opinion, those queries without explicit entities are valuable for entity recommendation. The queries convey insights into a user's current information need, which enable us to provide the user with more relevant entity recommendations and improve user experience. For example, a user's search intent behind the query "what food is good for cold weather" could be a kind of food suitable to eat in cold weather. However, most of the entities recommended for the query are mainly based on entities existed in the query such as given the query "cake" and recommended those entities "cupcakes," "chocolate" and so on, and there is no explicit entity called "good food for cold weather" at all. It is very likely that the user is interested in the search engine that is able to recommend entities with arbitrary queries. However, recommending entities with such complex queries is extremely challenging. At first, many existing recommendation algorithms proven to work well on small problems but fail to operate on a large scale. Highly specialized distributed learning algorithms and efficient serving systems are essential for handling search engine's massive queries and candidate entities. Secondly, user queries are extremely complex and diverse, and it is quite challenging to understand the user's true intention. Furthermore, historical user behavior on the search engine is inherently difficult to predict due to sparsity and a variety of unobservable external factors. We rarely obtain the ground truth of user satisfaction and instead model noisy implicit feedback signals. In this paper, we study the problem of context-aware entity recommendation and investigate how to utilize the queries without explicit entities to improve the entity recommendation quality. Our approach is based on neural networks, which maps both queries and candidate entities into vector space via large-scale distributed training. We evaluate our approach using large-scale, real-world search logs of a widely used commercial Chinese search engine. Our system has been deployed in ShenMa Search Engine and you can experience this feature in UC Browser of Alibaba. Results from online A/B test involving a large number of real users suggest that the impression efficiency of click-through rate (CTR) increased by 5.1% and page view (PV) increased by 5.5%. 
The main contributions of our paper are summarized as follows: To the best of our knowledge, ours is the first approach to recommend entities for arbitrary queries in a large-scale Chinese search engine. Our approach is flexible and capable of recommending entities for billions of queries. We conduct extensive experiments on large-scale, real-world search logs which show the effectiveness of our approach in both offline evaluation and online A/B tests. Related Work The previous work closest to ours is on the task of entity recommendation. Entity recommendation can be categorized into the following two categories: First, query assistance for knowledge graphs BIBREF2, BIBREF3. GQBE BIBREF4 and Exemplar Queries BIBREF5 studied how to retrieve entities from a knowledge base by specifying example entities. For example, the input entity pair {Jerry Yang, Yahoo!} would help retrieve answer pairs such as {Sergey Brin, Google}. Both of them projected the example entities onto the RDF knowledge graph to discover result entities as well as the relationships around them. They used an edge-weighted graph as the underlying model and subgraph isomorphism as the basic matching scheme, which in general is costly. Second, recommending related entities for search assistance. BIBREF0 proposed a recommendation engine called Spark to link a user's query word to an entity within a knowledge base and recommend a ranked list of the related entities. To guide user exploration of recommended entities, they also proposed a series of features to characterize the relatedness between the query entity and the related entities. BIBREF6 proposed a similar entity search considering diversity. BIBREF7 proposed to enhance the understandability of entity recommendations by captioning the results. BIBREF8 proposed a number of memory-based methods that exploit user behaviors in search logs to recommend related entities for a user's full search session. BIBREF1 proposed a model in a multi-task learning setting where the query representation is shared across entity recommendation and context-aware ranking. However, none of those approaches take into account queries without entities. Our objective is to infer entities given diverse and complex queries for search assistance. In fact, there are few research papers that focus on this issue. In industry, there are three simple approaches to handle those complex queries. One is tagging the query and then recommending the relevant entities based on those tags. However, the tagging space is so huge that it is difficult to cover all domains. The second method is to use a query recommendation algorithm to convert and disambiguate the queries into entities, ignoring the effect of error propagation from query recommendation. The last approach is to recall entities from the clicked documents. However, not all queries have clicked documents. To the best of our knowledge, ours is the first end-to-end method that makes it possible to recommend entities for arbitrary queries in a large-scale Chinese search engine. System Overview The overall structure of our entity recommendation system is illustrated in Figure FIGREF7. The system is composed of three modules: query processing, candidate generation and ranking. The query processing module first preprocesses the queries, extracts entities (no entities can be extracted for complex queries) and then conceptualizes the queries.
The candidate generation module takes the output of query processing module as input and retrieves a subset (hundreds) of entities from the knowledge graph. For a simple query with entities, we utilize heterogeneous graph embedding BIBREF9 to retrieve relative entities. For those complex queries with little entities, we propose a deep collaborative matching model to get relative entities. These candidates are intended to be generally relevant to the query with high recall. The candidate generation module only provides broad relativity via multi-criteria matching. The similarity between entities is expressed in terms of coarse features. Presenting a few "best" recommendations in a list requires a fine-level representation to distinguish relative importance among candidates with high precision. The ranking module accomplishes this task by type filtering, learning to rank, and click-through rate estimation. We also utilize online learning algorithm, including Thompson sampling, to balance the exploitation and exploration in entity ranking. In the final product representation of entity recommendation, we utilize the concept of entities to cluster the different entities with the same concept in the same group to represent a better visual display and provide a better user experience. In this paper, we mainly focus on candidate generation, the first stage of entity recommendation and present our approach (red part in Figure FIGREF7), which can handle complex queries. Preliminaries In this section, we describe the large knowledge graph that we use to retrieve candidate entities and cognitive concept graph that we use to conceptualize queries and entities. Preliminaries ::: Knowledge Graph Shenma knowledge graph is a semantic network that contains ten million of entities, thousand types and billions of triples. It has a wide range of fields, such as people, education, film, tv, music, sports, technology, book, app, food,plant, animal and so on. It is rich enough to cover a large proportion of entities about worldly facts. Entities in the knowledge graph are connected by a variety of relationships. Preliminaries ::: Cognitive Concept Graph Based on Shenma knowledge graph, we also construct a cognitive concept graph which contains millions of instances and concepts. Different from Shenma knowledge graph, cognitive concept graph is a probabilistic graph mainly focus on the Is-A relationship. For example, "robin" is-a bird, and "penguin" is-a bird. Cognitive concept graph is helpful in entity conceptualization and query understanding. Deep Collaborative Match In this section, we first introduce the basics of the deep collaborative match and then elaborate on how we design the deep model architecture. Deep Collaborative Match ::: Recommendation as Classification Traditionally, major search engines recommend related entities based on their similarities to the main entity that the user searched. BIBREF1 have detailed explained the procedure of entity recommendation in the search engine, including entity linking, related entity discovery and so on. Unlike traditional methods, we regard recommendation as large-scale multi-classification where the prediction problem becomes how to accurately classify a specific entity $e_{i}$ among millions of entities from a knowledge graph $V$ based on a user's input query $Q$, where $q\in \mathbb {R}^{N}$ is a high-dimensional "embedding" of the user's input query, $u_{j}\in \mathbb {R}^{N}$ represents each entity embedding and V is the entities from knowledge graph. 
In this setting, we map the sparse entity or query into a dense vector in $\mathbb {R}^{N}$. Our deep neural model tries to learn the query embedding via the user's historical behavior, which is useful for discriminating among entities with a softmax classifier. Through joint learning of entity embeddings and query embeddings, entity recommendation becomes the calculation of cosine similarity between entity vectors and query vectors. Deep Collaborative Match ::: Base Deep Match Model Inspired by skip-gram language models BIBREF10, we map the user's input query to a dense vector representation and learn a high dimensional embedding for each entity in a knowledge graph. Figure FIGREF13 shows the architecture of the base deep match model. Input Layer. The input layer mainly contains the features from the input query: we first use a word segmentation tool to segment queries, then fetch basic-level tokens and semantic-level tokens, and finally combine all the input features via the embedding technique, as shown below: word embedding: averaging the embeddings of both the basic-level tokens and semantic-level tokens; the final embedding dimension is 128. ngram embedding: inspired by fasttext BIBREF11, we add ngram (n=2,3) features to the input layer to import some local temporal information. The dimension of the ngram embedding is also 128. Fully-Connected Layer. Following the input layer, we utilize three fully connected layers (512-256-128) with the tanh activation function. In order to speed up the training, we add batch normalization to each layer. Softmax Layer. To efficiently train such a model with millions of classes, we apply sampled softmax BIBREF12 in our model. For each example, the cross-entropy loss is minimized for the true label and the sampled negative classes. In practice, we sample 5,000 negative instances. Online Serving. At serving time, we need to compute the most likely $K$ classes (entities) in order to choose the top $K$ to present to the user. In order to recall the given number of entities within ten milliseconds, we deploy a vector search engine over an index built offline. In practice, our model can generate a query embedding within 5ms and recall related entities within 3ms. Deep Collaborative Match ::: Enhanced Deep Match Model The above base model still has two problems with the semantic representation of the input query: 1) it ignores the global temporal information, which is important for learning the query's sentence-level representation; 2) different query tokens contribute equally to the final input embedding, which is not a good hypothesis. For example, the entity token should be more important than other tokens such as stop words. To address the first issue, we adopt the Bi-directional LSTM model to encode the global and local temporal information. At the same time, with the attention mechanism, our model can automatically learn the weights of different query tokens. Figure FIGREF20 shows the enhanced deep match model architecture. The proposed model consists of two parts. The first is a Bi-directional LSTM, and the second is the self-attention mechanism, which provides weight vectors for the LSTM hidden states. The weight vectors are dotted with the LSTM hidden states, and the weighted LSTM hidden states are considered as an embedding for the input query. Suppose the input query has $n$ tokens represented by a sequence of word embeddings $(w_{1}, \cdots , w_{n})$, where $w_{i}\in \mathbb {R}^{d}$ is the word embedding for the $i$-th token in the query.
$Q\in \mathbb {R}^{n \times d}$ is thus represented as a 2-D matrix, which concatenates all the word embeddings together. To utilize the dependency between adjacent words within a single sentence, we use the Bi-directional LSTM to represent the sentence and concatenate $h_{if}$ with $h_{ib}$ to obtain the hidden state $h_{i}$: The number of LSTM's hidden unit is $m$. For simplicity, we concatenate all the hidden state $h_{i}$ as $H \in \mathbb {R}^{n \times 2m}$. $H=[h_{1},h_{2},\cdots ,h_{n-1},h_{n}]$ With the self-attention mechanism, we encode a variable length sentence into a fixed size embedding. The attention mechanism takes the whole LSTM hidden states $H$ as input, and outputs the weights $\alpha \in \mathbb {R}^{1 \times k}$: where $W\in \mathbb {R}^{k \times 2m}$,$U\in \mathbb {R}^{1 \times k}$,$b\in \mathbb {R}^{k}$. Then we sum up the LSTM hidden states $H$ according to the weight provided by $\alpha $ to get the final representation of the input query. Note that, the query embeddings and entity embeddings are all random initialized and trained from scratch. We have huge amounts of training data which is capable of modeling the relativity between queries and entities. Experiments ::: Data Sets In this section, we illustrate how to generate the training samples to learn the query-entity match model. Training samples are generated from query logs and knowledge graph, which can be divided into four parts as shown below: Query-Click-Entity: given a query, choose the clicked entities with relatively high CTR. In practice, we collect thousand millions of data from the query logs in the past two months. Query-Doc-Entity: we assume that high clicked doc is well matched to the query and the entities in title or summary are also related to the query. The procedure is 1) we first fetch the clicked documents with title and summary from the query log; 2) extract entities from title and summary via name entity recognition; 3) keep those high-quality entities. At last, we collect millions of unique queries. Query-Query-Entity: given the text recommendation's well results, we utilize the entity linking method to extract entities from those results. We also collect millions of unique queries. Query-Tag-Entity: as to some specific queries, we will tag entity label to them and generate query-entity pairs. Here, we define hundreds of entity tags in advance. After generating of query-entity pairs, we adopt the following data prepossessing procedures: low-quality filter: We filter low-quality entities via some basic rules, such as blacklist, authority, hotness, importance and so on. low-frequency filter: We filter low-frequency entities. high-frequency sub-sampling: We make sub-sampling to those high-frequency entities. shuffle: We shuffle all samples. Apart from user clicked data, we construct millions of query-entity relevant pairs at the semantic level, which are very important for the model to learn the query's semantic representation. Finally, we generate billions of query-entity pairs and about one thousand billion unique queries. Experiments ::: Evaluation Metric To evaluate the effectiveness of different methods, we use Precision@M following BIBREF13. Derive the recalled set of entities for a query $u$ as $P_u (|P_u| = M)$ and the query's ground truth set as $G_u$. Precision@M are: Experiments ::: Offline Evaluation To evaluate the performance of our model, we compare its performance with various baseline models. 
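Precision@M here is the fraction of the M recalled entities that appear in the query's ground truth set, averaged over queries; a minimal sketch with hypothetical entity identifiers:

```python
def precision_at_m(recalled, ground_truth, m):
    """Precision@M for one query: fraction of the top-M recalled entities that are relevant."""
    top_m = list(recalled)[:m]
    return len(set(top_m) & set(ground_truth)) / m

def mean_precision_at_m(runs, m):
    """Average Precision@M over (recalled list, ground-truth set) pairs."""
    return sum(precision_at_m(r, g, m) for r, g in runs) / len(runs)

runs = [(["e1", "e2", "e3"], {"e1", "e9"}),     # 1 of 3 recalled entities is relevant
        (["e4", "e5", "e6"], {"e5", "e6"})]     # 2 of 3 recalled entities are relevant
print(mean_precision_at_m(runs, m=3))           # -> 0.5
```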
From unseen, real online search click logs, we collect millions of query-entity pairs as our test set (ground truth set). The evaluation results are shown in Table TABREF31: DNN BIBREF14 is the base method with a DNN encoder; +ngram is the method with added ngram features; att-BiLSTM is our method with a BiLSTM encoder and attention mechanism. The DNN BIBREF14 is a well-known recommendation baseline; we re-implement the algorithm and modify the model for the entity recommendation setting. Note that there are no other baselines for entity recommendation on complex queries that contain no entities at all. att-BiLSTM is slightly better than +ngram. The reason is mainly that a certain percentage of queries has no meaningful word order, so ngram features are enough to provide useful information. Our approach achieves comparable results in the offline evaluation. These results indicate that our method benefits a lot from joint representation learning of queries and entities. Note that we learn the embeddings of queries and entities with random initialization. We believe the performance can be further improved by adopting more complex sentence encoders such as BERT BIBREF15 and XLNet BIBREF16, and inductive bias from structured knowledge BIBREF17 to enhance the entity representation, which we plan to address in future work. Experiments ::: Online A/B Test We perform a large-scale online A/B test to show how our approach to entity recommendation helps improve recommendation performance in real-world applications. We first retrieve candidate entities by matching queries, then we rank candidate entities by a click-through rate (CTR) prediction model and Thompson sampling. The ranked entities are pushed to users in the search results of Alibaba UC Browser. For the online A/B test, we split users into buckets. We observe and record the activities of each bucket for seven days. We select two buckets with highly similar activities. For one bucket, we perform recommendation without the deep collaborative match model. For the other one, the deep collaborative match model is utilized for the recommendation. We run our A/B test for seven days and compare the results. The page view (PV) and click-through rate (CTR) are the two most critical metrics in real-world applications because they show how much content users read and how much time they spend on an application. In the online experiment, we observe a statistically significant CTR gain (5.1%) and PV gain (5.5%). These observations prove that the deep collaborative match for entity recommendation greatly benefits the understanding of queries and helps to better match users with entities they are potentially interested in. With the help of the deep collaborative match, we can better capture the implicit user need contained in a query even if it does not explicitly mention an entity. Given more matched entities, users spend more time and read more articles in our search engine. Experiments ::: Qualitative Analysis We make a qualitative analysis of the entity embeddings learned from scratch. Interestingly, we find that our approach is able to capture the relatedness of similar entities. As Figure FIGREF26 shows, the entities "Beijing University" and "Fudan University" are similar to the entity "Tsinghua University." Those results demonstrate our approach's impressive power of representation learning for entities. They also indicate that text is really helpful for representation learning in a knowledge graph. We also make a qualitative analysis of the query embeddings.
We find that our approach generates more discriminate query embedding for entity recommendation due to the attention mechanisms. Specifically, we randomly selected six queries from the search log and then visualize the attention weights, as shown in Figure FIGREF37. Our approach is capable of emphasizing those relative words and de-emphasizing those noisy terms in queries which boost the performance. Experiments ::: Case Studies We give some examples of how our deep collaborative matching takes effect in entity recommendation for those complex queries. In Figure FIGREF36, we display the most relative entities that are retrieved from the given queries. We observe that (1) given the interrogative query "what food is good for cold weather", our model is able to understand the meaning of query and get the most relative entities "Grain nutrition powder", "Almond milk"; (2) our model is able to handle short queries such as "e52640 and i73770s" which usually do not have the syntax of a written language or contain little signals for statistical inference; (3) our model is able to infer some queries such as "multiply six by the largest single digit greater than fourth" that need commonsense "number" is "mathematical terms" which demonstrate the generalization of our approach; (4) our approach can also handle multi-modal queries "the picture of baby walking feet outside" and get promising results although in recent version of our model we do not consider the image representation in entity recommendation, which indicates that our approach can model the presentation of queries which reveal the implicit need of users. We believe the multi-modal information (images) will further boost the performance which will be left for our future work. Experiments ::: Conceptualized Entity Recommendation In the entity recommendation system, each entity may have different views. For example, when recommending entities relative to "apple", it may represent both "fruits" and "technology products" as the Figure FIGREF42 shows. Actually, different users have different intentions. To give a better user experience, we develop the conceptualized multi-dimensional recommendation shown in Figure FIGREF43. To be specific, we utilize the concepts of candidate entities to cluster the entities in the same group to give a better visual display. Those concepts are retrieved from our cognitive concept graph. Online evaluation shows that conceptualized multi-dimensional recommendation has the total coverage of 49.8% in entity recommendation and also achieve more than 4.1% gain of CTR. Conclusion In this paper, we study the problem of context modeling for improving entity recommendation. To this end, we develop a deep collaborative match model that learns representations from complex and diverse queries and entities. We evaluate our approach using large-scale, real-world search logs of a widely used commercial search engine. The experiments demonstrate that our approach can significantly improve the performance of entity recommendation. Generally speaking, the knowledge graph and cognitive concept graph can provide more prior knowledge in query understanding and entity recommendation. 
In the future, we plan to explore the following directions: (1) we may combine our method with structure knowledge from knowledge graph and cognitive concept graph; (2) we may combine rule mining and knowledge graph reasoning technologies to enhance the interpretability of entity recommendation; (3) it will be promising to apply our method to other industry applications and further adapt to other NLP scenarios. Acknowledgments We would like to thank colleagues of our team - Xiangzhi Wang, Yulin Wang, Liang Dong, Kangping Yin, Zhenxin Ma, Yongjin Wang, Qiteng Yang, Wei Shen, Liansheng Sun, Kui Xiong, Weixing Zhang and Feng Gao for useful discussions and supports on this work. We are grateful to our cooperative team - search engineering team. We also thank the anonymous reviewers for their valuable comments and suggestions that help improve the quality of this manuscript.
Unanswerable
8fcbae7c3bd85034ae074fa58a35e773936edb5b
8fcbae7c3bd85034ae074fa58a35e773936edb5b_0
Q: what was their baseline comparison? Text: Introduction Time-critical analysis of social media data streams is important for many application areas. For instance, responders to humanitarian disasters (e.g., earthquake, flood) need information about the disasters to determine what help is needed and where. This information usually breaks out on social media before other sources. During the onset of a crisis situation, rapid analysis of messages posted on microblogging platforms such as Twitter can help humanitarian organizations like the United Nations gain situational awareness, learn about urgent needs of affected people at different locations, and decide on actions accordingly BIBREF0 , BIBREF1 . Artificial Intelligence for Disaster Response (AIDR) is an online platform to support this cause BIBREF2 . During a disaster, any person or organization can use it to collect tweets related to the event. The total volume of all tweets can be huge, about 350 thousand tweets per minute. Filtering them using keywords helps cut down this volume to some extent. However, identifying different kinds of useful tweets that responders can act upon cannot be achieved using only keywords, because a large number of tweets may contain the keywords but are of limited utility for the responders. The best-known solution to address this problem is to use supervised classifiers that would separate useful tweets from the rest. Classifying tweets to identify their usefulness is difficult because: tweets are short – only 140 characters – and therefore, hard to understand without enough context; they often contain abbreviations, informal language and are ambiguous; and, finally, determining whether the tweet is useful in a disaster situation and identifying required actions for relief operations is a hard task because of its subjectivity. Individuals differ in their judgement about whether a tweet is useful or not, and sometimes about whether it belongs to one topical class or another, especially when there is information in a tweet that could be classified into multiple topical classes. Given this ambiguity, a computer cannot agree with annotators at a rate that is higher than the rate at which the annotators agree with each other. Despite advances in natural language processing (NLP), interpreting the semantics of short informal texts automatically remains a hard problem. To classify disaster-related tweets, traditional classification approaches use batch learning with discrete representations of words. This approach has three major limitations. First, at the beginning of a disaster situation, there is no labeled data for the event available for training. Later, the labeled data arrives in small batches depending on the availability of geographically dispersed volunteers. These learning algorithms are dependent on the labeled data of the event for training. Due to the discrete word representations, they perform poorly when trained on the data from previous events (out-of-event data). The second limitation is the offline learning style that inputs the complete labeled data and trains a model. This is computationally expensive in a disaster situation where labeled data is coming in batches. One would need to train a classifier from scratch every time a new batch of labeled data arrives. Third, these approaches require manually engineered features like cue words and TF-IDF vectors BIBREF3 for learning.
Deep neural networks (DNNs) are based on an online learning mechanism and have the flexibility to adaptively learn from new batches of labeled data without requiring retraining from scratch. Due to their distributed word representations, they generalize well and make better use of previously labeled data from other events to speed up the classification process at the beginning of a disaster. DNNs automatically learn latent features as distributed dense vectors, which generalize well and have been shown to benefit various NLP tasks BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . In this paper, we propose a convolutional neural network (CNN) for the classification task. A CNN captures the most salient $n$ -gram information by means of its convolution and max-pooling operations. On top of the typical CNN, we propose an extension that combines a multilayer perceptron with a CNN. We present a series of experiments using different variations of the training data – event data only, out-of-event data only and a concatenation of both. Experiments are conducted for binary and multi-class classification tasks. For the event-only binary classification task, the CNN model outperformed the non-neural models in four out of five tasks with an accuracy gain of up to 4.5 absolute points. In the scenario of no event data, the CNN model shows a substantial improvement of up to 18 absolute points over the non-neural models. This makes the neural network model an ideal choice for tweet classification in the early hours of a disaster. When the event data is combined with out-of-event data, we see results similar to those of event-only training. For multi-class classification, the CNN model outperformed the non-neural models in a similar fashion to the binary classification case. Our variation of the CNN model with a multilayer perceptron (MLP-CNN) performed better than its CNN counterpart. In some cases, adding out-of-event data drops the performance. To reduce the effect of the large out-of-event data and to make the most out of it, we apply a simple event selection technique based on TF-IDF and select only those events that are most similar to the event under consideration. We then train the classifiers on the concatenation of the event plus the selected out-of-event data. The performance improves only for the event with a small amount of event data. To summarize, we show that neural network models can be used reliably with the already available out-of-event data for binary and multi-class classification. The automatic feature learning capability brings additional value on top of non-neural classification methods. The MLP-CNN results show that there is still room for improvement on top of the best accuracy achieved. The rest of the paper is organized as follows. We summarize related work in Section "Related Work" . Section "Convolutional Neural Network" presents the convolutional neural model. In Section "Experimental Settings" , we describe the dataset and training settings of the models. Section "Results" presents our results and analysis. We conclude and discuss future work in Section "Conclusion and Future Work" . Related Work Studies have analyzed how Twitter can be useful during major disasters so as to gain insight into the situation as it unfolds BIBREF0 , BIBREF8 , BIBREF9 . A number of systems have been developed to classify, extract, and summarize crisis-relevant information from social media; for a detailed survey see BIBREF3 . Cameron et al. describe a platform for emergency situation awareness BIBREF10 .
They classify interesting tweets using an SVM classifier. Verma et al. use Naive Bayes and MaxEnt classifiers to find situational awareness tweets from several crises BIBREF11 . Imran et al. implemented AIDR to classify a Twitter data stream during crises BIBREF2 . They use a random forest classifier in an offline setting. After receiving every minibatch of 50 training examples, they replace the older model with a new one. There has been growing interest in recent years in DNNs and word embeddings, with applications to a myriad of NLP problems. The emergence of tools such as word2vec BIBREF6 and GloVe BIBREF12 has enabled NLP researchers to learn word embeddings efficiently and use them to train better models. Collobert et al. BIBREF4 presented a unified DNN architecture for solving various NLP tasks including part-of-speech tagging, chunking, named entity recognition and semantic role labeling. They showed that DNNs outperform traditional models in most of the tasks. They also proposed a multi-task learning framework for solving the tasks jointly. Kim BIBREF13 and Kalchbrenner et al. BIBREF14 used convolutional neural networks (CNN) for sentence-level classification tasks (e.g., sentiment/polarity classification, question classification) and showed that CNNs outperform traditional methods (e.g., SVMs, MaxEnts). Despite these recent advancements, the application of CNNs to disaster response is novel to the best of our knowledge. Convolutional Neural Network In order to classify short and noisy Twitter messages effectively, possibly in the absence of any in-event training data, a classification model should use a distributed representation of words, which results in improved generalization, and should learn the key features at different levels of abstraction automatically. To this end, we use a Convolutional Neural Network (CNN) as proposed by Kim BIBREF13 . Figure 1 demonstrates how a CNN works with an example tweet “guys if know any medical emergency around balaju area you can reach umesh HTTP doctor at HTTP HTTP”. Each word in the vocabulary $V$ is represented by a $D$ dimensional vector in a shared look-up table $L \in \mathbb {R}^{|V| \times D}$ . $L$ is considered a model parameter to be learned. We can initialize $L$ randomly or using pretrained word embedding vectors like word2vec BIBREF15 . Given an input tweet $\mathbf {s} = (w_1, \cdots , w_T)$ , we first transform it into a feature sequence by mapping each word token $w_t \in \mathbf {s}$ to an index in $L$ . The look-up layer then creates an input vector $\mathbf {x_t}\in \mathbb {R}^{D}$ for each token $w_t$ , and these vectors are passed through a sequence of convolution and pooling operations to learn high-level feature representations. A convolution operation involves applying a filter $\mathbf {u} \in \mathbb {R}^{L \cdot D}$ to a window of $L$ words to produce a new feature $$h_t = f(\mathbf {u} \cdot \mathbf {x}_{t:t+L-1} + b_t)$$ (Eq. 4) where $\mathbf {x}_{t:t+L-1}$ denotes the concatenation of $L$ input vectors, $b_t$ is a bias term, and $f$ is a nonlinear activation function (e.g., sigmoid, $\tanh $ ). We apply this filter to each possible $L$ -word window in the tweet to generate a feature map $\mathbf {h}^i = [h_1, \cdots , h_{T+L-1}]$ . We repeat this process $N$ times with $N$ different filters to get $N$ different feature maps (i.e., $\mathbf {h}^1, \cdots , \mathbf {h}^N$ ). We use a wide convolution BIBREF14 (as opposed to narrow), which ensures that the filters reach the entire sentence, including the boundary words.
This is done by performing zero-padding, where out-of-range (i.e., $t < 1$ or $t > T$ ) vectors are assumed to be zero. After the convolution, we apply a max-pooling operation to each feature map. $$\mathbf {m} = [\mu _p(\mathbf {h}^1), \cdots , \mu _p(\mathbf {h}^N)] $$ (Eq. 5) where $\mu _p(\mathbf {h}^i)$ refers to the $\max $ operation applied to each window of $p$ features in the feature map $\mathbf {h}^i$ . For instance, with $p=2$ , this pooling gives the same number of features as in the feature map (because of the zero-padding). Intuitively, the filters compose local $n$ -grams into higher-level representations in the feature maps, and max-pooling reduces the output dimensionality while keeping the most important aspects from each feature map. Since each convolution-pooling operation is performed independently, the features extracted become invariant in location (i.e., where they occur in the tweet), thus acting like a bag-of- $n$ -grams. However, keeping the order information could be important for modeling sentences. In order to model interactions between the features picked up by the filters and the pooling, we include a dense layer of hidden nodes on top of the pooling layer $$\mathbf {z} = f(V\mathbf {m} + \mathbf {b_h}) $$ (Eq. 6) where $V$ is the weight matrix, $\mathbf {b_h}$ is a bias vector, and $f$ is a non-linear activation function. The dense layer naturally deals with variable sentence lengths by producing fixed size output vectors $\mathbf {z}$ , which are fed to the final output layer for classification. Formally, the output layer defines a Bernoulli distribution: $$p(y|\mathbf {s}, \theta ) = \mathrm {Ber}\left(y \,|\, \sigma (\mathbf {w}^T \mathbf {z} + b )\right) $$ (Eq. 7) where $\sigma $ refers to the sigmoid function, $\mathbf {w}$ are the weights from the dense layer to the output layer and $b$ is a bias term. We fit the models by minimizing the cross-entropy between the predicted distributions $\hat{y}_{n\theta } = p(y_n|\mathbf {s}_n, \theta )$ and the target distributions $y_n$ (i.e., the gold labels): $$J(\theta ) = - \sum _{n} \left[ y_n \log \hat{y}_{n\theta } + (1-y_n) \log \left(1- \hat{y}_{n\theta } \right) \right]$$ (Eq. 9) Word Embedding and Fine-tuning In our CNN model, we intend to avoid manual feature engineering efforts by using word embeddings as the only features. As mentioned before, we can initialize the embeddings $L$ randomly and learn them as part of the model parameters by backpropagating the errors to the look-up layer. One issue with random initialization is that it may lead the training algorithm to get stuck in local minima. On the other hand, one can plug readily available embeddings from external sources (e.g., Google embeddings BIBREF15 ) into the CNN model and use them as features without tuning them further for the task, as is done in any other machine learning model. However, this approach does not exploit the automatic feature learning capability of NN models, which is one of the main motivations for using them. In our work, we use pre-trained word embeddings to better initialize our models, and we fine-tune them for our task in training, which turns out to be beneficial. More specifically, we initialize the word vectors in $L$ in two different ways. 1. Google Embedding: Mikolov et al. BIBREF15 propose two log-linear models for computing word embeddings from a large (unlabeled) corpus efficiently: a bag-of-words model CBOW that predicts the current word based on the context words, and a skip-gram model that predicts surrounding words given the current word.
Mikolov et al. released their pre-trained 300-dimensional word embeddings (vocabulary size 3 million) trained by the skip-gram model on part of the Google News dataset containing about 100 billion words. 2. Crisis Embedding: Since we work on disaster-related tweets, which are quite different from news, we have also trained domain-specific embeddings (vocabulary size 20 million) using the skip-gram model of the word2vec tool BIBREF6 on a large corpus of disaster-related tweets. The corpus contains $57,908$ tweets and $9.4$ million tokens. For comparison with the Google embeddings, we learn 300-dimensional word embeddings. Incorporating Other Features Although CNNs learn word features (i.e., embeddings) automatically, we may still be interested in incorporating other sources of information (e.g., a TF-IDF vector representation of tweets) to build a more effective model. Additional features can also guide the training to learn a better model. However, unlike word embeddings, we want these features to be fixed during training. This can be done in our CNN model by creating another channel, which feeds these additional features directly to the dense layer. In that case, the dense layer in Equation 6 can be redefined as $$\mathbf {z} = f(V^{\prime } \mathbf {m^{\prime }} + \mathbf {b_h})$$ (Eq. 13) where $\mathbf {m^{\prime }}=\mathbf {[m;y]}$ is a concatenated (column) vector of feature maps $\mathbf {m}$ and additional features $\mathbf {y}$ , and $V^{\prime }$ is the associated weight matrix. Notice that by including this additional channel, this network combines a multi-layer perceptron (MLP) with a CNN. Experimental Settings In this section, we first describe the dataset that is used for the classification task. We then present the TF-IDF based features which are used to train the non-neural classification algorithms. Finally, we describe the model and training settings of the non-neural and neural classification models. Datasets We use data from multiple sources: (1) CrisisLex BIBREF16 , (2) CrisisNLP BIBREF17 , and (3) AIDR BIBREF2 . The first two sources have tweets posted during several humanitarian crises and labeled by paid workers. The AIDR data consists of tweets from several crisis events labeled by volunteers. The dataset consists of various event types such as earthquakes, floods, typhoons, etc. In all the datasets, the tweets are labeled into various informative classes (e.g., urgent needs, donation offers, infrastructure damage, dead or injured people) and one not-related or irrelevant class. Table 1 provides a one-line description of each class and also the total number of labels from all the sources. Other useful information and Not related or irrelevant are the most frequent classes in the dataset. Table 2 shows statistics about the events we use for our experiments. In order to assess the difficulty of the classification task, we calculate the inter-annotator agreement (IAA) scores of the datasets obtained from CrisisNLP. The California Earthquake has the highest IAA of 0.85 and Typhoon Hagupit has the lowest IAA of 0.70 among the events under consideration. The IAA of the remaining three events is around 0.75. We aim to reach these levels of accuracy. Data Preprocessing: We normalize all characters to their lower-cased forms, truncate elongations to two characters, map every digit to D, all Twitter usernames to userID, and all URLs to HTTP. We remove all punctuation marks except periods, semicolons, question and exclamation marks.
We further tokenize the tweets using the CMU TweetNLP tool BIBREF18 . Data Settings: For a particular event such as the Nepal earthquake, data from all other events under consideration plus All others (see Table 2 ) is referred to as out-of-event data. We divide each event dataset into train (70%), validation (10%) and test (20%) sets using the scikit-learn toolkit BIBREF19 , which ensures that the class distribution remains reasonably balanced in each subset. Feature Extraction: We extracted unigram, bigram and trigram features from the tweets. The features are converted to TF-IDF vectors by considering each tweet as a document. Note that these features are used only in the non-neural models. The neural models take tweets and their labels as input. For the SVM classifier, we applied feature selection using the chi-squared test to improve the estimator's accuracy. Non-neural Model Settings To compare our neural models with the traditional approaches, we experimented with a number of existing models including: Support Vector Machine (SVM), a discriminative max-margin model; Logistic Regression (LR), a discriminative probabilistic model; and Random Forest (RF), an ensemble model of decision trees. We use the implementation from the scikit-learn toolkit BIBREF19 . All algorithms use the default values of their parameters. Settings for Convolutional Neural Network We train CNN models by optimizing the cross-entropy in Equation 9 using the gradient-based online learning algorithm ADADELTA BIBREF20 . The learning rate and related parameters were set to the values suggested by the authors. The maximum number of epochs was set to 25. To avoid overfitting, we use dropout BIBREF21 of hidden units and early stopping based on the accuracy on the validation set. We experimented with $\lbrace 0.0, 0.2, 0.4, 0.5\rbrace $ dropout rates and $\lbrace 32, 64, 128\rbrace $ minibatch sizes. We limit the vocabulary ( $V$ ) to the most frequent $P\%$ ( $P\in \lbrace 80, 85, 90\rbrace $ ) words in the training corpus. The word vectors in $L$ were initialized with the pre-trained embeddings; see Section "Word Embedding and Fine-tuning" . We use rectified linear units (ReLU) for the activation functions ( $f$ ), $\lbrace 100, 150, 200\rbrace $ filters each having window size ( $L$ ) of $\lbrace 2, 3, 4\rbrace $ , pooling length ( $p$ ) of $\lbrace 2, 3, 4\rbrace $ , and $\lbrace 100, 150, 200\rbrace $ dense layer units. All the hyperparameters are tuned on the development set. Results For each event under consideration, we train classifiers on the event data only, on the out-of-event data only, and on a combination of both. We conduct experiments for the binary and multi-class classification tasks. For the former, we merge all informative classes to create one general Informative class. We initialized the CNN model using two types of pre-trained word embeddings: crisis embeddings (CNN $_{I}$ ), trained on all crisis tweet data, and Google embeddings (CNN $_{II}$ ), trained on the Google News dataset. The CNN model then fine-tuned the embeddings using the training data. Binary Classification Table 3 (left) presents the results of binary classification, comparing several non-neural classifiers with the CNN-based classifier. For the scenario of training on event-only data, CNNs performed better than all non-neural classifiers in every event. The CNN performed substantially better than the non-neural SVM, by a margin of up to 4%, when trained on the out-of-event data only.
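For reference, the non-neural baselines compared in these tables, as described in the Experimental Settings above (TF-IDF vectors over word unigrams to trigrams, chi-squared feature selection for the SVM, default classifier parameters), can be sketched with scikit-learn as follows. The number of selected features and the choice of LinearSVC as the SVM implementation are our assumptions, not details given in the paper.

```python
# Sketch of the non-neural baselines: TF-IDF n-gram features, chi-squared
# feature selection for the SVM, and default scikit-learn classifier settings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def ngram_tfidf():
    # Unigram, bigram, and trigram TF-IDF features; each tweet is a document.
    return TfidfVectorizer(ngram_range=(1, 3))

def build_baselines(k_best=5000):       # k_best is an illustrative value
    return {
        "SVM": Pipeline([("tfidf", ngram_tfidf()),
                         ("chi2", SelectKBest(chi2, k=k_best)),  # feature selection for the SVM
                         ("clf", LinearSVC())]),
        "LR": Pipeline([("tfidf", ngram_tfidf()), ("clf", LogisticRegression())]),
        "RF": Pipeline([("tfidf", ngram_tfidf()), ("clf", RandomForestClassifier())]),
    }

# Example usage with preprocessed tweet strings and binary labels:
# train_x, test_x, train_y, test_y = train_test_split(
#     tweets, labels, test_size=0.2, stratify=labels, random_state=0)
# for name, pipe in build_baselines().items():
#     pipe.fit(train_x, train_y)
#     print(name, classification_report(test_y, pipe.predict(test_x)))
```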
These results show the robustness of the CNN model in learning from out-of-event data when no event data is available. This can be very helpful during the early hours of a crisis when no event-specific labeled data is available. When event data was combined with out-of-event data, the CNN also performed better than the non-neural models. However, comparing different data settings for the CNN, we saw mixed results. In most of the cases, the performance dropped in comparison to the event-only training. The large size of the out-of-event data down-weighted the benefits of the event data and skewed the probability distribution towards the out-of-event data. Table 3 (right) presents the confusion matrices of the SVM and CNN $_I$ classifiers trained and evaluated on the Nepal earthquake data. The SVM predictions are inclined towards the Informative class, whereas the CNN predicted more instances as non-informative than informative. In the case of out-of-event training, the SVM predicted most of the instances as informative. It thus achieved high recall but very low precision. The CNN, on the other hand, achieved quite balanced precision and recall. To summarize, the neural network based classifier outperformed the non-neural classifiers in all data settings. The performance of the models trained on out-of-event data is (as expected) lower than that in the other two training settings. However, in the case of the CNN models, the results are reasonable to the extent that out-of-event data can be used to predict tweet informativeness when no event data is available. It is also worth mentioning here that, aside from outperforming the non-neural classifiers, neural network models do not require any feature engineering and learn features automatically from the data. Comparing CNN $_{I}$ with CNN $_{II}$ , we did not see either system consistently outperform the other. For further experimentation and comparison, we only consider CNN $_{I}$ , trained on crisis embeddings. Multi-class Classification For the purpose of multi-class classification, we mainly compare the performance of two variations of the CNN-based classifier, CNN $_{I}$ and MLP-CNN $_{I}$ (combining a multi-layer perceptron and a CNN), against an SVM classifier. Not all labeled data from other events may be useful for the event under consideration. Based on this intuition, we apply a data selection technique to select the data from out-of-events that best matches the event. To achieve this, we trained a classifier on the event data only and predicted the label of each tweet of the out-of-event data. We selected the tweets that were correctly predicted by the event classifier and added them (M $_{event+adapt}$ ) to the event data for the training of the classifier for the multi-class classification task (a short sketch of this selection step is given below). Table 4 summarizes the accuracy and macro F1 scores of the multi-class classification task. Similar to the results of the binary classification task, the CNN model outperformed the SVM in almost all data settings. Most promisingly, combining the MLP and CNN substantially improves the performance of our system. Training with event plus out-of-event data did not yield a clear improvement over the event-only model; the results even dropped slightly in some cases. In addition, by using a simple domain adaptation technique, we remove noisy tweets from the out-of-event data and improve the performance of our models. The M $_{event+adapt}$ system in Table 4 shows the results. The domain-adapted system consistently performed better than its respective baseline and than the system trained on event data only.
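The data-selection step used to build M $_{event+adapt}$ can be sketched as follows; `base_classifier` stands for whichever event-only model is being used (for instance, one of the scikit-learn pipelines sketched earlier), and the helper's name and signature are ours.

```python
# Sketch of the data-selection ("adaptation") step: keep only out-of-event
# tweets whose gold label is reproduced by a classifier trained on the event
# data alone, then append them to the event training set.
import numpy as np

def select_out_of_event(base_classifier, event_x, event_y, out_x, out_y):
    base_classifier.fit(event_x, event_y)                 # event-only classifier
    keep = np.asarray(base_classifier.predict(out_x)) == np.asarray(out_y)
    adapted_x = list(event_x) + [x for x, k in zip(out_x, keep) if k]
    adapted_y = list(event_y) + [y for y, k in zip(out_y, keep) if k]
    return adapted_x, adapted_y
```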
The macro F1 scores show the per-class performance of each classifier. The Nepal earthquake has the lowest per-class classification performance. This could be due to the imbalance in the training data, as can be seen in Table 2 . In Table 5 , we show the confusion matrix of the CNN-MLP model tested on the Nepal earthquake data. The class Other useful information is the second largest class in the data. The classifier became biased towards it and frequently predicted tweets from other classes as belonging to this class. Not related or irrelevant is the most confused class among all classes. The precision-recall curve in Table 5 shows that Not related or irrelevant and Donation and Volunteering are easier to predict than the other classes. Their AUC scores are $0.87$ and $0.70$ respectively. Meanwhile, the AUC scores of the Other useful information and Infrastructure and utilities classes are lower than the random (0.5) level. Conclusion and Future Work We presented a deep neural network model for binary and multi-class classification tasks and showed that one can reliably use out-of-event data for the classification of a new event when no in-event data is available. The convolutional neural network model performed consistently across all five events under consideration and worked better than all non-neural models in most of the cases. The performance of the classifiers degraded when out-of-event training samples were added to the training samples from the event data. Thus, we recommend using out-of-event training data only during the first few hours of a disaster, after which the training data related to the event should be used. In future work, we would like to improve the performance of the system using domain adaptation, either by model weighting (building separate models for event and out-of-event data and combining them) or by data selection (intelligently selecting the out-of-event data that is best suited to the event data).
Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF)
cbbcafffda7107358fa5bf02409a01e17ee56bfd
cbbcafffda7107358fa5bf02409a01e17ee56bfd_0
Q: Was any variation in results observed based on language typology? Text: Introduction Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6. In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representation-valued random variable and a linguistic property-valued random variable. This formulation gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means. Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8, BIBREF9, BIBREF10. In this context, we also discuss the recent work of hewitt-liang-2019-designing who propose selectivity as a criterion for choosing families of probes. hewitt-liang-2019-designing define selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints. In support of our discussion, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task BIBREF6, BIBREF11, within our framework. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Tamil, and Turkish), we show that the representations from BERT, a common contextualized embedder, only account for at most $5\%$ more of the part-of-speech tag entropy than a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require the sentential context of the word. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP BIBREF12. We interpret this to mean that part-of-speech labeling is not a very informative probing task. 
We also remark that formulating probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT BIBREF13 and ELMo BIBREF14, contain the same amount of information about the linguistic property of interest as the original sentence. This follows naturally from the data-processing inequality under a very mild assumption. What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all. Word-Level Syntactic Probes for Contextual Embeddings Following hewitt-liang-2019-designing, we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a single token's embedding and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) labeling: determining a word's part of speech in a given sentence. For example, we wish to determine whether the word love is a noun or a verb. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a noun. Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS. Word-Level Syntactic Probes for Contextual Embeddings ::: Notation Let $S$ be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary $\mathcal {V}$ is finite and, thus, the values $S$ can take are in $\mathcal {V}^*$. We write $\mathbf {s}\in S$ as $\mathbf {s}= w_1 \cdots w_{|\mathbf {s}|}$ for a specific sentence, where each $w_i \in \mathcal {V}$ is a specific word in the sentence and the position $i \in \mathbb {N}^{+}$. We also define the random variable $W$ that ranges over the vocabulary $\mathcal {V}$. We define both a sentence-level random variable $S$ and a word-level random variable $W$ since each will be useful in different contexts during our exposition. Next, let $T$ be a random variable whose possible values are the analyses $t$ that we want to consider for word $w_i$ in its sentential context, $\mathbf {s}= w_1 \cdots w_i \cdots w_{|\mathbf {s}|}$. In this work, we will focus on predicting the part-of-speech tag of the $i^\text{th}$ word $w_i$. We denote the set of values $T$ can take as the set $\mathcal {T}$. Finally, let $R$ be a representation-valued random variable for the $i^\text{th}$ word $w_i$ in a sentence derived from the entire sentence $\mathbf {s}$. We write $\mathbf {r}\in \mathbb {R}^d$ for a value of $R$. While any given value $\mathbf {r}$ is a continuous vector, there are only a countable number of values $R$ can take. To see this, note there are only a countable number of sentences in $\mathcal {V}^*$. Next, we assume there exists a true distribution $p(t, \mathbf {s}, i)$ over analyses $t$ (elements of $\mathcal {T}$), sentences $\mathbf {s}$ (elements of $\mathcal {V}^*$), and positions $i$ (elements of $\mathbb {N}^{+}$). Note that the conditional distribution $p(t \mid \mathbf {s}, i)$ gives us the true distribution over analyses $t$ for the $i^{\text{th}}$ word in the sentence $\mathbf {s}$. 
We will augment this distribution such that $p$ is additionally a distribution over $\mathbf {r}$, i.e., $$p(\mathbf {r}, t, \mathbf {s}, i) = p(t, \mathbf {s}, i)\, \delta \big (\mathbf {r} - \textsc {bert} (\mathbf {s})_i\big ),$$ where we define the augmentation using a Dirac delta function. Since contextual embeddings are a deterministic function of a sentence $\mathbf {s}$, the augmented distribution in eq:true has no more randomness than the original—its entropy is the same. We assume the values of the random variables defined above are distributed according to this (unknown) $p$. While we do not have access to $p$, we assume the data in our corpus were drawn according to it. Note that $W$—the random variable over possible words—is distributed according to the marginal distribution over word types induced by $p$: a word is drawn by sampling a sentence $\mathbf {s}$ and a position $i$ and then deterministically taking the word $w_i$ at that position. Word-Level Syntactic Probes for Contextual Embeddings ::: Probing as Mutual Information The task of supervised probing is an attempt to ascertain how much information a specific representation $\mathbf {r}$ tells us about the value of $t$. This is naturally expressed as the mutual information, a quantity from information theory: $$\mathrm {I}(T; R) = \mathrm {H}(T) - \mathrm {H}(T \mid R),$$ where we define the entropy, which is constant with respect to the representations, as $$\mathrm {H}(T) = -\sum _{t \in \mathcal {T}} p(t) \log p(t),$$ and where we define the conditional entropy as $$\mathrm {H}(T \mid R) = \sum _{\mathbf {r}} p(\mathbf {r})\, \mathrm {H}(T \mid R = \mathbf {r});$$ the point-wise conditional entropy inside the sum is defined as $$\mathrm {H}(T \mid R = \mathbf {r}) = -\sum _{t \in \mathcal {T}} p(t \mid \mathbf {r}) \log p(t \mid \mathbf {r}).$$ Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq:true. Word-Level Syntactic Probes for Contextual Embeddings ::: Bounding Mutual Information The desired conditional entropy, $\mathrm {H}(T \mid R)$, is not readily available, but with a model $q_{{\theta }}(t \mid \mathbf {r})$ in hand, we can upper-bound it by measuring their empirical cross entropy: $$\mathrm {H}(T \mid R) \le \mathrm {H}_{q_{{\theta }}}(T \mid R) = -\sum _{\mathbf {r}} p(\mathbf {r}) \sum _{t \in \mathcal {T}} p(t \mid \mathbf {r}) \log q_{{\theta }}(t \mid \mathbf {r}),$$ where $\mathrm {H}_{q_{{\theta }}}(T \mid R)$ is the cross-entropy we obtain by using $q_{{\theta }}$ to get this estimate. Since the KL divergence is always positive, we may lower-bound the desired mutual information: $$\mathrm {I}(T; R) \ge \mathrm {H}(T) - \mathrm {H}_{q_{{\theta }}}(T \mid R).$$ This bound gets tighter, the more similar (in the sense of the KL divergence) $q_{{\theta }}(\cdot \mid \mathbf {r})$ is to the true distribution $p(\cdot \mid \mathbf {r})$. Word-Level Syntactic Probes for Contextual Embeddings ::: Bounding Mutual Information ::: Bigger Probes are Better. If we accept mutual information as a natural measure for how much representations encode a target linguistic task (§SECREF6), then the best estimate of that mutual information is the one where the probe $q_{{\theta }}(t \mid \mathbf {r})$ is best at the target task. In other words, we want the best probe $q_{{\theta }}(t \mid \mathbf {r})$ such that we get the tightest bound to the actual distribution $p(t\mid \mathbf {r})$. This paints the question posed by hewitt-liang-2019-designing, who write “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” as a false dichotomy. From an information-theoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure. Control Functions To place the performance of a probe in perspective, hewitt-liang-2019-designing develop the notion of a control task. Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable $R$.
Similar to hewitt-liang-2019-designing's control tasks, the goal of a control function $\mathbf {c}(\cdot )$ is to place the mutual information $\mathrm {I}(T; R)$ in the context of a baseline that the control function encodes. Control functions have their root in the data-processing inequality BIBREF7, which states that, for any function $\mathbf {c}(\cdot )$, we have $$\mathrm {I}(T; R) \ge \mathrm {I}(T; \mathbf {c}(R)).$$ In other words, information can only be lost by processing data. A common adage associated with this inequality is “garbage in, garbage out.” Control Functions ::: Type-Level Control Functions We will focus on type-level control functions in this paper; these functions have the effect of decontextualizing the embeddings. Such functions allow us to inquire how much the contextual aspect of the contextual embeddings helps the probe perform the target task. To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption about the embeddings. Assumption 1 Every contextualized embedding is unique, i.e., for any pair of sentences $\mathbf {s}, \mathbf {s}^{\prime } \in \mathcal {V}^*$, we have $(\mathbf {s}\ne \mathbf {s}^{\prime }) \vee (i \ne j) \Rightarrow \textsc {bert} (\mathbf {s})_i \ne \textsc {bert} (\mathbf {s}^{\prime })_j$ for all $i \in \lbrace 1, \ldots , |\mathbf {s}|\rbrace $ and $j \in \lbrace 1, \ldots , |\mathbf {s}^{\prime }|\rbrace $. We note that ass:one is mild. Contextualized word embeddings map words (in their context) to $\mathbb {R}^d$, which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in $\mathbb {R}^d$ that a contextualized embedder may produce. The probability that any two embeddings would be the same across two distinct sentences is infinitesimally small. ass:one yields the following corollary. Corollary 1 There exists a function $\emph {\texttt {id} } : \mathbb {R}^d \rightarrow V$ that maps a contextualized embedding to its word type. The function $\emph {\texttt {id} }$ is not a bijection since multiple embeddings will map to the same type. Using cor:one, we can show that any non-contextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function $\mathbf {e}: V \rightarrow \mathbb {R}^d$ that maps a word to a word embedding. This embedding may be one-hot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText BIBREF15. We can then construct a control function as the composition of the look-up function $\mathbf {e}$ and the id function $\texttt {id} $. Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type-level) word embedding will contain no more information than a contextualized (token-level) one, such as BERT and ELMo. Specifically, we have $$\mathrm {I}(T; R) \ge \mathrm {I}(T; \mathbf {c}(R)) = \mathrm {I}\big (T; \mathbf {e}(\texttt {id} (R))\big ).$$ This result is intuitive and, perhaps, trivial—context matters information-theoretically. However, it gives us a principled foundation by which to measure the effectiveness of probes as we will show in sec:gain. Control Functions ::: How Much Information Did We Gain? We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function $\mathbf {c}(\cdot )$.
We term how much more information the contextualized embeddings have about a task than a control variable the gain, which we define as The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function $\mathbf {c}$. We will empirically estimate this value in sec:experiments. Interestingly enough, the gain has a straightforward interpretation. Proposition 1 The gain function is equal to the following conditional mutual information The jump from the first to the second equality follows since $R$ encodes all the information about $T$ provided by $\mathbf {c}(R)$ by construction. prop:interpretation gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations. Control Functions ::: Approximating the Gain The gain, as defined in eq:gain, is intractable to compute. In this section we derive a pair of variational bounds on $\mathcal {G}(T, R, \mathbf {e})$—one upper and one lower. To approximate the gain, we will simultaneously minimize an upper and a lower-bound on eq:gain. We begin by approximating the gain in the following manner these cross-entropies can be empirically estimated. We will assume access to a corpus $\lbrace (t_i, \mathbf {r}_i)\rbrace _{i=1}^N$ that is human-annotated for the target linguistic property; we further assume that these are samples $(t_i, \mathbf {r}_i) \sim p(\cdot , \cdot )$ from the true distribution. This yields a second approximation that is tractable: This approximation is exact in the limit $N \rightarrow \infty $ by the law of large numbers. We note the approximation given in eq:approx may be either positive or negative and its estimation error follows from eq:entestimate where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative, by the data-processing inequality, but we have yet to devise a remedy. We justify the approximation in eq:approx with a pair of variational bounds. The following two corollaries are a result of thm:variationalbounds in appendix:a. Corollary 2 We have the following upper-bound on the gain Corollary 3 We have the following lower-bound on the gain The conjunction of cor:upper and cor:lower suggest a simple procedure for finding a good approximation: We choose $q_{{\theta }1}(\cdot \mid r)$ and $q_{{\theta }2}(\cdot \mid r)$ so as to minimize eq:upper and maximize eq:lower, respectively. These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently. We will optimize both with a gradient-based procedure, discussed in sec:experiments. Understanding Probing Information-Theoretically In sec:control-functions we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in sec:context that contextualized word embeddings, by construction, contain no more information about a word-level syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. 
We expound upon this in sec:control-functions-meaning, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech. Understanding Probing Information-Theoretically ::: You Know Nothing, BERT To start, we note the following corollary. Corollary 4 It directly follows from ass:one that $\textsc {bert} $ is a bijection between sentences $\mathbf {s}$ and sequences of embeddings $\langle \mathbf {r}_1, \ldots , \mathbf {r}_{|\mathbf {s}|} \rangle $. As $\textsc {bert} $ is a bijection, it has an inverse, which we will denote as $\textsc {bert}^{-1} $. Theorem 1 The function $\textsc {bert} (S)$ cannot provide more information about $T$ than the sentence $S$ itself. This implies $\mathrm {I}(T ; S) = \mathrm {I}(T; \textsc {bert} (S))$. We remark this is not a BERT-specific result—it rests on the fact that the data-processing inequality is tight for bijections. While thm:bert is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, thm:bert is a cynical statement: the endeavour of finding syntax in the contextualized embeddings of sentences is nonsensical. This is because, under ass:one, we know the answer a priori—the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself. Understanding Probing Information-Theoretically ::: What Do Control Functions Mean? Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Actually, the same reasoning used in cor:one could be used to devise a function $\texttt {id} _s(\mathbf {r})$ which leads from a single representation back to the whole sentence. For a type-level control function $\mathbf {c}$, by the data-processing inequality, we have that $\mathrm {I}(T; W) \ge \mathrm {I}(T; \mathbf {c}(R))$. Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is $\mathrm {I}(T; S) - \mathrm {I}(T; W) = \mathrm {I}(T; S \mid W)$. This quantity is interpreted as the amount of knowledge we gain about the word-level task $T$ by knowing $S$ (i.e., the sentence) in addition to $W$ (i.e., the word). Therefore, a perfect probe would provide insights about language and not about the actual representations, which are no more than a means to an end. Understanding Probing Information-Theoretically ::: Discussion: Ease of Extraction We do acknowledge another interpretation of the work of hewitt-liang-2019-designing inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable. However, ease of extraction is not a trivial notion to formalize, and indeed, we know of no attempt to do so; it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP). Indeed, an MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units BIBREF16.
Although for perfect probes the above results should hold, in practice $\texttt {id} (\cdot )$ and $\mathbf {c}(\cdot )$ may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. A random embedding control function, for example, would require an infinitely large dataset to be learned—or at least one that contained all words in the vocabulary $V$. “Better” representations should make their respective probes more easily learnable—and consequently their encoded information more accessible. We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. The advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? We suspect that many researchers who are interested in less complex probes have implicitly had this in mind. A Critique of Control Tasks While this paper builds on the work of hewitt-liang-2019-designing, and we agree with them that we should have control tasks when probing for linguistic properties, we disagree with parts of the methodology for the control task construction. We present these disagreements here. A Critique of Control Tasks ::: Structure and Randomness hewitt-liang-2019-designing introduce control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from hewitt-liang-2019-designing. They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of part-of-speech labeling, love would either be a noun or a verb in a control task, never both: this is a problem. The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look at all like natural language. What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-most-frequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words. A Critique of Control Tasks ::: What's Wrong with Memorization? hewitt-liang-2019-designing propose that probes should be optimised to maximise accuracy and selectivity. Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. 
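For concreteness, the zero-parameter retrieval baseline described in the previous subsection, a pure memorizer, can be sketched as follows; the function and variable names are ours, not from the original work.

```python
# Sketch of the zero-parameter retrieval "probe": memorize the training
# annotation of every seen word type and back off to the globally most
# frequent tag for unseen types.
from collections import Counter

def fit_retrieval_probe(train_words, train_tags):
    # Under a control task every word type carries a single deterministic label,
    # so memorizing one tag per seen type suffices.
    type_to_tag = dict(zip(train_words, train_tags))
    fallback = Counter(train_tags).most_common(1)[0][0]   # most frequent tag overall
    return type_to_tag, fallback

def predict_tag(type_to_tag, fallback, word):
    # Seen type: return its memorized tag; unseen type: back off to the fallback.
    return type_to_tag.get(word, fallback)
```

This is exactly the kind of memorizer that a selectivity criterion implicitly penalizes, which is the subject of the discussion that follows.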
Given their characterization of control tasks, maximising selectivity leads to the selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization; however, memorization also plays a key role BIBREF17, BIBREF18, BIBREF19, with word learning BIBREF20 being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property. A Critique of Control Tasks ::: What Low-Selectivity Means hewitt-liang-2019-designing acknowledge that for the more complex task of dependency edge prediction, an MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, hewitt-liang-2019-designing (§3.6) point out that, in their experiments, the MLP-1 model frequently mislabels words with the suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information the model has learned. Our analysis leads us to believe that, on the contrary, this shows that one should be using the best possible probe to minimize the chance of misrepresentation. Since more complex probes achieve higher accuracy on the task, as evidenced by the findings of hewitt-liang-2019-designing, we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity. The same applies to the second example, discussed in §4.2 of hewitt-liang-2019-designing, where a less selective probe appears to be less faithful. The authors show that the representations from ELMo's second layer fail to outperform its word-type ones (layer zero) on the POS labeling task when using the MLP-1 probe. While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present; see sec:ease-extract. Experiments We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 2.4 BIBREF22. We probe the multilingual release of BERT on six typologically diverse languages: Basque, Czech, English, Finnish, Tamil, and Turkish; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank. Experiments ::: Control Functions We will consider three different control functions. Each is defined as the composition $\mathbf {c}= \mathbf {e}\circ \texttt {id} $ with a different look-up function. These look-up functions are: $\mathbf {e}_\textit {fastText}$, which returns a language-specific fastText embedding BIBREF15; $\mathbf {e}_\textit {onehot}$, which returns a one-hot embedding; and $\mathbf {e}_\textit {random}$, which returns a fixed random embedding. All of these functions are type-level in that they remove the influence of the context on the word.
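A minimal sketch of these three look-up functions is given below. It assumes the language-specific fastText vectors have already been loaded into a plain word-to-vector dictionary; the embedding dimension and random seed are illustrative choices, not values from the paper.

```python
# Sketch of the three type-level look-up functions e used in c = e ∘ id.
import numpy as np

class TypeLevelControls:
    def __init__(self, vocab, fasttext_vectors, dim=300, seed=0):
        self.index = {w: i for i, w in enumerate(vocab)}
        self.fasttext = fasttext_vectors                       # dict: word -> np.ndarray (assumed preloaded)
        rng = np.random.default_rng(seed)
        self.random_table = rng.normal(size=(len(vocab), dim)) # fixed random embeddings, never trained

    def e_fasttext(self, word):
        return self.fasttext[word]

    def e_onehot(self, word):
        v = np.zeros(len(self.index))
        v[self.index[word]] = 1.0
        return v

    def e_random(self, word):
        return self.random_table[self.index[word]]

# In practice c(r) = e(id(r)); since the corpus is annotated, the word type id(r)
# is read directly off the data rather than recovered from the representation.
```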
Experiments ::: Probe Architecture As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as an $m$-layer neural network with the non-linearity $\sigma (\cdot ) = \mathrm {ReLU}(\cdot )$. The initial projection matrix is $W^{(1)} \in \mathbb {R}^{r_1 \times d}$ and the final projection matrix is $W^{(m)} \in \mathbb {R}^{|\mathcal {T}| \times r_{m-1}}$, where $r_i=\frac{r}{2^{i-1}}$. The remaining matrices are $W^{(i)} \in \mathbb {R}^{r_i \times r_{i-1}}$, so we halve the number of hidden states in each layer. We optimize over the hyperparameters—number of layers, hidden size, one-hot embedding size, and dropout—by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy on the test set is then used as our entropy estimate. Experiments ::: Results We know $\textsc {bert} $ can generate text in many languages; here we assess how much it actually knows about syntax in those languages, and how much more it knows than simple type-level baselines. tab:results-full presents these results, showing how much information $\textsc {bert} $, fastText and onehot embeddings encode about POS tagging. We see that—in all analysed languages—type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages. $\textsc {bert} $ presents negative gains in some of the analysed languages. Although this may seem to contradict the data-processing inequality, it is actually caused by the difficulty of approximating $\texttt {id} $ and $\mathbf {c}(\cdot )$ with a finite training set—causing $\mathrm {KL}_{q_{{\theta }1}}(T \mid R)$ to be larger than $\mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R))$. We believe this highlights the need to formalize ease of extraction, as discussed in sec:ease-extract. Finally, when put into perspective, multilingual $\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\%$ additional information. Conclusion We proposed an information-theoretic formulation of probing: we define probing as the task of estimating conditional mutual information. We introduce control functions, which allow us to put the amount of information encoded in contextual representations in the context of knowledge judged to be trivial. We further explored this formalization and showed that, given perfect probes, probing can only yield insights into the language itself and tells us nothing about the representations under investigation. Keeping this in mind, we suggested a change of focus—instead of focusing on probe size or information, we should look at ease of extraction going forward. On another note, we apply our formalization to evaluate multilingual $\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\%$ in all languages), it only encodes at most $5\%$ more information than some trivial baseline knowledge (a type-level representation).
This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings. Variational Bounds Theorem 2 The estimation error between $\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e})$ and the true gain can be upper- and lower-bounded by two distinct Kullback–Leibler divergences. We first find the error given by our estimate: $$\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e}) - \mathcal {G}(T, R, \mathbf {e}) = \mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R)) - \mathrm {KL}_{q_{{\theta }1}}(T \mid R).$$ Making use of this error, we trivially find an upper-bound on the estimation error as $$\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e}) - \mathcal {G}(T, R, \mathbf {e}) \le \mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R)),$$ which follows since KL divergences are never negative. Analogously, we find a lower-bound as $$\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e}) - \mathcal {G}(T, R, \mathbf {e}) \ge -\mathrm {KL}_{q_{{\theta }1}}(T \mid R).$$ Further Results In this section, we present accuracies for the models trained using $\textsc {bert} $, fastText and onehot embeddings, and the full results on random embeddings. tab:results-extra shows that both BERT and fastText present high accuracies in all languages, except Tamil. Onehot and random results are considerably worse, as expected, since these models could do little better than guess (e.g., predict the most frequent label from the training set) for any word that was not seen during training.
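As a rough illustration of how the pieces above fit together, the following PyTorch sketch builds the halving-width MLP probe from the Experiments section and estimates the gain as the difference of two held-out cross-entropies, following the approximation in sec:gain. The layer count, widths, and dropout rate are placeholders rather than the values selected by the paper's random search.

```python
# Sketch of the MLP probe (hidden width halved at each layer) and of the gain
# estimate: the difference between the held-out cross-entropies of two probes
# trained independently on BERT representations and on control representations.
import torch
import torch.nn as nn

def mlp_probe(in_dim, n_tags, n_layers=3, width=512, dropout=0.3):
    layers, d_in = [], in_dim
    for i in range(n_layers - 1):
        d_out = max(width // (2 ** i), n_tags)   # r_i = r / 2^(i-1), clamped for safety
        layers += [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(dropout)]
        d_in = d_out
    layers.append(nn.Linear(d_in, n_tags))       # final projection to |T| tags
    return nn.Sequential(*layers)

def held_out_cross_entropy(probe, reps, tags):
    # Empirical estimate of the cross-entropy H_q(T | .) in nats on held-out data.
    probe.eval()
    with torch.no_grad():
        return nn.functional.cross_entropy(probe(reps), tags).item()

# After training probe_bert on (bert_reps, tags) and probe_ctrl on (ctrl_reps, tags)
# with the usual cross-entropy objective, the gain estimate is:
# gain = held_out_cross_entropy(probe_ctrl, test_ctrl, test_tags) \
#        - held_out_cross_entropy(probe_bert, test_bert, test_tags)
```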
Some variability is observed, but it is not significant. BERT does not seem to gain much more syntax information than is available from type-level information.
1e59263f7aa7dd5acb53c8749f627cf68683adee
1e59263f7aa7dd5acb53c8749f627cf68683adee_0
Q: Does the work explicitly study the relationship between model complexity and linguistic structure encoding? Text: Introduction Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6. In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representation-valued random variable and a linguistic property-valued random variable. This formulation gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means. Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8, BIBREF9, BIBREF10. In this context, we also discuss the recent work of hewitt-liang-2019-designing who propose selectivity as a criterion for choosing families of probes. hewitt-liang-2019-designing define selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints. In support of our discussion, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task BIBREF6, BIBREF11, within our framework. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Tamil, and Turkish), we show that the representations from BERT, a common contextualized embedder, only account for at most $5\%$ more of the part-of-speech tag entropy than a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require the sentential context of the word. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP BIBREF12. 
Although for perfect probes the above results should hold, in practice $\texttt {id} (\cdot )$ and $\mathbf {c}(\cdot )$ may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. A random embedding control function, for example, would require an infinitely large dataset to be learned—or at least one that contained all words in the vocabulary $V$. “Better” representations should make their respective probes more easily learnable—and consequently their encoded information more accessible. We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. The advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? We suspect that many researchers who are interested in less complex probes have implicitly had this in mind. A Critique of Control Tasks While this paper builds on the work of hewitt-liang-2019-designing, and we agree with them that we should have control tasks when probing for linguistic properties, we disagree with parts of the methodology for the control task construction. We present these disagreements here. A Critique of Control Tasks ::: Structure and Randomness hewitt-liang-2019-designing introduce control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from hewitt-liang-2019-designing. They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of part-of-speech labeling, love would either be a noun or a verb in a control task, never both: this is a problem. The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look at all like natural language. What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-most-frequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words. A Critique of Control Tasks ::: What's Wrong with Memorization? hewitt-liang-2019-designing propose that probes should be optimised to maximise accuracy and selectivity. Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. 
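In effect, the optimal control-task "probe" described above is a pure memorizer. A minimal sketch, with hypothetical variable names, makes this explicit:

```python
from collections import Counter

def fit_memorizer(train_words, train_tags):
    """Zero-parameter control-task 'probe': remember one tag per word type and
    fall back to the single most frequent training tag for unseen types."""
    tag_by_type = {}
    for word, tag in zip(train_words, train_tags):
        tag_by_type.setdefault(word, tag)
    fallback = Counter(train_tags).most_common(1)[0][0]
    return lambda word: tag_by_type.get(word, fallback)

# On a control task its accuracy is 1 minus the out-of-vocabulary rate times
# the error rate of the guess-the-most-frequent-tag classifier, as noted above.
```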
Given their characterization of control tasks, maximising selectivity leads to a selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization, however memorization also plays a key role BIBREF17, BIBREF18, BIBREF19, with word learning BIBREF20 being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property. A Critique of Control Tasks ::: What Low-Selectivity Means hewitt-liang-2019-designing acknowledge that for the more complex task of dependency edge prediction, a MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, [§3.6]hewitt-liang-2019-designing point out that, in their experiments, the MLP-1 model frequently mislabels the word with suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information has the model learned. Our analysis leads us to believe that, on contrary, this shows that one should be using the best possible probe to minimize the chance of misrepresentation. Since more complex probes achieve higher accuracy on the task, as evidence by the findings of hewitt-liang-2019-designing, we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity. The same applies for the second example discussed in section [§4.2]hewitt-liang-2019-designing where a less selective probe appears to be less faithful. The authors show that the representations on ELMo's second layer fail to outperform its word type ones (layer zero) on the POS labeling task when using the MLP-1 probe. While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present; see sec:ease-extract. Experiments We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 2.4 BIBREF22. We probe the multilingual release of BERT on six typologically diverse languages: Basque, Czech, English, Finnish, Tamil, and Turkish; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank. Experiments ::: Control Functions We will consider three different control functions. Each is defined as the composition $\mathbf {c}= \mathbf {e}\circ \texttt {id} $ with a different look-up function. These look-up functions are $\mathbf {e}_\textit {fastText}$ returns a language specific fastText embedding BIBREF15; $\mathbf {e}_\textit {onehot}$ returns a one-hot embedding; $\mathbf {e}_\textit {random}$ returns a fixed random embedding. All of these functions are type level in that they remove the influence of the context on the word. 
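The contextual representations probed in these experiments (one vector per treebank word, obtained by averaging BERT's word-piece vectors) can be computed along the following lines. This is a minimal sketch using the Hugging Face transformers interface, which is an assumption: the paper does not name a toolkit, nor does it specify which layer is used, so the sketch simply takes the final hidden layer of bert-base-multilingual-cased.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; the paper only says "the multilingual release of BERT".
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")  # fast tokenizer
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def word_representations(words):
    """One vector per treebank word: run the pre-tokenized sentence through
    BERT and average the word-piece vectors aligned to each word."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (num_pieces, 768)
    word_ids = enc.word_ids()                        # None for [CLS]/[SEP]
    reps = []
    for i in range(len(words)):
        piece_rows = [j for j, w in enumerate(word_ids) if w == i]
        reps.append(hidden[piece_rows].mean(dim=0))
    return torch.stack(reps)                         # (num_words, 768)
```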
Experiments ::: Probe Architecture

As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as an $m$-layer neural network with the non-linearity $\sigma (\cdot ) = \mathrm {ReLU}(\cdot )$. The initial projection matrix is $W^{(1)} \in \mathbb {R}^{r_1 \times d}$ and the final projection matrix is $W^{(m)} \in \mathbb {R}^{|\mathcal {T}| \times r_{m-1}}$, where $r_i=\frac{r}{2^{i-1}}$. The remaining matrices are $W^{(i)} \in \mathbb {R}^{r_i \times r_{i-1}}$, so we halve the number of hidden states in each layer. We optimize over the hyperparameters (number of layers, hidden size, one-hot embedding size, and dropout) using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy on the test set is then used as our entropy estimate.

Experiments ::: Results

We know $\textsc {bert}$ can generate text in many languages; here we assess how much it actually knows about syntax in those languages, and how much more it knows than simple type-level baselines. tab:results-full presents these results, showing how much information $\textsc {bert}$, fastText and onehot embeddings encode about POS tagging. We see that, in all analysed languages, type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.

$\textsc {bert}$ presents negative gains in some of the analysed languages. Although this may seem to contradict the data-processing inequality, it is actually caused by the difficulty of approximating $\texttt {id}$ and $\mathbf {c}(\cdot )$ with a finite training set, which can cause $\mathrm {KL}_{q_{{\theta }1}}(T \mid R)$ to be larger than $\mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R))$. We believe this highlights the need to formalize ease of extraction, as discussed in sec:ease-extract.

Finally, when put into perspective, multilingual $\textsc {bert}$'s representations do not seem to encode much more information about syntax than a trivial baseline. $\textsc {bert}$ only improves upon fastText in three of the six analysed languages, and even in those it encodes at most $5\%$ additional information (in English).

Conclusion

We proposed an information-theoretic formulation of probing: we define probing as the task of estimating conditional mutual information. We introduced control functions, which allow us to place the amount of information encoded in contextual representations in the context of knowledge judged to be trivial. We further explored this formalization and showed that, given perfect probes, probing can only yield insights into the language itself and tells us nothing about the representations under investigation. Keeping this in mind, we suggested a change of focus: instead of focusing on probe size or information, we should look at ease of extraction going forward. On another note, we applied our formalization to evaluate multilingual $\textsc {bert}$'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\%$ in all languages), it only encodes at most $5\%$ more information than some trivial baseline knowledge (a type-level representation).
This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings.

Variational Bounds

Theorem 2 The estimation error between $\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e})$ and the true gain can be upper- and lower-bounded by two distinct Kullback–Leibler divergences. We first find the error given by our estimate:

$\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e}) - \mathcal {G}(T, R, \mathbf {e}) = \mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R)) - \mathrm {KL}_{q_{{\theta }1}}(T \mid R)$

Making use of this error, we trivially find an upper-bound on the estimation error,

$\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e}) - \mathcal {G}(T, R, \mathbf {e}) \le \mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R))$

which follows since KL divergences are never negative. Analogously, we find a lower-bound:

$\mathcal {G}_{q_{{\theta }}}(T, R, \mathbf {e}) - \mathcal {G}(T, R, \mathbf {e}) \ge -\mathrm {KL}_{q_{{\theta }1}}(T \mid R)$

Further Results

In this section, we present accuracies for the models trained using $\textsc {bert}$, fastText and onehot embeddings, and the full results on random embeddings. tab:results-extra shows that both BERT and fastText achieve high accuracies in all languages except Tamil. Onehot and random results are considerably worse, as expected, since these models can do no better than a default guess (e.g., the most frequent label in the training set) for any word that was not seen during training.
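For concreteness, the probe architecture described in the experiments section (an $m$-layer MLP with ReLU non-linearities whose hidden width halves at each layer) can be sketched as follows. This is a minimal PyTorch sketch, not the authors' code; the layer count, base width $r$, and dropout shown are placeholders that the paper instead selects by random search.

```python
import torch.nn as nn

def build_probe(d, num_tags, num_layers=3, r=512, dropout=0.1):
    """m-layer MLP probe: ReLU non-linearities, hidden width halved per layer,
    final projection to the tag set. Hyperparameter values are placeholders;
    the paper chooses them by random search over 50 runs."""
    widths = [d] + [r // (2 ** i) for i in range(num_layers - 1)]
    layers = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(w_in, w_out), nn.ReLU(), nn.Dropout(dropout)]
    layers.append(nn.Linear(widths[-1], num_tags))
    return nn.Sequential(*layers)

# Trained with cross-entropy loss; the held-out cross-entropy of the selected
# model then serves as the estimate of H_q(T | R) used in the bounds above.
```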
Q: Which datasets are used in this work? Text: Introduction Among the several senses that The Oxford English Dictionary, the most venerable dictionary of English, provides for the word event, are the following. Although an event may refer to anything that happens, we are usually interested in occurrences that are of some importance. We want to extract such events from textual documents. In order to extract important events or events of a specific type, it is likely that we have to identify all events in a document to start with. Consider the first paragraphs of the article on the Battle of Fredericksburg in the English Wikipedia, accessed on May 5, 2012. We have highlighted the “events" in the paragraph. The Battle of Fredericksburg was fought December 11–15, 1862, in and around Fredericksburg, Virginia, between General Robert E. Lee's Confederate Army of Northern Virginia and the Union Army of the Potomac, commanded by Maj. Gen. Ambrose E. Burnside. The Union army's futile frontal assaults on December 13 against entrenched Confederate defenders on the heights behind the city is remembered as one of the most one-sided battles of the American Civil War, with Union casualties more than twice as heavy as those suffered by the Confederates. The paragraph contains two fairly long sentences with several “events", mentioned using the following words: fought, commanded, assaults, entrenched, remembered, casualties and suffered. Some of these “events" are described in terms of verbs whereas the others are in terms of nouns. Here fought, commanded, assaults, battles definitely seem to be “events" that have durations or are durative. Entrenched seems to talk about a state, whereas it is possible that suffered talks about something punctual (i.e., takes a moment or point of time) or can be durative (i.e., takes a longer period of time) as well. The act of remembering by an individual is usually considered to happen momentarily, i.e., forgotten things come back to mind at an instant of time. But, in this paragraph it is given in passive voice and hence, it is unclear who the actor is, possibly a lot different people at different points of time. Thus, depending on who is asked, the “events” picked out may be slightly different, but the essence is that there are several events mentioned in the paragraph and the objective in event extraction is to extract as many of them as possible in an automated fashion. For example, someone may not pick out remembered as an event that took place. Some others may not want to say that entrenched is an event. In addition, if one is asked to pick an important event, responses may vary from person to person. Finally, if one is asked to summarize the paragraph, depending on the person asked the summary may vary. A summary prepared by the author of this article is given below. The Battle of Fredericksburg, fought December 11-12, 1862, was one of the most one-sided battles of the American Civil War, with heavy Union casualties. Obviously, there are many other possibilities for summarization. However, the idea is that identification of events and their participants may play a significant role in summarizing a document. This paper discusses the extraction of events and their attributes from unstructured English text. It is an survey of research in extracting event descriptions from textual documents. In addition, we discuss how the idea of event extraction can be used in application domains such as summarization of a document. 
We also discuss application of event extraction in the biomedical domain and in the context of Twitter messages. The rest of the paper is organized in the following manner. Section "Events in Linguistics and Philosophy" provides a description of research in linguistics and philosophy. The author believes that such a background, at least at a basic level, is necessary to understand and develop the approaches and algorithms for automatic computational detection and extraction of events and their participants from textual documents. Section "Extracting Events from Textual Documents" discusses approaches used in extracting events from textual documents. Most approaches these days use machine learning techniques. Events in Linguistics and Philosophy Real world events are things that take place or happen. In this section, we present an overview of how real events are represented in terms of language. In particular, we discuss classification of events and features necessary for such classification. We follow this by presenting the preferred way among philosophers to represent events in terms of logic. We bring this section to an end by presenting some of the structures ascribed to events by linguists or philosophers working at an abstract level. The reason for the inclusion of this section in the paper is to set the context for the discussions in the following sections on the practical task of extracting events. Practical systems do not usually follow linguistic niceties although they draw inspiration from linguistics or philosophy. Classifying Events There have been many attempts at classifying linguistic events. Below, we briefly discuss a few. The primary focus when linguists discuss events is on the verb present in a sentence. Nouns, adjectives and other elements present in a sentence provide arguments for the verb. Aristotle (as presented in BIBREF0 ) classified verbs that denote something happening into three classes: actuality, movement and action. An actuality represents the existence of a thing or things; this is called state by others (e.g., BIBREF1 ). An examples of actuality can be seen in the sentence Jon is ill. A movement is an incomplete process or something that takes time but doesn't have an inherent end. An example of movement is seen in the sentence Jon is running. An action is something that takes time and has an inherent end. An example of an action is present in the sentence Jon is building a house. In other words, Aristotle distinguished between states and events and then events. BIBREF2 lists verbs that belong to the three Aristotelian classes and develops membership criteria for the classes. Kenny renamed the classes as states, activities (actions without inherent end) and performances (actions with inherent ends). Kenny's membership criteria are based on semantic entailments about whether the event can be considered to have taken place when it is still in progress. For example, during any point when we say Jon is running, we can consider that the activity of running has taken place. In other words Jon is running entails Jon has run. Thus, run is an activity. In contrast, when we say Joh is taking the final, we cannot say that Jon has taken the final. In other words, the first does not entail the second. Thus, the main difference between an activity and a performance is what is called delimitation. A delimited event has a natural end. 
BIBREF3 developed a 4-way classification scheme for linguistic events and BIBREF4 developed a set of criteria for membership in the classes. The classes enumerated by Dowty are: states, activities, achievements and accomplishments. The definitions are given below. Activities: Events that take place over a certain period of time, but do not necessarily have a fixed termination point. Examples; Jon walked for an hour, and Jon is driving the car. Accomplishments: Events that happen over a certain period of time and then end. Examples: Jon built a house in a month, and Jon is taking the final. Achievements: These are events that occur instantaneously and lack continuous tenses. Examples: Jon finished the final in 45 minutes and The vase broke. States: These are non-actions that hold for a certain period of time, but lack continuous tenses. Examples: Jon knows the answer and Jon likes Mary. BIBREF5 adopts the same classification as Vendler and Dowty, but divides achievements into two classes. The first one is still called achievements, but the second one is called semelfactives. In this new scheme, achievements are instantaneous (that is, the beginning of the event is the same as its end) culminating events, but semelfactives are events with no duration that result in no change of state. An example of a semelfactive is: Jon knocked on the door. Table 1 presents the nomenclatures introduced by various linguists in one place. There are many variations of the schemes given here, although we do not discuss them in this paper. In the early work on event classification, Aristotle, Vendler and others assume that what needs to be classified is the verb. However, many have concluded that it is impossible to classify a verb into a specific class. It is more appropriate to say that a clause containing an event has a class, and the classification of such a clause depends not only upon the verb, but also on other material present in the clause BIBREF6 , BIBREF4 , BIBREF7 , BIBREF8 . In other words, the classification must be compositional or must depend on various features of the clause, not exclusively verb-based. There is also substantial evidence that sentence material other than the verb can change the overall event type. For example, addition of a direct object can change an activity to an accomplishment BIBREF1 , as in the following examples. Bill ran for five minutes/*in five minutes: activity Bill ran the mile *for 5 minutes/in 5 minutes: accomplishment Parameters of Event Classes Many authors in linguistics have delved deeper into the nature of event classes and have tried to come up with features or characteristics that can be used to identify whether something (verb or a clause) belongs to a particular event class or not. These features or characteristics are necessary to describe the structure of events in a theoretical sense. Description of event structure usually refers to the actual words used (lexical features or characteristics) and also the structure of clause or sentence (syntactic features or characteristics). Identification of such features may be described as finding parameters of event types or parameterization of event types. A lot of the work on parameterization of event types/classes use the classes espoused by Vendler. These include BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 and others. We will only briefly touch upon such work in this paper. 
Our objective is to impress upon the reader that identification of features of event classes is considered an important task by linguists. For example, BIBREF9 describes Vendler's classes with two binary features or parameters: continuousness: whether an event has duration, and boundedness: whether an event has a (natural) terminal point or endpoint. Using these two features, the four Vendler classes can be parameterized as follows. : -bounded, -continuous : -bounded, +continuous : +bounded, -continuous : +bounded, +continuous BIBREF12 , BIBREF13 introduce the notion of countability while discussing event classes. This is similar to the mass-count opposition in nouns. Terminating events can be counted, but non-terminating processes cannot. Hoeksema introduces two binary features: count and duration to obtain Vendler's classes as seen below. The feature duration refers to whether the event takes place over time. : -count, -duration : -count, +duration : +count, -duration : +count, +duration BIBREF11 refines Vendler's classes by adding a class much like Smith's semelfactives BIBREF5 . He suggests that, in addition to states, there are four event types: culmination, culminated process, point, and process. He uses two binary features or parameters: consequence identifying termination or culmination, and atomic or non-atomic (which Moens called extended). Atomic is also called momentous or pointed. Moen's classification is given below, along with the features and examples. : +consequence, +atomic (examples: recognize, win the race) : +consequence, -atomic (examples: build a house) : -consequence, +atomic (example: hiccup, tap, wink) : -consequence, -atomic (example: run, swim, play the piano) : (examples: understand, love, resemble) Moens also claims that culminated process is an event class whose members are made up of smaller atomic units. In particular, a culminated process is a process with a consequent state. This insight that events can be decomposed into sub-events was used later by others working on the lexical analysis of events e.g., BIBREF16 , BIBREF17 . Others such as BIBREF18 , BIBREF19 , BIBREF20 have claimed that arguments of verbs are related to sub-events. We summarize the various features that linguists have used to classify events in Table 2 . Of course, we do not discuss many other proposals for features in this brief discussion. Classification of events and their parameterization of verbs or predicates (or clauses) are only the first steps in developing a deeper linguistic understanding of events. In particular, in order to understand the linguistic representation of events, linguists need to go beyond classification schemes. Events in Logical Representation of Semantics Mathematical logic is used to represent the semantics of language. In particular, we use logic to represent the meaning of single sentences. Early work on events, e.g., Panini (as discussed by BIBREF21 and BIBREF22 ) stated that language encodes two kinds of information–actions and non-actions. Verbs represent actions and nouns represent non-actions or things. BIBREF23 proposes that one needs an event variable $e$ to represent events in mathematical logic. This variable $e$ is used to represent relations represented by the event denoted by the verb and other constituents in the sentence, such as modifiers. Davidson claims that logically speaking, events are like things in that they can be represented by a variable and this variable can be modified and quantified. 
A question that arises is: how many arguments should an event predicate (in logic) take BIBREF2 ? Just like nominal modifiers modify nouns, event modifiers can modify event predicates. An event predicate can take any number of modifiers just like noun (nominal) modifiers. Examples of event modifiers are: time, place, manner and instrument. Davidson proposed that an event predicate may take one or more required arguments (is this true?) and any number of adjuncts or optional modifiers. Consider the following examples from BIBREF23 . The English sentence and the corresponding logical representation or logical form is given for each example. John buttered the toast. $\exists e \; buttered (Jones, the\_toast, e)$ John buttered the toast slowly. $\exists e \; buttered (Jones, the\_toast, e) \wedge slowly (e)$ John buttered the toast slowly, in the bathroom. $\exists e \; buttered (Jones, the\_toast, e) \wedge slowly (e) \wedge in\_the\_bathroom (e)$ John buttered the toast slowly, in the bathroom, with a knife. $\exists e \; buttered (Jones, the\_toast, e) \wedge slowly (e) \wedge in\_the\_bathroom (e) \wedge with\_a\_knife (e)$ John buttered the toast slowly, in the bathroom, with a knife, at midnight. $\exists e \; buttered (Jones, the\_toast, e) \wedge slowly (e) \wedge in\_the\_bathroom (e) \wedge with\_a\_knife (e) \wedge at\_midnight (e)$ Thus we can see that Davidson's approach places the event variable $e$ in the main predicate of a clause and distributes it among the modifiers of the clause in logical representation. In writing the meaning in Davidsonian logic, the author creates predicates such as $the\_toast$ and $in\_the\_bathroom$ , just for illustration, without going into details. Davidsonian representation allows events to be represented in logic (logical semantics) without requiring verbs to have multiple arities, i.e., without taking different arguments in different situations. Because the event is represented as a variable, the event variable $e$ can be included in the representation of logical meaning of each modifier or adjunct. Another benefit is that using Davidson's representation, one can analyze events represented syntactically as nouns (nominals) or verbs BIBREF21 . For example, one can refer to an event using the verb to burn or the noun a burn. Parsons also observes that using a variable to represent an event allows quantification over events the same way quantification applies to things. The following examples are from BIBREF21 . In every burning, oxygen is consumed. $\forall e \; burning (e) \rightarrow \exists e^{\prime } (consuming (e^{\prime }) \wedge object (e, oxygen) \wedge in (e,e^{\prime })$ Agatha burned the wood. $\exists e \; burning (e) \wedge subject (e, Agatha) \wedge object (e, wood)$ Oxygen was consumed. $\exists e^{\prime } \; consuming (e^{\prime }) \wedge object (e^{\prime }, oxygen)$ We do not go into details of containment of events as expressed by $in$ in the first example above, and also the representation of passives as in the third example above. In these three examples, the author uses predicates such as object and subject which represent more fine-grained relationship with the main predicate (corresponding to the verb usually) than the examples earlier. Extending this work, BIBREF21 , BIBREF24 , BIBREF25 have demonstrated that using Davidson's $e$ variable allows one to express tense dependency between perception verbs and their infinitival compliments in a natural way. 
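Such Davidsonian forms can be written down and manipulated directly with standard logic toolkits. The sketch below uses NLTK's logic parser purely as an illustration; the choice of toolkit is an assumption, not something the works cited here rely on.

```python
from nltk.sem.logic import Expression

read_expr = Expression.fromstring

# Davidsonian form of "John buttered the toast slowly": the event variable e
# is existentially quantified and shared by the verb and its modifier.
butter = read_expr(r'exists e.(buttered(john, the_toast, e) & slowly(e))')
print(butter)

# Adding another adjunct only adds another conjunct over the same event e.
butter_more = read_expr(
    r'exists e.(buttered(john, the_toast, e) & slowly(e) & in_the_bathroom(e))')
print(butter_more)
```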
BIBREF21 extends Davidson's approach to logical representation by adding an extra term corresponding to the event type of the predicate. He distinguishes between two types of eventualities: eventualities that culminate called Cul containing achievements and accomplishments, and those that do not, called Hold containing states and activities. John buttered the toast. $\exists e \; buttering (e) \wedge agent (e, Jones) \wedge theme (e, toast) \wedge (\exists t \; (t <now \wedge Cul (e,t))$ Mary knows Fred. $\exists e \; knowing (e) \wedge experiencer (e, Mary) \wedge theme (e, Fred) \wedge Hold (e, now))$ In the logical representation in these examples, the author uses predicates such as theme, agent and experiencer which are usually are called cases in linguistics BIBREF26 . In addition, the author uses a variable $t$ to express time. $now$ is a special indexical variable. We do not give detailed discussions of these fine points here. BIBREF27 also proposes a logical form based on Davidson's approach. The main motivation behind Hobb's approach is to be able to produce satisfactory semantic representation when an event is expressed as a noun, or when we want to express the meaning of tenses, modalities, and adverbial modifiers. He also explains how so-called opaque adverbials like almost in the sentence, John is almost a man. can be represented by the Davidsonian approach, which Hobbs extends. He also shows how the ambiguity between de re and de dicto meanings of sentences BIBREF28 that discuss beliefs can be explained by his approach to logical form representation of sentences. The representation by Hobbs is quite similar to other such representations based on Davidson, although there are some fine points of differences, that we do not discuss here. From a practical point of view, several research efforts in computational linguistics have adopted Hobb's logical form, and one such recent approach is by BIBREF29 who attempt to map Stanford dependency parses BIBREF30 into Hobbsian logical form, and discover that sometimes it is possible to do so, but in other cases the mapping requires semantic information that is not present in such dependencies indentified by the Stanford parser. Event structure Early efforts at identification of event structure in linguistics was usually limited to explaining essential grammatical phenomena. However, others later proposed complex structures that go beyond simple structures such as Davidson's approach of representing an event by a single logical variable and its components by additional predicates. Understanding the structure of an event entails (i) understanding the argument structure of the word (or, phrase) used to express the event in surface form, (ii) understanding the components in the conceptual or semantic description of an event, and (iii) understanding the relation or mapping between syntactic realization of an event and its conceptual components. In fact, analysis of argument structure includes all three steps and requires finding the relation between meaning of a verb (or a clause) and the syntactic realization of arguments. BIBREF19 , BIBREF31 introduce argument structure as a distinct level of representation in linguistics. Other prominent representations proposed include f-structures BIBREF32 , linear event structures BIBREF18 , lexical conceptual structures (LCS) BIBREF33 , BIBREF34 and two related structures: event structures and qualia structures for arguments BIBREF17 . There are two sides to event structure: syntactic and semantic. 
When specifying event structure, at the semantic level, the description must be richer than semantic role descriptions BIBREF35 , BIBREF36 . BIBREF37 argues that named roles or thematic roles are too coarse-grained to provide useful semantic interpretation of a sentence. It is also necessary to capture semantic distinctions in a much more fine-grained manner compared to prior theories of BIBREF38 , BIBREF39 , BIBREF40 . ***A sentence or two on these theories*** By this time it was clear that sophisticated approaches to specifying event structure must build upon the rich taxonomy of verb classes BIBREF37 and descriptive vocabulary work BIBREF41 and BIBREF33 . BIBREF41 discusses the systematic relations in language between meaning and surface expression. In particular, Talmy focuses on verbs and in particular, verbs that describe motion or location. He sketches a “motion" event in order to explore issues in lexicalization. The basic motion event consists of one object called figure moving or located with respect to another object called the referent or the ground. The motion event has additional components such as path and motion, manner and cause. Talmy gives examples of cases where the verb at once can express, in addition to the action or motion, one or more of figure, path, manner or cause. If a sematic component such as manner or cause is expressed directly by the verb, it is called conflation of manner (or cause) into the verb. Some verbs incorporate aspect, which represents the “pattern of distribution of at ion though time." In some languages, verbs can incorporate personation as well. Personation is a specification of the person involved, e.g., self or non-self. Some verbs incorporate what is called valence, where in conceptualizing an event that involves several different entities in distinct roles, a verb is able to direct greater attention to some one of these entities that to the others, or perhaps adopt a specific perspective. Sometimes, semantic components are not incorporated into the verb, but are expressed through what Talmy calls satellites. A satellite is an immediate constituent of the verb root other than inflections, auxiliaries or nominal arguments. Talmy enumerates 35 different semantic components. In addition to the six listed above, these include main purpose, result, polarity, aspect, personation, temporal and spatial setting, gender, valence, etc. Talmy also isolates surface elements within a verb complex such as the root verb, inflections, appositions, subordinate clauses and satellites. He then examines which semantic elements are expressed by which surface elements. He finds that the relationship is mostly not one-to-one. A combination of semantic elements may be expressed by a single surface element, or a single semantic element by a combination of surface elements. In a similar manner, semantic elements of different types can be expressed by the same type of surface elements or by several different ones. Talmy's work does not enumerate lexical entries for specific verbs, but provides detailed discussion on semantic facets of meanings of a verb. The main thrust of Talmy's work is to demonstrate that semantic elements and surface elements relate to each other in specific patterns, both typological and universal. In work prior to Talmy's, most work has treated language's lexical elements as atomic givens, without involving semantic components that comprise them. 
These studies treated the properties that such whole forms can manifest, in particular, word order, grammatical relations and case roles. Talmy's cross-linguistic study determines semantic components' surface presence, site (their host constituent or grammatical relation) and combination within a site. In addition, Talmy's tracing of surface occurrence patterns extends beyond treating single semantic component at a time to treating a concurrent set of components. Lexical semantics must strive to represent at least some of the various semantic components that Talmy enumerates. In addition, it must incorporate ways of mapping from syntax to semantics or vice versa. In a very simple system, a set of detailed rules may be able to enumerate the mappings from syntax to semantics and vice versa. In a complex modern system, it is necessary that a machine learning technique will automatically acquire the mappings. This usually requires a lot of labeled examples for a machine learning program to learn such mappings. We discuss some such as efforts later in the paper. Consider the following illustrative examples. Mary walked. Mary walked to the store. Mary walked for 30 minutes. Sentence a) describes a process, which is an activity of of indefinite length, i.e., the sentence does not say how long the activity of walking took. Although b) does not give an explicit time duration for the walking event, it depicts an accomplishment and provides a logical culmination to the duration of the event of walking because the event is over when Mary reached the store. Sentence c) talks about a bounded process in which, the event of walking terminating although it does not provide an explicit termination point, but provides a bound to the time extent of the activity in terms of a duration adverbial. This example motivates the observation that the use of prepositional phrases or duration adverbials can change the (aspectual) classification of an event. To explain such phenomena better, it is beneficial to have more complex event structures or lexical analysis of event words. BIBREF18 hypothesizes that the direct object plays a role in delimitation of an event, i.e., whether it has a culmination or not. BIBREF16 , BIBREF17 builds upon such observations and hypothesizes that it is necessary to know the how an event can be broken down into sub-events. He provides the following reasons for sub-eventual analysis. Sub-eventual analysis of predicates allows verbal decomposition leading to more complex lexical semantics. Scope of adverbial modification, for some adverbials, can be explained better using event sub-structures. Semantic arguments of items within a complex event structure can be mapped onto argument structures better. Pustejovsky describes a generative lexicon in the sense that meanings are described in terms of a limited number of so-called generative devices or primitives by drawing upon Aristotle's species of opposition BIBREF42 . For example, to express the meaning of the word closed as in The door is closed, The door closed or John closed the door, one needs the concept of opposition between closed and not-closed. This essential opposition in the meaning of a lexical item is described by Pustejovsky in terms of what is called the qualia structure of the lexical item. Thus, there are three primary components to the event structure proposed by Pustejovsky. Event type: The event type of the lexical item is given in terms of the classification schemes discussed earlier. 
Rules for event composition: Since an event may be expressed by more than a single verb, the meanings of several lexical items may have to be composed to obtain a description. For example, how does PP attachment change the meaning of the central event in context? Mapping rules from event structure to argument structure: Pustejovsky describes a number of rules or principles for such mapping. These rules describe how how semantic participants are realized syntactically. Pustejovsky provides lexical meaning in terms of four separate structures. Argument structure: The behavior of a word as a function, with its arity. This provides the predicate argument structure for a word, which specifies how it maps to syntax. Event structure: It identifies a specific event type for a word or a phrase, following BIBREF3 . Qualia structure: It provides the essential attributes of an object that need to be expressed lexically. Inheritance structure: It specifies how the word is globally related to other concepts in the lexicon. In summary, Pustejovsky endows complexity to lexical entries for verbs as well as non-verbs so that semantic weight does not fall on verbs alone in the lexicon and when composing the meaning of a sentence from its constituents. Pustejovsky's approach also reduces the number of lexical entries necessary for individual verbs because the lexical entries become more general. Pustejovosky focuses on creating more muscular compositional semantics rather than decomposing a verb's meaning into a specified number of primitives. Semantic Arguments and Syntactic Positions Frequently, specific semantic arguments of a verb (also called thematic arguments) appear in characteristic syntactic positions. This has led to theories or proposals regarding mapping between the two. These theories state that specific semantic arguments belong in specific syntactic positions and that there is 1-1 relationship between semantic argument and (initial) syntactic position. Such proposals or theories include the Universal Alignment Hypothesis BIBREF43 and Uniformity of Theta Assignment Hypothesis BIBREF44 . These are supposed to be universal in that they applied across languages and across verbs. For example, agents appear in subject positions across languages and verbs. This mapping is thus universal. However, other mappings are not so universal. For example, the theme can appear in object, subject or indirect object position; and the experiencer can appear in subject or object position. A theory that explains lexicon-to-syntax mapping also needs to explain the existence of argument alterations. In other words, it should explain the possibility that the same semantic role can appear in different syntactic positions for the same verb. Usually, linguists classify verbs into a number of semantic classes (different from the ones we talked about earlier) and for each class, a set of mapping relations and a set of argument alterations are specified BIBREF45 , BIBREF46 , BIBREF47 . However, other researchers claim that such semantic classification is difficult to obtain because semantically similar verbs may behave differently across languages BIBREF48 , a given verb in a language may have multiple syntactic realizations BIBREF48 , BIBREF6 , and semantically similar verbs may allow several syntactic realizations BIBREF6 . 
Lexical Resources for Action or Event Representation The discussions on lexical representation of verbs so far have been based on efforts where a small number of examples were studied intently by linguists before making the various proposals. Starting the 1980s but more so in the 1990s, when computer scientists started to focus more on analysis of large text corpora, it became evident to some that the lexical analysis of pure linguists can be extended by knowledge gathered from such corpora. This led to development of the Comlex lexicon BIBREF49 , WordNet BIBREF50 , BIBREF51 , VerbNet BIBREF52 , FrameNet BIBREF53 , BIBREF54 , BIBREF55 , BIBREF56 and other resources. Some of these may have started without an automatic analysis of corpora, but soon corpora were used to refine and enhance the initial lexical resources. Comlex was a substantial resource whose creators spent a lot of effort in enumerating subcategorization features. WordNet is a large lexical resource or ontology, which encompasses words from all categories. WordNet includes verbs, but is not verb-specific. VerbNet, of course, is focussed on verbs alone. FrameNet is also focussed on verbs. Both VerbNet and FrameNet attempt to represent all verbs, not only those which are used to represent “events". However, the term event itself is not clearly defined and most anything that is described by a verb can be considered an event in some context or another. Comlex and Nomlex Lexicons Comlex was created at New York University as a computational lexicon providing detailed syntactic information on approximately 38,000 words BIBREF49 . Of course, not all of these were verbs or words that describe actions. The feature set Comlex provided were more detailed than commerically available dictionaries at the time such as the Oxford Advanced Learner's Dictionary (OALD) BIBREF57 and Longman's Dictionary of Contemporary Englisch (LDOCE) BIBREF58 . The initial word list was derived from OALD. The lexicon used a Lisp-like notation for dictionary entries. We see some sample entries for verbs in Comlex in Figure 1 . Comlex paid particular attention to providing detailed subcategorization or complement information for verbs, and nouns and adjectives that take complements. Comlex was influenced by prior work on lexicon such as the Brandeis Verb Lexicon BIBREF59 , the ACUILEX project BIBREF60 , the NYU Lingustic String Project BIBREF61 , the OALD and the LDOCE, and it incorporated distinctions made in these dictionaries. Comlex had 92 different subcategorization features for verbs. The features recorded differences in grammatical functions as well as constituent structure. In particular, Comlex captured four different types of control: subject control, object control, variable control and arbitrary control. It was also able to express the fact that a verb may have different control features for different complement structures, or different prepositions within the complement. Figure 2 shows a few complements used in Comlex. Here :cs is the constituent structure, :gs is the grammatical structure and :ex are examples. The authors created a initial lexicon manually and then refined it using a variety of sources, both commercial and corpus-based. The Nomlex dictionary of nominalizations was also developed at NYU BIBREF62 , BIBREF63 . It enumerated allowed complements for nominalizations, and also related nominal complements of the corresponding verbs. A nominalization is the noun form of a verb. For example, the verb promote is nominalized as nominalization. 
Similarly, the nominalizations of the verb appoint are appointment and appointee. Nomlex entries are similar in syntax to Comlex entries. Each Nomlex entry has a :nom-type feature which specifies four types of nominalizations: action (appointment, destruction) or state (knowledge), subject (teacher), object (appointee) and verb-part for those nominalizations that incorporate a verbal particle (takeover). Meyers et al. BIBREF63 presented a procedure what mapped syntactic and semantic information for an active clause containing a verb e.g., (IBM appointed Alice Smith as vice president) into a set of patterns for nominalization (IBM's appointment of Alice Smith as vice president or Alice Smith's appointment as vice president). The lexical entry for the verb appoint used in Comlex is given in Figure 1 . The lexical entry in Nomlex for the action nominalization appointment is given in Figure 3 . Levin's Verb Classes Levin's verb classes BIBREF45 explicitly provide the syntax for each class, but do not provide semantic components. The classes are based on the ability or inability of a verb to occur in pairs of syntactic frames, with the assumption that syntactic frames reflect the underlying semantics. For example, break verbs and cut verbs are similar because they can all take part in transitive and middle constructions. However, only break verbs can occur in simple intransitive constructs. Similarly, cut verbs can occur in conative constructs and break verbs cannot. The explanation given is that cut describes a sequence of actions that result in the goal of separating an object into pieces. It is possible that one can perform the actions without achieving the result (e.g., John cut at the loaf). For break, the result is a changed state where the object becomes separated into pieces. If the result is not achieved, we cannot say that the action of breaking took place. The examples below are taken from BIBREF64 . Transitive Construction: (a) John broke the window., (b) John cut the bread. Middle Construction: (a) Glass breaks easily., (b) This loaf cuts easily. Intransitive Construction: (a) The window broke., (b) *The bread cut. Conative Construction: (a) *John broke at the window., (b) John valiantly cut at the frozen loaf, but his knife was too dull to make a dent in it. Levin's original classes had some inconsistencies. For example, many verbs were listed in multiple classes, some of which had conflicting syntactic frames. BIBREF65 refined the original classification to remove some of these problems to build a more fine-grained, syntactically and semantically coherent refined class called intersective Levin classes. Levin's classes also are focussed mostly on verbs taking noun (NP) and prepositional phrase (PP) complements, and are weak on coverage of ADJP, ADVP, sentential complement, etc. VerbNet is built using these classes. Organization of verbs into such classes capture generalizations about their properties. Such classes also help create better NLP systems. Many NLP systems benefit from using the mapping from surface realization of arguments to predicate-argument structure that is available in such classes. These classes also capture abstractions (e.g., syntactic and semantic properties) and as a result, they are helpful in many operational contexts where the available corpora are small in size and thus, it is not possible to extract detailed lexical information about verbs from such small corpora. The predictive power of the classes can compensate for the lack of sufficient data. 
Lexical classes have been helpful in tasks such as subcategorization acquisition BIBREF66, BIBREF67, BIBREF68, automatic verb acquisition BIBREF69, semantic role labeling BIBREF70, and word sense disambiguation BIBREF71.

WordNet

The WordNet project BIBREF50, BIBREF51 started in the mid-1980s at Princeton University and, over time, has become the most widely used lexical resource in English, especially when one needs a lexical resource that can be used by a program. WordNet was primarily designed as a semantic network and later modified to be a lexical database. WordNet groups words into synsets (synonym sets) and contains relations among these synsets. A synset contains all the word forms that can refer to a given concept or sense. For each sense of each word, WordNet also provides a short, general definition called its gloss, and example usages. As the name hints, WordNet can be thought of as a large graph where the words and synsets are nodes. These nodes are linked by edges that represent lexical and semantic-conceptual links, which we discuss briefly below. Individual words may also be linked with antonym links. Superclass-subclass relations link entire synsets. WordNet has entries for verbs, nouns, adjectives and adverbs.

To get a better feel for what WordNet is like, let us look at the online version of WordNet at Princeton University. When we search for the word assault in the online WordNet, the results come in two parts: noun and verb, because assault can be either a verb or a noun. The results that show up for the verb are given in Figure 4. The verb senses of assault belong to three synsets. In other words, it has three senses or can refer to three different concepts. Each synset is composed of several verbs. The second of these synsets contains one sense of each of the verbs assail, set on and attack.

A verb may have four types of entries in WordNet: hypernyms, troponyms, entailments and coordinate terms. These terms are defined here. A verb $Y$ is a hypernym of the verb $X$ if the activity $X$ is a (kind of) $Y$. For example, to perceive is a hypernym of to listen. A verb $Y$ is a troponym of the verb $X$ if the activity $Y$ is doing $X$ in some manner. For example, to lisp is a troponym of to talk. A verb $Y$ is entailed by a verb $X$ if by doing $X$ one must be doing $Y$. For example, to sleep is entailed by to snore. Coordinate terms are those verbs that share a common hypernym, e.g., to lisp and to yell. If we want to see the direct troponyms of the second synset for the verb meaning of assault, we get what we see in Figure 5.

WordNet has been used in many applications. However, it is most commonly used as a computational lexicon or "ontology" of English (or another language) for word sense disambiguation, a task that assigns the most appropriate senses (i.e., synsets) to words in specific contexts. Although WordNet is large and detailed, it does not have information required by NLP applications such as predicate-argument structure. Although WordNet contains a sufficiently wide range of common words, it does not cover special domain vocabulary. It is general in nature, and therefore difficult to use if specialized vocabulary is needed. Also, WordNet senses are sometimes overly fine-grained even for human beings and, as a result, some researchers argue that it cannot achieve very high performance in the tasks where it is applied.
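To make the verb relations discussed above concrete, the following is a minimal sketch of how they can be queried with NLTK's WordNet interface; it assumes the WordNet data has already been installed (e.g., via nltk.download('wordnet')), and note that NLTK exposes troponyms of verbs through the generic hyponyms() method.

```python
# A minimal sketch of querying WordNet verb relations with NLTK
# (assumes the 'wordnet' corpus has been downloaded).
from nltk.corpus import wordnet as wn

for synset in wn.synsets('assault', pos=wn.VERB):
    print(synset.name(), '-', synset.definition())
    print('  hypernyms  :', [s.name() for s in synset.hypernyms()])
    print('  troponyms  :', [s.name() for s in synset.hyponyms()])    # verb hyponyms are troponyms
    print('  entailments:', [s.name() for s in synset.entailments()])
```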
Although WordNet is the most widely used online lexical database in NLP applications, it is also limited in its coverage of verbs. The English WordNet currently contains approximately 117,659 synsets, each synset corresponding to a sense of a word. It has 11,529 verbs that belong to 13,767 synsets. It also contains 117,798 nouns that belong to 82,115 synsets. WordNets have been developed or are being developed in a large number of languages such as Catalan, French, Spanish, Japanese, Chinese, Danish, Korean and Russian. Notable collaborative efforts include the Euro WordNet BIBREF72, BIBREF73, BIBREF74, Asian WordNet BIBREF75, BIBREF76, BIBREF77 and Indo WordNet BIBREF78 projects. The Indo WordNet focuses on 18 major languages of India. For example, as of June 2012 there were 15,000 synsets in the Assamese WordNet, 24,000 in Bengali, 16,000 in Bodo, 27,000 in Gujarati, and 31,500 in Oriya. WordNets in most other languages are not as sophisticated as the one in English.

FrameNet

FrameNet BIBREF79, BIBREF55, BIBREF80 is another substantial publicly available lexical resource that has come into existence independently. It is based on the theory of frame semantics BIBREF81, BIBREF82, BIBREF53, BIBREF54, BIBREF56, where a frame corresponds to a stereotypical scenario involving an interaction and participants, where the participants play some kind of roles. The idea is that the meanings of most words are best understood in context. FrameNet proposes a small context, called a semantic frame: a description of a type of event, relation or entity and the participants in it. A frame has a name, and this name is used to identify a semantic relation that groups together the semantic roles. Although frames mostly correspond to verbs, there are frames that can be identified by nouns and adjectives. FrameNet also has a large number of annotated sentences. Each annotated sentence exemplifies a possible syntactic realization of the semantic roles associated with a frame for a given target word. By extracting syntactic features and corresponding semantic roles from all annotated sentences in the FrameNet corpus, one can build a large set of rules that encode possible syntactic realizations of semantic frames. FrameNet aims to document the range of semantic and syntactic combinatory possibilities (valences) of each word in each of its senses, through computer-assisted annotation of example sentences and automatic tabulation of the annotation results. The FrameNet lexical database currently contains more than 10,000 lexical units (defined below), more than 6,000 of which are fully annotated, in nearly 800 hierarchically related semantic frames, exemplified in more than 170,000 annotated sentences. See the FrameNet website for the latest statistics. Like the other publicly available lexical resources, FrameNet has been used for semantic role labeling and in applications such as information extraction, machine translation, event recognition and sentiment analysis.

An example of a frame is Attack. This frame has several frame elements. The core frame elements are Assailant and Victim. There are a large number of non-core frame elements. These include Circumstances, Containing_event, Direction, Duration, Explanation, Frequency, Manner, Means, Place, Purpose, Result, Time, Weapon, etc. Each of these frame elements can be seen in one or more annotated sentences. Here is an example annotated sentence.
$$[_{Assailant}\ \text{The gang}]\ \text{ASSAULTED}\ [_{Victim}\ \text{him}]\ [_{Time}\ \text{during the drive to Rickmansworth}]\ [_{Place}\ \text{in Hertfordshire}]\ \ldots$$

The frame Attack is associated with a large number of lexical units. These include verbs and nouns. Example verbs are ambush, assail, assault, attack, bomb, bombard, charge, hit, infiltrate, invade, raid, storm and strike. Examples of nouns are airstrike, ambush, assailant, assault, attack, etc. The frame Attack inherits from a frame called Intentionally_affect. It is inherited by the frames Besieging, Counterattack, Invading and Suicide_attack.

FrameNet annotates each frame element (or its representation, actually) in at least three layers: a frame element name (e.g., Food), a grammatical function (e.g., Object) and a phrase type (e.g., NP). Only the frame elements are shown in the Web-based interface to reduce visual clutter, although all three are available in the XML downloads. FrameNet has defined more than 1000 semantic frames. These frames are linked together using frame relations, which relate more general frames to specific ones. This allows for reasoning about events and intentional actions. Because frames are semantic, they are often similar across languages. For example, frames about buying and selling involve the frame elements Buyer, Seller, Goods and Money in every language. FrameNets have been developed for languages such as Portuguese, German, Spanish, Chinese, Swedish and Japanese. At the current time, there are 1159 frames in FrameNet. There are approximately 9.6 frame elements per frame. There are 12595 lexical units, of which 5135 are nouns, 4816 are verbs and 2268 are adjectives. There are 12.1 lexical units per frame.

There have been some attempts at extending the coverage of FrameNet. One such effort is by BIBREF83, who use a broad-coverage lexical-semantic resource called PPDB to add lemmas as potential triggers for a frame and to automatically rewrite existing example sentences with these new triggers. PPDB, the Paraphrase Database, is a lexical, phrasal and syntactic paraphrase database BIBREF84. They use PPDB's lexical rules along with a 5-gram Kneser-Ney smoothed language model trained using KenLM BIBREF85 on the raw English sequence of the Annotated Gigaword corpus BIBREF86.

PropBank

PropBank BIBREF87, BIBREF88, BIBREF89 is an annotated corpus of verb propositions and their arguments. PropBank does not annotate events or states of affairs described using nouns. PropBank-style annotations usually are closer to the syntactic level, whereas FrameNet-style annotations are more semantically motivated, although, as discussed earlier, FrameNet provides layers of annotations including syntactic parses. PropBank annotates one million words of the Wall Street Journal portion of the Penn Treebank BIBREF90 with predicate-argument structure for verbs, using semantic role labels for each verb argument. Although the same tags are used across all verbs (viz., Arg0, Arg1, $\cdots$, Arg5), these tags have verb-specific meanings. PropBank requires that the use of a given argument label be consistent across different uses of a specific verb, including its syntactic alternations. Thus, Arg1 (italicized) in "John broke the window" is the same window that is annotated as the Arg1 in "The window broke", even though it is the syntactic object in one case and the syntactic subject in the other. PropBank does not guarantee that an argument label is used consistently across different verbs.
For example, Arg2 is used as a label to designate the destination of the verb "bring", but the extent of the verb "rise". Generally, the arguments are simply listed in the order of their prominence for each verb. However, PropBank tries to use Arg0 as the consistent label for the "prototypical agent" and Arg1 for the "prototypical patient", as discussed in BIBREF7. PropBank divides words into lexemes using a very coarse-grained sense disambiguation scheme. Two senses are considered distinct only if their argument labels are different. In PropBank, each word sense is called a frameset. PropBank's model of predicate-argument structure differs from dependency parsing. In dependency parsing, each phrase can be dependent only on one other phrase. But in PropBank, a single phrase can be an argument to several predicates. PropBank provides a lexicon which divides each word into coarse-grained senses or framesets, and provides example usages in a variety of contexts. For example, the "to make an attack, criticize strongly" sense of the predicate lemma (or verb) attack is given in Table 3 along with an example. PropBank tries to keep rolesets consistent across related verbs. Thus, for example, the buy roleset is similar to the purchase and sell rolesets. See Table 4, taken from BIBREF87. One can clearly see that it may be possible to merge such similar framesets together to obtain something similar to the verb roles in FrameNet's Commerce frame.

Although similar, PropBank differs from FrameNet, which we discussed earlier, in several ways. PropBank is a resource focussed on verbs, whereas FrameNet is focussed on frame semantics that generalizes descriptions across similar verbs as well as nouns and other words (e.g., adjectives), as discussed earlier. PropBank was created with the idea of serving as training data to be used with machine learning algorithms for the task of semantic role labeling. It requires all arguments to a verb to be syntactic constituents. In addition, PropBank differentiates among senses of a verb if the senses take different sets of arguments. There is a claim that, due to such differences, semantic role labeling is easier using a corpus annotated with PropBank-style annotation compared to FrameNet-style annotation.

VerbNet

VerbNet BIBREF91, BIBREF64, BIBREF92 attempts to provide a definitive resource of lexical entries for English verbs. It is compatible with WordNet, but has lexical entries with explicit syntactic and semantic information about verbs, using Levin's verb classes BIBREF45. It uses verb classes to capture generalizations and for efficient encoding of the lexicon. Its syntactic frames for verb classes are represented using a fine-grained variation of Lexicalized Tree Adjoining Grammars BIBREF93, BIBREF94, BIBREF65 augmented with semantic predicates, allowing for creating compositional meanings for more complex constituents such as phrases and clauses. VerbNet provides traditional semantic information such as thematic roles and semantic predicates, with syntactic frames and selectional restrictions. It also allows for extension of verb meaning through adjunction of particular syntactic phrases. A verb entry corresponds to a set of classes, corresponding to the different senses of the verb. For each verb sense, there is a verb class as well as specific selectional restrictions and semantic characteristics that may not be captured by class membership. VerbNet also contains references to WordNet synsets.
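As an illustration of how VerbNet and FrameNet can be consulted programmatically, here is a hedged sketch using NLTK's corpus readers; it assumes the 'verbnet' and 'framenet_v17' corpora have been downloaded, and the class identifiers and frame contents shown in the comments may differ across resource and NLTK versions.

```python
# A hedged sketch of looking up verbs in VerbNet and frames in FrameNet
# through NLTK (assumes the 'verbnet' and 'framenet_v17' corpora are installed).
from nltk.corpus import verbnet as vn
from nltk.corpus import framenet as fn

# Levin-style VerbNet classes for 'break', and the members of its first class
break_classes = vn.classids('break')          # e.g. ['break-45.1', ...]
print(break_classes)
if break_classes:
    print(vn.lemmas(break_classes[0]))        # verbs sharing the same class

# The FrameNet frame evoked by 'attack', with its frame elements and lexical units
attack = fn.frame('Attack')
print(sorted(attack.FE.keys()))               # Assailant, Victim, Weapon, ...
print(sorted(attack.lexUnit.keys()))          # 'attack.v', 'assault.n', ...
```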
Verb classes capture generalizations about verb behavior. Each verb class lists the thematic roles that the predicate-argument structure of its members allows, and provides descriptions of the syntactic frames corresponding to allowed constructs, with selectional restrictions given for each argument in each frame. Verb classes are hierarchically organized. This required some manual restructuring of Levin's classes. Each event $E$ is decomposed into a three-part structure according to BIBREF11, BIBREF95. VerbNet uses a time function for each predicate, specifying whether the predicate is true during the preparatory, culmination or consequent/result stage of an event. This structure allows VerbNet to express the semantics of classes of verbs like Change of State verbs. For example, in the case of the verb break, it is important to distinguish between the state of the object before the end of the action and the new state that results afterwards. Table 5 is an example of a simplified VerbNet entry from its website. The original VerbNet was extended using extensions proposed by BIBREF68. This resulted in the addition of a large number of new classes, and also a much more comprehensive coverage of English verbs. Table 6 provides statistics of VerbNet's coverage in its initial version, in VerbNet as described in BIBREF91, BIBREF64, BIBREF96, and in its current version as given on its official website.

The absence of any lexicon or resource that provides accurate and comprehensive predicate-argument structure (or semantic role labels) for English verbs has long been considered a critical gap standing in the way of robust natural language processors. This was shown clearly by BIBREF97, who evaluated an English-Korean machine translation system. The authors showed that among several factors contributing to the low quality of translations, the most influential was the inability to identify predicate-argument structure. Even with a grammatical parse of the source sentence and complete vocabulary coverage, the translation was frequently bad. This is because, the authors found, although the parser recognized the constituents that are verb arguments, it was unable to precisely assign the arguments to appropriate positions. This led to garbled translations. Simply preserving the proper argument position labels and not changing other things resulted in substantial improvement in acceptable translations. When using one parser, the improvement was 50%; with a second parser, the improvement was a dramatic 300%. Thus, the purpose in developing lexical resources such as FrameNet and PropBank, PropBank especially so, is to provide training data annotated with predicate-argument positions and labels. Such data can be used with machine learning techniques.

Combining FrameNet, VerbNet and WordNet

There have been attempts to integrate lexical resources to obtain more robust resources with wider coverage. We discuss one such effort here. BIBREF98 integrate FrameNet, VerbNet and WordNet, discussed earlier, into a single and richer resource with the goal of enabling robust semantic parsing. The reason for building connections among the three lexical resources is that similar syntactic patterns often introduce different semantic interpretations, and similar meanings can be realized in many different ways.
The improved resource provides three enhancements: (1) it extends the coverage of FrameNet, (2) it augments VerbNet's lexicon with frame semantics, and (3) it implements selectional restrictions using WordNet semantic classes. They use knowledge about words and concepts from WordNet, information about different situations from FrameNet, and a verb lexicon with selectional restrictions from VerbNet. They extract syntactic features and corresponding semantic roles from all annotated sentences in FrameNet to build a large set of rules that encode the possible syntactic realizations of semantic frames. They identify the VerbNet verb class that corresponds to a FrameNet frame, and this allows them to parse sentences that include verbs not covered by FrameNet. They do this by exploiting a transitivity relation via VerbNet classes: verbs that belong to the same Levin classes are likely to share the same FrameNet frame, and their frame semantics can be analyzed even if not explicitly defined in FrameNet. They use information from WordNet in several stages of the parsing process. The argument constraints encoded in VerbNet (e.g., $+animate, +concrete$) are mapped to WordNet semantic classes, to provide selectional restrictions for better frame selection and role labeling in a semantic parser. In addition, the mapping between WordNet verb entries and FrameNet lexical units allows them to extend the parser coverage, by assigning common frames to verbs that are related in meaning according to the WordNet semantic hierarchies. The authors found that their mapping algorithms produced 81.25% correct assignments of VerbNet entries to a correct FrameNet frame. They were also able to map 78.22% of VerbNet predicate-argument structures, with some syntactic features and selectional restrictions, to the corresponding FrameNet semantic roles.

OntoNotes and Other Large-scale Annotated Corpora

The OntoNotes project BIBREF99, BIBREF100, BIBREF101 has created an infrastructure for a much richer, domain-independent representation of shallow meaning for use in natural language processing tasks, including event detection and extraction, in English, Chinese and Arabic. OntoNotes annotates documents at several layers: syntax, propositions, word senses including nominalizations and eventive noun senses, named entities, ontology linking and co-reference. It has been designed to be a well-annotated large-scale corpus from which machine learning programs can learn many different aspects of meaning felicitously. OntoNotes uses Penn Treebank parses BIBREF102 and PropBank propositional structures BIBREF87, BIBREF88, BIBREF89 on top of the Penn Treebank, and uses the Omega ontology BIBREF103 for word sense disambiguation. As we know, the Penn Treebank is annotated with information from which one can extract predicate-argument structures. The developers of OntoNotes use a parser that recovers these annotations BIBREF104. The Penn Treebank also has markers for "empty" categories that represent displaced constituents. Thus, to create OntoNotes, its developers use another parser BIBREF105, BIBREF106 to extract function words. They also use a maximum entropy learner and voted perceptrons to recover empty categories. PropBank, as we know, annotates the one-million-word Wall Street Journal part of the Penn Treebank with semantic argument structures for verbs.
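To give a feel for what these PropBank annotations look like in practice, here is a minimal sketch using NLTK's PropBank reader; note that NLTK distributes only a sample of the full corpus, so the instances and counts differ from the complete PropBank release.

```python
# A minimal sketch of inspecting PropBank annotations with NLTK
# (assumes the 'propbank' and 'treebank' corpora have been downloaded;
#  NLTK ships only a sample of the full PropBank).
from nltk.corpus import propbank

inst = propbank.instances()[0]
print(inst.fileid, inst.sentnum, inst.wordnum)   # location of the predicate
print(inst.roleset)                              # frameset id, e.g. 'expose.01'
print(inst.arguments)                            # (tree pointer, 'ARG0'/'ARG1'/...) pairs

# The roleset definition lists the verb-specific meaning of each numbered argument
roleset = propbank.roleset(inst.roleset)
for role in roleset.findall('roles/role'):
    print(role.attrib['n'], role.attrib['descr'])
```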
As we have noted earlier, the creators of OntoNotes and others have discovered that WordNet's very fine-grained sense distinctions make inter-annotator agreement or good tagging performance difficult. To achieve better performance, OntoNotes uses a method BIBREF107, BIBREF108 for sense inventory creation and annotation that includes links between grouped word senses and the Omega ontology BIBREF103. OntoNotes represents sense distinctions in a hierarchical structure, like a decision tree, where coarse-grained distinctions are made at the root, with increasingly fine-grained distinctions until WordNet senses are reached at the leaves. Sets of senses under specific nodes of the tree are grouped together into single entries, along with syntactic and semantic criteria for their groupings; these are presented to annotators for improved annotation agreement, obtaining up to 90% inter-annotator agreement. OntoNotes follows a similar method for the annotation of nouns.

To allow access to additional information such as subsumption, property inheritance, predicate frames from other sources, links to instances and so on, OntoNotes also links to an ontology. This requires decomposing the hierarchical structure of OntoNotes into subtrees, which can then be inserted at the appropriate conceptual nodes in the ontology. OntoNotes represents its terms in the Omega ontology BIBREF103. Omega has been assembled by merging a variety of sources such as WordNet, Mikrokosmos BIBREF109, and a few upper ontologies such as DOLCE BIBREF110, SUMO BIBREF111, and the Penman Upper Model BIBREF112. OntoNotes also includes and cross-references verb frames from PropBank, FrameNet, WordNet and Lexical Conceptual Structures BIBREF113. OntoNotes also has coreferences. It connects coreferring instances of specific referring expressions, primarily NPs that introduce or access a discourse entity.

For the purpose of our paper, it is important to know that OntoNotes tries to annotate nouns that carry predicate structure, e.g., those whose structure is derived from their verbal form. In particular, OntoNotes annotates nominalization and eventive senses of nouns. OntoNotes applies two strict criteria for identifying a sense of a noun as a nominalization BIBREF101. First, the noun must relate transparently to a verb, and typically display a nominalizing morpheme such as -ment (govern/government) or -ion (contribute/contribution), though some zero-derived nouns are allowed, such as $kill$, the noun derived from $kill$, the verb. Second, the noun must be able to be used in a clausal noun phrase, with its core verbal arguments related by semantically empty or very "light" licensers, such as genitive markers (as in The Romans' destruction of the city...) or with the verb's usual particle or prepositional satellites (as in John's longing for fame and fortune...).

Just like nominalization senses, OntoNotes has a strict definition of eventive senses. There are two definitional criteria, (1) and (2), and a diagnostic test, (3), for determining if a noun sense is eventive.

(1) Activity causing a change of state: A noun sense is eventive when it refers to a single unbroken activity or process, occurring during a specific time period, that effects a change in the world of discourse.

(2) Reference to activity proper: The noun must refer to the actual activity or process, not merely to the result of the activity or the process.

(3) The noun patterns with eventive predicates in the "have" test: BIBREF114 describes the following heuristic lexico-syntactic diagnostic test to apply to many nouns.
The test has four parts, as discussed briefly below.

1. Create a natural-sounding sentence using the construction X had <NP>, where <NP> is a noun phrase headed by the noun in question, e.g., John had a party.

2. Check if the sentence can be used in the present progressive, as in John is having a party. If the sentence is felicitous, it adds to the evidence that the noun is eventive. If it sounds odd, it adds to the evidence that the noun is stative.

3. Check if the sentence can be used in a pseudo-cleft construction such as What John did was have a party. If it is felicitous, the noun is more likely to be eventive. If not, it is more likely to be stative.

4. Check if the sentence suggests iterative or habitual action using the simple present, such as John has a party every Friday. If so, it adds evidence that the noun is eventive. If the sentence suggests that the situation is taking place at the very moment it is uttered, it adds evidence that the noun is stative, as in John has a cold.

Extracting Events from Textual Documents

Different models of events have been used in computational linguistics work geared toward information extraction. We discuss TimeML events next, followed by ACE events. We discuss biomedical event extraction and extraction of events from Twitter in later sections.

TimeML Events

TimeML is a rich specification language for event and temporal expressions in natural language text. In the TimeML BIBREF119, BIBREF120 annotation scheme, an event is a general term for situations that happen or occur. Events can be punctual or momentary, or last for a period of time. Events in TimeML format may also include predicates describing states or circumstances in which something holds true. Only those states that participate in an opposition structure, as discussed earlier, are annotated. In general, an event can be expressed in terms of verbs, nominalizations, adjectives, predicative clauses, or prepositional phrases.

TimeML allows an event, annotated with the EVENT tag, to be one of seven types: reporting, i-action, i-state, aspectual, perception, occurrence and state. The first five are special cases. The last two, occurrence and state, are used for general cases that do not fall into the special ones.

Reporting: A reporting event describes an action declaring something, narrating an event, informing about a situation, and so on. Some verbs which express this kind of event are say, report, tell, explain, and state. An example sentence with the verb say is Punongbayan said that the 4,795-foot-high volcano was spewing gases up to 1,800 degrees.

I-Action: I stands for intensional. According to the TimeML annotation guidelines, an i-action is a dynamic event that takes an event-denoting argument, which must be explicitly present in the text. Examples of verbs that are used to express i-actions include attempt, try, promise and offer. An example sentence with the verb try is Companies such as Microsoft or a combined WorldCom MCI are trying to monopolize Internet access.

I-State: I-State stands for intensional state.
Like an I-Action, an I-State event takes an argument that expresses an event. Unlike an I-Action, the I-State class is used for events which are states. An example sentence that uses the verb believe is We believe that his words cannot distract the world from the facts of Iraqi aggression. Other verbs used to express i-states include intend, want, and think.

Aspectual: An aspectual predicate takes an event as an argument, and points to a part of the temporal structure of the event. Such a part may be the beginning, the middle or the end of an event. Verbs such as begin, finish and continue are such aspectual predicates. An example sentence with the verb begin is All non-essential personnel should begin evacuating the sprawling base.

Perception: This class includes events involving the physical perception of another event. Such events are typically expressed by verbs such as see, watch, glimpse, hear, listen, and overhear. An example sentence with the verb see is Witnesses tell Birmingham police they saw a man running.

Occurrence: An occurrence is a general event that occurs or happens in the world. An example of an occurrence is given in the following sentence: The Defense Ministry said 16 planes have landed so far with protective equipment against biological and chemical warfare. The occurrence has been highlighted in bold.

State: A state describes circumstances in which something obtains or holds true. An example sentence that shows two states is It is the US economic and political embargo which has kept Cuba in a box.

TimeML allows one to mark up temporal expressions using the TIMEX3 tag. Temporal expressions are of three types: (a) fully specified temporal expressions such as June 11, 2013, (b) underspecified temporal expressions such as Monday, and (c) durations such as three days. TimeML uses the SIGNAL tag to annotate sections of text, usually function words, that indicate how temporal objects are related to each other. The material marked by SIGNAL may contain different types of linguistic elements: indicators of temporal relations such as the prepositions on and during, other temporal connectives such as when, etc. The TIMEX3 and SIGNAL tags were introduced by BIBREF122, BIBREF123.

A major innovation of TimeML is the LINK tags, which encode relations between the temporal elements of a document and also help establish ordering between the events in a document. There are three types of links: TLINK, showing temporal relationships between events, or between an event and a time; SLINK, or subordination link, to show a context that introduces relations between two events, or an event and a signal; and ALINK, or aspectual link, to show the relationship between an aspectual event and its argument event. TLINK allows for 13 temporal relations introduced by BIBREF124, BIBREF125. SLINK is used to express contexts such as the use of modal verbs, negatives, positive and negative evidential relations, factives, which require the event argument to be true, and counterfactives, which require the event argument to be false. ALINK expresses initiation, culmination, termination or continuation relationships between an event and its argument event. Finally, TimeML is able to express three types of causal relations: an event causing an event, an entity causing an event, and the special situation where the discourse marker and is used as a signal to introduce a TLINK indicating that one event happened before another, as in He kicked the ball and it rose into the air.
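To ground the tag inventory just described, the fragment below shows a constructed TimeML-style annotation of one of the example sentences used above, and how the EVENT, TIMEX3 and SIGNAL tags can be read back with Python's standard XML tooling; the attribute values here are illustrative rather than taken from TimeBank.

```python
# A constructed TimeML-style fragment (not from TimeBank) and a minimal sketch
# of reading its EVENT, TIMEX3 and SIGNAL annotations with the standard library.
import xml.etree.ElementTree as ET

fragment = """<s>The Defense Ministry <EVENT eid="e1" class="REPORTING">said</EVENT>
16 planes have <EVENT eid="e2" class="OCCURRENCE">landed</EVENT>
<SIGNAL sid="s1">so far</SIGNAL> with protective equipment, as of
<TIMEX3 tid="t1" type="DATE" value="1998-02-27">February 27, 1998</TIMEX3>.</s>"""

root = ET.fromstring(fragment)
for ev in root.iter('EVENT'):
    print(ev.get('eid'), ev.get('class'), ev.text)     # e1 REPORTING said ...
for tx in root.iter('TIMEX3'):
    print(tx.get('tid'), tx.get('type'), tx.get('value'), tx.text)
for sg in root.iter('SIGNAL'):
    print(sg.get('sid'), sg.text)
```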
The creators of TimeML have spent significant effort to develop a fairly large corpus annotated with TimeML tags. This corpus is called the TIMEBANK corpus BIBREF118 and has 300 annotated articles. This corpus has been used to learn to extract events and temporal relations among events.

ACE Events

In the ACE model, only "interesting" events are annotated in corpora and thus extracted by a trained program. ACE annotators specify the event types they want to be extracted. For example, in one information extraction contest, an ACE 2005 event was of 8 types, each with one or more sub-types. The types are given below.

Life: Be-born, Marry, Divorce, Injure and Die

Movement: Transport

Transaction: Transfer-ownership and Transfer-money

Business: Start-organization, Merge-organization and Declare-bankruptcy

Contact: Meet and Phone-write

Conflict: Attack and Demonstrate

Personnel: Start-position, End-position, Nominate and Elect

Justice: Arrest-jail, Release-parole, Trial-hearing, Charge-indict, Sue, Convict, Sentence, Fine, Execute, Extradite, Acquit, Appeal and Pardon

Each event also has four categorial attributes. The attributes and their values are given below.

Modality: Asserted and Other, where Other includes, but is not limited to, believed events; hypothetical events; commanded and requested events; threatened, proposed and discussed events; and promised events.

Polarity: Positive and Negative.

Genericity: Specific and Generic.

Tense: Past, Present, Future and Unspecified.

ACE events have arguments. Each event type has a set of possible argument roles, which may be filled by entities, time expressions or other values. There are a total of 35 role types, although no single event can have all 35 roles. A complete description of which roles go with which event type can be found in the annotation guidelines for ACE 2005 events. In an ACE event, time is noted when it is explicitly given.

Others have defined events or event profiles themselves to suit their purpose. For example, Cybulska and Vossen BIBREF126, BIBREF127 describe a historical information extraction system where they extract event and participant information from Dutch historical archives. They extract information using what they call profiles. For example, they have developed 402 profiles for event extraction, although they use only 22 of them in the reported system. For extraction of participants, they use 314 profiles. They also use 43 temporal profiles and 23 location profiles to extract temporal and locational information. Profiles are created using semantic and syntactic information as well as information gleaned from WordNet BIBREF50.

The ACE annotation scheme, discussed earlier, was developed by NIST in 1999, and the ERE (Entities, Relations and Events) scheme was defined as a simpler version of ACE BIBREF128. One of ERE's goals is also to make annotating easier and annotations more consistent across annotators. ERE attempts to achieve these goals by removing the most problematic annotations in ACE and consolidating others. We will now discuss the three types of annotations: entities, relations and events. For example, consider entities. ACE and ERE both have Person, Organization, Geo-Political Entity and Location as types of entities. ACE has two additional types, Weapon and Vehicle, which ERE does not have. ERE does not distinguish between the Facility and Location types and merges them into Location.
ERE has a type called Title for titles, honorifics, roles and professions. ACE has subtypes for entity mentions, which ERE does not. In addition to subtypes, ACE classifies entity mentions into classes (e.g., Specific, Generic and Underspecified), whereas ERE has only Specific. ACE and ERE also have differences in how extents and heads are marked, and in the levels of entity mentions.

The purpose of relation annotation in both ACE and ERE is to extract a representation of the meaning of the text, not necessarily tied to the underlying syntactic or lexical representation. Both schemes include Physical, Part-Whole, Affiliation and Social relations, although the details are a bit different. Both tag relations inside a single sentence and tag only explicit mentions. Nesting of tags is not allowed. Each relation can have up to two ordered argument slots. Neither model tags negative relations. However, ERE annotates only asserted ("real") events, whereas ACE allows others as well, e.g., believed events, hypothetical events, desired events and requested events. There is no explicit trigger word in ACE, which annotates the full clause that serves as the trigger for a relation, whereas ERE attempts to minimize the annotated span by allowing for the tagging of an optional trigger word or phrase. ACE justifies the tagging of each relation by assigning syntactic classes to them, such as Possessive, PreMod and Coordination. The three types of relations in ERE and ACE (Physical, Part-Whole, and Social and Affiliation) have sub-types, but ERE collapses ACE types and sub-types to make them more concise, and possibly less specific. BIBREF128 discuss the similarities and differences between ACE and ERE in detail.

Events in both ACE and ERE are defined as `specific occurrences' involving `specific participants'. As with entities and relations, events in ERE are less specific and simplified compared to ACE. Both annotation schemes annotate the same event types: Life, Movement, Transaction, Business, Conflict, Contact, Personnel, and Justice.

BIBREF129 use another annotation scheme called Richer Event Description (RED), synthesizing co-reference BIBREF130, BIBREF131 and THYME-TimeML temporal relations BIBREF132. BIBREF129 discuss challenges in annotating documents with the RED schema, in particular cause-effect relations. The usual way to annotate cause-effect relations is using the counterfactual definition of causation in philosophy BIBREF133, BIBREF134: "X causes Y" means that if X had not occurred, Y would not have happened. However, BIBREF129 found that this definition leads to many difficult and sometimes erroneous annotations, and that is why, while performing RED annotations, they used another definition BIBREF135, BIBREF136, which treats causation as "a local relation depending on intrinsic properties of the events and what goes on between them, and nothing else". In particular, the definition is: "X causes Y" means Y was inevitable given X. In fact, in the annotations performed by BIBREF129, they use the new definition to make judgements, but use the old definition as a precondition to the new one.

The Knowledge Base Population track (TAC-KBP) was started by NIST in 2009 to evaluate knowledge bases (KBs) created from the output of information extraction systems. The primary tasks are (a) entity linking, i.e., linking extracted entities to entities in knowledge bases, and (b) slot filling, i.e., adding information to entity profiles, information that is missing from the knowledge base BIBREF137.
Wikipedia articles have been used as reference knowledge bases in evaluating TAC-KBP tasks. For example, given an entity, the goal is to identify individual nuggets of information using a fixed list of inventory relations and attributes. For example, given a celebrity name, the task is to identify attributes such as schools attended, occupations, important jobs held, names of immediate family members, etc., and then insert them into the knowledge base. Many people compare slot filling to answering a fixed set of questions, obtaining the answers and filling in the appropriate slots in the knowledge base. Slot filling in TAC-KBP differs from extraction in the ACE and ERE notations in several ways: TAC-KBP seeks out information for named entities only, chiefly PERs and ORGs; it seeks to obtain values for slots and not mentions; events are handled as uncorrelated slots; and assessment is like in question answering.

Our focus in this paper has been on extracting events, and we know that to extract events properly, we need to explicitly extract event mentions, and also extract associated attributes such as agents, locations, time of occurrence, duration, etc. Rather than explicitly modeling events, TAC-KBP does so implicitly, as it captures various relations associated with, for example, the agent of the event. For example, given the sentence "Jobs is the founder and CEO of Apple", TAC-KBP may pick "Apple" as the focal entity and identify "Jobs" as the filler of its founder slot, and "Jobs" as the filler of its CEO slot. However, an ACE or ERE annotation program will ideally pick the event as Founding, with "Jobs" as an argument (say the first argument or arg1, or the Actor) of the event, and "Apple" as another argument, say arg2.

Extracting Events

Many event extraction systems have been built over the years. A big motivator for the development of event extraction systems seems to be the various contests that are held every few years, although there has been a considerable amount of non-contest related research as well. Although we discuss extraction of events represented in various formats, the methods are not really different from each other. That is why we discuss TimeML events in more detail and present the others briefly in this section.

We describe a few of the approaches that have been used for extracting TimeML type events. Quite a few papers that attempt to do so have been published BIBREF121, BIBREF138, BIBREF139, BIBREF140, BIBREF141, and we pick just a few representative papers.

BIBREF121 implemented an event and event feature extraction system called Evita and showed that a linguistically motivated rule-based system, with some help from statistical disambiguation, performed well on this task. Evita is claimed to be a unique tool within the TimeML framework in that it is very general, being not based on any pre-established list of event patterns and being domain-independent. Evita can also identify, based on linguistic cues, grammatical information associated with event-referring expressions, such as tense, aspect, polarity and modality, as stated in the TimeML specification. Evita does not directly identify event participants, but can work with named entity taggers to link arguments to events. Evita breaks down the event recognition problem into a number of sub-tasks.
Evita preprocesses the input text using the Alembic Workbench POS tagger, a lemmatizer to find lexical stems, and a chunker to obtain verbal, nominal and adjectival phrase chunks, the three that are commonly used as event-referring expressions BIBREF142. For each subtask after pre-processing, it combines linguistic and statistically based knowledge. Linguistic knowledge is used in local and limited contexts such as verb phrases and to extract morphological information. Statistical knowledge is used to disambiguate nominal events. The sub-tasks in event recognition in Evita are: determination of event candidates and then the events, identification of grammatical features of events, and additional clustering of event chunks for event detection and grammatical feature identification in some situations.

For event identification, Evita looks at the lexical items tagged by the preprocessing step. It uses different strategies for identifying events in the three categories: verbs, nouns and adjectives. For identifying events in a verbal chunk, Evita performs lexical look-up and limited contextual parsing in order to exclude weak stative predicates such as be and generics such as verbs with bare plural subjects. Identifying events expressed by nouns involves a phase of lexical look-up and disambiguation using WordNet, and mapping events in SemCor and TimeBank 1.2 to WordNet synsets. Evita consults 25 subtrees from WordNet where all the synsets denote events. One of these, the largest, is the tree underneath the synset that contains the word event. If the result of this lexical look-up is not conclusive (i.e., if a nominal occurs as both event and non-event in WordNet), a disambiguation step is applied, based on rules learned by a Bayesian classifier trained on SemCor. To identify events from adjectives, Evita uses a conservative approach, where it tags only those adjectives that were annotated as such in TimeBank 1.2, when such adjectives occur as the head of a predicative complement.

To identify grammatical features (e.g., tense, aspect, modality, polarity and non-finite morphology) of events, Evita uses different procedures based on the part of speech of the event-denoting expression. But in general it involves using morphology, pattern matching, and applying a large number (e.g., 140 for verbal chunks) of simple linguistic rules. However, to identify the event class, it performs lexical look-up and word sense disambiguation. Clustering is used to identify chunks from the preprocessing stage that contribute information about the same event, e.g., in the presence of modal auxiliaries or copular verbs. Clustering is activated by specific triggers such as the presence of a chunk headed by an auxiliary verb or a copular verb. Evaluation of Evita was performed by comparing its performance against TimeBank 1.2. The reported performance was that Evita had 74.03% precision, 87.31% recall and an F-measure of 80.12% in event detection. Accuracy for polarity, aspect and modality was over 97% in each case.

Bethard and Martin BIBREF138 use TimeBank-annotated events and identify which words and phrases are events. They consider event identification as a classification task that works on word chunks. They use the BIO formulation that augments each class label with whether the word is the Beginning, Inside or Outside of a chunk BIBREF143. They use a number of features, categorized into various classes, for machine learning.
These include affix features (e.g., three or four characters from the beginning and end of each word), morphological features (e.g., the base form of the word, and the base form of any verb associated with the word if the word is, for example, a noun or gerund), word-class features (e.g., POS tags, and which noun or verb cluster a word belongs to, where the clusters are obtained using co-occurrence statistics in the manner of BIBREF144), governing features (e.g., governing light verb, and determiner type, for example cardinal or genitive), and temporal features (e.g., a BIO label indicating whether the word is contained inside a TIMEX2 temporal annotation, and a governing temporal preposition like since, till, before, etc.). They also use negation features and WordNet hypernyms as features. For classification, they use the TinySVM implementation of SVM by BIBREF145. They perform experiments with TimeBank documents using a 90% stratified sample for training and 10% for testing. They obtained 82% precision and 71% recall, with an F-measure of 0.759. They did compare their algorithm with a version of Evita they programmed themselves; this system obtained 0.727 F-measure, and thus Bethard and Martin's approach performed about 4% better. When Bethard and Martin's system was extended to identifying the semantic class of an event, it did not perform as well, obtaining precision of 67%, recall of 51%, and F-measure of 0.317. However, the system was much better at identifying the classes of verbs, with an F-measure of 0.707, compared to finding the classes of nouns, with an F-measure of only 0.337.

TIPSem (Temporal Information Processing based on Semantic information) is a system that participated in the TempEval-2 competition BIBREF146 in 2010, which presented several tasks to participants, although we are primarily interested in the event extraction task. TIPSem achieved the best F1 score in all the tasks in TempEval-2 for Spanish, and for English it obtained the best F1 metric in the task of extracting events, which required the recognition and classification of events as defined by the TimeML EVENT tag. TIPSem learns Conditional Random Field (CRF) models using features for different language analysis levels, although the approach focuses on semantic information, primarily semantic roles and semantic networks. Conditional Random Fields are a popular and efficient machine learning technique for supervised sequence labeling BIBREF147. The features used for training the CRF models are similar to those used by others such as Bethard and Martin, although details vary. However, they add semantic role labels to the mix of features. In particular, they identify roles for each governing verb. Semantic role labeling BIBREF148, BIBREF149, BIBREF150 identifies, for each predicate in a sentence, its semantic roles, determining its arguments (agent, patient, etc.) and its adjuncts (locative, temporal, etc.). The previous two features were combined in TIPSem to capture the relation between them. The authors think this combination introduces additional information by distinguishing roles that are dependent on different verbs. The importance of this falls especially on the numbered roles (A0, A1, etc.), which mean different things when they depend on different verbs. The test corpus consists of 17K words for English and 10K words for Spanish, provided by the organizers of TempEval-2.
For English, they obtained precision of 0.81, recall of 0.86 and F-measure of 0.83 for recognition, with event classification accuracy of 0.79; for Spanish the numbers were 0.90, 0.86 and 0.88 for recognition, and 0.66 for classification accuracy. We provide these numbers although we know that it is difficult to compare one system with another, for example Bethard and Martin's system with TIPSem, since the corpora used are different.

As in TempEval-2, TempEval-3 BIBREF151 participants took part in a task where they had to determine the extent of the events in a text as defined by the TimeML EVENT tag. In addition, systems may determine the values of the features CLASS, TENSE, ASPECT, POLARITY and MODALITY, and also identify whether the event is a main event or not. The main attribute to annotate is CLASS. The TempEval-3 dataset was mostly automatically generated, using a temporal merging system. The half-million token text corpus from English Gigaword was automatically annotated using TIPSem, TIPSem-B BIBREF140 and TRIOS BIBREF152. These systems were re-trained on the TimeBank and AQUAINT corpora, using the TimeML temporal relation set. The outputs of these three state-of-the-art systems were merged using a merging algorithm BIBREF151. The dataset used comprised about 500K tokens of "silver" standard data and about 100K tokens of "gold" standard data for training, compared to the roughly 50K token corpus used in TempEval-1 and 2.

There were seven participants, and all the participants except one used machine learning approaches. The top performing system was ATT-1 BIBREF153, with precision 81.44, recall 80.67 and F1 of 81.05 for event recognition, and 71.88 for event classification. Close behind was the ATT-2 system BIBREF153, with precision, recall and F1 of 81.02, 80.81 and 80.92 for event recognition, respectively, and 71.10 for event classification. Both systems used MaxEnt classifiers. Obviously, different sets of features impact the performance of event recognition and classification BIBREF154, BIBREF155, BIBREF156. In particular, BIBREF157 also examined performance based on different sizes of n-grams on a small scale (n=1,3). Inspired by such work, in building the ATT systems, the creators intended to systematically investigate the performance of various models, and for each task they trained twelve models exploring these two dimensions, three of which were submitted for TempEval-3; these three performed among the top ten in the TempEval-3 competition. The ATT-1 models include lexical, syntactic and semantic features, the ATT-2 models include only lexical and syntactic features, and the ATT-3 models include only lexical features, i.e., words. They experimented with context windows of 0, 1, 3, and 7 words preceding and following the token to be labeled. For each window size, they trained ATT-1, ATT-2 and ATT-3 models. The ATT-1 models had 18 basic features per token in the context window for up to 15 tokens, so up to 270 basic features for each token to be labeled. The ATT-2 models had 16 basic features per token in the context window, so up to 240 basic features for each token to be labeled. The ATT-3 models had just 1 basic feature per token in the context window, so up to 15 basic features for each token to be labeled. For event extraction and classification, and event feature classification, they used the efficient binary MaxEnt classifier for multi-class classification, available in the machine learning toolkit LLAMA BIBREF158.
They also used LLAMA's pre-processor to build unigram, bigram and trigram extended features from the basic features. For event and time expression extraction, they trained BIO classifiers. It was found that the absence of semantic features causes only small changes in F1. The absence of syntactic features causes F1 to drop slightly (less than 2.5% for all but the smallest window size), with recall decreasing while precision improves somewhat. F1 is also impacted minimally by the absence of semantic features, and by about 2-5% by the absence of syntactic features, for all but the smallest window size. It was surprising that the ATT-3 models, which use words only, performed well, especially in terms of precision (precision, recall and F1 of 81.95, 75.57 and 78.63 for event recognition, and 69.55 F1 for event classification). It is also surprising that the words-only models with window sizes of 3 and 7 performed as well as the models with a window size of 15. These results are promising for "big data" text analytics, where there may not be time to do heavy preprocessing of input text or to train large models.

We have already discussed several approaches to extraction of events represented in the TimeML representation. Extracting events that use other representations is not very different, but various representations have existed and still exist, and therefore we briefly present some such attempts. Some of these predate the time TimeML became popular. For example, the various Message Understanding Conferences (MUCs; seven were organized by DARPA from 1987 to 1997) asked participants to extract a small number of relations and events. For instance, MUC-7, the last one, called for the extraction of 3 relations (person-employer, maker-product, and organization-location) and 1 event type, spacecraft launches. The MUC-7 and ACE events did not attempt to cover all events, but a limited number of pre-specified event types or classes that participants needed to detect during a contest period, based on which the contestants submitted papers for publication. The number and the type of arguments covered are also limited and are pre-specified before the competitions start.

BIBREF159 discuss a relation and event extraction system called REES covering areas such as political, financial, business, military, and life-related topics. The system consists of tagging modules, a co-reference resolution module, and a template generation module. They store the events generated in the MUC-7 BIBREF160 format, which is not unlike the ACE format. Events are extracted along with their event participants, e.g., who did what to whom, when and where. For example, for a BUYING event, REES extracts the buyer, the artifact, the seller, and the time and location of the BUYING event. REES covers 61 types of events. There are 39 types of relations. The tagging component consists of three modules: NameTagger, NPTagger and EventTagger. Each module relies on the same pattern-based extraction engine, but uses different sets of patterns. The NameTagger recognizes names of people, organizations, places, and artifacts (only vehicles in the implemented system). The NPTagger then takes the output of the NameTagger and first recognizes non-recursive Base Noun Phrases (BNPs) BIBREF143, and then complex NPs for only the four main semantic types of NPs, i.e., Person, Organization, Location, and Artifact (vehicle, drug and weapon). The EventTagger recognizes events by applying its lexicon-driven, syntactically-based generic patterns.
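The following is a toy sketch of the general lexicon-driven idea behind a component like the EventTagger; it is not REES itself, and the tiny lexicon, role inventory and output format are invented purely for illustration.

```python
# A toy sketch of lexicon-driven event tagging in the spirit of an EventTagger;
# the two-entry lexicon and the output format are invented for illustration.
EVENT_LEXICON = {
    'attack': {'event_type': 'ATTACK', 'roles': ['attacker', 'target']},
    'buy':    {'event_type': 'BUYING', 'roles': ['buyer', 'artifact', 'seller']},
}

def tag_events(tokens):
    """Return (token index, token, event type) triples for lexicon matches."""
    hits = []
    for i, token in enumerate(tokens):
        entry = EVENT_LEXICON.get(token.lower())
        if entry is not None:
            hits.append((i, token, entry['event_type']))
    return hits

print(tag_events("Rebels attack the northern town".split()))
# [(1, 'attack', 'ATTACK')]
```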
REES uses a declarative, lexicon-driven approach. This approach requires a lexicon entry for each event-denoting word, which is generally a verb. The lexicon entry specifies the syntactic and semantic restrictions on the verb's arguments. After the tagging phase, REES sends the output through a rule-based co-reference resolution module that resolves definite noun phrases of Organization, Person, and Location types, and singular personal pronouns. REES outputs the extracted information in the form of either MUC-style templates or XML. One of the challenges of event extraction is to be able to recognize and merge those event descriptions which refer to the same event. The Template Generation module uses a set of declarative, customizable rules to merge co-referring events into a single event. The system was evaluated in terms of recall, precision, and F-measure on a training set (200 texts) and a blind set (208 texts) from about a dozen news sources. On the so-called training set, the system achieved an F-measure of 64.75 for event extraction and 75.35 for relation extraction. The blind set F-measure for 31 types of relations was 73.95.

As seen earlier in the subsection on ACE events, the way ACE events are specified, they have a lot of details that need to be extracted. BIBREF161 follows several steps to extract events and uses machine learning algorithms at every step. The steps are pre-processing of text data, identifying anchors, assigning event types, extracting arguments, identifying attributes of events such as modality, polarity, genericity and tense, and finally identifying event co-referents of the same individuated event. In other words, Ahn attempts to cover all the steps sequentially, making the simplifying assumption that they are unrelated to each other.

A single place in a textual document which may be considered the primary place of reference or discussion about an event is called the event anchor. Ahn treats finding the anchor for an event within a document as a word classification task, using a two-stage classification process. He uses a binary classifier to classify a word as being an event anchor or not. He then classifies those identified as event anchors into one of the event classes. Ahn used one classifier for binary classification and then another classifier to classify only the positive instances.

Ahn treats identifying event arguments as a pair classification task. Each event mention is paired with each of the entity, time and value mentions occurring in the same sentence to form a single classification instance. There were 35 role types in the ACE 2006 task, but no event type allows arguments of all types. Each event type had its own set of allowable roles. The classification experiment run was a multi-class classification where a separate multi-class classifier was used for each event type. Ahn trains a separate classifier for each attribute. Genericity, modality, and polarity are each binary classification tasks, while tense is a multi-class task. For event coreference, Ahn follows the approach given in BIBREF162. Each event mention in a document is paired with every other event mention, and a classifier assigns to each pair of mentions the probability that the paired mentions corefer. These probabilities are used in a left-to-right entity linking algorithm in which each mention is compared with all already-established events (i.e., event mention clusters) to determine whether it should be added to an existing event or start a new one.
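A minimal sketch of this left-to-right linking step is given below; it assumes a pairwise coreference scorer (such as the trained classifier just described) is available as a function, and the threshold and the max-over-cluster scoring policy are illustrative choices rather than Ahn's exact settings.

```python
# A minimal sketch of left-to-right event-mention linking: each mention is
# merged into the best-scoring existing cluster or starts a new one.
# `pair_prob(m1, m2)` stands in for a trained pairwise coreference classifier.
def link_mentions(mentions, pair_prob, threshold=0.5):
    clusters = []                            # each cluster is one individuated event
    for mention in mentions:                 # mentions in document order
        best_cluster, best_score = None, threshold
        for cluster in clusters:
            score = max(pair_prob(prev, mention) for prev in cluster)
            if score > best_score:
                best_cluster, best_score = cluster, score
        if best_cluster is not None:
            best_cluster.append(mention)     # corefers with an existing event
        else:
            clusters.append([mention])       # starts a new event
    return clusters
```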
Ahn experimented with various combinations of a maximum entropy classifier MegaM BIBREF163 and a memory-based nearest neighbor classifier called TIMBL BIBREF164 , for the various tasks. The ACE specification provided a way to measure the performance of an event extraction system. The evaluation called ACE value is obtained by scoring each of the component tasks individually and then obtaining a normalized summary value. Overall, using the best learned classifiers for the various subtasks, they achieve an ACE value score of 22.3%, where the maximum score is 100%. The value is low, but other systems at the time had comparable performance. BIBREF165 describe an approach to classify sentences in a document as specifying one or more events from a certain ACE 2006 class. They classify each sentence in a document as containing an instance of a certain type or not. Unlike BIBREF161 , they are not interested in identifying arguments or any additional processing. Also, unlike Ahn who classifies each word as possibly being an event anchor for a specific type of ACE event, Naughton et al. perform a classification of each sentence in a document as being an on-event sentence or an off-event sentence. An on-event sentence is a sentence that contains one or more instances of the target event type. An off-event sentence is a sentence that does not contain any instances of the target event type. They use several approaches to classify a sentence as on-event or off-event. These include the following: SVM-based machine learning BIBREF166 , language modeling approaches using count smoothing, and a manual approach which looks for Wordnet synonyms or hypernyms of certain trigger words in a sentence. Naughton et al. found that 1) use of a large number of features to start but then reduction of these features using information gain, and 2) use of SVM produces the best results although all versions of SVM (i.e., with all features with no reduction, just the terms without complex features, or a selection of terms and other features) all work very well. A “surprising" finding was that the “manual" trigger-based classification approach worked almost as well as the SVM based approaches. Determining Event Coreference When an event is mentioned in several places within a document, finding which references are to the same event is called determining event coreference. These are co-referents to the event. Determining when two event mentions in text talk about the same event or co-refer is a difficult problem. As BIBREF167 point out that the events may be actual occurrences or hypothetical events. BIBREF162 present a statistical language-independent framework for identifying and tracking named, nominal and pronominal references to entities within unrestricted text documents, and chaining them into groups corresponding to each logical entity present in the text. The model can use arbitrary feature types, integrating a variety of lexical, syntactic and semantic features. The mention detection model also uses feature streams derived from different named entity classifiers. For mention detection, the approach used is based on a log-linear Maximum Entropy classifier BIBREF168 and a linear Robust Risk Minimization classifier BIBREF169 . Then they use a MaxEnt model for predicting whether a mention should or should not be linked to an existing entity, and to build entity chains. Both classifiers can integrate arbitrary types of information and are converted into suitable for sequence classification for both tasks. 
For entity tracking, the process works from left to right. It starts with an initial entity consisting of the first mention of a document, and the next mention is processed by either linking it with one of the existing entities, or starting a new entity. Atomic features used by the entity linking algorithm include string match, context, mention count, distance between the two mentions in words and sentences, editing distance, properties of pronouns such gender, number and reflexiveness. The best combination of features was able to obtain slightly more than 73% F-1 value using both RRM and MaxEnt algorithms for mention detection. Entity tracking was evaluated in terms of what is called the ACE value. A gauge of the performance of an EDT system is the ACE value, a measure developed especially for this purpose. It estimates the normalized weighted cost of detection of specific-only entities in terms of misses, false alarms and substitution errors. Florian et al. achieved an ACE value of 73.4 out of 100 for the MaxEnt classifier and 69.7 for the RRM classifier. BIBREF161 follows the approach by BIBREF162 for entity coreference determination. He uses a binary classifier to determine if any two event mentions in the document refer to the same event. Thus, he pairs each event with every other event, and the classifier assigns each pair a probability that they are the same. The probability is used with entity linking/matching algorithm to determine event co-reference. Event co-referencing requires event mentions to be clustered to event clusters. Event mentions in a cluster are the same event. The system described here obtained an ACE value of between 88-91%, where the maximum ACE value is 100%. BIBREF161 uses the following features for event co-reference determination. Let the candidate be the earlier event mention and the anaphor be the later mention. The anchors for the candidate and the anaphor, the full or original form, and also in lowercase, and POS tag. Type of the candidate event and the anaphor event. Depth of candidate anchor word in parse tree. Distance between the candidate and anchor, measured in sentences. Number, heads, and roles of shared arguments, etc. Supervised approaches to solving event coreference use linguistic properties to decide if a pair of event mentions is coreferential BIBREF170 , BIBREF171 , BIBREF161 , BIBREF172 . These models depend on labeled training data, and annotating a large corpus with event coreference information requires substantial manual effort. In addition, since these models make local pairwise decisions, they are unable to capture a global event distribution at topic or document collection level. BIBREF173 present how nonparametric Bayesian models can be applied to an open-domain event coreference task in an unsupervised manner. The first model extends the hierarchical Dirichlet process BIBREF174 to take into account additional properties associated with event mentions. The second model overcomes some of the limitations of the first model, and uses the infinite factorial hidden Markov model BIBREF175 coupled to the infinite hidden Markov model BIBREF176 in order to consider a potentially infinite number of features associated with observable objects which are event mentions here, perform an automatic selection of the most salient features, and capture the structural dependencies of observable objects or event mentions at the discourse level. 
Furthermore, both models can work with a potentially infinite number of categorical outcomes (events, in this case). Two event mentions corefer if they have the same event properties and share the same event participants. To find coreferring event mentions, Bejan and Harabagiu describe words that may be possible event mentions with lexical features, class features such as POS and event classes BIBREF119 such as occurrence, state and action, WordNet features, semantic features obtained by a semantic parse BIBREF177 and the predicate-argument structures encoded in PropBank annotations BIBREF89 as well as semantic annotations encoded in the FrameNet corpus BIBREF79 . The first model represents each event mention by a finite number of feature types, and is also inspired by the Bayesian model proposed by BIBREF178 . In this model, a Dirichlet process (DP) BIBREF179 is associated with each document, and each mixture component (i.e., event) is shared across documents. In the process of generating an event mention, an event index $z$ is first sampled using a mechanism that facilitates sampling from a prior for infinite mixture models called the Chinese restaurant franchise (CRF) representation, as reported in BIBREF174 . The second model they use is called the iHMM-iFHMM model (infinite hidden Markov model-infinite factorial hidden Markov model). The iFHMM framework uses the Markov Indian buffet process (mIBP) BIBREF175 in order to represent each object as a sparse subset of a potentially unbounded set of latent features BIBREF180 , BIBREF181 . Specifically, the mIBP defines a distribution over an unbounded set of binary Markov chains, where each chain can be associated with a binary latent feature that evolves over time according to Markov dynamics. Although the iFHMM allows a flexible representation of the latent structure by letting the number of parallel Markov chains be learned from data, it cannot be used where the number of clustering components is infinite. An iHMM represents a nonparametric extension of the hidden Markov model (HMM) BIBREF182 that allows performing inference on an infinite number of states. To further increase the representational power for modeling discrete time series data, they develop a nonparametric extension that combines the best of the two models, and lets the two parameters $M$ and $K$ be learned from data. Each step in the new iHMM-iFHMM generative process is performed in two phases: (i) the latent feature variables from the iFHMM framework are sampled using the mIBP mechanism; and (ii) the features sampled so far, which become observable during this second phase, are used in an adapted version of the beam sampling algorithm BIBREF183 to infer the clustering components (i.e., latent events). They report results in terms of recall (R), precision (P), and F-score (F) by employing the mention-based B3 metric BIBREF184 , the entity-based CEAF metric BIBREF185 , and the pairwise F1 (PW) metric. Their experiments show that both of these models work well when the feature and cluster numbers are treated as free parameters, and the selection of feature values is performed automatically. BIBREF167 argue that events represent complex phenomena and can therefore co-refer fully, being identical, as other researchers have discussed, or co-refer partially, being quasi-identical or only partially identical. Two event mentions fully co-refer if their activity, event or state representation is identical in terms of all features used (e.g., agent, location or time).
Two event mentions are quasi-identical if they partially co-refer, i.e., most features are the same, but there may be additional details in one or the other. When two events fully co-refer, Hovy et al. state that the mentions may be lexically identical (i.e., the same senses of the same word, e.g., destroy and destruction), may be synonymous words, one mention may be a wider reading of the other (e.g., The attack took place yesterday and The bombing killed four people), one mention may be a paraphrase of the other with possibly some syntactic differences (e.g., He went to Boston and He came to Boston), or one mention may deictically refer to the other (e.g., the party and that event). Quasi-identity or partial co-reference may arise in two ways: membership identity or subevent identity. Membership identity occurs when one mention, say A, is a set of multiple instances of the same type of event, and the other mention, say B, is one of the individual events in A (e.g., I attended three parties last week. The first one was the best.). Subevent identity is found when one mention, say A, is a stereotypical sequence (or script) of events whereas the other mention, say B, is one of the actions or events within the script (e.g., The family ate at the restaurant. The dad paid the waitress at the end.). Hovy et al. attempt to build a corpus containing event co-reference links with high-quality annotations, i.e., annotations with high inter-annotator agreement, to be useful for machine learning. They have created two corpora to assist with a project on automated deep reading of texts. One corpus is in the domain of violent events (e.g., bombings, killings and wars), and the other contains texts about the lives of famous people. In both of these corpora, they have annotated a limited number of articles with full and partial co-references. BIBREF186 claims that performing event co-reference with high accuracy requires deep understanding of the text and that statistically-based methods, both supervised and unsupervised, do not perform well. He claims that this is the case because it is absolutely necessary to identify arguments of an event reliably before event co-references can be found. Arguments are difficult to identify because many are implicit and linguistically unexpressed. Successful event co-reference identification also requires determining spatio-temporal anchoring, and locations in time and space are very often implicit as well. The system he builds uses a linguistically based semantic module, which has a number of different submodules that take care of Spatio-Temporal Reasoning, Discourse Level Anaphora Resolution, and determining Topic Hierarchy. The coreference algorithm works as follows: for each possible referent, it checks all possible coreference links, at first using only the semantic features, which are wordform and lemma identity, and then semantic similarity measured on the basis of a number of similarity criteria which are lexically based. The system searches WordNet synsets and assigns a score according to whether the possible referents are directly contained in the same synset or not. A different score is assigned if their relation can be inferred from the hierarchy. Other computational lexical resources they use include FrameNet and its frame hierarchy, and SumoMilo and its semantic classification. After collecting all possible coreferential relations, the system filters out those links that are inconsistent or incompatible.
Argument structure and spatiotemporal relations are computed along with dependence relations; temporal logical relations are computed using an adaptation of Allen's algorithm. The system also computes semantic similarity, where high values are preferred. The paper does not give any results to support the initial hypothesis, although the ideas are interesting. BIBREF187 use granularity in computing event coreference. The intuition is that an event with a longer duration, happening over a bigger area and with multiple participants (for instance, a war between Russia and Ukraine) might be related to, but will probably not fully corefer with, a “lower level” event of shorter duration with single participants involved (e.g., A Russian soldier has shot dead a Ukrainian naval officer). Coreference between mentions of two events is determined by computing the compatibility of the contents of event attributes. The attributes used are the event trigger, time, location, and human and non-human participant slots BIBREF188 . Granularity size is described in terms of durations of event actions BIBREF189 and granularity levels of event participants, time and locations. Granularity is given in terms of partonomic (part-of) relations between entities and events, using the taxonomy of meronymic relations by BIBREF190 . Granularity levels of the human participant slot are contained within Winston et al.'s Member-Collection relations. The temporal granularity levels are part of Winston et al.'s Portion-Mass relationships, and locational levels are in line with Place-Area relations in the same taxonomy. Cybulska and Vossen experimented with a supervised pairwise binary decision-tree classifier to determine coreference of pairs of event mentions. They also ran experiments with a linear SVM and a multinomial Naive Bayes classifier, but the decision-tree classifier outperformed both of them. For the experiments, Cybulska and Vossen use the ECB+ dataset BIBREF191 . The ECB+ corpus contains a new corpus component, consisting of 502 texts, describing different instances of event types. They provide results in terms of several metrics: recall, precision and F-score, MUC BIBREF192 , B3 BIBREF184 , mention-based CEAF BIBREF185 , BLANC BIBREF193 , and CoNLL F1 BIBREF194 , and find that the introduction of the granularity concept into similarity computation improves results for every metric. Biomedical Event Extraction Researchers are interested in extracting information from the huge amount of biomedical literature published on a regular basis. Of course, one aspect of information extraction is event extraction, the focus of this paper. In the biomedical context, an event extraction system tries to extract details of biomolecular interactions among biomedical entities such as proteins and genes, and the processes they take part in, as described in textual documents. Manually annotated corpora are used to train machine learning techniques and evaluate event extraction techniques. There have been several workshops on biomedical natural language processing. We focus on the BioNLP Shared Tasks in recent years that had competitions on event extraction. There have been three BioNLP Shared Task competitions so far: 2009, 2011, and 2013. The BioNLP 2009 Shared Task BIBREF195 was based on the GENIA corpus BIBREF196 which contains PubMed abstracts of articles on transcription factors in human blood cells.
There was a second BioNLP Shared Task competition organized in 2011 to measure the advances in approaches and associated results BIBREF197 . The third BioNLP ST was held in 2013. We discuss some notable systems from BioNLP ST 2011 and 2013. Before the BioNLP Shared Tasks, event extraction in the biomedical domain usually classified each pair of named entities (usually protein names) co-occurring in the text as interacting or not. BioNLP Shared Tasks extended such an approach by adding relations such as direction, type and nesting. An event defines the type of interaction, such as phosphorylation, and is usually marked in the text with a trigger word (e.g., phosphorylates) describing the interaction. This word forms the core of the event description. A directed event has roles that have inherent directionality such as cause or theme, the agent or target of the biological process. In addition, events can act as arguments of other events, creating complex nested structures. For example, in the sentence Stat3 phosphorylation is regulated by Vav, a phosphorylation-event is the argument of the regulation-event. The BioNLP Shared Tasks provide task definitions, benchmark data and evaluations, and participants compete by developing systems to perform the specified tasks. The theme of BioNLP-ST 2011 was a generalization of the 2009 contest, generalized in three ways: text types, event types, and subject domains. The 2011 event-related tasks were arranged in four tracks: GENIA task (GE) BIBREF197 , Epigenetics and Post-translational Modifications (EPI) BIBREF198 , Infectious Diseases (ID) BIBREF199 , and the Bacteria Track BIBREF200 , BIBREF201 . Of the four event-related shared tasks in BioNLP 2011, the first three were related to event extraction. The Genia task was focused on the domain of transcription factors in human blood cell. Trascription is a complex but just the first step in the process in which the instructions contained in the DNA in the nucleus of a cell are used to produce proteins that control most life processes. Transcription factors are proteins that control the transcription process. The EPI task was focused on events related to epigenetics, dealing with protein and DNA modifications, with 14 new event types, including major protein modification types and their reverse reactions. Epigenesis refers to the development of a plant or animal from a seed, spore or egg, through a sequence of steps in which cells differentiate and organs form. The EPI task was designed toward pathway extraction and curation of domain databases BIBREF202 , BIBREF203 . A biological pathway refers to a sequence of actions among molecules in a cell that leads to a certain product or a change in the cell. The ID task was focused on extraction of events relevant to biomolecular mechanisms of infectious diseases from full length publications. Tasks other than ID focused on abstracts only. In this paper, we discuss the systems and approaches for only the 2011 GE Task. This is because several of the winning systems for the GE Task did well in the other two relevant tasks as well. The Genia Task is described in Table 7 . The table shows for each event type, the primary and secondary arguments to be extracted. For example, a phosphorylation event is primarily extracted with the protein to be phosphorylated, which is the addition of a phosphate group to a protein or other organic molecule. As secondary information, the specific site to be phosphorylated may be extracted. 
From a computational viewpoint, the event types represent different levels of complexity. When only primary arguments are considered, the first five event types in Table 7 are classified as simple events, requiring only unary arguments. The binding and regulation types are more complex. Binding requires the detection of an arbitrary number of arguments, and Regulation requires detection of recursive event structure. Consider the sentence In this study we hypothesized that the phosphorylation of TRAF2 inhibits binding to the CD40 cytoplasmic domain. Here there are two protein (entity) names: TRAF2 and CD40. The word phosphorylation refers to an event; this string is a trigger word. Thus, the goal of the GE task was to identify a structure like the ones in Tables 8 and 9 . In the tables, $T_i$ represents a trigger word, and $E_i$ represents an event associated with the corresponding trigger word. There are three events, $E_1$ is the phosphorylation event, $E_2$ is the binding event and $E_3$ is the negative regulation event. For each trigger word, we see the starting and ending character positions in the entire string. For each event, we see the participants in it. The second task identifies an additional site argument. Table 10 shows the best results for various tasks in the BioNLP 2011 contests. BIBREF197 note an improvement of 10% over the basic GE task, in 2011 (Task GEa), compared to 2009. The results of the GE tasks show that automatic extraction of simple events–those with unary arguments, e.g., gene expression, localization and phosphorylation—can be achieved at about 70% in F-score, but the extraction of complex events, e.g., binding and regulation is very challenging, with only 40% performance level. The GE and ID results show that generalization to full papers is possible, with just a small loss in performance. The results of phosphorylation events in GE and EP are similar (GEp vs. EPIp), which leads BIBREF197 to conclude that removal of the GE domain specificity does not reduce event extraction performance by much. EPIc results indicate that there are challenges to extracting similar event types that need to be overcome; EPIf results indicate that there are difficult challenges in extracting additional arguments. The complexity of the ID task is similar to that of the GE task; this shows up in the final results, also indicating that it is possible to generalize to new subject domains and new argument (entity) types. Below, we provide a brief description of some of the approaches to biomedical event extraction from the BioNLP 2011 contests. Technical Methods Used in BioNLP Shared Tasks 2011 The team that won the GE Task was the FAUST system BIBREF204 , followed by the UMass system BIBREF205 , then the UTurku system BIBREF206 . The performance of these three systems on the various tasks is given in Table 11 . In addition, we have the Stanford system in the table because it performed fairly well on the tasks. The UMass system BIBREF205 looks at a sentence as having an event structure, and then projects it onto a labeled graph. See Figure 6 for a target event structure and the projected graph for the sentence fragment Phosphorylation of TRAF2 inhibits binding to CD40. The system searches for a structure that connects the event and its participating entities and imposes certain constraints on the structure. Thus, the UMass system treats the search for such a structure as an optimization problem. 
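Before looking at how the UMass system encodes this search, a minimal representation of the nested event structures being targeted (the E1/E2/E3 example of Tables 8 and 9) might look like the sketch below. The role names and argument assignments are our own reading of the example, and the offsets are computed rather than taken from the gold annotation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Union

SENTENCE = ("In this study we hypothesized that the phosphorylation of "
            "TRAF2 inhibits binding to the CD40 cytoplasmic domain.")

@dataclass
class Trigger:
    tid: str          # e.g. "T1"
    etype: str        # e.g. "Phosphorylation"
    start: int        # character offsets into the sentence
    end: int
    text: str

@dataclass
class Event:
    eid: str                                        # e.g. "E1"
    trigger: Trigger
    # Role name -> protein name or nested Event; the nesting is what makes
    # Regulation-type events recursive.
    args: List[Tuple[str, Union[str, "Event"]]] = field(default_factory=list)

def make_trigger(tid, etype, word):
    start = SENTENCE.index(word)
    return Trigger(tid, etype, start, start + len(word), word)

t1 = make_trigger("T1", "Phosphorylation", "phosphorylation")
t2 = make_trigger("T2", "Binding", "binding")
t3 = make_trigger("T3", "Negative_regulation", "inhibits")

# Argument assignments below are our interpretation of the example,
# not the shared task's official annotation.
e1 = Event("E1", t1, [("Theme", "TRAF2")])
e2 = Event("E2", t2, [("Theme", "TRAF2"), ("Theme2", "CD40")])
e3 = Event("E3", t3, [("Theme", e2), ("Cause", e1)])
```

With this picture of the target structures in mind, we return to how the UMass system casts the search for them as an optimization problem.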
To formulate this optimization problem, the system represents the structure in terms of a set of binary variables, inspired by the work of BIBREF207 , BIBREF208 . These binary variables are based on the projection of the events to the labeled graph. An example of a binary variable is $a_{i,l.r}$ to indicate that between positions $i$ and $l$ in the sentence, there is an edge labeled $r$ from a set of possible edge labels $R$ . Another such binary variable is $t_{i,p,q}$ that indicates that at position $i$ , there is a binding event with arguments $p$ and $q$ . Given a number of such variables, it is possible to write an objective function to optimize in order to obtain events and entity bindings. The system decomposes the biomedical event extraction task into three sub-tasks: (a) event triggers and outgoing edges on arguments, (b) event triggers and incoming edges on arguments, and (c) and protein-protein bindings. The system obtains an objective function for each of the sub-tasks. It solves the three optimization problems one by one in a loop, till no changes take place, or up to a certain number of iterations. The approach uses optimizing by dual decomposition BIBREF209 , BIBREF210 since the dual of the original optimization problem is solved. The Stanford system BIBREF211 exploits the observation that event structures bear a close relation to dependency graphs BIBREF212 . They cast bimolecular events in terms of these structures which are pseudo-syntactic in nature. They claim that standard parsing tools such as maximum-spanning tree parsers and parse rerankers can be applied to perform event extraction with minimum domain specific training. They use an off-the-shelf dependency parser, MSTParser BIBREF213 , BIBREF214 , but extend it with event-specific features. Their approach requires conversion to and from dependency trees, at the beginning and and at the end. The features in the MSTParser are quite local (i.e., able to examine a portion of each event at a time); the decoding necessary can be performed globally, allowing the dependency parser some trade-offs. Event parsing is performed using three modules: 1) anchor detection to identify and label event anchors, 2) event parsing to form candidate event structures by linking entries and event anchors, and 3) event reranking to select the best candidate event structure. First, they parse the sentences with a reranking parser BIBREF215 with the biomedical parsing model from BIBREF216 , using the set of Stanford dependencies BIBREF217 . After the parsing, they perform anchor detection using a technique inspired by techniques for named entity recogntion to label each token with an event type or none, using a logistic regression classifier. The classifier uses features inspired by BIBREF208 . They change a parameter to obtain high recall to overgenerate event anchors. Multiword event anchors are reduced to their syntactic head. The event anchors and the included entities become a “reduced" sentence, input to the event parser. Thus, the event parser gets words that are believed to directly take part in the events. This stage uses the MSTParser with additional event parsing features. The dependency trees are decoded and converted back to event structures. Finally, for event reranking, the system gets $n$ best list of event structures from each decoder in the previous step of event parsing. The reranker uses global features of an event structure to restore and output the highest scoring structure. 
The reranking approach is based on parse reranking BIBREF218 , but is based on features of event structures instead of syntactic constituency structure. They use the cvlm estimator BIBREF215 when learning weights for the reranking model. Since the reranker can work with outputs of multiple decoders, they use it as an ensemble technique as in BIBREF219 . The FAUST system BIBREF204 shows that using a straightforward model combination strategy with two competitive systems, the UMass system BIBREF205 and the Stanford system BIBREF211 just described, can produce a new system with substantially high accuracy. The new system uses the framework of stacking BIBREF220 . The new system does it by including the predictions of the Stanford system into the UMass system, simply as a feature. Using this simple model of stacking, the FAUST system was able to obtain first place in three tasks out of four where it participated. The Turku Event Extraction System BIBREF206 , BIBREF221 can be easily adapted to different event schemes, following the theme of event generalization in BioNLP 2011. The system took part in eight tasks in BioNLP 2011 and demonstrated the best performance in four of them. The Turku system divides event extraction into three main steps: i) Perform named entity recognition in the sentence, ii) Predict argument relations between entities, and iii) Finally, separate entity/argument sets into individual events. The Turku system uses a graph notation with trigger and protein/gene entities as nodes and relations (e.g., theme) as edges. In particular, an event in the graph representation is a trigger node along with its outgoing edges. The steps are shown in Figure 7 . The Turku system uses Support Vector Machines BIBREF222 , BIBREF223 at various stages to perform each of the sub-tasks. To use an SVM classifier, one needs to convert text into features understood by the classifier. The Turku system performs a number of analyses on the sentences, to obtain features, which are mostly binary. The features are categorized into token features (e.g., Porter-stem BIBREF224 , Penn Treebank part-of-speech tags BIBREF102 , character bi- and tri-grams, presence of punctuation on numeric characters), sentence features (e.g., the number of named entities in the sentence), dependency chains (up to a depth of three, to define the context of the words), dependency with $n$ -grams (joining a token with two flanking dependencies as well as each dependency with two flanking tokens), trigger features (e.g., the trigger word a gene or a protein) and external features (e.g., Wordnet hypernyms, the presence of a word in a list of key terms). Applicable combinations of these features are then used by the three steps in event detection: trigger detection, edge detection and unmerging. Trigger words are detected by classifying each token as negative or as one of the positive trigger classes using SVMs. Sometimes several triggers overlap, in which case a merged class (e.g. phosphorylation–regulation) is used. After trigger prediction, triggers of merged classes are split into their component classes. Edge detection is used to predict event arguments or triggerless events and relations, all of which are defined as edges in the graph representation. The edge detector defines one example per direction for each pair of entities in the sentence, and uses the SVM classifier to classify the examples as negatives or as belonging to one of the positive classes. 
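To make the Turku system's first step more concrete, the sketch below sets trigger detection up as token classification with sparse binary features and a linear SVM. The feature set is a small, illustrative subset of the feature types listed above; a real implementation would use a proper Porter stemmer, parsed dependency chains and the external resources mentioned.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def trigger_features(tokens, pos_tags, i):
    """A small, illustrative subset of the binary feature types listed above."""
    feats = {
        "stem": tokens[i].lower()[:4],          # crude stand-in for a Porter stem
        "pos": pos_tags[i],
        "has_digit": any(c.isdigit() for c in tokens[i]),
    }
    for n in (2, 3):                            # character bi- and tri-grams
        for j in range(len(tokens[i]) - n + 1):
            feats["ngram=" + tokens[i][j:j + n].lower()] = True
    return feats

def train_trigger_detector(sentences, pos_seqs, labels):
    # labels[k][i] is "neg" or a trigger class such as "Phosphorylation";
    # overlapping triggers would get merged labels that are split afterwards.
    X, y = [], []
    for toks, pos, labs in zip(sentences, pos_seqs, labels):
        for i in range(len(toks)):
            X.append(trigger_features(toks, pos, i))
            y.append(labs[i])
    vec = DictVectorizer()
    clf = LinearSVC()
    clf.fit(vec.fit_transform(X), y)
    return vec, clf
```

Edge detection and unmerging can reuse the same classifier machinery, with pair-level and node-set-level features respectively.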
When edges are predicted between these nodes, the result is a merged graph where overlapping events are merged into a single node and its set of outgoing edges. To produce the final events, these merged nodes need to be “pulled apart” into valid trigger and argument combinations. Unmerging is also performed using the SVM classifier. Speculation and negation are detected independently, with binary classification of trigger nodes using SVMs. The features used are mostly the same as for trigger detection, with the addition of a list of speculation-related words. Extracting Events from Socially Generated Documents With the explosive expansion of the Internet during the past twenty years, the volume of socially generated text has skyrocketed. Socially generated text includes blogs and microblogs. For example, Twitter, started in 2006, has become a social phenomenon. It allows individuals with accounts to post short messages that are up to 140 characters long. Currently, more than 340 million tweets are sent out every day. While a majority of posts are conversational or not particularly meaningful, about 3.6% of the posts concern topics of mainstream news. Twitter has been credited with providing the most current news about many important events before traditional media, such as the attacks in Mumbai in November 2008. Twitter also played a prominent role in the unfolding of the troubles in Iran in 2009 subsequent to a disputed election, and the so-called Twitter Revolutions in Tunisia and Egypt in 2010-11. Most early work on event extraction from documents found on the Internet has focused on news articles BIBREF225 , BIBREF226 , BIBREF227 . However, as noted earlier, social networking sites such as Twitter and Facebook have become important complementary sources of such information. Individual tweets, like SMS messages, are usually short and self-contained and therefore are not composed of complex discourse structures as is the case with texts containing narratives. However, extracting structured representations of events from short or informal texts is also challenging because most tweets are about mundane things, without any news value and of interest only to the immediate social network. Individual tweets are also very terse, without much context or content. In addition, since Twitter users can talk about any topic, it is not clear a priori what event types may be appropriate for extraction. The architecture of the TwiCal system for event extraction from Twitter messages BIBREF228 is given in Figure 8 . Given a stream of raw tweets, TwiCal extracts events with associated named entities and times of occurrence. First, the tweets are POS tagged using a tagger BIBREF228 trained specially on Twitter data. Then named entities are recognized BIBREF229 using a recognizer trained on Twitter data as well. After this, phrases that mention events (also called event triggers or event phrases, or just events) are extracted using supervised learning. BIBREF228 annotated 1,000 tweets with event phrases, following guidelines for annotation of EVENT tags in Timebank BIBREF119 . The system treats recognition of event triggers as a sequence labeling task using Conditional Random Fields BIBREF147 . It uses a contextual dictionary, orthographic features, features based on the Twitter-tuned POS tagger, and dictionaries of event terms gathered from WordNet BIBREF121 .
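As a rough illustration of this sequence-labeling step, the sketch below trains a CRF over BIO-style event-phrase tags with simplified token features. The feature functions are placeholders standing in for TwiCal's contextual dictionary, orthographic and POS-based features; they are not the system's actual feature set.

```python
import sklearn_crfsuite

def token_features(tokens, pos_tags, i):
    """Simplified stand-ins for TwiCal's orthographic and POS features."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "pos": pos_tags[i],
        "is_capitalized": tok[:1].isupper(),
        "has_digit": any(c.isdigit() for c in tok),
        "prev_lower": tokens[i - 1].lower() if i > 0 else "<s>",
        "next_lower": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
    }

def train_event_phrase_crf(tweets, pos_seqs, bio_labels):
    # X: one list of feature dicts per tweet; y: matching B/I/O label sequences.
    X = [[token_features(t, p, i) for i in range(len(t))]
         for t, p in zip(tweets, pos_seqs)]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                               max_iterations=100)
    crf.fit(X, bio_labels)
    return crf
```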
Once a large number of events have been extracted by this CRF learner, TwiCal categorizes these events into types using an unsupervised approach based on latent variable models, inspired by work on modeling selectional preferences BIBREF230 , BIBREF231 , BIBREF232 , BIBREF233 and unsupervised information extraction BIBREF234 , BIBREF225 , BIBREF235 . This automatic discovery of event types is similar to topic modeling, where one automatically identifies the extant topics in a corpus of text documents. The automatically discovered types (topics) are quickly inspected by a human effort to filter out incoherent ones, and the rest are annotated with informative labels. Examples of event types discovered along with top event phrases and top entities are given in Table 12 . The resulting set of types are applied to categorize millions of extracted events without the use of any manually annotated examples. For inference, the system uses collapsed Gibbs sampling BIBREF236 and prediction is performed using a streaming approach to inference BIBREF237 . To resolve temporal expressions, TwiCal uses TempEx BIBREF238 , which takes as input a reference date, some text and POS tags, and marks temporal expressions with unambiguous calendar references. Finally, the system measures the strength of association between each named entity and date based on the number of tweets they co-occur in, in order to determine if the event is significant. Examples of events extracted by TwiCal are given in Table 13 . Each event is a 4-tuple including a named entity, event phrase, calendar date and event type. The TwiCal system describe above used topic modeling using latent variables as one of the several computational components; it is used to capture events captured using supervised learning into types or topics. BIBREF239 point out some drawbacks of using such an approach. The main problem is that frequently the result generated by Latent Dirichlet Analysis (LDA) is difficult to interpret because it simply gives a list of words associate with the topic. For example, when BIBREF239 attempt to find the four most important topics using LDA based on a Twitter collection emanating from Singapore on June 16, 2010, they find the topics listed in Table 14 . Therefore, Weng et al. present another approach to detect events from a corpus of Twitter messages. Their focus is on detection and therefore, not on extraction of components that describe an event. Event detection is based on the assumption that when an event is taking place, some related words show an increase in usage. In this scheme, an event is represented by a number of keywords showing a burst in appearance count BIBREF240 , BIBREF241 . Although it is clear that tweets report events, but such reports are usually overwhelmed by high flood of meaningless “babbles". In addition, the algorithms for event detection must be scalable to handle the torrent of Twitter posts. The EDCoW (Event Detection with Clustering of Wavelet-based Signals) system builds signals for individual words by applying wavelet analysis on frequency-based raw signals of words occurring in the Twitter posts. These signals capture only the bursts in the words' appearance. The signals are computed efficiently by wavelet analysis BIBREF242 , BIBREF243 . Wavelets are quickly vanishing oscillating functions and unlike sine and cosine functions used in Discrete Fourier Transformation (DFT) which are localized in frequency but extend infinitely in time, wavelets are localized both in time and frequency. 
Therefore, wavelet transformation is able to provide precise measurements about when and to what extent bursts take place in a signal. BIBREF239 claim that this makes it a better choice for event detection when building signals for individual words. Wavelet transformation converts signals from the time domain to the time-scale domain, where scale can be considered the inverse of frequency. Such signals also take less space to store. Thus, the first thing EDCoW does is convert frequencies over time to wavelets, using a sliding window interval. It removes trivial words by examining signal auto-correlations. The remaining words are then clustered to form events with a modularity-based graph partitioning technique, which uses a scalable eigenvalue algorithm. It detects events by grouping sets of words with similar patterns of burst. To cluster, similarities between words need to be computed. EDCoW does so using cross correlation, which is a common measure of similarity between two signals BIBREF244 . Cross correlation is a pairwise operation. Cross correlation values among a number of signals can be represented in terms of a correlation matrix $\mathcal {M}$ , which happens to be a symmetric sparse matrix of adjacent similarities. With this graph setup, event detection can be formulated as a graph partitioning problem, i.e., cutting the graph into subgraphs. Each subgraph corresponds to an event and contains a set of words with high cross correlation, while the cross correlation between words in different subgraphs is low. The quality of such a partitioning is measured using a metric called modularity BIBREF245 , BIBREF246 . The modularity of a graph is defined as the sum of weights of all the edges that fall within subgraphs (after partitioning) minus the expected edge weight sum if the edges were placed at random. The main computational task in this component is finding the largest eigenvalue, and the corresponding eigenvector, of the sparse symmetric modularity matrix. This is solved using power iteration, which is able to scale up with the increase in the number of words in the tweets BIBREF247 . EDCoW requires each individual event to contain at least two words. To differentiate big events from trivial ones, EDCoW quantifies each event's significance, which depends on two factors: the number of words and the cross correlation among the words related to the event. To make EDCoW work with TwiCal, to see if it improves performance, the topic detection module would have to be replaced. EDCoW associates fewer words with topics because it filters words away before associating them with a topic. Table 15 gives a few event words obtained by EDCoW and the corresponding event descriptions. Please note that the event descriptions were created by the authors and not the system. Summarization BIBREF248 use event-based features to represent sentences and show that their approach improves the quality of the final summaries compared to a baseline bag-of-words approach. Question Answering Event recognition is a core task in question answering, since the majority of web questions have been found to relate to events and situations in the world BIBREF121 . For example, to answer the question How many people were killed in Baghdad in March?, or Who was the Prime Minister of India when China and India fought their only war?, the question-answering system may have to identify events across a set of documents before creating an answer.
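Returning briefly to EDCoW's clustering step, the sketch below illustrates the two numerical pieces described above: a pairwise cross-correlation matrix over per-word signals, and a power-iteration step over the modularity matrix. It is a simplification under our own assumptions (zero-lag correlation only, a single bisection instead of the full recursive partitioning), not EDCoW's actual implementation.

```python
import numpy as np

def cross_correlation_matrix(signals):
    """signals: array of shape (n_words, n_time_points), one wavelet-domain
    signal per word. Returns absolute zero-lag cross correlations."""
    z = signals - signals.mean(axis=1, keepdims=True)
    z /= (z.std(axis=1, keepdims=True) + 1e-12)
    return np.abs(z @ z.T) / signals.shape[1]

def leading_eigenvector(M, iters=200):
    """Power iteration, the scalable step the partitioning relies on.
    (In practice the matrix is shifted so the leading eigenvalue is positive.)"""
    v = np.random.rand(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

def split_once(corr):
    """One bisection step of modularity-based partitioning."""
    np.fill_diagonal(corr, 0.0)
    degrees = corr.sum(axis=1)
    two_m = degrees.sum()
    modularity_matrix = corr - np.outer(degrees, degrees) / two_m
    v = leading_eigenvector(modularity_matrix)
    return v >= 0          # boolean mask: which words go to which subgraph
```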
Future Directions of Research When doctors take notes on a patient's history or medical record, the information is not always written in the order in which the events occurred. It would be useful to gather notes from different places and arrange them in event or temporal order; extracting an event-based structure from the medical record would help in understanding the medical history better. Most event extraction systems at the current time process sentences in isolation. Therefore, events crossing sentence boundaries cannot be detected. ...
GENIA corpus
9595bf228c9e859b0dc745e6c74070be2468d2cf
9595bf228c9e859b0dc745e6c74070be2468d2cf_0
Q: Does the training dataset provide logical form supervision? Text: Introduction Open domain semantic parsing aims to map natural language utterances to structured meaning representations. Recently, seq2seq based approaches have achieved promising performance with structure-aware networks, such as sequence-to-action BIBREF0 and STAMP BIBREF1. However, this kind of approach mixes up low-level entities, predicates and high-level structures, which loses precision at each level to some extent. The sketch-based method may therefore be another choice for disentangling high-level structures from low-level details. In this work, we conduct our sketch-based approach on MSParS, a large hand-annotated semantic dataset mapping questions to logical forms. We argue there are at least two advantages to the sketch-based method. Firstly, the basic attention-based seq2seq network BIBREF2, BIBREF3 does not perform well in semantic parsing because logical forms are structured sequences and it fails to incorporate the structural information of logical forms. Sequence-to-tree (seq2tree) BIBREF4 proposes a structure-aware decoder to utilize this information, but its architecture also becomes much more complex. Instead of using intricate decoders, we can extract high-level sketches for logical forms and classify samples into several sketch classes. Logical forms of a certain sketch class have a fixed pattern, as shown in Table TABREF2. So the structure problem is finally simplified to a classification task. Secondly, logical forms often need to copy a span of the question. Although Copynet BIBREF5 and Pointer BIBREF6 implement the copy mechanism, it is still difficult to achieve the expected effect. For the sketch-based method, however, this problem becomes an individual entity labeling task, which is easier than generating entities. Generally speaking, the seq2seq way decodes the entire meaning representation in one go, while we deal with different parts at different levels of granularity, just like coarse2fine BIBREF7. Although we increase the number of stages, the network architecture of each stage is much simpler without sacrificing accuracy. In this way, we are able to locate errors and optimize the corresponding parts. We propose to decompose the process into three stages. In the first stage, we deal with a sketch classification task. Then, we find the entities in the questions through an entity labeling task. In practice, we combine these two stages through a multi-task model for both accuracy and efficiency BIBREF8. The last stage is the most difficult part since the knowledge base of MSParS is not available. We define question pattern-logical form pattern pairs and use a matching network to rank all these pairs. A seq2seq based approach is one of the two methods we adopt here to help rescore candidates as a whole. We also incorporate the state-of-the-art pre-trained model Bert BIBREF9 in the above tasks to bring in more prior knowledge. The error rate of our multi-task model is lower than 2%, which ensures the right sketch and entities. So the last stage actually determines the accuracy to a large extent. Our accuracy reaches 77.42% after the above three stages. The seq2seq based approach and the co-occurrence relationship improve the accuracy to 86.86% on the validation set. Our final accuracy on the full test set reaches 84.47%, and the accuracy on the hard test subset is promoted to 63.08%, which is higher than the best model on the submission list by 5.65%.
In the rest of our paper, we first analyze the special features of MSParS for this task in section 2. Afterwards, we discuss our system in detail in section 3. Then in section 4, we describe our experimental setup, results and analysis. Related work is discussed in section 5. Finally, we conclude the whole paper and propose future work. Data Analysis The dataset MSParS is published by the NLPCC 2019 evaluation task. The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of them are used as the training set, 10% are used as the validation set, and the rest are used as the test set. 3,000 hard samples are selected from the test set. The metric for this dataset is exact-match accuracy on both the full test set and the hard test subset. Each sample is composed of the question, the logical form, the parameters (entity/value/type) and the question type, as Table TABREF3 demonstrates. Samples are originally classified into 12 classes at a coarse level, while we reclassify them at a finer level, which is the basis of our sketch-based method. We replace the predicate in a triple with $P_i$ and the entity in a triple with $E_i$, distinguishing different ones with subscripts. The number in the superlative and comparative classes is replaced with $V$, while the type in triples beginning with the special predicate “isa” is replaced with $T$. In this way, we get the sketch of the logical form. Finally, we produce 15 classes of sketches. We believe the features of questions highly correlate with the sketch of logical forms. For instance, the sketch must begin with “argmore” or “argless” if there are comparative words such as “higher”, “more” and “before” in the question. Therefore, we take questions as input to classify samples into different sketch classes. As Table TABREF3 suggests, entities are concatenated tokens from the question. So we implement entity labeling to label every token in the questions. Nonetheless, cases are tough when there is more than one entity in the logical form. Suppose that we have labeled $E_1$ and $E_2$ from the question. We do not know which one we should choose to fill in the first entity slot in the sketch. We solve this problem and pick out the suitable predicate simultaneously. The entities in the questions are replaced by the label “entity”, with subscripts indicating the order in which they appear in the questions, to get question patterns. When it comes to logical form patterns, the entities in logical forms are substituted as well, while predicates are split into smaller tokens. Table TABREF4 gives an example of these two patterns. In this way, we combine the entity collocations with predicates successfully. Another reason for using the label “entity” here is generalization. For instance, “what is birth date for barack obama” shares the same question pattern “what is birth date for entity1” with “what is birth date for donald trump”. The predicate used in these logical forms is “mso:people.person.date_of_birth”. So we can draw the conclusion that the predicate for this question pattern is likely to be “mso:people.person.date_of_birth”. If “what is birth date for george bush” appears in the test set, we are able to find the right predicate even though we have not seen “george bush” before. Without the impact of specific entities, our model learns the mapping from question patterns to logical form patterns more accurately. Since we do not have a knowledge base, we can only extract logical form patterns from the training set.
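As a concrete illustration of the pattern construction just described, the sketch below replaces labeled entity spans with ordered placeholders. The span representation (token-index pairs) is our own assumption about how the labeled entities might be passed around.

```python
def question_pattern(tokens, entity_spans):
    """Replace each labeled entity span with entity1, entity2, ... in the
    order the entities appear, as in the pattern examples above."""
    out, i, idx = [], 0, 0
    spans = sorted(entity_spans)          # list of (start, end) token indices
    while i < len(tokens):
        if idx < len(spans) and i == spans[idx][0]:
            out.append(f"entity{idx + 1}")
            i = spans[idx][1]             # skip past the entity tokens
            idx += 1
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)

# question_pattern("what is birth date for barack obama".split(), [(5, 7)])
# -> "what is birth date for entity1"
```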
We find that 90.34% of the logical form patterns in the validation set are covered by those in the training set, which ensures the feasibility of our method. We take question patterns paired with logical form patterns as input. Then, we get logical form candidates by combining sketches and entities with logical form patterns. The ones with higher scores are more likely to be right. Proposed Approach ::: Sketch Classification The single-sentence classification fine-tuning task of Bert is applied at this stage. A special classification embedding ([CLS]) is added to the beginning. We use the final hidden state corresponding to this token as the aggregate sequence representation for the classification task, denoted as $C_s \in \mathbb {R}^h$, so the probability of class $c_i$ can be computed as $p(c_i \mid C_s) = \mathrm{softmax}(W_s C_s + b_s)_i$, where $W_s \in \mathbb {R}^{k_s \times h}$ and $b_s \in \mathbb {R}^{k_s}$, and $k_s$ is the number of sketch classes here. $W_s$, $b_s$ and all the parameters of Bert are fine-tuned jointly to maximize the log likelihood of the correct label. Proposed Approach ::: Entity Labeling We use the single-sentence tagging fine-tuning task of Bert here to label whether each token in the question is an entity token that also appears in the logical form. To simplify the problem, we use three labels for the question tokens: label “b” represents the first token of an entity, label “i” marks the remaining entity tokens, and label “o” represents tokens that are not in any entity. Because of the lexical rules in Bert, we also label the special token ([CLS]) at the beginning of the sentence and the special token ([SEP]) at the end of the sentence as “o”. A fourth label, “p”, is used for all the padding tokens added to reach max_length. Besides, some tokens in the questions are split into several smaller tokens by Bert. The split tokens are labeled as “i” if they are in entities and “o” otherwise. In this stage, we use all the final hidden states, denoted as $D \in \mathbb {R}^{h \times m}$, where $m$ is the max_length of the input tokens we set. The hidden states are mapped into dimension $k_e$ via $E = W_eD + b_e$, where $W_e \in \mathbb {R}^{k_e \times h}$ and $b_e \in \mathbb {R}^{k_e \times m}$, and $k_e$ is the number of labels here. We employ a CRF on top of the network, taking $E$ as input representations. The objective is to minimize the loss of the CRF layer. Proposed Approach ::: Multi-Task Model We combine sketch classification and entity labeling so that the two tasks share information: the sketch of a sample can help label entities, and conversely the labeled entities can help sketch classification. The architecture of our model is shown in Fig. FIGREF9, where the parameters of the Bert model are fine-tuned jointly for the two tasks. Since the dataset is large, the multi-task model saves a lot of time compared with training two separate models. Finally, it contributes to both accuracy and efficiency. In this way, the loss to minimize is the weighted sum of the cross-entropy loss of the sketch classification task and the CRF loss of the entity labeling task. Proposed Approach ::: Pattern Pair Matching Network Besides the single-sentence tasks, Bert provides sentence pair classification tasks as well. We implement a matching network taking question patterns and logical form patterns as input. The right pattern pairs are regarded as positive samples. For a fixed question pattern, we select negative samples only from the logical form patterns in the same sketch class.
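A minimal PyTorch sketch of the two heads and the weighted loss described above is given below. It is our own simplification: the CRF layer over the labeling logits is replaced by plain token-level cross-entropy to keep the example short, and the loss weights follow the values reported later in the experiments (1 for sketch classification, 2 for entity labeling).

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Two heads over Bert outputs: sketch classification on the [CLS] state
    and token labeling on all hidden states. A faithful implementation would
    put a CRF on top of the labeling logits, as described above."""
    def __init__(self, hidden=768, n_sketches=15, n_labels=4):
        super().__init__()
        self.sketch = nn.Linear(hidden, n_sketches)
        self.label = nn.Linear(hidden, n_labels)

    def forward(self, hidden_states):            # (batch, seq_len, hidden)
        cls = hidden_states[:, 0]                # [CLS] representation C_s
        return self.sketch(cls), self.label(hidden_states)

def multitask_loss(sketch_logits, label_logits, sketch_y, label_y,
                   w_sketch=1.0, w_label=2.0):
    ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks ignored positions
    loss_sketch = ce(sketch_logits, sketch_y)
    # CrossEntropyLoss expects (batch, classes, seq_len) for token labeling.
    loss_label = ce(label_logits.transpose(1, 2), label_y)
    return w_sketch * loss_sketch + w_label * loss_label
```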
The sketch mentioned here comes from the multi-task model. Just like in sketch classification, we denote the final hidden state corresponding to the token ([CLS]) as $C_p \in \mathbb {R}^h$, so the probability can be computed as $p(c_j \mid C_p) = \mathrm{softmax}(W_p C_p + b_p)_j$, where $W_p \in \mathbb {R}^{2 \times h}$, $b_p \in \mathbb {R}^{2}$ and $c_j \in \lbrace 0, 1\rbrace $. $W_p$, $b_p$ and all the parameters of Bert are fine-tuned jointly to maximize the log likelihood of the correct class. In the prediction stage, the candidates for a question pattern also come from logical form patterns in the same sketch class. The probabilities of class “1” are the scores we obtain for these pattern pairs. From logical form patterns, we get not only the right predicates, but also the right order in which entities should appear. So with the sketch and entities we acquire from the multi-task model, we can already generate complete logical form candidates with scores between 0 and 1. Proposed Approach ::: Predicate-Entity Pair Matching Network To alleviate the absence of a knowledge base, we incorporate the co-occurrence relationship between predicates and entities to evaluate the candidates. We create a second matching network, also based on Bert. This time, the pairs we take as input are predicate-entity pairs. We label a predicate-entity pair as “1” if it has ever appeared in a triple in the training set. For a certain entity, we select predicates that never appear with this entity as negative samples. In the prediction stage, we score the predicate-entity pairs in the logical form candidates. However, this network does not take questions into account. The predicate for a certain entity can differ a lot according to the question. For instance, the predicate for “what is birth date for barack obama” is apparently different from that for “what is birth place for barack obama”, but the entity “barack obama” has only one predicate with the highest score. Although this matching network only considers the co-occurrence relationship regardless of the information from questions, the scores it produces do work as an auxiliary signal. Proposed Approach ::: Pointer Network Although it is not easy for a complex network to generate the whole logical form, such networks do reflect the mapping from an overall perspective. So we adopt Pointer BIBREF6 here to rerank. We take the questions as input. In logical forms, entities and predicates are composed of words concatenated by “_” or “.”. In order to utilize the information of the words, we split all entities and predicates and take the split logical form candidates as output. For a fixed question, we calculate cross-entropy losses for the different pairs with split logical forms. Each loss is then divided by the maximum loss and subtracted from 1, so that it is normalized between 0 and 1. The higher the score, the more likely the logical form candidate is to be correct. Proposed Approach ::: Ranking A linear combination of the three intermediate scores from the pattern pair matching network, the predicate-entity pair matching network and Pointer is used to rerank logical form candidates. The weights are roughly tuned on the validation set.
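The normalization and final ranking just described can be written down in a few lines. The weight values in the sketch are placeholders, since the paper only says the weights were tuned by hand on the validation set.

```python
def pointer_scores(losses):
    """Normalize seq2seq cross-entropy losses of a question's candidates into
    [0, 1]: divide by the maximum loss and subtract from 1, so a lower loss
    gives a higher score."""
    max_loss = max(losses)
    return [1.0 - loss / max_loss for loss in losses]

def rank_candidates(cands, w_pattern=1.0, w_pred_entity=0.3, w_pointer=0.5):
    """cands: list of dicts holding the three intermediate scores.
    The weights here are illustrative placeholders."""
    def final_score(c):
        return (w_pattern * c["pattern_score"]
                + w_pred_entity * c["pred_entity_score"]
                + w_pointer * c["pointer_score"])
    return sorted(cands, key=final_score, reverse=True)
```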
The weight of the loss in sketch classification is 1 while that in entity labeling is 2. We train 3 models in pattern pair matching network with different epoches. As for predicate-entity pair matching network, the number of epoch we use is 3. In Pointer, word embeddings were initialized by GloveBIBREF10. The hidden dim of LSTM is set to 256. More details will be released in our source codes later. Because of the instability of the performance of neural network over training epoches, ensemble learning is incorporated both in pattern pair matching network and Pointer. Scores of Pointer is the simple average of scores from 3 models with different epoches. When it comes to pattern pair matching net, it is a little complex. We make a prediction for training set with our “best" model. We apply ranking sampling here. From those labeled as “0” but with probabilities larger than 0.0001, we select 20 of them while 5 of those whose probabilities are smaller than 0.0001 as new negative samples. We train new models with new training data resampled before every epoch based on one “best" model and base model of Bert. After several epoches, we average the probabilities of new models and original models for ensemble. We demonstrate the detailed performance in Table TABREF16. All samples are classified into 15 classes. We show the results for every class and the overall weighted average preformance in validation set. Because the complete test set is not open yet, we only provide the overall results in test set returned after submission. It can be seen the overall error rate of our multi-task model is only 1.93% which means this task is successful. In sketch classification, $Err_s$ scores of all classes are lower than 1% except multi-turn-answer. Its recall is 100.00% while its precision is 91.38%. 0.92% of samples in multi-turn-entity are misclassified to multi-turn-answer in validation set. We find there are separator “$|||$” in logical forms from three classes of multi-turn questions. Multi-turn-predicate questions have two different entities while both multi-turn-entity and multi-turn-answer questions have only one. This kind of entity information is passed to sketch classification through shared parameters. So our system makes some mistakes while distinguishing multi-turn-entity samples from multi-turn-answer samples. As for entity labeling, the overall error rate is 1.72%. We check the wrong samples and find our model is not so good at recognizing entity boundaries especially while encountering some special tokens such as articles, genitive “s” and quotation mark. Actually, it is not easy for human to define an entity in these cases as well. At last, $Err_f$ of our best model is 13.14% in validation set, 15.53% in full test set and 36.92% in hard test subset. We inspect the output of our model in order to identify the causes of errors. The entity error takes up 20.43% not only because of wrong entities but also right entities in wrong order. 79.57% of incorrect samples have wrong predicates although their entities are right. Our accuracy is extremely low for multi-choice. We look into this class and find 50.72% of errors are because of right entities with wrong order. Actually, there are three different entities in sketch of multi-choice class and two of them are semantically exchangeable in the form $( or ( equal\ ?x\ E_1 ) ( equal\ ?x\ E_2 ) )$. So it is not easy for our pattern pair matching network to deal with this problem. 
In the meantime, our model achieves an error rate of 0% for 3 classes on the validation set. Our system is compared with those of other teams in NLPCC 2019 Shared Task 2. The top 4 results are shown in Table TABREF17. Our system on the submission list is $WLIS$, which achieves 3rd place. After some optimization of the parameters, the seq2seq network structure and the sampling, the performance of our new system has improved a lot. The accuracy of our new baseline reaches 77.42%. By incorporating the two auxiliary scores, the accuracy is improved to 86.86% on the validation set. Accuracy reaches 84.47% on the full test set and 63.08% on the hard test subset. Our accuracy on the full test set surpasses the 2nd place but is still 1.21% lower than the 1st place, while the accuracy on the hard subset is 5.65% higher than that of the 1st place. Related Work Semantic parsing is a long-standing problem in NLP mapping natural language utterances to logical formsBIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Since it is not easy to label data manually for semantic parsing, reinforcement learningBIBREF17 and transfer learningBIBREF18, BIBREF19 are applied when data is not enough. But in most cases, we study how to improve the results when enough data is available for supervised learning. The basic seq2seq networkBIBREF20 enables the model to be trained in an end-to-end mode. Later, structure-aware models were designed to generate logical forms more elaborately. Seq2treeBIBREF4 is equipped with a tree-structured decoder to parse hierarchical logical forms while STAMPBIBREF1 adopts a switching gate in the decoder to control the generation of SQL. The models mentioned above all generate the whole logical form in one go. There are also some works that apply a sketch-based approach to solve the problem. It has already been explored in the field of program synthesisBIBREF21. Coarse2fineBIBREF7 decomposes the decoding process into 2 stages. Sketches are generated in the first stage while the model in the second stage fills in the missing details. SQL generation is especially suitable for this method because of its simple sketches. SqlnetBIBREF22 divides the task into 6 subtasks to generate different parts of the SQL query. SQLovaBIBREF23 also inherits this idea and incorporates BertBIBREF9 in its model. The idea of our system is similar to that of SQLova. We do not use complex decoders to make our network structure-aware. The architectures of the models are simple in every stage. We first determine sketches as the high-level structure. Low-level details are added in later stages. The losses of the seq2seq network are applied here to rerank from an overall perspective. So we actually combine the seq2seq method and the sketch-based method to some extent. Conclusion In this paper, we presented a sketch-based system for semantic parsing which disentangles high-level structures from low-level details. Due to the absence of a knowledge base, we propose to collect question patterns and logical form patterns to capture the implicit relationship between questions and predicates, which can then be used to perform reranking together with a Pointer network within a seq2seq framework. Our previously submitted system achieves 3rd place while our new system outperforms the 1st place in accuracy on the hard test subset. Since the knowledge base will be released later, in future work we would like to incorporate new knowledge to improve our system. We will extend our system to other semantic parsing tasks as well.
Acknowledgements This work is supported in part by the NSFC (Grant No.61672057, 61672058, 61872294), the National Hi-Tech R&D Program of China (No. 2018YFB1005100). For any correspondence, please contact Yansong Feng.
Yes
94c5f5b1eb8414ad924c3568cedd81dc35f29c48
94c5f5b1eb8414ad924c3568cedd81dc35f29c48_0
Q: What is the difference between the full test set and the hard test set? Text: Introduction Open domain semantic parsing aims to map natural language utterances to structured meaning representations. Recently, seq2seq based approaches have achieved promising performance by structure-aware networks, such as sequence-to-actionBIBREF0 and STAMPBIBREF1. However, this kind of approach mixes up low-level entities, predicates and high-level structures together, which loses precision at each level to some extent. So the sketch-based method may be an another choice for disentangling high-level structures from low-level details. In this work, we conduct our sketch-based approach on MSParS, a large hand-annotated semantic dataset mapping questions to logical forms. We argue there are at least two advantages to sketch-based method. Firstly, basic attention based seq2seq networkBIBREF2, BIBREF3 does not perform well in semantic parsing because logical forms are structured sequences and it fails to incorporate structure information of logical forms. Then sequence-to-tree(seq2tree)BIBREF4 proposes a structure-aware decoder to utilize the information. But its architecture also becomes much more complex. Instead of using intricate decoders, we can extract high-level sketches for logical forms and classify samples into several sketch classes. Logical forms of a certain sketch class have a fixed pattern which is shown in Table TABREF2. So the structure problem is finally simplified to a classification task. Secondly, logical forms often need to copy a span of questions. Although CopynetBIBREF5 and PointerBIBREF6 implement the copy mechanism, it is still difficult to achieve the expected effect. But for the sketch-based method, this problem becomes an individual entity labeling task which is easier than generating entities. Generally speaking, the seq2seq way decodes the entire meaning representation in one go while we deal with different parts at different levels of granularity just like coarse2fineBIBREF7. Although we increase the number of stages, the network architecture of each stage is much simpler without sacrificing the accuracy. In this way, we are able to locate the errors and optimize according parts. We propose to decompose the process into three stages. In the first stage, we deal with a sketch classification task. Then, we find the entities in the questions through an entity labeling task. Actually, we combine the two stages through the multi-task model for both accuracy and efficiencyBIBREF8. The last stage is the most difficult part since the knowledge base of MSParS is not available. We define question pattern-logical form pattern pair and use the matching network to rank all these pairs. Seq2seq based approach is one of the two methods we adopted here to help rescore on the whole. We also incorporate state-of-art pre-trained work, BertBIBREF9, in above tasks to incorporate more priori knowledge. The error rate of our multi-task model is lower than 2%, which ensures the right sketch and entities. So the last stage actually determines the accuracy to a large extent. Our accuracy achieves 77.42% after above three stages. Seq2seq based approach and co-occurrence relationship improve the accuracy to 86.86% in validation set. Our final accuracy in full test set reaches 84.47%. And the accuracy on hard test subset has been promoted to 63.08% finally which is higher than the best model on the submission list by 5.65%. 
In the rest of our paper, we first analyze the special features of MSParS for this task in section 2. Afterwards, we discuss our system in detail in section 3. Then in section 4, we demonstrate our experimental setup, results and analysis. Related works are mentioned in section 5. Finally, we conclude the whole paper and propose our future work. Data Analysis The dataset MSParS is published by the NLPCC 2019 evaluation task. The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of them are used as the training set, 10% of them are used as the validation set, while the rest is used as the test set. 3000 hard samples are selected from the test set. The metric for this dataset is the exact matching accuracy on both the full test set and the hard test subset. Each sample is composed of the question, the logical form, the parameters (entity/value/type) and the question type, as Table TABREF3 demonstrates. Samples are originally classified into 12 classes at a coarse level, while we reclassify them at a finer level, which is the basis of our sketch-based method. We replace the predicate in the triple with $P_i$, the entity in the triple with $E_i$, and distinguish different ones with subscripts. The number in the superlative and comparative classes is replaced with $V$, while the type in triples beginning with the special predicate “isa" is replaced with $T$ as well. In this way, we get the sketch of the logical form. Finally, we produce 15 classes of sketches. We believe the features of the questions highly correlate with the sketch of the logical forms. For instance, the sketch must begin with “argmore" or “argless" if there are comparative words such as “higher", “more" and “before" in the question. Therefore, we take questions as input to classify samples into different sketch classes. As Table TABREF3 suggests, entities are concatenated tokens from the question. So we implement entity labeling to label every token in the questions. Nonetheless, cases are tough when there is more than one entity in the logical form. Suppose that we have labeled $E_1$ and $E_2$ from the question. We do not know which one we should choose to fill in the first entity slot in the sketch. We solve this problem and pick out the suitable predicate simultaneously. The entities in the questions are replaced by the label “entity” with subscripts indicating the order in which they appear in the question, to get question patterns. When it comes to logical form patterns, the entities in the logical forms are substituted as well, while the predicates are split into smaller tokens. Table TABREF4 gives an example of these two patterns. In this way, we combine the entity collocations with the predicates successfully. Another reason for using the label “entity” here is generalization. For instance, “what is birth date for barack obama" shares the same question pattern “what is birth date for entity1" with “what is birth date for donald trump". The predicate used in these logical forms is “mso:people.person.date_of_birth”. So we can draw the conclusion that the predicate for this question pattern is likely to be “mso:people.person.date_of_birth”. If “what is birth date for george bush" appears in the test set, we are able to find the right predicate even if we have not seen “george bush” before. Without the impact of specific entities, our model learns the mapping from question patterns to logical form patterns more accurately. Since we do not have a knowledge base, we can only extract logical form patterns from the training set.
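As a rough illustration of this pattern extraction (not the authors' code), the sketch below turns an entity-annotated question into a question pattern and reduces a logical form to its sketch; the token conventions, the `mso:` predicate prefix test and the small keyword set are assumptions that mirror the description above:

```python
import re

def question_pattern(question, entities):
    """Replace each annotated entity span with 'entity<i>' in order of appearance."""
    pattern = question
    for i, ent in enumerate(entities, start=1):
        pattern = pattern.replace(ent, "entity%d" % i, 1)
    return pattern

def logical_form_sketch(logical_form):
    """Abstract away low-level details: predicates -> P_i, entities -> E_i,
    numbers -> V, keeping the bracketed structure of the logical form."""
    tokens, sketch, p_idx, e_idx = logical_form.split(), [], 0, 0
    for tok in tokens:
        if tok.startswith("mso:"):                    # predicate
            p_idx += 1
            sketch.append("P%d" % p_idx)
        elif re.fullmatch(r"-?\d+(\.\d+)?", tok):     # number
            sketch.append("V")
        elif tok in {"(", ")", "lambda", "?x", "count", "argmax", "argmin"}:
            sketch.append(tok)                        # structural keyword
        else:                                         # anything else: entity
            e_idx += 1
            sketch.append("E%d" % e_idx)
    return " ".join(sketch)

print(question_pattern("what is birth date for barack obama", ["barack obama"]))
print(logical_form_sketch("( lambda ?x ( mso:people.person.date_of_birth barack_obama ?x ) )"))
```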
And we find 90.34% of the logical form patterns in the validation set are covered by those in the training set, which ensures the feasibility of our method. We take question patterns paired with logical form patterns as input. Then, we get logical form candidates by combining the sketches and entities with the logical form patterns. The ones with higher scores are more likely to be right. Proposed Approach ::: Sketch Classification The single-sentence classification fine-tuning task in Bert is applied in this stage. A special classification embedding ([CLS]) is added to the beginning. We use the final hidden state corresponding to this token as the aggregate sequence representation for the classification task, denoted as $C_s \in \mathbb {R}^h$, so the probability of class $c_i$ can be computed as $P(c_i|C_s)=\mathrm {softmax}(W_s C_s + b_s)_i$, where $W_s \in \mathbb {R}^{k_s \times h}$ and $b_s \in \mathbb {R}^{k_s}$, and $k_s$ is the number of sketch classes here. $W_s$, $b_s$ and all the parameters of Bert are fine-tuned jointly to maximize the log likelihood probability of the correct label. Proposed Approach ::: Entity Labeling We use the single-sentence tagging fine-tuning task in Bert here to label every token in the question according to whether it is an entity token that appears in the logical form as well. To simplify the problem, we use 3 labels for the tokens in the questions. Label “b" represents the first token of an entity while label “i” is for the remaining ones. Label “o” represents those tokens which are not in any entity. Because of the lexical rules in Bert, we also label the special token ([CLS]) at the beginning of the sentence and the special token ([SEP]) at the end of the sentence as “o”. The last label “p” is for all the padding tokens added to reach max_length. Besides, some tokens in the questions are split into several smaller tokens by Bert. The split ones are labeled as “i” if they are in an entity and “o” otherwise. In this stage, we use all the final hidden states, denoted as $D \in \mathbb {R}^{h \times m}$ where $m$ is the max_length of the input tokens we set. The hidden states are mapped into dimension $k_e$ via $E = W_eD + b_e$ where $W_e \in \mathbb {R}^{k_e \times h}$ and $b_e \in \mathbb {R}^{k_e \times m}$, and $k_e$ is the number of labels here. We employ a CRF on top of the network, taking $E$ as the input representations. The objective is to minimize the loss of the CRF layer. Proposed Approach ::: Multi-Task Model We combine sketch classification and entity labeling to share information, which means the sketches of samples can help label entities while the labeled entities can in turn help sketch classification. The architecture of our model is shown in Fig. FIGREF9, where the parameters of the Bert model are fine-tuned jointly for the two tasks. Since the scale of the dataset is large, we save a lot of time with the multi-task model instead of training two different models. Finally, it contributes to both accuracy and efficiency. In this way, the loss we minimize is the weighted sum of the cross-entropy loss of the sketch classification task and the CRF loss of the entity labeling task. Proposed Approach ::: Pattern Pair Matching Network Besides the single-sentence tasks, Bert provides sentence pair classification tasks as well. We implement the matching network taking question patterns and logical form patterns as input. The right pattern pairs are regarded as positive samples. We select negative samples only from the logical form patterns in the same sketch class for fixed question patterns.
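To make the construction of these pattern pairs concrete, here is a minimal sketch of how the positive and negative training pairs could be assembled (plain Python; the record layout, the toy sketch-class name and the negative-sampling count are assumptions, while the same-sketch-class constraint follows the description above):

```python
import random
from collections import defaultdict

def build_pattern_pairs(examples, negatives_per_question=5, seed=0):
    """examples: list of dicts with keys 'sketch', 'question_pattern' and
    'lf_pattern' (the gold logical form pattern).  Positive pairs are the
    gold combinations; negatives pair the question pattern with other
    logical form patterns drawn only from the same sketch class."""
    rng = random.Random(seed)
    patterns_by_sketch = defaultdict(set)
    for ex in examples:
        patterns_by_sketch[ex["sketch"]].add(ex["lf_pattern"])
    pairs = []
    for ex in examples:
        pairs.append(((ex["question_pattern"], ex["lf_pattern"]), 1))
        candidates = list(patterns_by_sketch[ex["sketch"]] - {ex["lf_pattern"]})
        for lf in rng.sample(candidates, min(negatives_per_question, len(candidates))):
            pairs.append(((ex["question_pattern"], lf), 0))
    return pairs

toy = [{"sketch": "single-relation", "question_pattern": "what is birth date for entity1",
        "lf_pattern": "( lambda ?x ( people person date_of_birth entity1 ?x ) )"},
       {"sketch": "single-relation", "question_pattern": "what is birth place for entity1",
        "lf_pattern": "( lambda ?x ( people person place_of_birth entity1 ?x ) )"}]
print(len(build_pattern_pairs(toy)))
```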
The sketch mentioned is from the multi-task model. Just like sketch classification, we denote the final hidden state corresponding to the token ([CLS]) as $C_p \in \mathbb {R}^h$, so the probability can be computed as $P(c_j|C_p)=\mathrm {softmax}(W_p C_p + b_p)_j$, where $W_p \in \mathbb {R}^{2 \times h}$, $b_p \in \mathbb {R}^{2}$ and $c_j \in \lbrace 0, 1\rbrace $. $W_p$, $b_p$ and all the parameters of Bert are fine-tuned jointly to maximize the log likelihood probability of the correct class. In the prediction stage, the candidates for a question pattern are also drawn from the logical form patterns in the same sketch class. The probabilities of class “1" are the scores we obtain for these pattern pairs. From logical form patterns, we get not only the right predicates but also the right order in which entities should appear. So with the sketch and entities we acquire in the multi-task model, we can already generate complete logical form candidates with scores between 0 and 1. Proposed Approach ::: Predicate-Entity Pair Matching Network To alleviate the absence of a knowledge base, we incorporate the co-occurrence relationship between predicates and entities to evaluate the candidates. We create the second matching network based on Bert as well. This time, the pairs we take as input are predicate-entity ones. We label a predicate-entity pair as “1” if the two have ever appeared together in a triple in the training set. For a certain entity, we select predicates that never appear with this entity as negative samples. In the prediction stage, we score the predicate-entity pairs in the logical form candidates. However, this network does not take questions into account. The predicate for a certain entity can differ a lot according to various questions. For instance, the predicate for “what is birth date for barack obama" is apparently different from that for “what is birth place for barack obama". But the entity “barack obama" has only one predicate with the highest score. Although this matching network only considers the co-occurrence relationship and ignores the information from the questions, the scores it produces still serve as a useful auxiliary signal. Proposed Approach ::: Pointer Network Although it is not easy for a complex network to generate the whole logical form, such networks do reflect the mapping from an overall perspective. So we adopt PointerBIBREF6 here to rerank. We take the questions as input. For logical forms, entities and predicates are composed of words concatenated by “_” or “.”. In order to utilize the information of the words, we split all entities and predicates and take the split logical form candidates as output. For a fixed question, we calculate the cross-entropy losses of the different pairs with split logical forms. Then every loss is divided by the largest one and subtracted from 1, so that the scores are normalized between 0 and 1. The higher the score, the more likely the logical form candidate is to be correct. Proposed Approach ::: Ranking A linear combination of the three intermediate scores from the pattern pair matching network, the predicate-entity pair matching network and Pointer is used to rerank the logical form candidates. The weights are roughly tuned on the validation set. Experiment The number of sketch classes is 15 and the number of labels is 4 in the multi-task model. The Bert model we applied is “BERT-Base, Uncased" with 12-layer, 768-hidden, 12-heads and 110M parameters. All the hyperparameters are tuned on the validation set. In the multi-task model, we train the model for 10 epochs. We set the batch size to 32 and the learning rate to 2e-5.
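The multi-task model referred to throughout this section can be pictured as two light heads on a shared encoder. The snippet below is a simplified PyTorch-style illustration, not the authors' implementation: the CRF layer is replaced by a plain token-level cross-entropy for brevity, the encoder is assumed to be any Bert-like module returning per-token hidden states, and the loss weights follow the values reported in the experiments.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Joint sketch classification (from the [CLS] state) and entity
    labeling (from the per-token states) on top of a shared encoder."""
    def __init__(self, hidden=768, num_sketches=15, num_tags=4,
                 w_sketch=1.0, w_entity=2.0):
        super().__init__()
        self.sketch_clf = nn.Linear(hidden, num_sketches)
        self.tagger = nn.Linear(hidden, num_tags)
        self.w_sketch, self.w_entity = w_sketch, w_entity
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks padding

    def forward(self, hidden_states, sketch_label, tag_labels):
        # hidden_states: (batch, seq_len, hidden); position 0 is [CLS]
        sketch_logits = self.sketch_clf(hidden_states[:, 0])
        tag_logits = self.tagger(hidden_states)
        loss = (self.w_sketch * self.ce(sketch_logits, sketch_label)
                + self.w_entity * self.ce(tag_logits.transpose(1, 2), tag_labels))
        return loss, sketch_logits, tag_logits

# toy forward pass with random "encoder" outputs
head = MultiTaskHead()
states = torch.randn(2, 16, 768)
loss, _, _ = head(states, torch.tensor([3, 7]),
                  torch.full((2, 16), 2, dtype=torch.long))
print(loss.item())
```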
The weight of the loss in sketch classification is 1 while that in entity labeling is 2. We train 3 models in the pattern pair matching network with different numbers of epochs. As for the predicate-entity pair matching network, the number of epochs we use is 3. In Pointer, the word embeddings are initialized with GloveBIBREF10. The hidden dimension of the LSTM is set to 256. More details will be released with our source code later. Because the performance of a neural network is unstable across training epochs, ensemble learning is incorporated in both the pattern pair matching network and Pointer. The scores of Pointer are the simple average of the scores from 3 models trained for different numbers of epochs. When it comes to the pattern pair matching network, it is a little more complex. We make predictions on the training set with our “best" model and apply ranking sampling here: from the pairs labeled as “0” but with probabilities larger than 0.0001 we select 20, and from those whose probabilities are smaller than 0.0001 we select 5, as new negative samples. We train new models with training data resampled before every epoch, based on one “best" model and the base model of Bert. After several epochs, we average the probabilities of the new models and the original models for the ensemble. We demonstrate the detailed performance in Table TABREF16. All samples are classified into 15 classes. We show the results for every class and the overall weighted average performance on the validation set. Because the complete test set is not open yet, we only provide the overall results on the test set returned after submission. It can be seen that the overall error rate of our multi-task model is only 1.93%, which means this stage is handled successfully. In sketch classification, the $Err_s$ scores of all classes are lower than 1% except multi-turn-answer. Its recall is 100.00% while its precision is 91.38%. 0.92% of the samples in multi-turn-entity are misclassified as multi-turn-answer in the validation set. We find there is a separator “$|||$” in the logical forms of the three classes of multi-turn questions. Multi-turn-predicate questions have two different entities while both multi-turn-entity and multi-turn-answer questions have only one. This kind of entity information is passed to sketch classification through the shared parameters, so our system makes some mistakes when distinguishing multi-turn-entity samples from multi-turn-answer samples. As for entity labeling, the overall error rate is 1.72%. We check the wrong samples and find our model is not so good at recognizing entity boundaries, especially when encountering special tokens such as articles, the genitive “s” and quotation marks. Actually, it is not easy for humans to delimit an entity in these cases either. Finally, $Err_f$ of our best model is 13.14% on the validation set, 15.53% on the full test set and 36.92% on the hard test subset. We inspect the output of our model in order to identify the causes of errors. Entity errors account for 20.43%, caused not only by wrong entities but also by right entities in the wrong order. 79.57% of the incorrect samples have wrong predicates although their entities are right. Our accuracy is extremely low for multi-choice. We look into this class and find that 50.72% of the errors are due to right entities in the wrong order. Actually, there are three different entities in the sketch of the multi-choice class and two of them are semantically exchangeable in the form $( or ( equal\ ?x\ E_1 ) ( equal\ ?x\ E_2 ) )$. So it is not easy for our pattern pair matching network to deal with this problem.
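Returning briefly to the ensemble described earlier in this section, the averaging step itself is straightforward; a minimal sketch (plain Python; the score layout is an assumption) is:

```python
def ensemble_scores(per_model_scores):
    """per_model_scores: list over models, each a list of candidate scores
    in the same candidate order.  Returns the element-wise average."""
    n_models = len(per_model_scores)
    return [sum(scores) / n_models for scores in zip(*per_model_scores)]

# three Pointer models scoring the same four candidates
print(ensemble_scores([[0.9, 0.2, 0.5, 0.1],
                       [0.8, 0.3, 0.6, 0.2],
                       [0.7, 0.1, 0.4, 0.3]]))
```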
In the meantime, our model achieves an error rate of 0% for 3 classes on the validation set. Our system is compared with those of other teams in NLPCC 2019 Shared Task 2. The top 4 results are shown in Table TABREF17. Our system on the submission list is $WLIS$, which achieves 3rd place. After some optimization of the parameters, the seq2seq network structure and the sampling, the performance of our new system has improved a lot. The accuracy of our new baseline reaches 77.42%. By incorporating the two auxiliary scores, the accuracy is improved to 86.86% on the validation set. Accuracy reaches 84.47% on the full test set and 63.08% on the hard test subset. Our accuracy on the full test set surpasses the 2nd place but is still 1.21% lower than the 1st place, while the accuracy on the hard subset is 5.65% higher than that of the 1st place. Related Work Semantic parsing is a long-standing problem in NLP mapping natural language utterances to logical formsBIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Since it is not easy to label data manually for semantic parsing, reinforcement learningBIBREF17 and transfer learningBIBREF18, BIBREF19 are applied when data is not enough. But in most cases, we study how to improve the results when enough data is available for supervised learning. The basic seq2seq networkBIBREF20 enables the model to be trained in an end-to-end mode. Later, structure-aware models were designed to generate logical forms more elaborately. Seq2treeBIBREF4 is equipped with a tree-structured decoder to parse hierarchical logical forms while STAMPBIBREF1 adopts a switching gate in the decoder to control the generation of SQL. The models mentioned above all generate the whole logical form in one go. There are also some works that apply a sketch-based approach to solve the problem. It has already been explored in the field of program synthesisBIBREF21. Coarse2fineBIBREF7 decomposes the decoding process into 2 stages. Sketches are generated in the first stage while the model in the second stage fills in the missing details. SQL generation is especially suitable for this method because of its simple sketches. SqlnetBIBREF22 divides the task into 6 subtasks to generate different parts of the SQL query. SQLovaBIBREF23 also inherits this idea and incorporates BertBIBREF9 in its model. The idea of our system is similar to that of SQLova. We do not use complex decoders to make our network structure-aware. The architectures of the models are simple in every stage. We first determine sketches as the high-level structure. Low-level details are added in later stages. The losses of the seq2seq network are applied here to rerank from an overall perspective. So we actually combine the seq2seq method and the sketch-based method to some extent. Conclusion In this paper, we presented a sketch-based system for semantic parsing which disentangles high-level structures from low-level details. Due to the absence of a knowledge base, we propose to collect question patterns and logical form patterns to capture the implicit relationship between questions and predicates, which can then be used to perform reranking together with a Pointer network within a seq2seq framework. Our previously submitted system achieves 3rd place while our new system outperforms the 1st place in accuracy on the hard test subset. Since the knowledge base will be released later, in future work we would like to incorporate new knowledge to improve our system. We will extend our system to other semantic parsing tasks as well.
Acknowledgements This work is supported in part by the NSFC (Grant No.61672057, 61672058, 61872294), the National Hi-Tech R&D Program of China (No. 2018YFB1005100). For any correspondence, please contact Yansong Feng.
3000 hard samples are selected from the test set
ba05a53f5563b9dd51cc2db241c6e9418bc00031
ba05a53f5563b9dd51cc2db241c6e9418bc00031_0
Q: How is the discriminative training formulation different from the standard ones? Text: Introduction The cocktail party problem BIBREF0 , BIBREF1 , referring to multi-talker overlapped speech recognition, is critical to enable automatic speech recognition (ASR) scenarios such as automatic meeting transcription, automatic captioning for audio/video recordings, and multi-party human-machine interactions, where overlapping speech is commonly observed and all streams need to be transcribed. The problem is still one of the hardest problems in ASR, despite encouraging progresses BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this paper, we address the speech recognition problem when multiple people speak at the same time and only a single channel of overlapped speech is available. This is useful when only a single microphone is present, or when microphone array based algorithms fail to perfectly separate the speech. Specifically, the paper focuses on an unsupervised inference method, which does not need any prior knowledge of speakers. To obtain transcriptions of all speakers from the overlapped speech, joint inference is conducted based on multiple knowledge sources: frequency domain voice discrimination, temporal speaker tracing, linguistic information and speech recognition. Prior work in unsupervised single-channel overlapped speech recognition generally separates the problem into speech separation and recognition stages. Before the deep learning era, the most popular speech separation technique is computational auditory scene analysis (CASA) BIBREF2 . There are two main stages in CASA approaches: segmentation and grouping. The segmentation stage decomposes mixed speech into time-frequency segments assumed to be derived from the corresponding speakers based on perceptual grouping cues BIBREF6 . The grouping stage simultaneously and sequentially concatenates the segments to generate independent streams for each speaker. Non-negative matrix factorization (NMF) BIBREF7 is another popular technique which aims to learn a set of non-negative bases that can be used to estimate mixing factors during evaluation. Recently, several deep learning based techniques have been proposed but seldom concentrate on the unsupervised case, which is more applicable. In BIBREF8 , BIBREF9 , BIBREF10 , the authors propose deep clustering (DPCL), in which a deep network is trained to produce spectrogram embeddings that are discriminative for partition labels given in training data. The model is optimized so that in the neural network embedding space the time-frequency bins belonging to the same speaker are closer and those of different speakers are farther away. Speech segmentations are therefore implicitly encoded in the embeddings, and can be obtained by clustering algorithm. In BIBREF11 , a DNN-based gender mixture detection system and three gender-dependent speech separation systems are constructed. The latter ones directly infer the feature streams of two speakers respectively. For all these methods, speech separation and recognition are two separate components and the latter is applied to the separated feature streams. The mismatched feature in the speech recognition stage is one of the limitation in these methods. In light of permutation invariant training (PIT) proposed in speech separation BIBREF12 originally, the PIT-ASR model BIBREF13 is the first attempt in joint modeling of unsupervised single-channel mixed speech recognition. 
Whereas the original PIT technique jointly models the voice discrimination and speaker tracing, PIT-ASR further integrates speech recognition into the neural network with a unified cross entropy (CE) criterion. Although PIT-ASR shows promising results, it suffers from several disadvantages, which are analyzed in Section "Unsupervised Single-channel Overlapped Speech Recognition" . In this paper, progressive joint modeling is proposed to divide the single-channel overlapped speech recognition problem into three sub-problems for initialization: frame-wise interpreting, speaker tracing and speech recognition (Figure 1 ). Each module is initialized by placing it into a series of networks that solve progressively more difficult problems. After the initialization, the modules are jointly trained with two novel strategies, namely self-transfer learning and multi-output sequence discriminative training. Transfer learning is introduced for this problem, which leverages parallel clean speech to improve the training targets for the network. Our discriminative training formulation is a modification of standard formulations that also penalizes competing outputs of the system. The proposed framework achieves 30% relative improvement over both a strong jointly trained system, PIT-ASR, and a separately optimized system, PIT for speech separation with clean speech ASR. The improvement comes from better model generalization, training efficiency and the integration of sequence level linguistic knowledge. The rest of the paper is organized as follows. In Section "Unsupervised Single-channel Overlapped Speech Recognition" , the unsupervised single-channel overlapped speech recognition problem is briefly reviewed. In Section "Modularization" , the modular initialization and progressive joint training are proposed. In Section "Transfer Learning Based Joint Training" , self-transfer learning is proposed, and in Section "Multi-output Sequence Discriminative Training" , multi-output sequence discriminative training is proposed. In Section "Experiment" , the experimental results are reported on the artificially overlapped Switchboard corpus and the Eval2000 hub5e-swb test set, followed by the conclusion in Section "Conclusion" . Unsupervised Single-channel Overlapped Speech Recognition Unsupervised single-channel overlapped speech recognition refers to the speech recognition problem when multiple unseen talkers speak at the same time and only a single channel of overlapped speech is available. Different from the supervised mode, there is no prior knowledge of the speakers in the evaluation stage. In this problem, only the linearly overlapped single-channel signal is known, which is defined as, $$\begin{split} \mathbf {O}_{u}^{(m)}=\sum _{n=1}^N \mathbf {O}_{un}^{(r)} \end{split}$$ (Eq. 4) where $\mathbf {O}_{un}^{(r)}$ is the clean signal stream of speaker $n$ at utterance $u$ and $\mathbf {O}_{u}^{(m)}$ is the overlapped speech stream of utterance $u$ . $N$ is the number of streams. Single channel ASR is always formulated as a supervised sequence labeling problem given by $P(\mathbf {L}_u|\mathbf {O}_{u})$ , which is the posterior probability of the transcription sequence $\mathbf {L}_u$ given the feature sequence $\mathbf {O}_{u}$ in utterance $u$ . Nevertheless, the multi-speaker problem is to model the joint distribution of $N$ streams of transcriptions given the overlapped speech signal, $P(\mathbf {L}_{u1},...,\mathbf {L}_{uN}|\mathbf {O}_{u}^{(m)})$ . Due to the symmetric labels given the mixture signals, it is no longer a supervised optimization problem.
One branch of methods assumes the inference streams are conditionally independent, and tries to assign the correct transcription stream $\mathbf {L}_{un}^{(r)}$ to the corresponding output stream $n$ , $$\begin{split} P(\mathbf {L}_{u1},...,\mathbf {L}_{uN}|\mathbf {O}_{u}^{(m)}) \approx \prod _{n=1}^N P(\mathbf {L}_{un}^{(r)}|\mathbf {O}_{u}^{(m)}) \end{split}$$ (Eq. 5) Another branch of methods assumes the overlapped signal can be separated into $\mathbf {O}_{un}^{(c)}\mathop {:} \mathbf {O}_{un}^{(c)}\approx \mathbf {O}_{un}^{(r)}$ . Because the original streams $\mathbf {O}_{un}^{(r)}$ are conditionally independent of each other, the separated signal streams $\mathbf {O}_{un}^{(c)}$ are also assumed to be conditionally independent. Thus Equation ( 5 ) can be rewritten as Equation ( 6 ), $$\begin{split} P(\mathbf {L}_{u1},...,\mathbf {L}_{uN}|\mathbf {O}_{u}^{(m)}) \approx \prod _{n=1}^N P(\mathbf {L}_{un}^{(r)}|\mathbf {O}_{un}^{(c)}) \end{split}$$ (Eq. 6) However, neither assumption is precise. For the first assumption, there is no pre-determined method to obtain the ideal label arrangement, which is called the speaker tracing problem. The second assumption is that speech separation and recognition are independent processes, which introduces an artificial information bottleneck. In BIBREF12 , the reference streams are treated as an unordered set. The PIT framework is proposed to address the speech separation problem by first determining the assignment of the reference streams to the inference streams that minimizes the error at the utterance level based on the forward-pass result. This is followed by minimizing the error given the utterance level best assignment. BIBREF13 extends this by integrating speech recognition into the neural network with a unified cross-entropy (CE) training criterion. $$\begin{split} \mathcal {J}_{\text{CE-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \sum _t \frac{1}{N} \sum _{n\in [1,N]} CE({l}_{utn}^{(s^{\prime })},{l}_{utn}^{(r)}) \end{split}$$ (Eq. 7) Here, $\mathbf {S}$ is the permutation set of the reference representation and the inference representation. ${l}_{utn}^{(s^{\prime })}$ is the $n$ -th inference label of permutation $s^{\prime }$ at frame $t$ in utterance $u$ and ${l}_{utn}^{(r)}$ is the corresponding transcription label obtained by clean speech forced-alignment BIBREF14 . The PIT-ASR criterion BIBREF13 elegantly integrates speech separation, speaker tracing and speech recognition together as in Figure 2 (a). Its joint modeling approach eliminates the artificial bottleneck between the speech separation and speech recognition tasks. But the method suffers from several disadvantages which deteriorate the performance. Methods In this work, we propose three separate enhancements to improve the performance of PIT-ASR. First, the structure and accuracy of the model are improved through modularization and pretraining. Frame-wise interpreting, speaker tracing, and speech recognition modules replace the monolithic structures used in previous work. These modules are progressively pretrained and jointly fine-tuned. Second, we demonstrate a natural way to incorporate a form of transfer learning. Clean speech features are used to generate soft label targets which are interpolated with the reference label alignments. Third, multi-output discriminative training is applied to the system. As with single-stream speech recognition, multi-stream discriminative training can help with model generalization.
Additionally, the objective function is augmented to reduce cross-speaker word assignment errors. Modularization In the original formulation, a PIT-ASR model consists of a single monolithic structure that predicts independent targets for each speaker. We improve this by replacing the main network structure with a modular structure, shown in Figure 1 . This modular structure consists of three tasks, namely interpreting mixed acoustic data, tracing speakers across time, and predicting acoustic label sequences. First, the frame-wise module is designed to extract the local time-frequency information necessary to separate the overlapped speech into individual acoustic representations. It is entirely local and does not depend on sequence-level information. Second, the speaker tracing module accepts frame-wise acoustic representations from the frame-wise module and traces the speaker information. This process concatenates adjacent acoustic representations of the same speaker together to infer the recovered speech features of each speaker. Third, the speech recognition modules accept the sequences of recovered acoustic features from each speaker, and produce sequences of label scores suitable for use in an automatic speech recognition system. Because each speech recognition module performs the same task, it is natural to share the parameters of this module across each instance in the final model. Although it is possible to train the modularized network of Figure 2 (e) from random initialization, it is better to use a progressive training strategy. This strategy is motivated by the curriculum learning theory in BIBREF15 , and integrates both modular initialization and joint training. We train a simple model first, and then use it as a pre-trained building block for a more complicated model and task. Figures 2 (b)-(e) illustrate how the model becomes progressively more complex while solving more difficult problems, from frame-wise mean squared error to whole utterance cross entropy. Our simplest model, shown in Figure 2 (b), is trained to solve a frame-wise speech separation task. For $N$ speakers, given the mixed data $\mathbf {O}_{u}^{(m)}$ , the model infers an acoustic representation ${o}_{utn}$ for each speaker $n$ at frame $t$ of utterance $u$ . The objective function of the frame-wise training is given as $$\begin{split} \mathcal {J}_{\text{F-PIT}}=\sum _u\sum _t \frac{1}{N}\min _{s^{\prime }\in \mathbf {S}} \sum _{n\in [1,N]} MSE({o}_{utn}^{(s^{\prime })},{o}_{utn}^{(r)}) \end{split}$$ (Eq. 12) where $\mathbf {S}$ is the permutation set of the reference representation and the inference representation. ${o}_{utn}^{(s^{\prime })}$ and ${o}_{utn}^{(r)}$ are the frame level acoustic representations of permutation $s^{\prime }$ and of the reference clean speech, respectively. In each frame $t$ of the utterance $u$ , the overall minimum square error, $MSE$ , is obtained by comparing all the reference and inference representations of each permutation $s^{\prime }$ . The architecture for pre-training the speaker tracing module is explained in Figure 2 (c). The tracing module is combined with a pre-trained frame-wise module that has had its $N$ output layers removed. As in BIBREF12 , the PIT objective function is applied at the utterance level. $$\begin{split} \mathcal {J}_{\text{U-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \sum _t \frac{1}{N} \sum _{n\in [1,N]} MSE({o}_{utn}^{(s^{\prime })},{o}_{utn}^{(r)}) \end{split}$$ (Eq. 13)
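A compact way to realize these permutation-invariant objectives is sketched below (plain Python with numpy; the shapes, the toy data and the generic pairwise loss are assumptions, not the authors' code). The sketch implements the utterance-level variant of Equation (13); the frame-wise objective of Equation (12) simply moves the minimum inside a per-frame loop.

```python
import itertools
import numpy as np

def pit_utterance_loss(outputs, references, pairwise_loss):
    """outputs, references: lists of N arrays, each (T, D) for one stream.
    pairwise_loss(o, r) returns a scalar loss over the whole utterance.
    The assignment of references to outputs is chosen to minimize the
    total loss over the utterance (utterance-level PIT)."""
    n = len(outputs)
    best = None
    for perm in itertools.permutations(range(n)):
        total = sum(pairwise_loss(outputs[i], references[perm[i]])
                    for i in range(n)) / n
        if best is None or total < best[0]:
            best = (total, perm)
    return best  # (loss, best permutation)

def mse(o, r):
    return float(np.mean((o - r) ** 2))

# toy example: two output streams vs. two (swapped) reference streams
rng = np.random.default_rng(0)
ref = [rng.normal(size=(50, 80)), rng.normal(size=(50, 80)) + 3.0]
out = [ref[1] + 0.1 * rng.normal(size=(50, 80)), ref[0]]
loss, perm = pit_utterance_loss(out, ref, mse)
print(loss, perm)   # the best permutation swaps the streams: (1, 0)
```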
The speech recognition module is separately pretrained in the same way as a conventional acoustic model, with clean speech and a cross-entropy objective function, maximizing $p(\mathbf {L}_u|\mathbf {O}_{u})$ . This is illustrated in Figure 2 (d). The final model, shown in Figure 2 (e), is created by stacking the speech recognition modules onto the outputs of the pre-trained speaker tracing and frame-wise modules. It is jointly trained with an utterance level PIT-CE objective function given in Equation ( 7 ). Notably, even though the proposed structure has several copies of the speech recognition module, the numbers of parameters of Figure 2 (a) and Figure 2 (e) are similar. Because the speech recognition modules are solving similar problems, their parameters can be shared. In preliminary experiments, the performance gap between shared and independent parameters is less than 3%. Thus, to make the comparison fair, the shared structure is used unless otherwise specified. The advantages of the progressive joint training include: Decreased model complexity, leading to better system generalization and data efficiency. By separating the system into proper modules, the model complexity is lower than that of the all-in-one system in PIT-ASR. As unsupervised single-channel overlapped speech recognition contains several of the hardest components in speech processing, the model complexity reduction is critical to the empirical training performance. Faster convergence of the training process and better quality of the local minimum. Curriculum learning improves both the training speed and the performance of the model BIBREF15 . As shown in Figure 4 , the progressive joint training needs fewer epochs to converge, and it converges to a better local minimum. An additional benefit is that the module initializations all take much less time compared with the joint training. Potential to integrate with other technologies. State-of-the-art technologies in each field can be applied to the initialization of each module. Transfer Learning Based Joint Training Transfer learning, or teacher-student training, is a powerful technique to address domain adaptation problems in speech recognition. In this section, we show that multi-channel ASR is a type of domain adaptation, and that transfer learning can be used to improve model training. Transfer learning has been proposed to solve the distribution mismatch problem in feature space BIBREF16 . To use this method in single-output ASR domain adaptation, parallel data must be available from a source domain and a target domain. A fully trained model in the source domain (the teacher) processes data and generates posterior probabilities, which are sometimes referred to as “soft labels.” These soft labels then replace or augment the usual “hard labels” when training the student model with parallel data in the target domain BIBREF17 . To train the student, the Kullback-Leibler divergence (KLD) between the output distributions of the teacher and student models is minimized as below. $$\begin{split} KLD(y^{(T)},y^{(S)})=\sum _i y_i^{(T)} \log \frac{y_i^{(T)}}{y_i^{(S)}} \\ = \sum _i\ [\ y_i^{(T)} \log {y_i^{(T)}} - y_i^{(T)}\log {y_i^{(S)}}\ ] \end{split}$$ (Eq. 22) $$\begin{split} = \sum _i\ - y_i^{(T)}\log {y_i^{(S)}} \end{split}$$ (Eq. 23) where $y_i^{(T)}$ and $y_i^{(S)}$ are the teacher and student distributions, respectively. Because the first term is not related to the student model optimization, only the second term is used for optimization.
Comparing Equation ( 23 ) to CE criterion in ASR, the hard labeling is replaced by the soft distribution inferred from the source data by the teacher model. In light of above discussion, self-transfer learning can be extended to the training of any multi-channel speech recognition system. The student is, of course the multi-channel speech recognition system. It operates in the target domain of mixed speech acoustic data, and must produce separate outputs for each speaker in the mixture. The teacher also must produce separate outputs for each speaker, but has access to the source domain: un-mixed clean speech. The teacher model is a set of clean speech acoustic models operating independently on the separate channels of clean speech. The self-transfer learning method then minimizes the KLD between the output distribution of the mixed speech model and the set of clean speech models. The KL divergence defined for utterance level PIT training between the clean speech model distribution and the joint model distribution is as below, $$\begin{split} \mathcal {J}_{\text{KLD-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \sum _t \frac{1}{N} \sum _{n\in [1,N]} \\ KLD(P({l}_{utn}^{(c)}|\mathbf {O}_{un}^{(r)}),P({l}_{utn}^{(s^{\prime })}|\mathbf {O}_{u}^{(m)})) \end{split}$$ (Eq. 25) where the calculation of each $KLD(\cdot )$ pair is the same to the adaptation-purpose single-channel case in Equation ( 23 ). Namely, the joint-trained model distribution, $y^{(S)}=P({l}_{utn}^{(s^{\prime })}|\mathbf {O}_{u}^{(m)})$ , is taken as the student model distribution, and the clean speech model distribution, $y^{(T)}=P({l}_{utn}^{(c)}|\mathbf {O}_{un}^{(r)})$ , is taken as the teacher model distribution. It is notable that when this method is applied to the modular structure proposed in this work, as in Figure 3 , the speech recognition modules can be initialized with an exact copy of the teacher model. The training framework for self-transfer learning is shown in Figure 3 . The soft targets generated by the teacher models are interpolated with the hard labeling as in BIBREF18 . The training procedure is as below: Clone the speaker tracing layers in the bottom. Clone $2N$ copies of clean ASR model initialized in Section "Modularization" , half for stacking upon the speaker tracing layers, half for model inference given each clean speech stream. Use simultaneous clean speech streams $\mathbf {O}_{un}^{(r)}$ and the overlapped speech stream $\mathbf {O}_{u}^{(m)}$ to do joint training. For each mini-batch, do forward propagation of the clean ASR model using each clean speech stream to calculate N streams of $P({l}_{utn}^{(c)}|\mathbf {O}_{un}^{(r)})$ respectively. Do forward propagation of the joint model using overlapped speech stream to calculate N streams of inference distributions, $P({l}_{utn}^{(s^{\prime })}|\mathbf {O}_{u}^{(m)})$ . For that mini-batch, calculate the error signal of Equation ( 25 ) and then do back propagation for the joint model. Update parameters of the joint model and repeat until convergence. The proposed method elegantly solves the label mismatch problem and helps the model convergence. Namely, using hard labeling obtained from forced-alignment in the clean speech is not proper, because the feature has been distorted in the mixed speech. The proposed method replaces it with the soft distribution. In addition, the proposed method formulates the joint training of multi-channel ASR by domain adaptation between clean speech and overlapped speech. 
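A minimal sketch of this objective for two talkers is given below (plain Python with numpy, not the authors' implementation): per-frame senone posteriors are assumed to be pre-computed for both the clean-speech teacher and the joint student model, and the interpolation with hard alignments mentioned above is omitted for brevity.

```python
import itertools
import numpy as np

def kld_pit_loss(student_posts, teacher_posts, eps=1e-12):
    """student_posts: list of N arrays (T, S) -- senone posteriors produced
    by the joint model from the mixed speech, one per output stream.
    teacher_posts: list of N arrays (T, S) -- posteriors of the clean-speech
    teacher model on the corresponding clean streams.
    Returns the permutation-invariant cross-entropy against the soft targets
    (equal to the KLD up to a constant that does not depend on the student)."""
    n = len(student_posts)
    best = None
    for perm in itertools.permutations(range(n)):
        total = 0.0
        for i in range(n):
            t, s = teacher_posts[perm[i]], student_posts[i]
            total += float(np.mean(-np.sum(t * np.log(s + eps), axis=-1)))
        total /= n
        if best is None or total < best:
            best = total
    return best

# toy usage with random, properly normalized posteriors
rng = np.random.default_rng(0)
def rand_post(t=20, s=50):
    x = rng.random((t, s))
    return x / x.sum(axis=-1, keepdims=True)

print(kld_pit_loss([rand_post(), rand_post()], [rand_post(), rand_post()]))
```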
Thus the soft distribution also helps model convergence, because it is easier to reproduce the teacher's performance than to train a speech recognition model from scratch. This can be observed from the training curves in Figure 4 : the initial CE of the self-transfer learning based progressive joint training is much better than that of both joint modeling and progressive joint modeling. Notably, the different starting points of the progressive joint modeling and the self-transfer learning based progressive joint modeling are because the CE in the former system is calculated against the hard labels, while for the latter system it is calculated against the soft distribution inferred from the simultaneous clean speech. Thus, with a better starting point and fewer parameter updates required, the model finally also reaches a better minimum in the figure. The relationships of the proposed method and previous works are summarized below. Model space adaptation. The formulation of the proposed method is similar to KLD-based adaptation BIBREF20 and teacher-student based domain adaptation BIBREF17 . In BIBREF20 , to conservatively update model parameters using adaptation data, the target probability distribution is changed from the ground truth alignment to a linear interpolation with the distribution estimated from the unadapted model. In BIBREF17 , the feature mismatch problem in the target domain is solved by minimizing the inference distribution divergence between the target and source domains using parallel data. The reader is free to consider the proposed method as analogous to optimizing the student network in the target domain, i.e. overlapped speech, to behave similarly to the well-trained teacher network in the source domain, i.e. clean speech, while bearing in mind that the proposed method requires module stacking because the motivation is joint training. Stereo piecewise linear compensation for environment (SPLICE) BIBREF21 . The SPLICE algorithm uses stereo data to do noise reduction and channel distortion compensation. In BIBREF22 , the clean feature is used by the teacher model to provide supervision for the student model trained on the stereo noisy data. In BIBREF18 , the multi-channel enhanced feature is used for the teacher model. In this work, the teacher-student framework is also based on stereo data. The student model is initialized better to cope with the more difficult modeling problem, and the entire framework is expanded to handle multiple output streams. Progressive stacking transfer learning. BIBREF23 proposes to progressively conduct transfer learning to train speech enhancement layers. The motivation of the progressive stacking is only to gradually model a hard task by dividing it into several smaller tasks of the same kind. Thus the criteria of all the tasks are the same. However, the proposed method performs joint training of distinct tasks, and each task is fully trained with its specific data and criterion. Self-supervised training. BIBREF24 proposes to use a teacher model based on more accurate sensor information as the supervision of the student model. The motivation of the proposed method is different: to transfer the distribution between two models with a feature mismatch. Besides, the empirical procedure of the proposed method is to fine-tune the original model on the target feature distribution with supervision from itself on the parallel source distribution. Following the transfer learning paradigm, the joint model can also benefit from an ensemble of teachers BIBREF25 .
Specifically, because the problem includes several sub-problems discussed in Section "Modularization" , different neural network structures can show different superiorities, e.g., with different numbers of stream-dependent layers and stream-independent layers. Learning from an ensemble of joint models with different structures is promising in both performance improvement and model compression. Multi-output Sequence Discriminative Training Speech recognition is inherently a sequence prediction problem. In single-output ASR, sequence level criteria such as sequence discriminative training tend to improve performance. The unsupervised single-channel overlapped speech recognition problem further includes the speaker tracing problem, which is also a sequence level problem. Previous works concatenate frame level CE as the criteria of the sequence level problem, which limits the modeling effect of the neural network. In this paper, sequence discriminative training of multiple output streams is proposed for the first time. In single-output ASR, to form a sequence discriminative training criterion, it is necessary to calculate the sequence posterior probability using Bayes' theorem as below, $$\begin{split} P(\mathbf {L}_u|\mathbf {O}_u)=\frac{p(\mathbf {O}_u|\mathbf {L}_u)P(\mathbf {L}_u)}{p(\mathbf {O}_u)} \end{split}$$ (Eq. 41) Here, $\mathbf {L}_u$ is the word sequence of utterance $u$ . $P(\mathbf {L}_u)$ is the language model probability. $p(\mathbf {O}_u|\mathbf {L}_u)$ is the corresponding acoustic part. The marginal probability $p(\mathbf {O}_u)$ of the feature sequence $\mathbf {O}_u$ , is modeled by summation of the probability over all possible hypothesis sequences. $$\begin{split} p(\mathbf {O}_u)=\sum _\mathbf {L} p(\mathbf {O}_u,\mathbf {L})= \sum _\mathbf {L} P(\mathbf {L}) p(\mathbf {O}_u|\mathbf {L}) \end{split}$$ (Eq. 42) Here, $\mathbf {L}$ denotes all competing hypotheses. As an example of the sequence discriminative training criteria, the maximum mutual information (MMI) BIBREF26 of inference distribution stream $\mathbf {L}_u$ in utterance $u$ is defined as below, $$\begin{split} \mathcal {J}_{\text{SEQ}}(\mathbf {L}_u,\mathbf {L}_u^{(r)})= \log P(\mathbf {L}_u^{(r)}|\mathbf {O}_u) \end{split}$$ (Eq. 43) where $\mathbf {L}_u^{(r)}$ is the corresponding reference. For the overlapped speech recognition problem, the conditional independence assumption in the output label streams is still made as in Equation ( 5 ). Then the cross-entropy based PIT can be transformed to sequence discriminative criterion based PIT as below, $$\begin{split} \mathcal {J}_{\text{SEQ-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \frac{1}{N} \sum _{n\in [1,N]}-\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)}) \end{split}$$ (Eq. 44) Different from Equation ( 7 ), the best permutation is decided by $\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)})$ , which is the sequence discriminative criterion of taking the $s^{\prime }$ -th permutation in $n$ -th output inference stream at utterance $u$ . Similar to CE-PIT, $\mathcal {J}_{\text{SEQ}}$ of all the permutations are calculated and the minimum permutation is taken to do the optimization. The definition of $\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)})$ is similar to Equation ( 43 ) in single output ASR. 
$$\begin{split} \mathcal {J}_{\tt {MMI}} =\sum _u \mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)}) \\ =\sum _{u} \log \frac{\sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)}{\sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})} \end{split}$$ (Eq. 45) For simplicity, $\mathbf {L}_u=\mathbf {L}_{un}^{(r)}$ and $\mathbf {L}$ denotes all the hypothesis sequences generated by the output stream $\mathbf {L}_{un}^{(s^{\prime })}$ . $p(\mathbf {O}_u|\mathbf {L}_u)$ and $p(\mathbf {O}_u|\mathbf {L})$ are the conditional likelihoods obtained from the forward propagation of the joint model, and $P(\mathbf {L}_u)$ and $P(\mathbf {L})$ are the prior probabilities obtained from the language model. The hypothesis sequence $\mathbf {L}$ refers to all the competing hypotheses in the speech recognition. Bad modeling of $\mathbf {L}$ , namely ignoring some modeling errors, results in an imprecise estimation of Equation ( 42 ), which hurts ASR performance. Thus competing hypothesis modeling is key to the discriminative training. Empirically, it is constrained by the linguistic search space. In single-output ASR, the linguistic search space is further pruned by the online method, lattice-based discriminative training BIBREF27 , or by the offline method, lattice-free (LF) discriminative training BIBREF28 , BIBREF29 , BIBREF30 . Compared with single-output ASR, the additional error hypothesis types include: i) Assignment errors: someone said a word, but it is assigned to the wrong channel. ii) Cross-talk errors: one person said a word, but it appears in multiple channels. They both come from imperfect acoustic modeling and result in several challenges for discriminative training. The first problem is linguistic search space modeling. As discussed in the first paragraph, there are mainly two branches of methods. When training the joint model, the speaker tracing results can always change, which results in different permutations of the same utterance between different epochs. Thus, if a lattice-based method is used, the lattices should be updated after each epoch to avoid bias in the search space modeling. Another choice is to use a pre-pruned senone level language model as the common search space for all utterances BIBREF30 . With this method, the lattice generation problem in multiple outputs can be solved and the discriminative training can be conducted efficiently in the shared search space. The second problem is the modeling of swapped words in multiple outputs. Swapped words result in both cross-talk errors and assignment errors. Thus, poor modeling of swapped words hurts the ASR performance. Generally, the linguistic search space is estimated from the transcription of the training dataset, and the sequence criterion is then calculated in this search space. Because there is no swapped-word phenomenon in the transcription, the search space does not contain swapped-word cases, which results in overestimating the sequence criterion. Especially for multiple output streams, the swapped word errors are critical to the ASR performance. Three methods are proposed to cope with the problem. Artificial swapped words. A very simple method is to generate several copies of the transcription with artificially swapped words in each copy. The language model is then estimated on the re-generated transcriptions. With this method, some of the swapped word cases can still exist in the search space. Thus the problem is alleviated.
Empirically, the senone level language model is obtained from the senone level transcription, namely the clustered tri-phone state alignment. To avoid a significant increase in the search space because of the swapped words, we set a rule that in each frame, the probability of senone swapping is $\alpha $ . But if the senone is swapped, the senone sequence of the following $\beta $ frames will not be swapped. $\gamma $ copies of the transcriptions are generated. De-correlated lattice free MMI (LF-DC-MMI). The motivation is that swapped words come from the other output streams. Thus adding these output streams into the search space and minimizing them in the denominator of discriminative training can alleviate the problem. $$\begin{split} \mathcal {J}_{\tt {LF\text{-}DC\text{-}MMI}} =\sum _{u} \log [ \frac{\sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)}{(\ \sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})\ )^{1-\lambda } } \cdot \\ \frac{1}{(\ {\sum _{\mathbf {L}_{\hat{u}}}} p(\mathbf {O}_u|{\mathbf {L}_{\hat{u}}})^{\kappa }P({\mathbf {L}_{\hat{u}}})\ )^\lambda } ] \end{split}$$ (Eq. 49) In Equation ( 49 ), the other output streams are denoted as $\mathbf {L}_{\hat{u}}$ . An interpolation weight $\lambda $ is applied to the augmented term in the denominator. De-correlated lattice free boosted MMI (LF-DC-bMMI). Analogous to boosted MMI BIBREF31 as in Equation ( 51 ), $$\begin{split} \mathcal {J}_{\tt {LF\text{-}bMMI}} =\sum _{u} \log \frac{\sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)}{\sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})e^{-b\ \mathop {\max }_{\mathbf {L}_u} A(\mathbf {L},\mathbf {L}_u)}} \end{split}$$ (Eq. 51) we propose de-correlated lattice free boosted MMI (LF-DC-bMMI) as Equation ( 52 ). Here, $b$ is the boosting factor. $A(\mathbf {L},\mathbf {L}_u)$ is the state level accuracy between sequence $\mathbf {L}$ and $\mathbf {L}_u$ . By this method, the ASR error hypotheses can be further minimized in the denominator. In the proposed method, both the ASR errors between the target inference sequence and the target reference, and the false recognition of the interfering streams, are boosted. $$\begin{split} \mathcal {J}_{\tt {LF\text{-}DC\text{-}bMMI}} =\sum _{u} \log \ [\ \sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)\cdot \\ \frac{1}{\sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})e^{-b\ \mathop {\max }_{\mathbf {L}_u} A(\mathbf {L},\mathbf {L}_u) -\hat{b}\ {\mathop {\max }_{\mathbf {L}_{\hat{u}}}} (1-A(\mathbf {L},\mathbf {L}_{\hat{u}}) ) }}\ ] \end{split}$$ (Eq. 52) where $\hat{b}$ is the de-correlated boosting factor and $A(\mathbf {L},\mathbf {L}_{\hat{u}})$ measures how much of the interfering streams is falsely recognized. Experiments are conducted on all three methods in Section "Sequence Discriminative Training" and the first method can be further combined with the other two. Experiment The experimental results are reported on the artificially overlapped Switchboard corpus and the Eval2000 hub5e-swb test set. Although the methods presented here are valid for any number of overlapped speakers, we focus on the two-talker scenario. Experimental Setup For training, the Switchboard corpus BIBREF32 is used, which contains about 300 hours of speech. Evaluation is carried out on the Switchboard (SWB) subset of the NIST 2000 CTS (hub5e-swb) test set.
The waveforms were segmented according to the NIST partitioned evaluation map (PEM) file. Two-talker overlapped speech is artificially generated by mixing these waveform segments. To maximize the speech overlap, we developed a procedure to mix similarly sized segments at around 0dB. First, we sort the speech segments by length. Then, we take segments in pairs, zero-padding the shorter segment so both have the same length. These pairs are then mixed together to create the overlapped speech data (see the sketch below). The overlapping procedure is similar to BIBREF13 except that we make no modification to the signal levels before mixing. After overlapping, there are 150 hours of data for training, called the 150 hours dataset, and 915 utterances in the test set. After decoding, there are 1830 utterances for evaluation, and the shortest utterance in the hub5e-swb dataset is discarded. Additionally, we define a small training set, the 50 hours dataset, as a random 50 hour subset of the 150 hours dataset. Results are reported using both datasets. In the training stage, 80-dimensional log-filterbank features were extracted every 10 milliseconds, using a 25-millisecond analysis window. The convolutional neural network (CNN) models use 41 context frames (20 on both the left and the right) and the long short-term memory (LSTM) networks process one frame of input at a time. All neural networks were trained with the Microsoft Cognitive Toolkit (CNTK) BIBREF33 . The detailed setup of the CNN is listed in Section "Separate Optimization v.s. Joint Modeling" . The acoustic model is based on three state left-to-right triphone models with 9000 tied states (senones). The individual senone alignments for the two talkers in each mixed speech utterance are from the single-speaker ASR alignment BIBREF30 . For compatibility, the alignment of the shorter utterance within the mixed speech is padded with the silence state at the front and the end. The clean speech recognition performance on this corpus can be found in BIBREF30 , BIBREF29 . Decoding the overlapped speech with the clean speech model is not reported, as it is as poor as in BIBREF13 . The baseline model of joint training is a PIT-ASR model with a setup similar to BIBREF13 . The PIT-ASR model is composed of 10 bidirectional LSTM layers with 768 memory cells in each layer, operating on the 80-dimensional features. The baseline model of the separately optimized system is a PIT for speech separation (PIT-SS) model combined with a clean speech ASR model. As the PIT-SS model has shown competitive performance compared with other speech separation systems, only the PIT-SS model is taken as the baseline. The PIT-SS model has a setup similar to BIBREF12 but with 6 bidirectional LSTM layers with 768 memory cells in each layer; it directly outputs multiple channels of the 80-dimensional log-filterbank features the speech recognition module expects. The speech recognition module, pretrained as a clean speech model, is composed of 4 bidirectional LSTM layers with 768 memory cells in each layer. It is trained from the corresponding source speech segments used to create the overlapped corpus. After initialization, the WER performance of the ASR model on the clean speech test set is 17.0%. Although it would be easy to incorporate a stronger acoustic model BIBREF30 in conjunction with the proposed method, we chose a structure that allows for a fair comparison, in terms of the number of model parameters, among the baselines and proposed methods.
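For concreteness, the segment-pairing and mixing procedure described above can be sketched as follows. This is a minimal illustration rather than the actual data preparation script; it assumes each segment is already a 1-D NumPy array of samples at a common sampling rate, and the function and variable names are our own.

```python
# Minimal sketch of the two-talker mixing procedure described above.
# Assumption: each segment is a 1-D numpy array of samples at a common rate.
import numpy as np

def mix_segments(segments):
    """Pair similarly sized segments and mix them at roughly 0dB.

    Returns a list of (mixed, ref_a, ref_b) tuples; the references are
    zero-padded to the mixture length so they can be reused downstream.
    """
    # Sort by length so that paired segments have similar sizes,
    # maximizing the amount of overlap in each mixture.
    ordered = sorted(segments, key=len)
    mixtures = []
    # An odd leftover segment is simply dropped in this sketch.
    for a, b in zip(ordered[0::2], ordered[1::2]):
        length = max(len(a), len(b))
        # Zero-pad the shorter segment; padding at the end only is an
        # assumption here (the paper pads the *alignment* with silence at
        # the front and the end). No gain change is applied, matching the
        # "no modification to signal levels" choice.
        a_pad = np.pad(a, (0, length - len(a)))
        b_pad = np.pad(b, (0, length - len(b)))
        mixtures.append((a_pad + b_pad, a_pad, b_pad))
    return mixtures
```

Keeping the original signal levels means that similarly sized segments of comparable energy mix at roughly 0dB, as intended.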
However, as discussed in Section "Modularization" , the modular system needs fewer parameters and training iterations to achieve good performance. Notably, the use of enhanced signals after speech separation as training data of the speech recognition module tends to degrade the ASR performance and isn't included. The reason can be from the sporadic distortions that signal processing inevitably adds, similar to what has been observed in BIBREF18 . In the evaluation stage, a 30k-vocabulary language model derived from the most common words in the Switchboard and Fisher corpora is used. The decoder uses a statically compiled unigram graph, and dynamically applies the language model score. The unigram graph has about 300k states and 500k arcs BIBREF30 . Two outputs of the PIT-ASR model are both used in decoding to obtain the hypotheses for two talkers. For scoring, we evaluated the hypotheses on the pairwise score mode against the two references, and used the assignment with better word error rate (WER) for each utterance BIBREF13 . Only the average WER of two output streams is reported, as the task is to correctly recognize all words from both speakers. Separate Optimization v.s. Joint Modeling Table 1 shows the performance of the naive joint modeling, PIT-ASR, compared with that of the separately optimized system. The first row shows the performance of the joint training baseline model in this corpus, PIT-ASR BIBREF13 . Compared with the 0dB WER result listed in BIBREF13 , 55.80%, the performance is reasonable . As discussed in Section "Modularization" , the separately optimized system has a similar number of parameters but different model architecture. Thus to make a fair comparison, the model with 6 stream-independent layers in the bottom and 4 parameter-shared stream-dependent layers in the top, denoted as 6 $\cdot $ 4, is listed in the second row . The learnable structure is the same to the dash-dot blocks shown in Figure 2 (e), but trained from scratch as 10 $\cdot $ 0. The performance of the 6 $\cdot $ 4 structure is significantly better than that of the 10 $\cdot $ 0 structure. The reason is that unlike in the pure speech separation task, the speech recognition stage in this task is also very hard and needs more nonlinear layers. It also shows that this task is much harder than the speech separation, so better joint training method to fulfill the performance of each module is critical to the success. Without fine-tuning parameters, the performance of the separately optimized system is shown in the third row. The significantly worse performance comes from the feature mismatch in Equation ( 6 ). With fine-tuning parameters, the performance is restored in the fourth and fifth rows. The system in the fifth row can be viewed as a strong baseline with separate optimization and fine-tuning. The better performance of the progressive joint training is from better model generalization and training efficiency . Figure 4 show the effect. Training curves of both joint modeling, i.e. the second row in the table, and progressive joint modeling, i.e. the fifth row in the table, are plotted. From the figure, both better starting point and better converged minimum can be observed in the joint progressive training. With better joint training strategy shown in Section "Self-transfer Learning Based Joint Modeling" , such modeling effect can be further fulfilled. Table 2 shows the performance of the joint modeling from different modular initialization setups. 
All modular systems are fine-tuned after joint training. The first and second rows show the naive joint training models with BLSTM and BLSTM combined with CNN, respectively. 6 $\cdot $ 4 BLSTM refers to 6 layers BLSTM in the bottom and 4 parameter-shared layers for each output in the top, i.e. 6 $\cdot $ 4 in Table 1 . Layer-wise context expansion with attention (LACE) model is used for CNN BIBREF36 , which is a TDNN BIBREF37 variant where each higher layer is a weighted sum of nonlinear transformations of a window of lower layer frame. Each LACE block starts with a convolution layer with stride 2 which sub-samples the input and increases the number of channels. This layer is followed by three RELU-convolution layers with jump links. The channel size is 48. The slightly different setup compared with BIBREF30 is to make parameter number of one LACE block comparable with one layer of bidirectional LSTM with 768 memory cells, i.e. 5M parameters. Only one block of LACE is used to replace one layer of BLSTM as the frame-wise interpreting module, because the speaker tracing and speech recognition modules are the hardest parts in the problem. The other parts of the structure are the same, and the proposed structure is denoted as 1 LACE + 5 $\cdot $ 4 BLSTM. From the table, it can be observed that there's no improvement by merely stacking these kinds of neural networks together and jointly training them. In the third and fourth rows, the model is firstly initialized with frame-wise interpreting, Figure 2 (b), speaker tracing, Figure 2 (c), and speech recognition, Figure 2 (d), tasks respectively and then jointly trained. Comparing the fourth row to the third row, 1 LACE + 5 $\cdot $ 4 BLSTM shows larger improvement than 6 $\cdot $ 4 BLSTM. Two conclusions can be derived from the results: i) CNN structure is more suitable for the frame-wise interpreting module because it focuses on the local context and has better modeling power of frequency variations BIBREF38 . Meanwhile, LSTM is good at temporal modeling, which is more suitable for the speaker tracing and speech recognition module. The architecture with 1 LACE + 5 $\cdot $ 4 BLSTM layers combines their strength. Notably, BIBREF38 proposes a similar structure, called CLDNN, to form the acoustic model with modules focusing on different scales and take advantage of the complementarity of CNN, LSTM and DNN. The difference is that, to fulfill the respective advantages in modeling, the proposed method further pretrains each module with different criteria. ii) As the performance improvement from modularization and initialization is much larger in 1 LACE + 5 $\cdot $ 4 BLSTM, it shows that module-wise initialization is important to fulfill the modeling power of neural networks especially with different structures and scales. To further analyze the frame-wise interpreting ability of CNN and BLSTM, experiments without frame-wise interpreting initialization are conducted in the fifth and sixth rows. This time, the performances are similar both in 6 $\cdot $ 4 BLSTM and 1 LACE + 5 $\cdot $ 4 BLSTM. It shows that in initialization of the speaker tracing module, both BLSTM and CNN can spontaneously learn the frame-wise interpreting ability. We notice that in BIBREF12 , frame-wise PIT training doesn't show good performance, which is similar to our observation. Self-transfer Learning Based Joint Modeling Table 3 shows the performance improvement of the transfer learning applied to joint modeling. 
For transfer learning, the interpolation weight between hard and soft labels is 0.5 . The original PIT-ASR system is in the first row and a better PIT-ASR baseline with 6 $\cdot $ 4 structure in Table 1 is also included in the fourth row. The ensemble-based transfer learning proposed in Section UID37 is tested in the second row. The ensemble contains 3 types of structure, 9 $\cdot $ 1, 6 $\cdot $ 4 and 3 $\cdot $ 7, where the left number denotes the bottom stream-independent layers and the right number denotes the top stream-dependent layers. The student network learns from each teacher one-by-one. Although it's not a large gain, it shows improvement after learning from each teacher. The result of replacing hard labeling with simultaneous clean speech based transfer learning is listed in the third and the fifth rows. In both model architectures, transfer learning brings about a relative 10% improvement over the respective baseline, which is comparable with the result in BIBREF17 . It shows that soft distribution inferred by the model with similar architecture is superior to the hard labeling. The self-transfer learning based progressive joint modeling is finally listed in the sixth and the seventh row by using CE-trained and MMI-trained clean speech teacher respectively. Notably, as the model framework discussed in "Transfer Learning Based Joint Training" , the initializations of speech recognition modules are the respective clean speech teachers. The result shows over 30% relative improvement. Comparing the third and the fifth rows with Table 2 , it can be further observed that, combining progressive joint training and self-transfer learning brings about even larger improvement compared with the summation of the relative improvement from each of the two technologies. The learning curve of the proposed method is also shown in Figure 4 . From these results, we conclude: i) The proposed method brings about faster convergence and better converged minimum. The reason is discussed in Section "Transfer Learning Based Joint Training" . The better convergence result also comes from the removal of the inappropriate hard alignment in the joint training. ii) Easier convergence helps the model fulfill the best performance in each module. That's the explanation of the even better synergy result compared with the summation of the relative improvements from transfer learning and progressive joint training. iii) Better teacher generates better student. And the MMI-trained distribution can also be transferred to the student model, similar to what has been observed in BIBREF39 . Figure 5 further shows the student performance versus quality of the teacher in transfer learning based joint modeling. It can be observed that better student can usually be obtained with better teacher. An important reason is that self-transfer learning is conducted by minimizing the divergence of its own distributions in mixed speech and clean speech. Thus better original distribution, including MMI-trained distribution, can intrinsically be part of the joint model and brings about better results. The only inflection point is in epoch=3 of the MMI teacher, where the student performance is similar to epoch=1 although the teacher model has better WER performance. We believe the reason is that the distribution of the teacher model of epoch=3 is hard to transfer to the student model because of the transition process from CE-trained distribution to MMI-trained distribution. 
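To make the interpolation concrete, the sketch below shows one way to combine the hard alignment with the clean-speech teacher's posterior using the 0.5 weight quoted above. It is a hedged PyTorch-style illustration, not the CNTK implementation used in the experiments; tensor shapes and names are assumptions.

```python
# Illustrative interpolation of hard alignments with the clean-speech
# teacher's soft posterior (interpolation weight 0.5 as in the experiments).
import torch
import torch.nn.functional as F

def soft_hard_ce(student_logits, teacher_logits, hard_labels, alpha=0.5):
    """Cross entropy of the joint (student) model against an interpolated target.

    student_logits: (frames, senones) output of the joint model on mixed speech
    teacher_logits: (frames, senones) output of the clean-speech teacher
    hard_labels:    (frames,) senone indices from clean-speech forced alignment
    """
    teacher_post = F.softmax(teacher_logits, dim=-1)
    hard_onehot = F.one_hot(hard_labels, num_classes=student_logits.size(-1)).float()
    target = alpha * teacher_post + (1.0 - alpha) * hard_onehot
    log_prob = F.log_softmax(student_logits, dim=-1)
    # Up to a constant independent of the student parameters, this matches
    # the KLD-based objective used for the soft part of the target.
    return -(target * log_prob).sum(dim=-1).mean()
```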
Sequence Discriminative Training Table 4 shows the performance improvement of sequence discriminative training based joint modeling. In this experiment, the baseline is PIT-ASR model without progressive joint training. All the structures are kept the same as 10 $\cdot $ 0 and only criteria are changed. $\kappa =0.1$ , which is in accordance with the decoding setup. The senone level language model for competing hypothesis modeling is obtained from the clustered tri-phone state alignment. Tri-gram is used, similar to BIBREF30 . For the artificial swapped word method, the hyper-parameters in Section UID46 is decided by the preliminary experiment. $\alpha =0.4$ , $\beta =10$ and $\gamma =2$ . The swapped word senone level search graph is 4 times larger than the original tri-gram senone level language model graph. $\lambda =0.1$ , $b=0.1$ and $\hat{b}=0.2$ . The baseline PIT-ASR system is shown in the first row, denoted as PIT-CE to show the criterion of the system. Applying the naive sequence discriminative training method for multiple outputs in the second row, only brings about 4.9% relative improvement. For the recognition result, word precision is improved but insertion error increases. The reason is from imperfect search space modeling of swapped words discussed in "Multi-output Sequence Discriminative Training" . By applying the proposed LF-DC-MMI method in the third row, the recognition result is significantly improved compared with both baseline and the naive LF-MMI. The proposed method minimizes the swapped words from parallel output streams in the denominator modeling. Thus the problem can be alleviated. The fourth and fifth rows show the effect of using bMMI instead of MMI in the formulations. The bMMI criterion boosts the ASR errors in the denominator modeling, implicitly including possibly swapped words. Although significant improvement can be observed between LF-MMI and LF-bMMI, the LF-DC-bMMI explicitly includes the swapped words and achieves an even better result. The proposed artificial swapped word method is shown in the sixth and seventh rows. By comparing the sixth row with the second row, and comparing the seventh row with the fifth row, it shows slight but consistent improvement in solving the swapped word problem. And the method can also be combined with LF-DC-bMMI to achieve 8.2% relative improvement versus the CE-trained PIT-ASR baseline. In the eighth to tenth rows, experiments are conducted on 150 hours corpus. The results are similar, and LF-DC-bMMI criterion shows consistent improvement versus PIT-CE and naive sequence discriminative training criterion. In Section "Combination and Extension to Larger Corpus" , it is shown that sequence discriminative training can be combined with other technologies and achieves further consistent and significant improvement. The discriminative training criterion helps the system training in two ways. Firstly, sequence level criterion helps the sequence level speaker tracing problem in PIT modeling. Specifically, linguistic information is encoded in the senone level language modeling in discriminative training. Thus the procedure implicitly integrates linguistic information in the speaker tracing problem. Secondly, sequence discriminative training improves the speech recognition module. Notably, all the sequence discriminative training procedures are applied after CE initialization as in BIBREF30 . With initialization, it also helps Equation ( 44 ) to reach a better minimum. 
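As an illustration of the artificial swapped-word augmentation evaluated above, the following sketch generates $\gamma $ swapped copies of a pair of senone alignments with per-frame swap probability $\alpha $ and a $\beta $ -frame no-swap window. It reflects one reading of the rule described in Section "Multi-output Sequence Discriminative Training" and is not the original implementation; names are illustrative.

```python
# Sketch of generating artificially swapped senone transcriptions
# (hyper-parameters alpha=0.4, beta=10, gamma=2 as in the experiments).
import random

def make_swapped_copies(senones_a, senones_b, alpha=0.4, beta=10, gamma=2, seed=0):
    """senones_a, senones_b: equal-length senone alignments of the two streams."""
    rng = random.Random(seed)
    copies = []
    for _ in range(gamma):
        a, b = list(senones_a), list(senones_b)
        cooldown = 0
        for t in range(len(a)):
            if cooldown > 0:
                cooldown -= 1
                continue
            if rng.random() < alpha:
                # Swap the two streams at this frame, then leave the
                # following beta frames unswapped.
                a[t], b[t] = b[t], a[t]
                cooldown = beta
        copies.append((a, b))
    return copies
```

The senone-level language model is then re-estimated on the union of the original and the regenerated transcriptions, so some swapped-word cases appear in the denominator search space.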
Combination and Extension to Larger Corpus Table 5 summarizes the performance improvement of integrating all the proposed methods. The PIT-ASR model BIBREF13 , denoted as PIT-CE, is taken as the baseline of naive joint modeling in the first row. The separately optimized system, namely PIT-SS+ASR, is not included here. As shown in Table 2 , the performance deteriorates because of feature mismatch. Instead, the proposed progressive joint training model in the second row can be taken as a stronger separately optimized system with fine-tuning. The proposed self-transfer learning based joint training model shows further significant improvement in the third and fourth rows. Finally, the multi-output sequence discriminative training is applied and achieves moderate improvement, although the teacher model is already MMI-trained, similar to what has been observed in BIBREF40 . Figure 6 shows decoding examples of the proposed methods versus the PIT baseline. The baseline contains many errors due to poor model generalization with the limited dataset size. With the proposed methods, errors are significantly reduced. Notably, in this example, the self-transfer learning based progressive joint training mainly reduces errors from similar pronunciations, while sequence discriminative training mainly reduces explicit syntax or linguistic errors, which is in line with the expectation. With a similar number of parameters but different neural networks, namely 1 LACE + 5 $\cdot $ 4 BLSTM, the system can be consistently improved in the sixth to ninth rows. We believe the further improvement comes from the proper modularization of the problem, which is discussed in Section "Separate Optimization v.s. Joint Modeling" . Table 6 expands the dataset to 150 hours to show the effect of more training data. The naive joint training baseline in the first row significantly benefits from more data and shrinks the gap to the proposed progressive joint training model in the second row. However, it is still significantly worse than the self-transfer learning and sequence discriminative training based joint model trained on 50 hours of data in Table 5 . This again shows the disadvantages of large model complexity and insufficient model generalization discussed in Section "Unsupervised Single-channel Overlapped Speech Recognition" : rather than merely increasing the data, the better way to solve the problem is to improve the model generalization. Besides, the convergence of the naive joint training model on the larger dataset is even slower, requiring 4 times more epochs than the proposed method. Comparing Table 6 with Table 5 , the proposed self-transfer learning based joint training and multi-output sequence discriminative training show consistent relative improvement versus the progressive joint training. Compared with Table 5 , sequence discriminative training achieves a larger relative improvement on the CE-trained teacher based system. On both the 50 hours and 150 hours corpora, the proposed method achieves over 30% relative improvement versus both the PIT-ASR system and the PIT-SS+ASR system. The improvement comes from better model generalization, training efficiency and the sequence level linguistic knowledge integration. Although this paper addresses the case of simultaneous speech of two people talking at a relative level of 0dB, we believe it will be straightforward to extend the system to handle more realistic conditions.
The case where one speaker is louder than the other has already been observed to be easier for PIT-style models than the 0dB data explored in this paper BIBREF13 . For more than two speakers, extension of the proposed system should follow the same construction described in BIBREF34 . Finally, we expect robustness to background noise and reverberation to come from standard techniques such as multi-condition training BIBREF41 . Conclusion In this work, we proposed to divide the single channel overlapped speech recognition problem into three sub-problems: frame-wise interpreting, speaker tracing and speech recognition. Each module is firstly optimized separately with specifically designed criteria, which significantly improves the system generalization and training efficiency. After the initialization, modules are jointly trained with two novel strategies: self-transfer learning and multi-output sequence discriminative training. Specifically, in the joint training stage, the clean speech model fine-tunes its parameters with the other modules in overlapped speech to fit its own distribution in the simultaneous clean speech. Then sequence discriminative training designed for multiple outputs is applied to integrate linguistic and sequence information. The proposed framework achieves 30% relative improvement over both a strong jointly trained system, PIT-ASR, and a separately optimized system, PIT-SS+ASR. The proposed framework suggests several promising directions for future improvement: i) Integrating state-of-the-art technologies in each module in the initialization stage, e.g., DPCL BIBREF8 . ii) Applying other sequence level criteria to improve the speaker tracing and speech recognition modules, e.g., connectionist temporal classification (CTC) BIBREF42 . iii) Explicit integration of the language model in the joint modeling, e.g., joint decoding BIBREF5 and end-to-end modeling BIBREF43 . Acknowledgment We thank Chris Basoglu and Frank Seide for their invaluable assistance with CNTK; Mike Seltzer, Takuya Yoshioka, Hakan Erdogan and Andreas Stolcke for many helpful conversations. The first author would like to further thank Jerry and Juncheng Gu for their support during the internship. Zhehuai Chen received his B.S. degree in the Department of Electronic and Information Engineering from Huazhong University of Science and Technology, China, in 2014. He is currently a Ph.D. candidate at Shanghai Jiao Tong University working on speech recognition. His current research interests include speech recognition, speech synthesis and deep learning.
the best permutation is decided by $\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)})$ , which is the sequence discriminative criterion of taking the $s^{\prime }$ -th permutation in $n$ -th output inference stream at utterance $u$
7bf3a7d19f17cf01f2c9fa16401ef04a3bef65d8
7bf3a7d19f17cf01f2c9fa16401ef04a3bef65d8_0
Q: How are the two datasets artificially overlapped? Text: Introduction The cocktail party problem BIBREF0 , BIBREF1 , referring to multi-talker overlapped speech recognition, is critical to enable automatic speech recognition (ASR) scenarios such as automatic meeting transcription, automatic captioning for audio/video recordings, and multi-party human-machine interactions, where overlapping speech is commonly observed and all streams need to be transcribed. The problem is still one of the hardest problems in ASR, despite encouraging progresses BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this paper, we address the speech recognition problem when multiple people speak at the same time and only a single channel of overlapped speech is available. This is useful when only a single microphone is present, or when microphone array based algorithms fail to perfectly separate the speech. Specifically, the paper focuses on an unsupervised inference method, which does not need any prior knowledge of speakers. To obtain transcriptions of all speakers from the overlapped speech, joint inference is conducted based on multiple knowledge sources: frequency domain voice discrimination, temporal speaker tracing, linguistic information and speech recognition. Prior work in unsupervised single-channel overlapped speech recognition generally separates the problem into speech separation and recognition stages. Before the deep learning era, the most popular speech separation technique is computational auditory scene analysis (CASA) BIBREF2 . There are two main stages in CASA approaches: segmentation and grouping. The segmentation stage decomposes mixed speech into time-frequency segments assumed to be derived from the corresponding speakers based on perceptual grouping cues BIBREF6 . The grouping stage simultaneously and sequentially concatenates the segments to generate independent streams for each speaker. Non-negative matrix factorization (NMF) BIBREF7 is another popular technique which aims to learn a set of non-negative bases that can be used to estimate mixing factors during evaluation. Recently, several deep learning based techniques have been proposed but seldom concentrate on the unsupervised case, which is more applicable. In BIBREF8 , BIBREF9 , BIBREF10 , the authors propose deep clustering (DPCL), in which a deep network is trained to produce spectrogram embeddings that are discriminative for partition labels given in training data. The model is optimized so that in the neural network embedding space the time-frequency bins belonging to the same speaker are closer and those of different speakers are farther away. Speech segmentations are therefore implicitly encoded in the embeddings, and can be obtained by clustering algorithm. In BIBREF11 , a DNN-based gender mixture detection system and three gender-dependent speech separation systems are constructed. The latter ones directly infer the feature streams of two speakers respectively. For all these methods, speech separation and recognition are two separate components and the latter is applied to the separated feature streams. The mismatched feature in the speech recognition stage is one of the limitation in these methods. In light of permutation invariant training (PIT) proposed in speech separation BIBREF12 originally, the PIT-ASR model BIBREF13 is the first attempt in joint modeling of unsupervised single-channel mixed speech recognition. 
Whereas the original PIT technique jointly models the voice discrimination and speaker tracing, PIT-ASR further integrates speech recognition into the neural network with a unified cross entropy (CE) criterion. Although PIT-ASR shows promising results, it suffers from several disadvantages, which are analyzed in Section "Unsupervised Single-channel Overlapped Speech Recognition" . In this paper, progressive joint modeling is proposed to divide the single channel overlapped speech recognition problem into three sub-problems for initialization: frame-wise interpreting, speaker tracing and speech recognition (Figure 1 ). Each module is initialized by placing it into a series of networks that solve progressively more difficult problems. After the initialization, modules are jointly trained with two novel strategies, namely self-transfer learning and multi-output sequence discriminative training. Transfer learning is introduced in this problem, which leverages parallel clean speech to improve the training targets for the network. Our discriminative training formulation is a modification of standard formulations, which also penalizes competing outputs of the system. The proposed framework achieves 30% relative improvement over both a strong jointly trained system, PIT-ASR, and a separately optimized system, PIT for speech separation with clean speech ASR. The improvement comes from better model generalization, training efficiency and the sequence level linguistic knowledge integration. The rest of the paper is organized as follows. In Section "Unsupervised Single-channel Overlapped Speech Recognition" , the unsupervised single-channel overlapped speech recognition problem is briefly reviewed. In Section "Modularization" , the modular initialization and progressive joint training are proposed. In Section "Transfer Learning Based Joint Training" , the self-transfer learning is proposed, and in Section "Multi-output Sequence Discriminative Training" , multi-output sequence discriminative training is proposed. In Section "Experiment" , the experimental results are reported on the artificially overlapped Switchboard corpus and the Eval2000 hub5e-swb test set, followed by the conclusion in Section "Conclusion" . Unsupervised Single-channel Overlapped Speech Recognition Unsupervised single-channel overlapped speech recognition refers to the speech recognition problem when multiple unseen talkers speak at the same time and only a single channel of overlapped speech is available. Unlike the supervised mode, no prior knowledge of the speakers is available in the evaluation stage. In this problem, only the linearly overlapped single-channel signal is known, which is defined as, $$\begin{split} \mathbf {O}_{u}^{(m)}=\sum _{n=1}^N \mathbf {O}_{un}^{(r)} \end{split}$$ (Eq. 4) where $\mathbf {O}_{un}^{(r)}$ is the clean signal stream of speaker $n$ at utterance $u$ and $\mathbf {O}_{u}^{(m)}$ is the overlapped speech stream of utterance $u$ . $N$ is the number of streams. Single channel ASR is typically formulated as a supervised sequence labeling problem given by $P(\mathbf {L}_u|\mathbf {O}_{u})$ , which is the posterior probability of transcription sequence $\mathbf {L}_u$ given the feature sequence $\mathbf {O}_{u}$ in the utterance $u$ . Nevertheless, the multi-speaker problem is to model the joint distribution of $N$ streams of transcriptions given the overlapped speech signal, $P(\mathbf {L}_{u1},...,\mathbf {L}_{uN}|\mathbf {O}_{u}^{(m)})$ . Due to the symmetric labels given the mixture signals, it is no longer a supervised optimization problem.
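As a toy illustration of this label symmetry, the sketch below scores every assignment of references to output streams and keeps the best one; this is the idea that the permutation invariant training criterion below formalizes. The code is generic, not from the paper, and assumes an arbitrary utterance-level pairwise loss.

```python
# Toy permutation-invariant loss: for N speakers, every assignment of
# reference streams to output streams is a priori valid, so the objective
# evaluates all permutations and keeps the minimum.
from itertools import permutations

def pit_loss(outputs, references, pairwise_loss):
    """outputs, references: lists of per-stream predictions / labels.
    pairwise_loss(o, r): utterance-level loss of one output against one reference."""
    best = None
    for perm in permutations(range(len(references))):
        total = sum(pairwise_loss(outputs[n], references[perm[n]])
                    for n in range(len(outputs)))
        best = total if best is None else min(best, total)
    return best / len(outputs)
```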
One branch of methods assumes the inference streams are conditionally independent, and tries to assign the correct transcription stream $\mathbf {L}_{un}^{(r)}$ to the corresponding output stream $n$ , $$\begin{split} P(\mathbf {L}_{u1},...,\mathbf {L}_{uN}|\mathbf {O}_{u}^{(m)}) \approx \prod _{n=1}^N P(\mathbf {L}_{un}^{(r)}|\mathbf {O}_{u}^{(m)}) \end{split}$$ (Eq. 5) Another branch of methods assumes the overlapped signal can be separated into $\mathbf {O}_{un}^{(c)}\mathop {:} \mathbf {O}_{un}^{(c)}\approx \mathbf {O}_{un}^{(r)}$ . Because the original streams $\mathbf {O}_{un}^{(r)}$ are conditionally independent of each other, the separated signal streams $\mathbf {O}_{un}^{(c)}$ are also assumed to be conditionally independent. Thus Equation ( 5 ) can be rewritten as Equation ( 6 ), $$\begin{split} P(\mathbf {L}_{u1},...,\mathbf {L}_{uN}|\mathbf {O}_{u}^{(m)}) \approx \prod _{n=1}^N P(\mathbf {L}_{un}^{(r)}|\mathbf {O}_{un}^{(c)}) \end{split}$$ (Eq. 6) However, neither assumption is precise. For the first assumption, there is no pre-determined method to obtain the ideal label arrangements, which is called the speaker tracing problem. The second assumption is that the speech separation and recognition are independent processes, which introduces an artificial information bottleneck. In BIBREF12 , the reference streams are treated as an unordered set. The PIT framework is proposed to address the speech separation problem by firstly determining the assignment of the reference stream and inference stream that minimizes the error at the utterance level based on the forward-pass result. This is followed by minimizing the error given the utterance level best assignment. BIBREF13 extends this by integrating speech recognition into the neural network with a unified cross-entropy (CE) training criterion. $$\begin{split} \mathcal {J}_{\text{CE-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \sum _t \frac{1}{N} \sum _{n\in [1,N]} CE({l}_{utn}^{(s^{\prime })},{l}_{utn}^{(r)}) \end{split}$$ (Eq. 7) Here, $\mathbf {S}$ is the permutation set of the reference representation and the inference representation. ${l}_{utn}^{(s^{\prime })}$ is the $n$ -th inference label of permutation $s^{\prime }$ at frame $t$ in utterance $u$ and ${l}_{utn}^{(r)}$ is the corresponding transcription label obtained by clean speech forced-alignment BIBREF14 . The PIT-ASR criterion BIBREF13 elegantly integrates speech separation, speaker tracing and speech recognition together as in Figure 2 (a). Its joint modeling approach eliminates the artificial bottleneck between the speech separation and speech recognition tasks. But the method suffers from several disadvantages that deteriorate its performance. Methods In this work, we propose three separate enhancements to improve the performance of PIT-ASR. First, the structure and accuracy of the model are improved through modularization and pretraining. Frame-wise interpreting, speaker tracing, and speech recognition modules replace the monolithic structures used in previous work. These modules are progressively pretrained and jointly fine-tuned. Second, we demonstrate a natural way to incorporate a form of transfer learning. Clean speech features are used to generate soft label targets which are interpolated with the reference label alignments. Third, multi-output discriminative training is applied to the system. As with single-stream speech recognition, multi-stream discriminative training can help with model generalization.
Additionally, the objective function is augmented to reduce cross-speaker word assignment errors. Modularization In the original formulation, a PIT-ASR model consists of a single monolithic structure that predicts independent targets for each speaker. We improve this by replacing the main network structure with a modular structure, shown in Figure 1 . This modular structure consists of three tasks, namely interpreting mixed acoustic data, tracing speakers across time, and predicting acoustic label sequences. First, the frame-wise module is designed to extract the local time-frequency information necessary to separate the overlapped speech into individual acoustic representations. It is entirely local and does not depend on sequence-level information. Second, the speaker tracing module accepts frame-wise acoustic representations from the frame-wise module and traces the speaker information. This process concatenates adjacent acoustic representations of the same speaker together to infer the recovered speech features of each speaker. Third, the speech recognition modules accept the sequences of recovered acoustic features from each speaker, and produce sequences of label scores suitable for use in an automatic speech recognition system. Because each speech recognition module performs the same task, it is natural to share the parameters of this module across each instance in the final model. Although it is possible to train the modularized network of Figure 2 (e) from random initialization, it is better to use a progressive training strategy. This strategy is motivated by the Curriculum learning theory in BIBREF15 , and integrates both modular initialization and joint training. We train a simple model first, and then use it as a pre-trained building block for a more complicated model and task. Figures 2 (b)-(e) illustrate how the model becomes progressively more complex while solving more difficult problems, from frame-wise mean squared error to whole utterance cross entropy. Our simplest model, shown in Figure 2 (b), is trained to solve a frame-wise speech separation task. For $N$ speakers, given the mixed data $\mathbf {O}_{u}^{(m)}$ , the model infers an acoustic representation ${o}_{utn}$ for each speaker $n$ at frame $t$ of utterance $u$ . The objective function of the frame-wise training, is given as $$\begin{split} \mathcal {J}_{\text{F-PIT}}=\sum _u\sum _t \frac{1}{N}\min _{s^{\prime }\in \mathbf {S}} \sum _{n\in [1,N]} MSE({o}_{utn}^{(s^{\prime })},{o}_{utn}^{(r)}) \end{split}$$ (Eq. 12) where, $\mathbf {S}$ is the permutation set of the reference representation and the inference representation. ${o}_{utn}^{(s^{\prime })}$ and ${o}_{utn}^{(r)}$ is the frame level acoustic representation of permutation $s^{\prime }$ and the reference clean speech, respectively. In each frame $t$ of the utterance $u$ , the overall minimum square error, $MSE$ , is obtained by comparing all the reference and inference representations of each permutation $s^{\prime }$ . The architecture for pre-training the speaker tracing module is explained in Figure 2 (c). The tracing module is combined with a pre-trained frame-wise module that has had its $N$ output layers removed. As in BIBREF12 , the PIT objective function is applied in utterance level. $$\begin{split} \mathcal {J}_{\text{U-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \sum _t \frac{1}{N} \sum _{n\in [1,N]} MSE({o}_{utn}^{(s^{\prime })},{o}_{utn}^{(r)}) \end{split}$$ (Eq. 
13) The speech recognition module is separately pretrained in the same way as a conventional acoustic model, with clean speech and a cross-entropy objective function, maximizing $p(\mathbf {L}_u|\mathbf {O}_{u})$ . This is illustrated in Figure 2 (d). The final model, shown in Figure 2 (e), is created by stacking the speech recognition modules onto the outputs of the pre-trained speaker tracing and frame-wise modules. It is jointly trained with an utterance level PIT-CE objective function given in Equation ( 7 ). Notably, even though the proposed structure has several copies of the speech recognition module, the numbers of parameters between Figure 2 (a) and Figure 2 (e) are similar. Because the speech recognition modules are solving similar problems, their parameters can be shared . In preliminary experiments, the performance gap between sharing and independent parameters is less than 3%. Thus to make the comparison fair, without specific explanation, the shared structure is used. The advantage of the progressive joint training includes: Decreased model complexity leading to better system generalization and data efficiency. By separating system into proper modules, the model complexity is less than the all-in-one system in PIT-ASR. As unsupervised single-channel overlapped speech recognition contains several of the hardest components in speech processing, the model complexity reduction is critical to the empirical training performance. Faster convergence of the training process and better quality of the local minimum. Curriculum learning improves both the training speed and the performance of the model BIBREF15 . As shown in Figure 4 , the progressive joint training needs fewer epochs to converge, and it converges to a better local minimum. An additional benefit is that the module initializations all take much less time compared with the joint training . Potential to integrate with other technologies. State-of-the-art technologies in each field can be applied to the initialization of each module. Transfer Learning Based Joint Training Transfer learning, or teacher-student training, is a powerful technique to address domain adaptation problems in speech recognition. In this section, we show that multi-channel ASR is a type of domain adaptation, and that transfer learning can be used to improve model training. Transfer learning has been proposed to solve the distribution mismatch problem in feature space BIBREF16 . To use this method in single-output ASR domain adaptation, parallel data must be available from a source domain and a target domain. A fully trained model in the source domain (the teacher) processes data and generates posterior probabilities, which are sometimes referred to as “soft labels.” These soft labels then replace or augment the usual “hard labels” when training the student model with parallel data in the target domain BIBREF17 . To train the student, the Kullback-Leibler divergence (KLD) between the output distributions of the teacher and student models is minimized as below. $$\begin{split} KLD(y^{(T)},y^{(S)})=\sum _i y_i^{(T)} \log \frac{y_i^{(T)}}{y_i^{(S)}} \\ = \sum _i\ [\ y_i^{(T)} \log {y_i^{(T)}} - y_i^{(T)}\log {y_i^{(S)}}\ ] \end{split}$$ (Eq. 22) $$\begin{split} = \sum _i\ - y_i^{(T)}\log {y_i^{(S)}} \end{split}$$ (Eq. 23) where $y_i^{(T)}$ and $y_i^{(S)}$ is the teacher and student distributions respectively. Because the first term is not related to the student model optimization, only the second term is used for optimization. 
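A small numeric check of this point: the teacher-entropy term of the KLD does not depend on the student, so the gap between the full KLD and the cross term $-\sum _i y_i^{(T)}\log {y_i^{(S)}}$ is the same constant for any student distribution. The distributions below are arbitrary illustrative values.

```python
# Numeric check: KLD(y_T || y_S) differs from the cross term -sum(y_T * log y_S)
# only by the teacher-entropy term, which is constant w.r.t. the student.
import math

y_teacher = [0.7, 0.2, 0.1]
for y_student in ([0.6, 0.3, 0.1], [0.4, 0.4, 0.2]):
    kld = sum(t * math.log(t / s) for t, s in zip(y_teacher, y_student))
    cross = -sum(t * math.log(s) for t, s in zip(y_teacher, y_student))
    teacher_term = sum(t * math.log(t) for t in y_teacher)
    # kld - cross equals teacher_term for every student distribution.
    print(round(kld - cross, 6), round(teacher_term, 6))
```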
Comparing Equation ( 23 ) to CE criterion in ASR, the hard labeling is replaced by the soft distribution inferred from the source data by the teacher model. In light of above discussion, self-transfer learning can be extended to the training of any multi-channel speech recognition system. The student is, of course the multi-channel speech recognition system. It operates in the target domain of mixed speech acoustic data, and must produce separate outputs for each speaker in the mixture. The teacher also must produce separate outputs for each speaker, but has access to the source domain: un-mixed clean speech. The teacher model is a set of clean speech acoustic models operating independently on the separate channels of clean speech. The self-transfer learning method then minimizes the KLD between the output distribution of the mixed speech model and the set of clean speech models. The KL divergence defined for utterance level PIT training between the clean speech model distribution and the joint model distribution is as below, $$\begin{split} \mathcal {J}_{\text{KLD-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \sum _t \frac{1}{N} \sum _{n\in [1,N]} \\ KLD(P({l}_{utn}^{(c)}|\mathbf {O}_{un}^{(r)}),P({l}_{utn}^{(s^{\prime })}|\mathbf {O}_{u}^{(m)})) \end{split}$$ (Eq. 25) where the calculation of each $KLD(\cdot )$ pair is the same to the adaptation-purpose single-channel case in Equation ( 23 ). Namely, the joint-trained model distribution, $y^{(S)}=P({l}_{utn}^{(s^{\prime })}|\mathbf {O}_{u}^{(m)})$ , is taken as the student model distribution, and the clean speech model distribution, $y^{(T)}=P({l}_{utn}^{(c)}|\mathbf {O}_{un}^{(r)})$ , is taken as the teacher model distribution. It is notable that when this method is applied to the modular structure proposed in this work, as in Figure 3 , the speech recognition modules can be initialized with an exact copy of the teacher model. The training framework for self-transfer learning is shown in Figure 3 . The soft targets generated by the teacher models are interpolated with the hard labeling as in BIBREF18 . The training procedure is as below: Clone the speaker tracing layers in the bottom. Clone $2N$ copies of clean ASR model initialized in Section "Modularization" , half for stacking upon the speaker tracing layers, half for model inference given each clean speech stream. Use simultaneous clean speech streams $\mathbf {O}_{un}^{(r)}$ and the overlapped speech stream $\mathbf {O}_{u}^{(m)}$ to do joint training. For each mini-batch, do forward propagation of the clean ASR model using each clean speech stream to calculate N streams of $P({l}_{utn}^{(c)}|\mathbf {O}_{un}^{(r)})$ respectively. Do forward propagation of the joint model using overlapped speech stream to calculate N streams of inference distributions, $P({l}_{utn}^{(s^{\prime })}|\mathbf {O}_{u}^{(m)})$ . For that mini-batch, calculate the error signal of Equation ( 25 ) and then do back propagation for the joint model. Update parameters of the joint model and repeat until convergence. The proposed method elegantly solves the label mismatch problem and helps the model convergence. Namely, using hard labeling obtained from forced-alignment in the clean speech is not proper, because the feature has been distorted in the mixed speech. The proposed method replaces it with the soft distribution. In addition, the proposed method formulates the joint training of multi-channel ASR by domain adaptation between clean speech and overlapped speech. 
Thus the soft distribution also helps model convergence, because it's easier to recreate its performance, compared with training a speech recognition model from scratch. The evidence can be observed from the training curve in Figure 4 that the initial CE of self-transfer learning based progressive joint training is much better than that of both joint modeling and progressive joint modeling. Notably, the different starting points between the progressive joint modeling and self-transfer learning based progressive joint modeling is because the CE in the former system is calculated versus hard labeling, while for the latter system it is versus the soft distribution inferred from simultaneous clean speech . Thus with a better starting point and less parameter updating requirement, finally the model also comes into better minimum in the figure. The relationships of the proposed method and previous works are summarized as below. [leftmargin=*] Model space adaptation. The formulation of the proposed method is similar to KLD-based adaptation BIBREF20 and teacher-student based domain adaptation BIBREF17 . In BIBREF20 , to conservatively update model parameters using adaptation data, the target probability distribution is changed from the ground truth alignment to a linear interpolation with the distribution estimated from the unadapted model. In BIBREF17 , the feature mismatch problem in the target domain is solved by minimizing the inference distribution divergence between the target and source domains using parallel-data. The reader is free to consider the proposed method as analogous to optimizing student network in the target domain, i.e. overlapped speech, to behave similarly to the well-trained teacher network in the source domain, i.e. clean speech, while bearing in mind the proposed method requires module stacking because the motivation is to do joint training. Stereo piecewise linear compensation for environment (SPLICE) BIBREF21 . The SPLICE algorithm uses stereo data to do noise reduction and channel distortion compensation. In BIBREF22 , the clean feature is used for the teacher model to provide supervision on the stereo noisy data trained student model. In BIBREF18 , the multi-channel enhanced feature is used for the teacher model. In this work, the teacher-student framework is also based on stereo data. The student model is initialized better to cope with the more difficult modeling problem, and the entire framework is expanded to handle multiple output streams. Progressive stacking transfer learning. BIBREF23 proposes to progressively conduct transfer learning to train speech enhancement layers. The motivation of the progressive stacking is only to gradually model a hard task by dividing into several same but smaller tasks. Thus the criteria of all tasks are the same. However, the proposed method is to do joint training of distinct tasks. And each task is fully trained with specific data and criterion. Self-supervised training. BIBREF24 proposes to use a teacher model based on more accurate sensor information as the supervision of the student model. The motivation of the proposed method is different, which is to transfer distribution between two models with feature mismatch. Besides, the empirical procedure of the proposed method is to fine-tune the original model in the target feature distribution from supervision of itself in parallel source distribution. Following the transfer learning diagram, the joint model can also benefit from an ensemble of teachers BIBREF25 . 
Specifically, because the problem includes several sub-problems discussed in Section "Modularization" , different neural network structures can show different superiorities, e.g., with different numbers of stream-dependent layers and stream-independent layers. Learning from an ensemble of joint models with different structures is promising in both performance improvement and model compression. Multi-output Sequence Discriminative Training Speech recognition is inherently a sequence prediction problem. In single-output ASR, sequence level criteria such as sequence discriminative training tend to improve performance. The unsupervised single-channel overlapped speech recognition problem further includes the speaker tracing problem, which is also a sequence level problem. Previous works concatenate frame level CE as the criteria of the sequence level problem, which limits the modeling effect of the neural network. In this paper, sequence discriminative training of multiple output streams is proposed for the first time. In single-output ASR, to form a sequence discriminative training criterion, it is necessary to calculate the sequence posterior probability using Bayes' theorem as below, $$\begin{split} P(\mathbf {L}_u|\mathbf {O}_u)=\frac{p(\mathbf {O}_u|\mathbf {L}_u)P(\mathbf {L}_u)}{p(\mathbf {O}_u)} \end{split}$$ (Eq. 41) Here, $\mathbf {L}_u$ is the word sequence of utterance $u$ . $P(\mathbf {L}_u)$ is the language model probability. $p(\mathbf {O}_u|\mathbf {L}_u)$ is the corresponding acoustic part. The marginal probability $p(\mathbf {O}_u)$ of the feature sequence $\mathbf {O}_u$ , is modeled by summation of the probability over all possible hypothesis sequences. $$\begin{split} p(\mathbf {O}_u)=\sum _\mathbf {L} p(\mathbf {O}_u,\mathbf {L})= \sum _\mathbf {L} P(\mathbf {L}) p(\mathbf {O}_u|\mathbf {L}) \end{split}$$ (Eq. 42) Here, $\mathbf {L}$ denotes all competing hypotheses. As an example of the sequence discriminative training criteria, the maximum mutual information (MMI) BIBREF26 of inference distribution stream $\mathbf {L}_u$ in utterance $u$ is defined as below, $$\begin{split} \mathcal {J}_{\text{SEQ}}(\mathbf {L}_u,\mathbf {L}_u^{(r)})= \log P(\mathbf {L}_u^{(r)}|\mathbf {O}_u) \end{split}$$ (Eq. 43) where $\mathbf {L}_u^{(r)}$ is the corresponding reference. For the overlapped speech recognition problem, the conditional independence assumption in the output label streams is still made as in Equation ( 5 ). Then the cross-entropy based PIT can be transformed to sequence discriminative criterion based PIT as below, $$\begin{split} \mathcal {J}_{\text{SEQ-PIT}}=\sum _u \min _{s^{\prime }\in \mathbf {S}} \frac{1}{N} \sum _{n\in [1,N]}-\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)}) \end{split}$$ (Eq. 44) Different from Equation ( 7 ), the best permutation is decided by $\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)})$ , which is the sequence discriminative criterion of taking the $s^{\prime }$ -th permutation in $n$ -th output inference stream at utterance $u$ . Similar to CE-PIT, $\mathcal {J}_{\text{SEQ}}$ of all the permutations are calculated and the minimum permutation is taken to do the optimization. The definition of $\mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)})$ is similar to Equation ( 43 ) in single output ASR. 
$$\begin{split} \mathcal {J}_{\tt {MMI}} =\sum _u \mathcal {J}_{\text{SEQ}}(\mathbf {L}_{un}^{(s^{\prime })},\mathbf {L}_{un}^{(r)}) \\ =\sum _{u} \log \frac{\sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)}{\sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})} \end{split}$$ (Eq. 45) For simplicity, $\mathbf {L}_u=\mathbf {L}_{un}^{(r)}$ and $\mathbf {L}$ is all the hypothesis sequences generated by the output stream $\mathbf {L}_{un}^{(s^{\prime })}$ . $p(\mathbf {O}_u|\mathbf {L}_u)$ and $p(\mathbf {O}_u|\mathbf {L})$ is the conditional likelihood obtained from forward propagation of the joint model, $P(\mathbf {L}_u)$ and $P(\mathbf {L})$ is the prior probability obtained from language model. The hypothesis sequence $\mathbf {L}$ refers to all the competing hypotheses in the speech recognition. Bad modeling of $\mathbf {L}$ , namely ignoring some modeling errors, results in imprecise estimation of Equation ( 42 ), which hurts ASR performance. Thus competing hypotheses modeling is key to the discriminative training. Empirically, it is constrained by linguistic search space. In the single output ASR, the linguistic search space is further pruned by the online method, lattice-based discriminative training BIBREF27 , or by the offline method, lattice-free (LF) discriminative training BIBREF28 , BIBREF29 , BIBREF30 . Compared with single output ASR, the additional error hypothesis types include: i) Assignment errors: someone said a word, but it is assigned to the wrong channel. ii) cross talk errors: one person said a word, but it appears in multiple channels. They both come from imperfect acoustic modeling and result in several challenges in discriminative training. The first problem is linguistic search space modeling. As discussed in the first paragraph, there are mainly two branches of methods. When training the joint model, speaker tracing results can always change, which results in different permutations of the same utterance between different epochs. Thus if using lattice-based method, lattices should be updated after each epoch in case of bias in the search space modeling. Another choice is to use a pre-pruned senone level language model as the common search space for all utterances BIBREF30 . With this method, the lattice generation problem in multiple outputs can be solved and the discriminative training can be conducted efficiently in the shared search space. The second problem is the swapped word modeling in multiple outputs. Swapped word results in both cross talk errors and assignment errors. Thus bad swapped word modeling hurts the ASR performance. Generally, the linguistic search space is estimated from the transcription of the training dataset. And then sequence criterion is calculated in this search space. Because there's no swapped word phenomenon in the transcription, thus the search space doesn't contain swapped word cases, which results in overestimating the sequence criterion. Especially for the multiple output streams, the swapped word errors are critical to the ASR performance. Three methods are proposed to cope with the problem. [leftmargin=*] Artificial swapped words. A very simple method is to generate several copies of the transcription with artificially swapped words in each copy. And then the language model is estimated on the re-generated transcription. With this method, some of the swapped word cases can still exist in the search space. Thus the problem is alleviated. 
Empirically, the senone level language model is obtained from the senone level transcription, namely clustered tri-phone state alignment. In case of significant increasing in the search space because of the swapped word, we set a rule that in each frame, the probability of senone swapping is $\alpha $ . But if the senone is swapped, the senone sequence of the following $\beta $ frames won't be swapped. And $\gamma $ copies of the transcriptions are generated. De-correlated lattice free MMI (LF-DC-MMI). The motivation is that swapped words come from the other output streams. Thus adding these output streams into the search space and minimizing them in the denominator of discriminative training can alleviate the problem. $$\begin{split} \mathcal {J}_{\tt {LF\text{-}DC\text{-}MMI}} =\sum _{u} \log [ \frac{\sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)}{(\ \sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})\ )^{1-\lambda } } \cdot \\ \frac{1}{(\ {\sum _{\mathbf {L}_{\hat{u}}}} p(\mathbf {O}_u|{\mathbf {L}_{\hat{u}}})^{\kappa }P({\mathbf {L}_{\hat{u}}})\ )^\lambda } ] \end{split}$$ (Eq. 49) In Equation ( 49 ), the other output streams are denoted as $\mathbf {L}_{\hat{u}}$ . An interpolation weight $\lambda $ is added with the augmented term in the denominator. De-correlated lattice free boosted MMI (LF-DC-bMMI). Analogous to boosted MMI BIBREF31 as Equation ( 51 ), $$\begin{split} \mathcal {J}_{\tt {LF\text{-}bMMI}} =\sum _{u} \log \frac{\sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)}{\sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})e^{-b\ \mathop {\max }_{\mathbf {L}_u} A(\mathbf {L},\mathbf {L}_u)}} \end{split}$$ (Eq. 51) we propose de-correlated lattice free boosted MMI (LF-DC-bMMI) as Equation ( 52 ). Here, $b$ is the boosting factor. $A(\mathbf {L},\mathbf {L}_u)$ is the state level accuracy between sequence $\mathbf {L}$ and $\mathbf {L}_u$ . By this method, the ASR error hypotheses can be further minimized in the denominator. In the proposed method, both the ASR errors between the target inference sequence and the target reference, and the falsely-recognition of the interfere streams, are boosted. $$\begin{split} \mathcal {J}_{\tt {LF\text{-}DC\text{-}bMMI}} =\sum _{u} \log \ [\ \sum _{\mathbf {L}_u} p(\mathbf {O}_u|\mathbf {L}_u)^{\kappa }P(\mathbf {L}_u)\cdot \\ \frac{1}{\sum _{\mathbf {L}} p(\mathbf {O}_u|\mathbf {L})^{\kappa }P(\mathbf {L})e^{-b\ \mathop {\max }_{\mathbf {L}_u} A(\mathbf {L},\mathbf {L}_u) -\hat{b}\ {\mathop {\max }_{\mathbf {L}_{\hat{u}}}} (1-A(\mathbf {L},\mathbf {L}_{\hat{u}}) ) }}\ ] \end{split}$$ (Eq. 52) where $\hat{b}$ is the de-correlated boosting factor and $A(\mathbf {L},\mathbf {L}_{\hat{u}})$ measures how many falsely recognitions of the interfere streams. Experiments are conducted on all three methods in Section "Sequence Discriminative Training" and the first method can be further combined with the other two. Experiment The experimental results are reported in artificial overlapped Switchboard corpus and Eval2000 hub5e-swb test set. Although the methods presented here are valid for any number of overlapped speakers, we focus on the two-talker scenario. Experimental Setup For training, the Switchboard corpus BIBREF32 is used, which contains about 300 hours of speech. Evaluation is carried out on the Switchboard (SWB) subset of the NIST 2000 CTS (hub5e-swb) test set. 
The waveforms were segmented according to the NIST partitioned evaluation map (PEM) file. Two-talker overlapped speech is artificially generated by mixing these waveform segments. To maximize the speech overlap, we developed a procedure to mix similarly sized segments at around 0dB. First, we sort the speech segments by length. Then, we take segments in pairs, zero-padding the shorter segment so both have the same length. These pairs are then mixed together to create the overlapped speech data. The overlapping procedure is similar to BIBREF13 except that we make no modification to the signal levels before mixing. After overlapping, there are 150 hours of data for training, called the 150 hours dataset, and 915 utterances in the test set. After decoding, there are 1830 utterances for evaluation; the shortest utterance in the hub5e-swb dataset is discarded. Additionally, we define a small training set, the 50 hours dataset, as a random 50-hour subset of the 150 hours dataset. Results are reported using both datasets. In the training stage, 80-dimensional log-filterbank features were extracted every 10 milliseconds, using a 25-millisecond analysis window. The convolutional neural network (CNN) models use 41 context frames (20 on both the left and the right), and the long short-term memory (LSTM) networks process one frame of input at a time. All neural networks were trained with the Microsoft Cognitive Toolkit (CNTK) BIBREF33 . The detailed setup of the CNN is listed in Section "Separate Optimization v.s. Joint Modeling" . The acoustic model is based on three-state left-to-right triphone models with 9000 tied states (senones). The individual senone alignments for the two talkers in each mixed speech utterance are from the single-speaker ASR alignment BIBREF30 . For compatibility, the alignment of the shorter utterance within the mixed speech is padded with the silence state at the front and the end. The clean speech recognition performance on this corpus can be found in BIBREF30 , BIBREF29 . Decoding the overlapped speech with the clean speech model is not reported, as it is as poor as in BIBREF13 . The baseline model of joint training is a PIT-ASR model with a setup similar to BIBREF13 . The PIT-ASR model is composed of 10 bidirectional LSTM layers with 768 memory cells in each layer, and 80-dimensional input features. The baseline model of the separately optimized system is a PIT for speech separation (PIT-SS) model combined with a clean speech ASR model. As the PIT-SS model has shown competitive performance compared with other speech separation systems, only the PIT-SS model is taken as the baseline. The PIT-SS model has a setup similar to BIBREF12 but with 6 bidirectional LSTM layers with 768 memory cells in each layer; it directly outputs multiple channels of the 80-dimensional log-filterbank features that the speech recognition module expects. The speech recognition module, pretrained as a clean speech model, is composed of 4 bidirectional LSTM layers with 768 memory cells in each layer. It is trained from the corresponding source speech segments used to create the overlapped corpus. After initialization, the WER performance of the ASR model on the clean speech test set is 17.0%. Although it would be easy to incorporate a stronger acoustic model BIBREF30 in conjunction with the proposed method, we chose a structure that allows for a fair comparison, in terms of the number of model parameters, among the baselines and proposed methods.
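As a brief aside, the two-talker overlapping procedure described at the start of this section can be sketched as follows. This is a minimal illustration under the stated assumptions (segments sorted by length, adjacent pairing, zero-padding of the shorter segment, no level modification before mixing); the exact pairing and corpus bookkeeping in the original pipeline may differ.

```python
# Sketch of the overlapping procedure: sort segments by length, take them in
# pairs, zero-pad the shorter one and add the waveforms (roughly 0 dB, since no
# level modification is applied before mixing).
import numpy as np

def mix_pairs(segments):
    """segments: list of 1-D numpy arrays holding single-speaker waveform segments."""
    ordered = sorted(segments, key=len)
    mixtures = []
    for shorter, longer in zip(ordered[0::2], ordered[1::2]):
        padded = np.zeros(len(longer), dtype=longer.dtype)
        padded[: len(shorter)] = shorter                     # zero-pad the shorter segment
        mixtures.append((padded + longer, padded, longer))   # (mixture, source 1, source 2)
    return mixtures

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segs = [rng.standard_normal(n).astype(np.float32) for n in (16000, 12000, 20000, 18000)]
    for mix, s1, s2 in mix_pairs(segs):
        print(len(mix))
```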
As discussed in Section "Modularization" , however, the modular system needs fewer parameters and training iterations to achieve good performance. Notably, the use of enhanced signals after speech separation as training data for the speech recognition module tends to degrade ASR performance and is not included. The reason may be the sporadic distortions that signal processing inevitably adds, similar to what has been observed in BIBREF18 . In the evaluation stage, a 30k-vocabulary language model derived from the most common words in the Switchboard and Fisher corpora is used. The decoder uses a statically compiled unigram graph, and dynamically applies the language model score. The unigram graph has about 300k states and 500k arcs BIBREF30 . Both outputs of the PIT-ASR model are used in decoding to obtain the hypotheses for the two talkers. For scoring, we evaluated the hypotheses in the pairwise scoring mode against the two references, and used the assignment with the better word error rate (WER) for each utterance BIBREF13 . Only the average WER of the two output streams is reported, as the task is to correctly recognize all words from both speakers. Separate Optimization v.s. Joint Modeling Table 1 shows the performance of the naive joint modeling, PIT-ASR, compared with that of the separately optimized system. The first row shows the performance of the joint training baseline model in this corpus, PIT-ASR BIBREF13 . Compared with the 0dB WER result listed in BIBREF13 , 55.80%, the performance is reasonable. As discussed in Section "Modularization" , the separately optimized system has a similar number of parameters but a different model architecture. Thus, to make a fair comparison, the model with 6 stream-independent layers at the bottom and 4 parameter-shared stream-dependent layers on top, denoted as 6 $\cdot $ 4, is listed in the second row. The learnable structure is the same as the dash-dot blocks shown in Figure 2 (e), but trained from scratch like the 10 $\cdot $ 0 model. The performance of the 6 $\cdot $ 4 structure is significantly better than that of the 10 $\cdot $ 0 structure. The reason is that unlike in the pure speech separation task, the speech recognition stage in this task is also very hard and needs more nonlinear layers. It also shows that this task is much harder than speech separation, so a better joint training method that fulfills the potential of each module is critical to success. Without parameter fine-tuning, the performance of the separately optimized system is shown in the third row. The significantly worse performance comes from the feature mismatch in Equation ( 6 ). With parameter fine-tuning, the performance is restored in the fourth and fifth rows. The system in the fifth row can be viewed as a strong baseline with separate optimization and fine-tuning. The better performance of progressive joint training comes from better model generalization and training efficiency. Figure 4 shows the effect. Training curves of both joint modeling, i.e. the second row in the table, and progressive joint modeling, i.e. the fifth row in the table, are plotted. From the figure, both a better starting point and a better converged minimum can be observed for progressive joint training. With the better joint training strategy shown in Section "Self-transfer Learning Based Joint Modeling" , this modeling effect can be exploited further. Table 2 shows the performance of joint modeling with different modular initialization setups.
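As a short aside before continuing with these results, the pairwise scoring protocol described above can be sketched as follows: both assignments of the two decoded streams to the two references are scored, and the assignment with the lower word error rate is kept. The word-level edit distance and the toy sentences below are illustrative only.

```python
# Sketch of pairwise scoring: align the two hypotheses to the two references in
# both possible assignments and keep the assignment with the lower WER.
def edit_distance(ref, hyp):
    # standard Levenshtein distance over word lists
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)]

def pairwise_wer(refs, hyps):
    """refs, hyps: two word lists each; returns the WER of the better assignment."""
    (r1, r2), (h1, h2) = refs, hyps
    errors = min(edit_distance(r1, h1) + edit_distance(r2, h2),
                 edit_distance(r1, h2) + edit_distance(r2, h1))
    return errors / max(1, len(r1) + len(r2))

if __name__ == "__main__":
    refs = ("the cat sat down".split(), "hello there friend".split())
    hyps = ("hello their friend".split(), "the cat sat".split())
    print(pairwise_wer(refs, hyps))
```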
All modular systems are fine-tuned after joint training. The first and second rows show the naive joint training models with BLSTM and BLSTM combined with CNN, respectively. 6 $\cdot $ 4 BLSTM refers to 6 BLSTM layers at the bottom and 4 parameter-shared layers for each output on top, i.e. 6 $\cdot $ 4 in Table 1 . The layer-wise context expansion with attention (LACE) model is used as the CNN BIBREF36 ; it is a TDNN BIBREF37 variant where each higher layer is a weighted sum of nonlinear transformations of a window of lower-layer frames. Each LACE block starts with a convolution layer with stride 2, which sub-samples the input and increases the number of channels. This layer is followed by three ReLU-convolution layers with jump links. The channel size is 48. The slightly different setup compared with BIBREF30 is chosen to make the number of parameters of one LACE block comparable with that of one bidirectional LSTM layer with 768 memory cells, i.e. 5M parameters. Only one LACE block is used to replace one BLSTM layer as the frame-wise interpreting module, because the speaker tracing and speech recognition modules are the hardest parts of the problem. The other parts of the structure are the same, and the proposed structure is denoted as 1 LACE + 5 $\cdot $ 4 BLSTM. From the table, it can be observed that there is no improvement from merely stacking these kinds of neural networks together and jointly training them. In the third and fourth rows, the model is first initialized with the frame-wise interpreting, Figure 2 (b), speaker tracing, Figure 2 (c), and speech recognition, Figure 2 (d), tasks respectively and then jointly trained. Comparing the fourth row to the third row, 1 LACE + 5 $\cdot $ 4 BLSTM shows a larger improvement than 6 $\cdot $ 4 BLSTM. Two conclusions can be derived from the results: i) The CNN structure is more suitable for the frame-wise interpreting module because it focuses on local context and has better modeling power for frequency variations BIBREF38 . Meanwhile, the LSTM is good at temporal modeling, which is more suitable for the speaker tracing and speech recognition modules. The architecture with 1 LACE + 5 $\cdot $ 4 BLSTM layers combines their strengths. Notably, BIBREF38 proposes a similar structure, called CLDNN, to form the acoustic model with modules focusing on different scales and to take advantage of the complementarity of CNN, LSTM and DNN. The difference is that, to fulfill the respective advantages in modeling, the proposed method further pretrains each module with different criteria. ii) As the performance improvement from modularization and initialization is much larger for 1 LACE + 5 $\cdot $ 4 BLSTM, it shows that module-wise initialization is important to exploit the modeling power of neural networks, especially with different structures and scales. To further analyze the frame-wise interpreting ability of CNN and BLSTM, experiments without frame-wise interpreting initialization are conducted in the fifth and sixth rows. This time, the performances are similar for both 6 $\cdot $ 4 BLSTM and 1 LACE + 5 $\cdot $ 4 BLSTM. It shows that during initialization of the speaker tracing module, both BLSTM and CNN can spontaneously learn the frame-wise interpreting ability. We notice that in BIBREF12 , frame-wise PIT training does not show good performance, which is in line with our observation. Self-transfer Learning Based Joint Modeling Table 3 shows the performance improvement from transfer learning applied to joint modeling.
For transfer learning, the interpolation weight between hard and soft labels is 0.5. The original PIT-ASR system is in the first row, and a better PIT-ASR baseline with the 6 $\cdot $ 4 structure in Table 1 is also included in the fourth row. The ensemble-based transfer learning proposed in Section UID37 is tested in the second row. The ensemble contains 3 types of structures, 9 $\cdot $ 1, 6 $\cdot $ 4 and 3 $\cdot $ 7, where the left number denotes the number of bottom stream-independent layers and the right number denotes the number of top stream-dependent layers. The student network learns from each teacher one-by-one. Although it is not a large gain, improvement is observed after learning from each teacher. The result of replacing hard labeling with simultaneous clean speech based transfer learning is listed in the third and the fifth rows. In both model architectures, transfer learning brings about a relative 10% improvement over the respective baseline, which is comparable with the result in BIBREF17 . It shows that the soft distribution inferred by a model with a similar architecture is superior to hard labeling. The self-transfer learning based progressive joint modeling is finally listed in the sixth and seventh rows, using a CE-trained and an MMI-trained clean speech teacher respectively. Notably, following the model framework discussed in "Transfer Learning Based Joint Training" , the speech recognition modules are initialized from the respective clean speech teachers. The result shows over 30% relative improvement. Comparing the third and the fifth rows with Table 2 , it can further be observed that combining progressive joint training and self-transfer learning brings about an even larger improvement than the sum of the relative improvements from each of the two technologies. The learning curve of the proposed method is also shown in Figure 4 . From these results, we conclude: i) The proposed method brings about faster convergence and a better converged minimum. The reason is discussed in Section "Transfer Learning Based Joint Training" . The better convergence result also comes from the removal of the inappropriate hard alignment in the joint training. ii) Easier convergence helps the model achieve the best performance in each module. This explains the synergy, where the combined gain exceeds the sum of the relative improvements from transfer learning and progressive joint training. iii) A better teacher generates a better student, and the MMI-trained distribution can also be transferred to the student model, similar to what has been observed in BIBREF39 . Figure 5 further shows the student performance versus the quality of the teacher in transfer learning based joint modeling. It can be observed that a better student can usually be obtained with a better teacher. An important reason is that self-transfer learning is conducted by minimizing the divergence of the model's own distributions in mixed speech and clean speech. Thus, a better original distribution, including the MMI-trained distribution, can intrinsically become part of the joint model and bring about better results. The only inflection point is at epoch=3 of the MMI teacher, where the student performance is similar to that at epoch=1 although the teacher model has better WER performance. We believe the reason is that the distribution of the teacher model at epoch=3 is hard to transfer to the student model because of the transition from the CE-trained distribution to the MMI-trained distribution.
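A minimal sketch of the interpolated hard/soft training objective used in these transfer learning experiments is given below, with the interpolation weight of 0.5 mentioned above. The KL-divergence form of the soft term, the senone count and the placeholder tensors are assumptions for illustration; in the actual system the teacher posteriors come from the clean speech model fed with the corresponding simultaneous clean speech.

```python
# Sketch of an interpolated hard-label / soft-label objective for the
# teacher-student (self-transfer learning) setup, interpolation weight 0.5.
import torch
import torch.nn.functional as F

def interpolated_loss(student_logits, hard_labels, teacher_posteriors, weight=0.5):
    log_probs = F.log_softmax(student_logits, dim=-1)
    hard_loss = F.nll_loss(log_probs, hard_labels)                               # CE against hard alignment
    soft_loss = F.kl_div(log_probs, teacher_posteriors, reduction="batchmean")   # match the teacher posteriors
    return weight * soft_loss + (1.0 - weight) * hard_loss

if __name__ == "__main__":
    logits = torch.randn(8, 9000, requires_grad=True)       # 9000 senones, batch of 8 frames
    labels = torch.randint(0, 9000, (8,))
    teacher = torch.softmax(torch.randn(8, 9000), dim=-1)   # placeholder teacher posteriors
    loss = interpolated_loss(logits, labels, teacher)
    loss.backward()
    print(float(loss))
```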
Sequence Discriminative Training Table 4 shows the performance improvement from sequence discriminative training based joint modeling. In this experiment, the baseline is the PIT-ASR model without progressive joint training. All the structures are kept the same as 10 $\cdot $ 0 and only the criteria are changed. $\kappa =0.1$ , which is in accordance with the decoding setup. The senone-level language model for competing hypothesis modeling is obtained from the clustered tri-phone state alignment. A tri-gram model is used, similar to BIBREF30 . For the artificial swapped word method, the hyper-parameters in Section UID46 are decided by a preliminary experiment. $\alpha =0.4$ , $\beta =10$ and $\gamma =2$ . The swapped word senone-level search graph is 4 times larger than the original tri-gram senone-level language model graph. $\lambda =0.1$ , $b=0.1$ and $\hat{b}=0.2$ . The baseline PIT-ASR system is shown in the first row, denoted as PIT-CE to indicate the criterion of the system. Applying the naive sequence discriminative training method for multiple outputs in the second row only brings about a 4.9% relative improvement. In the recognition results, word precision is improved but insertion errors increase. The reason is the imperfect search space modeling of swapped words discussed in "Multi-output Sequence Discriminative Training" . By applying the proposed LF-DC-MMI method in the third row, the recognition result is significantly improved compared with both the baseline and the naive LF-MMI. The proposed method minimizes the swapped words from the parallel output streams in the denominator modeling, so the problem is alleviated. The fourth and fifth rows show the effect of using bMMI instead of MMI in the formulations. The bMMI criterion boosts the ASR errors in the denominator modeling, implicitly including possibly swapped words. Although a significant improvement can be observed from LF-MMI to LF-bMMI, LF-DC-bMMI explicitly includes the swapped words and achieves an even better result. The proposed artificial swapped word method is shown in the sixth and seventh rows. Comparing the sixth row with the second row, and the seventh row with the fifth row, shows a slight but consistent improvement in solving the swapped word problem. The method can also be combined with LF-DC-bMMI to achieve an 8.2% relative improvement versus the CE-trained PIT-ASR baseline. In the eighth to tenth rows, experiments are conducted on the 150 hours corpus. The results are similar, and the LF-DC-bMMI criterion shows consistent improvement versus PIT-CE and the naive sequence discriminative training criterion. In Section "Combination and Extension to Larger Corpus" , it is shown that sequence discriminative training can be combined with other technologies and achieves further consistent and significant improvement. The discriminative training criterion helps the system training in two ways. Firstly, the sequence-level criterion helps the sequence-level speaker tracing problem in PIT modeling. Specifically, linguistic information is encoded in the senone-level language model used in discriminative training. Thus the procedure implicitly integrates linguistic information into the speaker tracing problem. Secondly, sequence discriminative training improves the speech recognition module. Notably, all the sequence discriminative training procedures are applied after CE initialization as in BIBREF30 . With this initialization, it also helps Equation ( 44 ) reach a better minimum.
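To make the artificial swapped word rule concrete with the hyper-parameter values used here ( $\alpha =0.4$ , $\beta =10$ , $\gamma =2$ ), the following minimal sketch generates swapped copies of a two-channel frame-level senone alignment. The per-frame interpretation of the swap probability and the blocking window is an assumption based on the description above; the actual graph construction in the original system may differ.

```python
# Sketch of artificial swapped word (senone) generation: in each frame the two
# channels are swapped with probability alpha, after a swap the next beta frames
# are left unswapped, and gamma swapped copies of the transcription are produced.
import random

def swap_alignments(alignment_a, alignment_b, alpha=0.4, beta=10, gamma=2, seed=0):
    rng = random.Random(seed)
    copies = []
    for _ in range(gamma):
        a, b = list(alignment_a), list(alignment_b)
        block = 0                              # frames left in the no-swap window
        for t in range(len(a)):
            if block > 0:
                block -= 1
                continue
            if rng.random() < alpha:
                a[t], b[t] = b[t], a[t]        # swap the senones of the two channels
                block = beta
        copies.append((a, b))
    return copies

if __name__ == "__main__":
    ali_a = [3, 3, 3, 7, 7, 9, 9, 9, 9, 2, 2, 2]
    ali_b = [5, 5, 8, 8, 8, 8, 1, 1, 1, 1, 6, 6]
    for a, b in swap_alignments(ali_a, ali_b):
        print(a)
        print(b)
```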
Combination and Extension to Larger Corpus Table 5 summarizes the performance improvement from integrating all the proposed methods. The PIT-ASR model BIBREF13 , denoted as PIT-CE, is taken as the baseline of naive joint modeling in the first row. The separately optimized system, namely PIT-SS+ASR, is not included here. As shown in Table 2 , the performance deteriorates because of feature mismatch. Instead, the proposed progressive joint training model in the second row can be taken as a stronger separately optimized system with fine-tuning. The proposed self-transfer learning based joint training model shows further significant improvement in the third and fourth rows. Finally, the multi-output sequence discriminative training is applied and achieves a moderate improvement, although the teacher model is already MMI-trained, similar to what has been observed in BIBREF40 . Figure 6 shows decoding examples of the proposed methods versus the PIT baseline. The baseline contains many errors due to poor model generalization with the limited dataset size. With the proposed methods, errors are significantly reduced. Notably, in this example, the self-transfer learning based progressive joint training mainly reduces errors from similar pronunciations, while sequence discriminative training mainly reduces explicit syntax or linguistic errors, which is in line with expectations. With a similar number of parameters but different neural networks, namely 1 LACE + 5 $\cdot $ 4 BLSTM, the system is consistently improved in the sixth to ninth rows. We believe the further improvement comes from the proper modularization of the problem, which is discussed in Section "Separate Optimization v.s. Joint Modeling" . Table 6 expands the dataset to 150 hours to show the effect of more training data. The naive joint training baseline in the first row significantly benefits from more data and shrinks the gap to the proposed progressive joint training model in the second row. However, it is still significantly worse than the self-transfer learning and sequence discriminative training based joint model trained on 50 hours of data in Table 5 . This again shows the disadvantages of large model complexity and insufficient model generalization discussed in Section "Unsupervised Single-channel Overlapped Speech Recognition" , i.e., compared with merely increasing the data, the better way to solve the problem is to improve model generalization. Besides, the convergence of the naive joint training model on the larger dataset is even slower, namely 4 times more epochs than the proposed method. Comparing Table 6 with Table 5 , the proposed self-transfer learning based joint training and multi-output sequence discriminative training show consistent relative improvement versus the progressive joint training. Compared with Table 5 , sequence discriminative training achieves a larger relative improvement on the CE-trained teacher based system. On both the 50 hours and 150 hours corpora, the proposed method achieves over 30% relative improvement versus the PIT-ASR system and the PIT-SS+ASR system. The improvement comes from better model generalization, training efficiency and sequence-level linguistic knowledge integration. Although this paper addresses the case of simultaneous speech of two people talking at a relative level of 0dB, we believe it will be straightforward to extend the system to handle more realistic conditions.
The case where one speaker is louder than the other has already been observed to be easier for PIT-style models than the 0dB data explored in this paper BIBREF13 . For more than two speakers, extension of the proposed system should follow the same construction described in BIBREF34 . Finally, we expect robustness to background noise and reverberation to come from standard techniques such as multi-condition training BIBREF41 . Conclusion In this work, we proposed to divide the single-channel overlapped speech recognition problem into three sub-problems: frame-wise interpreting, speaker tracing and speech recognition. Each module is first optimized separately with specifically designed criteria, which significantly improves system generalization and training efficiency. After the initialization, modules are jointly trained with two novel strategies: self-transfer learning and multi-output sequence discriminative training. Specifically, in the joint training stage, the clean speech model fine-tunes its parameters together with the other modules on overlapped speech to fit its own distribution on the simultaneous clean speech. Then, sequence discriminative training designed for multiple outputs is applied to integrate linguistic and sequence information. The proposed framework achieves 30% relative improvement over both a strong jointly trained system, PIT-ASR, and a separately optimized system, PIT-SS+ASR. The proposed framework shows promising directions for future improvement: i) integrating state-of-the-art technologies into each module in the initialization stage, e.g., DPCL BIBREF8 ; ii) applying other sequence-level criteria to improve the speaker tracing and speech recognition modules, e.g., connectionist temporal classification (CTC) BIBREF42 ; iii) explicit integration of the language model in the joint modeling, e.g., joint decoding BIBREF5 and end-to-end modeling BIBREF43 . Acknowledgment We thank Chris Basoglu and Frank Seide for their invaluable assistance with CNTK; Mike Seltzer, Takuya Yoshioka, Hakan Erdogan and Andreas Stolcke for many helpful conversations. The first author would like to further thank Jerry and Juncheng Gu for their support during the internship. Zhehuai Chen received his B.S. degree from the Department of Electronic and Information Engineering, Huazhong University of Science and Technology, China, in 2014. He is currently a Ph.D. candidate at Shanghai Jiao Tong University working on speech recognition. His current research interests include speech recognition, speech synthesis and deep learning.
we sort the speech segments by length, we take segments in pairs, zero-padding the shorter segment so both have the same length, These pairs are then mixed together
20f7b359f09c37e6aaaa15c2cdbb52b031ab4809
20f7b359f09c37e6aaaa15c2cdbb52b031ab4809_0
Q: What baseline system is used? Text: Introduction Mining Twitter data has increasingly been attracting much research attention in many NLP applications such as sentiment analysis BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and stock market prediction BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Recently, Davidov2010 and Reyes2013 have shown that Twitter data includes a high volume of “ironic” tweets. For example, a user can use positive words in a Twitter message to convey her intended negative meaning (e.g., “It is awesome to go to bed at 3 am #not”). This poses a particular research challenge for assigning correct sentiment labels to ironic tweets BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . To handle that problem, much attention has been focused on automatic irony detection in Twitter BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF13 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . In this paper, we propose a neural network model for irony detection in tweets. Our model obtains the fifth best performance in both the binary and multi-class irony detection subtasks in terms of F1 score BIBREF25 . Details of the two subtasks can be found in the task description paper BIBREF25 . We briefly describe the subtasks as follows: Dataset The dataset consists of 4,618 tweets (2,222 ironic + 2,396 non-ironic) that are manually labelled by three students. Some pre-processing steps were applied to the dataset; for example, the emoji icons in a tweet are replaced by a describing text using the Python emoji package. Additionally, all the ironic hashtags, such as #not, #sarcasm, #irony, in the dataset have been removed. This makes it difficult to correctly predict the label of a tweet. For example, “@coreybking thanks for the spoiler!!!! #not” is an ironic tweet but without #not, it probably is a non-ironic tweet. The dataset is split into the training and test sets as detailed in Table TABREF5 . Note that there is also an extended version of the training set, which contains the ironic hashtags. However, we only use the training set that does not contain the ironic hashtags to train our model, as it is in line with the test set. Our modeling approach We first describe our MLP-based model for ironic tweet detection in Section SECREF7 . We then present the features used in our model in Section SECREF8 . Neural network model We propose to use the Multilayer Perceptron (MLP) model BIBREF28 to handle both ironic tweet detection subtasks. Figure FIGREF3 presents an overview of our model architecture including an input layer, two hidden layers and a softmax output layer. Given a tweet, the input layer represents the tweet by a feature vector which concatenates lexical, syntactic, semantic and polarity feature representations. The two hidden layers with ReLU activation functions take the input feature vector and select the most important features, which are then fed into the softmax layer for irony detection and classification. Features Table TABREF11 shows the number of lexical, syntactic, semantic and polarity features used in our model. Our lexical features include 1-, 2-, and 3-grams at both word and character levels. For each type of n-grams, we utilize only the top 1,000 n-grams based on the term frequency-inverse document frequency (tf-idf) values. That is, each n-gram appearing in a tweet becomes an entry in the feature vector with the corresponding tf-idf feature value. 
We also use the number of characters and the number of words as features. We use the NLTK toolkit to tokenize and annotate part-of-speech tags (POS tags) for all tweets in the dataset. We then use all the POS tags with their corresponding tf-idf values as our syntactic features and feature values, respectively. A major challenge when dealing with tweet data is that the lexicon used in a tweet is informal and differs greatly from tweet to tweet. The lexical and syntactic features do not seem to capture this property well. To handle this problem, we apply three approaches to compute tweet vector representations. Firstly, we employ 300-dimensional pre-trained word embeddings from GloVe BIBREF29 to compute a tweet embedding as the average of the embeddings of words in the tweet. Secondly, we apply latent semantic indexing BIBREF30 to capture the underlying semantics of the dataset. Here, each tweet is represented as a 100-dimensional vector. Thirdly, we also extract a tweet representation by applying the Brown clustering algorithm BIBREF31 , BIBREF32 —a hierarchical clustering algorithm which groups words with similar meanings and syntactic functions together. Applying the Brown clustering algorithm, we obtain a set of clusters, where each word belongs to only one cluster. For example, in Table TABREF13 , words that indicate the members of a family (e.g., “mum”, “dad”) or positive sentiment (e.g., “interesting”, “awesome”) are grouped into the same cluster. We run the algorithm with different numbers of clusters (i.e., 80, 100, 120) to capture multiple semantic and syntactic aspects. For each clustering setting, we use the number of tweet words in each cluster as a feature. After that, for each tweet, we concatenate the features from all the clustering settings to form a cluster-based tweet embedding. Motivated by verbal irony by means of polarity contrast, such as “I really love this year's summer; weeks and weeks of awful weather”, we use the number of polarity signals appearing in a tweet as polarity features. The signals include positive words (e.g., love), negative words (e.g., awful), positive emoji icons and negative emoji icons. We use the sentiment dictionaries provided by BIBREF33 to identify positive and negative words in a tweet. We further use boolean features that check whether or not a negation word is in a tweet (e.g., not, n't). Implementation details We use Tensorflow BIBREF34 to implement our model. Model parameters are learned to minimize the cross-entropy loss with L INLINEFORM0 regularization. Figure FIGREF16 shows our training mechanism. In particular, we follow a 10-fold cross-validation based voting strategy. First, we split the training set into 10 folds. Each time, we combine 9 folds to train a classification model and use the remaining fold to find the optimal hyperparameters. Table TABREF18 shows the optimal settings for each subtask. In total, we have 10 classification models, producing 10 predicted labels for each test tweet. Then, we use voting to return the final predicted label. Metrics The metrics used to evaluate our model include accuracy, precision, recall and F1. The accuracy is calculated using all classes in both tasks. The remaining metrics are calculated using only the positive label in subtask 1 or per class label (i.e., macro-averaged) in subtask 2. A detailed description of the metrics can be found in BIBREF25 . 
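A minimal sketch of the 10-fold training and voting scheme described above is given below. It uses scikit-learn's MLPClassifier as a stand-in for the TensorFlow model, and the hidden layer sizes and regularization strength are placeholder assumptions; in the actual system the held-out fold is used to tune the hyperparameters.

```python
# Sketch of the 10-fold cross-validation based voting strategy: train ten MLPs
# (two ReLU hidden layers) on different 9-fold splits and majority-vote their
# predictions on the test set.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def train_fold_models(X, y, n_splits=10, hidden=(512, 128), seed=0):
    models = []
    for train_idx, _ in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        clf = MLPClassifier(hidden_layer_sizes=hidden, activation="relu",
                            alpha=1e-4, max_iter=300, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])   # the held-out fold would be used for tuning
        models.append(clf)
    return models

def vote_predict(models, X_test):
    all_preds = np.stack([m.predict(X_test) for m in models]).astype(int)
    return np.array([np.bincount(col).argmax() for col in all_preds.T])  # majority vote

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    y = (X[:, 0] > 0).astype(int)
    models = train_fold_models(X, y)
    print(vote_predict(models, X[:5]), y[:5])
```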
Results for subtask 1 Table TABREF21 shows our official results on the test set for subtask 1 with regard to the four metrics. By using a simple MLP neural network architecture, our system achieves a high performance which is ranked third and fifth out of forty-four teams using the accuracy and F1 metrics, respectively. Results for subtask 2 Table TABREF23 presents our results on the test set for subtask 2. Our system also achieves a high performance which is ranked third and fifth out of thirty-two teams using the accuracy and F1 metrics, respectively. We also show in Table TABREF24 the performance of our system on different class labels. For the ironic classes, our system achieves its best performance on verbal irony by means of a polarity contrast, with an F1 of 60.73%. Note that the performance on the situational class is not high. The reason is probably that the number of situational tweets in the training set is small (205/3,834), i.e. not enough to learn a good classifier. Discussions Apart from the described MLP models, we have also tried other neural network models, such as Long Short-Term Memory (LSTM) BIBREF35 and Convolutional Neural Network (CNN) for relation classification BIBREF36 . We found that LSTM achieves much higher performance than MLP does on the extended training set containing the ironic hashtags (about 92% vs. 87% with 10-fold cross-validation using F1 on subtask 1). However, without the ironic hashtags, the performance is lower than MLP's. We also employed popular machine learning techniques, such as SVM BIBREF37 , Logistic Regression BIBREF38 , Ridge Regression Classifier BIBREF39 , but none of them produces results as good as MLP does. We have also implemented ensemble models, such as voting, bagging and stacking. We found that with the 10-fold cross-validation based voting strategy, our MLP models produce the best irony detection and classification results. Conclusion We have presented our NIHRIO system for participating in SemEval-2018 Task 3 on “Irony detection in English tweets”. We proposed to use a Multilayer Perceptron to handle the task using various features, including lexical, syntactic, semantic and polarity features. Our system was ranked the fifth best performing one with regard to F1 score in both the binary and multi-class irony detection subtasks. Acknowledgments This research is supported by the National Institute for Health Research (NIHR) Innovation Observatory at Newcastle University, United Kingdom.
Unanswerable
3efc0981e7f959d916aa8bb32ab1c347b8474ff8
3efc0981e7f959d916aa8bb32ab1c347b8474ff8_0
Q: What type of lexical, syntactic, semantic and polarity features are used? Text: Introduction Mining Twitter data has increasingly been attracting much research attention in many NLP applications such as sentiment analysis BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and stock market prediction BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Recently, Davidov2010 and Reyes2013 have shown that Twitter data includes a high volume of “ironic” tweets. For example, a user can use positive words in a Twitter message to convey her intended negative meaning (e.g., “It is awesome to go to bed at 3 am #not”). This poses a particular research challenge for assigning correct sentiment labels to ironic tweets BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . To handle that problem, much attention has been focused on automatic irony detection in Twitter BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF13 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . In this paper, we propose a neural network model for irony detection in tweets. Our model obtains the fifth best performance in both the binary and multi-class irony detection subtasks in terms of F1 score BIBREF25 . Details of the two subtasks can be found in the task description paper BIBREF25 . We briefly describe the subtasks as follows: Dataset The dataset consists of 4,618 tweets (2,222 ironic + 2,396 non-ironic) that are manually labelled by three students. Some pre-processing steps were applied to the dataset; for example, the emoji icons in a tweet are replaced by a describing text using the Python emoji package. Additionally, all the ironic hashtags, such as #not, #sarcasm, #irony, in the dataset have been removed. This makes it difficult to correctly predict the label of a tweet. For example, “@coreybking thanks for the spoiler!!!! #not” is an ironic tweet but without #not, it probably is a non-ironic tweet. The dataset is split into the training and test sets as detailed in Table TABREF5 . Note that there is also an extended version of the training set, which contains the ironic hashtags. However, we only use the training set that does not contain the ironic hashtags to train our model, as it is in line with the test set. Our modeling approach We first describe our MLP-based model for ironic tweet detection in Section SECREF7 . We then present the features used in our model in Section SECREF8 . Neural network model We propose to use the Multilayer Perceptron (MLP) model BIBREF28 to handle both ironic tweet detection subtasks. Figure FIGREF3 presents an overview of our model architecture including an input layer, two hidden layers and a softmax output layer. Given a tweet, the input layer represents the tweet by a feature vector which concatenates lexical, syntactic, semantic and polarity feature representations. The two hidden layers with ReLU activation functions take the input feature vector and select the most important features, which are then fed into the softmax layer for irony detection and classification. Features Table TABREF11 shows the number of lexical, syntactic, semantic and polarity features used in our model. Our lexical features include 1-, 2-, and 3-grams at both word and character levels. For each type of n-grams, we utilize only the top 1,000 n-grams based on the term frequency-inverse document frequency (tf-idf) values. That is, each n-gram appearing in a tweet becomes an entry in the feature vector with the corresponding tf-idf feature value. 
We also use the number of characters and the number of words as features. We use the NLTK toolkit to tokenize and annotate part-of-speech tags (POS tags) for all tweets in the dataset. We then use all the POS tags with their corresponding tf-idf values as our syntactic features and feature values, respectively. A major challenge when dealing with tweet data is that the lexicon used in a tweet is informal and differs greatly from tweet to tweet. The lexical and syntactic features do not seem to capture this property well. To handle this problem, we apply three approaches to compute tweet vector representations. Firstly, we employ 300-dimensional pre-trained word embeddings from GloVe BIBREF29 to compute a tweet embedding as the average of the embeddings of words in the tweet. Secondly, we apply latent semantic indexing BIBREF30 to capture the underlying semantics of the dataset. Here, each tweet is represented as a 100-dimensional vector. Thirdly, we also extract a tweet representation by applying the Brown clustering algorithm BIBREF31 , BIBREF32 —a hierarchical clustering algorithm which groups words with similar meanings and syntactic functions together. Applying the Brown clustering algorithm, we obtain a set of clusters, where each word belongs to only one cluster. For example, in Table TABREF13 , words that indicate the members of a family (e.g., “mum”, “dad”) or positive sentiment (e.g., “interesting”, “awesome”) are grouped into the same cluster. We run the algorithm with different numbers of clusters (i.e., 80, 100, 120) to capture multiple semantic and syntactic aspects. For each clustering setting, we use the number of tweet words in each cluster as a feature. After that, for each tweet, we concatenate the features from all the clustering settings to form a cluster-based tweet embedding. Motivated by verbal irony by means of polarity contrast, such as “I really love this year's summer; weeks and weeks of awful weather”, we use the number of polarity signals appearing in a tweet as polarity features. The signals include positive words (e.g., love), negative words (e.g., awful), positive emoji icons and negative emoji icons. We use the sentiment dictionaries provided by BIBREF33 to identify positive and negative words in a tweet. We further use boolean features that check whether or not a negation word is in a tweet (e.g., not, n't). Implementation details We use Tensorflow BIBREF34 to implement our model. Model parameters are learned to minimize the cross-entropy loss with L INLINEFORM0 regularization. Figure FIGREF16 shows our training mechanism. In particular, we follow a 10-fold cross-validation based voting strategy. First, we split the training set into 10 folds. Each time, we combine 9 folds to train a classification model and use the remaining fold to find the optimal hyperparameters. Table TABREF18 shows the optimal settings for each subtask. In total, we have 10 classification models, producing 10 predicted labels for each test tweet. Then, we use voting to return the final predicted label. Metrics The metrics used to evaluate our model include accuracy, precision, recall and F1. The accuracy is calculated using all classes in both tasks. The remaining metrics are calculated using only the positive label in subtask 1 or per class label (i.e., macro-averaged) in subtask 2. A detailed description of the metrics can be found in BIBREF25 . 
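A minimal sketch of two of the tweet-representation feature groups described above (the averaged pre-trained embedding and the Brown-cluster count features) is given below. The embedding lookup and the word-to-cluster mapping are placeholder toys; the actual system uses 300-dimensional GloVe vectors and clusterings of size 80, 100 and 120.

```python
# Sketch of assembling an averaged-embedding tweet vector and cluster-count
# features, which are then concatenated into part of the MLP input vector.
import numpy as np

def average_embedding(tokens, embeddings, dim=300):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cluster_counts(tokens, word2cluster, num_clusters):
    counts = np.zeros(num_clusters)
    for t in tokens:
        if t in word2cluster:
            counts[word2cluster[t]] += 1   # number of tweet words in each cluster
    return counts

def tweet_features(tokens, embeddings, clusterings):
    # clusterings: list of (word-to-cluster dict, number of clusters) settings
    parts = [average_embedding(tokens, embeddings)]
    parts += [cluster_counts(tokens, w2c, n) for w2c, n in clusterings]
    return np.concatenate(parts)

if __name__ == "__main__":
    toy_emb = {"awesome": np.ones(300), "weather": -np.ones(300)}
    toy_clusters = [({"awesome": 0, "weather": 1}, 80)]
    print(tweet_features(["awesome", "weather", "#not"], toy_emb, toy_clusters).shape)
```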
Results for subtask 1 Table TABREF21 shows our official results on the test set for subtask 1 with regard to the four metrics. By using a simple MLP neural network architecture, our system achieves a high performance which is ranked third and fifth out of forty-four teams using the accuracy and F1 metrics, respectively. Results for subtask 2 Table TABREF23 presents our results on the test set for subtask 2. Our system also achieves a high performance which is ranked third and fifth out of thirty-two teams using the accuracy and F1 metrics, respectively. We also show in Table TABREF24 the performance of our system on different class labels. For the ironic classes, our system achieves its best performance on verbal irony by means of a polarity contrast, with an F1 of 60.73%. Note that the performance on the situational class is not high. The reason is probably that the number of situational tweets in the training set is small (205/3,834), i.e. not enough to learn a good classifier. Discussions Apart from the described MLP models, we have also tried other neural network models, such as Long Short-Term Memory (LSTM) BIBREF35 and Convolutional Neural Network (CNN) for relation classification BIBREF36 . We found that LSTM achieves much higher performance than MLP does on the extended training set containing the ironic hashtags (about 92% vs. 87% with 10-fold cross-validation using F1 on subtask 1). However, without the ironic hashtags, the performance is lower than MLP's. We also employed popular machine learning techniques, such as SVM BIBREF37 , Logistic Regression BIBREF38 , Ridge Regression Classifier BIBREF39 , but none of them produces results as good as MLP does. We have also implemented ensemble models, such as voting, bagging and stacking. We found that with the 10-fold cross-validation based voting strategy, our MLP models produce the best irony detection and classification results. Conclusion We have presented our NIHRIO system for participating in SemEval-2018 Task 3 on “Irony detection in English tweets”. We proposed to use a Multilayer Perceptron to handle the task using various features, including lexical, syntactic, semantic and polarity features. Our system was ranked the fifth best performing one with regard to F1 score in both the binary and multi-class irony detection subtasks. Acknowledgments This research is supported by the National Institute for Health Research (NIHR) Innovation Observatory at Newcastle University, United Kingdom.
Our lexical features include 1-, 2-, and 3-grams in both word and character levels., number of characters and the number of words, POS tags, 300-dimensional pre-trained word embeddings from GloVe, latent semantic indexing, tweet representation by applying the Brown clustering algorithm, positive words (e.g., love), negative words (e.g., awful), positive emoji icon and negative emoji icon, boolean features that check whether or not a negation word is in a tweet
10f560fe8e1c0c7dea5e308ee4cec16d07874f1d
10f560fe8e1c0c7dea5e308ee4cec16d07874f1d_0
Q: How does nextsum work? Text: Introduction Writing a summary is a different task compared to producing a longer article. As a consequence, it is likely that the topic and discourse moves made in summaries differ from those in regular articles. In this work, we present a powerful extractive summarization system which exploits rich summary-internal structure to perform content selection, redundancy reduction, and even predict the target summary length, all in one joint model. Text summarization has been addressed by numerous techniques in the community BIBREF0 . For extractive summarization, which is the focus of this paper, a popular task setup is to generate summaries that respect a fixed length limit. In the summarization shared tasks of the past Document Understanding Conferences (DUC), these limits are defined in terms of words or bytes. As a result, much work has framed summarization as a constrained optimization problem, in order to select a subset of sentences with desirable summary qualities such as informativeness, coherence, and non-redundancy within the length budget BIBREF1 , BIBREF2 , BIBREF3 . One problem with this setup is that it does not match many real-world summarization settings. For example, writers can tailor the length of their summaries to the amount of noteworthy content in the source article. Summaries created by news editors for archives, such as the New York Times Annotated Corpus BIBREF4 , exhibit a variety of lengths. There is also evidence that in the context of web search, people prefer summaries of different lengths for the documents in search results depending on the type of the search query BIBREF5 . More generally, current systems focus heavily on properties of the source document to learn to identify important sentences, and score the coherence of sentence transitions. They reason about the content of summaries primarily for purposes of avoiding redundancy, and respecting the length budget. But they ignore the idea that it might actually be useful to learn content structure and discourse planning for summaries from large collections of multi-sentence summaries. This work proposes an extractive summarization system that focuses on capturing rich summary-internal structure. Our key idea is that since summaries in a domain often follow some predictable structure, a partial summary or set of summary sentences should help predict other summary sentences. We formalize this intuition in a model called NextSum, which selects the next summary sentence based not only on properties of the source text, but also on the previously selected sentences in the summary. An example choice is shown in Table 1 . This setup allows our model to capture summary-specific discourse and topic transitions. For example, it can learn to expand on a topic that is already mentioned in the summary, or to introduce a new topic. It can learn to follow a script or discourse relations that are expected for that domain's summaries. It can even learn to predict the end of the summary, avoiding the need to explicitly define a length cutoff. The core of our system is a next-sentence prediction component, which is a feed-forward neural network driven by features capturing the prevalence of domain subtopics in the source and the summary, sentence importance in the source, and coverage of the source document by the summary so far. A full summary can then be generated by repeatedly predicting the next sentence until the model predicts that the summary should end. 
Since summary-specific moves may depend on the domain, we first explore domain-specific summarization on event-oriented news topics (War Crimes, Assassinations, Bombs) from the New York Times Annotated Corpus BIBREF4 . We also train a domain-general model across multiple types of events. NextSum predicts the next summary sentence with remarkably high accuracies, reaching 67% compared to a chance accuracy of 9%. The generated summaries outperform the lead baseline as well as domain-specific summarization baselines without requiring explicit redundancy check or a length constraint. Moreover, the system produces summaries of variable lengths which correlate with how long human summaries are for the same texts. Related work Many approaches to extractive summarization are unsupervised, and focus on the role of word frequency and source document representation for selecting informative and non-redundant content BIBREF6 , BIBREF7 , BIBREF8 . More recently, supervised approaches are popular, which view content selection as a sentence-level binary classification problem, typically using a neural network BIBREF9 , BIBREF10 . Using source structure. Source structure is a common cue for summarization. Relative word frequency and position of sentences are standardly used in many systems. Discourse- and graph-based summarization techniques explicitly focus on computing document structure BIBREF11 , BIBREF12 , BIBREF13 . Other techniques include learning probabilistic topic models over source articles within a domain to capture subtopics and transitions between them BIBREF14 , BIBREF15 , BIBREF16 . But, the use of structure from summaries is less explored. Using summary structure. Actually, almost all systems maintain some representation of the partial summary at a timestep. At the very least, it is needed for respecting a length limit and for preventing redundancy. Even in recent neural network based extractive summarization, a representation of the summary so far has been proposed to allow redundancy checks BIBREF10 . However, current methods do not focus on capturing rich summary discourse and content structure. Recent abstractive neural summarization models based on encoder-decoder frameworks actually have greater scope for capturing summary structure and content. The use of techniques such as attention and pointer mechanisms can be viewed as a form of summary structure modelling BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, because such systems currently operate at the word level, these mechanisms are mostly used for handling issues such as grammaticality, out-of-vocabulary items, predicate-argument structure, and local coherence. By contrast, we aim to capture higher-level transitions in the contents of a summary. Next-sentence prediction. The way we learn summary structure is by training a module for next summary sentence prediction. A parallel idea can be found in the form of next-utterance prediction in retrieval-based dialogue systems BIBREF21 , BIBREF22 , BIBREF23 . There have also been recent attempts at predicting the next sentence in text. The skip-thought model BIBREF24 is trained to predict a sentence from its neighbouring sentences to produce sentence representations. CLSTM2016 and pichotta:acl16 evaluate neural language models on next-sentence and event prediction. In contrast, we aim to predict the next output sentence within the tangible application of summarization. NextSum model overview We first present the key ideas, and the next section explains how we implement the model. 
NextSum comprises two components, a next-sentence prediction system, and a summary generation module. The first is a supervised system trained to select the next summary sentence, given a set of candidate sentences from the source, and the summary generated so far. NextSum's generation component builds a summary by making repeated calls to the next-sentence predictor. Predicting the next summary sentence The next-sentence predictor is trained on a corpus of source articles and their gold-standard summaries written by humans. In this work, we focus on single-document summarization. Consider a source article $X = \lbrace s_1,..,s_M\rbrace $ containing $M$ sentences, and a gold-standard extractive summary $G=g_1..g_N$ , a sequence of $N$ sentences. Since $G$ is extractive, $G \subseteq X$ . In NextSum, summaries are created by adding one sentence at a time. Let $Y_T=y_1..y_T$ be the partial summary at timestep $T$ ; $Y_T$ has $T$ sentences. At time $T+1$ , the goal of NextSum is to score a set of candidate sentences from the source, $C_{T+1} =\lbrace s_1,..,s_K\rbrace $ , $C_{T+1} \subseteq X$ and find the best next sentence to follow $Y_T$ . Let the gold-standard next sentence be $g_{T+1}$ . The set $C_{T+1}$ may either be all of the source sentences which have not yet been included in the summary, or be limited to a smaller size $K$ . For now, assume that all the unselected source sentences are in the candidate set, and thus $K = M - T$ . The model selects the next summary sentence from $C_{T+1}$ such that: $ {\hat{y}}_{T+1} = {\arg \max }_{s_i \in C_{T+1}} Pr(s_i|X,Y_T;\theta )$ When there is a tie, the earlier sentence in the article is selected. In this work, $Pr(s_i|X,Y_T;\theta )$ is estimated by a neural network parameterized by $\theta $ . Recall that the oracle next sentence $g_{T+1}$ is in $C_{T+1}$ . Hence one approach to learn the parameters of $Pr(s_i|X,Y_T;\theta )$ is to frame it as a binary classification problem where the label for sentence $g_{T+1} \in C_{T+1}$ is 1, and 0 for all $s_w \in C_{T+1}$ where $s_w \ne g_{T+1}$ . We implement this classifier using a feed-forward neural network which takes the encoded representations of ( $X$ , $Y_T$ and $s_i$ ), and outputs the probability of label 1, $p_{s_i}$ , which we use as $Pr(s_i|X,Y_T;\theta )$ . The loss for the classification at timestep $T+1$ is the binary cross-entropy loss: $ L = -\log p_{g_{T+1}} -\sum _{s_i \in C_{T+1}; s_i \ne g_{T+1}} \log (1-p_{s_i})$ . One of the special features of NextSum is that we model the end of the summary within the same setup. To do so, we introduce a special sentence $\langle \textrm {EOS}\rangle $ (End of Summary) to mark the end of every gold-standard summary, i.e. $G=y_1..y_N\langle \textrm {EOS}\rangle $ . In the model, $\langle \textrm {EOS}\rangle $ is included in candidate sets at every timestep. This inclusion allows the model to learn to discriminate between selecting a sentence from the source versus ending the summary by picking the $\langle \textrm {EOS}\rangle $ marker. Thus our candidate set is in fact $C^{\prime }_{T+1} = C_{T+1} \cup \lbrace \langle \textrm {EOS}\rangle \rbrace $ . Summary generation After the next sentence prediction model is trained, it can be used to generate a complete summary for a source article. The model performs this task by iteratively predicting the next sentence until $\langle \textrm {EOS}\rangle $ is selected. Note that, unlike previous work, the generation component is not given the target length of the summary. 
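A minimal sketch of the next-sentence predictor and the timestep loss described above is shown below. The feed-forward scorer, its feature dimensionality and hidden size are placeholder assumptions; in NextSum the candidate representation is built from the hand-crafted features described later rather than random vectors.

```python
# Sketch of scoring candidates with a feed-forward network and computing the
# binary cross-entropy loss: the gold next sentence has label 1, every other
# candidate (including <EOS>) has label 0.
import torch
import torch.nn as nn

class NextSentenceScorer(nn.Module):
    def __init__(self, feature_dim=2048, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, candidate_features):
        # candidate_features: (num_candidates, feature_dim) for one timestep
        return torch.sigmoid(self.net(candidate_features)).squeeze(-1)   # p_{s_i}

def timestep_loss(probs, gold_index):
    # L = -log p_gold - sum over other candidates of log(1 - p_i)
    mask = torch.zeros_like(probs)
    mask[gold_index] = 1.0
    return -(mask * torch.log(probs) + (1 - mask) * torch.log(1 - probs)).sum()

if __name__ == "__main__":
    scorer = NextSentenceScorer()
    feats = torch.randn(11, 2048)              # 10 candidate sentences + <EOS>
    probs = scorer(feats)
    loss = timestep_loss(probs, gold_index=3)
    loss.backward()
    print(int(probs.argmax()), float(loss))    # predicted next sentence and loss
```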
To generate the full summary, the model employs a greedy method that simply calls the next-sentence prediction module repeatedly until $\langle \textrm {EOS}\rangle $ is selected. We also tried beam search decoding for a more globally optimal sequence of sentences, but we found in preliminary experiments that this search did not improve our results. We developed two versions of our system. Previous methods of summary content selection assume a fixed length limit. To compare against these systems, in one version of our model, NextSum $_{L}$ , the length limit is provided as a constraint. If, after the model generates a summary sentence, the word count exceeds the given length, we stop generation and truncate the last sentence so the summary is within the length limit. The second version, NextSum, is the full model which predicts the summary length. Neither system has access to the oracle partial summary; both use their own previous decisions to construct the partial summary. We evaluate all the summaries by comparing them with gold-standard abstracts using ROUGE BIBREF30 . We use ROUGE-2 F-score, as NextSum generates summaries of varied length. In all these systems, the target length of the summary is given as a constraint. We set the length $k$ to the average length (in words) of summaries in the training dataset for each domain (Table 3 ). Lead takes the first $k$ words from the source article. For single-document extractive summarization, the lead is a very strong baseline which many systems fail to beat BIBREF31 . CHMM is the approach used by barzilay04 for extractive summarization using content models. CHMM computes an importance score for each topic $v$ . This score is a probability computed by: 1) counting the articles in the training set where $v$ appears in both the article and its summary, and 2) normalizing by the number of articles containing $v$ . To generate a summary, the model ranks the topics in order of decreasing importance, and adds one sentence from the source for each topic (breaking ties randomly if multiple sentences are decoded into the same topic). The generation stops upon reaching the length limit. This method scores the summary-worthy nature of sentences based solely on their topic. Transition is an iterative greedy approach based on the transition probability of topics from the content model. It selects $\hat{y}_{T+1} = \arg \max _{s_j \in C_{T+1}}P(\textrm {topic}(s_j)|\textrm {topic}(y_{T}))$ at each timestep until the length limit is reached. This baseline simulates a degenerate version of next-sentence prediction, where the choice is based on a single feature at topic level; i.e., the probability of transitioning from the topic of the last summary sentence to the topic of the candidate. Like our model, this baseline has no access to the oracle partial summary, and uses its previous decisions for next sentence selection. CHMM-T is also an iterative greedy approach where the evaluation function is the product of topic transition probability (Transition) and topic importance (CHMM). Apart from the above domain baselines, we also compare with two other types of summaries. General is based on a recent competitive neural network based extractive system BIBREF9 . This model is designed to be domain-general. We trained it on the DailyMail dataset BIBREF28 , containing around 200K articles and their highlights, without using pretrained embeddings. 
Our systems are not directly comparable, because NextSum is trained on much less data, but we show this result to give an idea of the performance of recent methods. Oracle is the gold-standard extractive summary created from abstracts using the mapping method from Section "Obtaining extractive summaries" . It represents an upper bound on the performance of any extractive summary. Table 5 shows the ROUGE-2 F-score results for all the systems. The baselines, NextSum $_L$ , oracle and general are fixed length summaries. Among the baselines, we see that the simple lead summary comprising the first $k$ words of the source article is the strongest, outperforming domain-trained content model systems in all the domains. The oracle results, however, show that there is still considerable scope for the improvement of automatic systems performing sentence extraction. The oracle extractive summary (which was chosen to maximize similarity with the abstract) gets close to double the ROUGE score of lead baseline in the crime domain. Both NextSum $_L$ and NextSum outperform the lead (with statistical significance) in all cases except the bombs domain. Importantly, NextSum, which does automatic length prediction, outperforms NextSum $_L$ , indicating that automatically tailoring summaries to different lengths is clearly of value. In the next section, we examine this length prediction ability in detail. Comparing performance across domains, the source articles in bombs domain are on average longer than the other domains (refer Table 3 ), which could be a reason that content selection performance is lower here. This domain also has longer gold standard summaries and the correlation between the length of human abstracts and source articles is also the lowest in this domain. The domain-general system of cheng-lapata:2016 is trained on a much larger general corpus of summary-article pairs. While our results are not directly comparable, we see that NextSum's performance is competitive with current methods, and since it is based on a new outlook and no explicit constraints, it provides much scope for future improvements. Implementing NextSum In this section, we explain how we select the candidate set, what features we use in the neural network for next sentence prediction, and the design of the generation component. Candidate selection Some source articles are very long, which means that $C^{\prime }_{T+1}$ can contain many candidate sentences if we take all of the unselected sentences as candidates. In practice, we limit the size of $C^{\prime }_{T+1}$ in order to reduce the search space of the model, which improves running time. In the single-document scenario, the source text sentences are in a natural discourse, and thus in a logical and temporal order. Hence, it is not unreasonable to assume that a good summary is a subsequence of the source. Given this assumption, suppose the last sentence chosen for the summary is $s_j$ at timestep $T$ , then we consider the $K$ sentences in the source immediately following $s_j$ as the candidate set at time $T+1$ . During development, we found that when $K=10$ , the gold-standard next summary sentence is in the candidate set 90% of the time, and is present 80% of the time when using $K$ =5. Based on this empirical support for the subsequence hypothesis, we use $K=10$ plus the end of summary marker for all the experiments in this paper, for a total candidate set size of 11. 
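Concretely, the candidate window just described can be built in a few lines; treating sentence indices as identifiers and `None` as the end-of-summary marker is a simplification for illustration.

```python
def candidate_window(last_selected_idx, num_source_sents, K=10):
    """Candidates at time T+1: the K source sentences immediately following the
    last sentence added to the summary, plus the end-of-summary marker (None),
    giving a candidate set of size K + 1. At T = 0, pass last_selected_idx = -1
    so the window starts at the first source sentence (our assumption; the text
    does not spell out the initial window)."""
    start = last_selected_idx + 1
    window = list(range(start, min(start + K, num_source_sents)))
    return window + [None]
```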
For comparison, a source article in our corpus has on average 33 sentences, and the maximum is as high as 500 sentences. During training, when fewer than 10 sentences remain, we randomly sample other sentences from the entire article to ensure having enough negative samples. The model is trained on a balanced dataset by downsampling, and tested on the distribution where each candidate set has size 11. Features for next sentence prediction We have a source document $X=\lbrace s_1..s_M\rbrace $ with $M$ sentences, $Y_T=y_1..y_T$ is a partial summary at time $T$ , and let $s$ be a sentence (or $\langle \textrm {EOS}\rangle $ ) in the candidate set ${C^{\prime }}_{T+1}$ . NextSum's next sentence prediction relies on computing $Pr(s|X,Y_T;\theta )$ using a feedforward neural network with parameters $\theta $ . This network learns from rich feature-based representations of $X$ , $Y_T$ , $s$ , and their interactions. Domain subtopics. These features are based on topics induced from a large collection of documents in the same domain as the source article. These topics are obtained using the content-model approach of barzilay04. The content model is a Hidden Markov Model (HMM), where the states correspond to topics, and transitions between them indicate how likely it is for one topic to follow another. The emission distribution from a state is a bigram language model indicating what lexical content is likely under that topic. Each sentence in the article is emitted by one state (i.e., one topic). The probability of an article $s_1...s_N$ under an HMM with $M$ states $\lbrace \textrm {topic}_1,..,\textrm {topic}_M\rbrace $ is given by: $\sum _{\textrm {topic}_1..\textrm {topic}_N} \prod _{i=1}^{N} P(\textrm {topic}_i|\textrm {topic}_{i-1})P(s_i|\textrm {topic}_i)$ Content models can be trained in an unsupervised fashion to maximize the log likelihood of the articles from the domain. We choose the number of topics on a development set. Once trained, the model can compute the most likely state sequence for sentences in the source document, and in the partial summary, using Viterbi decoding. Based on the predicted topics, we compute a variety of features: the proportion of source sentences assigned to each topic; the proportion of sentences in the partial summary assigned to each topic; the most likely topic of the candidate $s$ given by ${\arg \max }_{i \in \textrm {Topics}} P(\textrm {topic}_i|s)$ ; the emission probability of $s$ from each topic; the transition probability between the topic of the previous summary sentence $y_T$ and the topic of $s$ , $P(\textrm {topic}(s)|\textrm {topic}(y_T))$ ; and a global estimation of observing the candidate $s$ , $P(s)= \sum _{i \in \textrm {Topics}}P(s|\textrm {topic}_i)$ . Content. We compute an encoding of source, summary so far, and the candidate sentence by averaging the pretrained word2vec embeddings BIBREF25 (trained on Google News Corpus) of each word in the span (900 features in total, 300 each for the source, summary so far, and the candidate). We also add features for the 1,000 most frequent words in the training articles, in order to encode their presence in $s$ , and in the sentence previous to $s$ in the source article, i.e. ( $s-1$ ). Similarly, for $s$ and $s-1$ , we record the presence of each part-of-speech tag and named entity. We expect these features for $s$ and $s-1$ to be useful for predicting $\langle \textrm {EOS}\rangle $ , since the last sentence in a summary might contain some lexical cues. Redundancy.
These features calculate the degree to which the candidate sentence overlaps with the summary so far. They include $\textit {sim}(s, y_t)$ for $t = T$ , $T-1$ , $T-2$ (3 features), where $\textit {sim}(p,q)$ is computed using cosine similarity between count vector representations of the words in $s$ and $y_t$ . We also include the number of overlapping nouns and verbs between $s$ and $Y_T$ (2 features). Position. The position of a sentence in the source document is an important indicator for content selection and is widely used in systems. We indicate the position in the source of the last generated summary sentence $y_T$ (as one of 5 bins, the size of each bin depends on the length of the source article). We also indicate the position of the candidate sentence, and its distance to $y_T$ in the source (normalized by the length of the source). Length. We include features for the length of the source, both as number of sentences, and number of words (binned into 5 bins). We also include the number of sentences and words in the summary so far. The length measures for the partial summary are not binned. Coverage. These features compute how much of the source will be covered by the summary when a candidate sentence is added to it. We use the KL divergence between the source and candidate summary when $s$ is included in it: $D_{KL} (X\Vert Y_T \cup \lbrace s\rbrace )$ where the distribution of $X$ and $Y_T$ are unigram language models. Sentence importance. We also indicate the individual importance of a candidate sentence. The frequency of a word in the source is known to be a strong feature for importance BIBREF26 . With this intuition, we include the $ \frac{1}{|s|}\sum _{w \in s} \textrm {uni}_X(w)$ where $w$ is a token in the candidate sentence, and $\textrm {uni}_X(w)$ is the unigram probability of $w$ in the source $X$ . We also use a separate pre-trained model of word importance. This model feeds the context of a target word (the two words before and two words after) into a LSTM model which outputs the probability of the target word appearing in a summary. The importance score of a sentence is then the average and maximum of the predicted scores of each word in the sentence. This model is trained on the same training and development data sets. Data We hypothesize that next-sentence prediction is more likely to be successful in event-oriented domains (describing events as opposed to explanations and opinions). Moreover, summary-specific moves may be more prominent and learnable from summary-article pairs within specific domains compared to a general corpus. So we create three domain-specific and one domain-general dataset, all focusing on events. We use the New York Times Annotated Corpus (NYtimes) BIBREF4 since it provides topic metadata, has thousands of article-summary pairs on different topics, and summaries are not written to set lengths. We selected three topics: “War Crimes and Criminals” (crime), “Assassinations and Attempted Assassinations” (assassin.), and “Bombs and Explosives” (bombs). We also create a more general dataset (mixed) by randomly sampling from all the three domains. We sample a similar number of articles across each domain, and randomly split each domain into 80% training, 10% development and 10% test data. Table 2 shows the sizes of these datasets. We use the Stanford CoreNLP toolkit BIBREF27 to tokenize, segment sentences, and assign part of speech tags to all the texts. 
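Going back to the feature set above, two of the groups are simple enough to sketch directly: the averaged-embedding content features and the KL-divergence coverage feature. The smoothing constant is an added assumption (the text does not say how zero counts are handled), and `embeddings` is assumed to be a plain word-to-vector mapping rather than any particular word2vec loader.

```python
from collections import Counter
import math
import numpy as np

def average_embedding(tokens, embeddings, dim=300):
    """Mean pretrained word vector over a span; zeros if nothing is covered."""
    vecs = [embeddings[w] for w in tokens if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def content_features(source_toks, summary_toks, candidate_toks, embeddings):
    """300 dimensions for each of source, summary so far, and candidate -> 900 features."""
    return np.concatenate([average_embedding(source_toks, embeddings),
                           average_embedding(summary_toks, embeddings),
                           average_embedding(candidate_toks, embeddings)])

def coverage_kl(source_toks, summary_and_candidate_toks, alpha=1e-3):
    """Coverage feature: KL divergence between the source unigram distribution
    and that of the summary with the candidate added, both smoothed."""
    vocab = set(source_toks) | set(summary_and_candidate_toks)
    def dist(tokens):
        counts = Counter(tokens)
        total = sum(counts.values()) + alpha * len(vocab)
        return {w: (counts.get(w, 0) + alpha) / total for w in vocab}
    p, q = dist(source_toks), dist(summary_and_candidate_toks)
    return sum(p[w] * math.log(p[w] / q[w]) for w in vocab)
```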
Length of articles and summaries As previously mentioned, summaries are often written to express the summary-worthy content of an article, and not restricted to an arbitrary length. This property can be seen in our data (Table 3 ). The NYTimes summaries are abstractive in nature and range from a minimum of 2 words to as many as 278 words. The last column of the table gives the Kendall Tau correlation (corrected for ties) between the length of the source and the summary. There is a significant positive correlation, implying that the length of the article is indicative of its information content. This finding motivates us to include the length of the source article as a feature for next sentence prediction, though we note that the source length by itself is not enough to determine the summary length without doing further analysis of the source content. Obtaining extractive summaries The summaries from NYTimes are abstractive in nature. Our system is extractive, and for training the next sentence selection from the source, we need a mapping between the abstractive summary and the sentences in the source article. Note that we create these extractive summaries only for training our model. We will evaluate NextSum's output by comparing with the abstractive human summaries as is standard practice. We map each sentence in the abstract to the most similar sentence in the source article. Let $A=a_1..a_n$ be the sequence of sentences in the abstract. For each $a_i$ , we find $y_i = \arg \max _{s_j \in X}\cos (a_i,s_j)$ where $X$ is the set of source sentences, and $\cos (p,q)$ is the cosine similarity between the word unigrams of $p$ and $q$ . The sequence $Y=y_1..y_n$ corresponding to $A=a_1..a_n$ forms the gold standard extractive summary. Since the extractive summary mirrors the sequence of content in the abstract, the structure of the summary is preserved, allowing our next sentence prediction system to be trained on the extractive sequence of sentences. It is also for this reason that we do not use summarization datasets such as the CNN/Daily Mail corpus BIBREF28 where summaries are three-sentence highlights, and do not have any discernible discourse structure as a whole. Experiments We first evaluate our model intrinsically on the next-sentence prediction task, then test its performance on the full summary generation problem. Next-sentence prediction Here, the goal is to select the best sentence to follow the partial summary from a candidate set of 11 options (see Section "Data" ). For evaluating this part of our system, we assume that we have oracle partial summaries; i.e., the partial summary at timestep $T$ , $Y_T=y_1..y_T$ is the same as the gold summary sequence up to time $T$ . The question is how well we can predict the next sentence in this sequence from the candidate set $C^{\prime }_{T+1}$ . The correct answer is the sentence in the gold standard at position $T+1$ . The prediction at each timestep is a separate classification example. Recall that we framed the machine learning problem as one of binary classification. We thus present two sets of results: a) on the binary task, and b) on the final choice of one sentence from the candidate set (among the 11 candidates). In task (a.), the binary evaluation, the model discriminates among the 2 classes by thresholding at $p_s>0.5$ . The best setting has 4 hidden layers, each layer comprising between 500 to 1,500 neurons. We trained the model by backpropagation using the Adam optimizer BIBREF29 for up to 75 epochs. 
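Returning to the construction of the extractive training targets described under "Obtaining extractive summaries" above, the mapping is an argmax over cosine similarities of unigram count vectors; the sketch below assumes sentences are given as token lists and keeps the first source sentence in case of ties.

```python
from collections import Counter
import math

def cosine(p_tokens, q_tokens):
    """Cosine similarity between the unigram count vectors of two sentences."""
    p, q = Counter(p_tokens), Counter(q_tokens)
    dot = sum(p[w] * q[w] for w in p if w in q)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def extractive_oracle(abstract_sents, source_sents):
    """For each abstract sentence a_i, keep the source sentence with the highest
    cosine similarity; the resulting sequence, in abstract order, serves as the
    gold-standard extractive summary used for training."""
    return [max(source_sents, key=lambda s: cosine(a, s)) for a in abstract_sents]
```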
Hyperparameters were tuned on the development set. The choice of a final sentence, Task (b.), is made by picking the candidate sentence with highest $p_s$ . Table 4 shows the accuracy on binary classification task and 1-of-11 task, on the different domains. In the 1-of-11 task, the expected chance-level accuracy is roughly 9.1%, since we force every candidate set to have size 11. Our next-utterance prediction system's accuracy is between 60 to 67% on the different domains, showing that there are distinctive clues on summary internal structure and content, which can be learned by a model. Note also that the accuracy numbers are consistent across all domains and the mixed case indicating that the patterns are fairly domain-general within event-oriented documents. These evaluations are somewhat idealistic in that the model has access to oracle partial summaries during prediction. We next evaluate NextSum on the full summarization task. Performance of length prediction NextSum requires neither redundancy removal nor length constraints. In this section, we show that our system produces summaries of varied lengths which correlate with the lengths of human-written summaries of the same source article. Figure 1 shows the distribution of the length (in words) of NextSum summaries (all domains put together). The generated lengths vary greatly, and span the average range covered by the summaries in the training data. The majority of lengths are in the 30 to 50 words limit. Hence NextSum is specializing summary lengths to cover a wide range. Next, we measure how well these summary lengths correlate with the lengths of the human-written abstracts. Table 6 shows the Kendall Tau correlation (corrected for ties) between length in words of the NextSum summary and the length of the abstract for the same source. NextSum's summary lengths correlate fairly well with those of the abstracts, leading to significant numbers in all the domains and the mixed case. Again, the length prediction is worse on the Bombs domain compared to the rest. Overall, this result shows promise that we can develop summarization systems which automatically tailor their content based on properties of the source. Conclusion In this work, we have presented the first summarization system which integrates content selection, summary length prediction, and redundancy removal. Central to this system is the use of a next-sentence prediction system which learns summary-internal discourse transitions. We show that NextSum outperforms a number of baselines on ROUGE-2 F-scores even when the summary length is not provided to the system. Furthermore, the lengths of the predicted summaries correlate positively with the lengths of human-written abstracts, indicating that our method implicitly captures some aspect of how much summary-worthy content is present in the source article. In future work, we plan to investigate whether this approach also leads to more coherent summaries. This issue will be especially important in the multi-document setting, which we would also like to investigate using an extension of our model.
selects the next summary sentence based not only on properties of the source text, but also on the previously selected sentences in the summary
07580f78b04554eea9bb6d3a1fc7ca0d37d5c612
07580f78b04554eea9bb6d3a1fc7ca0d37d5c612_0
Q: Can the approach be generalized to other technical domains as well? Text: Introduction Neural machine translation (NMT), a new approach to solving machine translation, has achieved promising results BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . An NMT system builds a simple large neural network that reads the entire input source sentence and generates an output translation. The entire neural network is jointly trained to maximize the conditional probability of a correct translation of a source sentence with a bilingual corpus. Although NMT offers many advantages over traditional phrase-based approaches, such as a small memory footprint and simple decoder implementation, conventional NMT is limited when it comes to larger vocabularies. This is because the training complexity and decoding complexity proportionally increase with the number of target words. Words that are out of vocabulary are represented by a single unknown token in translations, as illustrated in Figure 1 . The problem becomes more serious when translating patent documents, which contain several newly introduced technical terms. There have been a number of related studies that address the vocabulary limitation of NMT systems. Jean el al. Jean15 provided an efficient approximation to the softmax to accommodate a very large vocabulary in an NMT system. Luong et al. Luong15 proposed annotating the occurrences of a target unknown word token with positional information to track its alignments, after which they replace the tokens with their translations using simple word dictionary lookup or identity copy. Li et al. Li16 proposed to replace out-of-vocabulary words with similar in-vocabulary words based on a similarity model learnt from monolingual data. Sennrich et al. Sennrich16 introduced an effective approach based on encoding rare and unknown words as sequences of subword units. Luong and Manning Luong16 provided a character-level and word-level hybrid NMT model to achieve an open vocabulary, and Costa-jussà and Fonollosa Jussa16 proposed a NMT system based on character-based embeddings. However, these previous approaches have limitations when translating patent sentences. This is because their methods only focus on addressing the problem of unknown words even though the words are parts of technical terms. It is obvious that a technical term should be considered as one word that comprises components that always have different meanings and translations when they are used alone. An example is shown in Figure 1 , wherein Japanese word “”(bridge) should be translated to Chinese word “” when included in technical term “bridge interface”; however, it is always translated as “”. In this paper, we propose a method that enables NMT to translate patent sentences with a large vocabulary of technical terms. We use an NMT model similar to that used by Sutskever et al. Sutskever14, which uses a deep long short-term memories (LSTM) BIBREF7 to encode the input sentence and a separate deep LSTM to output the translation. We train the NMT model on a bilingual corpus in which the technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Similar to Sutskever et al. Sutskever14, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using statistical machine translation (SMT). 
We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT and NMT scores of the translated sentences that have been rescored with the technical term tokens. Our experiments on Japanese-Chinese patent sentences show that our proposed NMT system achieves a substantial improvement of up to 3.1 BLEU points and 2.3 RIBES points over a traditional SMT system and an improvement of approximately 0.6 BLEU points and 0.8 RIBES points over an equivalent NMT system without our proposed technique. Japanese-Chinese Patent Documents Japanese-Chinese parallel patent documents were collected from the Japanese patent documents published by the Japanese Patent Office (JPO) during 2004-2012 and the Chinese patent documents published by the State Intellectual Property Office of the People's Republic of China (SIPO) during 2005-2010. From the collected documents, we extracted 312,492 patent families, and the method of Utiyama and Isahara Uchiyama07bs was applied to the text of the extracted patent families to align the Japanese and Chinese sentences. The Japanese sentences were segmented into a sequence of morphemes using the Japanese morphological analyzer MeCab with the morpheme lexicon IPAdic, and the Chinese sentences were segmented into a sequence of words using the Chinese morphological analyzer Stanford Word Segment BIBREF8 trained using the Chinese Penn Treebank. In this study, Japanese-Chinese parallel patent sentence pairs were ordered in descending order of sentence-alignment score and we used the topmost 2.8M pairs, whose Japanese sentences contain fewer than 40 morphemes and Chinese sentences contain fewer than 40 words. Neural Machine Translation (NMT) NMT uses a single neural network trained jointly to maximize the translation performance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF5 . Given a source sentence $x$ $=(x_1,\ldots ,x_N)$ and target sentence $y$ $=(y_1,\ldots ,y_M)$ , an NMT system uses a neural network to parameterize the conditional distributions $$p(y_l \mid y_{< l},\mbox{$x$}) \nonumber $$ (Eq. 6) for $1 \le l \le M$ . Consequently, it becomes possible to compute and maximize the log probability of the target sentence given the source sentence $$\log p(\mbox{$y$} \mid \mbox{$x$}) = \sum _{l=1}^{M} \log p(y_l|y_{< l},\mbox{$x$})$$ (Eq. 7) In this paper, we use an NMT model similar to that used by Sutskever et al. Sutskever14. It uses two separate deep LSTMs to encode the input sequence and output the translation. The encoder, which is implemented as a recurrent neural network, reads the source sentence one word at a time and then encodes it into a large vector that represents the entire source sentence. The decoder, another recurrent neural network, generates a translation on the basis of the encoded vector one word at a time. One important difference between our NMT model and the one used by Sutskever et al. Sutskever14 is that we added an attention mechanism. Recently, Bahdanau et al. Bahdanau15 proposed an attention mechanism, a form of random access memory, to help NMT cope with long input sequences. Luong et al. Luong15b proposed an attention mechanism for different scoring functions in order to compare the source and target hidden states as well as different strategies for placing the attention. In this paper, we utilize the attention mechanism proposed by Bahdanau et al. 
Bahdanau15, wherein each output target word is predicted on the basis of not only a recurrent hidden state and the previously predicted word but also a context vector computed as the weighted sum of the hidden states. NMT Training after Replacing Technical Term Pairs with Tokens Figure 2 illustrates the procedure of the training model with parallel patent sentence pairs, wherein technical terms are replaced with technical term tokens “ $TT_{1}$ ”, “ $TT_{2}$ ”, $\ldots $ . In the step 1 of Figure 2 , we align the Japanese technical terms, which are automatically extracted from the Japanese sentences, with their Chinese translations in the Chinese sentences. Here, we introduce the following two steps to identify technical term pairs in the bilingual Japanese-Chinese corpus: According to the approach proposed by Dong et al. Dong15b, we identify Japanese-Chinese technical term pairs using an SMT phrase translation table. Given a parallel sentence pair $\langle S_J, S_C\rangle $ containing a Japanese technical term $t_J$ , the Chinese translation candidates collected from the phrase translation table are matched against the Chinese sentence $S_C$ of the parallel sentence pair. Of those found in $S_C$ , $t_C$ with the largest translation probability $P(t_C\mid t_J)$ is selected, and the bilingual technical term pair $\langle t_J,t_C\rangle $ is identified. For the Japanese technical terms whose Chinese translations are not included in the results of the first step, we then use an approach based on SMT word alignment. Given a parallel sentence pair $\langle S_J, S_C\rangle $ containing a Japanese technical term $t_J$ , a sequence of Chinese words is selected using SMT word alignment, and we use the Chinese translation $t_C$ for the Japanese technical term $t_J$ . As shown in the step 2 of Figure 2 , in each of the Japanese-Chinese parallel patent sentence pairs, occurrences of technical term pairs $\langle t_J^{\ 1},t_C^1 \rangle $ , $\langle t_J^2,t_C^2\rangle $ , $\ldots $ , $\langle t_J^k,t_C^k\rangle $ are then replaced with technical term tokens $\langle TT_{1},TT_{1} \rangle $ , $\langle TT_{2},TT_{2} \rangle $ , $\ldots $ , $\langle TT_{k},TT_{k} \rangle $ . Technical term pairs $\langle t_J^{1},t_C^1 \rangle $ , $\langle t_J^2,t_C^2\rangle $ , $\ldots $ , $\langle t_J^k,t_C^k\rangle $ are numbered in the order of occurrence of the Japanese technical terms $t_J^{\,i}$ ( $i=1,2,\ldots ,k$ ) in each Japanese sentence $S_J$ . Here, note that in all the parallel sentence pairs $\langle S_J, S_C\rangle $ , technical term tokens “ $TT_{1}$ ”, “ $TT_{2}$ ”, $\ldots $ that are identical throughout all the parallel sentence pairs are used in this procedure. Therefore, for example, in all the Japanese patent sentences $S_J$ , the Japanese technical term $t_J^{\,1}$ which appears earlier than other Japanese technical terms in $S_J$ is replaced with $TT_{1}$ . We then train the NMT system on a bilingual corpus, in which the technical term pairs are replaced by “ $TT_{i}$ ” ( $i=1,2,\ldots $ ) tokens, and obtain an NMT model in which the technical terms are represented as technical term tokens. NMT Decoding and SMT Technical Term Translation Figure 3 illustrates the procedure for producing Chinese translations via decoding the Japanese sentence using the method proposed in this paper.
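Before moving to the decoding procedure of Figure 3, the token-replacement step just described (step 2 of Figure 2) can be sketched as follows. Treating sentences as plain strings and substituting the first match is a simplification of whatever token-level bookkeeping the actual system performs, and it assumes every listed term does occur in the sentence.

```python
def replace_term_pairs(sent_j, sent_c, term_pairs):
    """Replace each aligned pair <t_J, t_C> with the same numbered token TT1, TT2, ...
    Pairs are ordered by where the Japanese term first occurs in sent_j.
    Returns the two token-ized sentences plus the token -> pair mapping needed
    to restore the terms later."""
    ordered = sorted(term_pairs, key=lambda pair: sent_j.find(pair[0]))
    mapping = {}
    for i, (t_j, t_c) in enumerate(ordered, start=1):
        token = "TT%d" % i
        sent_j = sent_j.replace(t_j, token, 1)
        sent_c = sent_c.replace(t_c, token, 1)
        mapping[token] = (t_j, t_c)
    return sent_j, sent_c, mapping
```

At decoding time (step 3 of Figure 3), the same kind of mapping can be used in the opposite direction, substituting the SMT translation of each extracted Japanese term for its token.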
In the step 1 of Figure 3 , when given an input Japanese sentence, we first automatically extract the technical terms and replace them with the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\ldots $ ). Consequently, we have an input sentence in which the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\ldots $ ) represent the positions of the technical terms and a list of extracted Japanese technical terms. Next, as shown in the step 2-N of Figure 3 , the source Japanese sentence with technical term tokens is translated using the NMT model trained according to the procedure described in Section "NMT Training after Replacing Technical Term Pairs with Tokens" , whereas the extracted Japanese technical terms are translated using an SMT phrase translation table in the step 2-S of Figure 3 . Finally, in the step 3, we replace the technical term tokens “ $TT_{i}$ ” ( $i=1,2,\ldots $ ) of the sentence translation with SMT the technical term translations. NMT Rescoring of 1,000-best SMT Translations As shown in the step 1 of Figure 4 , similar to the approach of NMT rescoring provided in Sutskever et al.Sutskever14, we first obtain 1,000-best translation list of the given Japanese sentence using the SMT system. Next, in the step 2, we then replace the technical terms in the translation sentences with technical term tokens “ $TT_{i}$ ” ( $i = 1,2,3,\ldots $ ), which must be the same with the tokens of their source Japanese technical terms in the input Japanese sentence. The technique used for aligning Japanese technical terms with their Chinese translations is the same as that described in Section "NMT Training after Replacing Technical Term Pairs with Tokens" . In the step 3 of Figure 4 , the 1,000-best translations, in which technical terms are represented as tokens, are rescored using the NMT model trained according to the procedure described in Section "NMT Training after Replacing Technical Term Pairs with Tokens" . Given a Japanese sentence $S_J$ and its 1,000-best Chinese translations $S_C^{\ n}$ ( $n=1,2,\ldots ,\ 1,000$ ) translated by the SMT system, NMT score of each translation sentence pair $\langle S_J, S_C^n \rangle $ is computed as the log probability $\log p(S_C^n \mid S_J)$ of Equation ( 7 ). Finally, we rerank the 1,000-best translation list on the basis of the average SMT and NMT scores and output the translation with the highest final score. Training and Test Sets We evaluated the effectiveness of the proposed NMT system in translating the Japanese-Chinese parallel patent sentences described in Section "Japanese-Chinese Patent Documents" . Among the 2.8M parallel sentence pairs, we randomly extracted 1,000 sentence pairs for the test set and 1,000 sentence pairs for the development set; the remaining sentence pairs were used for the training set. According to the procedure of Section "NMT Training after Replacing Technical Term Pairs with Tokens" , from the Japanese-Chinese sentence pairs of the training set, we collected 6.5M occurrences of technical term pairs, which are 1.3M types of technical term pairs with 800K unique types of Japanese technical terms and 1.0M unique types of Chinese technical terms. Out of the total 6.5M occurrences of technical term pairs, 6.2M were replaced with technical term tokens using the phrase translation table, while the remaining 300K were replaced with technical term tokens using the word alignment. We limited both the Japanese vocabulary (the source language) and the Chinese vocabulary (the target language) to 40K most frequently used words. 
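A sketch of the rescoring computation: the trained NMT model is treated as a black-box callable returning per-word probabilities, so the sentence score is just the sum of logs from Eq. (7). How the SMT score and the NMT log-probability are scaled before averaging is not stated here, so the sketch averages them as they are; `smt_score` and `nmt_step_prob` are assumed interfaces, not real APIs.

```python
import math

def nmt_log_prob(source_tokens, target_tokens, nmt_step_prob):
    """log p(S_C | S_J) = sum over l of log p(y_l | y_<l, x), cf. Eq. (7)."""
    logp, prefix = 0.0, []
    for w in target_tokens:
        logp += math.log(nmt_step_prob(source_tokens, prefix, w))
        prefix.append(w)
    return logp

def rerank_nbest(source_with_tokens, nbest, smt_score, nmt_step_prob):
    """Rerank the 1,000-best SMT translations (technical terms already replaced
    by TT tokens) by the average of the SMT score and the NMT log-probability,
    and return the top-scoring candidate."""
    def final_score(cand):
        return 0.5 * (smt_score(cand) +
                      nmt_log_prob(source_with_tokens, cand, nmt_step_prob))
    return max(nbest, key=final_score)
```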
Within the total 1,000 Japanese patent sentences in the test set, 2,244 occurrences of Japanese technical terms were identified, which correspond to 1,857 types. Training Details For the training of the SMT model, including the word alignment and the phrase translation table, we used Moses BIBREF9 , a toolkit for a phrase-based SMT models. For the training of the NMT model, our training procedure and hyperparameter choices were similar to those of Sutskever et al. Sutskever14. We used a deep LSTM neural network comprising three layers, with 512 cells in each layer, and a 512-dimensional word embedding. Similar to Sutskever et al. (2014), we reversed the words in the source sentences and ensure that all sentences in a minibatch are roughly the same length. Further training details are given below: All of the LSTM's parameter were initialized with a uniform distribution ranging between -0.06 and 0.06. We set the size of a minibatch to 128. We used the stochastic gradient descent, beginning at a learning rate of 0.5. We computed the perplexity of the development set using the currently produced NMT model after every 1,500 minibatches were trained and multiplied the learning rate by 0.99 when the perplexity did not decrease with respect to the last three perplexities. We trained our model for a total of 10 epoches. Similar to Sutskever et al. Sutskever14, we rescaled the normalized gradient to ensure that its norm does not exceed 5. We implement the NMT system using TensorFlow, an open source library for numerical computation. The training time was around two days when using the described parameters on an 1-GPU machine. Evaluation Results We calculated automatic evaluation scores for the translation results using two popular metrics: BLEU BIBREF10 and RIBES BIBREF11 . As shown in Table 1 , we report the evaluation scores, on the basis of the translations by Moses BIBREF9 , as the baseline SMT and the scores based on translations produced by the equivalent NMT system without our proposed approach as the baseline NMT. As shown in Table 1 , the two versions of the proposed NMT systems clearly improve the translation quality when compared with the baselines. When compared with the baseline SMT, the performance gain of the proposed system is approximately 3.1 BLEU points if translations are produced by the proposed NMT system of Section "NMT Rescoring of 1,000-best SMT Translations" or 2.3 RIBES points if translations are produced by the proposed NMT system of Section "NMT Decoding and SMT Technical Term Translation" . When compared with the result of decoding with the baseline NMT, the proposed NMT system of Section "NMT Decoding and SMT Technical Term Translation" achieved performance gains of 0.8 RIBES points. When compared with the result of reranking with the baseline NMT, the proposed NMT system of Section "NMT Rescoring of 1,000-best SMT Translations" can still achieve performance gains of 0.6 BLEU points. Moreover, when the output translations produced by NMT decoding and SMT technical term translation described in Section "NMT Decoding and SMT Technical Term Translation" with the output translations produced by decoding with the baseline NMT, the number of unknown tokens included in output translations reduced from 191 to 92. About 90% of remaining unknown tokens correspond to numbers, English words, abbreviations, and symbols. In this study, we also conducted two types of human evaluation according to the work of Nakazawa et al. Nakazawa15: pairwise evaluation and JPO adequacy evaluation. 
During the procedure of pairwise evaluation, we compare each of translations produced by the baseline SMT with that produced by the two versions of the proposed NMT systems, and judge which translation is better, or whether they are with comparable quality. The score of pairwise evaluation is defined by the following formula, where $W$ is the number of better translations compared to the baseline SMT, $L$ the number of worse translations compared to the baseline SMT, and $T$ the number of translations having their quality comparable to those produced by the baseline SMT: $$score=100 \times \frac{W-L}{W+L+T} \nonumber $$ (Eq. 34) The score of pairwise evaluation ranges from $-$ 100 to 100. In the JPO adequacy evaluation, Chinese translations are evaluated according to the quality evaluation criterion for translated patent documents proposed by the Japanese Patent Office (JPO). The JPO adequacy criterion judges whether or not the technical factors and their relationships included in Japanese patent sentences are correctly translated into Chinese, and score Chinese translations on the basis of the percentage of correctly translated information, where the score of 5 means all of those information are translated correctly, while that of 1 means most of those information are not translated correctly. The score of the JPO adequacy evaluation is defined as the average over the whole test sentences. Unlike the study conducted Nakazawa et al. BIBREF12 , we randomly selected 200 sentence pairs from the test set for human evaluation, and both human evaluations were conducted using only one judgement. Table 2 shows the results of the human evaluation for the baseline SMT, the baseline NMT, and the proposed NMT system. We observed that the proposed system achieved the best performance for both pairwise evaluation and JPO adequacy evaluation when we replaced technical term tokens with SMT technical term translations after decoding the source sentence with technical term tokens. Throughout Figure 5 $\sim $ Figure 7 , we show an identical source Japanese sentence and each of its translations produced by the two versions of the proposed NMT systems, compared with translations produced by the three baselines, respectively. Figure 5 shows an example of correct translation produced by the proposed system in comparison to that produced by the baseline SMT. In this example, our model correctly translates the Japanese sentence into Chinese, whereas the translation by the baseline SMT is a translation error with several erroneous syntactic structures. As shown in Figure 6 , the second example highlights that the proposed NMT system of Section "NMT Decoding and SMT Technical Term Translation" can correctly translate the Japanese technical term “”(laminated wafer) to the Chinese technical term “”. The translation by the baseline NMT is a translation error because of not only the erroneously translated unknown token but also the Chinese word “”, which is not appropriate as a component of a Chinese technical term. Another example is shown in Figure 7 , where we compare the translation of a reranking SMT 1,000-best translation produced by the proposed NMT system with that produced by reranking with the baseline NMT. It is interesting to observe that compared with the baseline NMT, we obtain a better translation when we rerank the 1,000-best SMT translations using the proposed NMT system, in which technical term tokens represent technical terms. 
It is mainly because the correct Chinese translation “”(wafter) of Japanese word “” is out of the 40K NMT vocabulary (Chinese), causing reranking with the baseline NMT to produce the translation with an erroneous construction of “noun phrase of noun phrase of noun phrase”. As shown in Figure 7 , the proposed NMT system of Section "NMT Rescoring of 1,000-best SMT Translations" produced the translation with a correct construction, mainly because Chinese word “”(wafter) is a part of Chinese technical term “”(laminated wafter) and is replaced with a technical term token and then rescored by the NMT model (with technical term tokens “ $TT_{1}$ ”, “ $TT_{2}$ ”, $\ldots $ ). Conclusion In this paper, we proposed an NMT method capable of translating patent sentences with a large vocabulary of technical terms. We trained an NMT system on a bilingual corpus, wherein technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except the technical terms. Similar to Sutskever et al. Sutskever14, we used it as a decoder to translate the source sentences with technical term tokens and replace the tokens with technical terms translated using SMT. We also used it to rerank the 1,000-best SMT translations on the basis of the average of the SMT score and that of NMT rescoring of translated sentences with technical term tokens. For the translation of Japanese patent sentences, we observed that our proposed NMT system performs better than the phrase-based SMT system as well as the equivalent NMT system without our proposed approach. One of our important future works is to evaluate our proposed method in the NMT system proposed by Bahdanau et al. Bahdanau15, which introduced a bidirectional recurrent neural network as encoder and is the state-of-the-art of pure NMT system recently. However, the NMT system proposed by Bahdanau et al. Bahdanau15 also has a limitation in addressing out-of-vocabulary words. Our proposed NMT system is expected to improve the translation performance of patent sentences by applying approach of Bahdanau et al. Bahdanau15. Another important future work is to quantitatively compare our study with the work of Luong et al. Luong15. In the work of Luong et al. Luong15, they replace low frequency single words and translate them in a post-processing Step using a dictionary, while we propose to replace the whole technical terms and post-translate them with phrase translation table of SMT system. Therefore, our proposed NMT system is expected to be appropriate to translate patent documents which contain many technical terms comprised of multiple words and should be translated together. We will also evaluate the present study by reranking the n-best translations produced by the proposed NMT system on the basis of their SMT rescoring. Next, we will rerank translations from both the n-best SMT translations and n-best NMT translations. As shown in Section "Evaluation Results" , the decoding approach of our proposed NMT system achieved the best RIBES performance and human evaluation scores in our experiments, whereas the reranking approach achieved the best performance with respect to BLEU. A translation with the highest average SMT and NMT scores of the n-best translations produced by NMT and SMT, respectively, is expected to be an effective translation.
There is no reason to think that this approach wouldn't also be successful for other technical domains. Technical terms are replaced with tokens, therefore so as long as there is a corresponding process for identifying and replacing technical terms in the new domain this approach could be viable.
dc28ac845602904c2522f5349374153f378c42d3
dc28ac845602904c2522f5349374153f378c42d3_0
Q: How many tweets were manually labelled? Text: Abstract Currency trading (Forex) is the largest world market in terms of volume. We analyze trading and tweeting about the EUR-USD currency pair over a period of three years. First, a large number of tweets were manually labeled, and a Twitter stance classification model is constructed. The model then classifies all the tweets by the trading stance signal: buy, hold, or sell (EUR vs. USD). The Twitter stance is compared to the actual currency rates by applying the event study methodology, well-known in financial economics. It turns out that there are large differences in Twitter stance distribution and potential trading returns between the four groups of Twitter users: trading robots, spammers, trading companies, and individual traders. Additionally, we observe attempts of reputation manipulation by post festum removal of tweets with poor predictions, and deleting/reposting of identical tweets to increase the visibility without tainting one's Twitter timeline. Introduction Foreign exchange market (Forex) is a global decentralized market for trading with currencies. The daily trading volume exceeds 5 trillion USD, thus making it the largest market in the world. In this paper we analyze three sources of data, over a period of three years (from January 2014 to December 2016) BIBREF0 : We focus on potential missinformation spreading and manipulations on Twitter. The main issue is: What is the ground truth? We address this problem by moving out of the social network system and by observing another, financial market system. Actual financial gains in the market provide clues to potential manipulations in the social network. We relate both systems by applying and adapting the “event study” methodology BIBREF1 . The currency announcements are events which are expected to influence the EUR-USD exchange rate. If the event signal (buy, hold, or sell) is properly recognized then some actual financial returns can be made in the hours (or days) after the event. In contrast to classical event studies, we categorize events on the basis of sentiment (properly called “stance”) of relevant Twitter users. In our previous work, we already analyzed the effects of Twitter stance on stock prices (30 stocks from the Dow Jones index) BIBREF2 , BIBREF3 . We showed that the peaks of Twitter activity and their polarity are significantly correlated with stock returns. In this paper, we show that, for certain classes of Twitter users, returns after the events are statistically significant (albeit small). And we can also identify differences in returns after the potential manipulations of Twitter feed. The paper is organized as follows. In section SECREF2 we specify how the Forex tweets were collected, a subset manually annotated, and a stance classification model constructed. Section SECREF3 provides simple rules to identify different classes of Twitter users (such as trading robots, spammers, and actual traders). We show that there are large differences in Twitter stance between these users. Section SECREF4 describes the event study methodology in some detail, as needed to understand the subsequent results. We show significant differences in cumulative abnormal returns between the different user groups. In section SECREF5 we address potential manipulations of the user Twitter feed with a tentative goal to improve her/his reputation and visibility. 
We focus on the tweets that were deleted after we originally collected them, and analyze different reasons for this post festum deletions. We conclude with the ideas for further work and enhancements of the preliminary, but promising, results presented so far. Twitter stance model Tweets related to Forex, specifically to EUR and USD, were acquired through the Twitter search API with the following query: “EURUSD”, “USDEUR”, “EUR”, or “USD”. In the period of three years (January 2014 to December 2016) almost 15 million tweets were collected. A subset of them (44,000 tweets) was manually labeled by knowledgeable students of finance. The label captures the leaning or stance of the Twitter user with respect to the anticipated move of one currency w.r.t. the other. The stance is represented by three values: buy (EUR vs. USD), hold, or sell. The tweets were collected, labeled and provided to us by the Sowa Labs company (http://www.sowalabs.com). The labeled tweets were generalized into a Twitter stance model. For supervised learning, variants of SVM BIBREF4 are often used, because they are well suited for large scale text categorization, are robust, and perform well. For Forex tweets, we constructed a two plane SVM classifier BIBREF5 , BIBREF6 . The two plane SVM assumes the ordering of stance values and implements ordinal classification. It consists of two SVM classifiers: One classifier is trained to separate the `buy' tweets from the `hold-or-sell' tweets; the other separates the `sell' tweets from the `buy-or-hold' tweets. The result is a classifier with two hyperplanes that partitions the vector space into three subspaces: buy, hold, or sell. During classification, the distances from both hyperplanes determine the predicted stance value. The stance classifier was evaluated by 10-fold blocked cross-validation. Since tweets are time-ordered, they should not be randomly selected into individual folds, but retained in blocks of consecutive tweets BIBREF7 . The results of performance evaluation are in Table TABREF5 . Note that the F INLINEFORM0 measure considers just the `buy' and `sell' classes, as is common in the three-valued sentiment classification evaluations BIBREF5 . Twitter user groups Different types of Twitter users have very different intentions regarding their impact and message they want to spread. In recent years, specially automated robots became increasingly influential. To properly estimate the relation between the Forex market and tweetosphere, it is important to focus on relevant Twitter users, i.e., Forex trading companies and individual traders. In related work, it was already shown that bots exercise a profound impact on content popularity and activity on Twitter. For example, Gilani et al. BIBREF8 implemented a simple bot detection mechanism based on click frequency and user agent strings. To classify users into three categories (organizations, journalists/media bloggers, and individuals), De Choudhury et al. BIBREF9 trained an automatic classifier. An alternative approach is to detect communities in a retweet network, e.g., BIBREF10 , BIBREF11 . It turns out that it is easy to identify Forex trading robots. Their tweets ( INLINEFORM0 ) all start with one of the eighth patterns (such as “Closed Buy”, “Sell stop”, ...). The Forex Twitter users can then be classified into one of the four groups by the following simple rules: where INLINEFORM0 indicates the daily activity of the user, and INLINEFORM1 is the proportion of the user tweets that were retweeted by others. 
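A compact sketch of the two-plane stance classifier described above: one hyperplane separates `buy' from `hold-or-sell', the other `sell' from `buy-or-hold', and the two signed distances decide the final label. The combination rule below is an assumption, since the text only says that the distances from both hyperplanes determine the predicted stance, and scikit-learn's LinearSVC merely stands in for whatever SVM variant was actually used.

```python
from sklearn.svm import LinearSVC

class TwoPlaneSVM:
    """Ordinal three-class stance classifier built from two binary SVMs."""

    def __init__(self):
        self.buy_vs_rest = LinearSVC()    # buy vs. hold-or-sell
        self.sell_vs_rest = LinearSVC()   # sell vs. buy-or-hold

    def fit(self, X, labels):             # labels in {"buy", "hold", "sell"}
        self.buy_vs_rest.fit(X, [1 if y == "buy" else 0 for y in labels])
        self.sell_vs_rest.fit(X, [1 if y == "sell" else 0 for y in labels])
        return self

    def predict(self, X):
        d_buy = self.buy_vs_rest.decision_function(X)
        d_sell = self.sell_vs_rest.decision_function(X)
        preds = []
        for b, s in zip(d_buy, d_sell):
            if b > 0 and b >= s:
                preds.append("buy")
            elif s > 0:
                preds.append("sell")
            else:
                preds.append("hold")      # neither plane fires clearly
        return preds
```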
Figure FIGREF10 shows the proportions of different Twitter user groups and their tweets in our dataset. We can see that more than half of the users are individuals, but that the trading robots produce by far the largest fraction of Forex tweets. There are also considerable differences in the stance between different user groups. Figure FIGREF11 shows that trading robots produce almost exclusively polarized tweets (no `hold' tweets). On the other hand, spammers (without robots) are predominantly neutral (relatively few `buy' or 'sell' tweets). The groups we focus on, trading companies and individuals, are more opinionated than spammers. It is interesting that in their tweets the `sell' signal is prevailing, probably due to the downward trend of EUR vs. USD in the last three years. Event study An event study captures the impact of external events on the market returns. External events that we consider here are the currency related announcements by the central banks (FED and ECB) and governments (around 750 in the three years). In an event study, Cumulative Abnormal Return (CAR) is defined as a measure of return which exceed the overall market return. Specifically: The other essential component of an event study is determining the type of event in terms of its expected impact on the price. In stock market, typically Earnings Announcements are studied. If an announcement exceeds prior expectations of analysts, it is classified as positive, and stock prices are expected to rise. An event study combines announcements about several stocks, over longer period of time, and computes the average CARs in the days or hours after the announcements. In our case, we do not consider expectation of the analysts, but instead use the stance of the Forex Twitter users regarding the EUR vs. USD exchange rate. We consider all tweets in one hour after the announcement, and aggregate their stance to categorize the event. Then we compute the CARs for up to one day after the event, at one minute resolution. If Twitter stance correctly predicts the exchange rate movement then there should be some tangible returns (CARs) in the hours after the event. Figure FIGREF15 shows returns, aggregated over all 750 events, for different Twitter user groups. The expected result is visible for trading companies (bottom-left chart). For `buy' events (we buy EUR at time 0) CARs are positive (return is around 0.1%, small but significant), for `sell' events (we sell EUR at time 0) CARs are negative , and for `hold' events (no transaction) CARs are around zero. Similar results are obtained for individual traders (bottom-right chart), but the separation of events is not as clear as for trading companies. On the other hand, trading robots and spam users (top two charts in Figure FIGREF15 ) show no useful correlation between the Twitter stance and CARs. As a consequence, we conclude that it is important to properly identify them and eliminate their tweets from any trading strategy based on Twitter. Reputation manipulation Here we focus on another aspect of Twitter misuse for potential manipulation: post festum deletion of tweets by the Twitter user. What are the reasons for users to delete their tweets? Previous research addressed prediction of malicious or deleted tweets BIBREF12 , BIBREF13 , BIBREF14 , and identification of deleted and suspicious accounts BIBREF15 . On one hand, some authors show that typos and rephrasing are among the major causes for deleting tweets BIBREF13 . 
On the other hand, other authors found that in deleted tweets, a significantly higher fraction of the vocabulary consists of swear words, and markers that indicate anger, anxiety, and sadness BIBREF16 . We verified which of the tweets that were collected during the three years in near real time, still exist. It turns out that in our dataset, 4.7% (689,658) posts were post festum deleted by the users. Different user groups exhibit different patterns of deletion. A histogram in Figure FIGREF16 shows fractions of tweets deleted by different user groups. The majority of users do not delete their own tweets at all (peak at 0). At the other extreme (100), there is about 5% of the users who deleted their accounts and all their tweets. But the really interesting are the trading companies, where only one third of them does not delete tweets, and more than half of them delete up to 10% of their tweets. We focus on the deleted tweets by trading companies and individual traders and search for signs of reputation manipulations. A breakdown of deleted tweets for both groups in terms of different stances is in Table TABREF17 . Deleting tweets to increase CARs One reason for companies and individuals to delete their tweets might be to create an image of their capabilities to predict the market. For example, one can post two contradictory tweets at the same time: EUR will go up, and EUR will go down. After the market shows the actual EUR move, the incorrect prediction is deleted, and the user's timeline shows his forecasting insight. We compare the results of the event study before and after the tweets were deleted. Figure FIGREF19 shows CARs for trading companies and individual traders after removing their deleted tweets. At this point, we can report only negative results, i.e., there is no increase of CARs, and the `hold' events are further away from the zero line than in Figure FIGREF15 . Analyzing trading companies We analyze deleted tweets of 189 (out of 195) Twitter users categorized as trading companies that have active Twitter accounts (by deleting an account, all the tweets from that account are also deleted). The 189 companies deleted 3,741 tweets. Among them, four deleted all Forex related tweets from their profile while the accounts are still active, 8 users deleted between 10% and 40% of their tweets, 33 users deleted between 1% and 5% of their tweets, and only 68 did not delete any tweets. The deleting behaviour of trading companies is shown in Figure FIGREF21 . Note that the majority (76% of the trading companies) deleted less than 1% of their tweets. Note also that there are no trading companies that delete between 5 and 10% of their tweets. We analyze the deleted tweets and focus on criteria that might indicate reputation manipulation. Out of the 3,741 deleted tweets, 3,611 are unique (same author and identical text) while 130 tweets are deleted more than once. An extreme case is a tweet (advertising easy and safe profit) which is deleted 46 times (same author and identical text). The deleting and reposting of identical tweets is one form of increasing visibility without tainting the author's Twitter timeline. A tweet that is deleted and posted again appears several times in the user's followers feed while it appears just once in the authors timeline. This can be therefore considered a kind of reputation manipulation. Out of the 93 tweets that were deleted and reposted, 50 were deleted and reposted once while the rest were deleted and reposted several times. 
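The delete-and-repost pattern discussed above is easy to surface once deleted tweets are keyed by author and exact text; the sketch below assumes the collected deleted tweets are available as (author, text) pairs.

```python
from collections import Counter

def delete_and_repost_candidates(deleted_tweets):
    """Count deletions per (author, identical text) pair; anything deleted more
    than once is a candidate for the visibility-boosting delete-and-repost
    behaviour (e.g., the tweet deleted 46 times mentioned above)."""
    counts = Counter((author, text) for author, text in deleted_tweets)
    return {pair: n for pair, n in counts.items() if n > 1}
```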
The 746 `recommendation' tweets that were deleted afterward point to a potential reputation manipulation by deleting the bad recommendations. The breakdown of deleted tweets is shown in Figure FIGREF22 . One of the major reasons to delete tweets are typos and rephrasing BIBREF13 . In these cases, a very similar tweet to the deleted tweet is posted again. We check for each of the 3,575 tweets that were deleted once and not reposted, if they were deleted due to a typo. We define typo as a reason of tweet deletion if the tweet is: posted by the same author, within the three next tweets after the deleted one, with a very similar text ( INLINEFORM0 Levenshtein distance INLINEFORM1 ), and the difference is not in the URLs present in the tweet. We found that 122 deleted tweets were reposted with changes so small that indicate typos. Another category of deleted tweets are retweets. If retweets are deleted, it is usually because the original tweets were deleted. In our dataset, 406 retweets are deleted. We check the remaining 3,437 tweets for the use of vocabulary specific for trading: long, short, bear, bull, bearish, bullish, resistance, support, buy, sell, close. We identify 746 tweets that are recommendations for trading (manually confirmed). This is another kind of possible reputation manipulation: a tweet with recommendation is posted and afterwards, if the recommendation turns out to be spurious, the tweet is deleted. The author's Twitter timeline then falsely appears as if following his recommendations would yield profit. We inspect a specific Twitter account from the category trading companies that posted more than 500 tweets and deleted between 10% and 40% of them. The identity of the account cannot be revealed due to the privacy issues. The tweets deleted fall into the following categories: Reposts: 91, 60 of them are advertisements (e.g., subscribe for analysis), Links (to recommendations): 17, Recommendations: 11, Retweet: 1 (if the original tweet is deleted, retweets are also deleted). We manually checked each of the 11 recommendations that were deleted. In all the cases, the recommendations turned out to be bad, i.e., an investor would loose money. An (anonymized) example of a bad recommendation post is the following: "@user_mention while daily candle is above 1.xyz we are bullish on $EURUSD." while in the actual Forex market, EUR went down. This user used both types of reputation manipulation: deleting poor recommendations, and deleting/reposting of identical tweets to increase their visibility. The percentage of deleted poor predictions is small compared to all the deleted tweets and compared to all the posted tweets. We speculate that the manipulation by tweet deletion needs to be subtle to go unnoticed by the users' followers. However, even a subtle reputation burst in a domain as competitive as Forex trading can bring major benefits to the deceptive user. Conclusions This is an initial study of potential misuses of Twitter to influence the public interested in Forex trading. We identify different types of Twitter accounts that are posting tweets related to the EUR-USD currency exchange. We show that there are considerable differences between them in terms of Twitter stance distribution and CARs. If we eliminate trading robots and spam, we find significant correlations between the Twitter stance and CARs (the returns are small, but the Forex market has very low trading costs). The remaining posts come from the Forex trading companies and individual traders. 
We further analyze the reasons for deleting tweets post festum. Some reasons are harmless (such as correcting typos), but some show indications of reputation manipulation. We consider this a promising direction for further, more in-depth analysis. Acknowledgements The authors acknowledge financial support from the H2020 FET project DOLFINS (grant no. 640772), and the Slovenian Research Agency (research core funding no. P2-103).
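As referenced above, here is a minimal sketch of the typo-deletion heuristic. It assumes tweets are represented as dictionaries with author and text fields and that later_tweets holds the posts following the deleted one, in order; the Levenshtein threshold and the URL-stripping step are assumptions, since the exact bounds and URL handling are not reproduced here.

```python
import re

def levenshtein(a, b):
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def strip_urls(text):
    """Remove URLs so that URL-only differences do not count as typo fixes."""
    return re.sub(r"https?://\S+", "", text).strip()

def deleted_for_typo(deleted_tweet, later_tweets, max_dist=3):
    """True if the deletion looks like a typo fix: one of the author's next
    three tweets has nearly the same text once URLs are removed.
    `max_dist` is an assumed threshold, not the value used in the study."""
    ref = strip_urls(deleted_tweet["text"])
    for t in later_tweets[:3]:
        if t["author"] != deleted_tweet["author"]:
            continue
        cand = strip_urls(t["text"])
        if 0 < levenshtein(ref, cand) <= max_dist:
            return True
    return False
```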
44,000 tweets
ac148fb921cce9c8e7b559bba36e54b63ef86350
ac148fb921cce9c8e7b559bba36e54b63ef86350_0
Q: What dataset they use for evaluation? Text: Introduction Machine summarization systems have made significant progress in recent years, especially in the domain of news text. This has been made possible among other things by the popularization of the neural sequence-to-sequence (seq2seq) paradigm BIBREF0, BIBREF1, BIBREF2, the development of methods which combine the strengths of extractive and abstractive approaches to summarization BIBREF3, BIBREF4, and the availability of large training datasets for the task, such as Gigaword or the CNN-Daily Mail corpus which comprise of over 3.8M shorter and 300K longer articles and aligned summaries respectively. Unfortunately, the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms. In this work, we investigate the application of back-translation to training a summarization system in an unsupervised fashion from unaligned full text and summaries corpora. Back-translation has been successfully applied to unsupervised training for other sequence to sequence tasks such as machine translation BIBREF5 or style transfer BIBREF6. We outline the main differences between these settings and text summarization, devise initialization strategies which take advantage of the asymmetrical nature of the task, and demonstrate the advantage of combining varied initializers. Our approach outperforms the previous state-of-the-art on unsupervised text summarization while using less training data, and even matches the rouge scores of recent semi-supervised methods. Related Work BIBREF7's work on applying neural seq2seq systems to the task of text summarization has been followed by a number of works improving upon the initial model architecture. These have included changing the base encoder structure BIBREF8, adding a pointer mechanism to directly re-use input words in the summary BIBREF9, BIBREF3, or explicitly pre-selecting parts of the full text to focus on BIBREF4. While there have been comparatively few attempts to train these models with less supervision, auto-encoding based approaches have met some success BIBREF10, BIBREF11. BIBREF10's work endeavors to use summaries as a discrete latent variable for a text auto-encoder. They train a system on a combination of the classical log-likelihood loss of the supervised setting and a reconstruction objective which requires the full text to be mostly recoverable from the produced summary. While their method is able to take advantage of unlabelled data, it relies on a good initialization of the encoder part of the system which still needs to be learned on a significant number of aligned pairs. BIBREF11 expand upon this approach by replacing the need for supervised data with adversarial objectives which encourage the summaries to be structured like natural language, allowing them to train a system in a fully unsupervised setting from unaligned corpora of full text and summary sequences. Finally, BIBREF12 uses a general purpose pre-trained text encoder to learn a summarization system from fewer examples. Their proposed MASS scheme is shown to be more efficient than BERT BIBREF13 or Denoising Auto-Encoders (DAE) BIBREF14, BIBREF15. This work proposes a different approach to unsupervised training based on back-translation. 
The idea of using an initial weak system to create and iteratively refine artificial training data for a supervised algorithm has been successfully applied to semi-supervised BIBREF16 and unsupervised machine translation BIBREF5 as well as style transfer BIBREF6. We investigate how the same general paradigm may be applied to the task of summarizing text. Mixed Model Back-Translation Let us consider the task of transforming a sequence in domain $A$ into a corresponding sequence in domain $B$ (e.g. sentences in two languages for machine translation). Let $\mathcal {D}_A$ and $\mathcal {D}_B$ be corpora of sequences in $A$ and $B$, without any mapping between their respective elements. The back-translation approach starts with initial seq2seq models $f^0_{A \rightarrow B}$ and $f^0_{B \rightarrow A}$, which can be hand-crafted or learned without aligned pairs, and uses them to create artificial aligned training data: Let $\mathcal {S}$ denote a supervised learning algorithm, which takes a set of aligned sequence pairs and returns a mapping function. This artificial data can then be used to train the next iteration of seq2seq models, which in turn are used to create new artificial training sets ($A$ and $B$ can be switched here): The model is trained at each iteration on artificial inputs and real outputs, then used to create new training inputs. Thus, if the initial system isn't too far off, we can hope that training pairs get closer to the true data distribution with each step, allowing in turn to train better models. In the case of summarization, we consider the domains of full text sequences $\mathcal {D}^F$ and of summaries $\mathcal {D}^S$, and attempt to learn summarization ($f_{F\rightarrow S}$) and expansion ($f_{S\rightarrow F}$) functions. However, contrary to the translation case, $\mathcal {D}^F$ and $\mathcal {D}^S$ are not interchangeable. Considering that a summary typically has less information than the corresponding full text, we choose to only define initial ${F\rightarrow S}$ models. We can still follow the proposed procedure by alternating directions at each step. Mixed Model Back-Translation ::: Initialization Models for Summarization To initiate their process for the case of machine translation, BIBREF5 use two different initialization models for their neural (NMT) and phrase-based (PBSMT) systems. The former relies on denoising auto-encoders in both languages with a shared latent space, while the latter uses the PBSMT system of BIBREF17 with a phrase table obtained through unsupervised vocabulary alignment as in BIBREF18. While both of these methods work well for machine translation, they rely on the input and output having similar lengths and information content. In particular, the statistical machine translation algorithm tries to align most input tokens to an output word. In the case of text summarization, however, there is an inherent asymmetry between the full text and the summaries, since the latter express only a subset of the former. Next, we propose three initialization systems which implicitly model this information loss. Full implementation details are provided in the Appendix. Mixed Model Back-Translation ::: Initialization Models for Summarization ::: Procrustes Thresholded Alignment (Pr-Thr) The first initialization is similar to the one for PBSMT in that it relies on unsupervised vocabulary alignment. 
Specifically, we train two skipgram word embedding models using fasttext BIBREF19 on $\mathcal {D}^F$ and $\mathcal {D}^S$, then align them in a common space using the Wasserstein Procrustes method of BIBREF18. Then, we map each word of a full text sequence to its nearest neighbor in the aligned space if their distance is smaller than some threshold, or skip it otherwise. We also limit the output length, keeping only the first $N$ tokens. We refer to this function as $f_{F\rightarrow S}^{(\text{Pr-Thr}), 0}$. Mixed Model Back-Translation ::: Initialization Models for Summarization ::: Denoising Bag-of-Word Auto-Encoder (DBAE) Similarly to both BIBREF5 and BIBREF11, we also devise a starting model based on a DAE. One major difference is that we use a simple Bag-of-Words (BoW) encoder with fixed pre-trained word embeddings, and a 2-layer GRU decoder. Indeed, we find that a BoW auto-encoder trained on the summaries reaches a reconstruction rouge-l f-score of nearly 70% on the test set, indicating that word presence information is mostly sufficient to model the summaries. As for the noise model, for each token in the input, we remove it with probability $p/2$ and add a word drawn uniformly from the summary vocabulary with probability $p$. The BoW encoder has two advantages. First, it lacks the other models' bias to keep the word order of the full text in the summary. Secondly, when using the DBAE to predict summaries from the full text, we can weight the input word embeddings by their corpus-level probability of appearing in a summary, forcing the model to pay less attention to words that only appear in $\mathcal {D}^F$. The Denoising Bag-of-Words Auto-Encoder with input re-weighting is referred to as $f_{F\rightarrow S}^{(\text{DBAE}), 0}$. Mixed Model Back-Translation ::: Initialization Models for Summarization ::: First-Order Word Moments Matching (@!START@$\mathbf {\mu }$@!END@:1) We also propose an extractive initialization model. Given the same BoW representation as for the DBAE, function $f_\theta ^\mu (s, v)$ predicts the probability that each word $v$ in a full text sequence $s$ is present in the summary. We learn the parameters of $f_\theta ^\mu $ by marginalizing the output probability of each word over all full text sequences, and matching these first-order moments to the marginal probability of each word's presence in a summary. That is, let $\mathcal {V}^S$ denote the vocabulary of $\mathcal {D}^S$, then $\forall v \in \mathcal {V}^S$: We minimize the binary cross-entropy (BCE) between the output and summary moments: We then define an initial extractive summarization model by applying $f_{\theta ^*}^\mu (\cdot , \cdot )$ to all words of an input sentence, and keeping the ones whose output probability is greater than some threshold. We refer to this model as $f_{F\rightarrow S}^{(\mathbf {\mu }:1), 0}$. Mixed Model Back-Translation ::: Artificial Training Data We apply the back-translation procedure outlined above in parallel for all three initialization models. 
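To make the alternating procedure concrete, the following is a minimal sketch of the loop rather than the paper's actual training code: models are represented as plain callables, and the `train_fn` argument stands in for supervised seq2seq training on the artificial pairs (convolutional seq2seq in Fairseq in the paper).

```python
def back_translation(initial_summarizers, D_full, D_summ, train_fn, n_rounds=4):
    """Alternating back-translation for summarization (schematic sketch).

    initial_summarizers : list of callables full_text -> summary
                          (e.g. the Pr-Thr, DBAE and moments-based initializers)
    train_fn            : callable taking a list of (source, target) pairs and
                          returning a new seq2seq callable
    """
    summarizers, expanders = list(initial_summarizers), []
    for t in range(n_rounds):
        if t % 2 == 0:
            # artificial summaries (inputs) + real full text (outputs)
            # -> train the next expanders f_{S->F}
            expanders = [train_fn([(f(x), x) for x in D_full]) for f in summarizers]
        else:
            # artificial full text (inputs) + real summaries (outputs)
            # -> train the next summarizers f_{F->S}
            summarizers = [train_fn([(g(y), y) for y in D_summ]) for g in expanders]
    return summarizers
```

Each model is trained on artificial inputs and real outputs, as described above, and the loop simply alternates the direction at every step.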
For example, $f_{F\rightarrow S}^{(\mathbf {\mu }:1), 0}$ yields the following sequence of models and artificial aligned datasets: Finally, in order to take advantage of the various strengths of each of the initialization models, we also concatenate the artificial training dataset at each odd iteration to train a summarizer, e.g.: Experiments ::: Data and Model Choices We validate our approach on the Gigaword corpus, which comprises of a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines. All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11. Experiments ::: Initializers Table TABREF9 compares test ROUGE for different initialization models, as well as the trivial Lead-8 baseline which simply copies the first 8 words of the article. We find that simply thresholding on distance during the word alignment step of (Pr-Thr) does slightly better then the full PBSMT system used by BIBREF5. Our BoW denoising auto-encoder with word re-weighting also performs significantly better than the full seq2seq DAE initialization used by BIBREF11 (Pre-DAE). The moments-based initial model ($\mathbf {\mu }$:1) scores higher than either of these, with scores already close to the full unsupervised system of BIBREF11. In order to investigate the effect of these three different strategies beyond their rouge statistics, we show generations of the three corresponding first iteration expanders for a given summary in Table TABREF1. The unsupervised vocabulary alignment in (Pr-Thr) handles vocabulary shift, especially changes in verb tenses (summaries tend to be in the present tense), but maintains the word order and adds very little information. Conversely, the ($\mathbf {\mu }$:1) expansion function, which is learned from purely extractive summaries, re-uses most words in the summary without any change and adds some new information. Finally, the auto-encoder based (DBAE) significantly increases the sequence length and variety, but also strays from the original meaning (more examples in the Appendix). The decoders also seem to learn facts about the world during their training on article text (EDF/GDF is France's public power company). Experiments ::: Full Models Finally, Table TABREF13 compares the summarizers learned at various back-translation iterations to other unsupervised and semi-supervised approaches. Overall, our system outperforms the unsupervised Adversarial-reinforce of BIBREF11 after one back-translation loop, and most semi-supervised systems after the second one, including BIBREF12's MASS pre-trained sentence encoder and BIBREF10's Forced-attention Sentence Compression (FSC), which use 100K and 500K aligned pairs respectively. 
As far as back-translation approaches are concerned, we note that the model performances are correlated with the initializers' scores reported in Table TABREF9 (iterations 4 and 6 follow the same pattern). In addition, we find that combining data from all three initializers before training a summarizer system at each iteration as described in Section SECREF8 performs best, suggesting that the greater variety of artificial full text does help the model learn. Experiments ::: Conclusion In this work, we use the back-translation paradigm for unsupervised training of a summarization system. We find that the model benefits from combining initializers, matching the performance of semi-supervised approaches.
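As an illustration of the (Pr-Thr) initializer described earlier, a minimal sketch follows. It assumes the two fastText embedding spaces have already been aligned in a common space and L2-normalized; the cosine distance, the threshold value, and the output length are placeholder choices rather than the paper's exact settings.

```python
import numpy as np

def pr_thr_summarize(full_text_tokens, src_vecs, tgt_vocab, tgt_vecs,
                     threshold=0.4, max_len=12):
    """Sketch of the initial summarizer f_{F->S}^{(Pr-Thr),0}: map each full-text
    token to its nearest neighbor in the aligned summary-side space if the
    distance is below `threshold`, skip it otherwise, and keep only the first
    `max_len` output tokens.

    src_vecs : dict token -> unit vector in the aligned space (full-text side)
    tgt_vecs : (V, d) array of unit vectors for the summary vocabulary
    """
    out = []
    for tok in full_text_tokens:
        if tok not in src_vecs:
            continue
        dists = 1.0 - tgt_vecs @ src_vecs[tok]   # cosine distance to summary words
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            out.append(tgt_vocab[j])
        if len(out) >= max_len:
            break
    return out
```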
The same 2K set from Gigaword used in BIBREF7
094ce2f912aa3ced9eb97b171745d38f58f946dd
094ce2f912aa3ced9eb97b171745d38f58f946dd_0
Q: What is the source of the tables? Text: INTRODUCTION Today most of business-related information is transmitted in an electronic form, such as emails. Therefore, converting these messages into an easily analyzable representation could open numerous business opportunities, as a lot of them are not used fully because of the difficulty to build bespoke parsing methods. In particular, a great number of these transmissions are semi-structured text, which doesn’t necessarily follows the classic english grammar. As seen in Fig. 1 , they can be under the form of tables containing diverse elements, words and numbers, afterwards referred to as tokens. These tables are often implicitly defined, which means that there are no special tags between what is or not part of the table, or even between cells. In these cases, the structure is coming from space or tabs alignment and from the relative order of the tokens. The data often are unlabeled, which means that the content must be read with domain-based knowledge. Thus, automatic extraction of structured information is a major challenge because token candidates come in a variety of forms within a fuzzy context. A high level of supervision is hard to obtain as manual labeling requires time that is hardly affordable when receiving thousands of such emails a day, and even more so as databases can become irrelevant over time. That is why training a generalizable model to extract these data should not rely on labeled inputs, but rather on the content itself - a paradigm called self-supervised learning. Many approaches already exist in Natural Language Processing, such as Part-of-Speech (POS) tagging or Named Entity Recognition (NER), but they do not take advantage of the semi-structured data framework. On the contrary, there exists some information extraction algorithms applied to tables, but they necessitate a great amount of manually defined rules and exceptions. Our model aims to reconcile both approaches for an efficient and totally self-supervised take on information extraction in the particular context of semi-structured data. In this paper, we present a neural architecture for token embedding in plain-text tables, which provides a useful lower-dimensional representation for tasks such as unsupervised, or semi-supervised clustering. Intuitively, tokens with a similar meaning should be close in the feature space to ease any further information extraction. Our model aims to combine the better of the context and the character composition of each token, and that is why the neural architecture is designed to learn both context and character-level representations simultaneously. Finally, we can take advantage of the distances between tokens in the feature space to create proper tables from fuzzy input data. Information Extraction on Semi-Structured Data The field of Information Extraction on Semi-Structured Data has been particularly active in the 1990's and the early 2000's, developed in settings such as the Message Understanding Conferences (MUCs) and, more recently, in the ICDAR 2013 Table Competition BIBREF0 . A very complete survey of information extraction in tables can be found in BIBREF1 and in BIBREF2 . The main goal of systems such as BIBREF3 , BIBREF4 or TINTIN BIBREF5 , is to detect tables in messages, or to label lines such as captions using the density of blank spaces, Conditional Random Fields or Hidden Markov Models respectively. This also has been done more recently in an unsupervised manner by BIBREF6 and BIBREF7 . 
Obviously the main goal is to extract the content of these tables, which is done by BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , with DEByE BIBREF12 , DIPRE BIBREF13 or WHISK BIBREF14 by learning patterns to match to the data systematically using manually defined rules and trying to generalize them as much as possible. A very thorough panorama of this class of algorithms is presented in BIBREF15 . More recently, BIBREF16 proposes a graph structure in tables to match predefined patterns. Unfortunately, these methods are not flexible enough to be used in the case of a great number of patterns in the data, and need user supervision or gazetteers to work properly, which are not always available. The idea of our model can certainly be related the most with BIBREF17 or BIBREF18 , but we add in new Natural Language Processing tools and neural networks – among other differences. Natural Language Processing In recent years, neural networks have replaced handcrafted features in Natural Language Processing, with excellent results – a recent survey of the topic can be found in BIBREF19 . The seminal paper of Collobert et al. BIBREF20 presents a first idea of token embeddings, or word features vectors, based on lookup tables in a fixed vocabulary and using neural networks. It also brings a general solution to problems such as Part of Speech (POS), Chunking and Named Entity Recognition (NER). The work on word features vectors continued with the classic Word2Vec paper BIBREF21 which is now one of the references on the topic, introducing the skip-gram model for text. There, the method used to train the network is trying to predict the next words in a sentence based on surrounding ones. However, a problem of these approaches are that they rely on a dictionary of words, and that “out-of-vocabulary” words such as orthographic errors get a generic representation. In problems such as information extraction, that is a major issue because the content consists mostly in names that are not classic words, and can evolve in time. Besides, closely related words such as “even” and “uneven” should be close in the feature space, which is not guaranteed by these methods. That is why recently the focus has shifted on a study directly on the characters, that mostly solve these questions. Examples can be found in BIBREF22 and BIBREF23 with LSTMs, or in BIBREF24 , BIBREF25 and BIBREF26 with Convolutional Networks. Further developments presented in BIBREF27 and BIBREF28 aim to learn vector representations of sentences or documents instead of limiting the models to the words only. This is done with the same methods used to get words representations, only with whole rows or paragraphs as the input. These are our main inspirations, but all these algorithms have been created to deal with natural and not semi-structured text, so they do not take advantage of the bi-dimensional structure of the data. An effort worth noting is BIBREF29 with the introduction of Multidimensional Recurrent Neural Networks in the Optical Character Recognition (OCR) field, but the idea has not been developed further. THE SC2T EMBEDDING We will now present the SC2T (Self-Supervised Character and Context-levels on Tables) embedding. As in BIBREF23 , two important ideas guide our neural network architecture: to correctly represent a token, we need to take into account its composition (a number, a word?) as well as its context (the surrounding tokens). 
As we deal with tokens that mostly are not words in the classic sense of the term, but abbreviations, numbers, unique identifiers... and that we have no dictionary, we can't use word-level features similar to what was done in BIBREF21 . That's why we will use character-level representations, in the same fashion that BIBREF23 , BIBREF24 , BIBREF22 or BIBREF26 . We do not use external dictionary or gazetteers, which allows our program to be relevant on any semi-structured text. Note that given raw text as input, the first stage is the tokenization of the data. A discussion on that topic is complex and beyond the scope of this paper, as special rules have to be applied depending on the data and pertinent segmentation. The Architecture Our architecture is created to learn a character- and context-sensitive embedding of tokens. To build this distributed representation we train our network on a proxy task, which is to reconstruct tokens using only the surrounding ones - an idea recalling auto-encoders. By surrounding, we mean that are contained in a horizontal window of size $h_w$ and a vertical window of size $v_w$ around it, padding with zeros if necessary. This method resembles what is done in BIBREF20 or BIBREF28 for example, but takes advantage of the 2D structure of the data. Selecting tokens which are horizontally adjacent is trivial contrary to vertical ones. Papers such as BIBREF30 and BIBREF31 give good insights on how to define that efficiently. However, for simplicity reasons, we take the tokens of the surrounding lines which rightmost character is closest to the rightmost character of our target token. Each of these surrounding tokens is first transformed in a one-hot encoding on the characters of dimensionality $d$ , padded left with blank spaces to achieve the length $l_t$ for all tokens. Then, they all pass in the same character-level convolutional network (ChNN), which structure is inspired by BIBREF26 . It is composed of a one-hot-encoding then fully connected (FC) layer, then of two one-dimensional CNNs with $n_f = 64$ filters of size 3 with a max-pooling. Finally, a fully connected layer is added to bring the embedding to the desired size. ReLU activations, batch normalization and $25\%$ dropout are also placed between each layer. A diagram of this network can be found in Fig. 2 . The resulting embeddings are then concatenated and fed into the horizontal (HNN) and vertical (VNN) context networks, that have the same structure as the character-level network excepted the input size and that the max-pooling and FC layer is replaced by a simple Flatten layer. They are kept separate from each other because they are not aimed to learn the same relationships in the data. Then their outputs are merged and passed through two fully connected layers (LNN), the last of them of size $s_e$ . Thus, we have two useful representations for a given token: the output from the LNN network (of size $s_e$ ), plus the output taken directly from the character CNN on the token itself (of size $ch_e$ ). We then concatenate and feed them to the last part of the network, E, which consists of two fully connected layers and whose final output is compared to the one-hot-encoding of the original token. The concatenation is followed by a dropout layer to prevent the network to only use the input token. A value of $0.5$ yields the best results in our experience, which confirms the idea presented in BIBREF23 . Our model allows a simultaneous training of all the components in the network using backpropagation. 
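A minimal PyTorch sketch of the character-level network (ChNN) described above is given below; the embedding sizes, padding, and the exact placement of batch normalization and dropout are assumptions where the text leaves them unspecified.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Sketch of the character-level network (ChNN): one-hot encoding, a
    per-position FC layer, two 1D convolutions with 64 filters of size 3,
    max-pooling, and a final FC layer producing the token embedding."""

    def __init__(self, n_chars, char_fc_dim=32, n_filters=64,
                 kernel_size=3, emb_dim=64, dropout=0.25):
        super().__init__()
        # per-position FC on the one-hot encoding (acts like a learned char embedding)
        self.char_fc = nn.Linear(n_chars, char_fc_dim)
        self.conv1 = nn.Conv1d(char_fc_dim, n_filters, kernel_size, padding=1)
        self.bn1 = nn.BatchNorm1d(n_filters)
        self.conv2 = nn.Conv1d(n_filters, n_filters, kernel_size, padding=1)
        self.bn2 = nn.BatchNorm1d(n_filters)
        self.drop = nn.Dropout(dropout)
        self.out_fc = nn.Linear(n_filters, emb_dim)

    def forward(self, one_hot):                    # one_hot: (batch, l_t, n_chars)
        x = torch.relu(self.char_fc(one_hot))      # (batch, l_t, char_fc_dim)
        x = x.transpose(1, 2)                      # (batch, char_fc_dim, l_t)
        x = self.drop(torch.relu(self.bn1(self.conv1(x))))
        x = self.drop(torch.relu(self.bn2(self.conv2(x))))
        x = torch.max(x, dim=2).values             # max-pooling over character positions
        return self.out_fc(x)                      # (batch, emb_dim) character embedding
```

The same module is shared across all surrounding tokens, and its outputs feed the horizontal and vertical context networks.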
Finally, our context- and character-sensitive embedding is obtained by taking the output of the first FC layer in the E network, which has size $ch_e + s_e$ , and we will see in the next part that it is indeed a useful distributed representation of tokens. A diagram of our whole network can be found in Fig. 3 . We use CNNs in all the stages of our network instead of LSTMs or other layers for two reasons: first, in the case of tables, the sequential aspect is often negligible. Besides, we implemented the same program with bidirectional LSTMs and it did not yield better results, while slowing down the whole process. This is a problem because speed of execution is an important factor in industrial applications treating tens of thousands of messages each day, each containing hundreds or thousands of tokens. Alternative Model An alternative to the previous model can be considered. Indeed, instead of letting the E network merge the character and context embeddings, we could just concatenate them, applying a constant importance coefficient $K$ that has to be defined depending on the data. Indeed, if the different categories in the data are from different types (e.g., textual names and numbers), the character content has to be privileged, unlike the case of more context dependent tokens (e.g., numbers in a certain order). Usually, if the structure of the data is disrupted, we will need to rely more on characters. $K$ will increase the weight of one part or another, given that clustering algorithms put more importance on greater values in the data. Obviously, this coefficient $K$ necessitates an intervention of the user, and a knowledge of the data. Thus, it is not applicable in general but can be very efficient in particular cases, as we will see in section IV. Tokens and Lines Clustering Once we obtain our token embeddings, a simple clustering algorithm such as k-means++ BIBREF32 can be used to compute a clustering of the tokens. Obtaining coherent groups of tokens can lead to many developments. It can be used for manual labeling and bootstrapping quickly a labeled dataset for supervised learning, but it can also be the basis of an efficient semi-supervised algorithm. We also need to cluster lines in the data: indeed, a message is often composed of one or multiples headers, the data itself, as well as disclaimers and signatures, and more generally blocks of natural language in the document. Once again, their repartition or presence is not guaranteed, so an adaptable clustering is necessary. To obtain an embedding of the lines, we simply compute a max-pooling of the embeddings of its tokens. We used this method for separating headers, disclaimers and table content by 3-means clustering on our data. EMPIRICAL RESULTS To assess the efficiency of our embeddings, we use them to label tokens in the Online Retail Data Set from UCI via k-means++ clustering. We chose it because this is a varied public dataset that fits the kind of problem we are dealing with. Unfortunately, the relevant Information Extraction papers we found (sec. "Information Extraction on Semi-Structured Data" ) used either custom datasets, or datasets that are not online anymore. The Dataset The Online Retail Data Set consists of a clean list of 25873 invoices, totaling 541909 rows and 8 columns. InvoiceNo, CustomerID and StockCode are mostly 5 or 6-digit integers with occasional letters. Quantity is mostly 1 to 3-digit integers, a part of them being negative, and UnitPrice is composed of 1 to 6 digits floating values. 
InvoiceDate are dates all in the same format, Country contains strings representing 38 countries and Description is 4224 strings representing names of products. We reconstruct text mails from this data, by separating each token with a blank space and stacking the lines for a given invoice, grouped by InvoiceNo. We will use the column label as ground truth for the tokens in the dataset. For simplicity reasons we add underscores between words in Country and Description to ease the tokenization. Another slight modification has to be done: $25\%$ of the CustomerId values are missing, and we replace them by '00000'. A sample can be found in Fig. 4 . Labeling of tokens using the SC2T Embedding We will now create an embedding of the tokens, and use it in a k-means++ clustering. We will use the homogeneity score $h$ as metrics, which measures if all the data points that are members of a given cluster are given the same label. It can be written $h = \frac{1}{k}\sum _{c = 1}^{k}\frac{\#~C_c \cap L_c}{\#~C_c}$ where $C_c$ is the ensemble of data points in cluster $c$ and $L_c$ is the ensemble of data points that have the label which is most present in cluster $c$ . It represents the accuracy of a semi-supervised clustering where the user simply gives a label to each cluster, corresponding to the majority of its elements. Obviously, $h\rightarrow 1$ when $k$ tends to the number of data points. However, we will not restrain ourselves to taking $k = 8$ , the exact number of labels, as varied data can have the same ground truth labels in a real setting. For example, $12/24/2017$ , 2017 or $Dec-24$ could be all labeled as dates, but might be difficult to group into one cluster. That is why we do not consider the completeness score, which measures if all the data points of a given class are elements of the same cluster, as relevant in our case. So, a good measure of the quality of our clustering is the score reached for a certain number of clusters, e.g. 20 or 100, which will represent the number of points that the user should label to obtain such accuracy. Note that as k-means yields stochastic results, the results given here are a mean of 100 independent runs. At first, we have a simple problem: all the lines follow the same pattern, so a simple extraction rule can perfectly extract data. This is a good baseline for our program as it should retrieve all the information. Our experiment consists of creating homogeneous clusters according to the labels of the tokens after randomly deleting a portion of them (Del.) and/or replacing randomly a part of the characters (CR) - heavy modifications that are not unlike those found in real-life settings. An example of disrupted data can be found in Fig. 5 . Note that we only used a subset of 1000 invoices, $24K$ lines or approximately $190K$ tokens, which yielded slightly worse results compared to the tests we made on the whole dataset. It is logical that the more the context is disrupted, the more we will rely on the characters part. We will present the results in two settings: one with the model presented in "The Architecture" (NoK), the other one with the parameter $K$ presented in "Alternative Model" (K). Best Char % is the proportion of the norm of the character part of the embedding compared to the norm of the whole embedding, which is controlled by variations of $K$ . Results of homogeneity depending on the number of clusters can be found in Table I ( $nc$ being the number of clusters), and our parameters in Table II. 
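For concreteness, the homogeneity score $h$ defined above can be computed as follows; note that this per-cluster purity, averaged with equal weight per cluster, differs from the entropy-based homogeneity_score in scikit-learn.

```python
import numpy as np

def homogeneity(cluster_ids, labels):
    """h = (1/k) * sum_c  #(members of cluster c with its majority label) / #cluster c."""
    cluster_ids = np.asarray(cluster_ids)
    labels = np.asarray(labels)
    scores = []
    for c in np.unique(cluster_ids):
        members = labels[cluster_ids == c]
        _, counts = np.unique(members, return_counts=True)
        scores.append(counts.max() / len(members))
    return float(np.mean(scores))

# Usage with k-means++ on the token embeddings (embeddings: n_tokens x dim array):
# from sklearn.cluster import KMeans
# cluster_ids = KMeans(n_clusters=20, init="k-means++").fit_predict(embeddings)
# print(homogeneity(cluster_ids, token_labels))
```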
We chose the horizontal window such that it takes into account the whole line, but this could be ill-suited in the case of very large tables. Obviously, the more disrupted the data, the less accurate our model. First, we can see that the model with $K$ is better than the one without in most cases, but remember that the value of $K$ has been cross-validated to obtain the best possible result. This is not realistic in general, but can still be very useful when we have prior knowledge about the data. For example, we observe that without deletions, and even with character replacements, the context alone brings 100% accuracy, reflecting that the position entirely determines the label. When we randomly replace characters we cannot rely as much on them, and the numbers show that our model is more robust to a deletion of tokens than it is to character replacement, probably because in our dataset tokens with the same label are often similar in composition. It is also interesting to notice that our supervision-free NoK model, even if slightly disadvantaged in simple cases, yields its best results when the data is more disrupted. This is good news, as these are the cases where we have the least prior knowledge; they are also certainly the most realistic settings and the ones that need new models most. Unsurprisingly, we noticed that it is most often CustomerID, InvoiceNo and, to a lesser extent, StockCode that are mislabeled, due to their similar composition. Even in our most difficult case, 50% deletion and 50% character replacement, we obtain decent results in our unsupervised setting. Overall, with as few as 100 token labels out of $190K$ we could get a high clustering accuracy in most of our settings. The size of the embedding also had to be chosen carefully, because it has to encode enough information while avoiding the curse of dimensionality. Finally, note that the network gets less training data when increasing the percentage of deletions, and that we retrained it from scratch in each setting. An Application to Table Alignment Often, tables are not correctly aligned when data is missing, which creates an erroneous display. To correct this problem, we can define a reference line, that is, the longest line that belongs to the table part according to the line clustering. This line defines the number of columns in our resulting table. Then, for every other line, we try to match each token with a token from the reference line that is on its right, i.e., the token which is closest in the embedding space while allowing the order to be kept (sketched in code below). We suppose here that the order is always preserved, because permutations are very unlikely within a given table. We then obtain correctly aligned tables, as seen in Fig. 6 , which can be very useful for easier labeling of the tokens. This can be used even if there are different types of lines containing different information, these lines being separated beforehand by clustering as presented above in "Tokens and Lines Clustering" . We then take different rows as references. CONCLUSIONS In this paper we present a new Neural Language model that jointly uses the character composition of tokens and their surrounding context in the particular framework of semi-structured text data, for the purpose of generating a distributed representation. We have seen that the embeddings have linearized the space quite well, such that a $k$ -means will gather similar tokens or, by max-pooling them, similar lines, and that it could be applied to table realignment. 
The approach presented here already allows an information extraction system to function, but it could be even more beneficial to add semi-supervised learning algorithms, as described in BIBREF33 or BIBREF34 . Another solution would be to bootstrap large annotated databases for performing supervised learning. We introduce several hyper-parameters to be tuned, mainly the sizes of our embeddings. We want our model to stay as general and unsupervised as possible, and we argue that tuning these manually is the better solution, as existing unsupervised measures of clustering quality (Silhouette Coefficient BIBREF35 , Calinski-Harabaz Index BIBREF36 ) can be misleading for our particular task. Indeed, they can favor fewer clusters that are not homogeneous in terms of labels over more clusters that are, which runs against our goal. Finally, the fact that we do not have relevant benchmarks for this particular task is problematic. However, our dataset is openly available on the Internet (link above), and can serve as a simple but representative benchmark for papers to come. ACKNOWLEDGMENT We would like to thank Clement Laisné (Hellebore Technologies) for having developed convenient tools that greatly helped us in our research, as well as all our colleagues for their support. We also thank Caio Filippo Corro for discussions about this paper.
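Returning to the table realignment procedure described above, here is a minimal sketch of one possible greedy, order-preserving implementation; the paper does not spell out the matching algorithm, so the greedy strategy, the Euclidean distance, and the input format are assumptions.

```python
import numpy as np

def realign_line(line_tokens, line_vecs, ref_vecs, n_cols):
    """Place each token of a line into a column of the reference line.

    line_tokens : tokens of the line to realign
    line_vecs   : their embeddings, in order
    ref_vecs    : (n_cols, dim) embeddings of the reference-line tokens
    Each token is assigned to the closest still-available column to the right
    of the previously filled one, so the original order is preserved.
    """
    row = [""] * n_cols
    col = 0
    for tok, vec in zip(line_tokens, line_vecs):
        dists = np.linalg.norm(ref_vecs[col:] - vec, axis=1)
        best = col + int(np.argmin(dists))
        row[best] = tok
        col = best + 1
        if col >= n_cols:
            break
    return row
```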
The Online Retail Data Set consists of a clean list of 25873 invoices, totaling 541909 rows and 8 columns.
b5bfa6effdeae8ee864d7d11bc5f3e1766171c2d
b5bfa6effdeae8ee864d7d11bc5f3e1766171c2d_0
Q: Which regions of the United States do they consider? Text: Introduction Human language reflects cultural, political, and social evolution. Words are the atoms of language. Their meanings and usage patterns reveal insight into the dynamical process by which society changes. Indeed, the increasing frequency with which electronic text is used as a means of communicating, e.g., through email, text messaging, and social media, offers us the opportunity to quantify previously unobserved mechanisms of linguistic development. While there are many aspects of language being investigated towards an increased understanding of social and linguistic evolution BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one particular area of focus has been on changes in past tense forms for English verbs BIBREF0 , BIBREF1 , BIBREF2 . These investigations have collectively demonstrated that English verbs are going through a process of regularization, where the original irregular past tense of a verb is replaced with the regular past tense, formed using the suffix -ed. For example, the irregular past tense of the verb `burn' is `burnt' and the regular past tense is `burned'. Over time, the regular past tense has become more popular in general, and for some verbs has overtaken the irregular form. For example, in Fig.~ UID1 , we use the Google Ngram Online Viewer to compare the relative frequency of `burnt' with that of `burned' over the past 200 years. (As shown in an earlier paper involving two of the present authors BIBREF6 , and expanded on below, the Google Ngram dataset is highly problematic but can serve as a useful barometer of lexical change.) In the first half of the 19th century, the irregular past tense `burnt' was more popular. However, the regular past tense `burned' gained in popularity and in the late 1800s became the more popular form, which has persisted through to today. Looking at several examples like this, in a 2011 paper Michel et al. studied the regularization of verbs, along with other cultural and language trends, as an accompaniment to their introduction of the Google Books Ngram corpus (hereafter Ngrams) and the proto-field `Culturomics' BIBREF1 . They found that most of the verb regularization over the last two centuries came from verbs using the suffix -t for the irregular form, and that British English texts were less likely than American English ones to move away from this irregular form. In a 2007 study, Lieberman et al. explored the regularization of English verbs using the CELEX corpus, which gives word frequencies from several textual sources BIBREF0 . Focusing on a set of 177 verbs that were all irregular in Old English, they examined how the rate of verb regularization relates to frequency of usage, finding that more common verbs regularized at a slower rate. They calculated half-lives for irregular verbs binned by frequency, finding that irregular verbs regularize with a half-life proportional to the square root of frequency of usage. In a more recent study, Newberry et al. proposed a method for determining the underlying mechanisms driving language change, including the regularization of verbs BIBREF2 . Using the Corpus of Historical American English and inspired by ideas from evolution, the authors described a method to determine if language change is due to selection or drift, and applied this method to three areas of language change. They used a null hypothesis of stochastic drift and checked if selection would be strong enough to reject this null hypothesis. 
Of the 36 verbs Newberry et al. studied, only six demonstrated statistical support for selection. They also claimed that rhyming patterns might be a driver of selection. Unfortunately, the corpora used in these studies have considerable limitations and corruptions. For example, early versions of the Ngrams data includes scientific literature, whose explosive growth through the 20th century is responsible for the decreasing trend in relative word usage frequency observed in many common search terms BIBREF6 . Moreover, the library-like nature of the corpus admits no accounting for popularity: Lord of the Rings and an unknown work contribute with equal weight to token counts. Another general concern with large corpora of a global language like English is that language use varies tremendously with culture and geography. Ngrams allows only for the regional exploration of the English language with the British English corpus and the American English corpus. Twitter data enables us to focus on much smaller spatial regions (e.g., county or state). Prior studies of verb regularization have also focused on data reflecting a formal editorial process, such as the one undergone by any published book. This editorial process will tend to normalize the language, reflecting the linguistic opinions of a small minority of canon gatekeepers, rather than portray the language used by everyday people. For example, maybe the irregular from of a particular verb is considered proper by scholars, but a vast majority of the English speaking population uses the regular form. While it is not a verb form, one illustrative example is `whom'. Although `whom' is the correct word to use in the objective case, it is common for everyday speakers to use `who'. In the present study we take tweets to be a closer representation of everyday language. For the vast majority of accounts, tweets are authored by individuals without undergoing a formal editing process. As such, the language therein should more accurately represent average speakers than what is found in books. The demographic groups contributing to Twitter are by no means a carefully selected cross-section of society, but do offer natural language use by the roughly 20% of adult English speakers who use Twitter BIBREF8 . When exploring temporal changes in language use, the Ngrams and CELEX datasets evidently cover a much longer period than the decade for which social media is available. As a result, we are unable to infer anything about the temporal dimension of regularization looking at Twitter. In this paper we use the Ngrams and Twitter datasets to establish estimates of the current state of English verb regularization. We structure our paper as follows: In Sec.~ SECREF2 , we describe the datasets we use. In Sec.~ SECREF3 , we present our results. We study verb regularization in English in general in Sec.~ UID5 . We compare verb regularization in American English (AE) and British English (BE) using both Ngrams and geotagged Twitter data in Sec.~ UID7 . In Sec.~ "Description of data sets" , we employ methods to study regional variation in verb usage, leveraging county level user location data in the United States. We also explore correlations between verb regularization and a number of socio-demographic and economic variables. Finally, in Sec.~ SECREF4 , we provide concluding remarks. Description of data sets To be consistent with prior work, we chose the verb list for our project to match that of Michel et al. BIBREF1 . 
When comparing BE with AE, we use the subset of verbs that form the irregular past tense with the suffix -t. When calculating frequencies or token counts for the `past tense' we use both the preterite and past participle of the verb. See #1 for a complete tabulation of all verb forms. The Ngrams data reflects relative frequency, providing, for a verb and a given year, the percentage of corpus tokens that are the given verb, where a token is an individual occurrence of a word. The Google Ngram Online Viewer also has a smoothing parameter, $s$ , which averages the relative frequency for the given year with that of each of the $s$ years before and after the given year, if they exist. For example, Fig.~ UID1 uses a smoothing of 3 years and shows that, averaged across the years 1997--2000 (the value displayed for the year 2000), the word `burned' appeared with relative frequency 0.004321% (roughly once every 23,000 tokens), while `burnt' appeared with relative frequency 0.000954% (roughly once every 105,000 tokens). We downloaded the Ngrams verb data for the most recent 6-year period available (2003--2008) BIBREF9 . Specifically, we chose the 2008 values of relative frequency with a smoothing of 5 years, resulting in an average case insensitive#1 For general English, as suggested by BIBREF6 , we queried the English Fiction 2012 corpus, which uses ``books predominantly in the English language that a library or publisher identified as fiction.'' For AE we used the American English 2012 corpus, which uses ``books predominantly in the English language that were published in the United States.'' For BE we used the British English 2012 corpus, which uses ``books predominantly in the English language that were published in Great Britain'' BIBREF10 . The Twitter messages for our project consist of a random sample of roughly 10% of all tweets posted between 9 September 2008 and 22 October 2017. This `decahose' dataset comprises a total of more than 106 billion messages, sent by about 750 million unique accounts. From this larger set, we performed a case-insensitive search for verb forms of interest, also extracting geographic location when available in the meta-data associated with each tweet. Tweets geotagged by mobile phone GPS with a U.S. location comprise about a 0.27% subset of the decahose dataset; United Kingdom locations comprise about a 0.05% subset. Many individuals provide location information, entered as free text, along with their biographical profile. We matched user specified locations of the form `city, state' to a U.S. county when possible, comprising a 2.26% subset of the decahose dataset. Details on this matching process can be found in #1. For general English, we counted the number of tokens in the decahose dataset for each verb. For AE, we used the tweets whose geotagged coordinates are located in the United States, and for BE we used the tweets whose geotagged coordinates are located in the United Kingdom. For the analysis of verbs by county, we used the tweets with the user entered location information. Table~ UID2 summarizes the datasets used for both Ngrams and Twitter. The demographic data for U.S. counties comes from the 2015 American Community Survey 5-year estimates, tables DP02--Selected Social Characteristics, DP03--Selected Economic Characteristics, DP04--Selected Housing Characteristics, and DP05--Demographic and Housing Estimates, which can be found by searching online at https://factfinder.census.gov/. 
These tables comprise a total of 513 usable socio-demographic and economic variables. We compute the regularization fraction for a verb as the proportion of instances in which the regular form was used for the past tense of the verb. More specifically, for Ngrams we divide the relative frequency for the regular past tense by the sum of the relative frequencies for the regular and irregular past tenses. Similarly, for Twitter we divide the token count for the regular past tense by the sum of the token counts for both the regular and irregular past tenses. If the resulting regularization fraction is greater than $0.5$ , the regular past tense is more popular and we call the verb regular. Otherwise we call the verb irregular. When calculating an average regularization across all verbs, we first compute the regularization fraction for each verb individually. Then we compute the average of the regularization fractions, with each verb contributing the same weight in the average, irrespective of frequency. We perform this `average of averages' to avoid swamping the contribution of less frequent verbs. Verb regularization using Ngrams and Twitter Using the datasets in row (I) of Table~ UID2 , we begin by comparing Ngrams and Twitter with respect to regularization of English verbs in Fig.~ UID3 , where we find that 21 verbs are more regular in Ngrams, and 85 are more regular on Twitter. A Wilcoxon signed rank test of the data has a $p$ -value of $7.9\times 10^{-6}$ , demonstrating strong evidence that verbs on Twitter are more regular than verbs in Ngrams. What mechanisms could be responsible for the observed increase in regularity on Twitter? One possibility is that authors of fiction published in the 2000s, along with their editors, being professional users of English, have a larger vocabulary than the typical user of Twitter. If so, their commitment to proper English would contribute to the appearance of relatively more irregular verbs in books. The average Twitter user may not know, or choose to use, the `correct' past tense form of particular verbs, and thus use the default regular past tense. Another driver may be that non-native English speakers writing English tweets may be more likely to use the default regular form. We will find quantitative support for this mechanism below. As a preview, we note that Fig.~ UID3 shows that `burn' is predominantly regular on Twitter globally, but we see later (Fig.~ UID4 B) that `burn' is irregular on Twitter for both American English and British English. Thus, it is likely that non-native speakers are contributing to this difference. American and British English We next study how verb regularization varies with geographic region. In this subsection we use the datasets in row (II) of Table~ UID2 for AE and row (III) for BE and the subset of verbs that form the irregular past tense with the suffix -t. In Fig.~ UID4 A, we compare American and British English in Ngrams. The average regularization fraction is 0.49 in AE and $0.42$ in BE. For 17 out of 22 verbs, AE shows more regularization, with a Wilcoxon signed rank test $p$ -value of $9.8\times 10^{-4}$ , giving statistical support that AE verbs are more regular on average in Ngrams than BE verbs. As we show in the inset scatter plot of Fig.~ UID4 A, regularization in AE and BE are also strongly positively correlated with a Spearman correlation coefficient of $0.97$ $(p=2.3\times 10^{-14})$ . Verbs that are more regular in AE are also more regular in BE, just not to the same extent. 
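As a small illustration of the regularization fraction and the `average of averages' described above (the token counts in the example are made up):

```python
def regularization_fraction(regular_count, irregular_count):
    """Fraction of past-tense tokens using the regular -ed form."""
    return regular_count / (regular_count + irregular_count)

def average_regularization(per_verb_counts):
    """Unweighted mean of per-verb regularization fractions."""
    fracs = [regularization_fraction(r, i) for r, i in per_verb_counts.values()]
    return sum(fracs) / len(fracs)

# hypothetical Twitter token counts for two verbs
counts = {"burn": (8_000, 12_000),    # ('burned', 'burnt')
          "spill": (15_000, 5_000)}   # ('spilled', 'spilt')
frac_burn = regularization_fraction(*counts["burn"])   # 0.4 -> classed as irregular
print(average_regularization(counts))                  # 0.575
```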
In Fig.~ UID4 B, we compare regularization in AE and BE on Twitter. For Twitter, the average regularization fraction is $0.54$ for AE, higher than Ngrams, and $0.33$ for BE, much lower than Ngrams. As with Ngrams, 17 verbs out of 22 show more regularization in AE than in BE. The Wilcoxon signed rank test gives a weaker but still significant $p$ -value of $1.9\times 10^{-3}$ . The inset in Fig.~ UID4 B also shows a positive correlation, although not as strong as Ngrams, with a Spearman correlation coefficient of $0.87$ $(p=1.1\times 10^{-7})$ . Generally on Twitter, regular AE verbs are also regular in BE, but the difference in regularization fraction is much greater than for Ngrams. In Fig.~ UID6 A, we demonstrate the difference in regularization between AE and BE for both Ngrams and Twitter. The values in this figure for Ngrams can be thought of as, for each verb in Fig.~ UID4 A, subtracting the value of the bottom bar from the top bar, and likewise for Twitter and Fig.~ UID4 B. Positive numbers imply greater regularization in AE, the more common scenario. When the difference is near zero for one corpus, it is usually close to zero for the other corpus as well. However, when Ngrams shows that AE is notably more regular than BE, Twitter tends to show a much larger difference. The average difference in regularization fraction between AE and BE for Twitter is $0.21$ , whereas it is only $0.08$ for Ngrams. Again, we find that these averages are significantly different with a Wilcoxon signed rank $p$ -value of $1.9\times 10^{-2}$ . The inset scatter plot tells a similar story, with a cluster of points near the origin. As the difference in regularization fraction between regions increases in Ngrams, it also tends to increase in Twitter, with Spearman correlation coefficient $0.65$ and $p$ -value $1.0\times 10^{-3}$ . The steep rise shows that the difference increases faster on Twitter than in Ngrams. Fig.~ UID6 B returns to comparing Ngrams and Twitter, but now between AE and BE. For each verb, the bar chart shows the difference between the regularization fraction for Twitter and Ngrams in both AE and BE, with positive values showing that regularization for Twitter is greater. In this case, the values can be thought of as subtracting the values for the bars in Fig.~ UID4 A from the corresponding bars in Fig.~ UID4 B. As we find for English in general, regularization is greater on Twitter than in Ngrams for AE, with an average difference of $0.04$ . However, for BE, regularization is greater in Ngrams than on Twitter, with an average difference in regularization fraction of $-0.09$ . We summarize our findings in Table~ UID8 . We found again that verbs on Twitter are more regular than in Ngrams for American English, likely for many of the same reasons that verbs on Twitter are more regular than Ngrams in general. However, we find that in British English the opposite is true: Verbs on Twitter are less regular than in Ngrams. In decreasing order by average regularization fraction, we have AE Twitter, then AE Ngrams, then BE Ngrams, and finally BE Twitter. Knowing that the general trend is towards regularization BIBREF1 , BIBREF0 , it seems that regularization is perhaps being led by everyday speakers of American English, with American published work following suit, but with a lag. Then, it may be that British English authors and editors are being influenced by American publications and the language used therein. 
Indeed, some studies have found a general `Americanization' of English across the globe BIBREF11 , BIBREF12 , meaning that the various varieties of English used across the world are becoming more aligned with American English. Finally, it may be that average British users of Twitter are more resistant to the change. Indeed, from the figures in the study by Gonçalves et al., one can see that the `Americanization' of British English is more pronounced in Ngrams than on Twitter BIBREF11 , agreeing with what we have found here. Regularization by US county In Sec.~ UID7 , we demonstrated regional differences in verb regularization by comparing BE and AE. Here, we consider differences on a smaller spatial scale by quantifying regularization by county in the United States using the dataset in row (IV) of Table~ UID2 . We use methods inspired by Grieve et al. to study regional variation in language BIBREF13 . We only include counties that had at least 40 total tokens for the verbs under consideration. We plot the average regularization fraction for each county in the continental U.S. in Fig.~ "Introduction" A, where counties with not enough data are colored black. To control for the skewed distribution of samples associated with county population (see below for more details), we use residuals for this portion of the analysis. After regressing with the $\log _{10}$ of data volume (total number of tokens) for each county, we compute the average regularization fraction residual, which is plotted in Fig.~ "Introduction" B. That is, if we let $d_i$ be the total number of tokens for verbs in tweets from county $i$ ; $\alpha $ and $\beta $ be the slope and intercept parameters computed from regression; and $R_i$ be the average regularization fraction for county $i$ , then we compute the average regularization fraction residual for county $i$ , $r_i^{\text{reg}}$ , as $$r_i^{\text{reg}} = R_i - \left(\beta + \alpha \log _{10} d_i \right).$$ (Eq. 34) Using the average regularization residual at the county level as input, we measure local spatial autocorrelation using the Getis-Ord $Gi^*$ $z$ -score BIBREF14 , $$G_i^* = \frac{ \sum _j w_{ij} r_j^{\text{reg}} - \overline{r}^{\text{reg}}\sum _j w_{ij}}{\sigma \sqrt{\left[n\sum _j w_{ij}^2 - \left( \sum _j w_{ij}\right)^2 \right]/(n-1)}},$$ (Eq. 35) where $$\sigma =\sqrt{ \frac{\sum _j (r_j^{\text{reg}})^2 }{n} - (\overline{r}^{\text{reg}})^2 },$$ (Eq. 36) $\overline{r}^{\text{reg}} = \frac{1}{n}\sum _i r_i^{\text{reg}}$ , $n$ is the number of counties, and $w_{ij}$ is a weight matrix. To obtain the weight matrix used in this calculation, we first create a distance matrix, $s_{ij}$ , where the distance between each pair of counties is the larger of the great circle distance, $s_{ij}^\text{GC}$ , in miles between the centers of the bounding box for each county and 10 miles. That is, $$s_{ij}=\max \left(s_{ij}^\text{GC}, 10\right).$$ (Eq. 37) We make the minimum value for $s_{ij}$ 10 miles to prevent a county from having too large of a weight. We then compute the weight matrix as $$w_{ij}=\frac{1}{\sqrt{s_{ij}}}.$$ (Eq. 38) Fig.~ "Introduction" C shows the results for the lower 48 states, where black represents counties left out because there was not enough data. 
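A minimal NumPy sketch of the $G_i^*$ computation defined in the equations above follows; it assumes the residuals and the pairwise great-circle distance matrix (in miles) have already been computed.

```python
import numpy as np

def getis_ord_z(r, dist_miles, min_dist=10.0):
    """G_i* z-scores for the average regularization residuals.

    r          : (n,) array of residuals r_i^reg, one per county
    dist_miles : (n, n) great-circle distances between county bounding-box centers
    """
    n = len(r)
    s = np.maximum(dist_miles, min_dist)   # floor distances at 10 miles
    w = 1.0 / np.sqrt(s)                   # weight matrix w_ij
    r_bar = r.mean()
    sigma = np.sqrt((r ** 2).mean() - r_bar ** 2)
    w_sum = w.sum(axis=1)                  # sum_j w_ij for each county i
    w_sq_sum = (w ** 2).sum(axis=1)        # sum_j w_ij^2
    num = w @ r - r_bar * w_sum
    den = sigma * np.sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
    return num / den
```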
For each county, the $Gi^*$ $z$ -score computes a local weighted sum of the residuals, $r_j^\text{reg}$ , for the surrounding counties and compares that to the expected value of that weighted sum if all the counties had exactly the average residual, $\overline{r}^\text{reg}$ , as their value, where the weighting is such that closer counties have a higher weight. Areas that are darker blue (positive $z$ -score) belong to a cluster of counties that has higher regularization than average, and those that are darker red (negative $z$ -score) belong to a cluster that has lower regularization than average. So, Fig.~ "Introduction" C shows that, in general, western counties show less regularization than average and eastern counties show more, except that the New England area is fairly neutral. As usual, the $z$ -score gives the number of standard deviations away from the mean. For this we would do a two tail test for significance because we are looking for both high value and low value clusters. For example, a $z$ -score greater in magnitude than $1.96$ is significant at the $.05$ level. If we do a Bonferroni correction based on 3161 counties (the number included for this part of the analysis), then a $z$ -score greater in magnitude than $4.32$ is significant for a two tail test at the $.05/3161\approx 1.58 \times 10^{-5}$ level. We do this same process looking at individual verbs as well. However, when looking at individual verbs, we use the regularization fraction rather than residuals, because the data skew is not as problematic. This is because the main problem with data volume comes when averaging across verbs that have different frequencies of usage, as explained below. Also, here we include counties that have at least 10 tokens. Fig.~ "" gives an example map showing the $Gi^*$ $z$ -scores for the verb `dream'. The maps showing local spatial autocorrelation for the complete list of verbs can be found in the Online Appendix A at . For many of the counties in the US, there is a small sample of Twitter data. We restrict our analysis to counties with a total token count of at least 40 for the verbs we consider. Even for the counties meeting this criteria, the volume of data varies, leading to drastically different sample sizes across counties. More common verbs tend to have popular irregular forms (e.g., `found' and `won'), and less common verbs tend to be regular (e.g., `blessed' and `climbed') BIBREF0 . As a result, samples taken from populous counties are more likely to contain less common verbs. Our `average regularization' is an average of averages, resulting in an underlying trend toward higher rates for more populous counties due to the increased presence of rarer regular verbs. Fig.~ UID17 demonstrates the relationship between data volume and regularization. To explore the connection further, we perform a synthetic experiment as follows. To simulate sampling from counties with varying population sizes, we first combine all verb token counts (using the Twitter dataset from row (I) of Table~ UID2 ) into a single collection. We then randomly sample a synthetic county worth of tokens from this collection. For a set of 1000 logarithmically spaced county sizes, we randomly draw five synthetic collections of verbs (each is a blue circle in Fig.~ UID17 ). For each sample, we compute the average regularization fraction, as we did for U.S. counties. The goal is to infer the existences of any spurious trend introduced by the sampling of sparsely observed counties. 
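The synthetic-county experiment just described can be sketched as follows; the pooled token counts and the range of county sizes are illustrative placeholders:

```python
# Pool all verb token counts, draw synthetic 'counties' of logarithmically spaced
# sizes, and compute the average-of-averages regularization for each draw.
import numpy as np

rng = np.random.default_rng(1)

# Placeholder pooled counts per verb: [regular past tokens, irregular past tokens].
verbs = ["burn", "dream", "bless", "find"]
counts = np.array([[3_000, 7_000], [9_000, 1_000], [9_500, 500], [10, 99_990]])
flat_p = counts.flatten() / counts.sum()          # token-level sampling probabilities

def synthetic_county(n_tokens):
    """Draw n_tokens from the pooled collection; return the average regularization."""
    draw = rng.multinomial(n_tokens, flat_p).reshape(counts.shape)
    present = draw.sum(axis=1) > 0                # verbs observed in this sample
    fracs = draw[present, 0] / draw[present].sum(axis=1)
    return fracs.mean()                           # unweighted average across verbs

# 1000 logarithmically spaced county sizes (range assumed), five draws per size.
sizes = np.logspace(np.log10(40), 6, 1000).astype(int)
curve = [(s, np.mean([synthetic_county(s) for _ in range(5)])) for s in sizes]
```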
The resulting simulated curve is comparable to the trend observed for actual U.S. counties. As the data volume increases, the simulated version converges on roughly $0.17$ , which is the average regularization fraction for all of Twitter. We also explored correlations between verb regularization and various demographic variables. Fig.~ UID17 showed a strong relationship between data volume and verb regularization. It has been shown elsewhere that tweet density positively correlates with population density BIBREF15 , and population size is correlated with many demographic variables. As a result, we use partial correlations as an attempt to control for the likely confounding effect of data volume. For each demographic variable, we compute the regression line between the $\log _{10}$ of data volume, $d_i$ , and regularization, and compute the residuals as in Eq.~ "Methods and results" . Then, if the demographic variable is an `Estimate' variable, where the unit is number of people, we similarly compute the regression line between the $\log _{10}$ of data volume and the $\log _{10}$ of the demographic variable#1 and compute the residuals, $r_i^{\text{dem}}$ , as $$r_i^{\text{dem}} = \log _{10}(D_i) - \left( \delta + \gamma \log _{10} d_i \right),$$ (Eq. 42) where $D_i$ is the value of the demographic variable for county $i$ , and $\gamma $ and $\delta $ are the slope and intercept parameters calculated during regression. Otherwise, the demographic variable is a `Percent' variable, with units of percentage, and we compute the regression line between the $\log _{10}$ of data volume and the demographic variable, and compute residuals as $$r_i^{\text{dem}} = D_i - \left( \delta + \gamma \log _{10} d_i \right).$$ (Eq. 44) The correlation between residuals $r_i^{\text{reg}}$ and $r_i^{\text{dem}}$ gives the partial correlation between average regularization and the demographic variable. Our findings suggest that data volume is a confounding variable in at least some of the cases because, after controlling for data volume, there is generally a large decrease in the correlation between verb regularization and the demographic variables. The largest in magnitude Pearson correlation between verb regularization and a demographic variable is $0.68$ , for the variable `Estimate; SCHOOL ENROLLMENT - Population 3 years and over enrolled in school', whereas the largest in magnitude partial correlation is only $-0.18$ , for the variable `Percent; OCCUPATION - Civilian employed population 16 years and over - Management, business, science, and arts occupations'. Table~ UID20 lists the 10 demographic variables with largest in magnitude partial correlation. Fig.~ UID18 shows an example for one of the demographic variables, the `Percent' variable with largest simple correlation. Fig.~ UID18 A is the scatter plot of the demographic variable with average regularization, which corresponds to simple correlation. Fig.~ UID18 B is the scatter plot of the residuals, $r_i^{\text{dem}}$ and $r_i^{\text{reg}}$ , after regressing with the $\log _{10}$ of data volume, and corresponds with partial correlation. We can see that there is a strong simple correlation ( $-0.52$ ), but after accounting for data volume that correlation largely vanishes ( $-0.15$ ). Similar plots for all of the demographic variables can be found in the Online Appendix B at . Concluding remarks Our findings suggest that, by and large, verb regularization patterns are similar when computed with Ngrams and Twitter. 
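The partial-correlation procedure described above (Eqs. 42 and 44) amounts to correlating two sets of residuals; a minimal sketch with placeholder county-level arrays:

```python
# Partial correlation between average regularization and a demographic variable,
# controlling for log10 token volume: regress both on log10(d), correlate residuals.
import numpy as np
from scipy.stats import pearsonr

def residuals(y, log_d):
    slope, intercept = np.polyfit(log_d, y, 1)
    return y - (intercept + slope * log_d)

rng = np.random.default_rng(2)
n = 200                                                 # counties (placeholder)
log_d = np.log10(rng.integers(40, 100_000, n))          # token volume per county
reg = rng.random(n)                                     # average regularization fraction
demo = rng.integers(1_000, 1_000_000, n).astype(float)  # an 'Estimate' variable

r_reg = residuals(reg, log_d)             # Eq. (34)
r_dem = residuals(np.log10(demo), log_d)  # Eq. (42); for 'Percent' variables
                                          # use demo directly instead (Eq. 44)

partial_r, p = pearsonr(r_reg, r_dem)
```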
However, for some verbs, the extent of regularization can be quite different. If social media is an indicator of changing patterns in language use, Ngrams data ought to lag with a timescale not yet observable due to the recency of Twitter data. Very reasonably, Ngrams data may not yet be showing some of the regularization that is happening in everyday English. We also found differences in verb regularization between American and British English, but found that this difference is much larger on Twitter than Ngrams. Overall, and in American English specifically, verbs are more regular on Twitter than in Ngrams, but the opposite is true for British English. In the U.S., we also find variation in average verb regularization across counties. Lastly, we showed that there are significant partial correlations between verb regularization and various demographic variables, but they tend to be weak. Our findings do not account for the possible effects of spell checkers. Some people, when tweeting, may be using a spell checker to edit their tweet. If anything, this will likely skew the language on Twitter towards the `correct' form used in edited textual sources. For example, in Fig.~ UID3 we see that `stand' is irregular for both Ngrams and Twitter, and likely most spell checkers would consider the regular `standed' a mistake, but we see that `stand' is still over 100 times more regular on Twitter than in Ngrams. So, the differences between edited language and everyday language may be even larger than what we find here suggests. Future work should look into the effects of spell checkers. Our study explored the idea that edited written language may not fully represent the language spoken by average speakers. However, tweets do not, of course, fully represent the English speaking population. Even amongst users, our sampling is not uniform as it reflects the frequency with which different users tweet #1. Furthermore, the language used on Twitter is not an unbiased sample of language even for people who use it frequently. The way someone spells a word and the way someone pronounces a word may be different, especially, for example, the verbs with an irregular form ending in -t, because -t and -ed are close phonetically. However, the fact that we found differences between the language of Ngrams and the language of Twitter suggests that the true language of everyday people is not fully represented by edited written language. We recommend that future studies should investigate speech data. We are thankful for the helpful reviews and discussions of earlier versions of this work by A. Albright and J. Bagrow, and for help with initial data collection from L. Gray. PSD & CMD were supported by NSF Grant No. IIS-1447634, and TJG, PSD, & CMD were supported by a gift from MassMutual. Table of Verb Forms |c||c||c|c||r| & Regular & 2c||Irregular & Verb & Preterit & Past Participle & Preterit & Past Participle & Token Count (continued) & Regular & 2c||Irregular & Verb & Preterit & Past Participle & Preterit & Past Participle & Token Count 5|r|Continued on next page A tabulation of all verb forms used in this study. The Token Count column gives the sum of all the tokens for the past tense forms of the verb, both regular and irregular, in our Twitter dataset (see row (I) of Table~ UID2 in Sec.~ SECREF2 ). 
abide & abided & abode & abode & 146,566 alight & alighted & alit & alit & 56,306 arise & arised & arose & arisen & 164,134 awake & awaked & awoke & awoken, awoke & 423,359 become & becomed & became & become & 50,664,026 begin & beginned & began & begun & 5,942,572 bend & bended & bent & bent & 4,777,019 beseech & beseeched & besought & besought & 3,390 bleed & bleeded & bled & bled & 252,225 blend & blended & blent & blent & 436,029 bless & blessed & blest & blest & 22,547,387 blow & blowed & blew & blown & 9,155,246 break & breaked & broke & broken & 54,506,810 breed & breeded & bred & bred & 1,040,854 bring & bringed & brought & brought & 15,303,318 build & builded & built & built & 8,521,553 burn & burned & burnt & burnt & 7,457,942 buy & buyed & bought & bought & 24,841,526 catch & catched & caught & caught & 24,891,188 choose & choosed & chose & chosen & 10,290,205 clap & clapped & clapt & clapt & 405,837 climb & climbed & clomb, clom & clomben & 635,122 cling & clinged & clung & clung & 49,742 creep & creeped & crept & crept & 698,405 deal & dealed & dealt & dealt & 1,181,974 dig & digged & dug & dug & 941,656 dream & dreamed & dreamt & dreamt & 2,794,060 drink & drinked & drank & drunk, drank & 37,295,703 drive & drived & drove & driven & 5,745,497 dwell & dwelled & dwelt & dwelt & 25,725 eat & eated & ate & eaten & 25,084,758 fall & falled & fell & fallen & 25,224,815 fight & fighted & fought & fought & 3,625,297 find & finded & found & found & 80,709,195 flee & fleed & fled & fled & 405,295 freeze & freezed & froze & frozen & 7,454,847 get & getted & got & got, gotten & 500,591,203 give & gived & gave & given & 58,697,198 grow & growed & grew & grown & 17,951,273 hang & hanged & hung & hung & 3,991,956 hear & heared & heard & heard & 52,605,822 hide & hided, hidded & hid & hid, hidden & 7,829,276 hold & holded & held & held & 10,080,725 inlay & inlayed & inlaid & inlaid & 44,811 keep & keeped & kept & kept & 11,785,131 kneel & kneeled & knelt & knelt & 83,765 know & knowed & knew & known & 58,175,701 lay & layed & laid & laid & 5,828,898 leap & leaped & leapt & leapt & 91,620 learn & learned & learnt & learnt & 18,134,586 lose & losed & lost & lost & 72,695,892 mean & meaned & meant & meant & 26,814,977 pay & payed & paid & paid & 21,150,031 plead & pleaded & pled & pled & 193,553 ride & rided & rode & ridden & 1,710,109 seek & seeked & sought & sought & 888,822 sell & selled & sold & sold & 14,251,612 send & sended & sent & sent & 26,265,441 shake & shaked & shook & shaken & 3,223,316 shoe & shoed & shod & shod & 47,780 shrink & shrinked & shrank, shrunk & shrunk, shrunken & 296,188 sing & singed & sang, sung & sung & 6,767,707 sink & sinked & sank, sunk & sunk, sunken & 927,419 slay & slayed & slew & slain & 2,153,981 sleep & sleeped & slept & slept & 9,252,446 slide & slided & slid & slid & 530,659 sling & slinged & slung & slung & 38,320 slink & slinked & slunk & slunk & 5,772 smell & smelled & smelt & smelt & 1,089,814 smite & smitted, smited & smote & smitten, smote & 176,768 sneak & sneaked & snuck & snuck & 797,337 speak & speaked & spoke & spoken & 8,502,050 speed & speeded & sped & sped & 216,062 spell & spelled & spelt & spelt & 3,812,137 spend & spended & spent & spent & 17,603,781 spill & spilled & spilt & spilt & 1,627,331 spin & spinned & spun & spun & 342,022 spoil & spoiled & spoilt & spoilt & 3,891,576 spring & springed & sprang, sprung & sprung & 626,400 stand & standed & stood & stood & 3,942,812 steal & stealed & stole & stolen & 11,884,934 sting & stinged & 
stung & stung & 391,053 stink & stinked & stank, stunk & stunk & 1,556,197 stride & strided & strode & stridden & 17,811 strike & striked & struck & struck, stricken & 2,167,165 strip & stripped & stript & stript & 837,967 strive & strived & strove & striven & 33,705 swear & sweared & swore & sworn & 1,902,662 sweep & sweeped & swept & swept & 931,245 swim & swimmed & swam & swum & 356,842 swing & swinged & swung & swung & 295,360 take & taked & took & taken & 83,457,822 teach & teached & taught & taught & 9,379,039 tear & teared & tore & torn & 4,238,865 tell & telled & told & told & 71,562,969 thrive & thrived & throve & thriven & 43,612 throw & throwed & threw & thrown & 13,197,226 tread & treaded & trod & trodden & 56,624 vex & vexed & vext & vext & 139,411 wake & waked & woke & woken & 30,796,918 wear & weared & wore & worn & 8,552,191 weep & weeped & wept & wept & 200,690 win & winned & won & won & 45,276,202 wind & winded & wound & wound & 1,340,267 wring & wringed & wrung & wrung & 29,141 write & writed & wrote & written, writ, wrote & $23,926,025$ Details on User Location Matching To study regularization by county, we extracted location information from the user-provided location information, which was entered as free text in the user's biographical profile. To do this, for each tweet we first checked if the location field was populated with text. If so, we then split the text on commas, and checked whether there were two tokens separated by a comma. If so, we made the assumption that it might be of the form `city, state'. Then we used a python package called uszipcode, which can be found here: pythonhosted.org/uszipcode/. We used the package's method to search by city and state. If the package returned a location match, we used the returned latitude and longitude to determine which county the detected city belonged to. The package allows for fuzzy matching, meaning the city and state do not have to be spelled correctly, and it allows for the state to be fully spelled out or be an abbreviation. In the source code of the package there was a hard coded confidence level of 70 for the fuzzy matching. We modified the source code so that the confidence level was an input to the method, and running tests found we were satisfied with a confidence level of 91. We checked by hand the matches of 1000 tweets that this method returned a match for, 100 from each year in the dataset, and found the only potential error in these matches was when the user typed in `Long Island, NY', or a similar variant. For this, the package returned Long Island City, NY, which is on Long Island, but there are multiple counties on Long Island, so the user may actually live in a different county. None of the other 1000 tweets were inappropriately or ambiguously assigned.
all regions except those that are colored black
bf00808353eec22b4801c922cce7b1ec0ff3b777
bf00808353eec22b4801c922cce7b1ec0ff3b777_0
Q: Why did they only consider six years of published books? Text: Introduction Human language reflects cultural, political, and social evolution. Words are the atoms of language. Their meanings and usage patterns reveal insight into the dynamical process by which society changes. Indeed, the increasing frequency with which electronic text is used as a means of communicating, e.g., through email, text messaging, and social media, offers us the opportunity to quantify previously unobserved mechanisms of linguistic development. While there are many aspects of language being investigated towards an increased understanding of social and linguistic evolution BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one particular area of focus has been on changes in past tense forms for English verbs BIBREF0 , BIBREF1 , BIBREF2 . These investigations have collectively demonstrated that English verbs are going through a process of regularization, where the original irregular past tense of a verb is replaced with the regular past tense, formed using the suffix -ed. For example, the irregular past tense of the verb `burn' is `burnt' and the regular past tense is `burned'. Over time, the regular past tense has become more popular in general, and for some verbs has overtaken the irregular form. For example, in Fig.~ UID1 , we use the Google Ngram Online Viewer to compare the relative frequency of `burnt' with that of `burned' over the past 200 years. (As shown in an earlier paper involving two of the present authors BIBREF6 , and expanded on below, the Google Ngram dataset is highly problematic but can serve as a useful barometer of lexical change.) In the first half of the 19th century, the irregular past tense `burnt' was more popular. However, the regular past tense `burned' gained in popularity and in the late 1800s became the more popular form, which has persisted through to today. Looking at several examples like this, in a 2011 paper Michel et al. studied the regularization of verbs, along with other cultural and language trends, as an accompaniment to their introduction of the Google Books Ngram corpus (hereafter Ngrams) and the proto-field `Culturomics' BIBREF1 . They found that most of the verb regularization over the last two centuries came from verbs using the suffix -t for the irregular form, and that British English texts were less likely than American English ones to move away from this irregular form. In a 2007 study, Lieberman et al. explored the regularization of English verbs using the CELEX corpus, which gives word frequencies from several textual sources BIBREF0 . Focusing on a set of 177 verbs that were all irregular in Old English, they examined how the rate of verb regularization relates to frequency of usage, finding that more common verbs regularized at a slower rate. They calculated half-lives for irregular verbs binned by frequency, finding that irregular verbs regularize with a half-life proportional to the square root of frequency of usage. In a more recent study, Newberry et al. proposed a method for determining the underlying mechanisms driving language change, including the regularization of verbs BIBREF2 . Using the Corpus of Historical American English and inspired by ideas from evolution, the authors described a method to determine if language change is due to selection or drift, and applied this method to three areas of language change. They used a null hypothesis of stochastic drift and checked if selection would be strong enough to reject this null hypothesis. 
Of the 36 verbs Newberry et al. studied, only six demonstrated statistical support for selection. They also claimed that rhyming patterns might be a driver of selection. Unfortunately, the corpora used in these studies have considerable limitations and corruptions. For example, early versions of the Ngrams data includes scientific literature, whose explosive growth through the 20th century is responsible for the decreasing trend in relative word usage frequency observed in many common search terms BIBREF6 . Moreover, the library-like nature of the corpus admits no accounting for popularity: Lord of the Rings and an unknown work contribute with equal weight to token counts. Another general concern with large corpora of a global language like English is that language use varies tremendously with culture and geography. Ngrams allows only for the regional exploration of the English language with the British English corpus and the American English corpus. Twitter data enables us to focus on much smaller spatial regions (e.g., county or state). Prior studies of verb regularization have also focused on data reflecting a formal editorial process, such as the one undergone by any published book. This editorial process will tend to normalize the language, reflecting the linguistic opinions of a small minority of canon gatekeepers, rather than portray the language used by everyday people. For example, maybe the irregular from of a particular verb is considered proper by scholars, but a vast majority of the English speaking population uses the regular form. While it is not a verb form, one illustrative example is `whom'. Although `whom' is the correct word to use in the objective case, it is common for everyday speakers to use `who'. In the present study we take tweets to be a closer representation of everyday language. For the vast majority of accounts, tweets are authored by individuals without undergoing a formal editing process. As such, the language therein should more accurately represent average speakers than what is found in books. The demographic groups contributing to Twitter are by no means a carefully selected cross-section of society, but do offer natural language use by the roughly 20% of adult English speakers who use Twitter BIBREF8 . When exploring temporal changes in language use, the Ngrams and CELEX datasets evidently cover a much longer period than the decade for which social media is available. As a result, we are unable to infer anything about the temporal dimension of regularization looking at Twitter. In this paper we use the Ngrams and Twitter datasets to establish estimates of the current state of English verb regularization. We structure our paper as follows: In Sec.~ SECREF2 , we describe the datasets we use. In Sec.~ SECREF3 , we present our results. We study verb regularization in English in general in Sec.~ UID5 . We compare verb regularization in American English (AE) and British English (BE) using both Ngrams and geotagged Twitter data in Sec.~ UID7 . In Sec.~ "Description of data sets" , we employ methods to study regional variation in verb usage, leveraging county level user location data in the United States. We also explore correlations between verb regularization and a number of socio-demographic and economic variables. Finally, in Sec.~ SECREF4 , we provide concluding remarks. Description of data sets To be consistent with prior work, we chose the verb list for our project to match that of Michel et al. BIBREF1 . 
When comparing BE with AE, we use the subset of verbs that form the irregular past tense with the suffix -t. When calculating frequencies or token counts for the `past tense' we use both the preterite and past participle of the verb. See #1 for a complete tabulation of all verb forms. The Ngrams data reflects relative frequency, providing, for a verb and a given year, the percentage of corpus tokens that are the given verb, where a token is an individual occurrence of a word. The Google Ngram Online Viewer also has a smoothing parameter, $s$ , which averages the relative frequency for the given year with that of each of the $s$ years before and after the given year, if they exist. For example, Fig.~ UID1 uses a smoothing of 3 years and shows that, averaged across the years 1997--2000 (the value displayed for the year 2000), the word `burned' appeared with relative frequency 0.004321% (roughly once every 23,000 tokens), while `burnt' appeared with relative frequency 0.000954% (roughly once every 105,000 tokens). We downloaded the Ngrams verb data for the most recent 6-year period available (2003--2008) BIBREF9 . Specifically, we chose the 2008 values of relative frequency with a smoothing of 5 years, resulting in an average case insensitive#1 For general English, as suggested by BIBREF6 , we queried the English Fiction 2012 corpus, which uses ``books predominantly in the English language that a library or publisher identified as fiction.'' For AE we used the American English 2012 corpus, which uses ``books predominantly in the English language that were published in the United States.'' For BE we used the British English 2012 corpus, which uses ``books predominantly in the English language that were published in Great Britain'' BIBREF10 . The Twitter messages for our project consist of a random sample of roughly 10% of all tweets posted between 9 September 2008 and 22 October 2017. This `decahose' dataset comprises a total of more than 106 billion messages, sent by about 750 million unique accounts. From this larger set, we performed a case-insensitive search for verb forms of interest, also extracting geographic location when available in the meta-data associated with each tweet. Tweets geotagged by mobile phone GPS with a U.S. location comprise about a 0.27% subset of the decahose dataset; United Kingdom locations comprise about a 0.05% subset. Many individuals provide location information, entered as free text, along with their biographical profile. We matched user specified locations of the form `city, state' to a U.S. county when possible, comprising a 2.26% subset of the decahose dataset. Details on this matching process can be found in #1. For general English, we counted the number of tokens in the decahose dataset for each verb. For AE, we used the tweets whose geotagged coordinates are located in the United States, and for BE we used the tweets whose geotagged coordinates are located in the United Kingdom. For the analysis of verbs by county, we used the tweets with the user entered location information. Table~ UID2 summarizes the datasets used for both Ngrams and Twitter. The demographic data for U.S. counties comes from the 2015 American Community Survey 5-year estimates, tables DP02--Selected Social Characteristics, DP03--Selected Economic Characteristics, DP04--Selected Housing Characteristics, and DP05--Demographic and Housing Estimates, which can be found by searching online at https://factfinder.census.gov/. 
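The Ngram smoothing described above is a centered moving average over the years that exist in the series; a small sketch with made-up relative frequencies:

```python
# Centered moving average: the value reported for a year averages that year's
# relative frequency with the s years before and after it, where available.
def smooth(series, s):
    """series: {year: relative frequency}; returns the smoothed series."""
    out = {}
    for y in series:
        window = [series[t] for t in range(y - s, y + s + 1) if t in series]
        out[y] = sum(window) / len(window)
    return out

burned = {year: 0.004 + 0.00001 * (year - 1995) for year in range(1995, 2009)}
print(smooth(burned, s=5)[2008])   # at the series edge this averages 2003-2008
```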
These tables comprise a total of 513 usable socio-demographic and economic variables. We compute the regularization fraction for a verb as the proportion of instances in which the regular form was used for the past tense of the verb. More specifically, for Ngrams we divide the relative frequency for the regular past tense by the sum of the relative frequencies for the regular and irregular past tenses. Similarly, for Twitter we divide the token count for the regular past tense by the sum of the token counts for both the regular and irregular past tenses. If the resulting regularization fraction is greater than $0.5$ , the regular past tense is more popular and we call the verb regular. Otherwise we call the verb irregular. When calculating an average regularization across all verbs, we first compute the regularization fraction for each verb individually. Then we compute the average of the regularization fractions, with each verb contributing the same weight in the average, irrespective of frequency. We perform this `average of averages' to avoid swamping the contribution of less frequent verbs. Verb regularization using Ngrams and Twitter Using the datasets in row (I) of Table~ UID2 , we begin by comparing Ngrams and Twitter with respect to regularization of English verbs in Fig.~ UID3 , where we find that 21 verbs are more regular in Ngrams, and 85 are more regular on Twitter. A Wilcoxon signed rank test of the data has a $p$ -value of $7.9\times 10^{-6}$ , demonstrating strong evidence that verbs on Twitter are more regular than verbs in Ngrams. What mechanisms could be responsible for the observed increase in regularity on Twitter? One possibility is that authors of fiction published in the 2000s, along with their editors, being professional users of English, have a larger vocabulary than the typical user of Twitter. If so, their commitment to proper English would contribute to the appearance of relatively more irregular verbs in books. The average Twitter user may not know, or choose to use, the `correct' past tense form of particular verbs, and thus use the default regular past tense. Another driver may be that non-native English speakers writing English tweets may be more likely to use the default regular form. We will find quantitative support for this mechanism below. As a preview, we note that Fig.~ UID3 shows that `burn' is predominantly regular on Twitter globally, but we see later (Fig.~ UID4 B) that `burn' is irregular on Twitter for both American English and British English. Thus, it is likely that non-native speakers are contributing to this difference. American and British English We next study how verb regularization varies with geographic region. In this subsection we use the datasets in row (II) of Table~ UID2 for AE and row (III) for BE and the subset of verbs that form the irregular past tense with the suffix -t. In Fig.~ UID4 A, we compare American and British English in Ngrams. The average regularization fraction is 0.49 in AE and $0.42$ in BE. For 17 out of 22 verbs, AE shows more regularization, with a Wilcoxon signed rank test $p$ -value of $9.8\times 10^{-4}$ , giving statistical support that AE verbs are more regular on average in Ngrams than BE verbs. As we show in the inset scatter plot of Fig.~ UID4 A, regularization in AE and BE are also strongly positively correlated with a Spearman correlation coefficient of $0.97$ $(p=2.3\times 10^{-14})$ . Verbs that are more regular in AE are also more regular in BE, just not to the same extent. 
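A minimal sketch of the regularization fraction and the unweighted `average of averages' described above, with clearly synthetic token counts:

```python
# Per-verb regularization fraction and the average across verbs, each verb
# weighted equally regardless of frequency. Counts are synthetic examples.
def regularization_fraction(regular, irregular):
    return regular / (regular + irregular)

counts = {                     # verb: (regular past tokens, irregular past tokens)
    "burn": (400, 600),
    "dream": (900, 100),
    "stand": (5, 995),
}

fractions = {v: regularization_fraction(r, i) for v, (r, i) in counts.items()}
is_regular = {v: f > 0.5 for v, f in fractions.items()}
average_regularization = sum(fractions.values()) / len(fractions)
```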
In Fig.~ UID4 B, we compare regularization in AE and BE on Twitter. For Twitter, the average regularization fraction is $0.54$ for AE, higher than Ngrams, and $0.33$ for BE, much lower than Ngrams. As with Ngrams, 17 verbs out of 22 show more regularization in AE than in BE. The Wilcoxon signed rank test gives a weaker but still significant $p$ -value of $1.9\times 10^{-3}$ . The inset in Fig.~ UID4 B also shows a positive correlation, although not as strong as Ngrams, with a Spearman correlation coefficient of $0.87$ $(p=1.1\times 10^{-7})$ . Generally on Twitter, regular AE verbs are also regular in BE, but the difference in regularization fraction is much greater than for Ngrams. In Fig.~ UID6 A, we demonstrate the difference in regularization between AE and BE for both Ngrams and Twitter. The values in this figure for Ngrams can be thought of as, for each verb in Fig.~ UID4 A, subtracting the value of the bottom bar from the top bar, and likewise for Twitter and Fig.~ UID4 B. Positive numbers imply greater regularization in AE, the more common scenario. When the difference is near zero for one corpus, it is usually close to zero for the other corpus as well. However, when Ngrams shows that AE is notably more regular than BE, Twitter tends to show a much larger difference. The average difference in regularization fraction between AE and BE for Twitter is $0.21$ , whereas it is only $0.08$ for Ngrams. Again, we find that these averages are significantly different with a Wilcoxon signed rank $p$ -value of $1.9\times 10^{-2}$ . The inset scatter plot tells a similar story, with a cluster of points near the origin. As the difference in regularization fraction between regions increases in Ngrams, it also tends to increase in Twitter, with Spearman correlation coefficient $0.65$ and $p$ -value $1.0\times 10^{-3}$ . The steep rise shows that the difference increases faster on Twitter than in Ngrams. Fig.~ UID6 B returns to comparing Ngrams and Twitter, but now between AE and BE. For each verb, the bar chart shows the difference between the regularization fraction for Twitter and Ngrams in both AE and BE, with positive values showing that regularization for Twitter is greater. In this case, the values can be thought of as subtracting the values for the bars in Fig.~ UID4 A from the corresponding bars in Fig.~ UID4 B. As we find for English in general, regularization is greater on Twitter than in Ngrams for AE, with an average difference of $0.04$ . However, for BE, regularization is greater in Ngrams than on Twitter, with an average difference in regularization fraction of $-0.09$ . We summarize our findings in Table~ UID8 . We found again that verbs on Twitter are more regular than in Ngrams for American English, likely for many of the same reasons that verbs on Twitter are more regular than Ngrams in general. However, we find that in British English the opposite is true: Verbs on Twitter are less regular than in Ngrams. In decreasing order by average regularization fraction, we have AE Twitter, then AE Ngrams, then BE Ngrams, and finally BE Twitter. Knowing that the general trend is towards regularization BIBREF1 , BIBREF0 , it seems that regularization is perhaps being led by everyday speakers of American English, with American published work following suit, but with a lag. Then, it may be that British English authors and editors are being influenced by American publications and the language used therein. 
Indeed, some studies have found a general `Americanization' of English across the globe BIBREF11 , BIBREF12 , meaning that the various varieties of English used across the world are becoming more aligned with American English. Finally, it may be that average British users of Twitter are more resistant to the change. Indeed, from the figures in the study by Gonçalves et al., one can see that the `Americanization' of British English is more pronounced in Ngrams than on Twitter BIBREF11 , agreeing with what we have found here. Regularization by US county In Sec.~ UID7 , we demonstrated regional differences in verb regularization by comparing BE and AE. Here, we consider differences on a smaller spatial scale by quantifying regularization by county in the United States using the dataset in row (IV) of Table~ UID2 . We use methods inspired by Grieve et al. to study regional variation in language BIBREF13 . We only include counties that had at least 40 total tokens for the verbs under consideration. We plot the average regularization fraction for each county in the continental U.S. in Fig.~ "Introduction" A, where counties with not enough data are colored black. To control for the skewed distribution of samples associated with county population (see below for more details), we use residuals for this portion of the analysis. After regressing with the $\log _{10}$ of data volume (total number of tokens) for each county, we compute the average regularization fraction residual, which is plotted in Fig.~ "Introduction" B. That is, if we let $d_i$ be the total number of tokens for verbs in tweets from county $i$ ; $\alpha $ and $\beta $ be the slope and intercept parameters computed from regression; and $R_i$ be the average regularization fraction for county $i$ , then we compute the average regularization fraction residual for county $i$ , $r_i^{\text{reg}}$ , as $$r_i^{\text{reg}} = R_i - \left(\beta + \alpha \log _{10} d_i \right).$$ (Eq. 34) Using the average regularization residual at the county level as input, we measure local spatial autocorrelation using the Getis-Ord $Gi^*$ $z$ -score BIBREF14 , $$G_i^* = \frac{ \sum _j w_{ij} r_j^{\text{reg}} - \overline{r}^{\text{reg}}\sum _j w_{ij}}{\sigma \sqrt{\left[n\sum _j w_{ij}^2 - \left( \sum _j w_{ij}\right)^2 \right]/(n-1)}},$$ (Eq. 35) where $$\sigma =\sqrt{ \frac{\sum _j (r_j^{\text{reg}})^2 }{n} - (\overline{r}^{\text{reg}})^2 },$$ (Eq. 36) $\overline{r}^{\text{reg}} = \frac{1}{n}\sum _i r_i^{\text{reg}}$ , $n$ is the number of counties, and $w_{ij}$ is a weight matrix. To obtain the weight matrix used in this calculation, we first create a distance matrix, $s_{ij}$ , where the distance between each pair of counties is the larger of the great circle distance, $s_{ij}^\text{GC}$ , in miles between the centers of the bounding box for each county and 10 miles. That is, $$s_{ij}=\max \left(s_{ij}^\text{GC}, 10\right).$$ (Eq. 37) We make the minimum value for $s_{ij}$ 10 miles to prevent a county from having too large of a weight. We then compute the weight matrix as $$w_{ij}=\frac{1}{\sqrt{s_{ij}}}.$$ (Eq. 38) Fig.~ "Introduction" C shows the results for the lower 48 states, where black represents counties left out because there was not enough data. 
For each county, the $Gi^*$ $z$ -score computes a local weighted sum of the residuals, $r_j^\text{reg}$ , for the surrounding counties and compares that to the expected value of that weighted sum if all the counties had exactly the average residual, $\overline{r}^\text{reg}$ , as their value, where the weighting is such that closer counties have a higher weight. Areas that are darker blue (positive $z$ -score) belong to a cluster of counties that has higher regularization than average, and those that are darker red (negative $z$ -score) belong to a cluster that has lower regularization than average. So, Fig.~ "Introduction" C shows that, in general, western counties show less regularization than average and eastern counties show more, except that the New England area is fairly neutral. As usual, the $z$ -score gives the number of standard deviations away from the mean. For this we would do a two tail test for significance because we are looking for both high value and low value clusters. For example, a $z$ -score greater in magnitude than $1.96$ is significant at the $.05$ level. If we do a Bonferroni correction based on 3161 counties (the number included for this part of the analysis), then a $z$ -score greater in magnitude than $4.32$ is significant for a two tail test at the $.05/3161\approx 1.58 \times 10^{-5}$ level. We do this same process looking at individual verbs as well. However, when looking at individual verbs, we use the regularization fraction rather than residuals, because the data skew is not as problematic. This is because the main problem with data volume comes when averaging across verbs that have different frequencies of usage, as explained below. Also, here we include counties that have at least 10 tokens. Fig.~ "" gives an example map showing the $Gi^*$ $z$ -scores for the verb `dream'. The maps showing local spatial autocorrelation for the complete list of verbs can be found in the Online Appendix A at . For many of the counties in the US, there is a small sample of Twitter data. We restrict our analysis to counties with a total token count of at least 40 for the verbs we consider. Even for the counties meeting this criteria, the volume of data varies, leading to drastically different sample sizes across counties. More common verbs tend to have popular irregular forms (e.g., `found' and `won'), and less common verbs tend to be regular (e.g., `blessed' and `climbed') BIBREF0 . As a result, samples taken from populous counties are more likely to contain less common verbs. Our `average regularization' is an average of averages, resulting in an underlying trend toward higher rates for more populous counties due to the increased presence of rarer regular verbs. Fig.~ UID17 demonstrates the relationship between data volume and regularization. To explore the connection further, we perform a synthetic experiment as follows. To simulate sampling from counties with varying population sizes, we first combine all verb token counts (using the Twitter dataset from row (I) of Table~ UID2 ) into a single collection. We then randomly sample a synthetic county worth of tokens from this collection. For a set of 1000 logarithmically spaced county sizes, we randomly draw five synthetic collections of verbs (each is a blue circle in Fig.~ UID17 ). For each sample, we compute the average regularization fraction, as we did for U.S. counties. The goal is to infer the existences of any spurious trend introduced by the sampling of sparsely observed counties. 
The resulting simulated curve is comparable to the trend observed for actual U.S. counties. As the data volume increases, the simulated version converges on roughly $0.17$ , which is the average regularization fraction for all of Twitter. We also explored correlations between verb regularization and various demographic variables. Fig.~ UID17 showed a strong relationship between data volume and verb regularization. It has been shown elsewhere that tweet density positively correlates with population density BIBREF15 , and population size is correlated with many demographic variables. As a result, we use partial correlations as an attempt to control for the likely confounding effect of data volume. For each demographic variable, we compute the regression line between the $\log _{10}$ of data volume, $d_i$ , and regularization, and compute the residuals as in Eq.~ "Methods and results" . Then, if the demographic variable is an `Estimate' variable, where the unit is number of people, we similarly compute the regression line between the $\log _{10}$ of data volume and the $\log _{10}$ of the demographic variable#1 and compute the residuals, $r_i^{\text{dem}}$ , as $$r_i^{\text{dem}} = \log _{10}(D_i) - \left( \delta + \gamma \log _{10} d_i \right),$$ (Eq. 42) where $D_i$ is the value of the demographic variable for county $i$ , and $\gamma $ and $\delta $ are the slope and intercept parameters calculated during regression. Otherwise, the demographic variable is a `Percent' variable, with units of percentage, and we compute the regression line between the $\log _{10}$ of data volume and the demographic variable, and compute residuals as $$r_i^{\text{dem}} = D_i - \left( \delta + \gamma \log _{10} d_i \right).$$ (Eq. 44) The correlation between residuals $r_i^{\text{reg}}$ and $r_i^{\text{dem}}$ gives the partial correlation between average regularization and the demographic variable. Our findings suggest that data volume is a confounding variable in at least some of the cases because, after controlling for data volume, there is generally a large decrease in the correlation between verb regularization and the demographic variables. The largest in magnitude Pearson correlation between verb regularization and a demographic variable is $0.68$ , for the variable `Estimate; SCHOOL ENROLLMENT - Population 3 years and over enrolled in school', whereas the largest in magnitude partial correlation is only $-0.18$ , for the variable `Percent; OCCUPATION - Civilian employed population 16 years and over - Management, business, science, and arts occupations'. Table~ UID20 lists the 10 demographic variables with largest in magnitude partial correlation. Fig.~ UID18 shows an example for one of the demographic variables, the `Percent' variable with largest simple correlation. Fig.~ UID18 A is the scatter plot of the demographic variable with average regularization, which corresponds to simple correlation. Fig.~ UID18 B is the scatter plot of the residuals, $r_i^{\text{dem}}$ and $r_i^{\text{reg}}$ , after regressing with the $\log _{10}$ of data volume, and corresponds with partial correlation. We can see that there is a strong simple correlation ( $-0.52$ ), but after accounting for data volume that correlation largely vanishes ( $-0.15$ ). Similar plots for all of the demographic variables can be found in the Online Appendix B at . Concluding remarks Our findings suggest that, by and large, verb regularization patterns are similar when computed with Ngrams and Twitter. 
However, for some verbs, the extent of regularization can be quite different. If social media is an indicator of changing patterns in language use, Ngrams data ought to lag with a timescale not yet observable due to the recency of Twitter data. Very reasonably, Ngrams data may not yet be showing some of the regularization that is happening in everyday English. We also found differences in verb regularization between American and British English, but found that this difference is much larger on Twitter than Ngrams. Overall, and in American English specifically, verbs are more regular on Twitter than in Ngrams, but the opposite is true for British English. In the U.S., we also find variation in average verb regularization across counties. Lastly, we showed that there are significant partial correlations between verb regularization and various demographic variables, but they tend to be weak. Our findings do not account for the possible effects of spell checkers. Some people, when tweeting, may be using a spell checker to edit their tweet. If anything, this will likely skew the language on Twitter towards the `correct' form used in edited textual sources. For example, in Fig.~ UID3 we see that `stand' is irregular for both Ngrams and Twitter, and likely most spell checkers would consider the regular `standed' a mistake, but we see that `stand' is still over 100 times more regular on Twitter than in Ngrams. So, the differences between edited language and everyday language may be even larger than what we find here suggests. Future work should look into the effects of spell checkers. Our study explored the idea that edited written language may not fully represent the language spoken by average speakers. However, tweets do not, of course, fully represent the English speaking population. Even amongst users, our sampling is not uniform as it reflects the frequency with which different users tweet #1. Furthermore, the language used on Twitter is not an unbiased sample of language even for people who use it frequently. The way someone spells a word and the way someone pronounces a word may be different, especially, for example, the verbs with an irregular form ending in -t, because -t and -ed are close phonetically. However, the fact that we found differences between the language of Ngrams and the language of Twitter suggests that the true language of everyday people is not fully represented by edited written language. We recommend that future studies should investigate speech data. We are thankful for the helpful reviews and discussions of earlier versions of this work by A. Albright and J. Bagrow, and for help with initial data collection from L. Gray. PSD & CMD were supported by NSF Grant No. IIS-1447634, and TJG, PSD, & CMD were supported by a gift from MassMutual. Table of Verb Forms |c||c||c|c||r| & Regular & 2c||Irregular & Verb & Preterit & Past Participle & Preterit & Past Participle & Token Count (continued) & Regular & 2c||Irregular & Verb & Preterit & Past Participle & Preterit & Past Participle & Token Count 5|r|Continued on next page A tabulation of all verb forms used in this study. The Token Count column gives the sum of all the tokens for the past tense forms of the verb, both regular and irregular, in our Twitter dataset (see row (I) of Table~ UID2 in Sec.~ SECREF2 ). 
abide & abided & abode & abode & 146,566 alight & alighted & alit & alit & 56,306 arise & arised & arose & arisen & 164,134 awake & awaked & awoke & awoken, awoke & 423,359 become & becomed & became & become & 50,664,026 begin & beginned & began & begun & 5,942,572 bend & bended & bent & bent & 4,777,019 beseech & beseeched & besought & besought & 3,390 bleed & bleeded & bled & bled & 252,225 blend & blended & blent & blent & 436,029 bless & blessed & blest & blest & 22,547,387 blow & blowed & blew & blown & 9,155,246 break & breaked & broke & broken & 54,506,810 breed & breeded & bred & bred & 1,040,854 bring & bringed & brought & brought & 15,303,318 build & builded & built & built & 8,521,553 burn & burned & burnt & burnt & 7,457,942 buy & buyed & bought & bought & 24,841,526 catch & catched & caught & caught & 24,891,188 choose & choosed & chose & chosen & 10,290,205 clap & clapped & clapt & clapt & 405,837 climb & climbed & clomb, clom & clomben & 635,122 cling & clinged & clung & clung & 49,742 creep & creeped & crept & crept & 698,405 deal & dealed & dealt & dealt & 1,181,974 dig & digged & dug & dug & 941,656 dream & dreamed & dreamt & dreamt & 2,794,060 drink & drinked & drank & drunk, drank & 37,295,703 drive & drived & drove & driven & 5,745,497 dwell & dwelled & dwelt & dwelt & 25,725 eat & eated & ate & eaten & 25,084,758 fall & falled & fell & fallen & 25,224,815 fight & fighted & fought & fought & 3,625,297 find & finded & found & found & 80,709,195 flee & fleed & fled & fled & 405,295 freeze & freezed & froze & frozen & 7,454,847 get & getted & got & got, gotten & 500,591,203 give & gived & gave & given & 58,697,198 grow & growed & grew & grown & 17,951,273 hang & hanged & hung & hung & 3,991,956 hear & heared & heard & heard & 52,605,822 hide & hided, hidded & hid & hid, hidden & 7,829,276 hold & holded & held & held & 10,080,725 inlay & inlayed & inlaid & inlaid & 44,811 keep & keeped & kept & kept & 11,785,131 kneel & kneeled & knelt & knelt & 83,765 know & knowed & knew & known & 58,175,701 lay & layed & laid & laid & 5,828,898 leap & leaped & leapt & leapt & 91,620 learn & learned & learnt & learnt & 18,134,586 lose & losed & lost & lost & 72,695,892 mean & meaned & meant & meant & 26,814,977 pay & payed & paid & paid & 21,150,031 plead & pleaded & pled & pled & 193,553 ride & rided & rode & ridden & 1,710,109 seek & seeked & sought & sought & 888,822 sell & selled & sold & sold & 14,251,612 send & sended & sent & sent & 26,265,441 shake & shaked & shook & shaken & 3,223,316 shoe & shoed & shod & shod & 47,780 shrink & shrinked & shrank, shrunk & shrunk, shrunken & 296,188 sing & singed & sang, sung & sung & 6,767,707 sink & sinked & sank, sunk & sunk, sunken & 927,419 slay & slayed & slew & slain & 2,153,981 sleep & sleeped & slept & slept & 9,252,446 slide & slided & slid & slid & 530,659 sling & slinged & slung & slung & 38,320 slink & slinked & slunk & slunk & 5,772 smell & smelled & smelt & smelt & 1,089,814 smite & smitted, smited & smote & smitten, smote & 176,768 sneak & sneaked & snuck & snuck & 797,337 speak & speaked & spoke & spoken & 8,502,050 speed & speeded & sped & sped & 216,062 spell & spelled & spelt & spelt & 3,812,137 spend & spended & spent & spent & 17,603,781 spill & spilled & spilt & spilt & 1,627,331 spin & spinned & spun & spun & 342,022 spoil & spoiled & spoilt & spoilt & 3,891,576 spring & springed & sprang, sprung & sprung & 626,400 stand & standed & stood & stood & 3,942,812 steal & stealed & stole & stolen & 11,884,934 sting & stinged & 
stung & stung & 391,053 stink & stinked & stank, stunk & stunk & 1,556,197 stride & strided & strode & stridden & 17,811 strike & striked & struck & struck, stricken & 2,167,165 strip & stripped & stript & stript & 837,967 strive & strived & strove & striven & 33,705 swear & sweared & swore & sworn & 1,902,662 sweep & sweeped & swept & swept & 931,245 swim & swimmed & swam & swum & 356,842 swing & swinged & swung & swung & 295,360 take & taked & took & taken & 83,457,822 teach & teached & taught & taught & 9,379,039 tear & teared & tore & torn & 4,238,865 tell & telled & told & told & 71,562,969 thrive & thrived & throve & thriven & 43,612 throw & throwed & threw & thrown & 13,197,226 tread & treaded & trod & trodden & 56,624 vex & vexed & vext & vext & 139,411 wake & waked & woke & woken & 30,796,918 wear & weared & wore & worn & 8,552,191 weep & weeped & wept & wept & 200,690 win & winned & won & won & 45,276,202 wind & winded & wound & wound & 1,340,267 wring & wringed & wrung & wrung & 29,141 write & writed & wrote & written, writ, wrote & $23,926,025$ Details on User Location Matching To study regularization by county, we extracted location information from the user-provided location information, which was entered as free text in the user's biographical profile. To do this, for each tweet we first checked if the location field was populated with text. If so, we then split the text on commas, and checked whether there were two tokens separated by a comma. If so, we made the assumption that it might be of the form `city, state'. Then we used a python package called uszipcode, which can be found here: pythonhosted.org/uszipcode/. We used the package's method to search by city and state. If the package returned a location match, we used the returned latitude and longitude to determine which county the detected city belonged to. The package allows for fuzzy matching, meaning the city and state do not have to be spelled correctly, and it allows for the state to be fully spelled out or be an abbreviation. In the source code of the package there was a hard coded confidence level of 70 for the fuzzy matching. We modified the source code so that the confidence level was an input to the method, and running tests found we were satisfied with a confidence level of 91. We checked by hand the matches of 1000 tweets that this method returned a match for, 100 from each year in the dataset, and found the only potential error in these matches was when the user typed in `Long Island, NY', or a similar variant. For this, the package returned Long Island City, NY, which is on Long Island, but there are multiple counties on Long Island, so the user may actually live in a different county. None of the other 1000 tweets were inappropriately or ambiguously assigned.
Unanswerable
ec62c4cdbeaafc875c695f2d4415bce285015763
ec62c4cdbeaafc875c695f2d4415bce285015763_0
Q: What state-of-the-art general-purpose pretrained models are made available under the unified API? Text: Introduction In the past 18 months, advances on many Natural Language Processing (NLP) tasks have been dominated by deep learning models and, more specifically, the use of Transfer Learning methods BIBREF0 in which a deep neural network language model is pretrained on a web-scale unlabelled text dataset with a general-purpose training objective before being fine-tuned on various downstream tasks. Following noticeable improvements using Long Short-Term Memory (LSTM) architectures BIBREF1, BIBREF2, a series of works combining Transfer Learning methods with large-scale Transformer architectures BIBREF3 has repeatedly advanced the state-of-the-art on NLP tasks ranging from text classification BIBREF4, language understanding BIBREF5, BIBREF6, BIBREF7, machine translation BIBREF8, and zero-shot language generation BIBREF9 up to co-reference resolution BIBREF10 and commonsense inference BIBREF11. While this approach has shown impressive improvements on benchmarks and evaluation metrics, the exponential increase in the size of the pretraining datasets as well as the model sizes BIBREF5, BIBREF12 has made it both difficult and costly for researchers and practitioners with limited computational resources to benefit from these models. For instance, RoBERTa BIBREF5 was trained on 160 GB of text using 1024 32GB V100 GPUs. On Amazon-Web-Services cloud computing (AWS), such a pretraining would cost approximately 100K USD. Contrary to this trend, the booming research in Machine Learning in general and Natural Language Processing in particular is arguably explained significantly by a strong focus on knowledge sharing and large-scale community efforts resulting in the development of standard libraries, an increased availability of published research code and strong incentives to share state-of-the-art pretrained models. The combination of these factors has led researchers to reproduce previous results more easily, investigate current approaches and test hypotheses without having to redevelop them first, and focus their efforts on formulating and testing new hypotheses. To bring Transfer Learning methods and large-scale pretrained Transformers back into the realm of these best practices, the authors (and the community of contributors) have developed Transformers, a library for state-of-the-art Natural Language Processing with Transfer Learning models. Transformers addresses several key challenges: Introduction ::: Sharing is caring Transformers gathers, in a single place, state-of-the-art architectures for both Natural Language Understanding (NLU) and Natural Language Generation (NLG) with model code and a diversity of pretrained weights. This allows a form of training-computation-cost-sharing so that low-resource users can reuse pretrained models without having to train them from scratch. These models are accessed through a simple and unified API that follows a classic NLP pipeline: setting up configuration, processing data with a tokenizer and encoder, and using a model either for training (adaptation in particular) or inference. The model implementations provided in the library follow the original computation graphs and are tested to ensure they match the original author implementations' performances on various benchmarks.
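As a minimal illustration of that pipeline, a sketch assuming the PyTorch backend and the distributed bert-base-uncased weights:

```python
# Load a pretrained tokenizer and model, encode a sentence, and run inference.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

input_ids = torch.tensor([tokenizer.encode("Transfer learning in NLP.",
                                           add_special_tokens=True)])
with torch.no_grad():
    outputs = model(input_ids)

last_hidden_state = outputs[0]   # (batch, sequence_length, hidden_size)
```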
Introduction ::: Easy-access and high-performance Transformers was designed with two main goals in mind: (i) be as easy and fast to use as possible and (ii) provide state-of-the-art models with performances as close as possible to the originally reported results. To ensure a low entry barrier, the number of user-facing abstractions to learn was strongly limited and reduced to just three standard classes: configuration, models and tokenizers, which all can be initialized in a simple and unified way by using a common `from_pretrained()` instantiation method. Introduction ::: Interpretability and diversity There is a growing field of study, sometimes referred to as BERTology from BERT BIBREF13, concerned with investigating the inner workings of large-scale pretrained models and trying to build a science on top of these empirical results. Some examples include BIBREF14, BIBREF15, BIBREF16. Transformers aims at facilitating and increasing the scope of these studies by (i) giving easy access to the inner representations of these models, notably the hidden states, the attention weights or head importance as defined in BIBREF15 and (ii) providing different models in a unified API to prevent overfitting to a specific architecture (and set of pretrained weights). Moreover, the unified front-end of the library makes it easy to compare the performances of several architectures on a common language understanding benchmark. Transformers notably includes pre-processors and fine-tuning scripts for GLUE BIBREF6, SuperGLUE (BIBREF7) and SQuAD1.1 BIBREF17. Introduction ::: Pushing best practices forward Transformers seeks a balance between sticking to the original authors' code-base for reliability and providing clear and readable implementations featuring best practices in training deep neural networks so that researchers can seamlessly use the code-base to explore new hypotheses derived from these models. To accommodate a large community of practitioners and researchers, the library is deeply compatible with (and actually makes compatible) two major deep learning frameworks: PyTorch BIBREF18 and TensorFlow (from release 2.0) BIBREF19. Introduction ::: From research to production Another essential question is how to make these advances in research available to a wider audience, especially in the industry. Transformers also takes steps towards a smoother transition from research to production. The provided models support TorchScript, a way to create serializable and optimizable models from PyTorch code, and feature production code and integration with the TensorFlow Extended framework. Community The development of Transformers originally stemmed from open-sourcing internal tools used at HuggingFace but has seen a huge growth in scope over its ten months of existence as reflected by the successive changes of name of the library: from pytorch-pretrained-bert to pytorch-transformers to, finally, Transformers. A fast-growing and active community of researchers and practitioners has gathered around Transformers. The library has quickly become used both in research and in industry: at the moment, more than 200 research papers report using the library. Transformers is also included either as a dependency or with a wrapper in several popular NLP frameworks such as Spacy BIBREF20, AllenNLP BIBREF21 or Flair BIBREF22. Transformers is an ongoing effort maintained by the team of engineers and research scientists at HuggingFace, with support from a vibrant community of more than 120 external contributors. 
We are committed to the twin efforts of developing the library and fostering positive interaction among its community members, with the ambition of creating the standard library for modern deep learning NLP. Transformers is released under the Apache 2.0 license and is available through pip or from source on GitHub. Detailed documentation along with on-boarding tutorials are available on HuggingFace's website. Library design Transformers has been designed around a unified frontend for all the models: parameters and configurations, tokenization, and model inference. These steps reflect the recurring questions that arise when building an NLP pipeline: defining the model architecture, processing the text data and finally, training the model and performing inference in production. In the following section, we'll give an overview of the three base components of the library: configuration, model and tokenization classes. All of the components are compatible with PyTorch and TensorFlow (starting 2.0). For complete details, we refer the reader to the documentation available on https://huggingface.co/transformers/. Library design ::: Core components All the models follow the same philosophy of abstraction enabling a unified API in the library. Configuration - A configuration class instance (usually inheriting from a base class `PretrainedConfig`) stores the model and tokenizer parameters (such as the vocabulary size, the hidden dimensions, dropout rate, etc.). This configuration object can be saved and loaded for reproducibility or simply modified for architecture search. The configuration defines the architecture of the model but also architecture optimizations like the heads to prune. Configurations are agnostic to the deep learning framework used. Tokenizers - A Tokenizer class (inheriting from a base class `PreTrainedTokenizer`) is available for each model. This class stores the vocabulary token-to-index map for the corresponding model and handles the encoding and decoding of input sequences according to the model's tokenization-specific process (ex. Byte-Pair-Encoding, SentencePiece, etc.). Tokenizers are easily modifiable to add user-selected tokens, special tokens (like classification or separation tokens) or resize the vocabulary. Furthermore, Tokenizers implement additional useful features for the users, by offering values to be used with a model; these range from token type indices in the case of sequence classification to maximum length sequence truncating taking into account the added model-specific special tokens (most pretrained Transformers models have a maximum sequence length they can handle, defined during their pretraining step). Tokenizers can be instantiated from existing configurations available through Transformers originating from the pretrained models or created more generally by the user from user-specifications. Model - All models follow the same hierarchy of abstraction: a base class implements the model's computation graph from encoding (projection on the embedding matrix) through the series of self-attention layers and up to the last layer hidden states. The base class is specific to each model and closely follows the original implementation, allowing users to dissect the inner workings of each individual architecture. Additional wrapper classes are built on top of the base class, adding a specific head on top of the base model hidden states. Examples of these heads are language modeling or sequence classification heads. 
These classes follow a similar naming pattern: XXXForSequenceClassification or XXXForMaskedLM where XXX is the name of the model and can be used for adaptation (fine-tuning) or pre-training. All models are available both in PyTorch and TensorFlow (starting 2.0) and offer deep inter-operability between both frameworks. For instance, a model trained in one of the frameworks can be saved to disk following the standard serialization practice of that library and then be reloaded from the saved files in the other framework seamlessly, making it particularly easy to switch from one framework to the other along the model's life-time (training, serving, etc.). Auto classes - In many cases, the architecture to use can be automatically guessed from the shortcut name of the pretrained weights (e.g. `bert-base-cased`). A set of Auto classes provides a unified API that enables very fast switching between different models/configs/tokenizers. There are a total of four high-level abstractions referenced as Auto classes: AutoConfig, AutoTokenizer, AutoModel (for PyTorch) and TFAutoModel (for TensorFlow). These classes automatically instantiate the right configuration, tokenizer or model class instance from the name of the pretrained checkpoints. Library design ::: Training Optimizer - The library provides a few optimization utilities as subclasses of PyTorch `torch.optim.Optimizer` which can be used when training the models. The additional optimizer currently available is the Adam optimizer BIBREF23 with a weight decay fix, also known as `AdamW` BIBREF24. Scheduler - Additional learning rate schedulers are also provided as subclasses of PyTorch `torch.optim.lr_scheduler.LambdaLR`, offering various schedules used for transfer learning and transformers models with customizable options, including warmup schedules, which are relevant when training with Adam. Experimenting with Transformers In this section, we present some of the major tools and examples provided in the library to experiment on a range of downstream Natural Language Understanding and Natural Language Generation tasks. Experimenting with Transformers ::: Language understanding benchmarks The language models provided in Transformers are pretrained with a general purpose training objective, usually a variant of language modeling like standard (sometimes called causal) language modeling as used for instance in BIBREF9 or masked language modeling as introduced in BIBREF13. A pretrained model is often evaluated using wide-ranging language understanding benchmarks. Transformers includes several tools and scripts to evaluate models on GLUE (BIBREF6) and SuperGLUE (BIBREF7). These two benchmarks gather a variety of datasets to evaluate natural language understanding systems. Details of the datasets can be found in the Appendix on page SECREF7. Regarding the machine comprehension tasks, the library features evaluation on SQuAD1.1 (BIBREF17) and SQuAD2.0 (BIBREF25). Other currently supported benchmarks include SWAG (BIBREF26), RACE (BIBREF27) and ARC (BIBREF28). Experimenting with Transformers ::: Language model fine-tuning Fine-tuning a language model on a downstream text corpus usually leads to significant gains for tasks on this corpus, in particular when the domain is different (domain adaptation). It also significantly reduces the amount of training data required for fine-tuning on a target task in the target domain. 
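A minimal sketch of how the Auto classes and the training utilities above fit together for such a fine-tuning run is shown below; the checkpoint name, learning rate and step counts are illustrative, and the scheduler helper has gone by slightly different names across releases.

```python
import torch
from transformers import (AutoConfig, AutoTokenizer, AutoModelForSequenceClassification,
                          AdamW, get_linear_schedule_with_warmup)

# Instantiate configuration, tokenizer and model from a single shortcut name.
name = "bert-base-cased"
config = AutoConfig.from_pretrained(name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, config=config)

# AdamW (Adam with the weight decay fix) plus a linear warmup schedule.
optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100,
                                            num_training_steps=1000)

# One illustrative training step on a toy batch.
input_ids = tokenizer.encode("a tiny example", return_tensors="pt")
labels = torch.tensor([1])
loss = model(input_ids, labels=labels)[0]  # first element of the output tuple is the loss
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```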
Transformers provides simple scripts to fine-tune models on custom text datasets with the option to add or remove tokens from the vocabulary and several other adaptability features. Experimenting with Transformers ::: Ecosystem Write with Transformer Because Natural Language Processing does not have to be serious and boring, the generative capacities of auto-regressive language models available in Transformers are showcased in an intuitive and playful manner. Built by the authors on top of Transformers, Write with Transformer is an interactive interface that leverages the generative capabilities of pretrained architectures like GPT, GPT2 and XLNet to suggest text like an auto-completion plugin. Generating samples is also often used to qualitatively (and subjectively) evaluate the generation quality of language models BIBREF9. Given the impact of the decoding algorithm (top-K sampling, beam-search, etc.) on generation quality BIBREF29, Write with Transformer offers various options to dynamically tweak the decoding algorithm and investigate the resulting samples from the model. Conversational AI HuggingFace has been using Transfer Learning with Transformer-based models for end-to-end natural language understanding and text generation in its conversational agent, Talking Dog. The company also demonstrated in fall 2018 that this approach can be used to reach state-of-the-art performances on academic benchmarks, topping by a significant margin the automatic metrics leaderboard of the Conversational Intelligence Challenge 2 held at the Thirty-second Annual Conference on Neural Information Processing Systems (NIPS 2018). The approach used to reach these performances is described in BIBREF30, BIBREF31 and the code and pretrained models, based on the Transformers library, are available online. Using in production To facilitate the transition from research to production, all the models in the library are compatible with TorchScript, an intermediate representation of a PyTorch model that can then be run either in Python in a more efficient way, or in a high-performance environment such as C++. Fine-tuned models can thus be exported to production-friendly environments. Optimizing large machine learning models for production is an ongoing effort in the community, and there are many current engineering efforts towards that goal. The distillation of large models (e.g. DistilBERT BIBREF32) is one of the most promising directions. It lets users of Transformers run more efficient versions of the models, even with strong latency constraints and on inexpensive CPU servers. We also convert Transformers models to Core ML weights that are suitable to be embedded inside a mobile application, to enable on-the-edge machine learning. Code is also made available. Community Many libraries in NLP and Machine Learning have been created on top of Transformers or have integrated Transformers as a package dependency or through wrappers. At the time of writing, the authors are mostly aware of FastBert, FARM, flair BIBREF22, BIBREF33, AllenNLP BIBREF21 and PyText but there are likely more interesting developments to be found, from research and internal projects to production packages. Architectures Here is a list of architectures for which reference implementations and pretrained weights are currently provided in Transformers. These models fall into two main categories: generative models (GPT, GPT-2, Transformer-XL, XLNet, XLM) and models for language understanding (Bert, DistilBert, RoBERTa, XLM). 
BERT (BIBREF13) is a bi-directional Transformer-based encoder pretrained with a linear combination of masked language modeling and next sentence prediction objectives. RoBERTa (BIBREF5) is a replication study of BERT which showed that carefully tuning hyper-parameters and training data size leads to significantly improved results on language understanding. DistilBERT (BIBREF32) is a smaller, faster, cheaper and lighter version of BERT pretrained with knowledge distillation. GPT (BIBREF34) and GPT2 (BIBREF9) are two large auto-regressive language models pretrained with language modeling. GPT2 showcased zero-shot task transfer capabilities on various tasks such as machine translation or reading comprehension. Transformer-XL (BIBREF35) introduces architectural modifications enabling Transformers to learn dependency beyond a fixed length without disrupting temporal coherence via segment-level recurrence and relative positional encoding schemes. XLNet (BIBREF4) builds upon Transformer-XL and proposes an auto-regressive pretraining scheme combining BERT's bi-directional context flow with auto-regressive language modeling by maximizing the expected likelihood over permutations of the word sequence. XLM (BIBREF8) shows the effectiveness of pretrained representations for cross-lingual language modeling (both on monolingual data and parallel data) and cross-lingual language understanding. We systematically release the models with the corresponding pretraining heads (language modeling, next sentence prediction for BERT) for adaptation using the pretraining objectives. Some models fine-tuned on downstream tasks such as SQuAD1.1 are also available. Overall, more than 30 pretrained weights are provided through the library including more than 10 models pretrained in languages other than English. Some of these non-English pretrained models are multi-lingual models (with two of them being trained on more than 100 languages). Related work The design of Transformers was inspired by earlier libraries on transformers and Natural Language Processing. More precisely, organizing the modules around three main components (configuration, tokenizers and models) was inspired by the design of the tensor2tensor library BIBREF36 and the original code repository of Bert BIBREF13 from Google Research, while the concept of providing easy caching for pretrained models stemmed from features of the AllenNLP library BIBREF21 open-sourced by the Allen Institute for Artificial Intelligence (AI2). Works related to the Transformers library can be generally organized along three directions, at the intersection of which stands the present library. The first direction includes Natural Language Processing libraries such as AllenNLP BIBREF21, SpaCy BIBREF20, flair BIBREF22, BIBREF33 or PyText. These libraries precede Transformers and target somewhat different use-cases, for instance those with a particular focus on research for AllenNLP or a strong attention to production constraints (in particular with a carefully tuned balance between speed and performance) for SpaCy. The previously mentioned libraries have now been provided with integrations for Transformers, through a direct package dependency for AllenNLP, flair or PyText or through a wrapper called spacy-transformers for SpaCy. The second direction concerns lower-level deep-learning frameworks like PyTorch BIBREF18 and TensorFlow BIBREF19 which have both been extended with model sharing capabilities or hubs, respectively called TensorFlow Hub and PyTorch Hub. 
These hubs are more general, and while they offer ways to share models, they differ from the present library in several ways. In particular, they provide neither a unified API across models nor standardized ways to access the internals of the models. Targeting a more general machine-learning community, these Hubs lack the NLP-specific user-facing features provided by Transformers like tokenizers, dedicated processing scripts for common downstream tasks and sensible default hyper-parameters for high performance on a range of language understanding and generation tasks. The last direction is related to machine learning research frameworks that are specifically used to test, develop and train architectures like Transformers. Typical examples are the tensor2tensor library BIBREF36, fairseq BIBREF37 and Megatron-LM. These libraries are usually not provided with the user-facing features that allow easy download, caching and fine-tuning of the models, as well as a seamless transition to production. Conclusion We have presented the design and the main components of Transformers, a library for state-of-the-art NLP. Its capabilities, performances and unified API make it easy for both practitioners and researchers to access various large-scale language models, build and experiment on top of them and use them in downstream tasks with state-of-the-art performance. The library has gained significant organic traction since its original release and has become widely adopted among researchers and practitioners, fostering an active community of contributors and an ecosystem of libraries building on top of the provided tools. We are committed to supporting this community and making recent developments in transfer learning for NLP both accessible and easy to use while maintaining high standards of software engineering and machine learning engineering. GLUE and SuperGLUE The datasets in GLUE are: CoLA (BIBREF54), Stanford Sentiment Treebank (SST) (BIBREF53), Microsoft Research Paraphrase Corpus (MRPC) BIBREF44, Semantic Textual Similarity Benchmark (STS) BIBREF38, Quora Question Pairs (QQP) BIBREF46, Multi-Genre NLI (MNLI) BIBREF55, Question NLI (QNLI) BIBREF17, Recognizing Textual Entailment (RTE) BIBREF42, BIBREF39, BIBREF45, BIBREF40 and Winograd NLI (WNLI) BIBREF48. The datasets in SuperGLUE are: Boolean Questions (BoolQ) BIBREF41, CommitmentBank (CB) BIBREF43, Choice of Plausible Alternatives (COPA) BIBREF51, Multi-Sentence Reading Comprehension (MultiRC) BIBREF47, Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) BIBREF56, Word-in-Context (WiC) BIBREF49, Winograd Schema Challenge (WSC) BIBREF52, Diverse Natural Language Inference Collection (DNC) BIBREF50, Recognizing Textual Entailment (RTE) BIBREF42, BIBREF39, BIBREF45, BIBREF40 and Winograd NLI (WNLI) BIBREF48.
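As a brief illustration of how the architectures listed in the Architectures section are reached through the unified API, the sketch below loads a handful of the released checkpoints with the Auto classes; the shortcut names are examples (availability can vary by library version), and running it downloads several gigabytes of weights.

```python
from transformers import AutoTokenizer, AutoModel

# Each architecture is exposed through the same interface; only the
# shortcut name of the pretrained checkpoint changes.
checkpoints = ["bert-base-uncased", "roberta-base", "distilbert-base-uncased",
               "gpt2", "transfo-xl-wt103", "xlnet-base-cased", "xlm-mlm-en-2048"]

for name in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {model.__class__.__name__}, {n_params / 1e6:.0f}M parameters")
```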
BERT, RoBERTa, DistilBERT, GPT, GPT2, Transformer-XL, XLNet, XLM
405964517f372629cda4326d8efadde0206b7751
405964517f372629cda4326d8efadde0206b7751_0
Q: How is performance measured? Text: Introduction For the past 20 years, topic models have been used as a means of dimension reduction on text data, in order to ascertain underlying themes, or `topics', from documents. These probabilistic models have frequently been applied to machine learning problems, such as web spam filtering BIBREF0 , database sorting BIBREF1 and trend detection BIBREF2 . This paper develops a methodology for incorporating topic models into traditional statistical regression frameworks, such as those used in the Social Sciences and Humanities, to make predictions. Statistical regression is a supervised method, however it should be noted the majority of topic models are themselves unsupervised. When using text data for prediction, we are often confronted with the problem of condensing the data into a manageable form, which still retains the necessary information contained in the text. Methods such as using individual words as predictors, or $n$ -grams, while conceptually quite simple, have a tendency to be extremely computationally expensive (with tens of thousands of predictors in a model). Except on extremely large corpora, this inevitably leads to overfitting. As such, methods that allow text to be summarised by a handful of (semantically meaningful) predictors, like topic models, gives a means to use large amounts of text data more effectively within a supervised predictive context. This paper outlines a statistical framework for predictive topic modelling in a regression context. First, we discuss the implementation of a relatively simple (and widely used) topic model, latent Dirichlet allocation (LDA) BIBREF3 , as a preprocessing step in a regression model. We then compare this model to an equivalent topic model that incorporates supervised learning, supervised LDA (sLDA) BIBREF4 . Using topic models in a predictive framework necessitates estimating topic proportions for new documents, however retraining the LDA model to find these is computationally expensive. Hence we derive an efficient likelihood-based method for estimating topic proportions for previously unseen documents, without the need to retrain. Given these two models hold the `bag of words' assumption (i.e., they assume independence between words in a document), we also investigate the effect of introducing language structure to the model through the hidden Markov topic model (HMTM) BIBREF5 . The implementation of these three topic models as a dimension reduction step for a regression model provides a framework for the implementation of further topic models, dependent on the needs of the corpus and response in question. Definitions The following definitions are used when considering topic models. Vocabulary ( $V$ ): a set of $v$ unique elements (generally words) from which our text is composed. Topic ( $\phi $ ): a probability distribution over the vocabulary. That is, for word $i$ in the vocabulary, a probability $p_{i} \in [0,1]$ is assigned of that word appearing, given the topic, with $\sum _{i = 1}^{v} p_{i} = 1$ . In general, there are a fixed number $k$ of topics, $\phi = \left\lbrace \phi _{1},...,\phi _{k}\right\rbrace $ . Document ( $\mathbf {w}$ ): a collection of $n_{j}$ units (or words) from the vocabulary. Depending on the topic model, the order of these words within the document may or may not matter. Corpus ( $\mathbf {D}$ ): a collection of $m$ documents over which the topic model is applied. 
That is, $\mathbf {D} = \left\lbrace \mathbf {w}_{1},...,\mathbf {w}_{m}\right\rbrace $, each with length $n_{j}$, $j = 1,2,...,m$. Topic proportion ($\theta _{j}$): a distribution of topics over the document $j$. A corpus will then have an $m \times k$ matrix $\theta $, where each row $j = 1,2,...,m$ corresponds to the distribution of topics over document $j$. LDA regression model Latent Dirichlet allocation (LDA) BIBREF3, due to its simplicity and effectiveness, continues to be the basis for many topic models today. When considering topic regression, we take LDA as our `baseline' model; i.e., we measure all subsequent models against the performance of the LDA regression model. LDA is an unsupervised process that assumes both topics and topic proportions are drawn from Dirichlet distributions. One reason for its simplicity is that it makes the `bag of words' assumption. LDA assumes the following generative process when generating documents (the LDA generative process).
For each topic $l = 1,2,...,k$: generate the topic $\phi _{l} \sim \textrm {Dir}(\beta )$.
For each document $j = 1,2,...,m$: let $n_{j} \sim \textrm {Poisson}(\xi )$ be the length of document $j$; choose the topic proportions $\theta _{j} \sim \textrm {Dir}(\alpha )$; for each word position $i = 1,2,...,n_{j}$, choose the topic assignment $z_{ji} \sim \textrm {Multi}(\theta _{j})$ and choose a word $w_{ji} \sim \textrm {Multi}(\phi _{z_{ji}})$; create the document $\mathbf {w}_{j} = \left\lbrace w_{j1},...,w_{jn_{j}}\right\rbrace $.
Here, $\alpha $ (length $k$) and $\beta $ (length $v$) are hyperparameters of the distributions of the $\theta _{j}$ and $\phi _{l}$ respectively. When topic modelling, we are generally interested in inferring topic proportions $\theta = \left\lbrace \theta _{1},...,\theta _{m}\right\rbrace $ and topics $\phi $ themselves, given the corpus $\mathbf {D}$. That is, we wish to find $ P\left(\theta ,\phi | \mathbf {D}, \alpha , \beta \right) = \frac{P\left(\theta ,\phi , \mathbf {D} | \alpha , \beta \right)}{P\left( \mathbf {D} | \alpha , \beta \right)}. $ The denominator, $P\left( \mathbf {D} | \alpha , \beta \right)$, the probability of the corpus, is generally intractable to compute. For the purposes of this paper, we use collapsed Gibbs sampling as outlined in BIBREF6, as an approximate method for finding the LDA model given the corpus. Regression model and number of topics Given an LDA model on a corpus with some corresponding response variable, we use the topic proportions generated as predictors in a regression model. More specifically, we use the topic proportions $\theta $ as the predictors, as the amount of a document belonging to each topic may be indicative of its response. When applying LDA as a preprocessing step to a regression model, we must also bear in mind the number of topics $k$ we choose for the LDA model. While this number is assumed to be fixed in advance, there are various measures for determining the number that best `fits' the corpus, such as perplexity BIBREF3 and the log likelihood measure outlined in BIBREF6. However, given we are inferring this topic model with a specific purpose in mind, it would be prudent to include this information in the decision making process. For that reason, we choose the `best' number of topics $k$ to be the number that reduces the cross validation prediction error (CVPE) BIBREF7 of the corresponding LDA regression model, found from $K$-fold cross validation of the model on the corpus. 
The CVPE is here defined to be $ \textrm {CVPE}_{K} = \sum \limits _{i = 1}^{K} \frac{m_{i}}{m} \textrm {MSE}_{i}, $ where $K$ is the number of folds, $m_{i}$ is the number of documents in the $i$ th fold, and $m$ the total number of documents in the corpus. The mean square error for the $i$ th fold, denoted by $\textrm {MSE}_{i}$, is defined as $ \textrm {MSE}_{i} = \sum \limits _{j \in C_{i}} \frac{1}{m_{i}} \left( y_{j} - \hat{y}_{j} \right)^{2}, $ where $\hat{y}_{j}$ is the model estimate of response $y_{j}$ for all documents in the set $C_{i}$, the $i$ th fold. It follows that the better a model performs, the smaller the MSE and thus the CVPE. While we choose the best number of topics based on the information in the regression model, it should be noted that LDA is still unsupervised, and that the topics have not been generated with the response in mind. Introducing new documents When it comes to prediction, we generally have a corpus for which we find our regression model, and use this model to predict the response of new documents that are not in the original corpus. Because our regression model requires us to know $\theta _{j}$, the topic proportion, for any new document $j$, we have two options. Either the topic model can be retrained with the new document added to the corpus, and the regression model retrained with the new topics on the old documents, or the topic proportions can be found based on the existing topic model. For both efficiency's sake (i.e., to avoid retraining the model for every prediction), and for the sake of true prediction, the second option is preferable. Particularly in cross validation, it is necessary to have completely distinct training and test sets of data. In retraining a topic model with new documents, we do not have a clear distinction between the two sets. BIBREF3 outline a procedure for estimating the topic proportions of a held-out document; however, this procedure follows a posterior approach that requires variationally inferring the posterior parameters, which are then used to approximate the expected number of words belonging to each topic, as an estimate for $\theta _{j}$. We propose here a likelihood-based approach to the estimation of topic proportions of new documents, by treating the problem as a case of maximum likelihood estimation. That is, we want to find $\hat{\theta }_{j}$, the estimate of $\theta _{j}$ that maximises the likelihood of document $j$ occurring, given our existing topic model. Therefore, we aim to maximise $ L(\theta _{j}) = f(\mathbf {w}_{j} | \theta _{j}) = f(w_{j1},...,w_{jn_{j}} | \theta _{j}), $ where $w_{j1},...,w_{jn_{j}}$ are the words in document $j$. As LDA is a `bag of words' model, we are able to express this as $ L(\theta _{j}) = \prod \limits _{i = 1}^{n_{j}} f(w_{ji} | \theta _{j}). $ The law of total probability gives $ L(\theta _{j}) = \prod \limits _{i = 1}^{n_{j}} \sum \limits _{l = 1}^{k} f(w_{ji} | z_{ji} = l, \theta _{j}) f(z_{ji} = l | \theta _{j}), $ where $z_{ji}$ is the topic assignment for the $i$ th word in document $j$. However, as the choice of word $w_{ji}$ is independent of the topic proportions $\theta _{j}$ given its topic assignment $z_{ji}$, we can write $ L(\theta _{j}) = \prod \limits _{i = 1}^{n_{j}} \sum \limits _{l = 1}^{k} f(w_{ji} | z_{ji} = l) f(z_{ji} = l | \theta _{j}). $ The likelihood is now expressed as the product of the topic proportions and the topics themselves. 
$ L(\theta _{j}) &=& \prod \limits _{i = 1}^{n_{j}} \sum \limits _{l = 1}^{k} \phi _{l,w_{ji}} \theta _{jl}. $ If we express the document as a set of word counts $\mathbf {N} = \lbrace N_{1},...,N_{v}\rbrace $ , where $N_{i}$ is the number of times the $i$ th word of the vocabulary appears in document $j$ , then we can write the log likelihood of $\theta _{j}$ as $ l(\theta _{j}) = \mathbf {N} \cdot \log \left(\theta _{j} \phi \right). $ In order to deal with words that appear in a new document, and not the original corpus, we assign a probability of 0 to any such word of appearing in any of the $k$ topics; this is equivalent to removing those words from the document. To demonstrate the effectiveness of this method for estimation, we generate documents for which we know the topics and topic proportions. Suppose there exists a corpus comprising of two topics, with a vocabulary of 500 words. Given an assumed LDA model, we generate 500 documents with lengths between 5,000 and 10,000 words. Given our newly generated documents, and known topics $\phi $ , we are able to test the validity of the MLE process outlined above by finding the estimates $\hat{\theta }_{j}$ for each document $j$ and comparing them to known topic proportions $\theta _{j}$ . Figure 1 shows the results of the MLE method for finding topic proportion estimates for documents with certain true values of $\theta _{j}$ . From these figures, there is a tight clustering around the true value $\theta _{j}$ , and thus it is reasonable to assume that the MLE process for estimating the topic proportions of a new document given previously existing topics is sound. This process also holds for greater numbers of topics, as evidenced in Figure 2 , which estimates topic proportions for a three-topic document. Like with the LDA regression model, we require a method for estimating the topic proportion $\theta _{j}$ of any new documents from which we are predicting a response, that does not involve retraining the entire model. To do so, we rely on techniques used for HMMs; specifically, we use a modified Baum-Welch algorithm. The Baum-Welch algorithm is used as an approximate method to find an HMM $\Omega = \lbrace \Theta , \phi , \pi \rbrace $ , given some observed sequence (in this case, a document). However, the key difference here is that our emission probabilities (or topics) $\phi $ are common across all documents in our corpus, and thus when introducing any new documents for prediction we assume that we already know them. Given the Baum-Welch algorithm calculates forward and backward probabilities based on an assumed model, and updates estimates iteratively, we may simply take our assumed $\phi $ found from the initial HMTM as the truth and refrain from updating the emission probabilities. We are generally dealing with very small probabilities in topic modelling - $\phi $ generally has tens of thousands of columns (the length of the vocabulary) over which probabilities must sum to one. While in theory this does not change how we would approach parameter estimation, computationally these probabilities are frequently recognised as zero. To make the process more numerically stable, we implement the adapted Baum-Welch algorithm demonstrated and justified in BIBREF11 . While we are ultimately interested in finding topic proportions $\theta _{j}$ for prediction, the Baum-Welch algorithm finds the transition matrix $\Theta _{j}$ for some document. 
We are able to deal with this in the same way as finding the original HMTM regression model, by taking $\theta _{j}$ to be the equilibrium probabilities of $\Theta _{j}$ . sLDA regression model LDA is an unsupervised process, which does not take into account the response variable we are predicting when inferring topics. Several supervised methods have been developed to incorporate this knowledge, generally for the purpose of finding `better' topics for the corpus in question. Notably, supervised LDA (sLDA) BIBREF4 builds on the LDA model by assuming that some response $y_{j}$ is generated alongside each document $j = 1,2,...,m$ in the corpus, based on the topics prevalent in the document. When inferring the sLDA model, we are therefore inclined to find topics that best suit the response and therefore the prediction problem at hand. Unlike LDA, we treat the topics $\phi $ as unknown constants rather than random variables. That is, we are interested in maximising $ P\left( \theta , \mathbf {z} | \mathbf {D}, \mathbf {y}, \phi , \alpha , \eta , \sigma ^{2} \right), $ where $\eta $ and $\sigma ^{2}$ are parameters of the normally distributed response variable, $y_{j} \sim N(\eta ^{T} \bar{z}_{j}, \sigma ^{2})$ , where $\bar{z}_{j} = (1/n_{j}) \sum _{i = 1}^{n_{j}} z_{ji}$ . As with LDA, this probability is computationally intractable, and thus we require an approximation method for model inference. For the purposes of this paper, we use a variational expectation-maximisation (EM) algorithm, as outlined in BIBREF4 . When it comes to choosing the model with the most appropriate number of topics for the regression problem at hand, we use the same method as outlined for the LDA regression model in Section "Regression model and number of topics" . The method behind sLDA is specifically developed for prediction. As such, we are able to compute the expected response $y_{j}$ from the document $\mathbf {w}_{j}$ and the model $\lbrace \alpha , \phi , \eta , \sigma ^{2}\rbrace $ . For a generalised linear model (as we use in this paper), this is approximated by $ E\left[ Y_{j} | \mathbf {w}_{j}, \alpha , \phi ,\eta , \sigma ^{2} \right] \approx E_{q} \left[\mu \left(\eta ^{T} \bar{\mathbf {z}}_{j} \right)\right], $ where $\mu \left(\eta ^{T} \bar{\mathbf {z}}_{j} \right) = E\left[Y_{j} | \zeta = \eta ^{T} \bar{\mathbf {z}}_{j} \right]$ and $\zeta $ is the natural parameter of the distribution from which the response is taken. Again, further detail on this method is found in BIBREF4 . HMTM regression model Topic modelling is designed as a method of dimension reduction, and as such we often deal with large corpora that cannot otherwise be analysed computationally. Given the complexity of human language, we therefore have to choose what information about our corpus is used to develop the topic model. The previous two models, LDA and sLDA, have relied on the `bag of words' assumption in order to maintain computational efficiency. While for some corpora, the loss of all information relating to language and document structure may not have a particularly large effect on the predictive capability of the topic model, this may not hold for all prediction problems. One simple way of introducing structure into the model is through a hidden Markov model (HMM) structure BIBREF8 , BIBREF9 ; in fact, there already exist multiple topic models which do so. 
We look here at the hidden Markov topic model (HMTM) BIBREF5, which assumes that the topic assignment of a word in a document is dependent on the topic assignment of the word before it. That is, the topic assignments function as the latent states of the HMM, with words in the document being the observations. The HMTM assumes the following generative process for documents in a corpus (the HMTM generative process).
For each topic $l = 1,2,...,k$: generate the topic $\phi _{l} \sim \textrm {Dir}(\beta )$.
For each document $j = 1,2,...,m$: generate the starting probabilities $\pi _{j} \sim \textrm {Dir}(\alpha )$; for each topic $l = 1,2,...,k$, generate the $l$ th row of the transition matrix $\Theta _{j}$, $\Theta _{jl} \sim \textrm {Dir}(\gamma _{l})$; choose the topic assignment for the first word $z_{j1} \sim \textrm {Multi}(\pi _{j})$ and select a word from the vocabulary $w_{j1} \sim \textrm {Multi}(\phi _{z_{j1}})$; for each subsequent word $i = 2,...,n_{j}$, choose the topic assignment $z_{ji}$ based on the transition matrix, $z_{ji} \sim \textrm {Multi}(\Theta _{j z_{j,i-1}})$, and select a word from the vocabulary $w_{ji} \sim \textrm {Multi}(\phi _{z_{ji}})$; create the document $\mathbf {w}_{j} = \left\lbrace w_{j1},...,w_{jn_{j}}\right\rbrace $.
Here, $\alpha $, $\beta $ and $\gamma = \left\lbrace \gamma _{1},...,\gamma _{k} \right\rbrace $ are Dirichlet priors of the starting probabilities, topics and transition probabilities respectively. When it comes to prediction, we are able to use the transition matrices for each document $\Theta _{j}$ as predictors, but to keep consistency with the previous models we take the equilibrium distributions of the matrices as the topic proportions $\theta _{j}$. That is, we find $\theta _{j}$ such that $ \theta _{j} \Theta _{j} = \theta _{j}, \quad \textrm {and} \quad \theta _{j} \mathbf {e} = 1. $ This also fits with the concept of topic models as a form of dimension reduction, allowing $k-1$ variables, as opposed to $k(k-1)$ when using the transition matrix $\Theta _{j}$. As models are often fit using hundreds of topics BIBREF10, BIBREF6, this makes models faster to compute. We choose the number of topics $k$ here with the same method outlined in Section "Regression model and number of topics". Testing the topic regression models To demonstrate the use of topic models in a regression framework, we apply them to a problem involving online advertisements. Specifically, we have a corpus containing 4,151 advertisements taken from the trading website, Gumtree, pertaining to the sale of cats in Australia, and hand-labelled by an expert. Of these advertisements, 2,187 correspond to relinquished cats and 1,964 to non-relinquished. We train a model to predict `relinquished status' from the text of an advertisement, using a topic regression model. A cat is considered to be relinquished if it is being given up by its owner after a period of time, as opposed to cats that are sold, either by breeders or former owners. In order to improve efficiency and model quality, we first clean our text data. Details on the cleaning steps can be found in Appendix "Text cleaning". Word count model Before investigating regression models that use topic proportions as predictors, it is worth developing a `gold standard' model, i.e., a model whose predictive capability we aim to match with our topic regression models. 
Because the problem here involves a relatively small corpus (advertisements with a median word count of 35), we are able to compare our topic regression models to a model that uses individual words as its predictors. In a much larger corpus, this kind of prediction would be cumbersome to compute - hence our reliance on topic models and other dimension reduction techniques. Because we are predicting a categorical, binary variable, we use logistic regression. Rather than using all words in the corpus (as this would drastically overfit the model), we use a step-up algorithm based on the Akaike information criterion (AIC) BIBREF12 to choose the most significant words for the model, without overfitting. Instead of applying the step-up process to the entire vocabulary (of exactly 13,000 words), we apply it to the 214 most common words (i.e., words that appear in at least 2.5% of the documents in the corpus). The chosen model uses 97 predictors, with coefficients appearing consistent with what you would expect from the problem: for example, the word kitten is indicative of non-relinquished advertisements, while cat is the opposite, which is expected as younger cats are less likely to be relinquished. To assess the predictive capability of this and other models, we require some method by which we can compare the models. For that purpose, we use receiver operating characteristic (ROC) curves as a visual representation of predictive effectiveness. ROC curves compare the true positive rate (TPR) and false positive rate (FPR) of a model's predictions at different threshold levels. The area under the curve (AUC) (between 0 and 1) is a numerical measure, where the higher the AUC is, the better the model performs. We cross-validate our model by first randomly splitting the corpus into a training set (95% of the corpus) and test set (5% of the corpus). We then fit the model to the training set, and use it to predict the response of the documents in the test set. We repeat this process 100 times. The threshold-averaged ROC curve BIBREF13 is found from these predictions, and shown in Figure 3 . Table 1 shows the AUC for each model considered. As with the Gumtree dataset, we first construct a word count model against which we can measure the performance of our topic regression models. Once again, this can be done because we are working with a small corpus; otherwise, we would generally consider this approach to be computationally too heavy. As we have a categorical, non-binary response variable (storyline) with 10 levels, we use a multinomial logistic regression model. We again use a step-up process with AIC as the measure to determine which words in our vocabulary to use as predictors in our model. As our vocabulary consists of only 1,607 unique words, we consider all of them in our step-up process. After applying this process, the model with three predictors, minister, night and around, is chosen. We are no longer able to easily apply ROC curves as a measure of performance to this problem, as we are dealing with a non-binary response. We instead use a Brier score BIBREF14 , a measure for comparing the predictive performance of models with categorical responses. 
The Brier score is $ \textrm {BS} = \frac{1}{m} \sum \limits _{j=1}^{m} \sum \limits _{i=1}^{s} \left( \hat{y}_{ji} - o_{ji} \right)^{2}, $ where $\hat{y}_{ji}$ is the probability of document $j$ belonging to storyline $i$ , and $o_{ji} = 1$ if document $j$ belongs to storyline $i$ , and 0 otherwise, for document $j = 1,2,...,m$ and storyline $i = 1,2,...,s$ . Each term in the sum goes to zero the closer the model gets to perfect prediction, and as such our aim is to minimise the Brier score in choosing a model. For each document in the corpus, we find the probabilities of each outcome by using the remaining 78 documents (or training dataset) as the corpus in a multinomial logistic regression model with the same three predictors as found above. Due to the fact that the training dataset here is smaller than the Gumtree dataset, we perform leave-one-out cross validation on each document in the corpus (rather than using a 95/5 split). We then predict the outcome based on the words found in the left-out document (or test dataset), and repeat for all 79 scenes. However, due to the short length of some scenes, and the fact that unique words must be thrown out, we restrict the testing to 57 of the 79 scenes: the remaining scenes do not generate a numerically stable approximation for $\theta _{j}$ for the HMTM regression model. The Brier score calculated using this method for the step-up word count model is $0.8255$ . Topic regression models Using the method outlined in Section "Regression model and number of topics" , we choose the LDA regression model with 26 topics as the `best' for this problem. Inspection of the top words included in these 26 topics shows individual topics associated with different breeds (e.g., `persian', `manx') as well as urgency of selling (e.g., `urgent', `asap'), suggesting that the model is not overfit to the data. We generate a threshold-averaged ROC curve using the same cross validation method as earlier, yielding an area under the curve (AUC) of $0.8913$ . The curve can be seen in Figure 3 . While not as high as the AUC for the word count model, the LDA regression model is significantly more efficient, taking only $3\%$ of the time to calculate. We can compare this result to that of an sLDA regression model. The model chosen for this problem has two topics, giving a threshold-averaged ROC curve under cross validation with an AUC of $0.8588$ . It is surprising that the LDA regression model should outperform sLDA, as sLDA incorporates the response variable when finding the most appropriate topics. However, this can be attributed to the number of topics in the model: the sLDA regression model with 26 topics outperforms the LDA model, with an AUC of $0.9030$ . The word count model still outperforms the sLDA model, however once again the topic regression model is significantly more efficient, taking only $0.6\%$ of the time to calculate. Further details on the models and their calculation can be found in Appendix "Topic model inference" . For the LDA regression model for this problem, we determine the `best' number of topics $k$ to be 16. As with the word count model, we use the Brier score to evaluate the performance of this model compared to others in the chapter. We again use the leave-one-out cross validation approach to predict the probabilities of a scene belonging to each storyline. The Brier score found for the LDA regression model is $1.6351$ . 
While this is higher and therefore worse than the Brier score for the word count model above, this is not unexpected and we are more interested in seeing how the LDA model fares against other topic models. We compare these results to the HMTM regression model, as outlined in Section "HMTM regression model" . We choose the model with 12 topics, according to the CVPE. The Brier score calculated from 57 scenes for the HMTM regression model is $1.5749$ . While still not up to the standard of the word count model at $0.8255$ , this appears to be a slight improvement on the LDA model, meaning that dropping the `bag of words' assumption may in fact improve the predictive performance of the model. However, it should be kept in mind that the LDA model is better at handling short documents. It would be worth applying these models to corpora with longer documents in future, to see how they compare. Further details on the computation of these models can be found in Appendix "Topic model inference" . One of the motivating ideas behind having topic dependencies between consecutive words, as in the HMTM model, is that some documents will have a predisposition to stay in the same topic for a long sequence, such as a sentence or a paragraph. This argument particularly applies to narrative-driven corpora such as the Love Actually corpus. To that end, we may adapt the HMTM described above so that the model favours long sequences of the same topic, by adjusting the Dirichlet priors of the transition probabilities, $\gamma = \lbrace \gamma _{1},...,\gamma _{k} \rbrace $ , to favour on-diagonal elements. By specifying these priors to be $ \gamma _{ls} = {\left\lbrace \begin{array}{ll} 0.99 + 0.01/k \quad \text{if} \quad l = s\\ 0.01/k \quad \text{elsewhere}, \end{array}\right.} $ for $l = 1,2,...,k$ , we choose the persistent HMTM regression model with three topics. This results in a Brier score of $0.9124$ , which is a massive improvement on the original HMTM regression model and makes it very competitive with the word count model. Table 2 summarises these results. Incorporating language structure When evaluating the usefulness of incorporating document structure into a topic model for regression, we require a corpus and problem that we would expect would be heavily influenced by this structure. To understand the predictive capability of the HMTM regression model over that of the more simplistic LDA, we therefore consider predicting the storylines of the 2003 film Love Actually, known for its interwoven yet still quite distinct storylines. We therefore ask if we are able to predict to which storyline a scene belongs, based on the dialogue in that scene. The film consists of 79 scenes, each pertaining to one of 10 storylines. The scenes were hand-classified by storyline, and their dialogue forms the documents of our corpus. We once again clean our data; more detail can be found in Appendix "Text cleaning" . Discussion and further research This paper outlines and implements a streamlined, statistical framework for prediction using topic models as a data processing step in a regression model. In doing so, we investigate how various topic model features affect how well the topic regression model makes predictions. While this methodology has been applied to three specific topic models, the use of any particular topic model depends heavily on the kind of corpus and problem at hand. 
For that reason, it may be worth applying this methodology to incorporate different topic models in future, depending on the needs of the problem at hand. In particular, we investigate here the influence of both supervised methods, and the incorporation of document structure. A logical next step would be to propose a model that incorporates these two qualities, in order to see if this improves predictive capability on corpora with necessary language structure. Text cleaning The following steps were taken to clean the Gumtree corpus: removal of punctuation and numbers, conversion to lower case, removal of stop words (i.e., common words such as the and for that contribute little lexically), and removal of grammatical information from words (i.e., stemming). When stemming words in this paper, we use the stemming algorithm developed by Porter for the Snowball stemmer project BIBREF15 . Similarly, when removing stop words, we use the (English language) list compiled, again, in the Snowball stemmer project. In cleaning the Love Actually corpus, we perform the first three steps outlined here. However, unlike with the Gumtree dataset, we do not stem words, as grammatical information is more pertinent when incorporating language structure. Topic model inference For each topic model, we choose the best number of topics from models generated with between two and 40 topics. For the LDA models found in this paper, we use the LDA function from the R package topicmodels, with the following parameters: $\tt {burnin} = 1000$ , $\tt {iterations} = 1000$ , and $\tt {keep} = 50$ . The sLDA model in this paper was found using the $\tt {slda.em}$ function from the R package lda, with the following parameters: $\tt {alpha} = 1.0$ , $\tt {eta} = 0.1$ , $\tt {variance} = 0.25$ , $\tt {num.e.iterations} = 10$ , and $\tt {num.m.iterations} = 4$ . We use the Python code from BIBREF5 for the generation of our HMTM.
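To make the likelihood-based estimation of topic proportions for unseen documents (Section "Introducing new documents") concrete, the sketch below maximises $l(\theta _{j}) = \mathbf {N} \cdot \log \left(\theta _{j} \phi \right)$ using the standard EM fixed point for a mixture of multinomials. It is an illustrative implementation under assumed array shapes, not the authors' code.

```python
import numpy as np

def estimate_theta(counts, phi, n_iter=200, tol=1e-8):
    """Maximum-likelihood topic proportions for a new document under fixed topics.

    counts : (v,) array of word counts N for the new document (words that do not
             appear in the training vocabulary are assumed to have been dropped).
    phi    : (k, v) array of topic-word probabilities from the fitted topic model.

    Each EM step increases the log likelihood l(theta) = N . log(theta @ phi).
    """
    k = phi.shape[0]
    theta = np.full(k, 1.0 / k)               # start from the uniform distribution
    for _ in range(n_iter):
        mix = theta @ phi                      # (v,) word probabilities under the mixture
        mix = np.where(mix > 0, mix, 1e-300)   # guard against division by zero
        resp = (theta[:, None] * phi) / mix    # (k, v) topic responsibilities per word type
        new_theta = resp @ counts
        new_theta /= new_theta.sum()
        if np.max(np.abs(new_theta - theta)) < tol:
            return new_theta
        theta = new_theta
    return theta
```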
they use ROC curves and cross-validation
ae95a7d286cb7a0d5bc1a8283ecbf803e9305951
ae95a7d286cb7a0d5bc1a8283ecbf803e9305951_0
Q: What models are included in the toolkit? Text: Introduction Being one of the prominent natural language generation tasks, neural abstractive text summarization (NATS) has gained a lot of popularity BIBREF0 , BIBREF1 , BIBREF2 . Different from extractive text summarization BIBREF3 , BIBREF4 , BIBREF5 , NATS relies on modern deep learning models, particularly sequence-to-sequence (Seq2Seq) models, to generate words from a vocabulary based on the representations/features of source documents BIBREF0 , BIBREF6 , so that it has the ability to generate high-quality summaries that are verbally innovative and can also easily incorporate external knowledge BIBREF1 . Many NATS models have achieved better performance in terms of the commonly used evaluation measures (such as ROUGE BIBREF7 score) compared to extractive text summarization approaches BIBREF2 , BIBREF8 , BIBREF9 . We recently provided a comprehensive survey of the Seq2Seq models BIBREF10 , including their network structures, parameter inference methods, and decoding/generation approaches, for the task of abstractive text summarization. A variety of NATS models share many common properties and some of the key techniques are widely used to produce well-formed and human-readable summaries that are inferred from source articles, such as encoder-decoder framework BIBREF11 , word embeddings BIBREF12 , attention mechanism BIBREF13 , pointing mechanism BIBREF14 and beam-search algorithm BIBREF0 . Many of these features have also found applications in other language generation tasks, such as machine translation BIBREF13 and dialog systems BIBREF15 . In addition, other techniques that can also be shared across different tasks include training strategies BIBREF16 , BIBREF17 , BIBREF18 , data pre-processing, results post-processing and model evaluation. Therefore, having an open-source toolbox that modularizes different network components and unifies the learning framework for each training strategy can benefit researchers in language generation from various aspects, including efficiently implementing new models and generalizing existing models to different tasks. In the past few years, different toolkits have been developed to achieve this goal. Some of them were designed specifically for a single task, such as ParlAI BIBREF19 for dialog research, and some have been further extended to other tasks. For example, OpenNMT BIBREF20 and XNMT BIBREF21 are primarily for neural machine translation (NMT), but have been applied to other areas. The bottom-up attention model BIBREF9 , which has achieved state-of-the-art performance for abstractive text summarization, is implemented in OpenNMT. There are also several other general purpose language generation packages, such as Texar BIBREF22 . Compared with these toolkits, LeafNATS is specifically designed for NATS research, but can also be adapted to other tasks. In this toolkit, we implement an end-to-end training framework that can minimize the effort in writing codes for training/evaluation procedures, so that users can focus on building models and pipelines. This framework also makes it easier for the users to transfer pre-trained parameters of user-specified modules to newly built models. In addition to the learning framework, we have also developed a web application, which is driven by databases, web services and NATS models, to show a demo of deploying a new NATS idea to a real-life application using LeafNATS. 
Such an application can help front-end users (e.g., blog/news authors and editors) by providing suggestions for headlines and summaries for their articles. The rest of this paper is organized as follows: Section SECREF2 introduces the structure and design of the LeafNATS learning framework. In Section SECREF3, we describe the architecture of the live system demo. Based on the requirements of the system, we propose and implement a new model using LeafNATS for headline and summary generation. We conclude this paper in Section SECREF4. LeafNATS Toolkit (https://github.com/tshi04/LeafNATS) In this section, we introduce the structure and design of the LeafNATS toolkit, which is built upon the lower-level deep learning platform PyTorch BIBREF23. As shown in Fig. FIGREF2, it consists of four main components, i.e., engines, modules, data and tools, and playground. Engines: In LeafNATS, an engine represents a training algorithm. For example, end-to-end training BIBREF1 and adversarial training BIBREF16 are two different training frameworks. Therefore, we need to develop two different engines for them. Specifically for LeafNATS, we implement a task-independent end-to-end training engine for NATS, but it can also be adapted to other NLP tasks, such as NMT, question-answering, sentiment classification, etc. The engine uses abstract data, models, pipelines, and loss functions to build procedures of training, validation, testing/evaluation and application, respectively, so that they can be completely reused when implementing a new model. For example, these procedures include saving/loading check-point files during training, selecting N-best models during validation, and using the best model for generation during testing. Another feature of this engine is that it allows users to specify part of a neural network to train and reuse parameters from other models, which is convenient for transfer learning. Modules: Modules are the basic building blocks of different models. In LeafNATS, we provide ready-to-use modules for constructing recurrent neural network (RNN)-based sequence-to-sequence (Seq2Seq) models for NATS, e.g., the pointer-generator network BIBREF1. These modules include an embedder, an RNN encoder, attention BIBREF24, temporal attention BIBREF6, attention on the decoder BIBREF2 and others. We also use these basic modules to assemble a pointer-generator decoder module and the corresponding beam search algorithms. The embedder can also be used to realize the embedding-weights sharing mechanism BIBREF2. Data and Tools: Different models in LeafNATS are tested on three datasets (see Table TABREF5), namely, CNN/Daily Mail (CNN/DM) BIBREF25, Newsroom BIBREF26 and Bytecup. The pre-processed CNN/DM data is available online. Here, we provide tools to pre-process the last two datasets. Data modules are used to prepare the input data for mini-batch optimization. Playground: With the engine and modules, we can develop different models by just assembling these modules and building pipelines in the playground. We re-implement the different models from the NATS toolkit BIBREF10 in this framework. The performance (ROUGE scores BIBREF7) of the pointer-generator model on different datasets has been reported in Table TABREF6, where we find that most of the results are better than our previous implementations BIBREF10 due to some minor changes to the neural network. 
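To give a flavour of what such a pointer-generator decoder module computes, the sketch below implements one decoding step (attention over encoder states, a generation/copy switch, and the mixture over an extended vocabulary) in plain PyTorch. It is a generic illustration of the mechanism, not the actual LeafNATS module interface; all class, argument and shape conventions here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerGeneratorStep(nn.Module):
    """One illustrative decoding step of a pointer-generator head."""
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, 1)        # additive-style attention score
        self.vocab_proj = nn.Linear(hidden_size * 2, vocab_size)
        self.p_gen = nn.Linear(hidden_size * 2, 1)       # switch between generating and copying

    def forward(self, dec_state, enc_states, src_ext_ids, ext_vocab_size):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        # src_ext_ids: (batch, src_len) source token ids in the extended vocabulary
        batch, src_len, _ = enc_states.size()
        dec_exp = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        scores = self.attn(torch.cat([enc_states, dec_exp], dim=-1)).squeeze(-1)
        attn = F.softmax(scores, dim=-1)                 # copy distribution over source tokens
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)

        features = torch.cat([dec_state, context], dim=-1)
        p_vocab = F.softmax(self.vocab_proj(features), dim=-1)
        p_gen = torch.sigmoid(self.p_gen(features))      # (batch, 1)

        # Mix the generation and copy distributions over the extended vocabulary.
        p_final = torch.zeros(batch, ext_vocab_size)
        p_final[:, :p_vocab.size(1)] = p_gen * p_vocab
        p_final = p_final.scatter_add(1, src_ext_ids, (1 - p_gen) * attn)
        return p_final, attn
```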
A Live System Demonstration (http://dmkdt3.cs.vt.edu/leafNATS) In this section, we present a real-world web application of the abstractive text summarization models, which can help front-end users write headlines and summaries for their articles/posts. We will first discuss the architecture of the system, and then provide more details of the front-end design and a new model built with LeafNATS that makes automatic summarization and headline generation possible. Architecture This is a news/blog website, which allows people to read, duplicate, edit, post, delete and comment on articles. It is driven by web services, databases and our NATS models. The web application is developed with PHP, HTML/CSS, and jQuery following the Model-View-Controller concept (see Fig. FIGREF9 ). In this framework, when people interact with the front-end views, they send HTTP requests to controllers that can manipulate models. Then, the views are updated with the new information. For example, in NATS, we first write an article in a text-area. Then, this article, along with the summarization request, is sent to the controller via a jQuery Ajax call. The controller communicates with our NATS models asynchronously via JSON-formatted data. Finally, the generated headlines and summaries are shown in the view. Design of Frontend Fig. FIGREF13 presents the front-end design of our web application for creating a new post, where the labels represent the sequence of actions. In this website, an author can first click on “New Post” (step 1) to bring up a new post view. Then, he/she can write the content of an article in the corresponding text-area (step 2) without specifying its headline and highlights, i.e., summary. By clicking the “NATS” button (step 3) and waiting for a few seconds, he/she will see the generated headlines and highlights for the article in a new tab on the right-hand side of the screen. Here, each of the gray buttons denotes the source of the training data. For example, “Bytecup” means the model is trained on the Bytecup headline generation dataset. The tokenized article content is shown at the bottom. Apart from plain-text headlines and highlights, our system also enables users to get a visual understanding of how each word is generated via attention weights BIBREF24 . When placing the mouse pointer (step 4) on any token in the headlines or highlights, the related content in the article is highlighted in red. If the author would like to use one of the suggestions, he/she can click on the gray button (step 5) to add it to the text-area on the left-hand side and edit it. Finally, he/she can click “Post” (step 6) to post the article. The Proposed Model As shown in Fig. FIGREF11 , our system can suggest to the users two headlines (based on the Newsroom headline and Bytecup datasets) and two summaries (based on the Newsroom summary and CNN/DM datasets). They are treated as four tasks in this section. To achieve this goal, we use the modules provided in the LeafNATS toolkit to assemble a new model (see Fig. FIGREF13 ), which has a shared embedding layer, a shared encoder layer, a task-specific encoder-decoder (Bi-LSTM encoder and pointer-generator decoder) layer and a shared output layer. To train this model, we first build a multi-task learning pipeline for the Newsroom dataset to learn the parameters of the modules colored in orange in Fig. FIGREF13 , because (1) articles in this dataset have both headlines and highlights, (2) the size of the dataset is large, and (3) the articles come from a variety of news agencies.
Then, we build a transfer learning pipeline for the CNN/Daily Mail and Bytecup datasets, and learn the parameters of the modules labeled in blue and green, respectively. With LeafNATS, we can accomplish this work efficiently. The performance of the proposed model on the corresponding testing sets is shown in Table TABREF14 . From the table, we observe that our model performs better on the headline generation tasks. However, the ROUGE scores on the summarization tasks are lower than those of the models that do not share the embedding, encoder and output layers. It should be noted that, by sharing parameters, this model requires fewer than 20 million parameters to achieve this performance. Conclusion In this paper, we have introduced the LeafNATS toolkit for building, training, testing/evaluating, and deploying NATS models, as well as a live news blogging system to demonstrate how NATS models can make the work of writing headlines and summaries for news articles more efficient. An extensive set of experiments on different benchmark datasets has demonstrated the effectiveness of our implementations. The newly proposed model for this system has achieved competitive results with fewer parameters. Acknowledgments This work was supported in part by the US National Science Foundation grants IIS-1619028, IIS-1707498 and IIS-1838730.
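As an illustration of the parameter-sharing idea behind the proposed model (a shared embedding and encoder layer, task-specific branches, and frozen shared weights during transfer), here is a hypothetical PyTorch sketch; the layer sizes, task names and class names are invented and the actual implementation differs.

# Illustrative sketch (not the actual LeafNATS code) of sharing an embedder and
# encoder across several generation tasks while keeping task-specific branches.
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedder = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)

    def forward(self, ids):
        out, _ = self.encoder(self.embedder(ids))
        return out

class MultiTaskSketch(nn.Module):
    def __init__(self, backbone, vocab_size, tasks, hidden_dim=256):
        super().__init__()
        self.backbone = backbone                        # shared layers
        self.task_enc = nn.ModuleDict({                 # task-specific encoders
            t: nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True,
                       bidirectional=True) for t in tasks})
        self.out = nn.Linear(2 * hidden_dim, vocab_size)   # shared output layer

    def forward(self, ids, task):
        shared = self.backbone(ids)
        task_out, _ = self.task_enc[task](shared)
        return self.out(task_out)

backbone = SharedBackbone(vocab_size=5000)
model = MultiTaskSketch(backbone, 5000,
                        tasks=["newsroom_title", "bytecup", "cnndm"])
# transfer learning: freeze the shared backbone, train only the new branch
for p in model.backbone.parameters():
    p.requires_grad = False
print(model(torch.randint(0, 5000, (2, 20)), task="bytecup").shape)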
recurrent neural network (RNN)-based sequence-to-sequence (Seq2Seq) models for NATS
0be0c8106df5fde4b544af766ec3d4a3d7a6c8a2
0be0c8106df5fde4b544af766ec3d4a3d7a6c8a2_0
Q: Is there any human evaluation involved in evaluating this famework? Text: Introduction In this work, we aim to develop an automatic Language-Based Image Editing (LBIE) system. Given a source image, which can be a sketch, a grayscale image or a natural image, the system will automatically generate a target image by editing the source image following natural language instructions provided by users. Such a system has a wide range of applications from Computer-Aided Design (CAD) to Virtual Reality (VR). As illustrated in Figure 1 , a fashion designer presents a sketch of a pair of new shoes (i.e., the source image) to a customer, who can provide modifications on the style and color in verbal description, which can then be taken by the LBIE system to change the original design. The final output (i.e., the target image) is the revised and enriched design that meets the customer’s requirement. Figure 2 showcases the use of LBIE for VR. While most VR systems still use button-controlled or touchscreen interface, LBIE provides a natural user interface for future VR systems, where users can easily modify the virtual environment via natural language instructions. LBIE covers a broad range of tasks in image generation: shape, color, size, texture, position, etc. This paper focuses on two basic sub-tasks: language-based segmentation and colorization for shapes and colors. As shown in Figure 3 , given a grayscale image and the expression “The flower has red petals with yellow stigmas in the middle”, the segmentation model will identify regions of the image as “petals”, “stigmas”, and the colorization model will paint each pixel with the suggested color. In this intertwined task of segmentation and colorization, the distribution of target images can be multi-modal in the sense that each pixel will have a definitive ground truth on segmentation, but not necessarily on color. For example, the pixels on petals in Figure 3 should be red based on the textual description, but the specific numeric values of the red color in the RGB space is not uniquely specified. The system is required to colorize the petals based on real-world knowledge. Another uncertainty lies in the fact that the input description might not cover every detail of the image. The regions that are not described, such as the leaves in the given example, need to be rendered based on common sense knowledge. In summary, we aim to generate images that not only are consistent with the natural language expressions, but also align with common sense. Language-based image segmentation has been studied previously in BIBREF2 . However, our task is far more challenging because the textual description often contains multiple sentences (as in Figure 2 ), while in BIBREF2 most of the expressions are simple phrases. To the best of our knowledge, language-based colorization has not been studied systematically before. In most previous work, images are generated either solely based on natural language expressions BIBREF3 , BIBREF4 or based on another image BIBREF0 , BIBREF5 , BIBREF6 . Instead, we want to generate a target image based on both the natural language expression and the source image. Related tasks will be discussed in detail in Section "Related Work" . A unique challenge in language-based image editing is the complexity of natural language expressions and their correlation with the source images. As shown in Figure 2 , the description usually consists of multiple sentences, each referring to multiple objects in the source image. 
When humans edit the source image based on a textual description, they often keep in mind which sentences are related to which region/object in the image, and go back to the description multiple times while editing that region. This behavior of “going back” often varies from region to region, depending on the complexity of the description for that region. An investigation of this problem is carried out on CoSaL, which is a synthetic dataset described in Section "Experiments" . Our goal is to design a generic framework for the two sub-tasks in language-based image editing. A diagram of our model is shown in Figure 4 . Inspired by the aforementioned observation, we introduce a recurrent attentive fusion module in our framework. The fusion module takes as input the image features, which encode the source image via a convolutional neural network, and the textual features, which encode the natural language expression via an LSTM, and outputs the fused features to be upsampled by a deconvolutional network into the target image. In the fusion module, recurrent attentive models are employed to extract distinct textual features based on the spatial features from different regions of an image. A termination gate is introduced for each region to control the number of steps for which it interacts with the textual features. The Gumbel-Softmax reparametrization trick BIBREF7 is used for end-to-end training of the entire network. Details of the models and the training process are described in Section "The Framework" . Our contributions are summarized as follows: Related Work While the task of language-based image editing has not been studied previously, the community has taken significant steps in several related areas, including Language Based object detection and Segmentation (LBS) BIBREF2 , BIBREF8 , Image-to-Image Translation (IIT) BIBREF0 , Generating Images from Text (GIT) BIBREF9 , BIBREF4 , Image Captioning (IC) BIBREF10 , BIBREF11 , BIBREF12 , Visual Question Answering (VQA) BIBREF13 , BIBREF14 , Machine Reading Comprehension (MRC) BIBREF15 , etc. We summarize the types of inputs and outputs for these related tasks in Table 1 . Recurrent attentive models Recurrent attentive models have been applied to visual question answering (VQA) to fuse language and image features BIBREF14 . The stacked attention network proposed in BIBREF14 identifies the image regions that are relevant to the question via multiple attention layers, which can progressively filter out noise and pinpoint the regions relevant to the answer. In image generation, a sequential variational auto-encoder framework, such as DRAW BIBREF16 , has shown substantial improvement over standard variational auto-encoders (VAE) BIBREF17 . Similar ideas have also been explored for machine reading comprehension, where models can take multiple iterations to infer an answer based on the given query and document BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . In BIBREF23 and BIBREF24 , a novel neural network architecture called ReasoNet is proposed for reading comprehension. ReasoNet performs multi-step inference where the number of steps is determined by a termination gate according to the difficulty of the problem. ReasoNet is trained using policy gradient methods. Segmentation from language expressions The task of language-based image segmentation was first proposed in BIBREF2 . Given an image and a natural language description, the system will identify the regions of the image that correspond to the visual entities described in the text.
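Before returning to prior work on segmentation from language expressions, the following is a hedged re-sketch of the recurrent attentive fusion idea described above: every spatial region repeatedly attends to the sentence features, and a termination gate relaxed with Gumbel-Softmax decides when that region stops reading. The original experiments were implemented in TensorFlow; this PyTorch version and its names are illustrative only.

# Hedged re-sketch of recurrent attentive fusion with a Gumbel-Softmax
# termination gate; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAttentiveFusion(nn.Module):
    def __init__(self, img_dim, txt_dim, hid_dim, max_steps=3, tau=1.0):
        super().__init__()
        self.init = nn.Linear(img_dim, hid_dim)
        self.q = nn.Linear(hid_dim, txt_dim, bias=False)   # query from region state
        self.gru = nn.GRUCell(txt_dim, hid_dim)
        self.term = nn.Linear(hid_dim, 2)                  # continue / stop logits
        self.max_steps, self.tau = max_steps, tau

    def forward(self, img_feat, txt_feat):
        # img_feat: B x R x img_dim (R spatial regions), txt_feat: B x N x txt_dim
        B, R, _ = img_feat.shape
        h = torch.tanh(self.init(img_feat)).reshape(B * R, -1)
        txt = txt_feat.unsqueeze(1).expand(-1, R, -1, -1).reshape(
            B * R, *txt_feat.shape[1:])
        alive = torch.ones(B * R, 1, device=img_feat.device)
        for _ in range(self.max_steps):
            scores = torch.bmm(txt, self.q(h).unsqueeze(2))       # BR x N x 1
            ctx = (torch.softmax(scores, 1) * txt).sum(1)         # BR x txt_dim
            h_new = self.gru(ctx, h)
            h = alive * h_new + (1 - alive) * h                   # frozen once stopped
            gate = F.gumbel_softmax(self.term(h), tau=self.tau, hard=True)
            alive = alive * gate[:, :1]                           # column 0 = continue
        return h.reshape(B, R, -1)                                # fused region features

fusion = RecurrentAttentiveFusion(img_dim=16, txt_dim=32, hid_dim=32)
fused = fusion(torch.randn(2, 9, 16), torch.randn(2, 5, 32))
print(fused.shape)   # torch.Size([2, 9, 32])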
The authors in BIBREF2 proposed an end-to-end approach that uses three neural networks: a convolutional network to encode source images, an LSTM network to encode natural language descriptions, and a fully convolutional classification and upsampling network for pixel-wise segmentation. One of the key differences between their approach and ours is the way of integrating image and text features. In BIBREF2 , for each region in the image, the extracted spatial features are concatenated with the same textual features. Inspired by the alignment model of BIBREF10 , in our approach, each spatial feature is aligned with different textual features based on attention models. Our approach yields superior segmentation results than that of BIBREF2 on a benchmark dataset. Conditional GANs in image generation Generative adversarial networks (GANs) BIBREF25 have been widely used for image generation. Conditional GANs BIBREF26 are often employed when there are constraints that a generated image needs to satisfy. For example, deep convolutional conditional GANs BIBREF27 have been used to synthesize images based on textual descriptions BIBREF3 BIBREF4 . BIBREF0 proposed the use of conditional GANs for image-to-image translation. Different from these tasks, LBIE takes both image and text as input, presenting an additional challenge of fusing the features of the source image and the textual description. Experiments We conducted three experiments to validate the performance of the proposed framework. A new synthetic dataset CoSaL (Colorizing Shapes with Artificial Language) was introduced to test the capability of understanding multi-sentence descriptions and associating the inferred textual features with visual features. Our framework also yielded state-of-the-art performance on the benchmark dataset ReferIt BIBREF29 for image segmentation. A third experiment was carried out on the Oxford-102 Flowers dataset BIBREF1 , for the language-based colorization task. All experiments were coded in TensorFlow. Codes for reproducing the key results are available online. Experiments on CoSaL Each image in the CoSaL dataset consists of nine shapes, paired with a textual description of the image. The task is defined as: given a black-white image and its corresponding description, colorize the nine shapes following the textual description. Figure 5 shows an example. It requires sophisticated coreference resolution, multi-step inference and logical reasoning to accomplish the task. The dataset was created as follows: first, we divide a white-background image into $3\times 3$ regions. Each region contains a shape randomly sampled from a set of $S$ shapes (e.g., squares, fat rectangles, tall rectangles, circles, fat ellipses, tall ellipses, diamonds, etc.) Each shape is then filled with one of $C$ color choices, chosen at random. The position and the size of each shape are generated by uniform random variables. As illustrated in Figure 5 , the difficulty of this task increases with the number of color choices. In our experiments, we specify $C=3$ . The descriptive sentences for each image can be divided into two categories: direct descriptions and relational descriptions. The former prescribes the color of a certain shape (e.g., Diamond is red), and the latter depicts one shape conditional of another (e.g., The shape left to Diamond is blue). To understand direct descriptions, the model needs to associate a specified shape with its textual features. 
Relational description adds another degree of difficulty, which requires advanced inference capability of relational/multi-step reasoning. The ratio of direct descriptions to relational descriptions varies among different images, and all the colors and shapes in each image are uniquely determined by the description. In our experiment, we randomly generated $50,000$ images with corresponding descriptions for training purpose, and $10,000$ images with descriptions for testing. For this task, we use average IoU over nine shapes and the background as the evaluation metric. Specifically, for each region, we compute the intersection-over-union (IoU), which is the ratio of the total intersection area to the total union area of predicted colors and ground truth colors. We also compute the IoU for the background (white) of each image. The IoU for 10 classes (9 shapes $+$ 1 background) are computed over the entire test set and then averaged. A six-layer convolutional network is implemented as the image feature extractor. Each layer has a $3\times 3$ kernel with stride 1 and output dimension $4,4,8,8,16,16$ . ReLU is used for nonlinearity after each layer, and a max-pooling layer with a kernel of size 2 is inserted after every two layers. Each sentence in the textual description is encoded with bidirectional LSTMs that share parameters. Another LSTM with attention is put on top of the encoded sentences. The LSTMs have 16 units. In the fusion network, the attention model has 16 units, the GRU cells use 16 units, and the termination gate uses a linear map on top of the hidden state of each GRU cell. Two convolutional layers of kernel size $1\times 1$ with the output dimension of $16,7$ are put on top of the fused features as a classifier. Then an upsampling layer is implemented on top of it, with a single-layer deconvolutional network of kernel size 16, stride 8 to upsample the classifier to the original resolution. The upsampling layer is initialized with bilinear transforms. The maximum of termination steps $T$ vary from 1 to 4. When $T=1$ , the model is reduced to simply concatenating features extracted from the convolutional network with the last vector from LSTM. Results in Table 2 show that the model with attention and $T=4$ achieves a better performance when there are more relational descriptions in the dataset. When there are more direct descriptions, the two models achieve similar performance. This demonstrates the framework's capability of interpreting multiple-sentence descriptions and associating them with their source image. Figure 5 illustrates how the model with $T=3$ interprets the nine sentences during each inference step. In each step, we take the sentence with the largest attention score as the one being attended to. Sentences in red are attended to in the first step. Those in yellow and green are attended to in the next two consecutive steps. We observe that the model tends to first extract information from direct descriptions, and then extract information from relational descriptions via reasoning. Experiments on ReferIt The ReferIt dataset is composed of $19,894$ photographs of real world scenes, along with $130,525$ natural language descriptions on $96,654$ distinct objects in those photographs BIBREF29 . The dataset contains 238 different object categories, including animals, people, buildings, objects and background elements (e.g., grass, sky). Both training and development datasets include $10,000$ images. 
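For reference, the class-averaged IoU used for CoSaL above can be computed with a few lines of NumPy: per-class intersections and unions are accumulated over the whole test set and then averaged over the 10 classes. The toy inputs below are random label maps, used only to make the snippet runnable.

# Minimal sketch of the class-averaged IoU metric described above.
import numpy as np

def mean_iou(preds, golds, num_classes):
    """preds, golds: iterables of integer label maps of identical shape."""
    inter = np.zeros(num_classes, dtype=np.int64)
    union = np.zeros(num_classes, dtype=np.int64)
    for p, g in zip(preds, golds):
        for c in range(num_classes):
            pc, gc = (p == c), (g == c)
            inter[c] += np.logical_and(pc, gc).sum()
            union[c] += np.logical_or(pc, gc).sum()
    ious = inter / np.maximum(union, 1)      # avoid division by zero
    return ious.mean(), ious

pred = np.random.randint(0, 10, (2, 48, 48))   # toy predictions: 10 classes
gold = np.random.randint(0, 10, (2, 48, 48))
mean, per_class = mean_iou(pred, gold, num_classes=10)
print(round(float(mean), 3))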
Following BIBREF2 , we use two metrics for evaluation: 1) overall intersection-over-union (overall IoU) of the predicted and ground truth of each region, averaged over the entire test set; 2) precision@threshold, the percentage of test data whose (per image) IoU between prediction and ground truth is above the threshold. Thresholds are set to $0.5,0.6,0.7,0.8,0.9$ . A VGG-16 model BIBREF31 is used as the image encoder for images of size $512\times 512$ . Textual descriptions are encoded with an LSTM of $1,024$ units. In the fusion network, the attention model uses 512 units and the GRU cells $1,024$ units, on top of which is a classifier and an upsampling layer similar to the implementation in Section 4.1. The maximum number of inference steps is 3. ReLU is used on top of each convolutional layer. $L2$ -normalization is applied to the parameters of the network. Table 3 shows the experimental results of our model and the previous methods on the ReferIt dataset. We see that our framework yields a better IoU and precision than BIBREF2 . We attribute the superior performance to the unique attention mechanism used by our fusion network. It efficiently associates individual descriptive sentences with different regions of the source image. There is not much discrepancy between the two models with $T=1$ and $T=3$ , probably due to the fact that most textual descriptions in this dataset are simple. Experiments on Oxford-102 Flower Dataset The Oxford-102 Flowers dataset BIBREF1 contains $8,189$ images from 102 flower categories. Each image has five textual descriptions BIBREF3 . Following BIBREF3 , BIBREF9 and BIBREF32 , we split the dataset into 82 classes for training and 20 classes for testing. The task is defined as follows: Given a grayscale image of a flower and a description of the shapes and colors of the flower, colorize the image according to the description. A 15-layer convolutional network similar to BIBREF6 is used for encoding $256\times 256$ images. Textual descriptions are encoded with an bidirectional LSTM of 512 units. In the fusion network, the attention model uses 128 units and the GRU cells 128 units. The image encoder is composed of 2 deconvolutional layers, each followed by 2 convolutional layers, to upsample the fusion feature map to the target image space of $256\times 256\times 2$ . The maximum length of the spatial RNN is 1. The discriminator is composed of 5 layers of convolutional networks of stride 2, with the output dimension $256,128,64,32,31$ . The discriminator score is the average of the final output. ReLU is used for nonlinearity following each convolutional layer, except for the last one which uses the sigmoid function. Due to the lack of available models for the task, we compare our framework with a previous model developed for image-to-image translation as baseline, which colorizes images without text descriptions. We carried out two human evaluations using Mechanical Turk to compare the performance of our model and the baseline. For each experiment, we randomly sampled 1,000 images from the test set and then turned these images into black and white. For each image, we generated a pair of two images using our model and the baseline, respectively. Our model took into account the caption in generation while the baseline did not. Then we randomly permuted the 2,000 generated images. 
In the first experiment, we presented to human annotators the 2,000 images, together with their original captions, and asked humans to rate the consistency between the generated images and the captions in a scale of 0 and 1, with 0 indicating no consistency and 1 indicating consistency. In the second experiment, we presented to human annotators the same 2,000 images without captions, but asked human annotators to rate the quality of each image without providing its original caption. The quality was rated in a scale of 0 and 1, with 0 indicating low quality and 1 indicating high quality. The results of comparison are shown in Table 4 . Our model achieves better consistency with captions and also better image quality by making use of information in captions. The colorization results on 10 randomly-sampled images from the test set are shown in Figure 6 . As we can see, without text input, the baseline approach often colorizes images with the same color (in this dataset, most images are painted with purple, red or white), while our framework can generate flowers similar to their original colors which are specified in texts. Figure 7 provides some example images generated with arbitrary text description using our model. Conclusion and Future Work In this paper we introduce the problem of Language-Based Image Editing (LBIE), and propose a generic modeling framework for two sub-tasks of LBIE: language-based image segmentation and colorization. At the heart of the proposed framework is a fusion module that uses recurrent attentive models to dynamically decide, for each region of an image, whether to continue the text-to-image fusion process. Our models have demonstrated superior empirical results on three datasets: the ReferIt dataset for image segmentation, the Oxford-102 Flower dataset for colorization, and the synthetic CoSaL dataset for evaluating the end-to-end performance of the LBIE system. In future, we will extend the framework to other image editing subtasks and build a dialogue-based image editing system that allows users to edit images interactively.
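As a side note, the human evaluation described above reduces to averaging binary judgments per system and per criterion; the toy sketch below illustrates the bookkeeping with made-up ratings.

# Toy aggregation of the two binary human judgments (caption consistency and
# image quality) per system; the ratings below are invented for illustration.
from collections import defaultdict

ratings = [                       # (system, criterion, 0/1 judgment)
    ("ours", "consistency", 1), ("baseline", "consistency", 0),
    ("ours", "quality", 1), ("baseline", "quality", 1),
]
totals, counts = defaultdict(int), defaultdict(int)
for system, criterion, score in ratings:
    totals[(system, criterion)] += score
    counts[(system, criterion)] += 1
for key in sorted(totals):
    print(key, totals[key] / counts[key])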
Yes
959490ba72bd02f742db1e7b19525d4b6c419772
959490ba72bd02f742db1e7b19525d4b6c419772_0
Q: How big is multilingual dataset? Text: Introduction The MULTEXT-East project, (Multilingual Text Tools and Corpora for Central and Eastern European Languages) ran from ’95 to ’97 and developed standardised language resources for six Central and Eastern European languages, as well as for English, the “hub” language of the project BIBREF0. The project was a spin-off of the MULTEXT project BIBREF1, which pursued similar goals for six Western European languages. The main results of the project were morphosyntactic specifications, defining the tagsets for lexical and corpus annotations in a common format, lexical resources and annotated multilingual corpora. In addition to delivering resources, a focus of the project was also the adoption and promotion of encoding standardization. On the one hand, the morphosyntactic annotations and lexicons were developed in the formalism used in MULTEXT, itself based on the specifications of the Expert Advisory Group on Language Engineering Standards, EAGLES BIBREF2. On the other, the corpus resources were encoded in SGML, using CES, the Corpus Encoding Standard BIBREF3, a derivative of the Text Encoding Initiative Guidelines, version P3, BIBREF4. After the completion of the EU MULTEXT-East project a number of further projects have helped to keep the MULTEXT-East resources up to date as regards encoding and enabled the addition of new languages. The latest release of the resources is Version 4 BIBREF5, BIBREF6, which covers 16 languages. The main improvements to Version 3 were the addition of resources for five new languages, updating of four, and the recoding of the morphosyntactic specifications from to XML: the specifications and the corpora are now uniformly encoded to a schema based on the latest version of the Text Encoding Initiative Guidelines, TEI P5 BIBREF7. The resources are freely available for research and include uniformly encoded basic language resources for a large number of languages. These mostly include languages for which resources are scarcer than those for English and the languages of Western Europe. Best covered are the Slavic languages, which are well known for their complex morphosyntax and MULTEXT-East is the first dataset that enables a qualitative and quantitative comparison between them on this level of description. The MULTEXT-East resources have helped to advance the state-of-the-art in language technologies in a number of areas, e.g., part-of-speech tagging BIBREF8, BIBREF9, learning of lemmatisation rules BIBREF10, BIBREF11, word alignment BIBREF12, BIBREF13, and word sense disambiguation BIBREF14, BIBREF15. They have served as the basis on which to develop further language resources, e.g. the WordNets of the BalkaNet project BIBREF16 and the JOS linguistically tagged corpus of Slovene BIBREF17. The morphosyntactic specifications have become a de-facto standard for several of the languages, esp. Romanian, Slovene and Croatian, where large monolingual reference corpora are using the MULTEXT-East tagsets in their annotation. The resources have also provided a model to which some languages still lacking publicly available basic language engineering resources (tagsets, lexicons, annotated corpora) can link to, taking a well-trodden path; in this manner resources for several new languages have been added to the Version 4 resources. Table TABREF8 summarises the language resources of MULTEXT-East Version 4 by language (similar languages are grouped together and the ordering roughly West to East), and by resource type. 
The resources marked by X are present in Version 4, while the ones marked with O have been already produced and will be released in the next version. The meaning of the columns is the following: MSD specs: morphosyntactic specifications, defining the features and tagsets of morphosyntactic descriptions (MSDs) of the languages; MSD lexicon: morphosyntactic lexicons; 1984 MSD: MSD and lemma annotated parallel corpus, consisting of the novel “1984” by G. Orwell, approx. 100,000 tokens per language; 1984 alignments: sentence alignments over the “1984” corpus; 1984 corpus: a variant of the parallel corpus, extensively annotated with structural information (e.g., paragraph, verse, quoted speech, note, etc.), named-entity information (name, number), and basic linguistic information (foreign, sentence); Comparable corpus: multilingual corpus comprising comparable monolingual structurally annotated texts of fiction (100,000 tokens per language) and newspaper articles (also 100,000 tokens per language); Speech corpus: parallel speech corpus, 200 sentences per language, spoken + text. We discuss only the resources given in bold in the table, giving the “morphosyntactic triad” of MULTEXT-East, consisting of the specifications, the lexicon and annotated corpus. These resources have had the most impact and are also the most interesting from the point of view of encoding and content. The structurally annotated parallel and comparable corpora and the speech corpus have been retained from the original MULTEXT-East project but are too small, esp. from today's perspective, to be really useful. The rest of this chapter is structured as follows: Section SECREF2 introduces the TEI encoding as used in the MULTEXT-East resources, Section SECREF3 details the morphosyntactic specifications and lexicons, Section SECREF4 the linguistically annotated parallel “1984” corpus, Section SECREF5 discusses related work, and Section SECREF6 gives conclusions and directions for further work. Resource encoding The MULTEXT-East resources, including the morphosyntactic specification, corpora, alignments, as well as supporting documentation, are all encoded to a common schema following the Text Encoding Initiative Guidelines, TEI P5 BIBREF7. The first version of the resources was encoded to the Corpus Encoding Standard, CES BIBREF3, with subsequent versions moving to XCES BIBREF18, the XML version of CES, and later on to TEI, as it is more general, defines how to introduce extensions to the core standard and has extensive support. This TEIfication finished with Version 4, where the last part of the resources, namely the morphosyntactic specifications (previously as a document typeset using ) and sentence alignments (previously in XCES) were re-coded to TEI P5. The advantages of having all the resources in XML are obvious: they can be edited, validated, transformed, and queried with XML tools. Using TEI means that much of needed functionality (schema, documentation, some transformations) is already in place. For example, the TEI provides a sophisticated set of XSLT stylesheets for converting TEI documents into HTML and other formats, useful for putting on-line the MULTEXT-East documentation and the morphosyntactic specifications. TEI P5 schemas are constructed by writing a TEI customization, i.e., a complete (but possibly quite short) TEI document, where the text contains a special element, schemaSpec, giving the schema in the high-level TEI ODD language, with the acronym meaning “One Document Does it all” BIBREF7 . 
The schema specification contains invocations of the needed TEI modules, which define their elements and attributes. An ODD document can be processed by an ODD processor, which will generate an appropriate XML schema. The XML schema can be expressed using any of the standard schema languages, such as ISO RELAX NG (REgular LAnguage for XML Next Generation) BIBREF19 or the W3C Schema language. These output schemas can then be used by any XML processor such as a validator or editor to validate or otherwise process documents. TEI provides an ODD processor called Roma, also available via a Web interface that helps in the process of creating a customized TEI schema. The MULTEXT-East XML schema distributed with the resources consists of the customisation TEI ODD and the Roma generated Relax NG, W3C and DTD schemas, as well as customisation-specific documentation. The morphosyntactic specifications The morphosyntactic specifications define word-level features (most of) which reside on the interface between morphology and syntax. So, for example, the features will not specify the paradigm label of a word, such as “first masculine declension”, which is a purely morphological feature, nor the valency of a verb, which is a syntactic feature and has no reflex in the morphology of a verb. They will, however, give the part-of-speech, and, depending on the language, gender, number, case, etc., which, on the one hand, are marked on the form of a word (typically its ending) and, on the other, enter into syntactic relationships such as agreement. In addition to defining features (attributes and their values) the specifications also give the mapping from feature-structures used to annotate word-forms to morphosyntactic descriptions (MSDs), which are compact strings used in the morphosyntactic lexicons and for corpus annotation. So, for example, the feature-structure Noun, Type = common, Gender = neuter, Number = dual, Case = locative maps to the MSD Ncndl. The feature structures can thus be viewed as a logical form of the features associated with a word-form, while the corresponding MSDs is its serialisation. In addition to the formal parts, the specifications also contain commentary, bibliography, etc. Although the encoding of the specifications has changed substantially, their structure still follows the original MULTEXT specifications: they are composed of the introductory part, followed by the common specifications, and then by language particular specifications, one for each language. The remainder of this section explains the structure of the common specifications, the language particular sections, the MSD tagset(s) and their relation of feature-structures, an overview of the XSLT stylesheets used to process the specifications and the morphosyntactic lexicons. The morphosyntactic specifications ::: Common specifications The common part of the specification gives the 14 MULTEXT defined categories, which mostly correspond to parts-of-speech, with a few introduced for technical reasons. Each category has a dedicated table defining its attributes, their values, and their mapping to the (common) MSD strings. For each attribute-value pair it also specifies the languages it is appropriate for. Furthermore, attributes or their values can have associated notes. Table TABREF10 lists the defined categories and, for each category, gives the number of attributes it distinguishes, the number of different attribute-value pairs, and the number of MULTEXT-East languages which use the category. 
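To illustrate the mapping from feature structures to MSDs described above, the sketch below encodes the Noun example from the text (Type=common, Gender=neuter, Number=dual, Case=locative maps to Ncndl) with a toy fragment of a position table. The real specifications define far more categories, attributes and values than are listed here.

# A toy fragment of the attribute-position tables, just enough to reproduce the
# example from the text (Noun + common + neuter + dual + locative -> "Ncndl").
TABLE = {
    "Noun": {
        "code": "N",
        "attrs": [  # (attribute, MSD position, {value: code})
            ("Type",   1, {"common": "c", "proper": "p"}),
            ("Gender", 2, {"masculine": "m", "feminine": "f", "neuter": "n"}),
            ("Number", 3, {"singular": "s", "dual": "d", "plural": "p"}),
            ("Case",   4, {"nominative": "n", "genitive": "g", "dative": "d",
                           "accusative": "a", "locative": "l",
                           "instrumental": "i"}),
        ],
    },
}

def features_to_msd(category, features):
    spec = TABLE[category]
    slots = [spec["code"]] + ["-"] * len(spec["attrs"])   # '-' = not applicable
    for attr, pos, values in spec["attrs"]:
        if attr in features:
            slots[pos] = values[features[attr]]
    return "".join(slots).rstrip("-")    # MULTEXT-East omits trailing hyphens

print(features_to_msd("Noun", {"Type": "common", "Gender": "neuter",
                               "Number": "dual", "Case": "locative"}))  # Ncndl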
The feature-set is quite extensive, as many of the languages covered have very rich inflection, are typologically quite different (inflectional, agglutinating) but also have different linguistic traditions. The definitions for each category are encoded simply as a TEI table, with @role giving the function of each row and cell. Figure FIGREF11 gives the definition of the Formation attribute belonging to the Particle category. The example states that (a Particle) has the attribute Formation with two values, simple and compound and both values are valid for Bulgarian, Macedonian and Russian. Furthermore, in Particle MSDs, such as Qzs or Qgc, the Formation attribute has position 2 (taking the category position as 0), with the code s or c. The morphosyntactic specifications ::: Specifications for individual languages The second main part of the specifications consists of the language-specific sections. These, in addition to the introductory matter, also contain sections for each category with their tables of attribute-value definitions. These tables are similar to the common tables in that they repeat the attributes and their values, although only those appropriate for the language. However, they can also re-specify the position of the attributes in the MSD string, leading to much shorter and more readable MSD tags for the language. So, if an MSD needs to be uniquely interpreted in a multilingual setting, then the mapping from the features to the MSD is made using the common tables, if not, then the language specific mapping can be used. The tables can also contain localisation information, i.e., in addition to English, the names of the categories, attributes, their values and codes can be translated into some other language(s). This enables expressing the feature-structures and MSDs either in English or in the language that is being described, making them much more suitable for use by native speakers of the language. Such localisation information enables e.g., the mapping of the already mentioned MSD Ncndl to the Slovene Sosdm, which corresponds to samostalnik vrsta = občno_ime, spol = srednji, število = dvojina, sklon = mestnik. Figure FIGREF13 shows the language specific table for the Slovene Particle, which has no attributes. As in the common tables, the role attribute gives the meaning of the cell, while the language of the cell is specified by the xml:lang attribute. The language particular section can also contain information on the allowed combinations of particular attribute values in the form of a list of constraints for each category. This mechanism has been carried over from MULTEXT although is has not, as yet, been operationalized. Finally, each language particular section contains an index (also encoded as a table) containing all the valid MSDs, i.e., it specifies the tagset for the language. This is an important piece of information, as a tagged corpus can then be automatically validated against this authority list, and the tagset can be statically transformed into various other formats, cf. Section SECREF23. Figure FIGREF14 shows one row from the Slovene MSD index. The MSD is given in cell (1), while the rest of the row gives explicative information: its expansion into features (2), the MSD localised to Slovene (3) and its expansion (4), the number of word tokens (5) and types (6) tagged with this MSD in a corpus, and (7) examples of usage in the form word-form/lemma. 
The latter three pieces of information have been in this instance automatically produced from a corpus and supplementary lexicon of Slovene, and the examples chosen are the most frequent word forms in the corpus for the MSD. In contrast to Slovene, the MSD lists were extracted directly from the corresponding MULTEXT-East lexicon for each language. The numbers of MSDs of course differ significantly, although not only due to the inherent differences between the languages but also because of different approaches taken in the construction of the lexica: while, for some languages, the lexica contain the complete inflections paradigms of the included lemmas, others include only word-forms (and their MSDs) that have actually been attested in a corpus. English, as a poorly inflecting language, has the lowest number of MSDs, namely 135, and even this number is considerably larger than most English tagsets, as the MULTEXT-East specifications introduce quite fined grained distinctions. Next come languages that either have “medium-rich” inflections (Romanian with 615 or Macedonian with 765 MSDs) or list only corpus-attested MSDs (Bulgarian with 338 or Estonian with 642 MSDs), followed by inflectionally very rich (South) West Slavic languages (Czech with 1,452 or Slovak with 1,612 MSDs). By far the largest tagset is that of the agglutinating Hungarian language (17,279 MSDs), which can pile many different suffixes (and their features) onto one word-form, resulting in a huge theoretically possible MSD tagset. This tagset shows the limits of the MSD concept, as it would most likely be impossible to construct a corpus of sufficient size to contain training examples covering all the MSDs. The morphosyntactic specifications ::: PoS tags, MSDs and features The MSDs have a central status in MULTEXT-East, as they tie together the specifications, lexicon and corpus and this section discusses the relation between traditional Part-of-Speech (PoS) tagsets, MSDs and features in somewhat more detail. It should first be noted that in EAGLES, as in MULTEXT, the MSDs were not meant to be used in corpus annotation. Rather, the MSDs were to be mapped to PoS tagsets. PoS tagsets, as traditionally conceived for annotating monolingual corpora, such as the English Penn TreeBank tagset or the Stuttgart-Tübingen (SST) tagset (cf. the Chapter on German Treebanks), are not analytical, i.e., a tag cannot in the general case be decomposed into morphosyntactic features. Especially morphosyntactic features which lead to high ambiguity and are difficult to predict by taggers are left out of the tags and PoS tags can even be assigned to individual words, such as “EX” for “Existential there”. But developing an “optimal” mapping of MSDs to tagger-friendly tagsets for individual languages is quite difficult and has not been attempted for most languages, at least not in the scope of MULTEXT-East. A MSD tagset itself, in spite of its seeming simplicity, is also difficult to define unambiguously. One reason is the varied interpretations of the `-' symbol. The hyphen is used in MSDs to mark the value of a “non-applicable” attribute, either for the whole language, for a combination of features or for a particular lexical item. For example, Adverbs have Degree marked on the 2nd position, and Clitic on 3rd, so Rgpy is Adverb Type=general Degree=positive Clitic=yes, but as adverbial Participles do not distinguish Degree, Adverb Type=particle Clitic=yes will be coded as Rp-y. 
The same logic applies to cases where an inflectional feature is evident only on some forms and not on others. For example, Slovene nouns only distinguish animacy if they have the masculine gender and even here only in the singular accusative form, so it is marked only on this form, while the others set the value of Animacy to non-applicable. The use of hyphens also brings with it the question whether or not to write trailing hyphens up to the number of attributes defined for the category; in MULTEXT they were written, but in MULTEXT-East it was decided to omit them, resulting in Rgpy rather than Rgpy—-. With the addition of new languages the number of attributes became quite large, and, as the new attributes were added at the end, the MSD strings often became very long (e.g., Dg——–q), which is precisely the reason the language-particular orderings of attributes were introduced. However, this does give the option of expressing the MSDs for the same feature set in two ways, according to the common tables, or to the language particular ones. A better option would most likely have been to move the MSD constructors completely into the language particular tables, as they are really defined on the level of an individual language. If the need arises for the MSDs of several languages to be mixed in a corpus they would be easy enough to distinguish by, say, prefixing them by the language code or interpreting them in the context of the superordinate @xml:lang attribute. Another complication arises from the fact that features are defined for each category separately; attributes with identical names can have different values with different categories, and the position of the attribute in the MSD string is typically also different for different categories. This is in contrast with the mapping between attributes and their positions in the MSD tags which are used for annotating many Czech language resources, such as the Prague Dependency TreeBank BIBREF20, where each attribute has a fixed position, regardless of the category. If MSDs are taken to correspond to fully ground feature-structures, then the PDT system is untyped, while in the MULTEXT approach categories act as types in the sense of BIBREF21, each introducing its own attributes. Alternatively, and more usefully, an attribute can be taken to be defined by its name, and its valid values as the union of all category-dependent values, i.e., as in the PDT. This is probably uncontroversial for attributes like Gender, but more problematic for e.g., Type, which mostly has disjoint values for different categories. As the preceding discussion shows, there are a variety of ways of writing “the same” MSD, and to which exact feature-structure to map the MSD. 
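The inverse direction, decoding an MSD back into features while treating '-' as a non-applicable attribute, can be sketched for the adverb example above (Rgpy vs. Rp-y); only the handful of values needed for the example are listed, and the value labels follow the wording of the text.

# Decoding sketch for the adverb example above: a position table maps each MSD
# character back to a feature, and '-' marks a non-applicable attribute.
ADVERB = [  # (attribute, {code: value}), indexed by MSD positions 1, 2, 3
    ("Type",   {"g": "general", "p": "particle"}),
    ("Degree", {"p": "positive", "c": "comparative", "s": "superlative"}),
    ("Clitic", {"y": "yes", "n": "no"}),
]

def decode_adverb(msd):
    assert msd[0] == "R", "adverbs are coded with category R"
    feats = {}
    for (attr, values), code in zip(ADVERB, msd[1:]):
        if code != "-":                       # '-' = attribute not applicable
            feats[attr] = values[code]
    return feats

print(decode_adverb("Rgpy"))  # {'Type': 'general', 'Degree': 'positive', 'Clitic': 'yes'}
print(decode_adverb("Rp-y"))  # {'Type': 'particle', 'Clitic': 'yes'}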
The MULTEXT-East distribution provides translation tables between MSDs and several feature structure encodings and the list below gives most of the available options, on one example from the Slovene MSD tagset:
1. MSD with attribute ordering according to the common specifications, in English: Vmn———–e
2. same as 1, but with attribute ordering according to the language particular specifications for Slovene: Vmen
3. same as 2, but in Slovene: Ggdn
4. minimal feature set, giving only instantiated features, in English: Verb, Type=main, Aspect=perfective, VForm=infinitive
5. same as 4, but in Slovene: glagol, vrsta=glavni, vid=dovršni, oblika=nedoločnik
6. Canonical (type-free) feature set, giving all the attributes defined for the language, in English: Verb, Type=main, Gender=0, Number=0, Case=0, Animate=0, Aspect=perfective, VForm=infinitive, Person=0, Negative=0, Degree=0, Definiteness=0, Owner_Number=0, Owner_Gender=0, Clitic=0, Form=0
7. Canonical (type-free) feature set, giving all the attributes defined in MULTEXT-East, in English: Verb, Type=main, Gender=0, Number=0, Case=0, Definiteness=0, Clitic=0, Animate=0, Owner_Number=0, Owner_Person=0, Owned_Number=0, Case2=0, Human=0, Aspect=perfective, Negation=0, VForm=infinitive, Tense=0, Person=0, Voice=0, Negative=0, Clitic_s=0, Courtesy=0, Transitive=0, Degree=0, Formation=0, Owner_Gender=0, Referent_Type=0, Syntactic_Type=0, Pronoun_Form=0, Wh_Type=0, Modific_Type=0, Coord_Type=0, Sub_Type=0, Form=0, Class=0
The morphosyntactic specifications ::: XSLT stylesheets The MULTEXT-East specifications come with associated XSLT stylesheets, which take the specifications as input, usually together with certain parameters, and produce either XML, HTML or text output, depending on the stylesheet. Three classes of transformations are provided: the first help in adding a new language to the specifications themselves; the second transform the specifications into HTML for reading; and the third transform (and validate) a list of MSDs. The outputs of the second and third class of transformation are included in the MULTEXT-East distribution. There are two stylesheets for authoring the specifications for a new language. The first stylesheet (msd-split.xsl) takes the common part of the specifications and, as a parameter, a list of languages, and produces the language specific section for a new language, copying into it all the features defined for the selected languages. The intention is to make it easier to author the language specific specifications for a new language, by constructing a template that already contains the features of the language(s) that are most similar to it. The second stylesheet (msd-merge.xsl) takes the language specific section for a new or updated language and the common part, and produces the common part with the language added or updated. This might involve only adding the language flag to existing attribute-value pairs, but also adding or deleting attributes or values from the common tables. The stylesheet warns of such cases, making it also suited for validating language specific sections against the common tables. For converting the specifications into HTML the stylesheet msd-spec2prn.xsl is first used to pre-process them in order to add various indexes (of attributes, values, MSDs) and to convert the tables into a more human readable form, which largely follows the formatting of the original MULTEXT(-East) specifications.
This pre-processed version of the specifications is still in TEI XML and is then fed through the standard TEI XSLT stylesheets to produce the HTML (or other) output. Finally, there are two stylesheets that take the specifications and a list of MSDs and converts this list into various other formats. The stylesheet msd-expand.xsl produces different types of output, depending on the values of its parameters. It can check an MSD list for well-formedness against the specifications or can produce an expansion of the MSDs into their feature structure equivalents. Here it distinguishes several expansions, most already presented in the previous section, from a brief one, meant to be the shortest human readable expansion, to the full canonical form, where all the defined attributes are listed. The stylesheet can also produce the collating sequence for the MSDs with which it is possible to sort MSDs so that their order corresponds to the ordering of categories, attributes and their values in the specifications. Finally, the stylesheet is able to localise the MSD or features on the basis of the language specific section with localisation information. The second stylesheet, msd-fslib.xsl transforms the MSD list into TEI feature and feature-structure libraries, suitable for inclusion into TEI encoded and MSD annotated corpora. The morphosyntactic specifications ::: The morphosyntactic lexicons The MULTEXT-East lexicons contain mid-sized lexicons for most of the languages and are, from the encoding standpoint, very simple. Each lexicon is a tabular file with one entry per line, composed of three fields: (1) the word-form, which is the inflected form of the word, as it appears in the text, modulo sentence-initial capitalisation; (2) the lemma, the base-form of the word, which e.g., serves as the head-word in a dictionary; and (3) the MSD, i.e., the morphosyntactic description, according to the language particular specifications. It should be noted that the lexicon is necessary to ground the specifications and make them useful: it is only by associating a MSD with lexical items (word-form + lemma) that the MSD is given its semantics, i.e., this makes it possible to exemplify how a MSD is used in practice. The sizes of the MULTEXT-East lexicons vary considerably between the languages: the Slovak and Macedonian ones, with around 80,000 lemmas, are quite comprehensive, the majority offer medium sized lexicons in the range of 15–50,000 lemmas, and a few are smaller, with Persian only covering the lemmas of “1984” and Resian simply giving examples for each MSD. However, even the smaller lexicons cover the most morphologically complex words, such as pronouns and high frequency open class words, providing a good starting point for the development of more extensive lexical resources. The “1984” corpus The parallel MULTEXT-East corpus consists of the novel “1984” by G. Orwell and its translations. The complete novel has about 100,000 tokens, although this of course differs from language to language. This corpus is small, esp. by today’s standards, and consists of only one text; nevertheless, it provides an interesting experimentation dataset, as there are still very few uniformly annotated many-way parallel corpora. The corpus is available in a format where the novel is extensively annotated for structures which would be mostly useful in the context of a digital library, such as sections, paragraphs, verse, lists, notes, names, etc. 
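Returning briefly to the lexicons, their three-column format (word-form, lemma, MSD) is straightforward to read programmatically; the sketch below assumes tab-separated fields, and the file name and example analyses in the comments are hypothetical.

# Minimal loader for a three-column lexicon file (word-form, lemma, MSD),
# assuming one entry per line with tab-separated fields.
from collections import defaultdict

def load_lexicon(path):
    lexicon = defaultdict(list)            # word-form -> [(lemma, MSD), ...]
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 3:
                continue                   # skip malformed lines
            form, lemma, msd = parts
            lexicon[form].append((lemma, msd))
    return lexicon

# toy usage: list the possible analyses of an ambiguous word-form
# lex = load_lexicon("msd-lex.sl.tbl")     # hypothetical file name
# print(lex.get("vode"))                   # e.g. [('voda', 'Ncfsg'), ('voda', 'Ncfpn')]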
More interestingly, the “1984” also exists as a separate corpus, which uses only basic structural tags but annotates each word with its context-disambiguated and – for the most part – hand-validated MSD and lemma. This dataset provides the final piece of the morphosyntactic triad, as it contextually validates the specifications and lexicon and provides examples of actual usage of the MSDs and lexical items. It is useful for training part-of-speech taggers and lemmatisers, or for studies involving word-level syntactic information in a multilingual setting, such as factored models of statistical machine translation. The “1984” corpus ::: The linguistically annotated corpus As illustrated in Figure FIGREF26, the text body consists of basic structure (divisions, paragraphs, sentences) and the tokenised text, where the words are annotated by (a pointer to) their MSD and the lemma. The elements and attributes for the linguistic annotation come from the TEI analysis module. The document also contains, in its back matter, the feature and feature-value libraries, automatically derived from the language specific morphosyntactic specifications. The feature-value library defines the MSDs, by giving them identifiers and decomposing them into features, i.e., giving pointers to their definitions, while the feature library provides these definitions in the form of attribute-value pairs. Each linguistically annotated “1984” thus contains within it the mapping from the MSD tags to the equivalent feature structures. To further illustrate the annotation we give in Appendix 1 the first sentence of “1984” for all the languages that have this annotated corpus in MULTEXT-East. The “1984” corpus ::: Sentence alignments The “1984” corpus also comes with separate files containing sentence alignments between the languages. In addition to the hand-validated alignments between English and the translations Version 4 also includes automatically induced pair-wise alignments between all the languages, as well as a multi-way alignment spanning all the languages. The problem of producing optimal n-way alignments from (high-precision) 2-way alignments with a hub is interesting, and more complex than might be obvious at first sight, as the source alignments need not be 1:1, and the alignment of different languages can have different spans of such $m:n$ alignments ($m, n \ge 0$); the Java program used to compute them BIBREF22 is available from the download page of MULTEXT-East. Figure FIGREF28 shows a few sentence links from the two-way alignment between Macedonian and Slovene. Each link gives the arity of the alignment and a series of (sentence) targets. The @target attribute is in TEI defined as a series of 2 or more values of XML schema type anyURI, so a target must be (unlike CES) fully qualified and it is not possible to directly distinguish between the two languages of the alignment. However, this is easily done via the value of the @n attribute or by the @xml:lang attribute of the (ancestor of) the referred to sentences. Given that there have to be two or more URIs as the value of @targets it is also not possible to encode 1:0 and 0:1 alignments which in CES used to be encoded explicitly. Whether the lack of such alignment ever makes a difference in practice, is an open question. Related work This section reviews work which connects to the MULTEXT-East resources, i.e., making available multilingual morphosyntactic specifications, lexicons and annotated parallel corpora. 
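A hedged sketch of reading such alignment links with ElementTree is given below; the element and attribute names follow the description above (TEI link elements with a @target attribute holding fully qualified pointers) and may not match the released files exactly.

# Hedged sketch of reading sentence links from a TEI alignment file; element and
# attribute names are assumptions based on the description in the text.
import xml.etree.ElementTree as ET

TEI = "{http://www.tei-c.org/ns/1.0}"

def read_links(path):
    links = []
    for link in ET.parse(path).getroot().iter(f"{TEI}link"):
        targets = link.get("target", "").split()
        # group the fully qualified pointers by the document they point into,
        # which recovers the m:n span of the alignment on each side
        by_doc = {}
        for uri in targets:
            by_doc.setdefault(uri.split("#")[0], []).append(uri)
        links.append(by_doc)
    return links

# e.g. a 2:1 link would come back as two sentence pointers for one document and
# one pointer for the other (document and sentence ids here are hypothetical):
# [{'1984-mk.xml': [...two ids...], '1984-sl.xml': [...one id...]}, ...]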
Related work ::: Morphosyntactic specifications Harmonisation of multilingual linguistic features usually proceeds in the scope of large international projects, and while MULTEXT-East has its genesis in such efforts, in particular EAGLES and MULTEXT, it has since proceeded by slowly adding new languages and updating the encoding of the resources, without making any revolutionary changes to the basic concept. In the meantime other initiatives have also been cataloguing and standardising linguistic features, although on a much broader scale, not limited to morphosyntax. GOLD, the General Ontology for Linguistic Description BIBREF23 is an effort to create a freely available domain-specific ontology for linguistic concepts. This is a well advanced effort, where (morphosyntactic) terms are extensively documented, also with references to literature. As the complete ontology is also available for download, it would be interesting to link the categories, attributes and their values form the MULTEXT-East specifications to GOLD, thus providing an explication of their semantics. Mostly as a result of a series of EU projects, a number of standards for encoding linguistic resources have been (or are being) developed by the ISO Technical Committee TC 37 “Terminology and other language and content resources”, in particular its Subcommittee SC 4 “Language resource management”. Morphosyntactic features are, along with other linguistic features, defined in the ISO standard 12620:2009 “Specification of data categories and management of a Data Category Registry for language resources”, and the standard is operationalized as the isoCat Web service at http://www.isocat.org/. The isoCat Data Category Registry (DCR) BIBREF24, BIBREF25 assigns PIDs (permanent indentifiers) to data categories, such as morphosyntactic features, and these PIDs then serve as stable identifiers for particular features. Users can also browse or search for data categories, export a chosen subset, or add new categories. The GOLD ontology has also been added to isoCat, although the information accompanying the features is not given in isoCat; rather the data categories just refer to the GOLD site. While MULTEXT-East has served as one of the sources for developing the ISO DCR, it has not been so far directly included in isoCat as one of the possible profiles. This would certainly be a useful endeavour but is complicated by the fact that, unlike GOLD, the DCR registry is not available for download and upload, which precludes (semi)automatically adding already existing category registries. There are currently also some technical and conceptual problems with adding existing feature collections, as documented for the case of mapping the National Corpus of Polish tagset to ISOcat BIBREF26. GOLD and isoCat deal with linguistic features and do not propose specific multilingual harmonised tagsets. Surprisingly, it is only relatively recently that research has moved in this direction. After MULTEXT and MULTEXT-East probably the first and very partial attempt in this direction was the dataset used in the CoNLL-X shared task on Multilingual Dependency Parsing BIBREF27, which consisted of uniformly encoded treebanks for 13 languages. However, while the format of the treebanks was the same, there was no attempt to unify the PoS tagsets or morphosyntactic features of the treebanks. A more interesting approach is that taken in Interset BIBREF28, BIBREF29, even though it does not propose multilingual tagsets. 
Rather, the idea is to introduce a central and largely universal set of features and then write drivers from and to particular tagsets to this pivot feature set. Then, if a particular tagset A needs to be made compatible with another tagset B (either for the same or for another language) it is enough to run the driver for A into the pivot and the driver for B from the pivot. So, for each tagset only two drivers need to be specified, enabling the conversion to and from all the covered tagsets. There are of course quite a few problems in defining such mappings, such as partial overlap of features, but the approach has been validated in practice and the problems and suggested solutions are discussed in the literature. The Interset approach has been subsequently also used for tagset harmonisation of treebanks for 29 languages, which, together with the harmonisation of syntactic dependencies, resulted in the HamleDT, the Harmonized Multi-Language Dependency Treebank BIBREF30. The most influential multilingual tagset is probably the “Universal tagset” proposed by BIBREF31, which maps tagsets for 22 languages to a tagset consisting of 12 different tags. While such a tagset is undoubtedly useful, it does propose only a lowest common denominator for the languages, thus losing most information from the original tagsets. Related work ::: Morphosyntactic lexicons MULTEXT-East resources also offer morphosyntactic lexicons for languages for which they are otherwise still hard to obtain. ELDA, for example, offers almost 600 lexicons, but most are for Western European languages, and are, for the most part, not for free. LDC, on the other hand, offers cheaper resources but has very few lexicons, and those mostly for speech or for very low resourced languages. It should be noted, however, that ELDA does offer the lexicons of the MULTEXT project, i.e., for English, French, German, Italian, Spanish, and Castilian, which complement the MULTEXT-East lexicons. Related work ::: Parallel annotated corpora Finally, the MULTEXT-East parallel “1984” corpus is, of course, very small and too uniform to seriously train taggers but, again, available parallel tagged and hand validated corpora are quite difficult to find, so it represents a viable option for developing tagger and lemmatiser models. The text is also interesting from a literary and translation perspective: the novel “1984” is an important work of fiction and linguistically quite interesting, e.g., in the choices the translators made in translating Newspeak words into their language. Again, ELDA does offer (a part of) the MULTEXT corpus, which contains passages from the Written Questions and Answers of the Official Journal of the European Community, with the same languages as for the lexicons. However, only English, French, Italian and Spanish parts are tagged, with roughly 200,000 words per language. Many other highly multilingual corpora have, of course, also become available in the many years since MULTEXT, with the best known being Europarl BIBREF32, JRC-ACQUIS BIBREF33 and other corpora compiled by JRC. But while these corpora contain 20+ languages and are quite large, the texts are not word-level (PoS tags, lemmas) annotated and available corpora with such annotations continue to be rather rare. Conclusions This chapter has introduced one of the oldest maintained sets of multilingual resources, covering mostly the morphosyntactic level of linguistic description. 
From the beginning, the objective of MULTEXT-East has also been to make its resources available to the wider research community. While it proved impossible to distribute the resources in a completely open manner, a portion of the resources is freely available for download or browsing and for the rest, the user has to agree to use them for research only and to acknowledge their use, and is then free to download them from the project Web site. Further work on the resources could proceed in a number of directions. As mentioned, an obvious next step in the development of the specifications and associated tagsets would be to link them to universal vocabularies, in particular isoCat and GOLD. The second direction concerns the quality of the resources: it has been noted that the MULTEXT-East morphosyntactic specifications lack consistency between the languages BIBREF34, BIBREF35, BIBREF36. Specific points are summarised in BIBREF37 and can be divided into cases where different features in various languages are used to describe the same phenomenon, and, conversely, the same features are used to describe different phenomena, and that certain features are too specific and hard to extend to cover similar phenomena in another language; in short, the harmonisation of the specifications between the languages is less than perfect. There are several reasons for this, most already mentioned: the specifications typically reflect the annotations in some source lexicon for the language, and the logic of such language and resource particular morphosyntactic annotations. The linguistic traditions of different languages differ, and this is also reflected in the choice and configuration of the features. Some steps in harmonising the MULTEXT-East specifications have already been undertaken in the context of converting them into an OWL DL ontology BIBREF38, which enables logical inferences over feature sets to be made on the basis of partial information. This process also pin-pointed inconsistencies, which were, to an extent, resolved in the ontology. The specifications also provide a framework in which other, different morphosyntactic tagsets can be defined. For Slovene, we have used the framework to define two new sets of morphosyntactic specifications with associated tagsets. The SPOOK corpus BIBREF39 is a corpus of parallel sentence aligned bi-texts, where one of the languages is Slovene, with the other being English, German, French or Italian. The SPOOK foreign language texts have been tagged with TreeTagger BIBREF40 which is distributed with a language models covering the SPOOK foreign languages, but having very disparate tagsets. To harmonise these tagsets, we developed the SPOOK morphosyntactic specifications, where the TreeTagger tags are 1:1 mapped onto MSDs for each particular language, using, where necessary, idiosyncratic features. With this it is possible to use the corpora either with the source TreeTagger tags or with harmonised SPOOK MSDs. The other case concerns corpora of historical Slovene BIBREF41, where the focus of the project was on modernisation of historical word-forms, rather than on MSD tagging. Nevertheless, we also wanted to annotate at least basic PoS information on the words. To this end, we developed the IMP morphosyntactic specifications, based on the MULTEXT-East ones for Slovene, which, however, strip all inflectional features from the tags, resulting in a small tagset of 32 MSDs. Both specifications are available on-line, in the same formats as the MULTEXT-East ones. Appendix 1. 
Examples of annotated text from Orwell's “1984”.
Unanswerable
Q: How big are datasets for 2019 Amazon Alexa competition? Text: Vision Prompt: What is your team’s vision for your Socialbot? How do you want your customers to feel at the end of an interaction with your socialbot? How would your team measure success in competition? Our vision is made up of the following main points: 1. A natural, engaging, and knowledge-powered conversational experience. Made possible by a socialbot that can handle all kinds of topics and topic switching more naturally than current Alexa bots. Our goal is not necessarily for the user to feel like they are talking to a human. 2. More natural topic handling and topic switching. Incorporating knowledge into neural models BIBREF0 and using the Amazon topical chat dataset can help improve current socialbots in this aspect. 3. Building a deeper, more personalized connection with the user. We believe that offering a personalized experience is equally as important as being able to talk about a wide range of topics BIBREF1. 4. Consistency. Consistency is another important aspect of conversations which we want to take into account through our user models. 5. Diversity and interestingness. The socialbot should give diverse and interesting responses, and the user should never feel like it is merely repeating what it has said earlier. At the end of an interaction customers should feel like they just had a fun conversation, maybe learned something new, and are thrilled to talk to the bot again. Throughout the dialog, customers should feel like the socialbot is interested in them and their topics, and can offer valuable insight and opinions. It is also important for it to suggest relevant topics in an engaging way. Users should never feel like the bot is not interested or can’t continue a conversation. This is a reason behind classifying and calculating our metrics for each user input, to get an idea of user engagement in the current conversation. Our main measures for success are: - User feedback. - Comparison to other dialog systems in A/B tests. - Automatic metrics. We would measure success partly by looking at the user feedback. We expect our socialbot’s ratings to constantly increase, and verbal feedback to get more positive throughout the competition. We plan to classify verbal feedback with a simple sentiment classifier to quantitatively see the rate of improvement. Working back from the customer and constantly improving the conversational experience based on feedback is important to us. Success would also be measured by comparing our system to previous socialbots or other dialog systems in A/B tests with crowdsourced evaluators. Our goal is to have long and high-quality conversations, but the longevity shouldn’t come from awkwardly long, specific, and forced replies, as is the case with some of the current socialbots. While generally, a longer conversation is better, it is not the only metric that we wish to consider. Besides user ratings we also have a plethora of automatic metrics that we want to improve on, like metrics measuring topic depth and breadth BIBREF2, entropy metrics measuring diversity, or embedding metrics measuring coherence BIBREF3. Different metrics measure different aspects of responses, thus it is important to not solely look at metrics individually. Sample Conversation Prompt: Please share a sample interaction/conversation you expect your Socialbot to achieve by the end of the Competition. User: Alexa, let’s chat Alexa: Welcome to the Alexa Prize. SYS: Hi <username>, what do you want to talk about? 
User: I don’t know, you decide. SYS: What about books. I remember you liking books, but what was the last book you’ve read? User: Umm, I think it was American Psycho. SYS: One of my favorites, did you know there is a movie adaptation? User: No. SYS: It’s pretty good. Are you reading something right now, because if not I can give some recommendations. User: Shoot me. SYS: I heard that Silence of the Lambs is really good. User: Yeah I watched the movie, but never read the book, maybe I will pick it up. SYS: Glad I could be of help. User: What is your favorite movie? SYS: I really liked Blade Runner, what about you? User: I hated it. SYS: Ah that’s too bad, but I also like your favorite movie which if I remember correctly is Avengers. User: Wow I’m surprised you remember that, but I guess you should have perfect memory. SYS: My memory is not perfect unfortunately, but I try my best. User: What is your earliest memory? SYS: I was very little, and I was playing in the sand, that’s all I remember. You? User: Uhh, I don’t know, maybe when I was four and fell off the swing. SYS: That must have been painful, did you have the courage to get on a swing again. User: Sure, it wasn’t that big of a deal. SYS: Glad to hear that. Since we are already talking about swings do you like the swing genre? User: Yes, especially electro swing. SYS: Me too! I love Parov Stelar! User: Haha, I think he’s a bit overrated but yeah, I like watching Youtube compilations of people dancing to it. SYS: I wish I could dance that well. User: Yeah, me too. SYS: What other Youtube compilations do you watch? User: Fails, and cats SYS: Cats are too cute, do you own one? User: No, maybe one day. SYS: Well I hope you will have a cat one day. User: Can you get me one? SYS: Unfortunately I can’t, but I can say purr purr. Architecture Prompt: Describe, in detail, your scientific approach along with the related system architecture. Plan for operating at scale when you define the architecture. Our system architecture (Figure FIGREF1) is comprised of 3 main components: First the user input is processed through the myblueNLP component, then this data is sent to myyellowResponse Candidates, which contains a suite of neural and rule-based models, and finally the mygreenDialog Manager chooses the best response taking into account dialog history and current myblueNLP data. These components communicate with each other through the mygrayDialog State Manager, and a myredKnowledge Base component that can be queried by our components stores our knowledge bases. We build on top of CoBot BIBREF2, thus the system is scalable and new components can be added easily. We leverage former Alexa competitors' architectures BIBREF4, BIBREF5. We minimize latency, by running tasks in parallel whenever possible, in order to make the conversation feel natural. Some redundancy is also included (e.g. in the form of multiple response generators), and we define a fixed time window for each major step in our pipeline, after which we interrupt the current component and use the information already computed from the sub-components in the next step, reducing total processing time. We will develop our system in three phases (Figure FIGREF1): Components marked core, core+, and core++ are to be completed by the end of phase 5, 7, and 9, respectively. These are the minimally planned components for each category, but if time permits we will advance faster. 
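As a rough illustration of the fixed time window per pipeline step described above, the sketch below runs some NLP sub-components in parallel and keeps only the annotations that finish within the window. The component functions, the window length and the returned fields are invented for illustration; they are not the actual CoBot components.

```python
from concurrent.futures import ThreadPoolExecutor, wait

# Hypothetical stand-ins for real sub-components (classifiers, NER, metrics).
def topic_classifier(utt):     return {"topic": "books"}
def sentiment_classifier(utt): return {"sentiment": "positive"}
def entity_recognizer(utt):    return {"entities": ["American Psycho"]}

SUBCOMPONENTS = [topic_classifier, sentiment_classifier, entity_recognizer]
TIME_WINDOW = 0.2  # seconds per major pipeline step (illustrative value)

def run_nlp_step(utterance):
    """Run sub-components in parallel; keep whatever finished inside the window."""
    pool = ThreadPoolExecutor(max_workers=len(SUBCOMPONENTS))
    futures = [pool.submit(fn, utterance) for fn in SUBCOMPONENTS]
    done, not_done = wait(futures, timeout=TIME_WINDOW)
    state = {}
    for fut in done:
        state.update(fut.result())
    # Components that exceeded the window are abandoned; their output is not used.
    pool.shutdown(wait=False, cancel_futures=True)  # Python 3.9+
    return state

print(run_nlp_step("I think it was American Psycho."))
```

Slow components are simply abandoned rather than blocking the turn, which keeps the worst-case latency of the step bounded by the window.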
This provides us an incremental and iterative approach to build our architecture starting with the most important components, always testing included components before advancing to new ones. Our main novelties include: [topsep=2pt,itemsep=-1ex,partopsep=1ex,parsep=1ex] Using self-play strategies to train a neural response ranker. Computing a large number of metrics for both input and response, and specifically optimizing some models for our metrics. Training a separate dialog model for each user. Using a response classification predictor and a response classifier to predict and control aspects of responses. Predicting which model emits the best response before response generation. Using our entropy-based filtering approach to filter dialog datasets BIBREF3. Using big, pre-trained, hierarchical BERT and GPT models BIBREF6, BIBREF7, BIBREF8. Next, we describe each component in detail in order of the data flow. Architecture ::: Dialog State Manager. This is included in CoBot and we extend it to manage our current dialog state (i.e. conversations and related data described below), saving it to DynamoDB BIBREF9 when appropriate. DynamoDB stores all past dialog states for every user. The grayDialog State Manager communicates with the myblueNLP and mygreenDialog Manager components which can update the dialog state. It works in parallel to all components, thus it doesn't affect latency. Architecture ::: NLP. mygrayASR data is sent to the first component in the pipeline (myblueNLP), starting with the myblueASR Postprocessor. If the confidence score of the transcribed utterance is below a certain threshold the pipeline is interrupted and we return a reply asking the user to repeat or rephrase their sentence. Otherwise if the confidence is above this but still lower than average we look at the n-best mygrayASR hypotheses and try to correct the utterance based on context (planned to be part of core++.). The corrected utterance is passed to all the subcomponents running in parallel. Token-timing is also saved to the dialog state and used as additional input to dialog models, as it might help disentangle separate phrases. We leverage and extend some of CoBot's built-in NLP components (myblueNER, myblueTopic, myblueSentiment, and myblueOffensive Speech classifiers) and also add our own. Named entities are extracted and we use myredNeptune BIBREF10 and the myredGoogle Knowledge Graph BIBREF11 to get related entities and pieces of information about them. myblueTopic, myblueDialog Act, myblueSentiment and myblueOffensive Speech classifiers take into account previous dialog states (context) from DynamoDB. We save all information in DynamoDB and build statistics about the user (e.g. what are her/his favorite topics). We compute all our automatic evaluation metrics BIBREF3 for the user utterance which is useful for the response selection strategy (e.g. if we find the user is bored we would try to suggest a new topic based on saved user information). After all subcomponents are finished or the time window is exceeded, all data is sent to the mygrayDialog State Manager. We also plan to experiment with inserting a response classification prediction (mygreenRCP) step, which predicts the topic, dialog act and sentiment of the response, using context, and current myblueNLP data. The predicted information about the response is added to the dialog state and the dialog models in myyellowResponse Candidates can leverage it. 
We also plan to experiment with using this information to control desired aspects of the response3 BIBREF12. Architecture ::: Response Candidates. Once the myblueNLP and mygreenRCP are done, the mygrayDialog State Manager sends the current dialog state to our dialog models running in parallel. Most models will also use conversation history and user information from DynamoDB. Ensemble modeling, a prevalent technique in nearly all Alexa socialbots BIBREF5, BIBREF2, improves the response quality since we can have different models dealing with different domains and situations. Rule-based models include myyellowEvi (built into CoBot), and publicly available AIML parts of myyellowAlice BIBREF13 and myyellowMitsuku,3. The base of all neural models is a big, hierarchical BERT or GPT-based model, pre-trained on non-dialog data BIBREF8, BIBREF14. The hierarchical part ensures that our models are grounded in past utterances and that they respond differently to the same input utterance (since the past is different). We also plan to experiment with inserting BERT layers in variational models3 BIBREF15, BIBREF16, which can provide more interesting and non-deterministic responses. We further train our pre-trained models on all available dialog datasets jointly. Finally, we finetune myyellowTopic Models on datasets related to specific topics (e.g. subreddits), while myyellowMetric Models are finetuned jointly on all dialog datasets, but we replace the loss function with a specific metric (e.g. coherence, diversity, etc.). myyellowMetric Models can focus on specific dialog properties and ensure that generated responses are diverse and engaging. We train models with extra annotations (e.g. topic, sentiment in DailyDialog BIBREF17, or using knowledge pieces BIBREF0 through the new Amazon topical chat dataset). There are several issues with the cross-entropy loss function BIBREF18, BIBREF3, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, and we proposed to use all kinds of features BIBREF18, motivating the usage of annotations computed with myblueNLP, which helps amend the loss function problem and provides more interesting and diverse responses BIBREF24, BIBREF17, BIBREF25, BIBREF26, BIBREF27, BIBREF0, BIBREF28, BIBREF29. We use two variants of each myyellowTopic and myyellowMetric Model, a neural generative and a retrieval based, which simply returns the n-best responses from training data. The myyellowUser Model4 is a user-specific dialog model finetuned on user-Alexa conversations. It will be at least one order of magnitude smaller than other models since we have to train and store the weights (in DynamoDB) of one model for each user. Through this model, we can encode information about the user, and the model can stay more consistent (if trained with its own responses as targets). Personalizing our system is important and we feel that it will make our chatbot more pleasant to talk to BIBREF1. The myyellowWikiSearch Model simply searches myredWikidata BIBREF30 and returns relevant sentences which we can consider as responses. A similar model is employed for the Washington Post live API as well to stay up-to-date with events and news. We also plan to experiment with an ensemble model setup, where all the response candidates are combined into one response word-by-word, which can be considered as an additional response candidate. 
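To illustrate the retrieval-based model variants mentioned above (and, in spirit, the WikiSearch-style models that just return relevant sentences), here is a toy cosine-similarity retrieval sketch. The embeddings and responses are invented values; a real model would operate on learned sentence representations and a large indexed training set.

```python
import numpy as np

def retrieve_candidates(context_vec, train_contexts, train_responses, n=3):
    """Toy retrieval model: return the n training responses whose contexts are
    closest (by cosine similarity) to the current context embedding."""
    ctx = context_vec / np.linalg.norm(context_vec)
    mat = train_contexts / np.linalg.norm(train_contexts, axis=1, keepdims=True)
    sims = mat @ ctx
    best = np.argsort(-sims)[:n]
    return [(train_responses[i], float(sims[i])) for i in best]

# Invented 4-dimensional "embeddings", purely for illustration.
train_contexts = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0.9, 0.1, 0, 0]])
train_responses = ["I love that book!", "Cats are too cute.", "Who wrote it?"]
print(retrieve_candidates(np.array([1.0, 0.05, 0, 0]), train_contexts, train_responses, n=2))
```

The n-best responses returned this way enter the candidate pool alongside the outputs of the generative models.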
Through the myyellowUser Model and the knowledge-augmented myyellowTopic Models our goal is to achieve an engaging and interesting conversation in which topic handling and topic switching occur more naturally than in current Alexa socialbots. In the initial stages of the competition, we plan to experiment with as many models as possible and use crowdsourcing to exclude from our system models that generate low-quality responses. Architecture ::: Dialog Manager. Once all dialog models have computed a response or timed out, we send response candidates to the mygreenDialog Manager. The mygreenModel Predictor4 runs in parallel with the dialog models, trying to predict which model will generate the best response based on the dialog state and context. If we find that such a model can predict the selected model (by the mygreenResponse Ranker) accurately, then we can largely decrease computational costs by reducing the number of models required to produce a response. We plan to experiment with several response selection strategies (mygreenResponse Ranker) and evaluate them with crowdsourced evaluators in A/B tests. In the initial phases (core part) of the competition, we plan to employ safe baseline strategies like selecting responses only from retrieval and rule-based models, using the CoBot selection strategy, and ranking responses using a weighted sum across all metrics. Our end goal is to be able to learn a neural ranker, which takes as input the dialog state, context, and response candidates (and their probability scores in the case of neural models), and outputs the best response4. One approach is to use crowdsourcing to gather training data for the ranker, by letting people choose the best response among the candidates. We also plan to use user feedback with reinforcement learning BIBREF5. In the final version of the ranker we plan to experiment with self-play3 BIBREF31, BIBREF32, BIBREF12, described in detail in the novel approaches document of the application. Essentially, both at train and test times we can do rollouts with the ranker, where the dialog system feeds its response into itself, to filter responses that lead to poor conversations. This is a computationally taxing technique, which will be tuned to the desired latency. To increase selection confidence we will use the agreement between the mygreenModel Prediction and selected response, and between the mygreenResponse Classifier and the mygreenRCP. The mygreenResponse Classifier uses the myblueNLP component to compute the same data as the mygreenRCP, and is useful in helping the mygreenResponse Ranker rank responses, based on whether they are offensive, on topic, positive, engaging, etc., ensuring a fun and interesting conversation. Thus the mygreenResponse Ranker leverages all components in the mygreenDialog Manager before emitting the final response, which is sent to the mygrayTTS of Alexa. The mygreenRCP and the mygreenModel Predictor are both trained so they approximate their post-myyellowResponse Candidates counterparts (mygreenResponse Classifier and mygreenResponse Ranker). This training signal can be used in the loss function of the neural dialog models as well. Novelty Prompt: What is novel about the team’s approach? (This may be completely new approach or novel combination of known techniques). Our novelties include: Using self-play learning for the neural response ranker (described in detail below). Optimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup. 
Training a separate dialog model for each user, personalizing our socialbot and making it more consistent. Using a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc. Using a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses. Using our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3. Building big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8. Constantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged. Self-play BIBREF31, BIBREF32 offers a solution to the scarcity of dialog datasets and to the issues encountered when using cross-entropy loss as an objective function BIBREF18, BIBREF3. In our setup the dialog system would converse with itself, selecting the best response with the neural ranker in each turn. After a few turns, we reward the ranker based on the generated conversation. Our reward ideas include a weighted sum of metrics and using crowdsourcing and user ratings. Furthermore, we wish to explore two exciting self-play setups: 1. An adversarial setup where the ranker is trained to generate a dialog by self-play to fool a neural discriminator deciding whether it’s machine or human generated. 2. We apply the ideas of curiosity and random network distillation to train the neural ranker BIBREF32. We also plan to experiment with self-play ideas for some of the individual neural dialog models. Related Work Prompt: Please provide a summary of the technical work/research (relevant to your proposed architecture), yours or others’, that you will leverage and how. We employ topic, dialog act, and sentiment classifiers, which widely used in the literature BIBREF4. We leverage rule-based bots in our system because they can provide a different class of responses than neural models. We use recent NLP models BIBREF6, BIBREF7, by finetuning them on our dialog datasets, and modify them to be more suited to deal with dialog modeling, e.g. making them hierarchical or integrating them in other state-of-the-art dialog models BIBREF15, BIBREF16. We leverage baseline response rankers, and adapt ideas from the domains of reinforcement and self-play learning to dialog modeling BIBREF5, BIBREF12, BIBREF32. We and others have found the cross-entropy loss function problematic and the primary reason for the generation of short and boring responses BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF33, BIBREF23. To amend this, we use our idea of filtering dialog datasets based on entropy, obtaining higher quality training data BIBREF3. We address the loss function problem using various features and metrics (from the NLP component) and knowledge pieces (using the new topical chat dataset and Wikidata), which can help neural models in generating more natural and diverse responses BIBREF24, BIBREF17, BIBREF26, BIBREF27, BIBREF0, BIBREF29. We build on top of, modify, and extend CoBot and former competitors’ architectures, as they provide a solid foundation for our dialog system BIBREF4, BIBREF5. ASR postprocessing, and a neural ranker choosing between response candidates are some techniques that we include in our architecture. Ensuring an engaging experience Prompt: How will you ensure you create an experience users find engaging? 
We have several mechanisms to ensure an engaging experience: We classify user utterances by topic, sentiment, etc., and calculate our automatic metrics, using this information when selecting and generating responses. If we find the user lost interest in the conversation, we might suggest a new topic related to their interests (through our topic and user models). We also classify the response candidates, so that we can make sure that they are engaging and relevant. With the help of knowledge-augmented models BIBREF0 we offer the user an interesting and informative conversational experience in a natural way, which all contributes to engagingness. Personalization through our user models is an important factor to engagingness. If the user feels like the socialbot is able to remember and include past information about them in its responses, then this directly contributes to building a deeper connection with the user and maintain their interest. Defining a maximum latency for our pipeline is a small but important feature to ensure users stay engaged. We plan to heavily leverage user feedback (by classifying verbal feedback and using ratings to refine our response ranker) in order to improve our system. Impact Prompt: How do you think your work will impact the field of Conversational AI? We aim to move forward the field of conversational AI and neural dialog modeling through three main novelties: self-play learning, tackling the loss function problem, and personalization. We believe that instead of using rule-based components and rule-based dialog managers, our refined, self-play based, neural ensemble system is much more capable of scaling and will be a great step forward in the field towards achieving a better conversational experience. Our work will popularize the idea of self-play and will eliminate some of the problems with current neural dialog models. We believe that applying self-play on the response ranker level is an under-researched idea, with which we could potentially train much better dialog agents than current ones. We believe that our combined approaches will mend the problems with learning through the cross-entropy loss function, and will create more diverse and interesting dialog models BIBREF18, BIBREF3. These include feeding classification and metric annotations to our neural models, optimizing models specifically for our metrics, and using self-play. From a user perspective, personalization in dialog agents is one of the most important aspects that current socialbots are lacking, which we want to make a great impact on through our user models.
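As a concrete, heavily simplified illustration of the self-play rollout idea from the Dialog Manager and Novelty sections: the sketch below lets the system feed its own responses back to itself for a few turns under the current ranker and re-ranks candidates by the reward of the conversation they lead to. Every function here is a hypothetical stand-in, and the toy diversity reward stands in for the weighted metric sum or crowdsourced signal discussed above.

```python
import random

# Hypothetical stand-ins for the real ensemble, ranker and reward.
def candidate_responses(dialog):
    return ["Do you like books?", "purr purr", "Tell me more about that."]

def ranker_score(dialog, response):
    return random.random()                   # placeholder for the learned neural ranker

def conversation_reward(dialog):
    return len(set(dialog)) / len(dialog)    # toy "diversity" reward

def rollout(dialog, turns=3):
    """Self-play: the system feeds its own responses back to itself for a few
    turns, greedily following the current ranker, then scores the conversation."""
    dialog = list(dialog)
    for _ in range(turns):
        cands = candidate_responses(dialog)
        dialog.append(max(cands, key=lambda r: ranker_score(dialog, r)))
    return conversation_reward(dialog)

def rerank_with_rollouts(dialog, candidates):
    """Prefer the candidate whose self-play continuation gets the highest reward."""
    return max(candidates, key=lambda c: rollout(dialog + [c]))

print(rerank_with_rollouts(["Hi there!"], ["I really liked Blade Runner.", "Shoot me."]))
```

Because each candidate triggers its own rollout, the technique multiplies the per-turn cost, which is why it has to be tuned to the latency budget.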
Unanswerable
Q: What is novel in author's approach?
They use self-play learning, optimize the model for specific metrics, train separate models per user, use model and response classification predictors, and filter the dataset to obtain higher quality training data.
Q: What dataset is used for train/test of this method? Text: Introduction Corresponding author email: [email protected]. Paper submitted to IEEE ICASSP 2020 Recent advances in TTS have improved the achievable synthetic speech naturalness to near human-like capabilities BIBREF0, BIBREF1, BIBREF2, BIBREF3. This means that for simple sentences, or for situations in which we can correctly predict the most appropriate prosodic representation, TTS systems are providing us with speech practically indistinguishable from that of humans. One aspect that most systems are still lacking is the natural variability of human speech, which is being observed as one of the reasons why the cognitive load of synthetic speech is higher than that of humans BIBREF4. This is something that variational models such as those based on Variational Auto-Encoding (VAE) BIBREF3, BIBREF5 attempt to solve by exploiting the sampling capabilities of the acoustic embedding space at inference time. Despite the advantages that VAE-based inference brings, it also suffers from the limitation that to synthesize a sample, one has to select an appropriate acoustic embedding for it, which can be challenging. A possible solution to this is to remove the selection process and consistently use a centroid to represent speech. This provides reliable acoustic representations but it suffers again from the monotonicity problem of conventional TTS. Another approach is to simply do a random sampling of the acoustic space. This would certainly solve the monotonicity problem if the acoustic embedding were varied enough. It can however, introduce erratic prosodic representations of longer texts, which can prove to be worse than being monotonous. Finally, one can consider text-based selection or prediction, as done in this research. In this work, we present a novel approach for informed embedding selection using linguistic features. The tight relationship between syntactic constituent structure and prosody is well known BIBREF6, BIBREF7. In the traditional Natural Language Processing (NLP) pipeline, constituency parsing produces full syntactic trees. More recent approaches based on Contextual Word Embedding (CWE) suggest that CWE are largely able to implicitly represent the classic NLP pipeline BIBREF8, while still retaining the ability to model lexical semantics BIBREF9. Thus, in this work we explore how TTS systems can enhance the quality of speech synthesis by using such linguistic features to guide the prosodic contour of generated speech. Similar relevant recent work exploring the advantages of exploiting syntactic information for TTS can be seen in BIBREF10, BIBREF11. While those studies, without any explicit acoustic pairing to the linguistic information, inject a number of curated features concatenated to the phonetic sequence as a way of informing the TTS system, the present study makes use of the linguistic information to drive the acoustic embedding selection rather than using it as an additional model features. An exploration of how to use linguistics as a way of predicting adequate acoustic embeddings can be seen in BIBREF12, where the authors explore the path of predicting an adequate embedding by informing the system with a set of linguistic and semantic information. 
The main difference of the present work is that in our case, rather than predicting a point in a high-dimensional space by making use of sparse input information (which is a challenging task and potentially vulnerable to training-domain dependencies), we use the linguistic information to predict the most similar embedding in our training set, reducing the complexity of the task significantly. The main contributions of this work are: i) we propose a novel approach of embedding selection in the acoustic space by using linguistic features; ii) we demonstrate that including syntactic information-driven acoustic embedding selection improves the overall speech quality, including its prosody; iii) we compare the improvements achieved by exploiting syntactic information in contrast with those brought by CWE; iv) we demonstrate that the approach improves the TTS quality in LFR experience as well. Proposed Systems CWE seem the obvious choice to drive embedding selection as they contain both syntactic and semantic information. However, a possible drawback of relying on CWE is that the linguistic-acoustic mapping space is sparse. The generalization capability of such systems in unseen scenarios will be poor BIBREF13. Also, as CWE models lexical semantics, it implies that two semantically similar sentences are likely to have similar CWE representations. This however does not necessarily correspond to a similarity in prosody, as the structure of the two sentences can be very different. We hypothesize that, in some scenarios, syntax will have better capability to generalize than semantics and that CWE have not been optimally exploited for driving prosody in speech synthesis. We explore these two hypotheses in our experiments. The objective of this work is to exploit sentence-level prosody variations available in the training dataset while synthesizing speech for the test sentence. The steps executed in this proposed approach are: (i) Generate suitable vector representations containing linguistic information for all the sentences in the train and test sets, (ii) Measure the similarity of the test sentence with each of the sentences in the train set. We do so by using cosine similarity between the vector representations as done in BIBREF14 to evaluate linguistic similarity, (iii) Choose the acoustic embedding of the train sentence which gives the highest similarity with the test sentence, (iv) Synthesize speech from VAE-based inference using this acoustic embedding Proposed Systems ::: Systems We experiment with three different systems for generating vector representations of the sentences, which allow us to explore the impact of both syntax and semantics on the overall quality of speech synthesis. The representations from the first system use syntactic information only, the second relies solely on CWE while the third uses a combination of CWE and explicit syntactic information. Proposed Systems ::: Systems ::: Syntactic Syntactic representations for sentences like constituency parse trees need to be transformed into vectors in order to be usable in neural TTS models. Some dimensions describing the tree can be transformed into word-based categorical feature like identity of parent and position of word in a phrase BIBREF15. The syntactic distance between adjacent words is known to be a prosodically relevant numerical source of information which is easily extracted from the constituency tree BIBREF16. 
It is explained by the fact that if many nodes must be traversed to find the first common ancestor, the syntactic distance between words is high. Large syntactic distances correlate with acoustically relevant events such as phrasing breaks or prosodic resets. To compute syntactic distance vector representations for sentences, we use the algorithm mentioned in BIBREF17. That is, for a sentence of n tokens, there are n corresponding distances which are concatenated together to give a vector of length n. The distance between the start of sentence and first token is always 0. We can see an example in Fig. 1: for the sentence “The brown fox is quick and it is jumping over the lazy dog", whose distance vector is d = [0 2 1 3 1 8 7 6 5 4 3 2 1]. The completion of the subject noun phrase (after `fox') triggers a prosodic reset, reflected in the distance of 3 between `fox' and `is'. There should also be a more emphasized reset at the end of the first clause, represented by the distance of 8 between `quick' and `and'. Proposed Systems ::: Systems ::: BERT To generate CWE we use BERT BIBREF18, as it is one of the best performing pre-trained models with state of the art results on a large number of NLP tasks. BERT has also shown to generate strong representations for both syntax and semantics. We use the word representations from the uncased base (12 layer) model without fine-tuning. The sentence level representations are achieved by averaging the second to last hidden layer for each token in the sentence. These embeddings are used to drive acoustic embedding selection. Proposed Systems ::: Systems ::: BERT Syntactic Even though BERT embeddings capture some aspects of syntactic information along with semantics, we decided to experiment with a system combining the information captured by both of the above mentioned systems. The information from syntactic distances and BERT embeddings cannot be combined at token level to give a single vector representation since both these systems use different tokenization algorithms. Tokenization in BERT is based on the wordpiece algorithm BIBREF19 as a way to eliminate the out-of-vocabulary issues. On the other hand, tokenization used to generate parse trees is based on morphological considerations rooted in linguistic theory. At inference time, we average the similarity scores obtained by comparing the BERT embeddings and the syntactic distance vectors. Proposed Systems ::: Applications to LFR The approaches described in Section SECREF1 produce utterances with more varied prosody as compared to the long-term monotonicity of those obtained via centroid-based VAE inference. However, when considering multi-sentence texts, we have to be mindful of the issues that can be introduced by erratic transitions. We tackle this issue by minimizing the acoustic variation a sentence can have with respect to the previous one, while still minimizing the linguistic distance. We consider the Euclidean distance between the 2D Principal Component Analysis (PCA) projected acoustic embeddings as a measure of acoustic variation, as we observe that the projected space provides us with an acoustically relevant space in which distances can be easily obtained. Doing the same in the 64-dimensional VAE space did not perform as intended, likely because of the non-linear manifold representing our system, in which distances are not linear. 
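To make the embedding selection procedure (steps i–iii above) concrete, the following is a rough sketch using BERT sentence representations — the average of the second-to-last hidden layer — and cosine similarity over the training pool. It is not the authors' implementation: the Hugging Face transformers package is assumed, and the training sentences and their 64-dimensional acoustic VAE embeddings are illustrative placeholders.

```python
# Sketch of linguistics-driven acoustic embedding selection: BERT sentence vectors
# (mean of the second-to-last hidden layer) plus cosine-similarity nearest neighbour.
# Training sentences and acoustic embeddings below are placeholders.
import numpy as np
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embedding(sentence):
    """Average the second-to-last hidden layer over all tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    second_to_last = outputs.hidden_states[-2]             # (1, seq_len, 768)
    return second_to_last.mean(dim=1).squeeze(0).numpy()   # (768,)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Hypothetical training pool: (sentence, 64-dim VAE acoustic embedding) pairs.
train_pool = [
    ("The president visited the flooded region on Monday.", np.random.randn(64)),
    ("Is the market going up or down this quarter?", np.random.randn(64)),
]
train_vectors = [sentence_embedding(s) for s, _ in train_pool]

def select_acoustic_embedding(test_sentence):
    """Return the acoustic embedding of the most linguistically similar train sentence."""
    test_vec = sentence_embedding(test_sentence)
    scores = [cosine(test_vec, v) for v in train_vectors]
    best = int(np.argmax(scores))
    return train_pool[best][1], scores[best]

embedding, score = select_acoustic_embedding("Will the economy grow or shrink next year?")
print("similarity of closest training sentence:", round(score, 3))
```

For the Syntactic system the same selection would compare syntactic distance vectors instead of BERT vectors, and for BERT Syntactic the two similarity scores are averaged at inference time, as described above.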
As a result, certain sentence may be linguistically the closest match in terms of syntactic distance or CWE, but it will still not be selected if its acoustic embedding is far apart from that of the previous sentence. We modify the similarity evaluation metric used for choosing the closest match from the train set by adding a weighted cost to account for acoustic variation. This approach focuses only on the sentence transitions within a paragraph rather than optimizing the entire acoustic embedding path. This is done as follows: (i) Define the weights for linguistic similarity and acoustic similarity. In this work, the two weights sum up to 1; (ii) The objective is to minimize the following loss considering the acoustic embedding chosen for the previous sentence in the paragraph: Loss = LSW * (1-LS) + (1-LSW) * D, where LSW = Linguistic Similarity Weight; LS = Linguistic Similarity between test and train sentence; D = Euclidean distance between the acoustic embedding of the train sentence and the acoustic embedding chosen for the previous sentence. We fix D=0 for the first sentence of every paragraph. Thus, this approach is more suitable for cases when the first sentence is generally the carrier sentence, i.e. one which uses a structural template. This is particularly the case for news stories such as the ones considered in this research. Distances observed between the chosen acoustic embeddings for a sample paragraph and the effect of varying weights are depicted in the matrices in Fig FIGREF7. They are symmetric matrices, where each row and column of the matrix represents the sentence at index i in a paragraph. Each cell represents the Euclidean distance between the acoustic embeddings chosen for sentences at index i,j. We can see that in (a) the sentence at index 4 stands out as the most acoustically dissimilar sentence from the rest of the sentences in the paragraph. We see that the overall acoustic distance between sentences in much higher in (a) than in (b). As we are particularly concerned with transitions from previous to current sentence, we focus on cells (i,i-1) for each row. In (a), sentences at index 4 and 5 particularly stand out as potential erratic transitions due to high values in cell (4,3) and (5,4). In (b) we observe that the distances have significantly reduced and thus sentence transitions are expected to be smooth. As LSW decreases, the transitions become smoother. This is not `free': there is a trade-off, as increasing the transition smoothness decreases the linguistic similarity which also reduces the prosodic divergence. Fig. FIGREF10 shows the trade-off between the two, across the test set, when using syntactic distance to evaluate LS. Low linguistic distance (i.e. 1 - LS) and low acoustic distance are required. The plot shows that there is a sharp decrease in acoustic distance between LSW of 1.0 and 0.9 but the reduction becomes slower from therein, while the changes in linguistic distance progress in a linear fashion. We informally evaluated the performance of the systems by reducing LSW from 1.0 till 0.7 with a step size of 0.05 in order to look for an optimal balance. At LSW=0.9, the first elbow on acoustic distance curve, there was a significant decrease in the perceived erraticness. As such, we chose those values for our LFR evaluations. Experimental Protocol The research questions we attempt to answer are: Can linguistics-driven selection of acoustic waveform from the existing dataset lead to improved prosody and naturalness when synthesizing speech ? 
How does syntactic selection compare with CWE selection? Does this approach improve LFR experience as well? To answer these questions, we used in our experiments the systems, data and subjective evaluations described below. Experimental Protocol ::: Text-to-Speech System The evaluated TTS system is a Tacotron-like system BIBREF20 already verified for the newscaster domain. A schematic description can be seen in Fig. FIGREF15 and a detailed explanation of the baseline system and the training data can be read in BIBREF21, BIBREF22. Conversion of the produced spectrograms to waveforms is done using the Universal WaveRNN-like model presented in BIBREF2. For this study, we consider an improved system that replaced the one-hot vector style modeling approach by a VAE-based reference encoder similar to BIBREF5, BIBREF3, in which the VAE embedding represents an acoustic encoding of a speech signal, allowing us to drive the prosodic representation of the synthesized text as observed in BIBREF23. The way of selecting the embedding at inference time is defined by the approaches introduced in Sections SECREF1 and SECREF6. The dimension of the embedding is set to 64 as it allows for the best convergence without collapsing the KLD loss during training. Experimental Protocol ::: Datasets Experimental Protocol ::: Datasets ::: Training Dataset (i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech. For a total of 24 hours of training data, split in 20 hours of neutral (22000 utterances) and 4 hours of newscaster styled speech (3000 utterances). (ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences. Experimental Protocol ::: Datasets ::: Evaluation Dataset The systems were evaluated on two datasets: (i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or" questions (9%), “wh" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24. (ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case. Experimental Protocol ::: Subjective evaluation Our tests are based on MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) BIBREF26, but without forcing a system to be rated as 100, and not always considering a top anchor. All of our listeners, regardless of linguistic knowledge were native US English speakers. For the CPE dataset, we carried out two tests. The first one with 10 linguistic experts as listeners, who were asked to rate the appropriateness of the prosody ignoring the speaking style on a scale from 0 (very inappropriate) to 100 (very appropriate). The second test was carried out on 10 crowd-sourced listeners who evaluated the naturalness of the speech from 0 to 100. In both tests each listener was asked to rate 28 different screens, with 4 randomly ordered samples per screen for a total of 112 samples. The 4 systems were the 3 proposed ones and the centroid-based VAE inference as the baseline. 
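As a brief illustration of the transition-aware selection described in the Applications to LFR section, the sketch below minimises Loss = LSW * (1 - LS) + (1 - LSW) * D over candidate train sentences for each sentence of a paragraph, with D fixed to 0 for the first sentence. The similarity scores and 2D PCA-projected acoustic embeddings are random placeholders, not values from the trained system.

```python
# Transition-aware embedding selection for long-form reading (LFR):
# Loss = LSW * (1 - LS) + (1 - LSW) * D, minimised per sentence of a paragraph.
import numpy as np

def select_for_paragraph(linguistic_sims, pca_embeddings, lsw=0.9):
    """linguistic_sims: per paragraph sentence, a vector of similarities to every
    candidate train sentence. pca_embeddings: (num_candidates, 2) PCA projections.
    Returns the chosen candidate index for each sentence."""
    chosen = []
    prev_embedding = None
    for sims in linguistic_sims:
        if prev_embedding is None:
            dists = np.zeros(len(sims))          # D = 0 for the first sentence
        else:
            dists = np.linalg.norm(pca_embeddings - prev_embedding, axis=1)
        loss = lsw * (1.0 - np.asarray(sims)) + (1.0 - lsw) * dists
        best = int(np.argmin(loss))
        chosen.append(best)
        prev_embedding = pca_embeddings[best]
    return chosen

# Toy example: a 3-sentence paragraph and 5 candidate train sentences.
rng = np.random.default_rng(0)
sims = [rng.uniform(0, 1, 5) for _ in range(3)]
pca = rng.normal(size=(5, 2))
print(select_for_paragraph(sims, pca, lsw=0.9))
```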
For the LFR dataset, we conducted only a crowd-sourced evaluation of naturalness, where the listeners were asked to assess the suitability of newscaster style on a scale from 0 (completely unsuitable) to 100 (completely adequate). Each listener was presented with 51 news stories, each playing one of the 5 systems including the original recordings as a top anchor, the centroid-based VAE as baseline and the 3 proposed linguistics-driven embedding selection systems. Results Table 1 reports the average MUSHRA scores, evaluating prosody and naturalness, for each of the test systems on the CPE dataset. These results answer Q1, as the proposed approach improves significantly over the baseline on both grounds. It thus, gives us evidence supporting our hypothesis that linguistics-driven acoustic embedding selection can significantly improve speech quality. We also observe that better prosody does not directly translate into improved naturalness and that there is a need to improve acoustic modeling in order to better reflect the prosodic improvements achieved. We validate the differences between MUSHRA scores using pairwise t-test. All proposed systems improved significantly over the baseline prosody (p$<$0.01). For naturalness, BERT syntactic performed the best, improving over the baseline significantly (p=0.04). Other systems did not give statistically significant improvement over the baseline (p$>$0.05). The difference between BERT and BERT Syntactic is also statistically insignificant. Q2 is explored in Table TABREF21, which gives the breakdown of prosody results by major categories in CPE. For `wh' questions, we observe that Syntactic alone brings an improvement of 4% and BERT Syntactic performs the best by improving 8% over the baseline. This suggests that `wh' questions generally share a closely related syntax structure and that information can be used to achieve better prosody. This intuition is further strengthened by the improvements observed for `or' questions. Syntactic alone improves by 9% over the baseline and BERT Syntactic performs the best by improving 21% over the baseline. The improvement observed in `or' questions is greater than `wh' questions as most `or' questions have a syntax structure unique to them and this is consistent across samples in the category. For both these categories, the systems Syntactic, BERT and BERT Syntactic show incremental improvement as the first system contains only syntactic information, the next captures some aspect of syntax with semantics and the third has enhanced the representation of syntax with CWE representation to drive selection. Thus, it is evident that the extent of syntactic information captured drives the quality in speech synthesis for these two categories. Compound nouns proved harder to improve upon as compared to questions. BERT performed the best in this category with a 1.2% improvement over the baseline. We can attribute this to the capability of BERT to capture context which Syntactic does not do. This plays a critical role in compound nouns, where to achieve suitable prosody it is imperative to understand in which context the nouns are being used. For other complex sentences as well, BERT performed the best by improving over the baseline by 6%. This can again be attributed to the fact that most of the complex sentences required contextual knowledge. Although Syntactic does improve over the baseline, syntax does not look like the driving factor as BERT Syntactic performs a bit worse than BERT. 
This indicates that enhancing syntax representation hinders BERT from fully leveraging the contextual knowledge it captured to drive embedding selection. Q3 is answered in Table TABREF22, which reports the MUSHRA scores on the LFR dataset. The Syntactic system performed the best with high statistical significance (p=0.02) in comparison to baseline. We close the gap between the baseline and the recordings by almost 20%. Other systems show statistically insignificant (p$>$0.05) improvements over the baseline. To achieve suitable prosody, LFR requires longer distance dependencies and knowledge of prosodic groups. Such information can be approximated more effectively by the Syntactic system rather than the CWE based systems. However, this is a topic for a potential future exploration as the difference between BERT and Syntactic is statistically insignificant (p=0.6). Conclusion The current VAE-based TTS systems are susceptible to monotonous speech generation due to the need to select a suitable acoustic embedding to synthesize a sample. In this work, we proposed to generate dynamic prosody from the same TTS systems by using linguistics to drive acoustic embedding selection. Our proposed approach is able to improve the overall speech quality including prosody and naturalness. We propose 3 techniques (Syntactic, BERT and BERT Syntactic) and evaluated their performance on 2 datasets: common prosodic errors and LFR. The Syntactic system was able to improve significantly over the baseline on almost all parameters (except for naturalness on CPE). Information captured by BERT further improved prosody in cases where contextual knowledge was required. For LFR, we bridged the gap between baseline and actual recording by 20%. This approach can be further extended by making the model aware of these features rather than using them to drive embedding selection.
Training datasets: TTS System dataset and embedding selection dataset. Evaluation datasets: Common Prosody Errors dataset and LFR dataset.
1f63ccc379f01ecdccaa02ed0912970610c84b72
1f63ccc379f01ecdccaa02ed0912970610c84b72_0
Q: How much is the gap between using the proposed objective and using only cross-entropy objective? Text: Introduction Existing state-of-the-art question answering models are trained to produce exact answer spans for a question and a document. In this setting, a ground truth answer used to supervise the model is defined as a start and an end position within the document. Existing training approaches optimize using cross entropy loss over the two positions. However, this suffers from a fundamental disconnect between the optimization, which is tied to the position of a particular ground truth answer span, and the evaluation, which is based on the textual content of the answer. This disconnect is especially harmful in cases where answers that are textually similar to, but distinct in positions from, the ground truth are penalized in the same fashion as answers that are textually dissimilar. For example, suppose we are given the sentence “Some believe that the Golden State Warriors team of 2017 is one of the greatest teams in NBA history”, the question “which team is considered to be one of the greatest teams in NBA history”, and a ground truth answer of “the Golden State Warriors team of 2017”. The span “Warriors” is also a correct answer, but from the perspective of traditional cross entropy based training it is no better than the span “history”. To address this problem, we propose a mixed objective that combines traditional cross entropy loss over positions with a measure of word overlap trained with reinforcement learning. We obtain the latter objective using self-critical policy learning in which the reward is based on word overlap between the proposed answer and the ground truth answer. Our mixed objective brings two benefits: (i) the reinforcement learning objective encourages answers that are textually similar to the ground truth answer and discourages those that are not; (ii) the cross entropy objective significantly facilitates policy learning by encouraging trajectories that are known to be correct. The resulting objective is one that is both faithful to the evaluation metric and converges quickly in practice. In addition to our mixed training objective, we extend the Dynamic Coattention Network (DCN) by BIBREF0 with a deep residual coattention encoder. This allows the network to build richer representations of the input by enabling each input sequence to attend to previous attention contexts. BIBREF1 show that the stacking of attention layers helps model long-range dependencies. We merge coattention outputs from each layer by means of residual connections to reduce the length of signal paths. BIBREF2 show that skip layer connections facilitate signal propagation and alleviate gradient degradation. The combination of the deep residual coattention encoder and the mixed objective leads to higher performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset () BIBREF3 compared to our DCN baseline. The improvement is especially apparent on long questions, which require the model to capture long-range dependencies between the document and the question. Our model, which we call , achieves state-of-the-art results on , with exact match accuracy and F1. When ensembled, the obtains exact match accuracy and F1. We consider the question answering task in which we are given a document and a question, and are asked to find the answer in the document. 
Our model is based on the DCN by BIBREF0 , which consists of a coattention encoder and a dynamic decoder. The encoder first encodes the question and the document separately, then builds a codependent representation through coattention. The decoder then produces a start and end point estimate given the coattention. The DCN decoder is dynamic in the sense that it iteratively estimates the start and end positions, stopping when estimates between iterations converge to the same positions or when a predefined maximum number of iterations is reached. We make two significant changes to the DCN by introducing a deep residual coattention encoder and a mixed training objective that combines cross entropy loss from maximum likelihood estimation and reinforcement learning rewards from self-critical policy learning. Deep residual coattention encoder Because it only has a single-layer coattention encoder, the DCN is limited in its ability to compose complex input representations. BIBREF1 proposed stacked self-attention modules to facilitate signal traversal. They also showed that the network's ability to model long-range dependencies can be improved by reducing the length of signal paths. We propose two modifications to the coattention encoder to leverage these findings. First, we extend the coattention encoder with self-attention by stacking coattention layers. This allows the network to build richer representations over the input. Second, we merge coattention outputs from each layer with residual connections. This reduces the length of signal paths. Our encoder is shown in Figure 1 . Suppose we are given a document of $$ words and a question of $$ words. Let $^D \in ^{\times }$ and $^Q \in ^{\times }$ respectively denote the word embeddings for the document and the question, where $$ is the dimension of the word embeddings. We obtain document encodings $_1^D$ and question encodings $_1^Q$ through a bidirectional Long Short-Term Memory Network (LSTM) BIBREF4 , where we use integer subscripts to denote the coattention layer number. $$_1^D &=& _1 \left( ^D \right) \in ^{\times (+1)} \\ _1^Q &=& \text{tanh} \left( W~\hspace{2.0pt}px_1 \left( ^Q \right) + b \right) \in ^{\times (+1)}$$ (Eq. 3) Here, $$ denotes the hidden state size and the $+1$ indicates the presence of an additional sentinel word which allows the coattention to not focus on any part of the input. Like the original DCN, we add a non-linear transform to the question encoding. We compute the affinity matrix between the document and the question as $= {\left( _1^Q \right)}^\intercal _1^D \in ^{(+1) \times (+1)}$ . Let ${X}$ denote the softmax operation over the matrix $X$ that normalizes $X$ column-wise. The document summary vectors and question summary vectors are computed as $$_1^D &=& _1^Q ~{^\intercal } \in ^{\times (+ 1)} \\ _1^Q &=& _1^D ~{} \in ^{\times (+ 1)}$$ (Eq. 4) We define the document coattention context as follows. Note that we drop the dimension corresponding to the sentinel vector – it has already been used during the summary computation and is not a potential position candidate for the decoder. $$_1^D &=& _1^Q ~{^\intercal } \in ^{\times }$$ (Eq. 5) We further encode the summaries using another bidirectional LSTM. $$_2^D &=& _2 \left( _1^D \right) \in ^{2 \times } \\ _2^Q &=& _2 \left( _1^Q \right) \in ^{2 \times }$$ (Eq. 6) Equation 4 to equation 5 describe a single coattention layer. We compute the second coattention layer in a similar fashion. 
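The following is a schematic numpy sketch of one such coattention layer — affinity matrix, column-wise softmax, summary vectors, and document coattention context. Shapes are simplified (no sentinel vector, no bidirectional LSTMs), and the transpose conventions are one self-consistent choice rather than the authors' exact formulation.

```python
# One coattention layer: affinity between question and document encodings,
# column-wise softmax, summaries, and the document coattention context.
import numpy as np

def softmax_columns(x):
    """Normalise each column of x to a probability distribution."""
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def coattention_layer(E_doc, E_q):
    """E_doc: (h, m) document encodings; E_q: (h, n) question encodings."""
    A = E_q.T @ E_doc                      # (n, m) affinity matrix
    S_doc = E_q @ softmax_columns(A)       # (h, m) document summaries
    S_q = E_doc @ softmax_columns(A.T)     # (h, n) question summaries
    C_doc = S_q @ softmax_columns(A)       # (h, m) document coattention context
    return S_doc, S_q, C_doc

h, m, n = 8, 12, 5                         # hidden size, doc length, question length
rng = np.random.default_rng(1)
E_doc, E_q = rng.normal(size=(h, m)), rng.normal(size=(h, n))
S_doc, S_q, C_doc = coattention_layer(E_doc, E_q)
print(S_doc.shape, S_q.shape, C_doc.shape)  # (8, 12) (8, 5) (8, 12)
```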
Namely, let $$ denote a multi-valued mapping whose inputs are the two input sequences $_1^D$ and $_1^Q$ . We have $$_1 \left( _1^D, _1^Q \right) &\rightarrow & _1^D, _1^Q, _1^D \\ _2 \left( _2^D, _2^Q \right) &\rightarrow & _2^D, _2^Q, _2^D$$ (Eq. 7) The output of our encoder is then obtained as $$U = \left( { _1^D; _2^D; _1^D; _2^D; _1^D; _2^D } \right) \in ^{2\times m}$$ (Eq. 8) where ${A, B}$ denotes the concatenation between the matrices $A$ and $B$ along the first dimension. This encoder is different than the original DCN in its depth and its use of residual connections. We use not only the output of the deep coattention network $_2^D$ as input to the final bidirectional LSTM, but add skip connections to initial encodings $_1^D$ , $_2^D$ , summary vectors $_1^D$ , $_2^D$ , and coattention context $_1^D$ . This is akin to transformer networks BIBREF1 , which achieved state-of-the-art results on machine translation using deep self-attention layers to help model long-range dependencies, and residual networks BIBREF2 , which achieved state-of-the-art results in image classification through the addition of skip layer connections to facilitate signal propagation and alleviate gradient degradation. Mixed objective using self-critical policy learning The DCN produces a distribution over the start position of the answer and a distribution over the end position of the answer. Let $s$ and $e$ denote the respective start and end points of the ground truth answer. Because the decoder of the DCN is dynamic, we denote the start and end distributions produced at the $t$ th decoding step by $_t \in ^{m}$ and $_t \in ^{m}$ . For convenience, we denote the greedy estimate of the start and end positions by the model at the $t$ th decoding step by $s_t$ and $e_t$ . Moreover, let $\Theta $ denote the parameters of the model. Similar to other question answering models, the DCN is supervised using the cross entropy loss on the start position distribution and the end position distribution: $$_{ce}(\Theta ) = - \sum _t \left( \log _t \left( s \mid s_{t-1}, e_{t-1} ; \Theta \right) + \log _t \left( e \mid s_{t-1}, e_{t-1} ; \Theta \right) \right)$$ (Eq. 10) Equation 10 states that the model accumulates a cross entropy loss over each position during each decoding step given previous estimates of the start and end positions. The question answering task consists of two evaluation metrics. The first, exact match, is a binary score that denotes whether the answer span produced by the model has exact string match with the ground truth answer span. The second, F1, computes the degree of word overlap between the answer span produced by the model and the ground truth answer span. We note that there is a disconnect between the cross entropy optimization objective and the evaluation metrics. For example, suppose we are given the answer estimates $A$ and $B$ , neither of which match the ground truth positions. However, $A$ has an exact string match with the ground truth answer whereas $B$ does not. The cross entropy objective penalizes $A$ and $B$ equally, despite the former being correct under both evaluation metrics. In the less extreme case where $A$ does not have exact match but has some degree of word overlap with the ground truth, the F1 metric still prefers $A$ over $B$ despite its wrongly predicted positions. We encode this preference using reinforcement learning, using the F1 score as the reward function. 
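To make the reward concrete, here is a simplified token-level F1 (without the article and punctuation normalisation of the official SQuAD evaluation script), applied to the example from the introduction.

```python
# Token-level F1 between a predicted answer span and the ground truth,
# i.e. the word-overlap measure used as the reward signal.
from collections import Counter

def f1_overlap(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

gold = "the Golden State Warriors team of 2017"
print(f1_overlap("Warriors", gold))   # 0.25 -- partial credit for word overlap
print(f1_overlap("history", gold))    # 0.0  -- cross entropy would treat both the same
```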
Let $\hat{s_t} \sim _t$ and $\hat{e_t} \sim _t$ denote the sampled start and end positions from the estimated distributions at decoding step $t$ . We define a trajectory $\hat{\tau }$ as a sequence of sampled start and end points $\hat{s_t}$ and $\hat{e_t}$ through all $T$ decoder time steps. The reinforcement learning objective is then the negative expected rewards $R$ over trajectories. $$_{rl}\left(\Theta \right) &=& - \mathbb {E}_{\hat{\tau } \sim p_{\tau }} \left[ R \left(s, e, \hat{s}_T, \hat{e}_T ; \Theta \right) \right] \\ &\approx & - \mathbb {E}_{\hat{\tau } \sim p_{\tau }} \left[ F_1 \left( {\hat{s}_T}{\hat{e}_T}, {s}{e} \right) - F_1 \left( {s_T}{e_T}, {s}{e} \right) \right]$$ (Eq. 11) We use $F_1$ to denote the F1 scoring function and ${s}{e}$ to denote the answer span retrieved using the start point $s$ and end point $e$ . In equation 11 , instead of using only the F1 word overlap as the reward, we subtract from it a baseline. BIBREF5 show that a good baseline reduces the variance of gradient estimates and facilitates convergence. In our case, we employ a self-critic BIBREF6 that uses the F1 score produced by the current model during greedy inference without teacher forcing. For ease of notation, we abbreviate $R \left(s, e, \hat{s}_T, \hat{e}_T ; \Theta \right)$ as $R$ . As per BIBREF7 and BIBREF8 , the expected gradient of a non-differentiable reward function can be computed as $$\nabla _\Theta _{rl}\left(\Theta \right) &=& - \nabla _\Theta \left( \mathbb {E}_{\hat{\tau } \sim p_{\tau }} \left[ R \right] \right) \\ &=& - \mathbb {E}_{\hat{\tau } \sim p_{\tau }} \left[ R \nabla _\Theta \log p_\tau \left( \tau ; \Theta \right) \right] \\ &=& - \mathbb {E}_{\hat{\tau } \sim p_{\tau }} \left[ R \nabla _\Theta \left( \sum _t^T \left( \log _t \left( \hat{s}_t \vert \hat{s}_{t-1}, \hat{e}_{t-1}; \Theta \right) + \log _t \left( \hat{e}_t \vert \hat{s}_{t-1}, \hat{e}_{t-1}; \Theta \right) \right) \right) \right] \nonumber \\ &\approx & - R \nabla _\Theta \left( \sum _t^T \left( \log _t \left( \hat{s}_t \vert \hat{s}_{t-1}, \hat{e}_{t-1}; \Theta \right) + \log _t \left( \hat{e}_t \vert \hat{s}_{t-1}, \hat{e}_{t-1}; \Theta \right) \right) \right)$$ (Eq. 12) In equation 12 , we approximate the expected gradient using a single Monte-Carlo sample $\tau $ drawn from $p_\tau $ . This sample trajectory $\tau $ contains the start and end positions $\hat{s}_t$ and $\hat{e}_t$ sampled during all decoding steps. One of the key problems in applying RL to natural language processing is the discontinuous and discrete space the agent must explore in order to find a good policy. For problems with large exploration space, RL approaches tend to be applied as fine-tuning steps after a maximum likelihood model has already been trained BIBREF9 , BIBREF10 . The resulting model is constrained in its exploration during fine-tuning because it is biased by heavy pretraining. We instead treat the optimization problem as a multi-task learning problem. The first task is to optimize for positional match with the ground truth answer using the the cross entropy objective. The second task is to optimize for word overlap with the ground truth answer with the self-critical reinforcement learning objective. In a similar fashion to BIBREF11 , we combine the two losses using homoscedastic uncertainty as task-dependent weightings. $$= \frac{1}{2 \sigma _{ce}^2} _{ce}\left(\Theta \right) + \frac{1}{2 \sigma _{rl}^2} _{rl}\left(\Theta \right) + \log \sigma _{ce}^2 + \log \sigma _{rl}^2$$ (Eq. 
13) Here, $\sigma _{ce}$ and $\sigma _{rl}$ are learned parameters. The gradient of the cross entropy objective can be derived using straight-forward backpropagation. The gradient of the self-critical reinforcement learning objective is shown in equation 12 . Figure 2 illustrates how the mixed objective is computed. In practice, we find that adding the cross entropy task significantly facilitates policy learning by pruning the space of candidate trajectories - without the former, it is very difficult for policy learning to converge due to the large space of potential answers, documents, and questions. Experiments We train and evaluate our model on the Stanford Question Answering Dataset (). We show our test performance of our model against other published models, and demonstrate the importance of our proposals via ablation studies on the development set. To preprocess the corpus, we use the reversible tokenizer from Stanford CoreNLP BIBREF12 . For word embeddings, we use GloVe embeddings pretrained on the 840B Common Crawl corpus BIBREF13 as well as character ngram embeddings by BIBREF14 . In addition, we concatenate these embeddings with context vectors (CoVe) trained on WMT BIBREF15 . For out of vocabulary words, we set the embeddings and context vectors to zero. We perform word dropout on the document which zeros a word embedding with probability 0.075. In addition, we swap the first maxout layer of the highway maxout network in the DCN decoder with a sparse mixture of experts layer BIBREF16 . This layer is similar to the maxout layer, except instead of taking the top scoring expert, we take the top $k = 2$ expert. The model is trained using ADAM BIBREF17 with default hyperparameters. Hyperparameters of our model are identical to the DCN. We implement our model using PyTorch. Results The performance of our model is shown in Table 1 . Our model achieves state-of-the-art results on dataset with exact match accuracy and F1. When ensembled, our model obtains exact match accuracy and F1. To illustrate the effectiveness of our proposals, we use the DCN with context vectors as a baseline BIBREF15 . This model is identical to the DCN by BIBREF0 , except that it augments the word representations with context vectors trained on WMT16. outperforms the baseline by $$ exact match accuracy and $$ F1 on the development set. Figure 3 shows the consistent performance gain of over the baseline across question types, question lengths, and answer lengths. In particular, provides a significant advantage for long questions. The contributions of each part of our model are shown in Table 2 . We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance. The training curves for with reinforcement learning and without reinforcement learning are shown in Figure 4 to illustrate the effectiveness of our proposed mixed objective. In particular, we note that without mixing in the cross entropy loss, it is extremely difficult to learn the policy. When we combine the cross entropy loss with the reinforcement learning objective, we find that the model initially performs worse early on as it begins policy learning from scratch (shown in Figure 4 ). However, with the addition of cross entropy loss, the model quickly learns a reasonable policy and subsequently outperforms the purely cross entropy model (shown in Figure 4 ). 
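A hedged sketch of how these two pieces fit together: the self-critical reward (F1 of the sampled span minus F1 of the greedy span) scales the log-probabilities of the sampled trajectory, and the cross-entropy and reinforcement learning losses are combined with learned homoscedastic uncertainty weights (parameterised via log sigma-squared for numerical stability). The loss values and probabilities below are placeholders, not model outputs.

```python
# Mixed objective sketch: self-critical policy gradient loss plus homoscedastic
# uncertainty weighting of the cross-entropy and RL terms.
import torch

# Learned task-uncertainty parameters (log sigma^2).
log_var_ce = torch.zeros((), requires_grad=True)
log_var_rl = torch.zeros((), requires_grad=True)

def self_critical_loss(logp_sampled, f1_sampled, f1_greedy):
    """REINFORCE loss with the greedy rollout as baseline (no gradient through reward)."""
    reward = f1_sampled - f1_greedy
    return -reward * logp_sampled.sum()

def mixed_objective(loss_ce, loss_rl):
    """Combine the two losses with homoscedastic uncertainty weighting:
    0.5 * exp(-s) * L + s for each task, where s = log sigma^2."""
    return (0.5 * torch.exp(-log_var_ce) * loss_ce + log_var_ce
            + 0.5 * torch.exp(-log_var_rl) * loss_rl + log_var_rl)

# Toy example with made-up quantities; in the real model the sampled log-probs
# come from the decoder and carry gradients.
logp_sampled = torch.log(torch.tensor([0.4, 0.6]))   # log-probs of sampled start/end
loss_rl = self_critical_loss(logp_sampled, f1_sampled=0.25, f1_greedy=0.0)
loss_ce = torch.tensor(1.3)                          # placeholder cross-entropy loss
total = mixed_objective(loss_ce, loss_rl)
total.backward()
print(float(total), float(log_var_ce.grad), float(log_var_rl.grad))
```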
Figure 5 compares predictions by and by the baseline on the development set. Both models retrieve answers that have sensible entity types. For example, the second example asks for “what game” and both models retrieve an American football game; the third example asks for “type of Turing machine” and both models retrieve a type of turing machine. We find, however, that consistently make less mistakes on finding the correct entity. This is especially apparent in the examples we show, which contain several entities or candidate answers of the correct type. In the first example, Gasquet wrote about the plague and called it “Great Pestilence”. While he likely did think of the plague as a “great pestilence”, the phrase “suggested that it would appear to be some form of ordinary Eastern or bubonic plague” provides evidence for the correct answer – “some form of ordinary Eastern or bubonic plague”. Similarly, the second example states that Thomas Davis was injured in the “NFC Championship Game”, but the game he insisted on playing in is the “Super Bowl”. Finally, “multi-tape” and “single-tape” both appear in the sentence that provides provenance for the answer to the question. However, it is the “single-tape” Turing machine that implies quadratic time. In these examples, finds the correct entity out of ones that have the right type whereas the baseline does not. Related work Conclusion We introduced , an state-of-the-art question answering model with deep residual coattention trained using a mixed objective that combines cross entropy supervision with self-critical policy learning. We showed that our proposals improve model performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset ( ). On , the achieves exact match accuracy and F1. When ensembled, the obtains exact match accuracy and F1.
The mixed objective improves EM by 2.5% and F1 by 2.2%
736c74d2f61ac8d3ac31c45c6510a36c767a5d6d
736c74d2f61ac8d3ac31c45c6510a36c767a5d6d_0
Q: What is the multi-instance learning? Text: Introduction A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and paired texts BIBREF0 , BIBREF1 , BIBREF2 . These correspondences describe how data representations are expressed in natural language (content realisation) but also indicate which subset of the data is verbalised in the text (content selection). Although content selection is traditionally performed by domain experts, recent advances in generation using neural networks BIBREF3 , BIBREF4 have led to the use of large scale datasets containing loosely related data and text pairs. A prime example are online data sources like DBPedia BIBREF5 and Wikipedia and their associated texts which are often independently edited. Another example are sports databases and related textual resources. Wiseman et al. wiseman-shieber-rush:2017:EMNLP2017 recently define a generation task relating statistics of basketball games with commentaries and a blog written by fans. In this paper, we focus on short text generation from such loosely aligned data-text resources. We work with the biographical subset of the DBPedia and Wikipedia resources where the data corresponds to DBPedia facts and texts are Wikipedia abstracts about people. Figure 1 shows an example for the film-maker Robert Flaherty, the Wikipedia infobox, and the corresponding abstract. We wish to bootstrap a data-to-text generator that learns to verbalise properties about an entity from a loosely related example text. Given the set of properties in Figure ( 1 a) and the related text in Figure ( 1 b), we want to learn verbalisations for those properties that are mentioned in the text and produce a short description like the one in Figure ( 1 c). In common with previous work BIBREF6 , BIBREF7 , BIBREF8 our model draws on insights from neural machine translation BIBREF3 , BIBREF9 using an encoder-decoder architecture as its backbone. BIBREF7 introduce the task of generating biographies from Wikipedia data, however they focus on single sentence generation. We generalize the task to multi-sentence text, and highlight the limitations of the standard attention mechanism which is often used as a proxy for content selection. When exposed to sub-sequences that do not correspond to any facts in the input, the soft attention mechanism will still try to justify the sequence and somehow distribute the attention weights over the input representation BIBREF10 . The decoder will still memorise high frequency sub-sequences in spite of these not being supported by any facts in the input. We propose to alleviate these shortcomings via a specific content selection mechanism based on multi-instance learning (MIL; BIBREF11 , BIBREF11 ) which automatically discovers correspondences, namely alignments, between data and text pairs. These alignments are then used to modify the generation function during training. We experiment with two frameworks that allow to incorporate alignment information, namely multi-task learning (MTL; BIBREF12 , BIBREF12 ) and reinforcement learning (RL; BIBREF13 , BIBREF13 ). In both cases we define novel objective functions using the learnt alignments. Experimental results using automatic and human-based evaluation show that models trained with content-specific objectives improve upon vanilla encoder-decoder architectures which rely solely on soft attention. The remainder of this paper is organised as follows. 
We discuss related work in Section "Related Work" and describe the MIL-based content selection approach in Section "Bidirectional Content Selection" . We explain how the generator is trained in Section "Generator Training" and present evaluation experiments in Section "Experimental Setup" . Section "Conclusions" concludes the paper. Related Work Previous attempts to exploit loosely aligned data and text corpora have mostly focused on extracting verbalisation spans for data units. Most approaches work in two stages: initially, data units are aligned with sentences from related corpora using some heuristics and subsequently extra content is discarded in order to retain only text spans verbalising the data. belz2010extracting obtain verbalisation spans using a measure of strength of association between data units and words, walter2013corpus extract textual patterns from paths in dependency trees while mrabet:webnlg16 rely on crowd-sourcing. Perez-Beltrachini and Gardent perezbeltrachini-gardent:2016:*SEM learn shared representations for data units and sentences reduced to subject-predicate-object triples with the aim of extracting verbalisations for knowledge base properties. Our work takes a step further, we not only induce data-to-text alignments but also learn generators that produce short texts verbalising a set of facts. Our work is closest to recent neural network models which learn generators from independently edited data and text resources. Most previous work BIBREF7 , BIBREF14 , BIBREF15 , BIBREF16 targets the generation of single sentence biographies from Wikipedia infoboxes, while wiseman-shieber-rush:2017:EMNLP2017 generate game summary documents from a database of basketball games where the input is always the same set of table fields. In contrast, in our scenario, the input data varies from one entity (e.g., athlete) to another (e.g., scientist) and properties might be present or not due to data incompleteness. Moreover, our generator is enhanced with a content selection mechanism based on multi-instance learning. MIL-based techniques have been previously applied to a variety of problems including image retrieval BIBREF17 , BIBREF18 , object detection BIBREF19 , BIBREF20 , text classification BIBREF21 , image captioning BIBREF22 , BIBREF23 , paraphrase detection BIBREF24 , and information extraction BIBREF25 . The application of MIL to content selection is novel to our knowledge. We show how to incorporate content selection into encoder-decoder architectures following training regimes based on multi-task learning and reinforcement learning. Multi-task learning aims to improve a main task by incorporating joint learning of one or more related auxiliary tasks. It has been applied with success to a variety of sequence-prediction tasks focusing mostly on morphosyntax. Examples include chunking, tagging BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , name error detection BIBREF30 , and machine translation BIBREF31 . Reinforcement learning BIBREF13 has also seen popularity as a means of training neural networks to directly optimize a task-specific metric BIBREF4 or to inject task-specific knowledge BIBREF32 . We are not aware of any work that compares the two training methods directly. Furthermore, our reinforcement learning-based algorithm differs from previous text generation approaches BIBREF4 , BIBREF32 in that it is applied to documents rather than individual sentences. 
Bidirectional Content Selection We consider loosely coupled data and text pairs where the data component is a set ${{\cal P}}$ of property-values $\lbrace p_1:v_1, \cdots , p_{|{{\cal P}}|}:v_{|{{\cal P}}|}\rbrace $ and the related text ${{\cal T}}$ is a sequence of sentences $(s_1, \cdots , s_{|{\cal T}|})$ . We define a mention span $\tau $ as a (possibly discontinuous) subsequence of ${{\cal T}}$ containing one or several words that verbalise one or more property-value from ${{\cal P}}$ . For instance, in Figure 1 , the mention span “married to Frances H. Flaherty” verbalises the property-value $\lbrace Spouse(s) : Frances \; Johnson \; Hubbard\rbrace $ . In traditional supervised data to text generation tasks, data units (e.g., $p_i:v_i$ in our particular setting) are either covered by some mention span $\tau _j$ or do not have any mention span at all in ${{\cal T}}$ . The latter is a case of content selection where the generator will learn which properties to ignore when generating text from such data. In this work, we consider text components which are independently edited, and will unavoidably contain unaligned spans, i.e., text segments which do not correspond to any property-value in ${{\cal P}}$ . The phrase “from 1914” in the text in Figure ( 1 b) is such an example. Similarly, the last sentence, talks about Frances' awards and nominations and this information is not supported by the properties either. Our model checks content in both directions; it identifies which properties have a corresponding text span (data selection) and also foregrounds (un)aligned text spans (text selection). This knowledge is then used to discourage the generator from producing text not supported by facts in the property set ${{\cal P}}$ . We view a property set ${{\cal P}}$ and its loosely coupled text ${{\cal T}}$ as a coarse level, imperfect alignment. From this alignment signal, we want to discover a set of finer grained alignments indicating which mention spans in ${{\cal T}}$ align to which properties in ${{\cal P}}$ . For each pair $({\cal P}, {\cal T})$ , we learn an alignment set ${\cal A}({\cal P}, {\cal T})$ which contains property-value word pairs. For example, for the properties $spouse$ and $died$ in Figure 1 , we would like to derive the alignments in Table 1 . We formulate the task of discovering finer-grained word alignments as a multi-instance learning problem BIBREF11 . We assume that words from the text are positive labels for some property-values but we do not know which ones. For each data-text pair $({{\cal P}}, {{\cal T}})$ , we derive $|{\cal T}|$ pairs of the form $({{\cal P}},s)$ where $|{\cal T}|$ is the number of sentences in ${\cal T}$ . We encode property sets ${{\cal P}}$ and sentences $s$ into a common multi-modal $h$ -dimensional embedding space. While doing this, we discover finer grained alignments between words and property-values. The intuition is that by learning a high similarity score for a property set ${{\cal P}}$ and sentence pair $s$ , we will also learn the contribution of individual elements (i.e., words and property-values) to the overall similarity score. We will then use this individual contribution as a measure of word and property-value alignment. More concretely, we assume the pair is aligned (or unaligned) if this individual score is above (or below) a given threshold. Across examples like the one shown in Figure ( 1 a-b), we expect the model to learn an alignment between the text span “married to Frances H. 
Flaherty” and the property-value $|{\cal T}|$0 . In what follows we describe how we encode $({{\cal P}}, s)$ pairs and define the similarity function. Generator Training In this section we describe the base generation architecture and explain two alternative ways of using the alignments to guide the training of the model. One approach follows multi-task training where the generator learns to output a sequence of words but also to predict alignment labels for each word. The second approach relies on reinforcement learning for adjusting the probability distribution of word sequences learnt by a standard word prediction training algorithm. Encoder-Decoder Base Generator We follow a standard attention based encoder-decoder architecture for our generator BIBREF3 , BIBREF33 . Given a set of properties $X$ as input, the model learns to predict an output word sequence $Y$ which is a verbalisation of (part of) the input. More precisely, the generation of sequence $Y$ is conditioned on input $X$ : $$P(Y|X) = \prod _{t=1}^{|Y|} P(y_t|y_{1:t-1}, X)$$ (Eq. 12) The encoder module constitutes an intermediate representation of the input. For this, we use the property-set encoder described in Section "Bidirectional Content Selection" which outputs vector representations $\lbrace \mathbf {p}_1, \cdots , \mathbf {p}_{|X|} \rbrace $ for a set of property-value pairs. The decoder uses an LSTM and a soft attention mechanism BIBREF33 to generate one word $y_t$ at a time conditioned on the previous output words and a context vector $c_t$ dynamically created: $$P(y_{t+1}|y_{1:t},X) = softmax(g(\mathbf {h}_t, c_t))$$ (Eq. 13) where $g(\cdot )$ is a neural network with one hidden layer parametrised by $\mathbf {W}_o \in \mathbb {R}^{|V| \times d}$ , $|V|$ is the output vocabulary size and $d$ the hidden unit dimension, over $\mathbf {h}_t$ and $c_t$ composed as follows: $$g(\mathbf {h}_t, c_t) = \mathbf {W}_o \; tanh(\mathbf {W}_c [ c_t ; \mathbf {h}_t ] )$$ (Eq. 14) where $\mathbf {W}_c \in \mathbb {R}^{d \times 2d}$ . $\mathbf {h}_t$ is the hidden state of the LSTM decoder which summarises $y_{1:t}$ : $$\mathbf {h}_t = \text{LSTM}(y_t, \mathbf {h}_{t-1})$$ (Eq. 15) The dynamic context vector $c_t$ is the weighted sum of the hidden states of the input property set (Equation ( 16 )); and the weights $\alpha _{ti}$ are determined by a dot product attention mechanism: $$c_t = \sum _{i=1}^{|X|}\alpha _{ti} \, \mathbf {p}_i$$ (Eq. 16) $$\alpha _{ti} = {\text{exp}(\mathbf {h}_{t} \, \mathchoice{\mathbin {\hbox{\scalebox {.5}{$\m@th \displaystyle \bullet $}}}}{}{}{}}{\mathbin {\hbox{\scalebox {.5}{$\m@th \textstyle \bullet $}}}}$$ (Eq. 17) pi)i exp(ht pi ) We initialise the decoder with the averaged sum of the encoded input representations BIBREF34 . The model is trained to optimize negative log likelihood: $${\cal L}_{wNLL} = - \sum _{t=1}^{|Y|} log \, P(y_t|y_{1:t-1}, X)$$ (Eq. 18) We extend this architecture to multi-sentence texts in a way similar to wiseman-shieber-rush:2017:EMNLP2017. We view the abstract as a single sequence, i.e., all sentences are concatenated. When training, we cut the abstracts in blocks of equal size and perform forward backward iterations for each block (this includes the back-propagation through the encoder). From one block iteration to the next, we initialise the decoder with the last state of the previous block. The block size is a hyperparameter tuned experimentally on the development set. 
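A schematic numpy sketch of one decoding step as described above — dot-product attention over the encoded property set, the context vector as a weighted sum of property encodings, and the output distribution g(h_t, c_t) = softmax(W_o tanh(W_c [c_t ; h_t])). All parameters are random placeholders rather than trained weights.

```python
# One decoding step of the attention-based property-set-to-text generator.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(h_t, P, W_c, W_o):
    """h_t: (d,) decoder state; P: (num_props, d) encoded property-value pairs."""
    scores = P @ h_t                     # dot-product attention scores
    alpha = softmax(scores)              # attention weights over properties
    c_t = alpha @ P                      # (d,) context vector
    hidden = np.tanh(W_c @ np.concatenate([c_t, h_t]))   # (d,)
    return softmax(W_o @ hidden), alpha  # distribution over the output vocabulary

d, num_props, vocab = 16, 6, 100
rng = np.random.default_rng(2)
h_t = rng.normal(size=d)
P = rng.normal(size=(num_props, d))
W_c = rng.normal(size=(d, 2 * d))
W_o = rng.normal(size=(vocab, d))
word_dist, alpha = decode_step(h_t, P, W_c, W_o)
print(word_dist.shape, alpha.round(2))
```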
Predicting Alignment Labels The generation of the output sequence is conditioned on the previous words and the input. However, when certain sequences are very common, the language modelling conditional probability will prevail over the input conditioning. For instance, the phrase from 1914 in our running example is very common in contexts that talk about periods of marriage or club membership, and as a result, the language model will output this phrase often, even in cases where there are no supporting facts in the input. The intuition behind multi-task training BIBREF12 is that it will smooth the probabilities of frequent sequences when trying to simultaneously predict alignment labels. Using the set of alignments obtained by our content selection model, we associate each word in the training data with a binary label $a_t$ indicating whether it aligns with some property in the input set. Our auxiliary task is to predict $a_t$ given the sequence of previously predicted words and input $X$ : $$P(a_{t+1}|y_{1:t},X) = sigmoid(g^{\prime }(\mathbf {h}_t, c_t))$$ (Eq. 20) $$g^{\prime }(\mathbf {h}_t, c_t) = \mathbf {v}_a \, \mathchoice{\mathbin {\hbox{\scalebox {.5}{$\m@th \displaystyle \bullet $}}}}{}{}{}$$ (Eq. 21) tanh(Wc [ ct ; ht ] ) where $\mathbf {v}_a \in \mathbb {R}^{d}$ and the other operands are as defined in Equation ( 14 ). We optimise the following auxiliary objective function: $${\cal L}_{aln} = - \sum _{t=1}^{|Y|} log \, P(a_t|y_{1:t-1}, X)$$ (Eq. 22) and the combined multi-task objective is the weighted sum of both word prediction and alignment prediction losses: $${\cal L}_{MTL} = \lambda \, {\cal L}_{wNLL} + (1 - \lambda ) \, {\cal L}_{aln}$$ (Eq. 23) where $\lambda $ controls how much model training will focus on each task. As we will explain in Section "Experimental Setup" , we can anneal this value during training in favour of one objective or the other. Reinforcement Learning Training Although the multi-task approach aims to smooth the target distribution, the training process is still driven by the imperfect target text. In other words, at each time step $t$ the algorithm feeds the previous word $w_{t-1}$ of the target text and evaluates the prediction against the target $w_t$ . Alternatively, we propose a training approach based on reinforcement learning ( BIBREF13 ) which allows us to define an objective function that does not fully rely on the target text but rather on a revised version of it. In our case, the set of alignments obtained by our content selection model provides a revision for the target text. The advantages of reinforcement learning are twofold: (a) it allows to exploit additional task-specific knowledge BIBREF32 during training, and (b) enables the exploration of other word sequences through sampling. Our setting differs from previous applications of RL BIBREF4 , BIBREF32 in that the reward function is not computed on the target text but rather on its alignments with the input. The encoder-decoder model is viewed as an agent whose action space is defined by the set of words in the target vocabulary. At each time step, the encoder-decoder takes action $\hat{y}_t$ with policy $P_{\pi }(\hat{y}_t|\hat{y}_{1:t-1}, X)$ defined by the probability in Equation ( 13 ). The agent terminates when it emits the End Of Sequence (EOS) token, at which point the sequence of all actions taken yields the output sequence $\hat{Y}=(\hat{y}_1, \cdots , \hat{y}_{|\hat{Y}|})$ . This sequence in our task is a short text describing the properties of a given entity. 
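Before continuing with the reward definition, a minimal sketch of the multi-task combination just described, L_MTL = lambda * L_wNLL + (1 - lambda) * L_aln, where the binary alignment labels come from the MIL-based content selection. Inputs are toy tensors, not model outputs.

```python
# Multi-task objective: weighted sum of the word-prediction NLL and the binary
# alignment-label prediction loss (labels derived from the MIL alignments).
import torch
import torch.nn.functional as F

def multi_task_loss(word_logits, target_words, align_logits, align_labels, lam=0.7):
    loss_wnll = F.cross_entropy(word_logits, target_words)        # word prediction
    loss_aln = F.binary_cross_entropy_with_logits(align_logits,   # alignment labels
                                                  align_labels)
    return lam * loss_wnll + (1.0 - lam) * loss_aln

vocab, seq_len = 50, 4
word_logits = torch.randn(seq_len, vocab)                # one row per output token
target_words = torch.randint(0, vocab, (seq_len,))
align_logits = torch.randn(seq_len)
align_labels = torch.tensor([1.0, 1.0, 0.0, 1.0])        # 1 = word aligned to a property
print(float(multi_task_loss(word_logits, target_words, align_logits, align_labels)))
```

As noted above, the weight lambda can be annealed during training to shift the emphasis between the two tasks.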
After producing the sequence of actions $\hat{Y}$ , the agent receives a reward $r(\hat{Y})$ and the policy is updated according to this reward. We define the reward function $r(\hat{Y})$ on the alignment set ${\cal A}(X,Y)$ . If the output action sequence $\hat{Y}$ is precise with respect to the set of alignments ${\cal A}(X,Y)$ , the agent will receive a high reward. Concretely, we define $r(\hat{Y})$ as follows: $$r(\hat{Y}) = \gamma ^{pr} \, r^{pr}(\hat{Y})$$ (Eq. 26) where $\gamma ^{pr}$ adjusts the reward value $r^{pr}$ which is the unigram precision of the predicted sequence $\hat{Y}$ and the set of words in ${\cal A}(X,Y)$ . We use the REINFORCE algorithm BIBREF13 to learn an agent that maximises the reward function. As this is a gradient descent method, the training loss of a sequence is defined as the negative expected reward: $${\cal L}_{RL} = -\mathbb {E}_{(\hat{y}_1, \cdots , \hat{y}_{|\hat{Y}|})} \sim P_\pi (\text{·}|X)[r(\hat{y}_1, \cdots , \hat{y}_{|\hat{Y}|})] \nonumber $$ (Eq. 28) where $P_\pi $ is the agent's policy, i.e., the word distribution produced by the encoder-decoder model (Equation ( 13 )) and $r(\text{·})$ is the reward function as defined in Equation ( 26 ). The gradient of ${\cal L}_{RL}$ is given by: $$\nabla {\cal L}_{RL} \approx \sum ^{|\hat{Y}|}_{t=1}\nabla \, \text{log} \, P_{\pi }(\hat{y}_t|\hat{y}_{1:t-1}, X)[r(\hat{y}_{1:|\hat{Y}|})-b_t] \nonumber $$ (Eq. 29) where $b_t$ is a baseline linear regression model used to reduce the variance of the gradients during training. $b_t$ predicts the future reward and is trained by minimizing mean squared error. The input to this predictor is the agent hidden state $\mathbf {h}_t$ , however we do not back-propagate the error to $\mathbf {h}_t$ . We refer the interested reader to BIBREF13 and BIBREF4 for more details. Rather than starting from a state given by a random policy, we initialise the agent with a policy learnt by pre-training with the negative log-likelihood objective BIBREF4 , BIBREF32 . The reinforcement learning objective is applied gradually in combination with the log-likelihood objective on each target block subsequence. Recall from Section "Encoder-Decoder Base Generator" that our document is segmented into blocks of equal size during training which we denote as MaxBlock. When training begins, only the last $\mho $ tokens are predicted by the agent while for the first $(\text{{\small \textsc {MaxBlock}}}-\mho )$ we still use the negative log-likelihood objective. The number of tokens $\mho $ predicted by the agent is incremented by $\mho $ units every 2 epochs. We set $\mho =3$ and the training ends when $(\text{{\small \textsc {MaxBlock}}}-\mho )=0$ . Since we evaluate the model's predictions at the block level, the reward function is also evaluated at the block level. Results We compared the performance of an encoder-decoder model trained with the standard negative log-likelihood method (ED), against a model trained with multi-task learning (ED $_{\mathrm {MTL}}$ ) and reinforcement learning (ED $_{\mathrm {RL}}$ ). We also included a template baseline system (Templ) in our evaluation experiments. The template generator used hand-written rules to realise property-value pairs. As an approximation for content selection, we obtained the 50 more frequent property names from the training set and manually defined content ordering rules with the following criteria. 
We ordered personal life properties (e.g., $birth\_date$ or $occupation$ ) based on their most common order of mention in the Wikipedia abstracts. Profession dependent properties (e.g., $position$ or $genre$ ), were assigned an equal ordering but posterior to the personal properties. We manually lexicalised properties into single sentence templates to be concatenated to produce the final text. The template for the property $position$ and example verbalisation for the property-value ${position : defender}$ of the entity zanetti are “ $[$ NAME $]$ played as $[$ POSITION $]$ .” and “ Zanetti played as defender.” respectively. Conclusions In this paper we focused on the task of bootstrapping generators from large-scale datasets consisting of DBPedia facts and related Wikipedia biography abstracts. We proposed to equip standard encoder-decoder models with an additional content selection mechanism based on multi-instance learning and developed two training regimes, one based on multi-task learning and the other on reinforcement learning. Overall, we find that the proposed content selection mechanism improves the accuracy and fluency of the generated texts. In the future, it would be interesting to investigate a more sophisticated representation of the input BIBREF34 . It would also make sense for the model to decode hierarchically, taking sequences of words and sentences into account BIBREF41 , BIBREF42 . Acknowledgments We thank the NAACL reviewers for their constructive feedback. We also thank Xingxing Zhang, Li Dong and Stefanos Angelidis for useful discussions about implementation details. We gratefully acknowledge the financial support of the European Research Council (award number 681760).
Unanswerable
b2254f9dd0e416ee37b577cef75ffa36cbcb8293
b2254f9dd0e416ee37b577cef75ffa36cbcb8293_0
Q: How many domains of ontologies do they gather data from? Text: Introduction Within the field of ontology engineering, Competency Questions (CQs) BIBREF0 are natural language questions outlining the scope of knowledge represented by an ontology. They represent functional requirements in the sense that the developed ontology or an ontology-based information system should be able to answer them; hence contain all the relevant knowledge. For example, a CQ may be What are the implementations of C4.5 algorithm?, indicating that the ontology needs to contain classes, such as Algorithm and C4.5 as subclass of Algorithm, and something about implementations such that the answer to the CQ will be non-empty. CQs are a part of several ontology engineering methodologies, yet the actual publication of CQs for the available ontologies is rather scarce. Even more scarce is the publication of the CQs' respective formalisation in terms of, e.g., SPARQL queries. This suggests CQs are not used widely as intended. We hypothezise that it may be due to the lack of common practices, templates, automation, and user tools that would support CQ formulation, formalisation, execution, and general management; or: it is still a fully manual process. For instance, even if one has specified CQs, there is no automatic way to translate it to, say, a SPARQL-OWL BIBREF1 query (for validation and verification), and not even a systematic manual way either. There have been few attempts to analyse CQs. Ren et al. BIBREF2 analysed CQs and their patterns to determine CQ archetypes, as tried BIBREF3 . Those patterns have a limited coverage, however, for they are based on CQ sets of at most two ontologies (Pizza and Software), which thus may contain domain bias, CQ author bias, and `prejudiced' patterns as the Pizza CQs were created after the ontology. As simple example of the latter issue, one could create a CQ Which pizza has hot as spiciness? that neatly fits with Pizza's hasSpiciness data property, or a more natural phrase Which pizzas are hot? that is fully agnostic of how it is represented in the ontology, be it with a data property, object property, or a class. More generally, it suggests that Ren et al.'s CQ patterns, formulated alike “Which [CE1] [OPE] [CE2]?”, may not be appropriate as CQ pattern, as it presupposes which kind of element it would be in an ontology. The manual process and `free form' formulation of CQs by domain experts also runs onto problems that some turn out not translatable into a test over the ontology for various reasons. For instance, the CQ How can I get problems [with X] fixed? of the Software Ontology cannot be answered by a declarative specification that the ontology is, or take the CQ for the DMOP ontology BIBREF4 : Given a data mining task/data set, which of the valid or applicable workflows/algorithms will yield optimal results (or at least better results than the others)?: assuming that the question may deal with an arbitrary (not pre-defined upfront) dataset, this CQ may only be answered via performing data mining experiments and not by the ontology itself. Therefore, without a clear guidelines of what kind of CQs may be meaningfully expressed and used as requirement specification for an ontology's content, their uptake and usage likely will remain limited. This paper aims to contribute to addressing the engineering shortcomings of using CQs in ontology development. 
To clear up the CQ muddle and trying to understand the relation between CQs and the queries over the ontology to test the CQs on an ontology, we gather, analyse, and publicly release a larger set of CQs and their translations to SPARQL-OWL for several ontologies in different domains developed by different groups. For the analysis in particular, we seek to address the following research questions: A total of 234 CQs for 5 ontologies have been collected and translated into SPARQL-OWL queries, and made available as a data resource. We analysed them in two principal ways. The first stage focused on a linguistic analysis of the natural language text itself, i.e., a lexico-syntactic analysis without any presuppositions of ontology elements, and a subsequent step of semantic analysis. This revealed 17 CQ patterns at the natural language layer. While a few patterns occur in multiple CQ sets, there are also patterns unique to a CQ set, supporting the expectation that a broad sampling is required to obtain a more representative set of patterns. The second phase consists of designing SPARQL-OWL queries for all CQs, where possible, and examining the signature of the queries. We found 46 query signatures resulting from the collected 131 SPARQL-OWL queries. The third step consists of the analysis of the relation between the CQ patterns and the SPARQL-OWL query signatures. This is, as hypothesised, a INLINEFORM0 : INLINEFORM1 relation, or: one CQ pattern may be realised by more than one SPARQL-OWL query and there may be more than one CQ pattern for a SPARQL-OWL query signature. The remainder of the paper is structured as follows. We first discuss related works on CQs and CQ patterns in Section SECREF2 . Section SECREF3 is devoted to the linguistic analysis of CQs and Section SECREF4 to the generation and analysis of the SPARQL-OWL queries. We discuss and return to the research questions in Section SECREF5 and conclude in Section SECREF6 . The data is available from a Git repository at https://github.com/CQ2SPARQLOWL/Dataset. Analysis of Competency Questions The aim of the analysis of the CQs is to examine whether there are some popular linguistic structures that can be reused to specify requirements for, and validate, new and existing ontologies. This section describes the collection of the materials, the methods, and subsequently the results of the CQ analysis. Materials and Methods We describe and motivate the materials first and then proceed to the methods and motivations thereof. There are multiple ontologies available over internet with competency questions provided, but since the focus of our research is on SPARQL-OWL queries, we selected only those ontologies with CQs stated against ontology schema (T-Box). As a result we selected 5 ontologies with 234 competency questions in total. Table TABREF8 summarizes our dataset size and source of each ontology. The Software Ontology (SWO) BIBREF5 is included because its set of CQs is of substantial size and it was part of Ren et al.'s set of analysed CQs. The CQ sets of Dem@Care BIBREF8 and OntoDT BIBREF9 were included because they were available. CQs for the Stuff BIBREF6 and African Wildlife (AWO) BIBREF7 ontologies were added to the set, because the ontologies were developed by one of the authors (therewith facilitating in-depth domain analysis, if needed), they cover other topics, and are of a different `type' (a tutorial ontology (AWO) and a core ontology (Stuff)), thus contributing to maximising diversity in source selection. 
Generating SPARQL-OWL queries from CQs In this section, we carry out and examine the `translation' of CQs to a form that can be evaluated against an ontology. As first preliminary observation, we observe that an OWL ontology can be serialized as an RDF/XML graph BIBREF10 and thus queried using SPARQL Query Language BIBREF11 . In its base form SPARQL is basically a pattern matching language and as such does not provide any reasoning capabilities; however, it is possible to introduce these by using SPARQL Entailment Regimes BIBREF12 . In particular, we employ OWL 2 Direct Semantics Entailment Regime. Intuitively, it allows us to construct a SPARQL query such that its WHERE clause contains OWL axioms, possibly with some of its IRIs and literals replaced by SPARQL variables. The results of the execution of such a query are all the variable mappings such that the axioms obtained by applying these mapping to the axioms in the query, are entailed by the queried ontology. SPARQL, being a query language for RDF, employs Turtle syntax BIBREF13 to express Basic Graph Patterns (BGPs) and this convention is kept also for expressing OWL axioms, i.e., their RDF representation is used BIBREF10 . This is consistent with how the only available implementation behaves BIBREF1 , BIBREF14 . The second preliminary comment is that we note that, unlike Dennis et al. BIBREF15 's claim, CQs do not have to have specific presuppositions other than vocabulary, but queries do, for it is the queries that are specific to the ontology and the modelling style used and other modelling decisions made. We can make this distinction here, because of the separation of concerns between the linguistics of the CQs on the one hand and the queries and ontology how it it realised on the other hand, rather than having the two combined as in BIBREF3 , BIBREF2 , BIBREF15 . Acknowledgments This work was partly supported by the Polish National Science Center (Grant No 2014/13/D/ST6/02076). Jedrzej Potoniec acknowledges support from the grant 09/91/DSPB/0627.
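As a purely hypothetical illustration of the CQ-to-SPARQL-OWL translation discussed in this excerpt, the sketch below assembles a query string for a schema-level (T-Box) CQ such as "What are the implementations of C4.5 algorithm?". The prefixes, class and property IRIs, and the helper function are invented for illustration and do not come from the ontologies or queries in the released dataset.

```python
def cq_to_sparql_owl(prop_iri, filler_iri):
    """Build a SPARQL-OWL query whose WHERE clause contains an OWL axiom pattern:
    classes ?x that are subclasses of (prop_iri some filler_iri)."""
    return f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
SELECT DISTINCT ?x WHERE {{
  ?x rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty {prop_iri} ;
    owl:someValuesFrom {filler_iri}
  ] .
}}"""

# Hypothetical IRIs -- for illustration only.
print(cq_to_sparql_owl("<http://example.org/implements>", "<http://example.org/C4.5>"))
```

Evaluating such a query under the OWL 2 Direct Semantics Entailment Regime, rather than plain pattern matching, is what allows inferred subclass relationships to appear in the answer set.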
5 domains: software, stuff, african wildlife, healthcare, datatypes
cb1126992a39555e154bedec388465b249a02ded
cb1126992a39555e154bedec388465b249a02ded_0
Q: How is the semi-structured knowledge base created? Text: Introduction Answering questions posed in natural language is a fundamental AI task, with a large number of impressive QA systems built over the years. Today's Internet search engines, for instance, can successfully retrieve factoid style answers to many natural language queries by efficiently searching the Web. Information Retrieval (IR) systems work under the assumption that answers to many questions of interest are often explicitly stated somewhere BIBREF0 , and all one needs, in principle, is access to a sufficiently large corpus. Similarly, statistical correlation based methods, such as those using Pointwise Mutual Information or PMI BIBREF1 , work under the assumption that many questions can be answered by looking for words that tend to co-occur with the question words in a large corpus. While both of these approaches help identify correct answers, they are not suitable for questions requiring reasoning, such as chaining together multiple facts in order to arrive at a conclusion. Arguably, such reasoning is a cornerstone of human intelligence, and is a key ability evaluated by standardized science exams given to students. For example, consider a question from the NY Regents 4th Grade Science Test: We would like a QA system that, even if the answer is not explicitly stated in a document, can combine basic scientific and geographic facts to answer the question, e.g., New York is in the north hemisphere; the longest day occurs during the summer solstice; and the summer solstice in the north hemisphere occurs in June (hence the answer is June). Figure 1 illustrates how our system approaches this, with the highlighted support graph representing its line of reasoning. Further, we would like the system to be robust under simple perturbations, such as changing New York to New Zealand (in the southern hemisphere) or changing an incorrect answer option to an irrelevant word such as “last” that happens to have high co-occurrence with the question text. To this end, we propose a structured reasoning system, called TableILP, that operates over a semi-structured knowledge base derived from text and answers questions by chaining multiple pieces of information and combining parallel evidence. The knowledge base consists of tables, each of which is a collection of instances of an $n$ -ary relation defined over natural language phrases. E.g., as illustrated in Figure 1 , a simple table with schema (country, hemisphere) might contain the instance (United States, Northern) while a ternary table with schema (hemisphere, orbital event, month) might contain (North, Summer Solstice, June). TableILP treats lexical constituents of the question $Q$ , as well as cells of potentially relevant tables $T$ , as nodes in a large graph $\mathcal {G}_{Q,T}$ , and attempts to find a subgraph $G$ of $\mathcal {G}_{Q,T}$ that “best” supports an answer option. The notion of best support is captured via a number of structural and semantic constraints and preferences, which are conveniently expressed in the Integer Linear Programming (ILP) formalism. We then use an off-the-shelf ILP optimization engine called SCIP BIBREF3 to determine the best supported answer for $Q$ . Following a recently proposed AI challenge BIBREF4 , we evaluate TableILP on unseen elementary-school science questions from standardized tests. 
Specifically, we consider a challenge set BIBREF2 consisting of all non-diagram multiple choice questions from 6 years of NY Regents 4th grade science exams. In contrast to a state-of-the-art structured inference method BIBREF5 for this task, which used Markov Logic Networks (MLNs) BIBREF6 , TableILP achieves a significantly (+14% absolute) higher test score. This suggests that a combination of a rich and fine-grained constraint language, namely ILP, even with a publicly available solver is more effective in practice than various MLN formulations of the task. Further, while the scalability of the MLN formulations was limited to very few (typically one or two) selected science rules at a time, our approach easily scales to hundreds of relevant scientific facts. It also complements the kind of questions amenable to IR and PMI techniques, as is evidenced by the fact that a combination (trained using simple Logistic Regression BIBREF2 ) of TableILP with IR and PMI results in a significant (+10% absolute) boost in the score compared to IR alone. Our ablation study suggests that combining facts from multiple tables or multiple rows within a table plays an important role in TableILP's performance. We also show that TableILP benefits from the table structure, by comparing it with an IR system using the same knowledge (the table rows) but expressed as simple sentences; TableILP scores significantly (+10%) higher. Finally, we demonstrate that our approach is robust to a simple perturbation of incorrect answer options: while the simple perturbation results in a relative drop of 20% and 33% in the performance of IR and PMI methods, respectively, it affects TableILP's performance by only 12%. Related Work BIBREF2 aristo2016:combining proposed an ensemble approach for the science QA task, demonstrating the effectiveness of a combination of information retrieval, statistical association, rule-based reasoning, and an ILP solver operating on semi-structured knowledge. Our ILP system extends their model with additional constraints and preferences (e.g., semantic relation matching), substantially improving QA performance. A number of systems have been developed for answering factoid questions with short answers (e.g., “What is the capital of France?”) using document collections or databases (e.g., Freebase BIBREF7 , NELL BIBREF8 ), for example BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . However, many science questions have answers that are not explicitly stated in text, and instead require combining information together. Conversely, while there are AI systems for formal scientific reasoning (e.g., BIBREF16 , BIBREF17 ), they require questions to be posed in logic or restricted English. Our goal here is a system that operates between these two extremes, able to combine information while still operating with natural language. The task of Recognizing Textual Entailment (RTE) BIBREF18 , BIBREF19 is also closely related, as QA can be cast as entailment (Does corpus entail question+answer? BIBREF20 ). However, RTE has primarily focused on the task of linguistic equivalence, and has not addressed questions where some form of scientific reasoning is required. Recent work on Natural Logic BIBREF21 , BIBREF22 has extended RTE to account for the logical structure within language. 
Our work can be seen as going one step further, to add a layer of structured reasoning on top of this; in fact, we use an RTE engine as a basic subroutine for comparing individual table cells in our ILP formulation. ILP based discrete optimization has been successful in several NLP tasks BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . While our ILP formulation also operates on natural language text, our focus is on the use of a specific semi-structured table representation for QA. BIBREF28 cohen2000:joins studied tables with natural language text requiring soft matching, with a focus on efficiently computing the top few candidates given a database query. In contrast, our system, given a natural language question, (implicitly) seeks to generate a query that produces the most supported answer. QA as Subgraph Optimization We begin with our knowledge representation formalism, followed by our treatment of QA as an optimal subgraph selection problem over such knowledge, and then briefly describe our ILP model for subgraph selection. Semi-Structured Knowledge as Tables We use semi-structured knowledge represented in the form of $n$ -ary predicates over natural language text BIBREF2 . Formally, a $k$ -column table in the knowledge base is a predicate $r(x_1, x_2, \ldots , x_k)$ over strings, where each string is a (typically short) natural language phrase. The column headers capture the table schema, akin to a relational database. Each row in the table corresponds to an instance of this predicate. For example, a simple country-hemisphere table represents the binary predicate $r_{\text{ctry-hems}}(c,h)$ with instances such as (Australia, Southern) and (Canada, Northern). Since table content is specified in natural language, the same entity is often represented differently in different tables, posing an additional inference challenge. Although techniques for constructing this knowledge base are outside the scope of this paper, we briefly mention them. Tables were constructed using a mixture of manual and semi-automatic techniques. First, the table schemas were manually defined based on the syllabus, study guides, and training questions. Tables were then populated both manually and semi-automatically using IKE BIBREF29 , a table-building tool that performs interactive, bootstrapped relation extraction over a corpus of science text. In addition, to augment these tables with the broad knowledge present in study guides that doesn't always fit the manually defined table schemas, we ran an Open IE BIBREF30 pattern-based subject-verb-object (SVO) extractor from BIBREF31 clark2014:akbc over several science texts to populate three-column Open IE tables. Methods for further automating table construction are under development. QA as a Search for Desirable Support Graphs We treat question answering as the task of pairing the question with an answer such that this pair has the best support in the knowledge base, measured in terms of the strength of a “support graph” defined as follows. Given a multiple choice question $Q$ and tables $T$ , we can define a labeled undirected graph $\mathcal {G}_{Q,T}$ over nodes $\mathcal {V}$ and edges $\mathcal {E}$ as follows. We first split $Q$ into lexical constituents (e.g., non-stopword tokens, or chunks) $\mathbf {q} = \lbrace q_\ell \rbrace $ and answer options $\mathbf {a} = \lbrace a_{m}\rbrace $ . For each table $T_{i}$ , we consider its cells $\mathbf {t} = \lbrace t_{ijk}\rbrace $ as well as column headers $T$0 . The nodes of $T$1 are then $T$2 . 
For presentation purposes, we will equate a graph node with the lexical entity it represents (such as a table cell or a question constituent). The undirected edges of $T$3 are $T$4 excluding edges both whose endpoints are within a single table. Informally, an edge denotes (soft) equality between a question or answer node and a table node, or between two table nodes. To account for lexical variability (e.g., that tool and instrument are essentially equivalent) and generalization (e.g., that a dog is an animal), we replace string equality with a phrase-level entailment or similarity function $w : \mathcal {E} \rightarrow [0,1]$ that labels each edge $e \in \mathcal {E}$ with an associated score $w(e)$ . We use entailment scores (directional) from $\mathbf {q}$ to $\mathbf {t} \cup \mathbf {h}$ and from $\mathbf {t} \cup \mathbf {h}$ to $\mathbf {a}$ , and similarity scores (symmetric) between two nodes in $\mathbf {t}$ . In the special case of column headers across two tables, the score is (manually) set to either 0 or 1, indicating whether this corresponds to a meaningful join. Intuitively, we would like the support graph for an answer option to be connected, and to include nodes from the question, the answer option, and at least one table. Since each table row represents a coherent piece of information but cells within a row do not have any edges in $\mathcal {G}_{Q,T}$ (the same holds also for cells and the corresponding column headers), we use the notion of an augmented subgraph to capture the underlying table structure. Let $G = (V,E)$ be a subgraph of $\mathcal {G}_{Q,T}$ . The augmented subgraph $G^+$ is formed by adding to $G$ edges $(v_1,v_2)$ such that $v_1$ and $v_2$ are in $V$ and they correspond to either the same row (possibly the header row) of a table in $T$ or to a cell and the corresponding column header. Definition 1 A support graph $G = G(Q,T,a_{m})$ for a question $Q$ , tables $T$ , and an answer option $a_{m}$ is a subgraph $(V,E)$ of $\mathcal {G}_{Q,T}$ with the following basic properties: $V \cap \mathbf {a} = \lbrace a_{m}\rbrace , \ V \cap \mathbf {q} \ne \phi , \ V \cap \mathbf {t} \ne \phi $ ; $w(e) > 0$ for all $e \in E$ ; if $e \in E \cap (\mathbf {t} \times \mathbf {t})$ then there exists a corresponding $e^{\prime } \in E \cap (\mathbf {h} \times \mathbf {h})$ involving the same columns; and the augmented subgraph $G^+$ is connected. A support graph thus connects the question constituents to a unique answer option through table cells and (optionally) table headers corresponding to the aligned cells. A given question and tables give rise to a large number of possible support graphs, and the role of the inference process will be to choose the “best" one under a notion of desirable support graphs developed next. We do this through a number of additional structural and semantic properties; the more properties the support graph satisfies, the more desirable it is. ILP Formulation We model the above support graph search for QA as an ILP optimization problem, i.e., as maximizing a linear objective function over a finite set of variables, subject to a set of linear inequality constraints. A summary of the model is given below. We note that the ILP objective and constraints aren't tied to the particular domain of evaluation; they represent general properties that capture what constitutes a well supported answer for a given question. Table 1 summarizes the notation for various elements of the problem, such as $t_{ijk}$ for cell $(j,k)$ of table $i$ . 
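To ground the construction just defined, here is a minimal Python sketch of one possible in-memory representation of $\mathcal{G}_{Q,T}$: question constituents, answer options, table cells and headers as nodes, and thresholded, weighted alignment edges. The data layout and the `score` function (any phrase-level entailment or similarity measure in $[0,1]$) are placeholders for illustration, not the system's actual components.

```python
from itertools import product

def build_qa_graph(question_tokens, answer_options, tables, score, min_w=0.1):
    """Return nodes and weighted edges between question/answer nodes and table
    cells/headers. `tables` is a list of dicts: {"headers": [...], "rows": [[...], ...]}."""
    nodes = {("q", i): tok for i, tok in enumerate(question_tokens)}
    nodes.update({("a", m): opt for m, opt in enumerate(answer_options)})
    for ti, tab in enumerate(tables):
        nodes.update({("h", ti, k): h for k, h in enumerate(tab["headers"])})
        nodes.update({("t", ti, j, k): c
                      for j, row in enumerate(tab["rows"]) for k, c in enumerate(row)})
    qa_nodes = [n for n in nodes if n[0] in ("q", "a")]
    table_nodes = [n for n in nodes if n[0] in ("t", "h")]
    edges = {}
    # Question/answer <-> table edges, kept only above a weight threshold.
    for u, v in product(qa_nodes, table_nodes):
        w = score(nodes[u], nodes[v])
        if w >= min_w:
            edges[(u, v)] = w
    # Cell <-> cell edges across *different* tables only (no within-table edges).
    for u, v in product(table_nodes, table_nodes):
        if u[0] == "t" == v[0] and u[1] != v[1]:
            w = score(nodes[u], nodes[v])
            if w >= min_w:
                edges[(u, v)] = w
    return nodes, edges
```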
All core variables in the ILP model are binary, i.e., have domain $\lbrace 0,1\rbrace $ . For each element, the model has a unary variable capturing whether this element is part of the support graph $G$ , i.e., it is "active". For instance, row $r_{ij}$ is active if at least one cell in row $j$ of table $i$ is in $G$ . The model also has pairwise "alignment" variables, capturing edges of $\mathcal {G}_{Q,T}$ . The alignment variable for an edge $e$ in $\mathcal {E}$ is associated with the corresponding weight $w(e)$ , and captures whether $e$ is included in $G$ . To improve efficiency, we create a pairwise variable for $e$ only if $w(e)$ is larger than a certain threshold. These unary and pairwise variables are then used to define various types of constraints and preferences, as discussed next. To make the definitions clearer, we introduce all basic variables used in our optimization in Table 2 , and will use them later to define constraints explicitly. We use the notation $x\left(.\right)$ to refer to a unary variable parameterized by a single element of the optimization, and $y\left(., .\right)$ to refer to a pairwise variable parameterized by a pair of elements. Unary variables represent the presence of a specific element as a node in the support graph $G$ . For example $x\left(T_{i}\right) = 1$ if and only if the table $T_{i}$ is active in $G$ . Similarly, $y\left(t_{ijk}, q_\ell \right)= 1$ if and only if the corresponding edge is present in $G$ , which we alternatively refer to as an alignment between cell $(j,k)$ of table $i$ and the $\ell $ -th constituent of the question. As previously mentioned, in practice we do not create all possible pairwise variables. Instead, we only create pairs whose alignment score $w(e)$ exceeds a pre-set threshold. For example, we create $y\left(t_{ijk}, t_{i^{\prime }j^{\prime }k^{\prime }}\right)$ only if $w(t_{ijk}, t_{i^{\prime }j^{\prime }k^{\prime }}) \ge \textsc {MinCellCellAlignment}$ . The objective function is a weighted linear sum over all variables instantiated for a given question answering problem. A small set of auxiliary variables is defined for linearizing complicated constraints. Constraints are a significant part of our model, used for imposing the desired behavior on the support graph. Due to lack of space, we discuss only a representative subset here. Some constraints relate variables to each other. For example, unary variables are defined through constraints that relate them to the corresponding pairwise variables. For instance, for the active row variable $x\left(r_{ij}\right)$ , we ensure that it is 1 if and only if at least one cell in row $j$ is active: $ x\left(r_{ij}\right) \ge y\left(t_{ijk}, *\right),\ \ \ \forall (t_{ijk}, *) \in \mathcal {R}_{ij}, \forall i, j, k, $ where $\mathcal {R}_{ij}$ is the collection of pairwise variables with one end in row $j$ of table $i$ . In the remainder of this section, we outline some of the important characteristics we expect in our model, and provide details of a few illustrative constraints. Consider the following question: Which characteristic helps a fox find food? (A) sense of smell (B) thick fur (C) long tail (D) pointed teeth In order to answer such lookup-style questions, we generally seek a row with the highest aggregate alignment to question constituents. 
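As a concrete illustration of how such binary unary and pairwise variables, a weighted objective, a row-activation linking constraint, and a per-table row cap (one of the cardinality preferences discussed next) might be encoded, here is a minimal sketch using the PuLP modelling library. The paper itself uses the SCIP engine; the toy index sets, placeholder weights, and the cap value below are assumptions for illustration, not the actual TableILP model.

```python
import pulp

# Toy index sets -- purely illustrative.
rows = [(0, 0), (0, 1)]                        # (table i, row j)
cells = [(0, 0, 0), (0, 0, 1), (0, 1, 0)]      # (table i, row j, col k)
q_cons = [0, 1, 2]                             # question constituent indices
w = {(c, l): 0.4 for c in cells for l in q_cons}   # placeholder alignment weights

prob = pulp.LpProblem("support_graph", pulp.LpMaximize)

x_row = {r: pulp.LpVariable(f"x_row_{r[0]}_{r[1]}", cat="Binary") for r in rows}
y = {(c, l): pulp.LpVariable(f"y_{c[0]}_{c[1]}_{c[2]}_{l}", cat="Binary")
     for (c, l), wt in w.items() if wt > 0.1}       # only above-threshold pairs

# Objective: weighted linear sum over active alignment edges.
prob += pulp.lpSum(w[e] * var for e, var in y.items())

# Linking constraint: a row is active if any of its cells has an active alignment.
for (c, l), var in y.items():
    i, j, _ = c
    prob += x_row[(i, j)] >= var

# Cardinality preference: cap the number of active rows per table.
MAX_ROWS_PER_TABLE = 4
for i in {r[0] for r in rows}:
    prob += pulp.lpSum(x_row[r] for r in rows if r[0] == i) <= MAX_ROWS_PER_TABLE

prob.solve()
```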
We achieve this by incorporating the question-table alignment variables with the alignment scores, $w(e)$ , as coefficients and the active question constituents variable with a constant coefficient in the objective function. Since any additional question-table edge with a positive entailment score (even to irrelevant tables) in the support graph would result in an increase in the score, we disallow tables with alignments only to the question (or only to a choice) and add a small penalty for every table used in order to reduce noise in the support graph. We also limit the maximum number of alignments of a question constituent and table cells, in order to prevent one constituent or cell from having a large influence on the objective function and thereby the solution: $ \sum _{(*, q_\ell ) \in \mathcal {Q}_l } y\left(*, q_\ell \right) \le \textsc {MaxAlignmentsPerQCons}, \forall l $ where $\mathcal {Q}_l$ is the set of all pairwise variables with one end in the question constituent $\ell $ . For certain questions, evidence needs to be combined from multiple rows of a table. For example, Sleet, rain, snow, and hail are forms of (A) erosion (B) evaporation (C) groundwater (D) precipitation To answer this question, we need to combine evidence from multiple table entries from the weather terms table, (term, type), namely (sleet, precipitation), (rain, precipitation), (snow, precipitation), and (hail, precipitation). To achieve this, we allow multiple active rows in the support graph. Similar to the basic constraints, we limit the maximum number of active rows per table and add a penalty for every active row to ensure only relevant rows are considered for reasoning : $ \sum _{j} x\left(r_{ij}\right) \le \textsc {MaxRowsPerTable}, \forall i $ To encourage only coherent parallel evidence within a single table, we limit our support graph to always use the same columns across multiple rows within a table, i.e., every active row has the active cells corresponding to the same set of columns. Questions requiring chaining of evidence from multiple tables, such as the example in Figure 1 , are typically the most challenging in this domain. Chaining can be viewed as performing a join between two tables. We introduce alignments between cells across columns in pairs of tables to allow for chaining of evidence. To help minimize potential noise introduced by chaining irrelevant facts, we add a penalty for every inter-table alignment and also rely on the 0/1 weights of header-to-header edges to ensure only semantically meaningful table joins are considered. Our constraints so far have only looked at the content of the table cells, or the structure of the support graph, without explicitly considering the semantics of the table schema. By using alignments between the question and column headers (i.e., type information), we exploit the table schema to prefer alignments to columns relevant to the “topic” of the question. In particular, for questions of the form “which X $\ldots $ ”, we prefer answers that directly entail X or are connected to cells that entail X. However, this is not sufficient for questions such as: What is one way to change water from a liquid to a solid? 
(A) decrease the temperature (B) increase the temperature (C) decrease the mass (D) increase the mass Even if we select the correct table, say $r_\text{change-init-fin}(c, i, f)$ that describes the initial and final states for a phase change event, both choice (A) and choice (B) would have the exact same score in the presence of table rows (increase temperature, solid, liquid) and (decrease temperature, liquid, solid). The table, however, does have the initial vs. final state structure. To capture this semantic structure, we annotate pairs of columns within certain tables with the semantic relationship present between them. In this example, we would annotate the phase change table with the relations: changeFrom $(c, i)$ , changeTo $(c, f)$ , and fromTo $(i, f)$ . Given such semantic relations for table schemas, we can now impose a preference towards question-table alignments that respect these relations. We associate each semantic relation with a set of linguistic patterns describing how it might be expressed in natural language. TableILP then uses these patterns to spot possible mentions of the relations in the question $Q$ . We then add the soft constraint that for every pair of active columns in a table (with an annotated semantic relation) aligned to a pair of question constituents, there should be a valid expression of that relation in $Q$ between those constituents. In our example, we would match the relation fromTo(liquid, solid) in the table to “liquid to a solid” in the question via the pattern “X to a Y" associated with fromTo(X,Y), and thereby prefer aligning with the correct row (decrease temperature, liquid, solid). Evaluation We compare our approach to three existing methods, demonstrating that it outperforms the best previous structured approach BIBREF5 and produces a statistically significant improvement when used in combination with IR-based methods BIBREF2 . For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. Question Set. We use the same question set as BIBREF2 aristo2016:combining, which consists of all non-diagram multiple-choice questions from 12 years of the NY Regents 4th Grade Science exams. The set is split into 108 development questions and 129 hidden test questions based on the year they appeared in (6 years each). All numbers reported below are for the hidden test set, except for question perturbation experiments which relied on the 108 development questions. Test scores are reported as percentages. For each question, a solver gets a score of 1 if it chooses the correct answer and $1/k$ if it reports a $k$ -way tie that includes the correct answer. On the 129 test questions, a score difference of 9% (or 7%) is statistically significant at the 95% (or 90%, resp.) confidence interval based on the binomial exact test BIBREF33 . Corpora. We work with three knowledge corpora: Solvers TableILP (our approach). Given a question $Q$ , we select the top 7 tables from the Table Corpus using the the standard TF-IDF score of $Q$ with tables treated as bag-of-words documents. For each selected table, we choose the 20 rows that overlap with $Q$ the most. This filtering improves efficiency and reduces noise. We then generate an ILP and solve it using the open source SCIP engine BIBREF3 , returning the active answer option $a_{m}$ from the optimal solution. To check for ties, we disable $a_{m}$ , re-solve the ILP, and compare the score of the second-best answer, if any, with that of $a_m$ . MLN Solver (structured inference baseline). 
We consider the current state-of-the-art structured reasoning method developed for this specific task by BIBREF5 aristo2015:mln. We compare against their best performing system, namely Praline, which uses Markov Logic Networks BIBREF6 to (a) align lexical elements of the question with probabilistic first-order science rules and (b) to control inference. We use the entire set of 47,000 science rules from their original work, which were also derived from same domain-targeted sources as the ones used in our Sentence Corpus. IR Solver (information retrieval baseline). We use the IR baseline by BIBREF2 aristo2016:combining, which selects the answer option that has the best matching sentence in a corpus. Specifically, for each answer option $a_i$ , the IR solver sends $q + a_i$ as a query to a search engine (we use Lucene) on the Sentence Corpus, and returns the search engine's score for the top retrieved sentence $s$ , where $s$ must have at least one non-stopword overlap with $q$ , and at least one with $a_i$ . The option with the highest Lucene score is returned as the answer. PMI Solver (statistical co-occurrence baseline). We use the PMI-based approach by BIBREF2 aristo2016:combining, which selects the answer option that most frequently co-occurs with the question words in a corpus. Specifically, it extracts unigrams, bigrams, trigrams, and skip-bigrams from the question and each answer option. For a pair $(x,y)$ of $n$ -grams, their pointwise mutual information (PMI) BIBREF1 in the corpus is defined as $\log \frac{p(x,y)}{p(x)p(y)}$ where $p(x,y)$ is the co-occurrence frequency of $x$ and $y$ (within some window) in the corpus. The solver returns the answer option that has the largest average PMI in the Web Corpus, calculated over all pairs of question $n$ -grams and answer option $n$ -grams. Results We first compare the accuracy of our approach against the previous structured (MLN-based) reasoning solver. We also compare against IR(tables), an IR solver using table rows expressed as sentences, thus embodying an unstructured approach operating on the same knowledge as TableILP. As Table 3 shows, among the two structured inference approaches, TableILP outperforms the MLN baseline by 14%. The preliminary ILP system reported by BIBREF2 aristo2016:combining achieves only a score of 43.8% on this question set. Further, given the same semi-structured knowledge (i.e., the Table Corpus), TableILP is substantially (+10%) better at exploiting the structure than the IR(tables) baseline, which, as mentioned above, uses the same data expressed as sentences. While their overall score is similar, TableILP and IR-based methods clearly approach QA very differently. To assess whether TableILP adds any new capabilities, we considered the 50 (out of 129) questions incorrectly answered by PMI solver (ignoring tied scores). On these unseen but arguably more difficult questions, TableILP answered 27 questions correctly, achieving a score of 54% compared to the random chance of 25% for 4-way multiple-choice questions. Results with IR solver were similar: TableILP scored 24.75 on the 52 questions incorrectly answered by IR (i.e., 47.6% accuracy). This analysis highlights the complementary strengths of these solvers. Following BIBREF2 aristo2016:combining, we create an ensemble of TableILP, IR, and PMI solvers, combining their answer predictions using a simple Logistic Regression model trained on the development set. 
This model uses 4 features derived from each solver's score for each answer option, and 11 features derived from TableILP's support graphs. Table 4 shows the results, with the final combination at 69% representing a significant improvement over individual solvers. Table 5 summarizes various ILP and support graph statistics for TableILP, averaged across all test questions. The optimization model has around 50 high-level constraints, which result, on average, in around 4000 inequalities over 1000 variables. Model creation, which includes computing pairwise entailment scores using WordNet, takes 1.9 seconds on average per question, and the resulting ILP is solved by the SCIP engine in 2.1 seconds (total for all four options), using around 1,300 LP iterations for each option. Thus, TableILP takes only 4 seconds to answer a question using multiple rows across multiple tables (typically 140 rows in total), as compared to 17 seconds needed by the MLN solver for reasoning with four rules (one per answer option). While the final support graph on this question set relies mostly on a single table to answer the question, it generally combines information from more than two rows (2.3 on average) for reasoning. This suggests parallel evidence is more frequently used on this dataset than evidence chaining. Ablation Study To quantify the importance of various components of our system, we performed several ablation experiments, summarized in Table 6 and described next. No Multiple Row Inference: We modify the ILP constraints to limit inference to a single row (and hence a single table), thereby disallowing parallel evidence and evidence chaining (Section "ILP Formulation" ). This drops the performance by 10.5%, highlighting the importance of being able to combine evidence from multiple rows (which would correspond to multiple sentences in a corpus) from one or more tables. No Relation matching: To assess the importance of considering the semantics of the table, we remove the requirement of matching the semantic relation present between columns of a table with its lexicalization in the question (Section "ILP Formulation" ). The 6% drop indicates TableILP relies strongly on the table semantics to ensure creating meaningful inferential chains. No Open IE tables: To evaluate the impact of relatively unstructured knowledge from a large corpus, we removed the tables containing Open IE extractions (Section "Conclusion" ). The 9% drop in the score shows that this knowledge is important and TableILP is able to exploit it even though it has a very simple triple structure. This opens up the possibility of extending our approach to triples extracted from larger knowledge bases. No Lexical Entailment: Finally, we test the effect of changing the alignment metric $w$ (Section "Conclusion" ) from WordNet based scores to a simple asymmetric word-overlap measured as $\mathit {score}(T, H) = \frac{|T \cap H|}{|H|}$ . Relying on just word-matching results in an 11% drop, which is consistent with our knowledge often being defined in terms of generalities. Question Perturbation One desirable property of QA systems is robustness to simple variations of a question, especially when a variation would make the question arguably easier for humans. 
To assess this, we consider a simple, automated way to perturb each 4-way multiple-choice question: (1) query Microsoft's Bing search engine (www.bing.com) with the question text and obtain the text snippet of the top 2,000 hits; (2) create a list of strings by chunking and tokenizing the results; (3) remove stop words and special characters, as well as any words (or their lemma) appearing in the question; (4) sort the remaining strings based on their frequency; and (5) replace the three incorrect answer options in the question with the most frequently occurring strings, thereby generating a new question. For instance: In New York State, the longest period of daylight occurs during which month? (A) eastern (B) June (C) history (D) years As in this example, the perturbations (italicized) are often not even of the correct “type”, typically making them much easier for humans. They, however, still remain difficult for solvers. For each of the 108 development questions, we generate 10 new perturbed questions, using the 30 most frequently occurring words in step (5) above. While this approach can introduce new answer options that should be considered correct as well, only 3% of the questions in a random sample exhibited this behavior. Table 7 shows the performance of various solvers on the resulting 1,080 perturbed questions. As one might expect, the PMI approach suffers the most at a 33% relative drop. TableILP's score drops as well (since answer type matching isn't perfect), but only by 12%, attesting to its higher resilience to simple question variation. Conclusion Answering real science questions is a challenging task because they are posed in natural language, require extensive domain knowledge, and often require combining multiple facts together. We presented TableILP, a system that can answer such questions, using a semi-structured knowledge base. We treat QA as a subgraph selection problem and then formulate this as an ILP optimization. Most importantly, this formulation allows multiple, semi-formally expressed facts to be combined to answer questions, a capability outside the scope of IR-based QA systems. In our experiments, this approach significantly outperforms both the previous best attempt at structured reasoning for this task, and an IR engine provided with the same knowledge. It also significantly boosts performance when combined with unstructured methods (IR and PMI). These results suggest that the approach is both viable and promising for natural language question answering. Acknowledgments D.K. is in part supported by AI2 and Google. The authors would like to thank Christos Christodoulopoulos, Sujay Jauhar, Sam Skjonsberg, and the Aristo Team at AI2 for invaluable discussions and insights.
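As an illustration of steps (2)–(5) of the perturbation procedure described above (the retrieval step (1) is omitted, and chunking and lemma removal are simplified away), here is a minimal sketch; the snippet list, stop-word set, and regular-expression tokenizer are placeholders rather than the authors' actual pipeline.

```python
import re
from collections import Counter

def perturb_options(question, snippets, stopwords, n_distractors=3):
    """Replace incorrect answer options with frequent, question-unrelated strings
    mined from retrieved snippets (steps 2-5 of the perturbation procedure)."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    counts = Counter()
    for snip in snippets:                                # step 2: tokenize results
        for tok in re.findall(r"[a-z]+", snip.lower()):
            if tok in stopwords or tok in q_words:       # step 3: filter
                continue
            counts[tok] += 1
    # Steps 4-5: the most frequent remaining strings become the new distractors.
    return [tok for tok, _ in counts.most_common(n_distractors)]

# Hypothetical usage with made-up snippets:
snips = ["The eastern seaboard history of daylight saving ...",
         "Over the years, June solstice daylight hours ..."]
stop = {"the", "of", "a", "over"}
print(perturb_options("In New York State, the longest period of daylight "
                      "occurs during which month?", snips, stop))
```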
using a mixture of manual and semi-automatic techniques
d5256d684b5f1b1ec648d996c358e66fe51f4904
d5256d684b5f1b1ec648d996c358e66fe51f4904_0
Q: what is the practical application for this paper? Text: Introduction Morphology deals with the internal structure of words BIBREF0 , BIBREF1 . Languages of the world have different word production processes. Morphological richness vary from language to language, depending on their linguistic typology. In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting the existing methods, since the amount of semantic and grammatical information encoded at the word level, may vary significantly from language to language. Conceptualizing and quantifying linguistic complexity is not an easy task, many quantitative and qualitative dimensions must be taken into account BIBREF2 . On one hand we can try to answer what is complexity in a language and which mechanisms express it, on the other hand, we can try to find out if there is a language with more complex phenomena (phonological, morphological, syntactical) than other and how can we measure it. miestamo2008grammatical distinguishes between two types of complexity: the absolute, which defines complexity in terms of the number of parts of a system; and the relative, which is related to the cost and difficulty faced by language users. Some authors focuses in the absolute approach since it is less subjective. Another common complexity distinction is between global and particular. Global complexity characterizes entire languages, e.g., as easy or difficult to learn BIBREF2 , while particular complexity refers only to a level of the whole language (for example phonological complexity, morphological complexity, syntactical complexity). We focus on morphological complexity. Many definitions of this term have been proposed BIBREF3 , BIBREF4 , BIBREF5 . From the computational linguistics perspective there has been a special interest in corpus based approaches to quantify it, i.e., methods that estimate the morphological complexity of a language directly from the production of morphological instances over a corpus. This type of approach usually represents a relatively easy and reproducible way to quantify complexity without the strict need of linguistic annotated data. The underlying intuition of corpus based methods is that morphological complexity depends on the morphological system of a language, like its inflectional and derivational processes. A very productive system will produce a lot of different word forms. This morphological richness can be captured with several statistical measures, e.g., information theory measures BIBREF6 or type token relationships. For example, [p. 9]bybee2010language affirms that “the token frequency of certain items in constructions [i.e., words] as well as the range of types [...] determines representation of the construction as well as its productivity”. In this work, we are interested in using corpus based approaches; however, we would like to quantify the complexity not only by the type and token distributions over a corpus, but also by taking into account other important dimension: the predictability of a morph sequence BIBREF7 . This is a preliminary work that takes as a case of study the distant languages Otomi, Nahuatl and Spanish. The general idea is to use parallel corpora, type-token relationship and some NLP strategies for measuring the predictability in statistical language models. 
Additionally, most of the previous works do not analyze how the complexity changes when different types of morphological normalization procedures are applied to a language, e.g., lemmatization, stemming, morphological segmentation. This information could be useful for linguistic analysis and for measuring the impact of different word form normalization tools depending of the language. In this work, we analyze how the type-token relationship changes using different types of morphological normalization techniques. The type-token relationship (TTR) The type-token relationship (TTR) is the relationship that exists between the number of distinct words (types) and the total word count (tokens) within a text. This measure has been used for several purposes, e.g., as an indicator of vocabulary richness and style of an author BIBREF8 , BIBREF9 , information flow of a text BIBREF10 and it has also been used in child language acquisition, psychiatry and literary studies BIBREF11 , BIBREF12 . TTR has proven to be a simple, yet effective, way to quantify the morphological complexity of a language. This is why it has been used to estimate morphological complexity using relatively small corpora BIBREF13 . It has also shown a high correlation with other types of complexity measures like entropy and paradigm-based approaches that are based on typological information databases BIBREF14 It is important to notice that the value of TTR is affected by the type and length of the texts. However, one natural way to make TTRs comparable between languages is to use a parallel corpus, since the same meaning and functions are, more or less, expressed in the two languages. When TTR is measured over a parallel corpus, it provides a useful way to compare typological and morphological characteristics of languages. kelih2010type works with parallel texts of the Slavic language family to analyze morphological and typological features of the languages, i.e., he uses TTR for comparing the morphological productivity and the degree of syntheticity and analycity between the languages. Along the same line, mayer2014extraction automatically extract typological features of the languages, e.g., morphological synthesis degree, by using TTR. There exist several models that have been developed to examine the relationship between the types and tokens within a text BIBREF15 . The most common one is the ratio $\frac{types}{tokens}$ and it is the one that we use in this work. Entropy and Perplexity In NLP, statistical language models are a useful tool for calculating the probability of any sequence of words in a language. These models need a corpus as training data, they are usually based on n-grams, and more recently, in neural representations of words. Information theory based measures can be used to estimate the predictiveness of these models, i.e., perplexity and entropy. Perplexity is a common measure for the complexity of n-grams models in NLP BIBREF16 . Perplexity is based in Shannon's entropy BIBREF17 as the perplexity of a model $\mu $ is defined by the equation $2^{H(\mu )}$ , where $H(\mu )$ es the entropy of the model (or random variable). Shannon's entropy had been used for measuring complexity of different systems. In linguistics, entropy is commonly used to measure the complexity of morphological systems BIBREF6 , BIBREF18 , BIBREF19 . Higher values of perplexity and entropy mean less predictability. Perplexity depends on how the model is represented (this includes the size of the data). 
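Before turning to the two language models compared below, a minimal sketch of the TTR measure used throughout this work (the simple types/tokens ratio mentioned above); the whitespace tokenization and the toy strings are assumptions for illustration only.

```python
def ttr(text):
    """Type-token ratio: number of distinct word forms divided by total word count."""
    tokens = text.split()                 # naive whitespace tokenization
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Toy comparison: more repeated forms give a lower TTR.
print(ttr("a b a c a b"))                 # 0.5
print(ttr("a b c d e f"))                 # 1.0
```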
In this work, we compare two different models for calculating the entropy and perplexity: a typical bigram model adapted to a morph level BIBREF16 ; and our proposal based on using the word as a context instead of ngrams. We rely in parallel corpora to compare the measures across languages, since the same meaning and functions are shared in the two languages. This model takes into consideration bigrams BIBREF16 as context for determining the joint probabilities of the sub-strings. Here the bigrams are sequences of two morphs in the text (whether they belong to the same word or not). This is a typical statistical language model but instead of using sequences of words, we use morphological segmented texts. In addition, we use a Laplacian (or add one) smoothing for the conditional probabilities BIBREF20 . The word level representation takes the whole word as context for the determination of joint probabilities. Therefore, the frequency of co-occurrence is different from zero only if the sub-word units (morphs) are part of the same word. For example, if $xby$ is a word with a prefix $x$ and a suffix $y$ , the co-occurrence of $x$ with $b$ will be different from zero as both morphs are part of the word $xby$ . Similarly, the co-occurrence of $y$ with $b$ will be different from zero. Conversely, if two morphs are sub-strings of different words, its co-occurrence will be zero. To calculate the conditional probabilities we use and add one estimator defined as: $$p(x|y) = \frac{fr(x,y) + 1 }{fr(x,y) + V}$$ (Eq. 5) Where $V$ is the number of types and $fr(\cdot )$ is the frequency of co-occurrence function. The corpus We work with two language pairs that are spoken in the same country (Mexico) but they are typologically distant languages: Spanish (Indo-European)-Nahuatl (Uto-Aztecan) and Spanish-Otomi (Oto-Manguean). Both, Nahuatl and Otomi are low-resource languages that face scarcity of digital parallel and monolingual corpora. Nahuatl is an indigenous language with agglutinative and polysynthethic morphological phenomena. It can agglutinate many different prefixes and suffixes to build complex words. Spanish also has rich morphology, but it mainly uses suffixes and it can have a fusional behavior, where morphemes can be fused or overlaid into a single one that encodes several grammatical meanings. Regarding to Otomi, its morphology also has a fusional tendency, and it is head-marking. Otomi morphology is usually considered quite complex BIBREF21 as it exhibits different phenomena like stem alternation, inflectional class changes and suprasegmental variation, just to mention some. Since we are dealing with low resource languages that have a lot of dialectal and orthographic variation, it is difficult to obtain a standard big parallel corpus. We work with two different parallel corpora, i.e., Spanish-Nahuatl and Spanish-Otomi. Therefore the complexity comparisons are always in reference to Spanish. We used a Spanish-Nahuatl parallel corpus created by GUTIERREZVASQUES16.1068. However, we used only a subset since the whole corpus is not homogeneous, i.e., it comprises several Nahuatl dialects, sources, periods of time and it lacks of a general orthographic normalization. We chose the texts that had a more or less systematic writing. On the other hand, we used a Spanish-Otomi parallel corpus BIBREF22 conformed by 38 texts transcribed from speech. This corpus was obtained in San Andrés Cuexcontitlan. It is principally composed by narrative texts, but also counts with dialogues and elicited data. 
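To illustrate the morph-level bigram model described above, the sketch below estimates add-one-smoothed bigram probabilities over a segmented morph sequence and derives entropy and perplexity ( $2^{H}$ ). It uses the conventional Laplace estimator normalised by the context count, and the toy morph sequence is invented for illustration; it is not data from the corpora used in this work.

```python
import math
from collections import Counter

def bigram_entropy(morphs):
    """Cross-entropy (bits per morph) of an add-one-smoothed bigram model
    estimated and evaluated on the same morph sequence (illustrative only)."""
    V = len(set(morphs))
    uni = Counter(morphs)
    bi = Counter(zip(morphs, morphs[1:]))               # (previous, next) counts
    def p(nxt, prev):                                   # P(next | previous), Laplace-smoothed
        return (bi[(prev, nxt)] + 1) / (uni[prev] + V)
    H = -sum(math.log2(p(nxt, prev))
             for prev, nxt in zip(morphs, morphs[1:])) / (len(morphs) - 1)
    return H

morphs = "stem sufA stem sufB stem sufA".split()        # toy segmented sequence
H = bigram_entropy(morphs)
print(H, 2 ** H)                                        # entropy and perplexity = 2^H
```

The word-level variant would restrict the co-occurrence counts to morph pairs belonging to the same word, as described above, leaving the entropy and perplexity computation unchanged.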
Table 1 shows the size of the parallel corpora used for the experiments. Morphological analysis tools We used different morphological analysis tools, in order to explore the morphological complexity variation among languages and between the different types of morphological representations. We performed lemmatization for Spanish language, and morphological segmentation for all languages. In NLP, morphology is usually tackled by building morphological analysis (taggers) tools. And more commonly, lemmatization and stemming methods are used to reduce the morphological variation by converting words forms to a standard form, i.e., a lemma or a stem. However, most of these technologies are focused in a reduced set of languages. For languages like English, with plenty of resources and relatively poor morphology, morphological processing may be considered solved. However, this is not the case for all the languages. Specially for languages with rich morphological phenomena where it is not enough to remove inflectional endings in order to obtain a stem. Lemmatization and stemming aim to remove inflectional endings. Spanish has available tools to perform this task. We used the tool Freeling. Regarding to morphological segmentation, we used semi-supervised statistical segmentation models obtained with the tool Morfessor BIBREF23 . In particular, we used the same segmentation models reported in ximena2017bilingual for Spanish and Nahuatl. As for Otomi, we used manual morphological segmentation of the corpus, provided by a specialist. Complexity measures We calculated the type-token relationship for every language in each parallel corpus. Table 2 shows the TTR of the texts without any processing ( $ES$ , $NA$ ) and with the different types of morphological processing: morphological segmentation ( $ES_{morph}$ , $NA_{morph}$ ), lemmatization ( $ES_{lemma}$ ). In a similar way, Table 3 shows the TTR values for the Spanish-Otomi corpus. It is worth mentioning that the TTR values are only comparable within the same parallel corpus. We also calculate the perplexity and complexity for the different languages. Since we are focusing on morphological complexity, we took only the segmented data for computing the entropy and the perplexity. We do not use the lemmatized or non segmented data since this would be equivalent to measuring the combinatorial complexity between words, i.e. syntax. In this sense, the entropy and perplexity reflects the predictability of the morphs sequences. Tables 4 and 5 shows the perplexity and entropy in each language pair. TTR as a measure of morphological complexity When no morphological processing is applied, Nahuatl has a lot higher TTR value than Spanish, i.e., a greater proportion of different word forms (types). In spite of Nahuatl having fewer tokens because of its agglutinative nature, it has a lot more types than Spanish. This suggests that Nahuatl has a highly productive system that can generate a great number of different morphological forms. In other words, it is more likely to find a repeated word in Spanish than in a Nahuatl corpus. In the case of Otomi-Spanish, Otomi also has a bigger complexity compared to Spanish in terms of TTR. Even though both Otomi and Spanish show fusional patterns in its inflection, Otomi also count with a lot of derivational processes and shows regular stem alternations. In every case, morphological segmentation induced the smallest values of TTR for all languages. 
This suggests that the greatest reduction of morphological complexity is achieved when words are split into morphs, making it more likely to find a repeated item. For instance, when Nahuatl was morphologically segmented, TTR decreased dramatically (from $26.22$ to $1.23$ ). This TTR reduction could be the result of eliminating the combinatorial variety of the agglutinative and polysynthetic morphology of the language. Therefore, when we segment the text we break this agglutination, leading to significantly less diverse units. In the case of Otomi, a similar trend can be observed. Otomi seems to be morphologically more complex than Spanish in terms of TTR, i.e., it has more diverse types or word forms. When morphological segmentation is applied, TTR decreases and Otomi has a lower TTR compared to Spanish. Even though Otomi is not a polysynthetic language like Nahuatl, these results suggest that Otomi also has great combinatory potential for its morphs: when Otomi is morphologically segmented we obtain less diverse types; these morphs may recur in the text, but they can be combined in many different ways within the Otomi word structure. Linguistic studies have shown that Otomi can concatenate several affixes, especially in derivational processes BIBREF22 . It is worth noting that Spanish has a higher TTR than Nahuatl and Otomi only when the languages are morphologically segmented. The morph inventory seems to be larger in Spanish; we conjecture that this is related to the fact that Spanish has more suppletion or “irregular” forms BIBREF24 . Predictability The predictability of the internal structure of words is another dimension of complexity. It reflects the difficulty of producing novel words given a set of lexical items (stems, suffixes or morphs). First of all, as a general overview, we can see that the word-level models have the lowest perplexity and entropy (Tables 4 and 5 ). We believe that this type of model captures the morphological structure better, since it takes into account the possible combinations of morphs within a word and not beyond the bounds of it (unlike the bigram model). It is interesting to compare the TTR and the predictability measures for each language. In the case of Nahuatl, TTR shows that there is a lot of complexity at the lexical level (many different word forms, few repetitions). However, this contrasts with the predictability of the elements that make up a lexical item: the combination of morphs within a word is more predictable in Nahuatl than in Spanish, since it obtains lower perplexity and entropy values. The combinatorial structure of Nahuatl morphology shows less uncertainty than that of Spanish, despite the fact that Nahuatl is capable of producing many more different types in the corpus due to its agglutinative and polysynthetic nature. The case of Otomi is different, since it seems to be complex not only in terms of TTR but also in terms of predictability: it obtains higher entropy and perplexity than Spanish. We conjecture that this is related to several phenomena. For instance, Otomi and Nahuatl allow a large number of morph combinations to modify a stem (inflectional and derivational). However, Otomi shows phenomena that are not easy to predict; for example, it has a complex system of inflectional classes, stem alternations and prefix changes. Moreover, tone and prosody play an important role in the morphology of Otomi verbs BIBREF25 , BIBREF26 .
Also, we mentioned before that many of the affix concatenations in Otomi take place in derivational processes. Derivation tends to be less predictable than inflection (derivation is less frequent and less regular), and this could be an additional reason why the entropy values of this language are high. Conclusions In this work we used corpus-based measures like TTR, entropy and perplexity to explore the morphological complexity of three languages, using two small parallel corpora. We use TTR as a measure of the morphological productivity of a language, and we use the entropy and perplexity calculated over a sequence of morphs as a measure of predictability. There may be a common belief that polysynthetic languages are far more complex than analytic ones. However, it is important to take into account the many factors that play a role in the complexity of the system. We stressed that morphological complexity has several dimensions that must be taken into account BIBREF3 . While some agglutinative, polysynthetic languages like Nahuatl could be considered complex because of the number of morphemes, the possible combinations, and the information that can be encoded in a single word, the sequence of these elements may be more predictable than in fusional languages like Spanish. Languages like Otomi showed high complexity in the two dimensions that we focused on in this work (this is consistent with qualitative perspectives BIBREF26 ). These two dimensions of complexity are valid and complementary. Measures like TTR reflect the amount of information that words can encode in a language: languages with a high TTR have the potential to encode many functions at the word level and therefore produce many different word forms. Perplexity and entropy measured over a sequence of morphs reflect the predictability, or degree of uncertainty, of these combinations. The higher the entropy (hence, the perplexity), the higher the uncertainty in the combinations of morphs. This is preliminary work; deeper linguistic analysis, more corpora and more languages are needed. However, we believe that quantitative measures extracted from parallel corpora can complement and deepen the study of linguistic complexity. Efforts in this direction are currently being made BIBREF27 , but more studies are needed, especially for low-resource languages. Future work Languages of the world have a wide range of functions that can be codified at the word level. Therefore, it would be interesting to consider more complexity dimensions in our work. Popular quantitative approaches are successful in reflecting how many morphs can be combined into a single word. However, it is also important to take into account how complex the format of a word can be, i.e., not only how many elements can be combined but also what type of elements. For example, dahl2009testing argues that adding a phoneme to a word is not as complex a process as adding a tone. Another interesting dimension is the complexity of the morphology in terms of acquisition (by native and L2 speakers). miestamo2008grammatical points out that this type of complexity should be assessed on the basis of psycholinguistic analyses of both processing and acquisition. Finally, one important factor that influences language complexity is culture. In many languages, pragmatic nuances are produced via morphological processes. For instance, languages like Nahuatl have a complex honorific or reverential system that is expressed using different types of affixes.
Spanish expresses this type of phenomena with morphosyntactic processes. It is a challenging task to be able to quantify all these factors that play a role in the complexity of a language. Acknowledgements This work was supported by the Mexican Council of Science and Technology (CONACYT), fund 2016-01-2225, and CB-2016/408885. We also thank the reviewers for their valuable comments and to our friend Morrisé P. Martinez for his unconditional support.
Improve existing NLP methods. Improve linguistic analysis. Measure impact of word normalization tools.
2a1069ae3629ae8ecc19d2305f23445c0231dc39
2a1069ae3629ae8ecc19d2305f23445c0231dc39_0
Q: Do they use a neural model for their task? Text: Introduction The notion of word sense is central to computational lexical semantics. Word senses can be either encoded manually in lexical resources or induced automatically from text. The former knowledge-based sense representations, such as those found in the BabelNet lexical semantic network BIBREF0 , are easily interpretable by humans due to the presence of definitions, usage examples, taxonomic relations, related words, and images. The cost of such interpretability is that every element mentioned above is encoded manually in one of the underlying resources, such as Wikipedia. Unsupervised knowledge-free approaches, e.g. BIBREF1 , BIBREF2 , require no manual labor, but the resulting sense representations lack the above-mentioned features enabling interpretability. For instance, systems based on sense embeddings are based on dense uninterpretable vectors. Therefore, the meaning of a sense can be interpreted only on the basis of a list of related senses. We present a system that brings interpretability of the knowledge-based sense representations into the world of unsupervised knowledge-free WSD models. The contribution of this paper is the first system for word sense induction and disambiguation, which is unsupervised, knowledge-free, and interpretable at the same time. The system is based on the WSD approach of Panchenko:17 and is designed to reach interpretability level of knowledge-based systems, such as Babelfy BIBREF3 , within an unsupervised knowledge-free framework. Implementation of the system is open source. A live demo featuring several disambiguation models is available online. Related Work In this section, we list prominent WSD systems with openly available implementations. Unsupervised Knowledge-Free Interpretable WSD This section describes (1) how WSD models are learned in an unsupervised way from text and (2) how the system uses these models to enable human interpretable disambiguation in context. Induction of the WSD Models Figure 1 presents architecture of the WSD system. As one may observe, no human labor is used to learn interpretable sense representations and the corresponding disambiguation models. Instead, these are induced from the input text corpus using the JoBimText approach BIBREF8 implemented using the Apache Spark framework, enabling seamless processing of large text collections. Induction of a WSD model consists of several steps. First, a graph of semantically related words, i.e. a distributional thesaurus, is extracted. Second, word senses are induced by clustering of an ego-network of related words BIBREF9 . Each discovered word sense is represented as a cluster of words. Next, the induced sense inventory is used as a pivot to generate sense representations by aggregation of the context clues of cluster words. To improve interpretability of the sense clusters they are labeled with hypernyms, which are in turn extracted from the input corpus using Hearst:92 patterns. Finally, the obtained WSD model is used to retrieve a list of sentences that characterize each sense. Sentences that mention a given word are disambiguated and then ranked by prediction confidence. Top sentences are used as sense usage examples. For more details about the model induction process refer to BIBREF10 . Currently, the following WSD models induced from a text corpus are available: Word senses based on cluster word features. 
This model uses the cluster words from the induced word sense inventory as sparse features that represent the sense. Word senses based on context word features. This representation is based on a sum of word vectors of all cluster words in the induced sense inventory weighted by distributional similarity scores. Super senses based on cluster word features. To build this model, induced word senses are first globally clustered using the Chinese Whispers graph clustering algorithm BIBREF9 . The edges in this sense graph are established by disambiguation of the related words BIBREF11 , BIBREF12 . The resulting clusters represent semantic classes grouping words sharing a common hypernym, e.g. “animal”. This set of semantic classes is used as an automatically learned inventory of super senses: There is only one global sense inventory shared among all words in contrast to the two previous traditional “per word” models. Each semantic class is labeled with hypernyms. This model uses words belonging to the semantic class as features. Super senses based on context word features. This model relies on the same semantic classes as the previous one but, instead, sense representations are obtained by averaging vectors of words sharing the same class. WSD API To enable fast access to the sense inventories and effective parallel predictions, the WSD models obtained at the previous step were indexed in a relational database. In particular, each word sense is represented by its hypernyms, related words, and usage examples. Besides, for each sense, the database stores an aggregated context word representation in the form of a serialized object containing a sparse vector in the Breeze format. During the disambiguation phrase, the input context is represented in the same sparse feature space and the classification is reduced to the computation of the cosine similarity between the context vector and the vectors of the candidate senses retrieved from the database. This back-end is implemented as a RESTful API using the Play framework. User Interface for Interpretable WSD The graphical user interface of our system is implemented as a single page Web application using the React framework. The application performs disambiguation of a text entered by a user. In particular, the Web application features two modes: Single word disambiguation mode is illustrated in Figure 2 . In this mode, a user specifies an ambiguous word and its context. The output of the system is a ranked list of all word senses of the ambiguous word ordered by relevance to the input context. By default, only the best matching sense is displayed. The user can quickly understand the meaning of each induced sense by looking at the hypernym and the image representing the sense. Faralli:12 showed that Web search engines can be used to acquire information about word senses. We assign an image to each word in the cluster by querying an image search API using a query composed of the ambiguous word and its hypernym, e.g. “jaguar animal”. The first hit of this query is selected to represent the induced word sense. Interpretability of each sense is further ensured by providing to the user the list of related senses, the list of the most salient context clues, and the sense usage examples (cf. Figure 2 ). Note that all these elements are obtained without manual intervention. Finally, the system provides the reasons behind the sense predictions by displaying context words triggered the prediction. 
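As an illustration of the disambiguation step just described — scoring candidate senses by the cosine similarity between a sparse context vector and the stored sense vectors — the following sketch uses plain dictionaries for sparse vectors. It is a simplification for exposition, not the system's actual back-end code, and the function and sense names are our own.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as {feature: weight} dicts."""
    dot = sum(w * v.get(f, 0.0) for f, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def disambiguate(context_words, sense_vectors):
    """Rank candidate senses of a word by similarity to the input context.

    sense_vectors: {sense_id: {context_clue: weight}} as retrieved from a sense database.
    Returns senses sorted by descending cosine score.
    """
    context_vec = {w: 1.0 for w in context_words}   # bag-of-words context representation
    scores = {sid: cosine(context_vec, vec) for sid, vec in sense_vectors.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage for the ambiguous word "jaguar".
senses = {
    "jaguar#animal": {"predator": 2.0, "spotted": 1.5, "jungle": 1.0},
    "jaguar#car": {"engine": 2.0, "luxury": 1.5, "drive": 1.0},
}
print(disambiguate(["large", "spotted", "predator"], senses))
```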
Each common feature is clickable, so a user is able to trace back sense cluster words containing this context feature. All words disambiguation mode is illustrated in Figure 3 . In this mode, the system performs disambiguation of all nouns and entities in the input text. First, the text is processed with a part-of-speech and a named entity taggers. Next, each detected noun or entity is disambiguated in the same way as in the single word disambiguation mode described above, yet the disambiguation results are represented as annotations of a running text. The best matching sense is represented by a hypernym and an image as depicted in Figure 3 . This mode performs “semantification” of a text, which can, for instance, assist language learners with the understanding of a text in a foreign language: Meaning of unknown to the learner words can be deduced from hypernyms and images. Evaluation In our prior work BIBREF10 , we performed a thorough evaluation of the method implemented in our system on two datasets showing the state-of-the-art performance of the approach as compared to other unsupervised knowledge-free methods for WSD, including participants of the SemEval 2013 Task 13 BIBREF13 and two unsupervised knowledge-free WSD systems based on word sense embeddings BIBREF2 , BIBREF7 . These evaluations were based on the “lexical sample” setting, where the system is expected to predict a sense identifier of the ambiguous word. In this section, we perform an extra evaluation that assesses how well hypernyms of ambiguous words are assigned in context by our system. Namely, the task is to assign a correct hypernym of an ambiguous word, e.g. “animal” for the word “Jaguar” in the context “Jaguar is a large spotted predator of tropical America”. This task does not depend on a fixed sense inventory and evaluates at the same time WSD performance and the quality of the hypernymy labels of the induced senses. Dataset In this experiment, we gathered a dataset consisting of definitions of BabelNet 3.7 senses of 1,219 frequent nouns. In total, we collected 56,003 sense definitions each labeled with gold hypernyms coming from the IsA relations of BabelNet. The average polysemy of words in the gathered dataset was 15.50 senses per word as compared to 2.34 in the induced sense inventory. This huge discrepancy in granularities lead to the fact that some test sentences cannot be correctly predicted by definition: some (mostly rare) BabelNet senses simply have no corresponding sense in the induced inventory. To eliminate the influence of this idiosyncrasy, we kept only sentences that contain at least one common hypernym with all hypernyms of all induced senses. The statistics of the resulting dataset are presented in Table 1 , it is available in the project repository. Evaluation Metrics WSD performance is measured using the accuracy with respect to the sentences labeled with the direct hypernyms (Hypers) or an extended set of hypernym including hypernyms of hypernyms (HyperHypers). A correct match occurs when the predicted sense has at least one common hypernym with the gold hypernyms of the target word in a test sentence. Discussion of Results All evaluated models outperform both random and most frequent sense baselines, see Table 2 . The latter picks the sense that corresponds to the largest sense cluster BIBREF10 . In the case of the traditional “per word” inventories, the model based on the context features outperform the models based on cluster words. 
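The hypernym-match accuracy defined in the evaluation above can be sketched as a simple set-overlap check, as below; the function name and the list-of-sets input format are our own illustration rather than the authors' evaluation script.

```python
def hypernym_accuracy(predictions, gold_hypernyms):
    """Accuracy under the hypernym-match criterion: a prediction is correct
    if its hypernym set shares at least one label with the gold hypernyms.

    predictions: list of sets of hypernyms assigned to the predicted sense.
    gold_hypernyms: list of sets of gold hypernyms (direct hypernyms, or the
    extended set including hypernyms of hypernyms for the HyperHypers setting).
    """
    correct = sum(1 for pred, gold in zip(predictions, gold_hypernyms) if pred & gold)
    return correct / len(predictions)

# Toy usage.
preds = [{"animal", "feline"}, {"car"}]
gold = [{"animal"}, {"fruit", "plant"}]
print(hypernym_accuracy(preds, gold))  # 0.5
```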
While sense representations based on the clusters of semantically related words contain highly accurate features, such representations are sparse as one sense contains at most 200 features. As the result, often the model based on the cluster words contain no common features with the features extracted from the input context. The sense representations based on the aggregated context clues are much less sparse, which explains their superior performance. In the case of the super sense inventory, the model based solely on the cluster words yielded better results that the context-based model. Note here that (1) the clusters that represent super senses are substantially larger than word sense clusters and thus less sparse, (2) words in the super sense clusters are unweighted in contrast to word sense cluster, thus averaging of word vectors is more noise-prone. Besides, the performance scores of the models based on the super sense inventories are substantially lower compared to their counterparts based on the traditional “per word” inventories. Super sense models are able to perform classification for any unknown word missing in the training corpus, but their disambiguation task is more complex (the models need to choose one of 712 classes as compared to an average of 2–3 classes for the “per word” inventories). This is illustrated by the near-zero scores of the random and the MFS baselines for this model. Conclusion We present the first openly available word sense disambiguation system that is unsupervised, knowledge-free, and interpretable at the same time. The system performs extraction of word and super sense inventories from a text corpus. The disambiguation models are learned in an unsupervised way for all words in the corpus on the basis on the induced inventories. The user interface of the system provides efficient access to the produced WSD models via a RESTful API or via an interactive Web-based graphical user interface. The system is available online and can be directly used from external applications. The code and the WSD models are open source. Besides, in-house deployments of the system are made easy due to the use of the Docker containers. A prominent direction for future work is supporting more languages and establishing cross-lingual sense links. Acknowledgments We acknowledge the support of the DFG under the “JOIN-T” project, the RFBR under project no. 16-37-00354 mol_a, Amazon via the “AWS Research Grants” and Microsoft via the “Azure for Research” programs. Finally, we also thank four anonymous reviewers for their helpful comments.
No
0b411f942c6e2e34e3d81cc855332f815b6bc123
0b411f942c6e2e34e3d81cc855332f815b6bc123_0
Q: What's the method used here? Text: Introduction The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary. Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to the tasks which can be categorized as classification including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7. On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor. In this paper, we improve the model of BIBREF8, addressing two primary issues. Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification. Secondly, the other point is that there is a mismatch between the training objective and evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. 
A potential problem arising from this approach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy. We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on the CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on the ROUGE-L score, which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when tested on the DUC-2002 dataset, showing better generalization and robustness of our model. Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; a training method to globally optimize sentence selection; and state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times. Background ::: Sentence Rewriting In this paper, we focus on single-document multi-sentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8, BIBREF15, which consists of two parts: a neural network for the extractor and another network for the abstractor. The extractor network is designed to extract salient sentences from a source article. The abstractor network rewrites the extracted sentences into a short summary. Background ::: Learning Sentence Selection The most common way to train an extractor to select informative sentences is to build extractive oracles as gold targets and train with cross-entropy (CE) loss. An oracle consists of a set of sentences with the highest possible ROUGE scores. Building oracles amounts to finding an optimal combination of sentences, where there are $2^n$ possible combinations for each example. Because of this, exact optimization for ROUGE scores is intractable. Therefore, alternative methods identify the set of sentences with greedy search BIBREF16, sentence-level search BIBREF9, BIBREF17 or collective search using a limited number of sentences BIBREF15, which construct suboptimal oracles. Even if all the optimal oracles are found, training with CE loss using these labels will cause underfitting, as it will only maximize probabilities for sentences in the label sets and ignore all other sentences. Alternatively, reinforcement learning (RL) can give room for exploration in the search space. BIBREF8, our baseline work, proposed to apply policy gradient methods to train an extractor. This approach makes an end-to-end trainable stochastic computation graph, encouraging the model to select sentences with high ROUGE scores. However, they define the reward for an action (sentence selection) as the sentence-level ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step. This leads the extractor agent to a suboptimal policy; the set of sentences matching individually with each sentence in a ground truth summary is not necessarily optimal in terms of summary-level ROUGE score. BIBREF11 proposed policy gradient with rewards from summary-level ROUGE. They defined an action as sampling a summary from candidate summaries that contain a limited number of plausible sentences. After training, a sentence is ranked high for selection if it often occurs in high-scoring summaries.
However, their approach still has a risk of ranking redundant sentences high; if two highly overlapped sentences have salient information, they would be ranked high together, increasing the probability of being sampled in one summary. To tackle this problem, we propose a training method using reinforcement learning which globally optimizes summary-level ROUGE score and gives intermediate rewards to ease the learning. Background ::: Pre-trained Transformers Transferring representations from pre-trained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18, BIBREF3, BIBREF19, BIBREF20. These methods first pre-train highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus, and then fine-tune the models or representations on downstream tasks. Model Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\lbrace s_1,s_2,\cdots ,s_n\rbrace $. We denote $i$-th sentence as $s_i=\lbrace w_{i1},w_{i2},\cdots ,w_{im}\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\hat{D}=\lbrace \hat{s}_1,\hat{s}_2,\cdots ,\hat{s}_k|\hat{s}_i\in D\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\lbrace f(\hat{s}_1),f(\hat{s}_2),\cdots ,f(\hat{s}_k)\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\lbrace a_1,a_2,\cdots ,a_l\rbrace $. Model ::: Extractor Network The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\lbrace h_1,h_2,\cdots ,h_n\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\hat{D}$ from $D$. Model ::: Extractor Network ::: Leveraging Pre-trained Transformers Although we require the encoder to output the representation for each sentence, the output vectors from BERT are grounded to tokens instead of sentences. Therefore, we modify the input sequence and embeddings of BERT as BIBREF1 did. In the original BERT's configure, a [CLS] token is used to get features from one sentence or a pair of sentences. Since we need a symbol for each sentence representation, we insert the [CLS] token before each sentence. And we add a [SEP] token at the end of each sentence, which is used to differentiate multiple sentences. As a result, the vector for the $i$-th [CLS] symbol from the top BERT layer corresponds to the $i$-th sentence representation $h_i$. In addition, we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document. For $s_i$ we assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$, we assign $E_A, E_B, E_A, E_B, E_A$ in order. All the words in each sentence are assigned to the same segment embedding, i.e. segment embeddings for $w_{11}, w_{12},\cdots ,w_{1m}$ is $E_A,E_A,\cdots ,E_A$. An illustration for this procedure is shown in Figure FIGREF1. 
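To make the input modification described above concrete, a minimal sketch of flattening a document into token and segment-id sequences — with a [CLS] inserted before and a [SEP] after every sentence, and alternating interval segment ids — is given below. Tokenization is left abstract and the helper is our own illustration, not the authors' code.

```python
def build_bert_inputs(document_sentences):
    """Insert [CLS] before and [SEP] after every sentence and assign
    alternating interval segment ids (0 standing for E_A, 1 for E_B).

    document_sentences: list of sentences, each a list of word tokens.
    Returns (tokens, segment_ids, cls_positions); the encoder outputs at
    cls_positions serve as the sentence representations h_i.
    """
    tokens, segment_ids, cls_positions = [], [], []
    for i, sentence in enumerate(document_sentences):
        seg = i % 2                      # s_1 -> E_A, s_2 -> E_B, s_3 -> E_A, ...
        cls_positions.append(len(tokens))
        tokens.append("[CLS]")
        tokens.extend(sentence)
        tokens.append("[SEP]")
        segment_ids.extend([seg] * (len(sentence) + 2))
    return tokens, segment_ids, cls_positions

# Toy usage.
doc = [["the", "cat", "sat"], ["it", "was", "happy"]]
print(build_bert_inputs(doc))
```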
Model ::: Extractor Network ::: Sentence Selection We use an LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations, excluding those already selected. Since we use a sequential model that selects one sentence per time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences already extracted. As the decoder structure is almost the same as in previous work, we reproduce the equations of BIBREF8 to avoid confusion, with minor modifications to agree with our notation. Formally, the extraction probability is calculated as: where $e_t$ is the output of the glimpse operation: In Equation DISPLAY_FORM9, $z_t$ is the hidden state of the LSTM decoder at time $t$ (shown in green in Figure FIGREF1). All the $W$ and $v$ are trainable parameters. Model ::: Abstractor Network The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence into a concise summary sentence. We use the standard attention-based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8. Training In our model, an extractor selects a series of sentences, and then an abstractor paraphrases them. As they work in different ways, we need different training strategies suitable for each of them. Training the abstractor is relatively straightforward: maximizing the log-likelihood of the next word given the previous ground truth words. However, there are several issues for extractor training. First, the extractor should consider the abstractor's rewriting process when it selects sentences. This causes a weak supervision problem BIBREF26, since the extractor gets training signals indirectly, after the paraphrasing processes are finished. In addition, since this procedure involves sampling or maximum selection, the extraction performed by the extractor is non-differentiable. Lastly, although our goal is maximizing ROUGE scores, neural models cannot be trained directly by maximum likelihood estimation on them. To address the issues above, we apply standard policy gradient methods, and we propose a novel training procedure for the extractor which guides it to the optimal policy in terms of the summary-level ROUGE. As usual in RL for sequence prediction, we pre-train the submodules and apply RL to fine-tune the extractor. Training ::: Training Submodules ::: Extractor Pre-training Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy. Thus, we pre-train the network using cross-entropy (CE) loss like previous work BIBREF27, BIBREF8. However, there is no gold label for extractive summarization in most of the summarization datasets. Hence, we employ a greedy approach BIBREF16 to make the extractive oracles, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized for the entire ground truth summary. This does not guarantee optimality, but it is enough to teach the network to select plausible sentences. Formally, the network is trained to minimize the cross-entropy loss as follows: where $s^*_t$ is the $t$-th generated oracle sentence.
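The greedy oracle construction used for extractor pre-training can be sketched as follows; `rouge` stands in for any summary-level ROUGE scorer, and the stopping rule (halt when no remaining sentence improves the score) is an assumption on our part.

```python
def greedy_oracle(doc_sentences, gold_summary, rouge):
    """Greedily build an extractive oracle: repeatedly add the sentence that
    most increases the ROUGE score of the selected set against the gold summary,
    stopping when no remaining sentence yields an improvement."""
    selected, best_score = [], 0.0
    remaining = list(range(len(doc_sentences)))
    while remaining:
        gains = []
        for i in remaining:
            candidate = [doc_sentences[j] for j in selected + [i]]
            gains.append((rouge(candidate, gold_summary), i))
        score, best_i = max(gains)
        if score <= best_score:      # assumed stopping criterion
            break
        best_score = score
        selected.append(best_i)
        remaining.remove(best_i)
    return selected  # indices of oracle sentences, in selection order
```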
Training ::: Training Submodules ::: Abstractor Training For abstractor training, we need to create training pairs of input and target sentences. As the abstractor paraphrases at the sentence level, we perform a sentence-level search for each ground-truth summary sentence. We find the most similar document sentence $s^{\prime }_t$ by: And then the abstractor is trained as a usual sequence-to-sequence model to minimize the cross-entropy loss: where $w^a_j$ is the $j$-th word of the target sentence $a_t$, and $\Phi $ is the encoded representation for $s^{\prime }_t$. Training ::: Guiding to the Optimal Policy To optimize the ROUGE metric directly, we treat the extractor as an agent in a reinforcement learning paradigm BIBREF28. We view the extractor as having a stochastic policy that generates actions (sentence selection) and receives the score of the final evaluation metric (summary-level ROUGE in our case) as the return. While we are ultimately interested in maximizing the score of the complete summary, simply awarding this score at the last step provides a very sparse training signal. For this reason we define intermediate rewards using reward shaping BIBREF12, inspired by BIBREF27's approach to sequence prediction. Namely, we compute summary-level score values for all intermediate summaries: The reward for each step $r_t$ is the difference between consecutive pairs of scores: This measures the amount of increase or decrease in the summary-level score from selecting $\hat{s}_t$. Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12. We define a discounted future reward for each step as $R_t=\sum _{t=1}^{k}\gamma ^tr_{t+1}$, where $\gamma $ is a discount factor. Additionally, we add a `stop' action to the action space, by concatenating trainable parameters $h_{\text{stop}}$ (of the same dimension as $h_i$) to $H$. The agent treats it as another candidate to extract. When it selects `stop', the extraction episode ends and the final return is given. This encourages the model to extract additional sentences only when they are expected to increase the final return. Following BIBREF8, we use the Advantage Actor Critic BIBREF29 method for training. We add a critic network to estimate a value function $V_t(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$, which is then used to compute the advantage of each action (we omit the current state $(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ to simplify): where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$. We maximize this advantage with the policy gradient, using the Monte Carlo sample ($A_t(s_i) \approx R_t - V_t$): where $\theta _\pi $ denotes the trainable parameters of the actor network (the original extractor). And the critic is trained to minimize the squared loss: where $\theta _\psi $ denotes the trainable parameters of the critic network. Experimental Setup ::: Datasets We evaluate the proposed approach on the CNN/Daily Mail BIBREF13 and New York Times BIBREF30 datasets, which are both standard corpora for multi-sentence abstractive summarization. Additionally, we test the generalization of our model on the DUC-2002 test set. The CNN/Daily Mail dataset consists of more than 300K news articles, each paired with several highlights. We used the standard splits of BIBREF13 for training, validation and testing (90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail). We did not anonymize entities.
We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31. The New York Times dataset also consists of many news articles. We followed the dataset splits of BIBREF14; 100,834 for training and 9,706 for test examples. And we also followed the filtering procedure of them, removing documents with summaries that are shorter than 50 words. The final test set (NYT50) contains 3,452 examples out of the original 9,706. The DUC-2002 dataset contains 567 document-summary pairs for single-document summarization. As a single document can have multiple summaries, we made one pair per summary. We used this dataset as a test set for our model trained on CNN/Daily Mail dataset to test generalization. Experimental Setup ::: Implementation Details Our extractor is built on $\text{BERT}_\text{BASE}$ with fine-tuning, smaller version than $\text{BERT}_\text{LARGE}$ due to limitation of time and space. We set LSTM hidden size as 256 for all of our models. To initialize word embeddings for our abstractor, we use word2vec BIBREF32 of 128 dimensions trained on the same corpus. We optimize our model with Adam optimizer BIBREF33 with $\beta _1=0.9$ and $\beta _2=0.999$. For extractor pre-training, we use learning rate schedule following BIBREF21 with $warmup=10000$: And we set learning rate $1e^{-3}$ for abstractor and $4e^{-6}$ for RL training. We apply gradient clipping using L2 norm with threshold $2.0$. For RL training, we use $\gamma =0.95$ for the discount factor. To ease learning $h_{\text{stop}}$, we set the reward for the stop action to $\lambda \cdot \text{ROUGE-L}^{\text{summ}}_{F_1}(S, A)$, where $\lambda $ is a stop coefficient set to $0.08$. Our critic network shares the encoder with the actor (extractor) and has the same architecture with it except the output layer, estimating scalar for the state value. And the critic is initialized with the parameters of the pre-trained extractor where it has the same architecture. Experimental Setup ::: Evaluation We evaluate the performance of our method using different variants of ROUGE metric computed with respect to the gold summaries. On the CNN/Daily Mail and DUC-2002 dataset, we use standard ROUGE-1, ROUGE-2, and ROUGE-L BIBREF34 on full length $F_1$ with stemming as previous work did BIBREF16, BIBREF25, BIBREF8. On NYT50 dataset, following BIBREF14 and BIBREF35, we used the limited length ROUGE recall metric, truncating the generated summary to the length of the ground truth summary. Results ::: CNN/Daily Mail Table TABREF24 shows the experimental results on CNN/Daily Mail dataset, with extractive models in the top block and abstractive models in the bottom block. For comparison, we list the performance of many recent approaches with ours. Results ::: CNN/Daily Mail ::: Extractive Summarization As BIBREF25 showed, the first 3 sentences (lead-3) in an article form a strong summarization baseline in CNN/Daily Mail dataset. Therefore, the very first objective of extractive models is to outperform the simple method which always returns 3 or 4 sentences at the top. However, as Table TABREF27 shows, ROUGE scores of lead baselines and extractors from previous work in Sentence Rewrite framework BIBREF8, BIBREF15 are almost tie. We can easily conjecture that the limited performances of their full model are due to their extractor networks. Our extractor network with BERT (BERT-ext), as a single model, outperforms those models with large margins. 
Adding reinforcement learning (BERT-ext + RL) gives higher performance, which is competitive with other extractive approaches using pre-trained Transformers (see Table TABREF24). This shows the effectiveness of our learning method. Results ::: CNN/Daily Mail ::: Abstractive Summarization Our abstractive approaches combine the extractor with the abstractor. The combined model (BERT-ext + abs) without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking, showing the effectiveness of our extractor network. With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of BIBREF8. In addition, the result is better than those of all the other abstractive methods exploiting extractive approaches in them BIBREF9, BIBREF8, BIBREF10. Results ::: CNN/Daily Mail ::: Redundancy Control Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences, there can be still remaining overlaps between extracted sentences. We found that the additional methods reducing redundancies can improve the summarization quality, especially on CNN/Daily Mail dataset. We tried Trigram Blocking BIBREF1 for extractor and Reranking BIBREF8 for abstractor, and we empirically found that the reranking only improves the performance. This helps the model to compress the extracted sentences focusing on disjoint information, even if there are some partial overlaps between the sentences. Our best abstractive model (BERT-ext + abs + RL + rerank) achieves the new state-of-the-art performance for abstractive summarization in terms of average ROUGE score, with large margins on ROUGE-L. However, we empirically found that the reranking method has no effect or has negative effect on NYT50 or DUC-2002 dataset. Hence, we don't apply it for the remaining datasets. Results ::: CNN/Daily Mail ::: Combinatorial Reward Before seeing the effects of our summary-level rewards on final results, we check the upper bounds of different training signals for the full model. All the document sentences are paraphrased with our trained abstractor, and then we find the best set for each search method. Sentence-matching finds sentences with the highest ROUGE-L score for each sentence in the gold summary. This search method matches with the best reward from BIBREF8. Greedy Search is the same method explained for extractor pre-training in section SECREF11. Combination Search selects a set of sentences which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences. Due to time constraints, we limited the maximum number of sentences to 5. This method corresponds to our final return in RL training. Table TABREF31 shows the summary-level ROUGE scores of previously explained methods. We see considerable gaps between Sentence-matching and Greedy Search, while the scores of Greedy Search are close to those of Combination Search. Note that since we limited the number of sentences for Combination Search, the exact scores for it would be higher. The scores can be interpreted to be upper bounds for corresponding training methods. This result supports our training strategy; pre-training with Greedy Search and final optimization with the combinatorial return. Additionally, we experiment to verify the contribution of our training method. We train the same model with different training signals; Sentence-level reward from BIBREF8 and combinatorial reward from ours. The results are shown in Table TABREF34. 
Both with and without reranking, the models trained with the combinatorial reward consistently outperform those trained with the sentence-level reward. Results ::: CNN/Daily Mail ::: Human Evaluation We also conduct a human evaluation to ensure the robustness of our training procedure. We measure the relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summary's fluency, grammaticality, and coherence. To evaluate both these criteria, we design an Amazon Mechanical Turk experiment based on a ranking method, inspired by BIBREF36. We randomly select 20 samples from the CNN/Daily Mail test set and ask the human testers (3 for each sample) to rank summaries (for relevance and readability) produced by 3 different models: our final model, that of BIBREF8 and that of BIBREF1. 2, 1 and 0 points were given according to the ranking. The models were anonymized and randomly shuffled. Following previous work, the input article and ground truth summaries are also shown to the human participants in addition to the three model summaries. From the results shown in Table TABREF36, we can see that our model is better in relevance compared to the others. In terms of readability, there was no noticeable difference. Results ::: New York Times corpus Table TABREF38 gives the results on the NYT50 dataset. We see that our BERT-ext + abs + RL outperforms all the extractive and abstractive models, except on ROUGE-1 for BIBREF1. Comparing with two recent models that adapted BERT for their summarization models BIBREF1, BIBREF4, we can say that we propose another method that successfully leverages BERT for summarization. In addition, the experiment proves the effectiveness of our RL training, with an improvement of about 2 points for each ROUGE metric. Results ::: DUC-2002 We also evaluated the models trained on the CNN/Daily Mail dataset on the out-of-domain DUC-2002 test set, as shown in Table TABREF41. BERT-ext + abs + RL outperforms the baseline models by large margins on all of the ROUGE scores. This result shows that our model generalizes better. Related Work There have been a variety of deep neural network models for abstractive document summarization. One of the most dominant structures is the sequence-to-sequence (seq2seq) model with attention mechanism BIBREF37, BIBREF38, BIBREF39. BIBREF25 introduced the Pointer Generator network that implicitly combines abstraction with extraction, using a copy mechanism BIBREF40, BIBREF41. More recently, there have been several studies that have attempted to improve the performance of abstractive summarization by explicitly combining it with extractive models. Some notable examples include the use of an inconsistency loss BIBREF9, key phrase extraction BIBREF42, BIBREF10, and sentence extraction with rewriting BIBREF8. Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor. Reinforcement learning has been shown to be effective for directly optimizing a non-differentiable objective in language generation, including text summarization BIBREF43, BIBREF27, BIBREF35, BIBREF44, BIBREF11. BIBREF27 use actor-critic methods for language generation, using reward shaping BIBREF12 to address the sparsity of training signals. Inspired by this, we generalize it to sentence extraction to give per-step rewards while preserving optimality.
Conclusions We have improved Sentence Rewriting approaches for abstractive summarization, proposing a novel extractor architecture exploiting BERT and a novel training procedure which globally optimizes summary-level ROUGE metric. Our approach achieves the new state-of-the-art on both CNN/Daily Mail and New York Times datasets as well as much better generalization on DUC-2002 test set. Acknowledgments We thank anonymous reviewers for their constructive and fruitful comments. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2016M3C4A7952587).
Two neural networks: an extractor based on an encoder (BERT) and a decoder (LSTM Pointer Network BIBREF22) and an abstractor identical to the one proposed in BIBREF8.
01123a39574bdc4684aafa59c52d956b532d2e53
01123a39574bdc4684aafa59c52d956b532d2e53_0
Q: By how much does their method outperform state-of-the-art OOD detection? Text: Introduction Recently, there has been a surge of excitement in developing chatbots for various purposes in research and enterprise. Data-driven approaches offered by common bot building platforms (e.g. Google Dialogflow, Amazon Alexa Skills Kit, Microsoft Bot Framework) make it possible for a wide range of users to easily create dialog systems with a limited amount of data in their domain of interest. Although most task-oriented dialog systems are built for a closed set of target domains, any failure to detect out-of-domain (OOD) utterances and respond with an appropriate fallback action can lead to frustrating user experience. There have been a set of prior approaches for OOD detection which require both in-domain (IND) and OOD data BIBREF0 , BIBREF1 . However, it is a formidable task to collect sufficient data to cover in theory unbounded variety of OOD utterances. In contrast, BIBREF2 introduced an in-domain verification method that requires only IND utterances. Later, with the rise of deep neural networks, BIBREF3 proposed an autoencoder-based OOD detection method which surpasses prior approaches without access to OOD data. However, those approaches still have some restrictions such that there must be multiple sub-domains to learn utterance representation and one must set a decision threshold for OOD detection. This can prohibit these methods from being used for most bots that focus on a single task. The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. Most prior approaches do not consider dialog context and make predictions for each utterance independently. We will show that this independent decision leads to suboptimal performance even when actual OOD utterances are given to optimize the model and that the use of dialog context helps reduce OOD detection errors. To consider dialog context, we need to connect the OOD detection task with the overall dialog task. Thus, for this work, we build upon Hybrid Code Networks (HCN) BIBREF4 since HCNs achieve state-of-the-art performance in a data-efficient way for task-oriented dialogs, and propose AE-HCNs which extend HCNs with an autoencoder (Figure FIGREF8 ). Furthermore, we release new dialog datasets which are three publicly available dialog corpora augmented with OOD turns in a controlled way (exemplified in Table TABREF2 ) to foster further research. METHODS In this section, we first present the standard HCN model. Then we introduce the proposed AE-HCN(-CNN) model, consisting of an autoencoder and a reconstruction score-aware HCN model. Finally, we describe the counterfeit data augmentation method for training the proposed model. HCN As shown in Figure FIGREF8 , HCN considers a dialog as a sequence of turns. At each turn, HCN takes a tuple, INLINEFORM0 , as input to produce the next system action INLINEFORM1 , where INLINEFORM2 is a user utterance consisting of INLINEFORM3 tokens, i.e., INLINEFORM4 , INLINEFORM5 a one-hot vector encoding the previous system action and INLINEFORM6 a contextual feature vector generated by domain-specific code. The user utterance is encoded as a concatenation of a bag-of-words representation and an average of word embeddings of the user utterance: DISPLAYFORM0 where INLINEFORM0 denotes a word embedding layer initialized with GloVe BIBREF5 with 100 dimensions. 
HCN then considers the input tuple, INLINEFORM1 , to update the dialog state through an LSTM BIBREF6 with 200 hidden units: DISPLAYFORM0 Finally, a distribution over system actions is calculated by a dense layer with a softmax activation: DISPLAYFORM0 AE-HCN On top of HCN, AE-HCN additionally takes as input an autoencoder's reconstruction score INLINEFORM0 for the user utterance for dialog state update (Figure FIGREF8 ): DISPLAYFORM0 The autoencoder is a standard seq2seq model which projects a user utterance into a latent vector and reconstructs the user utterance. Specifically, the encoder reads INLINEFORM0 using a GRU BIBREF7 to produce a 512-dimensional hidden vector INLINEFORM1 which in turn gets linearly projected to a 200-dimensional latent vector INLINEFORM2 : DISPLAYFORM0 DISPLAYFORM1 The output of the decoder at step INLINEFORM0 is a distribution over words: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 has 512 hidden units. The reconstruction score INLINEFORM1 is the normalized generation probability of INLINEFORM2 : DISPLAYFORM0 AE-HCN-CNN AE-HCN-CNN is a variant of AE-HCN where user utterances are encoded using a CNN layer with max-pooling (following BIBREF8 ) rather than equation EQREF5 : DISPLAYFORM0 The CNN layer considers two kernel sizes (2 and 3) and has 100 filters for each kernel size. Counterfeit Data Augmentation To endow an AE-HCN(-CNN) model with a capability of detecting OOD utterances and producing fallback actions without requiring real OOD data, we augment training data with counterfeit turns. We first select arbitrary turns in a dialog at random according to a counterfeit OOD probability INLINEFORM0 , and insert counterfeit turns before the selected turns. A counterfeit turn consists of a tuple INLINEFORM1 as input and a fallback action INLINEFORM2 as output. We copy INLINEFORM3 and INLINEFORM4 of each selected turn to the corresponding counterfeit turns since OOD utterances do not affect previous system action and feature vectors generated by domain-specific code. Now we generate a counterfeit INLINEFORM5 and INLINEFORM6 . Since we don't know OOD utterances a priori, we randomly choose one of the user utterances of the same dialog to be INLINEFORM7 . This helps the model learn to detect OOD utterances because a random user utterance is contextually inappropriate just like OOD utterances are. We generate INLINEFORM8 by drawing a sample from a uniform distribution, INLINEFORM9 , where INLINEFORM10 is the maximum reconstruction score of training data and INLINEFORM11 is an arbitrary large number. The rationale is that the reconstruction scores of OOD utterances are likely to be larger than INLINEFORM12 but we don't know what distribution the reconstruction scores of OOD turns would follow. Thus we choose the most uninformed distribution, i.e., a uniform distribution so that the model may be encouraged to consider not only reconstruction score but also other contextual features such as the appropriateness of the user utterance given the context, changes in the domain-specific feature vector, and what action the system previously took. DATASETS To study the effect of OOD input on dialog system's performance, we use three task-oriented dialog datasets: bAbI6 BIBREF9 initially collected for Dialog State Tracking Challenge 2 BIBREF10 ; GR and GM taken from Google multi-domain dialog datasets BIBREF11 . Basic statistics of the datasets are shown in Table TABREF22 . 
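The counterfeit data augmentation described above can be sketched as follows; the dict-based turn representation, the field names, and the default hyperparameter values are our own simplification rather than the authors' implementation.

```python
import random

def augment_with_counterfeits(dialog, s_max, fallback_action,
                              p_ood=0.15, large=30.0):
    """Insert counterfeit OOD turns before randomly selected turns of a dialog.

    dialog: list of turns, each a dict with keys
            'utterance', 'prev_action', 'features', 'recon_score', 'target'.
    s_max:  maximum autoencoder reconstruction score observed on training data.
    """
    augmented = []
    utterances = [t["utterance"] for t in dialog]
    for turn in dialog:
        if random.random() < p_ood:
            counterfeit = {
                "utterance": random.choice(utterances),        # random utterance from the same dialog
                "prev_action": turn["prev_action"],            # copied from the selected turn
                "features": turn["features"],                  # copied domain-specific features
                "recon_score": random.uniform(s_max, s_max + large),
                "target": fallback_action,                     # model should output the fallback action
            }
            augmented.append(counterfeit)
        augmented.append(turn)
    return augmented
```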
bAbI6 deals with restaurant finding tasks, GM buying a movie ticket, and GR reserving a restaurant table, respectively. We generated distinct action templates by replacing entities with slot types and consolidating based on dialog act annotations. We augment test datasets (denoted as Test-OOD in Table TABREF22 ) with real user utterances from other domains in a controlled way. Our OOD augmentations are as follows: These two augmentation types reflect a specific dialog pattern of interest (see Table TABREF2 ): first, the user utters a request from another domain at an arbitrary point in the dialog (each turn is augmented with the probability INLINEFORM0 , which is set to 0.2 for this study), and the system answers accordingly. This may go on for several turns in a row —each following turn is augmented with the probability INLINEFORM1 , which is set to 0.4 for this study. Eventually, the OOD sequence ends up and the dialog continues as usual, with a segment-level OOD content of the user affirming their mistake. While we introduce the OOD augmentations in a controlled programmatic way, the actual OOD content is natural. The OOD utterances are taken from dialog datasets in several foreign domains: 1) Frames dataset BIBREF12 — travel booking (1198 utterances); 2) Stanford Key-Value Retrieval Network Dataset BIBREF13 — calendar scheduling, weather information retrieval, city navigation (3030 utterances); 3) Dialog State Tracking Challenge 1 BIBREF14 — bus information (968 utterances). In order to avoid incomplete/elliptical phrases, we only took the first user's utterances from the dialogs. For segment-level OOD content, we mined utterances with the explicit affirmation of a mistake from Twitter and Reddit conversations datasets — 701 and 500 utterances respectively. EXPERIMENTAL SETUP AND EVALUATION We comparatively evaluate four different models: 1) an HCN model trained on in-domain training data; 2) an AE-HCN-Indep model which is the same as the HCN model except that it deals with OOD utterances using an independent autoencoder-based rule to mimic BIBREF3 – when the reconstruction score is greater than a threshold, the fallback action is chosen; we set the threshold to the maximum reconstruction score of training data; 3) an AE-HCN(-CNN) model trained on training data augmented with counterfeit OOD turns – the counterfeit OOD probability INLINEFORM0 is set to 15% and INLINEFORM1 to 30. We apply dropout to the user utterance encoding with the probability 0.3. We use the Adam optimizer BIBREF15 , with gradients computed on mini-batches of size 1 and clipped with norm value 5. The learning rate was set to INLINEFORM2 throughout the training and all the other hyperparameters were left as suggested in BIBREF15 . We performed early stopping based on the performance of the evaluation data to avoid overfitting. We first pretrain the autoencoder on in-domain training data and keep it fixed while training other components. The result is shown in Table TABREF23 . Since there are multiple actions that are appropriate for a given dialog context, we use per-utterance Precision@K as performance metric. We also report f1-score for OOD detection to measure the balance between precision and recall. The performances of HCN on Test-OOD are about 15 points down on average from those on Test, showing the detrimental impact of OOD utterances to such models only trained on in-domain training data. 
AE-HCN(-CNN) outperforms HCN on Test-OOD by a large margin about 17(20) points on average while keeping the minimum performance trade-off compared to Test. Interestingly, AE-HCN-CNN has even better performance than HCN on Test, indicating that, with the CNN encoder, counterfeit OOD augmentation acts as an effective regularization. In contrast, AE-HCN-Indep failed to robustly detect OOD utterances, resulting in much lower numbers for both metrics on Test-OOD as well as hurting the performance on Test. This result indicates two crucial points: 1) the inherent difficulty of finding an appropriate threshold value without actually seeing OOD data; 2) the limitation of the models which do not consider context. For the first point, Figure FIGREF24 plots histograms of reconstruction scores for IND and OOD utterances of bAbI6 Test-OOD. If OOD utterances had been known a priori, the threshold should have been set to a much higher value than the maximum reconstruction score of IND training data (6.16 in this case). For the second point, Table TABREF25 shows the search for the best threshold value for AE-HCN-Indep on the bAbI6 task when given actual OOD utterances (which is highly unrealistic for the real-world scenario). Note that the best performance achieved at 9 is still not as good as that of AE-HCN(-CNN). This implies that we can perform better OOD detection by jointly considering other context features. Finally, we conduct a sensitivity analysis by varying counterfeit OOD probabilities. Table TABREF26 shows performances of AE-HCN-CNN on bAbI6 Test-OOD with different INLINEFORM0 values, ranging from 5% to 30%. The result indicates that our method manages to produce good performance without regard to the INLINEFORM1 value. This superior stability nicely contrasts with the high sensitivity of AE-HCN-Indep with regard to threshold values as shown in Table TABREF25 . CONCLUSION We proposed a novel OOD detection method that does not require OOD data without any restrictions by utilizing counterfeit OOD turns in the context of a dialog. We also release new dialog datasets which are three publicly available dialog corpora augmented with natural OOD turns to foster further research. In the presence of OOD utterances, our method outperforms state-of-the-art dialog models equipped with an OOD detection mechanism by a large margin — more than 17 points in Precision@K on average — while minimizing performance trade-off on in-domain test data. The detailed analysis sheds light on the difficulty of optimizing context-independent OOD detection and justifies the necessity of context-aware OOD handling models. We plan to explore other ways of scoring OOD utterances than autoencoders. For example, variational autoencoders or generative adversarial networks have great potential. We are also interested in using generative models to produce more realistic counterfeit user utterances.
AE-HCN outperforms HCN by about 17 points and AE-HCN-CNN by about 20 points on average (Precision@K on Test-OOD)
954c4756e293fd5c26dc50dc74f505cc94b3f8cc
954c4756e293fd5c26dc50dc74f505cc94b3f8cc_0
Q: What are dilated convolutions? Text: Introduction Keyword spotting (KWS) aims at detecting a pre-defined keyword or set of keywords in a continuous stream of audio. In particular, wake-word detection is an increasingly important application of KWS, used to initiate an interaction with a voice interface. In practice, such systems run on low-resource devices and listen continuously for a specific wake word. An effective on-device KWS therefore requires real-time response and high accuracy for a good user experience, while limiting memory footprint and computational cost. Traditional approaches in keyword spotting tasks involve Hidden Markov Models (HMMs) for modeling both keyword and background BIBREF0 , BIBREF1 , BIBREF2 . In recent years, Deep Neural Networks (DNNs) have proven to yield efficient small-footprint solutions, as shown first by the fully-connected networks introduced in BIBREF3 . More advanced architectures have been successfully applied to KWS problems, such as Convolutional Neural Networks (CNNs) exploiting local dependencies BIBREF4 , BIBREF5 . They have demonstrated efficiency in terms of inference speed and computational cost but fail at capturing large patterns with reasonably small models. Recent works have suggested RNN based keyword spotting using LSTM cells that can leverage longer temporal context using gating mechanism and internal states BIBREF6 , BIBREF7 , BIBREF8 . However, because RNNs may suffer from state saturation when facing continuous input streams BIBREF9 , their internal state needs to be periodically reset. In this work we focus on end-to-end stateless temporal modeling which can take advantage of a large context while limiting computation and avoiding saturation issues. By end-to-end model, we mean a straight-forward model with a binary target that does not require a precise phoneme alignment beforehand. We explore an architecture based on a stack of dilated convolution layers, effectively operating on a broader scale than with standard convolutions while limiting model size. We further improve our solution with gated activations and residual skip-connections, inspired by the WaveNet style architecture explored previously for text-to-speech applications BIBREF10 and voice activity detection BIBREF9 , but never applied to KWS to our knowledge. In BIBREF11 , the authors explore Deep Residual Networks (ResNets) for KWS. ResNets differ from WaveNet models in that they do not leverage skip-connections and gating, and apply convolution kernels in the frequency domain, drastically increasing the computational cost. In addition, the long-term dependency our model can capture is exploited by implementing a custom “end-of-keyword” target labeling, increasing the accuracy of our model. A max-pooling loss trained LSTM initialized with a cross-entropy pre-trained network is chosen as a baseline, as it is one of the most effective models taking advantage of longer temporal contexts BIBREF7 . The rest of the paper is organized in two main parts. Section "System description" describes the different components of our model as well as our labeling. Section "Experiments" focuses on the experimental setup and performance results obtained on a publicly available “Hey Snips” dataset. System description The acoustic features are 20-dimensional log-Mel filterbank energies (LFBEs), extracted from the input audio every 10ms over a window of 25ms. A binary target is used, see Section "End-of-keyword labeling" for more details about labeling. 
During decoding, the system computes smoothed posteriors by averaging the output of a sliding context window containing $w_{smooth}$ frames, a parameter chosen after experimental tuning. End-to-end models such as the one presented here do not require any post-processing step besides smoothing, as opposed to multi-class models such as BIBREF3 , BIBREF4 . Indeed, the system triggers when the smoothed keyword posterior exceeds a pre-defined threshold. Neural network architecture WaveNet was initially proposed in BIBREF10 , as a generative model for speech synthesis and other audio generation tasks. It consists in stacked causal convolution layers wrapped in a residual block with gated activation units as depicted in Figure 1 . Standard convolutional networks cannot capture long temporal patterns with reasonably small models due to the increase in computational cost yielded by larger receptive fields. Dilated convolutions skip some input values so that the convolution kernel is applied over a larger area than its own. The network therefore operates on a larger scale, without the downside of increasing the number of parameters. The receptive field $r$ of a network made of stacked convolutions indeed reads: $r = \sum _i d_i (s_i - 1),$ where $d_i$ refers to the dilation rate ( $d_i=1$ for normal convolutions) and $s_i$ the filter size of the $i^{th}$ layer. Additionally, causal convolutions kernels ensure a causal ordering of input frames: the prediction emitted at time $t$ only depends on previous time stamps. It allows to reduce the latency at inference time. As mentioned in BIBREF10 , gated activations units – a combination of tanh and sigmoid activations controlling the propagation of information to the next layer – prove to efficiently model audio signals. Residual learning strategies such as skip connections are also introduced to speed up convergence and address the issue of vanishing gradients posed by the training of models of higher depth. Each layer yields two outputs: one is directly fed to the next layer as usual, but the second one skips it. All skip-connections outputs are then summed into the final output of the network. A large temporal dependency, can therefore be achieved by stacking multiple dilated convolution layers. By inserting residual connections between each layer, we are able to train a network of 24 layers on relatively small amount of data, which corresponds to a receptive field of 182 frames or 1.83s. The importance of gating and residual connections is analyzed in Section 3.3.2. Streaming inference In addition to reducing the model size, dilated convolutions allow the network to run in a streaming fashion during inference, drastically reducing the computational cost. When receiving a new input frame, the corresponding posteriors are recovered using previous computations, kept in memory for efficiency purposes as described in Figure 2 . This cached implementation allows to reduce the amount of Floating Point Operations per Second (FLOPS) to a level suiting production requirements. End-of-keyword labeling Our approach consists in associating a target 1 to frames within a given time interval $\Delta t$ before and after the end of the keyword. The optimal value for $\Delta t$ is tuned on the dev set. Additionally, a masking scheme is applied, discarding background frames outside of the labeling window in positive samples. A traditional labeling approach, however, associates a target 1 to all frames aligned with the keyword. 
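Referring back to the architecture description above, a minimal PyTorch sketch of one gated, dilated causal convolution block with residual and skip outputs follows. Channel sizes and projection layers are illustrative rather than the exact configuration used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    # one WaveNet-style block: dilated causal convolution, gated activation,
    # residual connection and skip output
    def __init__(self, channels, skip_channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding => causal
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.res_proj = nn.Conv1d(channels, channels, 1)       # residual projection
        self.skip_proj = nn.Conv1d(channels, skip_channels, 1) # skip projection

    def forward(self, x):                                # x: (batch, channels, time)
        padded = F.pad(x, (self.pad, 0))                 # only past context is used
        z = torch.tanh(self.filter_conv(padded)) * torch.sigmoid(self.gate_conv(padded))
        return x + self.res_proj(z), self.skip_proj(z)   # (residual out, skip out)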
In this configuration, the model has a tendency to trigger as soon as the keyword starts, whether or not the sample contains only a fraction of the keyword. One advantage of our approach is that the network will trigger near the end of keyword, once it has seen enough context. Moreover, our labeling does not need any phoneme alignment, but only to detect the end of the keyword, which is easily obtained with a VAD system. Furthermore, thanks to masking, the precise frontiers of the labeling window are not learned, making the network more robust to labeling imprecisions. The relative importance of end-of-keyword labeling and masking are analyzed in Section UID18 . Open dataset The proposed approach is evaluated on a crowdsourced close-talk dataset. The chosen keyword is “Hey Snips” pronounced with no pause between the two words. The dataset contains a large variety of English accents and recording environments. Around 11K wake word utterances and 86.5K ( $\sim $ 96 hours) negative examples have been recorded, see Table 1 for more details. Note that negative samples have been recorded in the same conditions than wake-word utterances, therefore arising from the same domain (speaker, hardware, environment, etc.). It thus prevents the model from discerning the two classes based on their domain-dependent acoustic features. Positive data has been cleaned by automatically removing samples of extreme duration, or samples with repeated occurrences of the wake word. Positive dev and test sets have been manually cleaned to discard any mispronunciations of the wake word (e.g. “Hi Snips” or “Hey Snaips”), leaving the training set untouched. Noisy conditions are simulated by augmenting samples with music and noise background audio from Musan BIBREF12 . The positive dev and test datasets are augmented at 5dB of Signal-to-noise Ratio (SNR). The full dataset and its metadata are available for research purposes. Although some keyword spotting datasets are freely available, such as the Speech Commands dataset BIBREF13 for voice commands classification, there is no equivalent in the specific wake-word detection field. By establishing an open reference for wake-word detection, we hope to contribute to promote transparency and reproducibility in a highly concurrent field where datasets are often kept private. Experimental setup The network consists in an initial causal convolution layer (filter size of 3) and 24 layers of gated dilated convolutions (filter size of 3). The 24 dilation rates are a repeating sequence of $\lbrace 1, 2, 4, 8, 1, 2, 4, 8...\rbrace $ . Residual connections are created between each layer and skip connections are accumulated at each layer and are eventually fed to a DNN followed by a softmax for classification as depicted in Figure 1 . We used projection layers of size 16 for residual connections and of size 32 for skip connections. The optimal duration of the end-of-keyword labeling interval as defined in Section "End-of-keyword labeling" is $\Delta t = 160ms$ (15 frames before and 15 frames after the end of the keyword). The posteriors are smoothed over a sliding context window of $w_{smooth}=30$ frames, also tuned on the dev set. The main baseline model is a LSTM trained with a max-pooling based loss initialized with a cross-entropy pre-trained network, as it is another example of end-to-end temporal model BIBREF7 . 
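To make the configuration above concrete, a short plain-Python sketch of the dilation schedule, the resulting receptive field following r = sum_i d_i (s_i - 1) with the initial causal convolution included, and the 30-frame posterior smoothing. The smoothing function is a simple causal moving average, one plausible reading of the sliding context window.

dilations = [1, 2, 4, 8] * 6          # 24 layers, repeating {1, 2, 4, 8}
kernel_size = 3

# receptive field of the stack plus the initial causal convolution (also size 3)
receptive_field = (kernel_size - 1) + sum(d * (kernel_size - 1) for d in dilations)
# -> 182 frames, i.e. roughly 1.8 s at a 10 ms frame shift

def smooth_posteriors(posteriors, w_smooth=30):
    # average the raw keyword posteriors over a sliding context window
    smoothed = []
    for t in range(len(posteriors)):
        window = posteriors[max(0, t - w_smooth + 1): t + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed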
The idea of the max-pooling loss is to teach the network to fire at its highest confidence time by back-propagating loss from the most informative keyword frame that has the maximum posterior for the corresponding keyword. More specifically, the network is a single layer of unidirectional LSTM with 128 memory blocks and a projection layer of dimension 64, following a similar configuration to BIBREF7 but matching the same number of parameters than the proposed architecture (see Section UID15 ). 10 frames in the past and 10 frames in the future are stacked to the input frame. Standard frame labeling is applied, but with the frame masking strategy described in Section "End-of-keyword labeling" . The authors of BIBREF7 mentioned back-propagating loss only from the last few frames, but said that the LSTM network performed poorly in this setting. The same smoothing strategy is applied on an window $w_{smooth}=8$ frames, after tuning on dev data. For comparison, we also add as a CNN variant the base architecture trad-fpool3 from BIBREF4 , a multi-class model with 4 output labels (“hey”, “sni”, “ps”, and background). Among those proposed in BIBREF4 , this is the architecture with the lowest amount of FLOPS while having a similar number of parameters as the two other models studied here (see Section UID15 ). The Adam optimization method is used for the three models with a learning rate of $10^{-3}$ for the proposed architecture, $10^{-4}$ for the CNN, and $5 \cdot 10^{-5}$ for the LSTM baseline. Additionally, gradient norm clipping to 10 is applied. A scaled uniform distribution for initialization BIBREF14 (or “Xavier” initialization) yielded the best performance for the three models. We also note that the LSTM network is much more sensitive to the chosen initialization scheme. Results The performance of the three models is first measured by observing the False Rejection Rate (FRR) on clean and noisy (5dB SNR) positives samples at the operating threshold of 0.5 False Alarms per Hour (FAH) computed on the collected negative data. Hyper parameters are tuned on the dev set and results are reported on the test set. Table 2 displays these quantities as well as the number of parameters and multiplications per second performed during inference. The proposed architecture yields a lower FRR than the LSTM (resp. CNN) baseline with a 94% (resp. 95%) and 86% (resp. 88%) decrease in clean and noisy conditions. The number of parameters is similar for the three architectures, but the amount of FLOPS is higher by an order of magnitude for the CNN baseline while resulting in a poorer FRR in a noisy environment. Figure 3 provides the Detection Error Tradeoff (DET) curves and shows that the WaveNet model also outperforms the baselines on a whole range of triggering thresholds. To assess the relative importance of some characteristics of the proposed architecture, we study the difference in FRR observed once each of them is removed separately, all things being equal. Table 3 shows that the end-of-keyword labeling is particularly helpful in improving the FRR at a fixed FAH, especially in noisy conditions. Masking background frames in positive samples also helps, but in a lower magnitude. Similarly to what is observed in BIBREF9 , gating contributes to improving the FRR especially in noisy conditions. We finally observed that removing either residual or skip connections separately has little effect on the performance. However, we could not properly train the proposed model without any of these connections. 
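A minimal PyTorch sketch of the max-pooling loss idea described above: for a positive sample, the cross-entropy is back-propagated only from the frame with the highest keyword posterior. The handling of negative samples shown here (a loss on every frame) is an assumption; BIBREF7 gives the exact formulation.

import torch
import torch.nn.functional as F

def max_pooling_loss(logits, is_positive):
    # logits: (time, 2) per-frame scores for {background, keyword}
    posteriors = F.softmax(logits, dim=-1)[:, 1]      # keyword posterior per frame
    if is_positive:
        t_star = torch.argmax(posteriors)             # fire at the most confident frame
        target = torch.tensor([1], device=logits.device)
        return F.cross_entropy(logits[t_star].unsqueeze(0), target)
    # negative samples: penalise every frame (an assumption)
    targets = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)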
It seems to confirm that implementing at least one bypassing strategy is key for constructing deeper network architectures. Conclusion This paper introduces an end-to-end stateless modeling for keyword spotting, based on dilated convolutions coupled with residual connections and gating encouraged by the success of the WaveNet architecture in audio generation tasks BIBREF10 , BIBREF9 . Additionally, a custom frame labeling is applied, associating a target 1 to frames located within a small time interval around the end of the keyword. The proposed architecture is compared against a LSTM baseline, similar to the one proposed in BIBREF7 . Because of their binary targets, both the proposed model and the LSTM baseline do not require any phoneme alignment or post-processing besides posterior smoothing. We also added a multi-class CNN baseline BIBREF4 for comparison. We have shown that the presented WaveNet model significantly reduces the false rejection rate at a fixed false alarm rate of 0.5 per hour, in both clean and noisy environments, on a crowdsourced dataset made publicly available for research purposes. The proposed model seems to be very efficient in the specific domain defined by this dataset and future work will focus on domain adaptation in terms of recording hardware, accents, or far-field settings, to be deployed easily in new environments. Acknowledgements We thank Oleksandr Olgashko for his contribution in developing the training framework. We are grateful to the crowd of contributors who recorded the dataset. We are indebted to the users of the Snips Voice Platform for valuable feedback.
Convolutions whose kernels skip some input values so that they are applied over a larger area than standard convolutions, letting the network operate on a broader temporal scale without increasing the number of parameters.
ee279ace5bc69d15e640da967bd4214fe264aa1a
ee279ace5bc69d15e640da967bd4214fe264aa1a_0
Q: what was the evaluation metrics studied in this work? Text: Introduction Knowledge graphs are a vital source for disambiguation and discovery in various tasks such as question answering BIBREF0 , information extraction BIBREF1 and search BIBREF2 . They are, however, known to suffer from data quality issues BIBREF3 . Most prominently, since formal knowledge is inherently sparse, relevant facts are often missing from the graph. To overcome this problem, knowledge graph completion (KGC) or link prediction strives to enrich existing graphs with new facts. Formally, a knowledge graph $\mathcal {G} \, {\subset } \, E {\times } R {\times } E$ consists of facts or triples $(head,rel,tail)$ , where $E$ and $R$ denote finite sets of entities and relations respectively. Knowledge graph completion is targeted at assessing the probability of triples not present in the graph. To do so, a common approach involves representing the entities and relations in triples using real-valued vectors called embeddings. The probability of the triple is then inferred by geometric reasoning over the embeddings. Embeddings are usually generated by learning to discriminate real triples from randomly corrupted ones BIBREF4 , BIBREF5 , BIBREF6 . A key problem with most existing approaches is that the plausibility of links can be determined for known entities only. For many applications, however, it is of interest to infer knowledge about entities not present in the graph. Imagine answering the question “What is German actress Julia Lindig known for?”, where Julia Lindig is not a known entity. Here, information can be inferred from the question, typically using word embeddings BIBREF7 , BIBREF8 , BIBREF9 . Similar to entity embeddings, these techniques represent words with embedding vectors. These can be pre-trained on text corpora, thereby capturing word similarity and semantic relations, which may help to predict the plausibility of the triple $(Julia\_Lindig, starred\_in, Lola\_Rennt)$ . This challenge is known as open-world (or zero-shot) KGC. To the best of our knowledge, few open-world KGC models have been proposed so far, all of which are full replacements for regular KGC models and require textual descriptions for all entities BIBREF10 , BIBREF11 . In this paper, we suggest a different approach, namely to extend existing KGC models with pre-trained word embeddings. Given an new entity, we aggregate its name and description into a text-based entity representation. We then learn a transformation from text-based embedding space to graph-based embedding space, where we can now apply the graph-based model for predicting links. We show that this simple approach yields competitive results, and offers two key benefits: First, it is independent of the specific KGC model used, which allows us to use multiple different link prediction models from which we can pick the best one. Second, as training on the graph structure happens independently from training on text, our approach can exploit the full-scale knowledge graph structure in situations where textual information is scarce because learning the transformation is robust even for such situations. We coin our approach OWE for Open World Extension and combine it with several common KGC models, obtaining TransE-OWE, DistMult-OWE, and ComplEx-OWE. 
We demonstrate competitive results on common datasets for open-world prediction, and also introduce a new dataset called FB15k-237-OWE, which avoids bias towards long textual descriptions and trivial regularities like inverse relations. The code and the new FB15k-237-OWE dataset are available online. Approach Our approach starts with a regular link prediction model (in the following also referred to as the graph-based model) as outlined in Section "Related Work" and visualised in Fig. 1 . The model scores triples $(h, r, t)$ : $$score(h,r,t) = \phi ({u}_{h}, {u}_{r}, {u}_{t})$$ (Eq. 6) where ${u}_{x}$ denotes the embedding of entity/relation $x$ . Typically, ${u}_{x} \in \mathbb {R}^d$ , but other options are possible. For example, in ComplEx BIBREF6 , ${u}_{x}$ is complex-valued ( ${u}_{x} \in \mathbb {C}^d$ ). $\phi $ is a scoring function that depends on the link prediction model and will be adressed in more detail in Section "Link Prediction Models" . Link Prediction Models Since our approach is independent of the specific link prediction model used, we test three commonly used models in this work: TransE: $\phi ({u}_{h}, {u}_{r}, {u}_{t}) = -||{u}_{h} {+} {u}_{r} {-} {u}_{t}||_2$ $\;\;\;$ DistMult: $\phi ({u}_{h}, {u}_{r}, {u}_{t}) = \langle {u}_{h},{u}_{r},{u}_{t}\rangle $ $\;\;\;\;\;\;\;\;$ ComplEx: $\phi ({u}_{h}, {u}_{r}, {u}_{t}) = \textrm {Re}({\langle }{u}_{h}, {u}_{r}, {\overline{u}}_{t}{\rangle })$ $\;$ Note that the first two use real-valued embeddings, while ComplEx uses complex-valued embeddings (where ${\overline{u}} = \textrm {Re}({u}) - i{\cdot } \textrm {Im}({u})$ denotes the complex conjugate of embedding ${u}$ ). All models are trained using their original loss functions and validated using closed-world validation data. Word Embeddings and Aggregation We use pre-trained word embeddings trained on large text corpora. Since the number of entities in the datasets used is limited and we found overfitting to be an issue, we omit any refinement of the embeddings. We tested 200-dimensional Glove embeddings BIBREF8 and 300-dimensional Wikipedia2Vec embeddings BIBREF18 . Note that Wikipedia2Vec embeddings contain phrase embeddings, which we use as an embedding for entity names (like "Julia Lindig"). If no phrase embedding is available, we split the name into single tokens and use token-wise embeddings. If no embedding is available for a token, we use a vector of zeros as an “unknown” token. To aggregate word embeddings to an entity embedding (function $\Psi ^{agg}$ , Equation 11 ), approaches in the literature range from simple averaging BIBREF8 over Long Short Term Memory Networks (LSTMs) BIBREF23 to relation-specific masking BIBREF11 . We use averaging as an aggregation function. Here, the word embedding vectors are averaged to obtain a single representative embedding. To prevent overfitting, we apply dropout during training, i.e. embeddings of some words are randomly replaced by the unknown token before averaging. Transformation Functions The key to open-world prediction is the mapping from text-based entity embeddings ${v}_e$ to graph-based ones ${u}_e$ . Several different transformation functions $\Psi ^{map}$ can be learned for this task. In this paper, we discuss three options: A simple linear function $\Psi ^{map}({v}) = A {\cdot } {v}$ . For ComplEx, separate matrices are used for the real and imaginary part: $\Psi ^{map}({v}) = A {\cdot } {v} + i \cdot A^{\prime } {\cdot } {v}$ Here, $\Psi ^{map}$ is an affine function $\Psi ^{map}({v}) = A {\cdot } {v} + {b}$ . 
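The three scoring functions above translate directly into code; a minimal NumPy sketch, with embeddings as 1-D arrays (complex-valued for ComplEx):

import numpy as np

def score_transe(h, r, t):
    return -np.linalg.norm(h + r - t)            # -||h + r - t||_2

def score_distmult(h, r, t):
    return np.sum(h * r * t)                     # trilinear product <h, r, t>

def score_complex(h, r, t):
    # complex-valued embeddings; Re(<h, r, conj(t)>)
    return np.real(np.sum(h * r * np.conj(t)))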
For ComplEx, separate matrices and vectors are trained just like above: $\Psi ^{map}({v}) = (A {\cdot } {v} + {b}) + i \cdot (A^{\prime } {\cdot } {v} + {b^{\prime }})$ $\Psi ^{map}$ is a four layer Multi-Layer Perceptron (MLP) with ReLU activation functions. The output layer is affine. We did not perform an extensive hyperparameter search here. To train the transformations, first a link prediction model is trained on the full graph, obtaining entity embeddings ${u}_1,...,{u}_n$ . We then choose all entities $e_{i_1},...,e_{i_m}$ with textual metadata (names and/or descriptions), and extract text-based embedding ${v}_{i_1},...,{v}_{i_m}$ for them using aggregation (see above). Finally, $\Psi ^{map}$ is learned by minimizing the loss function $$L(\Theta ) = \sum _{k=1}^m \Big | \Big |\Psi _\Theta ^{map}({v}_{i_k}) - {u}_{i_k} \Big |\Big |_2$$ (Eq. 22) using batched stochastic gradient descent, where $\Theta $ denotes the parameters of $\Psi ^{map}$ (e.g., the weight matrices and bias vectors $A,b$ ). For ComplEx, the above loss is summed for real and imaginary parts, and training happens on the sum. We apply no fine-tuning, neither on the graph nor on the text embeddings. Experiments In this section, we study the impact of our model's parameters ( $\Psi ^{agg}, \Psi ^{map}$ , text embeddings) on prediction performance. We also provide mappings of selected open-world entities, and compare our results with the state-of-the-art. Datasets Closed-world KGC tasks are commonly evaluated on WordNet and Freebase subsets, such as WN18, WN18RR, FB15k, and FB15k-237. For open-world KGC, the following datasets have been suggested: BIBREF10 introduced FB20k, which builds upon the FB15k dataset by adding test triples with unseen entities, which are selected to have long textual descriptions. BIBREF11 introduced DBPedia50k and DBPedia500k datasets for both open-world and closed-world KGC tasks. However, the above datasets display a bias towards long textual descriptions: DBpedia50k has an average description length of 454 words, FB20k of 147 words. Also, for neither of the datasets precautions have been taken to avoid redundant inverse relations, which allows models to exploit trivial patterns in the data BIBREF24 . To overcome these problems, we introduce a new dataset named FB15k-237-OWE. FB15k-237-OWE is based on the well-known FB15K-237 dataset, where redundant inverse relations have been removed. Also, we avoid a bias towards entities with longer textual descriptions: Test entities are uniformly sampled from FB15K-237, and only short Wikidata descriptions (5 words on average) are used. In the following section, the sampling strategy for FB15k-237-OWE is briefly outlined: For tail prediction test set, we start with FB15K-237 and randomly pick heads (by uniform sampling over all head entities). Each picked head $x$ is removed from the training graph by moving all triples of the form $(x,?,t)$ to the test set and dropping all triples of the form $(?,?,x)$ if $t$ still remains in the training set after these operations. Similarly, a head prediction test set is prepared from the set of dropped triplets which satisfy the conditions to be in head prediction test set i.e. head must be represented in training set while tail must not be represented. The dataset also contains two validation sets: A closed-world one (with random triples picked from the training set) and an open-world one (with random triples picked from the test set). 
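A minimal PyTorch sketch of learning the affine transformation by minimising the loss above with Adam. The batch size and learning rate follow the setup reported later; the epoch count is illustrative, and ComplEx would use two such maps, one each for the real and imaginary parts.

import torch
import torch.nn as nn

class AffineMap(nn.Module):
    def __init__(self, text_dim, graph_dim):
        super().__init__()
        self.linear = nn.Linear(text_dim, graph_dim)   # Psi(v) = A v + b

    def forward(self, v):
        return self.linear(v)

def train_map(psi_map, text_emb, graph_emb, epochs=200, batch_size=128, lr=1e-3):
    # text_emb, graph_emb: aligned tensors of shape (m, text_dim) / (m, graph_dim);
    # both sets of embeddings stay fixed, only the transformation is trained
    opt = torch.optim.Adam(psi_map.parameters(), lr=lr)
    m = text_emb.shape[0]
    for _ in range(epochs):
        perm = torch.randperm(m)
        for start in range(0, m, batch_size):
            idx = perm[start:start + batch_size]
            loss = torch.norm(psi_map(text_emb[idx]) - graph_emb[idx], dim=1).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return psi_map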
We evaluate our approach on DBPedia50k, FB20k, and the new dataset FB15k-237-OWE. Statistics of the datasets are highlighted in Table 1 and Table 2 . Experimental Setup We perform multiple runs using different KGC models, transformation types, training data, and embeddings used. For each run, both KGC model and transformation $\Psi ^{map}$ are trained on the training set: the KGC model without using any textual information and the transformation using entity names and descriptions. We manually optimize all hyperparameters on the validation set. Due to the lack of an open-world validation set on FB20k, we randomly sampled 10% of the test triples as a validation set. Performance figures are computed using tail prediction on the test sets: For each test triple $(h,r,t)$ with open-world head $h \notin E$ , we rank all known entities $t^{\prime } \in E$ by their score $\phi (h,r,t^{\prime })$ . We then evaluate the ranks of the target entities $t$ with the commonly used mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10. Note that multiple triples with the same head and relation but different tails may occur in the dataset: $(h,r,t_1),...,(h,r,t_p)$ . Following BIBREF13 , when evaluating triple $(h,r,t_i)$ we remove all entities $t_1,...,t_{i-1},t_{i+1},...,t_p$ from the result list . All results (except MRR(raw)) are reported with this filtered approach. Note also that when computing the MRR, given a triple $(h,r,t_i)$ only the reciprocal rank of $t_i$ itself is evaluated (and not the best out of $t_1,...,t_{i},...,t_p$ , which would give better results). This is common when evaluating KGC models BIBREF13 but differs from ConMask's evaluation code, which is why one result in Table 3 differs from BIBREF11 (see the (*) mark). Note also that BIBREF11 add a second filtering method called target filtering: When evaluating a test triple $(h,r,t)$ , tails $t^{\prime }$ are only included in the ranked result list if a triple of the form $(?,r,t^{\prime })$ exists in the training data, otherwise it is skipped. We found this to improve quantitative results substantially, but it limits the predictive power of the model because tails can never be linked via new relations. Therefore, we use target filtering only when comparing with the ConMask and DKRL models from BIBREF11 (Table 3 ). For training TransE and DistMult, we use the OpenKE framework which provides implementations of many common link prediction models. For closed-world graph embedding, we use both OpenKE and our own implementation after validating the equivalence of both. For training the transformation $\Psi ^{map}$ , we used the Adam optimizer with a learning rate of $10^{-3}$ and batch size of 128. For DBPedia50k we use a dropout of 0.5, while for FB20k and FB15k-237-OWE we use no dropout. The embedding used is the pretrained 300 dimensional Wikipedia2Vec embedding and the transformation used is affine unless stated otherwise. Comparison with State of the Art We first compare our model ComplEx-OWE with other open-world link prediction models in Table 3 . For a fair comparison, all the results are evaluated using target filtering. For all models and all datasets, 200-dimensional Glove embeddings were used, except for the Complex-OWE300, which uses 300-dimensional Wikipedia2Vec embeddings. The effect of different embeddings will be studied further in Section "Text Embeddings and Robustness To Missing Entity Metadata" . 
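The filtered ranking protocol and the reported metrics described above can be summarised in a few lines of NumPy; function and variable names are illustrative:

import numpy as np

def filtered_rank(scores, target_idx, other_true_idx):
    # remove the other known true tails, keep the evaluated tail itself
    mask = np.ones(len(scores), dtype=bool)
    mask[list(other_true_idx)] = False
    mask[target_idx] = True
    return 1 + int(np.sum(scores[mask] > scores[target_idx]))

def summarize(ranks, ks=(1, 3, 10)):
    ranks = np.asarray(ranks, dtype=float)
    stats = {'MR': ranks.mean(), 'MRR': (1.0 / ranks).mean()}
    for k in ks:
        stats['Hits@%d' % k] = float((ranks <= k).mean())
    return stats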
The results for Target Filtering Baseline, DKRL and ConMask were obtained by the implementation provided by BIBREF11 . The Target Filtering Baseline is evaluated by assigning random scores to all targets that pass the target filtering criterion. DKRL uses a two-layer CNN over the entity descriptions. ConMask uses a CNN over the entity names and descriptions along with the relation-based attention weights. It can be seen from Table 3 that our best model, ComplEx-OWE300, performs competitively when compared to ConMask. On DBPedia50k, our model performs best on all metrics except Hits@10. On FB20k it is outperformed by a small margin by ConMask but performs better on Hits@1. On FB15k-237-OWE our model outperforms all other models significantly. We believe that this is due to FB15k-237-OWE having very short descriptions. ConMask generally relies on extracting information from the description of entities with its attention mechanism, whereas our model relies more on extracting information from the textual corpus that the word embedding were trained on. This enables our model to provide good results without relying on having long descriptions. Analysis of Different Link Prediction Models and Transformations Our OWE extension for open-world link prediction can be used with any common KGC model. Therefore, we evaluate three commonly used options, namely TransE, DistMult, and ComplEx. Results are displayed in Table 4 : All three models are trained with embedding dimensionality $d=300$ on the closed-world dataset. For text embeddings, Wikipedia2Vec embeddings of the same dimensionality were used. It can be seen that the performance on the open-world setting matches the expressiveness of the models: ComplEx-OWE with its ability to model even asymmetric relations yields the best results, while the symmetric DistMult-OWE achieves a similar performance. We also test different transformation functions $\Psi ^{map}$ as illustrated in Table 5 . It can be observed that quite simple transformations achieve the strong results: The best performance is achieved by the affine transformation with $49.1$ % HITS@10 by a margin of 2–4 percent. Text Embeddings and Robustness To Missing Entity Metadata In some cases, the knowledge graph may lack textual metadata (both the name and description) for some or all of its entities. Other models like ConMask and DKRL are dependant on textual descriptions, e.g. ConMask uses attention mechanisms to select relation-specific target words from long texts. Therefore, ConMask and DKRL would require completely dropping triples without metadata and be unable to learn about the link structure of such entities as they use joint training. However, in our approach, we have to drop such entities only during the phase where the transformation $\Psi ^{map}$ is learned, while the link prediction model can still be learned on the full graph. To demonstrate the robustness of our approach to missing entity meta-data, we re-evaluate accuracy when randomly dropping metadata for training entities. Fig. 2 outlines the performance for two scenarios: Dropping descriptions: We remove only the textual descriptions for a varying percentage of randomly selected entities (between 20% to 100%). The names of these entities are not removed and therefore, we still train $\Psi ^{map}$ on them. Dropping all meta-data: We randomly select entities and remove both their descriptions and names, effectively removing these entities from the training set altogether when training $\Psi ^{map}$ . 
We also included a baseline experiment to simulate an unsuccessful learning of $\Psi ^{map}$ . In this baseline, when evaluating a test triple, we replace its head by the embedding of another random head from the training data. Note that this baseline still gives some reasonable hits for triples where the relation is a strong indicator. For example, if we have a triplet $(X,time\_zone,?)$ : Even if the head $X$ is unknown, a model can achieve reasonable accuracy by simply ”guessing” time zones as tails. Overall, Fig. 2 suggests that transformation learning is able to generalize well even with very limited training data. In Fig. 2 a only the descriptions of entities have been removed. For Wikipedia2Vec embeddings, this removal has virtually no effect on prediction accuracy. We believe that this is because Wikipedia2Vec embeddings are trained such that we can lookup strong entity embeddings by the name alone. Even when removing 100% of descriptions (i.e., only training on the entity names), accuracy is only 2-3% lower than training on the full graph. However, in case of Glove embeddings, the drop in performance is very significant, especially when the description is dropped for all the entities. In Fig. 2 b, we remove not only descriptions but also entity names. Even in this case, learning is robust. If half of the $12,324$ training entities are removed, the drop in accuracy is less than 1%. Only when removing 90% of training data (leaving 123 training entities), performance starts to deteriorate significantly. This highlights the ability of our model to learn from a limited amount of training data, when it is important to be able to train the KGC model itself on all the entities. Selected Results Finally, we inspect sample prediction results for ComplEx-OWE-300 in Table 6 . Besides the final prediction, we also test whether our transformation from text-based to semantic space is successful: For each test triple, we represent the open-world head entity by its text-based embedding $v_{head}$ , match it to a graph-based embedding $\Psi ^{map}(v_{head})$ , and estimate the nearest neighbor entities in this space. We use the Euclidean distance on the real part of the ComplEx embeddings, but found results to be similar for the imaginary part. If the transformation works well, we expect these nearest neighbors to be semantically similar to the head entity. This is obviously the case: For Bram Stoker (the author of Dracula), the nearest neighbors are other authors of fantasy literature. For Parma, the neighbors are cities (predominantly in Italy). For Bachelor of Science, the model predicts appropriate entities (namely, Universities) but – even though we apply filtering – the predictions are not rewarded. This is because the corresponding triples, like (Bachelor of Science, /.../institution, Harward Law School), are missing in the knowledge graph. Conclusion In this work, we have presented a simple yet effective extension to embedding-based knowledge graph completion models (such as ComplEx, DistMult and TransE) to perform open-world prediction. Our approach – which we named OWE – maps text-based entity descriptions (learned from word embeddings) to the pre-trained graph embedding space. In experiments on several datasets (including the new FB15K-237-OWE dataset we introduced in this work), we showed that the learned transformations yield semantically meaningful results, that the approach performs competitive with respect to the state of the art, and that it is robust to scarce text descriptions. 
An interesting direction of future work will be to combine our model with approaches like ConMask BIBREF11 , which (1) exploit more complex aggregation functions and (2) use relation-specific attention/content masking to draw more precise embeddings from longer descriptions. Acknowledgements This work was partially funded by the German Federal Ministry of Education and Research (Program FHprofUnt, Project DeepCA / 13FH011PX6) and the German Academic Exchange Service (Project FIBEVID / 57402798).
mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10
dac2591f19f5bbac3d4a7fa038ff7aa09f6f0d96
dac2591f19f5bbac3d4a7fa038ff7aa09f6f0d96_0
Q: what are the three methods presented in the paper? Text: Introduction The Explanation Regeneration shared task asked participants to develop methods to reconstruct gold explanations for elementary science questions BIBREF1, using a new corpus of gold explanations BIBREF2 that provides supervision and instrumentation for this multi-hop inference task. Each explanation is represented as an “explanation graph”, a set of atomic facts (between 1 and 16 per explanation, drawn from a knowledge base of 5,000 facts) that, together, form a detailed explanation for the reasoning required to answer and explain the resoning behind a question. Linking these facts to achieve strong performance at rebuilding the gold explanation graphs requires methods to perform multi-hop inference - which has been shown to be far harder than inference of smaller numbers of hops BIBREF3, particularly for the case here, where there is considerable uncertainty (at a lexical level) of how individual explanations logically link somewhat `fuzzy' graph nodes. Introduction ::: Dataset Review The WorldTree corpus BIBREF2 is a new dataset is a comprehensive collection of elementary science exam questions and explanations. Each explanation sentence is a fact that is related to science or common sense, and is represented in a structured table that can be converted to free-text. For each question, the gold explanations have lexical overlap (i.e. having common words), and are denoted as having a specific explanation role such as CENTRAL (core concepts); GROUNDING (linking core facts to the question); and LEXICAL GLUE (linking facts which may not have lexical overlap). Introduction ::: Problem Review As described in the introduction, the general task being posed is one of multi-hop inference, where a number of `atomic fact' sentences must be combined to form a coherent chain of reasoning to solve the elementary science problem being posed. These explanatory facts must be retrieved from a semi-structured knowledge base - in which the surface form of the explanation is represented as a series of terms gathered by their functional role in the explanation. For instance, for the explanation “Grass snakes live in grass” is encoded as “[Grass snakes] [live in] [grass]”, and this explanation is found in a PROTO-HABITATS table. However, in the same table there are also more elaborate explanations, for example : “Mice live in in holes in the ground in fields / in forests.” is expressed as : “[mice] [live in] [in holes in the ground] [in fields OR in forests]”. And more logically complex : “Most predators live in/near the same environment as their prey.” being expressed as : “[most] [predators] [live in OR live near] [the same environment as their prey]”. So, whereas the simpler explanations fit in the usual Knowledge-Base triples paradigm, the more complex ones are much more nuanced about what actually constitutes a node, and how reliable the arcs are between them. Indeed, there is also a collection of if/then explanations, including examples such as : “[if] [something] [has a] [positive impact on] [something else] [then] [increasing] [the] [amount of] [that something] [has a] [positive impact on] [that something else]” - where the explanation has meta-effect on the graph itself, and includes `unbound variables'. Preliminary Steps In this work, we used the pure textual form of each explanation, problem and correct answer, rather than using the semi-structured form given in the column-oriented files provided in the dataset. 
For each of these we performed Penn-Treebank tokenisation, followed by lemmatisation using the lemmatisation files provided with the dataset, and then stop-word removal. Concerned by the low performance of the Python Baseline method (compared to the Scala Baseline, which seemed to operate using an algorithm of similar `strength'), we identified an issue in the organizer's evaluation script where predicted explanations that were missing any of the gold explanations were assigned a MAP score of zero. This dramatically penalised the Python Baseline, since it was restricted to only returning 10 lines of explanation. It also effectively forces all submissions to include a ranking over all explanations - a simple fix (with the Python Baseline rescored in Table 1) will be submitted via GitHub. This should also make the upload/scoring process faster, since only the top $\scriptstyle \sim $1000 explanation lines meaningfully contribute to the rank scoring. Model Architectures Although more classic graph methods were initially attempted, along the lines of BIBREF4, where the challenge of semantic drift in multi-hop inference was analysed and the effectiveness of information extraction methods was demonstrated, the following 3 methods (which now easily surpass the score of our competition submission) were ultimately pursued due to their simplicity/effectiveness. Model Architectures ::: Optimized TF-IDF As mentioned above, the original TF-IDF implementation of the provided Python baseline script did not predict a full ranking, and was penalized by the evaluation script. When this issue was remedied, its MAP score rose to 0.2140. However, there are three main steps that significantly improve the performance of this baseline: The original question text included all the answer choices, only one of which was correct (while the others are distractors). Removing the distractors resulted in improvement; The TF-IDF algorithm is very sensitive to keywords. Using the provided lemmatisation set and NLTK for tokenisation helped to align the different forms of the same keyword and reduce the vocabulary size needed; Stopword removal gave us approximately 0.04 MAP improvement throughout - removing noise in the texts that was evidently `distracting' for TF-IDF. As shown in Table 2, these optimisation steps increased the Python Baseline score significantly, without introducing algorithmic complexity. Model Architectures ::: Iterated TF-IDF While graph methods have shown to be effective for multi-hop question answering, the schema in the textgraphs dataset is unconventional (as illustrated earlier). To counter this, the previous TF-IDF method was extended to simulate jumps between explanations, inspired by graph methods, but without forming any actual graphs: TF-IDF vectors are pre-computed for all questions and explanation candidates; For each question, the closest explanation candidate by cosine proximity is selected, and their TF-IDF vectors are aggregated by a max operation; The next closest (unused) explanation is selected, and this process was then applied iteratively up to maxlen=128 times, with the current TF-IDF comparison vector progressively increasing in expressiveness. At each iteration, the current TF-IDF vector was down-scaled by an exponential factor of the length of the current explanation set, as this was found to increase development set results by up to +0.0344. 
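A plain-Python sketch of the Iterated TF-IDF procedure just described, assuming TF-IDF vectors have already been computed. The exponential down-weighting of successive explanations is captured here by a single decay factor, which is a simplification; the exact scaling used in the paper is tuned on the development set.

import numpy as np

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def iterated_tfidf_rank(q_vec, expl_vecs, max_len=128, decay=0.8):
    # q_vec: TF-IDF vector of the question (distractor answers removed);
    # expl_vecs: array of candidate explanation TF-IDF vectors
    state = q_vec.copy()
    remaining = list(range(len(expl_vecs)))
    chain = []
    for step in range(min(max_len, len(remaining))):
        best = max(remaining, key=lambda i: cosine(state, expl_vecs[i]))
        remaining.remove(best)
        chain.append(best)
        # aggregate by element-wise max, down-weighting later explanations so
        # the accumulated representation does not drift away from the question
        state = np.maximum(state, (decay ** (step + 1)) * expl_vecs[best])
    return chain + remaining   # unused candidates complete the full ranking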
By treating the TF-IDF vector as a representation of the current chain of reasoning, each successive iteration builds on the representation to accumulate a sequence of explanations. The algorithm outlined above was additionally enhanced by adding a weighting factor to each successive explanation as it is added to the cumulative TF-IDF vector. Without this factor, the effectiveness was lower because the TF-IDF representation itself was prone to semantic drift away from the original question. Hence, each successive explanation’s weight was down-scaled, and this was shown to work well. Model Architectures ::: BERT Re-ranking Large pretrained language models have been proven effective on a wide range of downstream tasks, including multi-hop question answering, such as in BIBREF5 on the RACE dataset, and BIBREF6 which showed that large finetuned language models can be beneficial for complex question answering domains (especially in a data-constrained context). Inspired by this, we decided to adapt BERT BIBREF7 - a popular language model that has produced competitive results on a variety of NLP tasks - for the explanation generation task. For our `BERT Re-ranking' method, we attach a regression head to a BERT Language Model. This regression head is then trained to predict a relevance score for each pair of question and explanation candidate. The approach is as follows : Calculate a TF-IDF relevance score for every tokenised explanation against the tokenised `[Problem] [CorrectAnswer] [Gold explanations]' in the training set. This will rate the true explanation sentences very highly, but also provide a `soft tail' of rankings across all explanations; Use this relevance score as the prediction target of the BERT regression head, where BERT makes its predictions from the original `[Problem] [CorrectAnswer]' text combined with each potential Explanation text in turn (over the training set); At prediction time, the explanations are ranked according to their relevance to `[Problem] [CorrectAnswer]' as predicted by the BERT model's output. We cast the problem as a regression task (rather than a classification task), since treating it as a task to classify which explanations are relevant would result in an imbalanced dataset because the gold explanation sentences only comprise a small proportion of the total set. By using soft targets (given to us by the TF-IDF score against the gold answers in the training set), even explanations which are not designated as “gold” but have some relevance to the gold paragraph can provide learning signal for the model. Due to constraints in compute and time, the model is only used to rerank the $top_n=64$ predictions made by the TF-IDF methods. The BERT model selected was of “Base” size with 110M parameters, which had been pretrained on BooksCorpus and English Wikipedia. We did not further finetune it on texts similar to the TextGraphs dataset prior to regression training. In other tests, we found that the “Large” size model did not help improve the final MAP score. Discussion The authors' initial attempts at tackling the Shared Task focussed on graph-based methods. However, as identified in BIBREF3, the uncertainty involved with interpreting each lexical representation, combined with the number of hops required, meant that this line of enquiry was put to one side. 
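A sketch of the inference-time re-ranking described in the previous section, using the Hugging Face transformers library: a BERT model with a single-output regression head scores each (question plus correct answer, explanation) pair, and only the top 64 TF-IDF candidates are re-ordered. Training against the TF-IDF soft targets is omitted, and the checkpoint name is an assumption; the paper only specifies a "Base"-sized model.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
model.eval()

def rerank(question_and_answer, candidates, top_n=64):
    scored = []
    for expl in candidates[:top_n]:
        inputs = tokenizer(question_and_answer, expl, truncation=True,
                           max_length=256, return_tensors="pt")
        with torch.no_grad():
            relevance = model(**inputs).logits.item()   # predicted soft relevance
        scored.append((relevance, expl))
    reranked = [e for _, e in sorted(scored, reverse=True)]
    return reranked + candidates[top_n:]                # keep the TF-IDF tail order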
While the graph-like approach is clearly attractive from a reasoning point of view (and will be the focus of future work), we found that using purely the textual aspects of the explanation database bore fruit more readily. Also. the complexity of the resulting systems could be minimised such that the description of each system could be as consise as possible. Specifically, we were able to optimise the TF-IDF baseline to such an extent that our `Optimised TF-IDF' would now place 2nd in the submission rankings, even though it used no special techniques at all. The Iterated TF-IDF method, while more algorithmically complex, also does not need any training on the data before it is used. This shows how effective traditional text processing methods can be, when used strategically. The BERT Re-ranking method, in contrast, does require training, and also applies one of the more sophisticated Language Models available to extract more meaning from the explanation texts. Figure 1 illustrates how there is a clear trend towards being able to build longer explanations as our semantic relevance methods become more sophisticated. There are also clear trends across the data in Table 3 that show that the more sophisticated methods are able to bring more CENTRAL explanations into the mix, even though they are more `textually distant' from the original Question and Answer statements. Surprisingly, this is at the expense of some of the GROUNDING statements. Since these methods seem to focus on different aspects of solving the ranking problem, we have also explored averaging the ranks they assign to the explanations (essentially ensembling their decisions). Empirically, this improves performance at the expense of making the model more obscure. Discussion ::: Further Work Despite our apparent success with less sophisticated methods, it seems clear that more explicit graph-based methods appears will be required to tackle the tougher questions in this dataset (for instance those that require logical deductions, as illustrated earlier, or hypothetical situations such as some `predictor-prey equilibrium' problems). Even some simple statements (such as `Most predators ...') present obstacles to existing Knowledge-Base representations. In terms of concrete next steps, we are exploring the idea of creating intermediate forms of representation, where textual explanations can be linked using a graph to plan out the logical steps. However these grander schemes suffer from being incrementally less effective than finding additional `smart tricks' for existing methods! In preparation, we have begun to explore doing more careful preprocessing, notably : Exploiting the structure of the explanation tables individually, since some columns are known to be relationship-types that would be suitable for labelling arcs between nodes in a typical Knowledge Graph setting; Expanding out the conjunction elements within the explanation tables. For instance in explanations like “[coral] [lives in the] [ocean OR warm water]”, the different sub-explanations “(Coral, LIVES-IN, Ocean)” and “(Coral, LIVES-IN, WarmWater)” can be generated, which are far closer to a `graph-able' representation; Better lemmatisation : For instance `ice cube' covers both `ice' and `ice cube' nodes. We need some more `common sense' to cover these cases. Clearly, it is early days for this kind of multi-hop inference over textual explanations. 
At this point, we have only scratched the surface of the problem, and look forward to helping to advance the state-of-the-art in the future. Acknowledgments The authors would like to thank Google for access to the TFRC TPU program which was used in training and fine-tuning models during experimentation for this paper.
Optimized TF-IDF, iterated TF-IDF, BERT re-ranking.
f62c78be58983ef1d77049738785ec7ab9f2a3ee
f62c78be58983ef1d77049738785ec7ab9f2a3ee_0
Q: what datasets did the authors use? Text: Introduction Online communities abound today, forming on social networks, on webforums, within videogames, and even in the comments sections of articles and videos. While this increased international contact and exchange of ideas has been a net positive, it has also been matched with an increase in the spread of high-risk and toxic content, a category which includes cyberbullying, racism, sexual predation, and other negative behaviors that are not tolerated in society. The two main strategies used by online communities to moderate themselves and stop the spread of toxic comments are automated filtering and human surveillance. However, given the sheer number of messages sent online every day, human moderation simply cannot keep up, and either leads to a severe slowdown of the conversation (if messages are pre-moderated before posting) or allows toxic messages to be seen and shared thousands of times before they are deleted (if they are post-moderated after being posted and reported). In addition, human moderation cannot scale up easily to the number of messages to monitor; for example, Facebook has a team of 20,000 human moderators, which is both massive compared to the total of 25,000 other employees in the company, and minuscule compared to the fact its automated algorithms flagged messages that would require 180,000 human moderators to review. Keyword detection, on the other hand, is instantaneous, scales up to the number of messages, and prevents toxic messages from being posted at all, but it can only stop messages that use one of a small set of denied words, and, are thus fairly easy to circumvent by introducing minor misspellings (i.e. writing "kl urself" instead of "kill yourself"). In BIBREF0 , the authors show how minor changes can elude even complex systems. These attempts to bypass the toxicity detection system are called subverting the system, and toxic users doing it are referred to as subversive users. In this paper, we consider an alternative strategy for toxic message filtering. Our intuition is that, while toxic keywords can easily be disguised, the toxic emotional tone of the message cannot. Consequently, we will study the correlation between sentiment and toxicity and its usefulness for toxic message detection both in subversive and non-subversive contexts. The rest of this paper is structured as follows. After a review of the relevant literature in the next section, we will consider the problem of sentiment detection in online messages in Section SECREF3 . Next, we will study the measure of toxicity and its correlation to message sentiment in Section SECREF4 . Finally, we will draw some concluding remarks in Section SECREF5 . Related Work Given the limitations of human and keyword-based toxicity detection systems mentioned previously, several authors have studied alternative means of detecting toxicity. In one of the earliest works on the detection of hate speech, the authors of BIBREF1 used n-grams enhanced by part-of-speech information as features to train an SVM classifier to accurately pick out anti-semitic online messages. Following a similar idea, the authors of BIBREF2 conducted a study of the usefulness of various linguistic features to train a machine learning algorithm to pick out hate speech. They found that the most useful single feature was character n-grams, followed closely by word n-grams. 
However, it was a combination of all their features (n-grams, features of language, features of syntax, and word embedding vectors) that achieved the highest performance. The authors of BIBREF3 studied hate speech through the detection of othering language. They built a custom lexicon of pronouns and semantic relationships in order to capture the linguistic differences when describing the in-group and out-group in messages, and trained a word embedding model on that data. Hate speech is not the only form of toxicity that has been studied. In BIBREF4 , the authors studied cyberbullying. They developed a list of 300 "bad" words sorted into five levels of severity. Next, they used the number and density of "bad" words found in each online message as the features to train a set of machine learning systems. The authors of BIBREF5 also used words as features in two systems, this time to detect sexual predators. One used the TF-IDF values of the words of the text to train a single-class SVM classifier, and the other used a bag-of-words vector of the text as input to a deep neural network. The authors found that the latter system offered the better performance in their experiments. Recently, deep learning has become very popular for NLP applications, and pre-trained word embeddings have been shown to be very effective in most text-based neural network applications. In BIBREF6 , four different deep learning models were implemented and shown to outperform benchmark techniques for cyberbullying detection on three different datasets. In BIBREF7 , a deep neural network taking a word embedding vector as input was used to detect cyberbullying on Twitter. It thus appears from the related literature that authors have tried a variety of alternative features to automatically detect toxic messages without relying strictly on keyword detection. However, sentiment has rarely been considered. It was one of the inputs of the deep neural network of BIBREF7 , but the paper never discussed its importance or analyzed its impact. The authors of BIBREF8 conducted the first study of cyberbullying in Dutch, and considered several features, including a subjectivity keyword lexicon. They found its inclusion helped improve results, but that a more sophisticated source of information than simple keyword detection was required. And the study of BIBREF9 used the sentiment of messages, as measured by the SentiStrength online system, as one of several features to detect cyberbullying messages. However, an in-depth analysis of how sentiment can benefit toxicity detection has not been done in any of these papers, and a study of the use of sentiment in a subversive context has never been done. Lexicons Sentiment detection, or the task of determining whether a document has a positive or negative tone, has been frequently studied in the literature. It is usually done by using a sentiment lexicon that either classifies certain words as positive or negative, or quantifies their level of positivity or negativity. We decided to consider six such lexicons: SentiWordNet is a widely-used resource for sentiment mining. It is based on WordNet, and assigns three scores to each synset, namely positivity, negativity, and objectivity, with the constraint that the sum of all three must be 1. Using this lexicon requires a bit of preprocessing for us, since the same word can occur in multiple different synsets with different meanings and therefore different scores.
Since picking out the intended meaning and synset of a polysemous word found in a message is beyond our scope, we instead chose to merge the different meanings and compute a weighted average of the scores of the word. The weights are the ranks of the synsets, which correspond to the popularity of that meaning of the word in documents. The average score equation is : DISPLAYFORM0 where INLINEFORM0 is the number of times the word occurs with the same part of speech. We compute the average positivity and negativity scores, but not the objectivity scores, since they are not useful for our purpose and since they are simply the complement of the other two. This allows us to extract 155,287 individual words from the lexicon, with a positivity and negativity score between 0 and 1 for each. We should note that SentiWordNet differentiates a word based on part-of-speech, and we maintain this distinction in our work Afinn is a lexicon of 3,382 words that are rated between -5 (maximum negativity) and 5 (maximum positivity). To match SentiWordNet, we split this score into positivity and negativity scores between 0 and 1. For example, a word with a INLINEFORM0 score was changed to have a positive score of 0 and a negative score of INLINEFORM1 . Bing Liu compiled lists of 6,789 positive or negative words. Given no other information, we assigned each word in the positive list a positivity score of 1 and a negativity score of 0, and vice-versa for the negative-list words. General Inquirer is a historically-popular lexicon of 14,480 words, though only 4,206 of them are tagged as either positive or negative. As for the Bing Liu lexicon, we assigned binary positive and negative scores to each word that was tagged as positive or negative. Subjectivity Clues extends the sentiment tags of the General Inquirer up to 8,222 words using a dictionary and thesaurus. It also adds a binary strength level (strong or weak) to the polarity information. We merged polarity and strength as a measure of 0.5 and 1 for weak or strong positivity or negativity. NRC has a list of 14,182 words that are marked as associated (1) or not associated (0) with 8 emotions (anger, fear, anticipation, trust, surprise, sadness, joy, disgust) and two sentiments (negative and positive). We transform this association into binary positive and negative scores in the same way we did for Bing Liu and General Inquirer. All six of these lexicons have limitations, which stem from their limited vocabulary and the ambiguity of the problem. Indeed, despite being thousands of words each and covering the same subject and purpose, our six lexicons have only 394 words in common, indicating that each is individually very incomplete compared to the others. And we can easily find inconsistencies between the ratings of words, both internally within each lexicon and externally when we compare the same words between lexicons. Table TABREF16 illustrate some of these inconsistencies: for instance, the word "helpless" is very negative in SentiWordNet but less so in Afinn and Subjectivity Clues, while the word "terrorize" is more strongly negative in the latter two resources but less negative (and even a bit positive) in SentiWordNet. Likewise, the word "joke" is strongly positive, weakly positive, or even negative, depending on the lexicon used, and the word "merry" is more positive than "joke" according to every lexicon except SentiWordnet, which rates it equally positive and negative. 
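To make the score normalization described above concrete, here is a minimal Python sketch (not code from the paper) of how Afinn-style integer ratings and binary word lists can be converted into the common (positivity, negativity) format; the exact rescaling constant for Afinn is not stated above, so the division by 5 is an assumption, and the toy entries are illustrative.

```python
# A minimal sketch of the lexicon normalization described above, assuming the
# lexicons have already been loaded into plain Python dictionaries/sets.
# Names such as afinn_scores, positive_list, negative_list are illustrative.

def normalize_afinn(score):
    """Map an Afinn rating in [-5, 5] to (positivity, negativity) scores in [0, 1]."""
    if score >= 0:
        return score / 5.0, 0.0          # assumed rescaling by the maximum rating
    return 0.0, -score / 5.0

def normalize_binary(word, positive_list, negative_list):
    """Binary lexicons (Bing Liu, General Inquirer, NRC): 1/0 positivity and negativity."""
    if word in positive_list:
        return 1.0, 0.0
    if word in negative_list:
        return 0.0, 1.0
    return None                          # word not covered by this lexicon

# Toy usage
afinn_scores = {"merry": 3, "helpless": -2}
print(normalize_afinn(afinn_scores["helpless"]))                       # -> (0.0, 0.4)
print(normalize_binary("merry", {"merry", "splendid"}, {"helpless"}))  # -> (1.0, 0.0)
```

The same pattern extends to Subjectivity Clues by mapping the weak and strong tags to 0.5 and 1, as described above.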
By contrast the word "splendid" has the same positivity values as "merry" in all lexicons except SentiWordnet, where it has the highest possible positivity score. In a longer document, such as the customer reviews these lexicons are typically used on BIBREF10 , BIBREF11 , BIBREF12 , these problems are minor: the abundance and variety of vocabulary in the text will insure that the correct sentiment emerges overall despite the noise these issues cause. This is not true for the short messages of online conversations, and it has forced some authors who study the sentiments of microblogs to resort to creating or customizing their own lexicons BIBREF13 . This, incidentally, is also why we could not simply use an existing sentiment classifier. We will instead opt to combine these lexicons into a more useful resource. Message Preprocessing The first preprocessing step is to detect the presence and scope of negations in a message. Negations have an important impact; the word "good" may be labeled positive in all our lexicons, but its actual meaning will differ in the sentences "this movie is good" and "this movie is not good". We thus created a list of negation keywords by combining together the lists of the negex algorithm and of BIBREF14 , filtering out some irrelevant words from these lists, and adding some that were missing from the lists but are found online. Next, we need to determine the scope of the negation, which means figuring out how many words in the message are affected by it. This is the challenge of, for example, realizing that the negation affects the word "interesting" in "this movie is not good or interesting" but not in "this movie is not good but interesting". We considered two algorithms to detect the scope of negations. The first is to simply assume the negation affects a fixed window of five words after the keyword BIBREF15 , while the second discovers the syntactic dependencies in the sentence in order to determine precisely which words are affected BIBREF16 . We tested both algorithms on the SFU review corpus of negation and speculation. As can be seen in Table TABREF21 the dependency algorithm gave generally better results, and managed to find the exact scope of the negation in over 43% of sentences. However, that algorithm also has a larger standard deviation in its scope, meaning that when it fails to find the correct scope, it can be off by quite a lot, while the fixed window is naturally bounded in its errors. Moreover, the increased precision of the dependencies algorithm comes at a high processing cost, requiring almost 30 times longer to analyze a message as the fixed window algorithm. Given that online communities frequently deal with thousands of new messages every second, efficiency is a major consideration, and we opted for the simple fixed window algorithm for that reason. The second preprocessing step is to detect sentiment-carrying idioms in the messages. For example, while the words "give" and "up" can both be neutral or positive, the idiom "give up" has a clear negative sentiment. Several of these idioms can be found in our lexicons, especially SentiWordNet (slightly over INLINEFORM0 ). We detect them in our messages and mark them so that our algorithm will handle them as single words going forward. Finally, we use the NLTK wordpunkt_tokenizer to split sentences into words, and the Stanford fasterEnglishPOSTagger to get the part-of-speech of each word. 
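As a rough illustration of the fixed-window scoping adopted above, the sketch below marks every token within five positions after a negation keyword; the keyword set shown is a small illustrative subset rather than the full list built from negex and BIBREF14.

```python
# A minimal sketch of the fixed-window negation scoping: every word within
# five tokens after a negation keyword is treated as negated.

NEGATION_KEYWORDS = {"not", "no", "never", "without"}  # illustrative subset
WINDOW = 5

def negated_positions(tokens):
    """Return the set of token indices that fall inside a negation window."""
    negated = set()
    for i, tok in enumerate(tokens):
        if tok.lower() in NEGATION_KEYWORDS:
            negated.update(range(i + 1, min(i + 1 + WINDOW, len(tokens))))
    return negated

tokens = "this movie is not good or interesting".split()
print(negated_positions(tokens))  # indices of the five tokens following "not"
```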
Since our lexicons contain only four parts-of-speech (noun, verb, adverb, and adjective) and Stanford's tagger has more than 30 possible tags, we manually mapped each tag to one of the four parts-of-speech (for example, "verb, past participle" maps to "verb"). Message Sentiment Once every word has a positivity and a negativity score, we can use them to determine the sentiment of an entire message. We do this by computing separately the sum of positive scores and of negative scores of words in the message, and subtracting the negative total from the positive total. In this way, a score over 0 means a positive message, and a score under 0 means a negative message. We consider two alternatives at this point: one in which we sum the sentiment value of all words in the sentence, and one where we only sum the sentiment value of the top-three words with the highest scores for each polarity. We label these "All words" and "Top words" in our results. The impact of this difference is felt when we consider a message with a few words with a strong polarity and a lot of words with a weak opposite polarity; in the "Top words" scheme these weak words will be ignored and the strong polarity words will dictate the polarity of the message, while in the "All words" scheme the many weak words can sum together to outweigh the few strong words and change the polarity of the message. We optionally take negations into account in our sentiment computation. When a word occurs in the word window of a negation, we flip its positivity and negativity scores. In other words, instead of adding its positivity score to the positivity total of the sentence, we added its negativity score, and the other way round for the negativity total. Experiments where we do that are labeled "Negativity" in our results. Finally, we optionally incorporate word weights based on their frequency in our datasets. When applied, the score of each word is multiplied by a frequency modifier, which we adapted from BIBREF10 : DISPLAYFORM0 where INLINEFORM0 is the number of times the word appears in a dataset, and INLINEFORM1 is the number of times the most frequent word appears in that dataset. Experiments using this frequency modifier are labeled "Frequency" in our results. Experimental Results Our experiments have four main objectives: (1) to determine whether the "All words" or the "Top words" strategy is preferable; (2) to determine whether the inclusion of "Negation" and "Frequency" modifiers is useful; (3) to determine which of the six lexicons is most accurate; and (4) to determine whether a weighted combination of the six lexicons can outperform any one lexicon. To conduct our experiments, we used the corpus of annotated news comments available from the Yahoo Webscope program. The comments in this dataset are annotated by up to three professional, trained editors to label various attributes, including type, sentiment and tone. Using these three attributes, we split the dataset into two categories, sarcastic and non-sarcastic, and then again into five categories, clear negative, slight negative, neutral, slight positive, and clear positive. Finally, we kept only the non-sarcastic comments where all annotators agreed to reduce noise. This gives us a test corpus of 2,465 comments. To evaluate our results, we compute the sentiment score of each comment in our test corpus using our various methods, and we then compute the average sentiment score of comments in each of the five sentiment categories. 
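The per-message scoring just described (lexicon lookup, optional polarity flipping inside negation windows, and the "Top words" truncation) can be sketched as follows; the word_scores helper and the toy lexicon are illustrative, and the frequency modifier is omitted because its exact formula is not reproduced above.

```python
# A compact sketch of the message-scoring scheme described in this section.
# word_scores(token) is assumed to return (positivity, negativity) for a token,
# e.g. averaged over the lexicons; negated_positions() is the window function
# from the preprocessing sketch.

def message_sentiment(tokens, word_scores, negated=frozenset(), top_k=3):
    pos, neg = [], []
    for i, tok in enumerate(tokens):
        scores = word_scores(tok)
        if scores is None:
            continue
        p, n = scores
        if i in negated:          # "Negation" variant: flip polarity inside the window
            p, n = n, p
        pos.append(p)
        neg.append(n)
    # "Top words": keep only the top_k strongest words per polarity;
    # use sum(pos) - sum(neg) instead for the "All words" variant.
    return sum(sorted(pos, reverse=True)[:top_k]) - sum(sorted(neg, reverse=True)[:top_k])

# Toy usage: positive messages score above 0, negative messages below 0.
toy_lexicon = {"good": (0.8, 0.0), "terrible": (0.0, 0.9)}
score = message_sentiment("this movie is not terrible".split(),
                          toy_lexicon.get, negated={4})
print(score)  # the negation flips "terrible" into a positive contribution
```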
For ease of presentation, we give a simplified set of results in Table TABREF26 , with only the average score of the two negative and the two positive labels combined, along with the overlap of the two distributions. The overlap is obtained by taking two normal distributions with the means and standard deviations of the positive and the negative sets, and calculating the area in common under both curves. It gives us a measure of the ambiguous region where comments may be positive or negative. A good sentiment classifier will thus have very distant positive and negative scores and a very low overlap. These results show that there are important differences between the lexicons. Three of the six are rather poor at picking out negative sentiments, namely Subjectivity Clues (where negative sentences are on average detected as more positive than the positive sentences), General Inquirer, and NRC. This bias for positivity is an issue for a study on toxicity, which we expect to be expressed using negative sentiments. The other three lexicons give a good difference between positive and negative sentences. For these three lexicons, we find that using All words increases the gap between positive and negative sentence scores but greatly increases the standard deviation of each sentiment class, meaning the sentiment of the messages becomes ambiguous. On the other hand, using Top words reduces the overlap between the distributions and thus gives a better separation of positive and negative sentiments. And while adding frequency information or negations does not cause a major change in the results, it does give a small reduction in overlap. To study combinations of lexicons, we decided to limit our scope to SentiWordNet, Afinn, and Bing Liu, the three lexicons that could accurately pick out negative sentiments, and to the Top words strategy. We consider three common strategies to combine the results of independent classifiers: majority voting, picking the one classifier with the maximum score (which is assumed to be the one with the highest confidence in its classification), and taking the average of the scores of all three classifiers. For the average, we tried using a weighted average of the lexicons and performed a grid search to find the optimal combination. However, the best results were obtained when the three lexicons were weighted equally. For the majority vote, we likewise take the average score of the two or three classifiers in the majority sentiment. Table TABREF27 presents the results we obtained with all three strategies. It can be seen that combining the three classifiers outperforms taking any one classifier alone, in the sense that it creates a wider gap between the positive and negative sentences and a smaller overlap. It can also be seen that the addition of negation and frequency information gives a very small improvement in the results in all three cases. Comparing the three strategies, it can be seen that the maximum strategy is the one with the biggest gap between the positive and negative distributions, which was to be expected since the highest positive or negative sentiment is selected each time while it gets averaged out in the other two strategies. However, the average score strategy creates a significantly smaller standard deviation of sentiment scores and a lower overlap between the distributions of positive and negative sentences. For that reason, we find the average score to be the best of the three combination strategies.
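For concreteness, the following is a hedged sketch of the three combination strategies compared in Table TABREF27, applied to the per-message scores of the three retained lexicons; tie-breaking details are assumptions.

```python
# Illustrative sketch of the average, maximum, and majority-vote combinations
# of per-message sentiment scores from SentiWordNet, Afinn, and Bing Liu.

def combine_average(scores):
    return sum(scores) / len(scores)

def combine_maximum(scores):
    # keep the score of the single most confident classifier (largest magnitude)
    return max(scores, key=abs)

def combine_majority(scores):
    # average the scores of the classifiers that agree on the majority polarity
    positive = [s for s in scores if s > 0]
    negative = [s for s in scores if s < 0]
    majority = positive if len(positive) >= len(negative) else negative
    return sum(majority) / len(majority) if majority else 0.0

scores = [0.6, -0.1, 0.3]          # e.g. SentiWordNet, Afinn, Bing Liu
print(combine_average(scores))     # 0.266...
print(combine_maximum(scores))     # 0.6
print(combine_majority(scores))    # (0.6 + 0.3) / 2 = 0.45
```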
In all cases, we find that most misclassified sentences in our system are due to the lack of insults in the vocabulary. For example, none of the lexicons include colorful insults like "nut job" and "fruitcake", so sentences where they appear cannot be recognized as negative. Likewise, some words, such as the word "gay", are often used as insults online, but have positive meanings in formal English; this actually leads to labeling insult messages as positive sentences. This issue stems from the fact that these lexicons were designed for sentiment analysis in longer and more traditional documents, such as customer reviews and editorials. One will seldom, if ever, find insults (especially politically-incorrect ones such as the previous examples) in these documents. Toxicity Detection The main contribution of this paper is to study how sentiment can be used to detect toxicity in subversive online comments. To do this, we will use three new test corpora: Correlation Our first experiment consists in computing the sentiment of each message in each of our three test corpora, and verifying how they correlate with the different toxicity scores of each of the corpora. Following the results we found in Section SECREF3 , we used the best three lexicons (SentiWordNet, Afinn, and Bing Liu), combined them by taking the average score, and used our four algorithm variations. The results are presented in Table TABREF37 . It can be seen that there is a clear negative correlation between toxicity and sentiment in the messages, as expected. Our results also show that using words only or including frequency information makes the relationship clearer, while adding negations muddies it. These results are consistent over all three test corpora, despite being from different sources and labeled using different techniques. The lower score on the Reddit dataset may simply be due to the fact it was labeled automatically by a system that flags potentially dangerous content and not by human editors, so its labels may be noisier. For example, mentioning sexual body parts will be labeled as toxicity level 5 even if they are used in a positive sentence, because they carry more potential risk. Subversive Toxicity Detection Our second experiment consists in studying the benefits of taking sentiments into account when trying to determine whether a comment is toxic or not. The toxicity detector we implemented in this experiment is a deep neural network inspired by the most successful systems in the Kaggle toxicity competition we used as a dataset. It uses a bi-GRU layer with kernel size of 40. The final state is sent into a single linear classifier. To avoid overfitting, two 50% dropout layers are added, one before and one after the bi-GRU layer. The network takes as input a sentence split into words and into individual characters. The words are represented by the 300d fastText pre-trained word embeddings, and characters are represented by a one-hot character encoding but restricted to the set of 60 most common characters in the messages to avoid the inclusion of noise. Finally, we used our "top + frequency" sentiment classifier with the average of the best three lexicons (SentiWordNet, Afinn, and Bing Liu) to determine the sentiment of each message. We input that information into the neural network as three sentiment values, corresponding to each of the three lexicons used, for each of the frequent words retained for the message. 
Words that are not among the selected frequent words or that are not found in a lexicon receive a sentiment input value of 0. Likewise, experiments that do not make use of sentiment information have inputs of 0 for all words. These input values are then concatenated together into a vector of 363 values, corresponding to the 300 dimensions of fastText, the 60 one-hot character vector, and the 3 sentiment lexicons. The output of our network is a binary "toxic or non-toxic" judgment for the message. In the Kaggle dataset, this corresponds to whether the "toxic" label is active or not. In the Reddit dataset, it is the set of messages evaluated at levels 5, 6 or 7 by Community Sift in any of the topics mentioned earlier. And in the Wikipedia dataset, it is any message marked as toxic by 5 workers or more. We chose this binary approach to allow the network to learn to recognize toxicity, as opposed to types of toxic messages on Kaggle, keyword severity on Reddit, or a particular worker's opinions on Wikipedia. However, this simplification created a balance problem: while the Reddit dataset is composed of 12% toxic messages and 88% non-toxic messages, the Wikipedia dataset is composed of 18% toxic messages and the Kaggle dataset of 10% toxic messages. To create balanced datasets, we kept all toxic messages and undersampled randomly the set of non-toxic messages to equal the number of toxic messages. Our experiment consists in comparing the toxicity detection accuracy of our network when excluding or including sentiment information and in the presence of subversion. Indeed, as mentioned in Sections SECREF1 and SECREF2 , it is trivial for a subversive user to mask toxic keywords to bypass toxicity filters. In order to simulate this behavior and taking ideas from BIBREF0 , we created a substitution list that replaces popular toxic keywords with harmless versions. For example, the word "kill" is replaced by "kilt", and "bitch" by "beach". Our list contains 191 words, and its use adds noise to INLINEFORM0 of the toxic Kaggle messages, INLINEFORM1 of the Wikipedia messages, and INLINEFORM2 of the Reddit messages. These substitutions are only done at testing time, and not taken into account in training, to simulate the fact that users can create never-before-seen modifications. We trained and tested our neural network with and without sentiment information, with and without subversion, and with each corpus three times to mitigate the randomness in training. In every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 . It can be seen that sentiment information helps improve toxicity detection in all cases. The improvement is smaller when the text is clean. However, the introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone, and the inclusion of sentiment information gives an important improvement in that case. Comparing the different corpora, it can be seen that the improvement is smallest in the Reddit dataset experiment, which is expected since it is also the dataset in which toxicity and sentiment had the weakest correlation in Table TABREF37 . We can note that the system performs very well in all cases, even with subversion and without sentiment information. This may be due to the fact that the messages in all datasets are user-generated and therefore noisy already. 
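The test-time subversion simulation described above can be sketched as a simple keyword substitution; only the two example pairs given in the text are included, since the actual list of 191 pairs is not reproduced here.

```python
# A minimal sketch of the test-time subversion simulation: toxic keywords are
# replaced by harmless look-alikes, as a subversive user would do.
import re

SUBSTITUTIONS = {"kill": "kilt", "bitch": "beach"}  # illustrative subset of the 191 pairs

def subvert(message):
    """Replace toxic keywords with harmless versions."""
    def repl(match):
        word = match.group(0)
        return SUBSTITUTIONS.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", repl, message)

print(subvert("I will kill you"))  # -> "I will kilt you"
```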
In addition, the character encoding of the neural network is robust to misspellings, as opposed to a keyword lookup system. Conclusion In this paper, we explored the relationship between sentiment and toxicity in social network messages. We began by implementing a sentiment detection tool using different lexicons and different features such as word frequencies and negations. This tool allowed us to demonstrate that there exists a clear correlation between sentiment and toxicity. Next, we added sentiment information to a toxicity detection neural network, and demonstrated that it does improve detection accuracy. Finally, we simulated a subversive user who attempts to circumvent the toxicity filter by masking toxic keywords in their messages, and found that using sentiment information improved toxicity detection by as much as 3%. This confirms our fundamental intuition, that while it is possible for a user to mask toxic words with simple substitutions, it is a lot harder for a user to conceal the sentiment of a message. Our work so far has focused on single-line messages and general toxicity detection. There are however several different types of toxicity, some of which correlate to different sentiments. For instance, while cyber-bullying and hate speech have negative sentiments, other forms of toxicity such as fraud or sexual grooming will use more positive sentiments in order to lure victims. We expect that differentiating between these types of toxicity will strengthen the correlation to message sentiment and further improve our results. Likewise, handling entire conversations instead of individual messages will allow us to include contextual information to better model the sentiment of the message, and to detect sudden changes in the sentiment of the conversation that may correspond to a disruptive toxic comment. Acknowledgements This research was made possible by the financial, material, and technical support of Two Hat Security Research Corp, and the financial support of the Canadian research organization MITACS.
Kaggle, Subversive Kaggle, Wikipedia, Subversive Wikipedia, Reddit, Subversive Reddit
639c145f0bcb1dd12d08108bc7a02f9ec181552e
639c145f0bcb1dd12d08108bc7a02f9ec181552e_0
Q: What are three possible phases for language formation? Text: Introduction This letter arises from two intriguing questions about human language. The first question is: To what extent can language, and also language evolution, be viewed as a graph-theoretical problem? Language is an amazing example of a system of interrelated units at different organization scales. Several recent works have indeed stressed the fact that human languages can be viewed as a (complex) network of interacting parts BIBREF0, BIBREF1, BIBREF2, BIBREF3. Within the graph-based approach to human language, one may think of word-meaning mappings (that is, vocabularies) as bipartite graphs, formed by two sets: words and meanings BIBREF2. The second question is: What is the nature of the language evolution process that affects the shape of graph-based language representations? To answer this question, we assume that human communication is constrained (at least) by two forces BIBREF2: one that pushes towards communicative success and another one that faces the trade-off between speaker and hearer efforts. The first force involves simple decentralized models of linguistic interactions within populations of artificial agents, endowed with minimal human cognitive features, negotiating pieces of a common language: the so-called language games BIBREF4, BIBREF5, BIBREF6, BIBREF7. In the simplest language game, the naming game BIBREF8, BIBREF9, at each discrete time step a pair of players (typically one speaker and one hearer) interacts towards agreement on word-meaning associations. Next, we also consider the communication cost of establishing word-meaning mappings. G. Zipf referred to the lexical trade-off between two competing pressures, ambiguity and memory, as the least effort principle BIBREF10, BIBREF11: speakers prefer to minimize memory costs, whereas hearers prefer to minimize disambiguation costs. As remarked by several works, an interesting proposal has stated that human-like vocabularies appear as a phase transition at a critical stage for both competing pressures BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. The appearance of a drastic stage of competing pressures can moreover be understood as an explanation of the empirical Zipf's law, which establishes a dichotomy between low-memory words (like the word "the") and low-ambiguity words (like the word "cat"). From a statistical point of view, text corpora evidence strong scaling properties in word frequencies BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. The main aim is to address a decentralized approach (based on a previous proposal of two authors of this letter BIBREF25) to the emergence of Zipfian properties in a human-like language, while players communicate with each other using bipartite word-meaning mappings. To structurally characterize changes in the system, our methodology is mainly based on a phase transition description, arising from both classical statistical mechanics tools and graph-mining techniques. We run numerical simulations over simple population topologies. We apply graph-mining techniques, particularly a clustering notion for bipartite graphs BIBREF26. The model ::: Key concepts on (bipartite) graphs A bipartite graph is a triple $B=(\top ,\bot ,E)$, where $\top $ and $\bot $ are two mutually disjoint sets of nodes, and $E \subseteq \top \times \bot $ is the set of edges of the graph. Here, $\top $ represents the set of word nodes, whereas $\bot $ represents the set of meaning nodes.
We remark that edges only exist between word nodes and meaning nodes. A classical useful tool in graph theory is the matrix representation of graphs. Here, we only consider the adjacency matrix. Let us denote by $A = (a)_{wm}$ the adjacency matrix for the (bipartite) graph $B$. From the bipartite sets $\top $ and $\bot $, representing respectively word and meaning nodes, we define the rows of $A$ as word nodes, and the columns as meaning nodes, where $(a)_{wm}=1$ if the word $w$ is joined with the meaning $m$, and 0 otherwise. The neighbors of order 1 of $u\in \top $ are the nodes at distance 1: $N(u)=\lbrace v\in \bot : uv \in E\rbrace $ (if $u \in \bot $ the definition is analogous). Let us denote by $N(N(u))$ the set of nodes at distance 2 from $u$. The degree $d(u)$ of the node $u$ is simply defined by $d(u)=|N(u)|$. We denote by $d^{\max }_W = \max _{w \in W} d(w)$ the maximum degree for word nodes ($\top $). Analogously, $d^{\max }_M = \max _{m \in M} d(m)$ the maximum degree for meaning nodes ($\bot $). The notion of clustering coefficient (in classical graphs) captures the fact that when there is an edge between two nodes they probably have common neighbors. More generally, such notion captures correlations between neighborhoods. Based on this point of view, BIBREF26 proposed a clustering coefficient notion for bipartite graphs: where $cc(u,v)$ is a notion of clustering defined for pairs of nodes (in the same set $\top $ or $\bot $): Interestingly, $cc(u,v)$ captures the overlap between the neighborhoods of $u$ and $v$: if $u$ and $v$ do not share neighbors $cc(u,v)=0$; if they have the same neighborhood $cc(u,v)=1$. To give an overall overview of bipartite clustering for the graph $B$, the average bipartite clustering reads The model ::: Basic elements of the language game The language game is played by a finite population of participants $P=${1,...,p}, sharing both a set of words $W=\lbrace 1,...,n\rbrace $ and a set of meanings $M=\lbrace 1,...,m\rbrace $. Each player $k\in P$ is endowed with a graph-based word-meaning mapping $B^k=(\top ^k,\bot ^k,E^k)$. In our case, $B^k$ is a bipartite graph with two disjoint sets: $\top ^k \subseteq W$ (word nodes) and $\bot ^k \subseteq M$ (meaning nodes). Each player $k \in P$ only knows its own graph $B^k$. Two technical terms are introduced. First, we say that a player $k \in P$ knows the word $w \in W$ if $w \in \top ^k$. Clearly, this definition is equivalent to the existence of the edge $wm \in E^k$, for some $m \in \bot ^k$. Second, the ambiguity of the word $w$, denoted $a(w)$, is defined as its node degree $d(w)$. The model ::: Language game rules The dynamics of the language game is based on pairwise speaker-hearer interactions at discrete time steps. At $t \geqslant 0$, a pair of players is selected uniformly at random: one plays the role of speaker $s$ and the other plays the role of hearer $h$, where $s,h \in P$. Each speaker-hearer communicative interaction is defined by two successive steps. The speaker-centered STEP 1 involves the selection of a meaning and a word to transmit them. At STEP 2, the hearer receives the word-meaning association and both speaker and hearer behave according to either repair or alignment strategies. STEP 1. To start the communicative interaction, the speaker $s$ selects the topic of the conversation: one meaning $m^* \in M$. To transmit the meaning $m^*$, the speaker needs to choose some word, denoted $w^*$. 
There are two possibilities for the selection of $w^*$: if the edge $wm^* \notin E^s$ for any $w \in \top ^s$, the speaker chooses (uniformly at random) the word $w^*$ from the set $W$ and adds the edge $w^*m^*$ to the graph $B^s$; otherwise, if $w^*m^* \in E^s$ for some $w^* \in \top ^s$, the speaker calculates $w^*$ based on its interests, that is, based on its own conflict between ambiguity and memory. To calculate $w^*$ for the second case ($w^*m^* \in E^s$), the speaker behaves according to the ambiguity parameter $\wp \in [0,1]$. Let $random \in [0,1]$ be a random number. Then, two actions are possible: if $random \geqslant \wp $, the speaker calculates $w^*$ as the least ambiguous word otherwise, the speaker calculates $w^*$ as the most ambiguous word The speaker transmits the word $w^*$ to the hearer. STEP 2. The hearer behaves as in the naming game. On the one hand, mutual speaker-hearer agreement (if the hearer knows the word $w^*$) involves alignment strategies BIBREF9. On the other hand, a speaker-hearer disagreement (if the hearer does not know the word $w^*$) involves a repair strategy in order to increase the chance of future agreements (that is, for $t^{\prime }>t$). More precisely, if the hearer knows the word $w^*$, both speaker and hearer remove all edges formed by $wm^*$, where $w$ respectively belongs to $\top ^s \setminus \lbrace w^*\rbrace $ and $\top ^h \setminus \lbrace w^*\rbrace $. otherwise, the hearer adds the edge $w^*m^*$ to its graph $B^h$. Methods The population of agents is located on the vertices of a complete graph of size $|P|=100$, typically called the mean field approximation. For the description of other simple graph topologies, see the caption of Fig. FIGREF15. The population shares both a set of $n=|W|=128$ words and a set of $m=|M|=128$ meanings. Starting from an initial condition in which each player $k \in P$ is associated to a bipartite graph $B^k$ where $B^k_{ij} = 1$ or $B^k_{ij} = 0$ with probability 0.5 (put differently, for each possible edge $ij$, $i \in W$ and $j \in M$, exists with probability 0.5), the dynamics performs a speaker-hearer interaction at each discrete time step $t \geqslant 0$. The bipartite word-meaning mappings $B^s$ and $B^h$ are then reevaluated according to communicative success. All results consider averages over 10 initial conditions and $3\times 10^5$ time steps. We denote by $t_f$ the final time step. The ambiguity parameter $\wp $ is varied from 0 to 1 with an increment of 1%. Results ::: Three structural phases in language formation Two key quantities have been analyzed for different values of $\wp $: the average population clustering $cc$, which captures the average correlation between word neighborhoods; and the (effective) lexicon size at time step $t$, $V(t)$, defined as BIBREF12, BIBREF25 where $V(t)=1$ if $|\top ^k|=n$, while $V(t)=0$ if $|\top ^k|=0$. Three clear domains can be noticed in the behavior of $\langle cc \rangle $ versus $\wp $, at $t_f$, as shown in Fig. FIGREF15 (blue squares). Phase I: $\langle cc \rangle $ increases smoothly for $\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\wp < 0.4$; Phase II: a drastic transition appears at the critical domain $\wp ^* \in (0.4,0.6)$, in which $\langle cc \rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\wp $ is also found (Fig. FIGREF16) for $\wp ^*$; Phase III: single-word languages dominate for $\wp > 0.6$. 
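The pairwise clustering formula itself is not reproduced above; assuming the neighborhood-overlap (Jaccard) form from BIBREF26, which matches the boundary cases given in the definition (0 for disjoint neighborhoods, 1 for identical ones), the average bipartite clustering $\langle cc \rangle $ tracked in these results can be computed with a sketch like the following.

```python
# Illustrative sketch of the average bipartite clustering for one player's
# word-meaning graph, represented as a dict mapping each word to its set of meanings.

def cc_pair(Nu, Nv):
    union = Nu | Nv
    return len(Nu & Nv) / len(union) if union else 0.0

def average_bipartite_clustering(adj):
    # build the meaning-side neighborhoods from the word-side adjacency
    meanings = {}
    for w, ms in adj.items():
        for m in ms:
            meanings.setdefault(m, set()).add(w)

    def node_cc(u, side):
        Nu = side[u]
        other = adj if side is meanings else meanings
        partners = {v for m in Nu for v in other[m]} - {u}   # nodes at distance 2
        if not partners:
            return 0.0
        return sum(cc_pair(Nu, side[v]) for v in partners) / len(partners)

    nodes = [(w, adj) for w in adj] + [(m, meanings) for m in meanings]
    return sum(node_cc(u, side) for u, side in nodes) / len(nodes)

print(average_bipartite_clustering({"w1": {"m1", "m2"}, "w2": {"m2"}}))  # 0.5
```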
The maximum value of $\langle cc \rangle $ indicates that word neighborhoods are completely correlated. Results ::: Bipartite graphs to visualize the phase transition We now shift our focus from graph-based measures towards a holistic level in which we illustrate the described phase transition using bipartite graph representations of language formation. We stress the fact that our framework, based on a language game with players endowed with bipartite word-meaning mappings, is able to visualize the structural changes of the three phases (I, II and III). Fig. FIGREF18 displays, from top to bottom, the bipartite word-meaning mappings for ambiguity parameters $\wp $ in $\lbrace 0.1, 0.52,1\rbrace $. As expected, there are radical structural changes between the bipartite graphs associated with these ambiguity parameters. Full vocabularies are attained for $\wp =0.1$ (Phase I), located at the hearer-centered phase. Zipfian vocabularies seem to appear for $\wp =0.52$ (Phase II), where speaker and hearer costs have a similar value. Finally, a single-word vocabulary (that is, one word, several meanings) is exhibited for $\wp =1$ (Phase III). Results ::: Critical values of energy The appearance of the three-phased language behavior described here is closely related to previous results of two authors of this letter BIBREF25. Indeed, in the cited paper the energy-like functional $e_{KL}$ (a Kullback-Leibler-based measure) is minimized around the parameter $\wp \approx 0.5$. Remarkably, here it is shown numerically that around the critical parameter $\wp \approx 0.52$ a drastic transition for both the effective vocabulary and the bipartite average clustering tends to appear (see Fig. FIGREF18). A first strategy for probing the connection between the phase transitions described here and energy-based approaches is to measure the information-theoretic energy $\Omega _\wp (t_f)$ (as defined in BIBREF12) as a function of the parameter $\wp $. $\Omega _\wp (t_f)$ is a combination of the respective efforts of speakers and hearers: $\Omega _\wp (t_f)=\wp H(R|S)+(1-\wp )H(S)$. Figure FIGREF20 shows that $\Omega _\wp (t_f)$ is minimized around $\wp \approx 0.5$. This suggests a new way to understand language evolution and formation, by reconciling models focused on self-organization and information-theoretic accounts. Discussion In this letter, we have described a decentralized model of the emergence of Zipfian features in a human-like language, where agents play language games communicating with bipartite word-meaning mappings. The model evidences a phase transition that corresponds to the formation of a human-like vocabulary satisfying Zipfian word-meaning properties. Our central graph-mining tool has been a notion of clustering for bipartite graphs. This function allowed us to suggest that the drastic transition is, in some sense, a qualitative transition in word correlations. To further understand the nature of the described transition, we point to a recent proposal BIBREF28, which reinterprets an old question about language learning with a novel approach: if language learning by a child involves setting many parameters, to what extent do all of these need to be innate? According to the Principles and Parameters theory BIBREF29, children are biologically endowed with a general "grammar", and simple exposure to a particular language (for example, Quechua) then fixes its syntax by setting these parameters.
This debate was illuminated by proposing a statistical mechanics approach in which the distribution of grammar weights (where language is modeled by weighted context-free grammars) evidences a drastic transition. Language learning is, for this proposal, a transition from a random model of grammar parameter-weights to the one in which deep structure (that is, syntax) is encountered. Here, the language learning problem is situated in a decentralized process, with agents negotiating a common word-meaning mapping exhibiting Zipfian scaling properties. Interestingly, our approach can shed light on the debate opened by BIBREF28. Indeed, our model questioned, first, the fact that language learning is traditionally viewed as an individual process, without any consideration of population structure (in general, language games question this fact). Secondly, we argue that our view pointed out the minimal necessity of cognitive principles for cultural language formation: the least effort principle. We hypothesize that players only need the most basic cognitive features for language learning (and formation) and the rest is an emergent property from the local speaker-hearer interactions. It is interesting to remark that several works have stressed the fact that language formation can be viewed as a phase transition within an information-theoretic approach BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Future work could explore an intriguing hypothesis: Zipfian properties have strong consequences for syntax and symbolic reference. BIBREF30 has proposed indeed that Zipf's law is a necessary precondition for full syntax, and for going beyond simple word-meaning mappings. They hypothesized moreover that the appearance of syntax have been as abrupt as the transition to Zipf's law. This is a goal for future work: to propose a decentralized model in which agents (constrained by specific cognitive features) develop a Zipfian language that acts as a precondition for the abrupt transition to simple forms of syntax (based, for example, on BIBREF31). Another related research line arises from models assuming the interplay between maximization of the information transfer and minimization of the entropy of signals (see, for example, BIBREF32). As previously remarked, these models evidence a lack of population structure. Current work asks how a community of individuals playing the language game proposed here can minimize the energy functional $\Omega (\lambda )$ of word-meaning mappings. We may hypothesize that reaching global consensus at the critical phase (that, is for $\wp \approx 0.5$) is closely related to the global minima of $\Omega (\lambda )$ (as a first approach to this problem, see Section SECREF19). This idea opens fascinating novel ways to study human language, reconciling models seeing communication as a global minima of information entropic energies and models focused on populations self-organizing themselves towards a shared consensus. F.U. thanks CONICYT Chile for financial support under the Grant 3180227.
Phase I: $\langle cc \rangle $ increases smoothly for $\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\wp < 0.4$, Phase II: a drastic transition appears at the critical domain $\wp ^* \in (0.4,0.6)$, in which $\langle cc \rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\wp $ is also found (Fig. FIGREF16) for $\wp ^*$, Phase III: single-word languages dominate for $\wp > 0.6$. The maximum value of $\langle cc \rangle $ indicates that word neighborhoods are completely correlated
ab3737fbf17b7a0e790e1315fffe46f615ebde64
ab3737fbf17b7a0e790e1315fffe46f615ebde64_0
Q: How many parameters does the model have? Text: META-REVIEW Comments: An approach to handle the OOV issue in multilingual BERT is proposed. A great deal of nice experiments were done but ultimately (and in message board discussions) the reviewers agreed there wasn't enough novelty or result here to justify acceptance. REVIEWER #1 ::: What is this paper about, what contributions does it make, what are the main strengths and weaknesses? The paper addresses the OOV problem of multilingual language models. In particular, the authors extend Bert by a mapping from monolingual language models and evaluate it on a variety of tasks, both token-level and sentence-level prediction tasks. The main strengths of the paper are a sound experimental part with many experiments and results that give a clear picture of the pros and cons of the approach. The main weakness of the paper, in my opinion, lies in the presentation and motivation of the methods which I find a bit confusing (see my clarification questions below for details) and which I hope the authors will improved for the camera-ready version of the paper. REVIEWER #1 ::: Reasons to accept An exploration how BERT performs in multilingual settings and how it could be improved. REVIEWER #1 ::: Reasons to reject I don't see risks with accepting this paper. REVIEWER #1 ::: Reviewer's Scores Overall Recommendation: 4 REVIEWER #1 ::: Questions and Suggestions for the Author(s) - If I understood the introduction correctly, you do not want to train monolingual models on large-scale corpora because this is time-consuming and resource-intensive. However, isn't that exactly what you need to do in order to apply your mapping methods? (line 298) - Typically, OOV problems are investigated at the word level and taken as "solved" at the subword level. Since byte pair encoding falls back to single characters if needed, I am a bit suprised that OOV seems to be such a big issue with byte pair encoding. Can the authors explain how this happens? Or give examples for OOVs in the different languages? In Table 2, for example, the $OOV_{sw}$ numbers are pretty large for Arabic (possibly because of the different alphabet?) but also for Polish and Swedish. - In the introduction, you motivate that vocabulary sizes for multilingual models are typically small because of expensive softmax computations. What about using class-based LMs with languages as classes so that you can first predict the language and then the word given that language, as in Mikolov et al. 2011 ("Extensions of recurrent neural network language model")? - Line 160: I find it surprising to just ignore the shared subwords. Can you provide numbers please for how many subwords are actually shared among the language you consider? (to see if you can savely ignore them - I would assume that a lot of them are actually shared) - How can you get "non-contextualized embeddings" from BERT? REVIEWER #2 ::: What is this paper about, what contributions does it make, what are the main strengths and weaknesses? This paper proposes three approaches to address the out-of-vocabulary problem in multilingual BERT: as many languages share the same vocabulary, the vocabulary size for each language (particularly low-resource ones) is comparatively small. The first approach learns a mapping between a language-specific pretrained embedding space (from fastText) and the BERT embedding space, as in Madhyastha et al. (2016). 
The second approach maps all languages to the English embedding space using a cross-lingual word embedding method MUSE. Then this joint embedding space is again mapped to the BERT embedding space with another transformation. The third approach represents all subwords as a mixture of English subwords as in Gu et al. (2018). The first two approaches are found not to perform competitively, so many of the experiments are done only with the third approach. The paper reports results on POS tagging, code-mixed sequence labeling and reading comprehension (where it creates a new Chinese-English dataset) and MT quality estimation. The mixture model slightly outperforms multilingual BERT. Strengths: The paper addresses a timely and relevant problem. The paper conducts a large number of experiments. Weaknesses: One thing I would have liked to see that would motivate the problem that this paper is addressing is an analysis that shows how small the vocabulary actually is. Such an analysis would also help make clear whether this is only a problem that appears in a massively multilingual setting or whether this is already an issue with five or ten languages. A shared multilingual vocabulary is a feature that is not unique to BERT, but can be found in any model that is jointly trained with many languages. It would have been good to compare this approach with another model, either another Transformer-based model that uses subword embeddings such as GPT, an LSTM based model with subword embeddings, or a multilingual NMT model. I found it somewhat irritating that the first two methods are presented in detail and after the first experiment section only the third method is used as the first two do not perform very well. IMO it would strengthen the paper if they were either discussed less or if it was more analyzed why they do not work well. Søgaard et al. (2018) find that embedding spaces learned with different methods cannot be easily mapped onto each other, as discussed in Section 3.2. To control for this effect, it would have been nice to try mapping to an embedding space obtained with the same method (e.g. the English or Chinese BERT) and investigate if the methods still perform poorly. The paper mainly applies existing methods to a novel model. (minor) The baselines in Table 2 are somewhat out-dated. A more recent method to compare against is Yasunaga et al. (NAACL 2018). https://arxiv.org/abs/1711.04903 REVIEWER #2 ::: Reasons to accept The community would become aware of some ways to address the out-of-vocabulary setting with state-of-the-art models, even though most of these methods have been proposed before. REVIEWER #2 ::: Reasons to reject While the out-of-vocabulary problem in large multilingual models is an important issue, in my opinion this paper leaves too many questions open and misses out on investigating and analyzing important issues. REVIEWER #2 ::: Reviewer's Scores Overall Recommendation: 2 REVIEWER #2 ::: Missing References Conneau et al. (2017) should be Conneau et al. (2018) (the paper was presented at ICLR 2018). The paper cites Conneau et al. (2017) several times as a reference for the observation that it is difficult to map embeddings learned with different methods (e.g. in line 340). This is not observed in this paper, as far as I'm aware. Søgaard et al. (2018) should be cited here instead. Søgaard et al. (2018) should also be cited for the bilingual lexicon using identical subwords (line 311). Søgaard, A., Ruder, S., Vulić, I. (2018).
On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proceedings of ACL 2018. REVIEWER #3 ::: What is this paper about, what contributions does it make, what are the main strengths and weaknesses? This paper investigates 3 methods for expanding the vocabulary of multilingual pretrained word embeddings (such as multilingual BERT) using subword units. The argument is that the vocabulary for each language in pretrained multilingual word embeddings is usually small and it is costly to retrain with a bigger vocab. This leads to a high out-of-vocab rate for downstream tasks, which lowers the performance. They expand the vocab of target pretrained word embeddings by mapping language-specific embeddings to this space. They experimented with 3 mapping methods: independent mapping, joint mapping and mixture mapping, on various tasks including both token-level and sequence-level tasks. Overall, they achieved better accuracy compared with the pretrained word embeddings without vocab expansion. I quite like the discussion part in section 3.6 shedding more light on the performance gain. The strength of this paper is the extensive set of experiments, but their best model (using mixture mapping) was already proposed in Gu et al. (2018), limiting their contribution. REVIEWER #3 ::: Reasons to accept A simple method with good results in many tasks. REVIEWER #3 ::: Reasons to reject Lack of novelty. REVIEWER #3 ::: Reviewer's Scores Overall Recommendation: 3.5
Unanswerable
0b8d64d6cdcfc2ba66efa41a52e09241729a697c
0b8d64d6cdcfc2ba66efa41a52e09241729a697c_0
Q: Do the experiments explore how various architectures and layers contribute towards certain decisions? Text: Introduction Following seminal work by Bengio and Collobert, the use of deep learning models for natural language processing (NLP) applications received an increasing attention in recent years. In parallel, initiated by the computer vision domain, there is also a trend toward understanding deep learning models through visualization techniques BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 or through decision tree extraction BIBREF6 . Most work dedicated to understanding neural network classifiers for NLP tasks BIBREF7 , BIBREF8 use gradient-based approaches. Recently, a technique called layer-wise relevance propagation (LRP) BIBREF4 has been shown to produce more meaningful explanations in the context of image classifications BIBREF9 . In this paper, we apply the same LRP technique to a NLP task, where a neural network maps a sequence of word2vec vectors representing a text document to its category, and evaluate whether similar benefits in terms of explanation quality are observed. In the present work we contribute by (1) applying the LRP method to the NLP domain, (2) proposing a technique for quantitative evaluation of explanation methods for NLP classifiers, and (3) qualitatively and quantitatively comparing two different explanation methods, namely LRP and a gradient-based approach, on a topic categorization task using the 20Newsgroups dataset. Explaining Predictions of Classifiers We consider the problem of explaining a prediction $f(x)$ associated to an input $x$ by assigning to each input variable $x_d$ a score $R_d$ determining how relevant the input variable is for explaining the prediction. The scores can be pooled into groups of input variables (e.g. all word2vec dimensions of a word, or all components of a RGB pixel), such that they can be visualized as heatmaps of highlighted texts, or as images. Layer-Wise Relevance Propagation Layer-wise relevance propagation BIBREF4 is a newly introduced technique for obtaining these explanations. It can be applied to various machine learning classifiers such as deep convolutional neural networks. The LRP technique produces a decomposition of the function value $f(x)$ on its input variables, that satisfies the conservation property: $$f(x)= {\textstyle \sum _d} R_d.$$ (Eq. 3) The decomposition is obtained by performing a backward pass on the network, where for each neuron, the relevance associated with it is redistributed to its predecessors. Considering neurons mapping a set of $n$ inputs $(x_i)_{i \in [1,n]}$ to the neuron activation $x_j$ through the sequence of functions: $ z_{ij} &= x_i w_{ij} + {\textstyle \frac{b_j}{n}}\\ z_j &= {\textstyle \sum _i} z_{ij}\\ x_j &= g(z_j) $ where for convenience, the neuron bias $b_j$ has been distributed equally to each input neuron, and where $g(\cdot )$ is a monotonously increasing activation function. Denoting by $R_i$ and $R_j$ the relevance associated with $x_i$ and $x_j$ , the relevance is redistributed from one layer to the other by defining messages $R_{i \leftarrow j}$ indicating how much relevance must be propagated from neuron $x_j$ to its input neuron $x_i$ in the lower layer. These messages are defined as: $ R_{i \leftarrow j} = \frac{z_{ij} + \frac{s(z_j)}{n}}{\sum _{i} z_{ij} + s(z_j)} R_j $ where $s(z_j) = \epsilon \cdot (1_{z_j \ge 0} - 1_{z_j < 0})$ is a stabilizing term that handles near-zero denominators, with $\epsilon $ set to $0.01$ . 
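As an illustration of the redistribution rule just stated (a minimal NumPy sketch, not the authors' reference implementation), the following propagates relevance through a single linear layer with the bias shared equally among its $n$ inputs and $\epsilon =0.01$:

```python
# Minimal sketch of the epsilon-LRP rule for one layer x_j = g(sum_i x_i w_ij + b_j).
import numpy as np

def lrp_linear(x, W, b, R_out, eps=0.01):
    """x: (n,) inputs, W: (n, m) weights, b: (m,) biases, R_out: (m,) upper relevances."""
    n = x.shape[0]
    z_ij = x[:, None] * W + b[None, :] / n            # contributions z_ij with shared bias
    z_j = z_ij.sum(axis=0)                            # pre-activations z_j
    s = eps * np.where(z_j >= 0, 1.0, -1.0)           # stabilizing term s(z_j)
    messages = (z_ij + s[None, :] / n) / (z_j + s)[None, :] * R_out[None, :]
    return messages.sum(axis=1)                       # R_i = sum_j R_{i<-j}

x = np.array([1.0, -0.5, 2.0])
W = np.random.randn(3, 4)
b = np.zeros(4)
R_out = np.random.rand(4)
R_in = lrp_linear(x, W, b, R_out)
print(R_in.sum(), R_out.sum())   # layer-wise conservation: both sums match
```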
The intuition behind this local relevance redistribution formula is that each input $x_i$ should be assigned relevance proportionally to its contribution in the forward pass, in a way that the relevance is preserved ( $\sum _i R_{i \leftarrow j} = R_j$ ). Each neuron in the lower layer receives relevance from all upper-level neurons to which it contributes $ R_i = {\textstyle \sum _j} R_{i \leftarrow j}. $ This pooling ensures layer-wise conservation: $\sum _i R_i = \sum _j R_j$ . Finally, in a max-pooling layer, all relevance at the output of the layer is redistributed to the pooled neuron with maximum activation (i.e. winner-take-all). An implementation of LRP can be found in BIBREF10 and downloaded from www.heatmapping.org. Sensitivity Analysis An alternative procedure called sensitivity analysis (SA) produces explanations by scoring input variables based on how they affect the decision output locally BIBREF11 , BIBREF12 . The sensitivity of an input variable is given by its squared partial derivative: $ R_d = \Big (\frac{\partial f}{\partial x_d} \Big )^2. $ Here, we note that unlike LRP, sensitivity analysis does not preserve the function value $f(x)$ , but the squared $l_2$ -norm of the function gradient: $$\Vert \nabla _{x} f(x) \Vert _2^2 = {\textstyle \sum _d} R_d.$$ (Eq. 6) This quantity is however not directly related to the amount of evidence for the category to detect. Similar gradient-based analyses BIBREF7 , BIBREF8 have been recently applied in the NLP domain, and were also used by Simonyan in the context of image classification. While recent work uses different relevance definitions for a group of input variables (e.g. gradient magnitude in Denil2 or max-norm of absolute value of simple derivatives in Simonyan), in the present work (unless otherwise stated) we employ the squared $l_2$ -norm of gradients allowing for decomposition of Eq. 6 as a sum over relevances of input variables. Experiments For the following experiments we use the 20news-bydate version of the 20Newsgroups dataset consisting of 11314/7532 train/test documents evenly distributed among twenty fine-grained categories. CNN Model As a document classifier we employ a word-based CNN similar to Kim consisting of the following sequence of layers: $ \texttt {Conv} \xrightarrow{} \texttt {ReLU} \xrightarrow{} \texttt {1-Max-Pool} \xrightarrow{} \texttt {FC} \\ $ By 1-Max-Pool we denote a max-pooling layer where the pooling regions span the whole text length, as introduced in BIBREF13 . Conv, ReLU and FC denote the convolutional layer, rectified linear units activation and fully-connected linear layer. For building the CNN numerical input we concatenate horizontally 300-dimensional pre-trained word2vec vectors BIBREF14 , in the same order the corresponding words appear in the pre-processed document, and further keep this input representation fixed during training. The convolutional operation we apply in the first neural network layer is one-dimensional and along the text sequence direction (i.e. along the horizontal direction). The receptive field of the convolutional layer neurons spans the entire word embedding space in vertical direction, and covers two consecutive words in horizontal direction. The convolutional layer filter bank contains 800 filters. Experimental Setup As pre-processing we remove the document headers, tokenize the text with NLTK, filter out punctuation and numbers, and finally truncate each document to the first 400 tokens. 
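Since the paper does not state which framework was used, the following PyTorch sketch is only meant to make the architecture above concrete: 800 filters spanning two consecutive 300-dimensional word2vec vectors, ReLU, max-pooling over the whole text length, and a fully-connected layer over the 20 classes.

```python
# Hedged sketch of the word-based CNN: Conv -> ReLU -> 1-Max-Pool -> FC.
import torch
import torch.nn as nn

class WordCNN(nn.Module):
    def __init__(self, emb_dim=300, n_filters=800, n_classes=20):
        super().__init__()
        # 1-D convolution along the word sequence, receptive field = 2 consecutive words
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=2)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, x):            # x: (batch, seq_len, emb_dim); word2vec kept fixed
        x = x.transpose(1, 2)        # -> (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x)) # Conv -> ReLU
        h = h.max(dim=2).values      # 1-Max-Pool over the whole text length
        return self.fc(h)            # FC layer producing class scores

model = WordCNN()
logits = model(torch.randn(8, 400, 300))   # 8 documents truncated to 400 tokens
print(logits.shape)                         # torch.Size([8, 20])
```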
We train the CNN by stochastic mini-batch gradient descent with momentum (with $l_2$ -norm penalty and dropout). Our trained classifier achieves a classification accuracy of 80.19%. Due to our input representation, applying LRP or SA to our neural classifier yields one relevance value per word-embedding dimension. From these single input variable relevances to obtain word-level relevances, we sum up the relevances over the word embedding space in case of LRP, and (unless otherwise stated) take the squared $l_2$ -norm of the corresponding word gradient in case of SA. More precisely, given an input document $d$ consisting of a sequence $(w_1, w_2,..., w_N)$ of $N$ words, each word being represented by a $D$ -dimensional word embedding, we compute the relevance $R(w_t)$ of the $t^\mathrm {th}$ word in the input document, through the summation: $$R(w_t) = \sum _{i=1}^{D} R_{i, t}$$ (Eq. 14) where $R_{i, t}$ denotes the relevance of the input variable corresponding to the $i^\mathrm {th}$ dimension of the $t^\mathrm {th}$ word embedding, obtained by LRP or SA as specified in Sections "Layer-Wise Relevance Propagation" & "Sensitivity Analysis" . In particular, in case of SA, the above word relevance can equivalently be expressed as: $$R_{\mathrm {SA}}(w_t) = \Vert \nabla _{{w_t}} f({d}) \Vert _2^2$$ (Eq. 15) where $f({d})$ represents the classifier's prediction for document $d$ . Note that the resulting LRP word relevance is signed, while the SA word relevance is positive. In all experiments, we use the term target class to identify the function $f(x)$ to analyze in the relevance decomposition. This function maps the neural network input to the neural network output variable corresponding to the target class. Evaluating Word-Level Relevances In order to evaluate different relevance models, we perform a sequence of “word deletions” (hereby for deleting a word we simply set the word-vector to zero in the input document representation), and track the impact of these deletions on the classification performance. We carry out two deletion experiments, starting either with the set of test documents that are initially classified correctly, or with those that are initially classified wrongly. We estimate the LRP/SA word relevances using as target class the true document class. Subsequently we delete words in decreasing resp. increasing order of the obtained word relevances. Fig. 1 summarizes our results. We find that LRP yields the best results in both deletion experiments. Thereby we provide evidence that LRP positive relevance is targeted to words that support a classification decision, while LRP negative relevance is tuned upon words that inhibit this decision. In the first experiment the SA classification accuracy curve decreases significantly faster than the random curve representing the performance change when randomly deleting words, indicating that SA is able to identify relevant words. However, the SA curve is clearly above the LRP curve indicating that LRP provides better explanations for the CNN predictions. Similar results have been reported for image classification tasks BIBREF9 . The second experiment indicates that the classification performance increases when deleting words with the lowest LRP relevance, while small SA values points to words that have less influence on the classification performance than random word selection. This result can partly be explained by the fact that in contrast to SA, LRP provides signed explanations. 
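Before turning to a more general comparison, the word-level pooling of Eqs. 14 and 15 used in these experiments can be sketched in a few lines. The gradient computation assumes a differentiable PyTorch classifier such as the sketch above; names and shapes are illustrative.

```python
import torch

def lrp_word_relevance(R):
    """R: (D, N) per-dimension LRP relevances -> (N,) word relevances (Eq. 14)."""
    return R.sum(dim=0)

def sa_word_relevance(model, doc, target_class):
    """Squared l2-norm of each word's embedding gradient (Eq. 15).

    doc: (1, D, N) word2vec representation of one document.
    """
    doc = doc.clone().requires_grad_(True)
    f = model(doc)[0, target_class]        # prediction for the target class
    f.backward()
    grad = doc.grad[0]                     # (D, N) gradients of the input variables
    return (grad ** 2).sum(dim=0)          # one positive relevance per word
```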
More generally the different quality of the explanations provided by SA and LRP can be attributed to their different objectives: while LRP aims at decomposing the global amount of evidence for a class $f(x)$ , SA is build solely upon derivatives and as such describes the effect of local variations of the input variables on the classifier decision. For a more detailed view of SA, as well as an interpretation of the LRP propagation rules as a deep Taylor decomposition see MontavonArXiv15. Document Highlighting Word-level relevances can be used for highlighting purposes. In Fig. 2 we provide such visualizations on one test document for different relevance target classes, using either LRP or SA relevance models. We can observe that while the word ride is highly negative-relevant for LRP when the target class is not rec.motorcycles, it is positively highlighted (even though not heavily) by SA. This suggests that SA does not clearly discriminate between words speaking for or against a specific classifier decision, while LRP is more discerning in this respect. Document Visualization Word2vec embeddings are known to exhibit linear regularities representing semantic relationships between words BIBREF14 . We explore if these regularities can be transferred to a document representation, when using as a document vector a linear combination of word2vec embeddings. As a weighting scheme we employ LRP or SA scores, with the classifier's predicted class as the target class for the relevance estimation. For comparison we perform uniform weighting, where we simply sum up the word embeddings of the document words (SUM). For SA we use either the $l_2$ -norm or squared $l_2$ -norm for pooling word gradient values along the word2vec dimensions, i.e. in addition to the standard SA word relevance defined in Eq. 15 , we use as an alternative $R_{\mathrm {SA}(l_2)}(w_t) = \Vert \nabla _{{w_t}} f({d}) \Vert _2$ and denote this relevance model by SA $(l_2)$ . For both LRP and SA, we employ different variations of the weighting scheme. More precisely, given an input document $d$ composed of the sequence $(w_1, w_2,..., w_N)$ of $D$ -dimensional word2vec embeddings, we build new document representations $d^{\prime }$ and $d^{\prime }_{\mathrm {e.w.}}$ by either using word-level relevances $R(w_t)$ (as in Eq. 14 ), or through element-wise multiplication of word embeddings with single input variable relevances $(R_{i, t})_{i \in [1,D]}$ (we recall that $R_{i, t}$ is the relevance of the input variable corresponding to the $i^{\mathrm {th}}$ dimension of the $(w_1, w_2,..., w_N)$0 word in the input document $(w_1, w_2,..., w_N)$1 ). More formally we use: $$d^{\prime } \; = \; \sum _{t=1}^{N} \; {{R(w_t)} \cdot {w_t}}$$ (Eq. 23) or $$d^{\prime }_{\mathrm {e.w.}} \; = \; \sum _{t=1}^{N} \; { \begin{bmatrix} R_{1, t} \\ R_{2, t} \\ \vdots \\ R_{D, t} \end{bmatrix} \odot {w_t}}$$ (Eq. 24) where $\odot $ is an element-wise multiplication. Finally we normalize the document vectors $d^{\prime }$ resp. $d^{\prime }_{\mathrm {e.w.}}$ to unit $l_2$ -norm and perform a PCA projection. In Fig. 3 we label the resulting 2D-projected test documents using five top-level document categories. For word-based models $d^{\prime }$ , we observe that while standard SA and LRP both provide similar visualization quality, the SA variant with simple $l_2$ -norm yields partly overlapping and dense clusters, still all schemes are better than uniform weighting. 
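The document-vector construction of Eq. 23 amounts to a relevance-weighted sum of word2vec embeddings followed by l2 normalization and a PCA projection; a sketch with NumPy and scikit-learn (an assumed toolchain, not necessarily the authors') is given below.

```python
import numpy as np
from sklearn.decomposition import PCA

def document_vector(word_embeddings, word_relevances):
    """word_embeddings: (N, D), word_relevances: (N,) -> unit-norm (D,) vector."""
    d = (word_relevances[:, None] * word_embeddings).sum(axis=0)   # Eq. 23
    return d / np.linalg.norm(d)

def project_documents(docs):
    """docs: list of (embeddings, relevances) pairs -> 2D points to plot."""
    vectors = np.stack([document_vector(e, r) for e, r in docs])
    return PCA(n_components=2).fit_transform(vectors)
```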
In case of SA note that, even though the power to which word gradient norms are raised ( $l_2$ or $l_2^2$ ) affects the present visualization experiment, it has no influence on the earlier described “word deletion” analysis. For element-wise models $d^{\prime }_{\mathrm {e.w.}}$ , we observe slightly better separated clusters for SA, and a clear-cut cluster structure for LRP. Conclusion Through word deleting we quantitatively evaluated and compared two classifier explanation models, and pinpointed LRP to be more effective than SA. We investigated the application of word-level relevance information for document highlighting and visualization. We derive from our empirical analysis that the superiority of LRP stems from the fact that it reliably not only links to determinant words that support a specific classification decision, but further distinguishes, within the preeminent words, those that are opposed to that decision. Future work would include applying LRP to other neural network architectures (e.g. character-based or recurrent models) on further NLP tasks, as well as exploring how relevance information could be taken into account to improve the classifier's training procedure or prediction performance. Acknowledgments This work was supported by the German Ministry for Education and Research as Berlin Big Data Center BBDC (01IS14013A) and the Brain Korea 21 Plus Program through the National Research Foundation of Korea funded by the Ministry of Education.
No
891c4af5bb77d6b8635ec4109572de3401b60631
891c4af5bb77d6b8635ec4109572de3401b60631_0
Q: What social media platform does the data come from? Text: Introduction In recent years, social networking has grown and become prevalent with every people, it makes easy for people to interact and share with each other. However, every problem has two sides. It also has some negative issues, hate speech is a hot topic in the domain of social media. With the freedom of speech on social networks and anonymity on the internet, some people are free to comment on hate and insults. Hate speech can have an adverse effect on human behavior as well as directly affect society. We don't manually delete each of those comments, which is time-consuming and boring. This spurs research to build an automated system that detects hate speech and eliminates them. With that system, we can detect and eliminate hate speech and thus reduce their spread on social media. With Vietnamese, we can use methods to apply specific extraction techniques manually and in combination with string labeling algorithms such as Conditional Random Field (CRF)[1], Model Hidden Markov (HMM)[2] or Entropy[3]. However, we have to choose the features manually to bring the model with high accuracy. Deep Neural Network architectures can handle the weaknesses of the above methods. In this report we apply Bidirectional Long Short-Term Memory (Bi-LSTM) to build the model. Also combined with the word embedding matrix to increase the accuracy of the model. The rest of the paper is organized as follows. In section 2, we presented the related work. In section 3, we described our Bi-LSTM system. In sections 4 and 5, we presented the experimental process and results. Finally, section 6 gives conclusions about the work. Related Work Gao and Huang (2017)[4] used BiLSTMs with attention mechanism 372 to detect hate speech. They illustrated that the Bi-directional LSTM model with attention mechanism achieves the high performance. They hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments. Davidson et al. (2017)[5] train a model to differentiate among three classes: containing hate speech, only offensive language, or neither.Jing Qian, Mai ElSherief, Elizabeth Belding, William Yang Wang (2018) [6] worked on classifying a tweet as racist, sexist or neither by multiple deep learning architectures. ABARUAH at SemEval-2019 [7] presented the results obtained using bi-directional long short-term memory (BiLSTM) with and without attention and Logistic Regression (LR) models for multilingual detection of hate speech against immigrants and women in Twitter. Animesh Koratana and Kevin Hu [8] use many machine learning models to detect toxic words, in which the Bi-Lstm model got the highest performance. Malmasi and Zampieri (2017)[9] made a similar study to compare the performance of different features in detecting hate speech. Bi-LSTM model for Vietnamese Hate Speech Detection As mentioned previously, we propose a framework based on the ensemble of Bi-LSTM models to perform hate speech detection with the provided dataset. Besides, we also implemented some more models to compare and find the optimal model for the task. Bi-LSTM model for Vietnamese Hate Speech Detection ::: Long Short-Term Memory LSTM takes words from an input sentence in a distributed word representation format. LSTM's network architecture includes memory cells and ports that allow the storage or retrieval of information. 
These gates help the LSTM memory cell to perform a write, read and reset operation. They enable the LSTM memory cell to store and access information over a period of time. Bi-LSTM model for Vietnamese Hate Speech Detection ::: Bidirectional Long Short-Term Memory One drawback of LSTM architecture[10] is that they are only considering the previous context. However, the identification of a word depends not only on the previous context but also on the subsequent context. Bidirectional LSTM (Bi-LSTM)[11] was created to overcome this weakness. A Bi-LSTM architecture usually contains two single LSTM networks used simultaneously and independently to model input chains in two directions: forward LSTM and backward LSTM. Pre-Processing The pre-processing performed on the text includes the following: The comments were converted to lowercase. The URLs, mentions (@) and non-alphabetic characters are removed (number, excess whitespace). Several stopwords were removed from the comments. We don't remove all stopword because having a few stopwords affect the results. A few characters that don't affect the results are replaced by an empty string. Using Tokenizer to convert each comment into a sequence of integers. Experiments ::: Dataset and Word Embeddings VLSP Shared Task 2019: Hate Speech Detection on Social Networks: This dataset includes 25431 items in csv format, the dataset was divided into two file, training dataset with 20345 items and test dataset with 5086 items. Each data line of training dataset is assigned 1 of 3 labels CLEAN, OFFENSIVE or HATE. The test dataset is not assign label. The statistic summarization of the given training dataset is described in Table I. Hate speech (HATE) contains the abusive language, which often bears the purpose of insulting individuals or groups, and can include hate speech, derogatory and offensive language. An item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (OFFENSIVE) is an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (CLEAN) is a normal item. It's conversations, expressing emotions normally. It does not contain offensive language or hate speech. In this paper, we use two different word embeddings to compare and find out the best word embedding such as Word2Vec [17] and FastText [16]. We used pre-trained vector with large dimensions to increase the accuracy of the model. Through experiments we found FastText achieved better results. For this public dataset, we find that the dataset is an unbalanced dataset. The CLEAN label has the highest rate with 91.49% and the HATE label is lowest with 3.49%. Therefore, it is difficult and challenging to find a good model for this task. Experiments ::: Evaluation on each Model For problems of this type, there are many models suitable to handle such as: SVM, Bi-LTSM, LR, GRU, CNN and etc. To solve this problem, we implement four different models (SVM, LR, Bi-LSTM, and GRU) to compare and find the most suitable one. To evaluate the four models on this task, we divide the training dataset into two parts training, testing rate of 80%, 20% respectively. The details of our models are provided below. 1. 
Support Vector Machine (SVM) Support Vector Machines (SVMs) are a popular machine learning method for classification, regression, and other learning tasks [13]. It is often used for two-class classification problems. For this problem, it has three labels, so we use the SVM to classify twice, two label at a time.Firstly, we classify two label 0 and 1, we achieved accuracy, precision, recall, and F1-score rates of 96.00%, 93.37%, 98.96%, and 96.08% respectively, on training dataset. Second time, we classify two label 1 and 2, we achieved accuracy, precision, recall, and F1-score rates of 84.34%, 87.38%, 78.86%, and 82.90% respectively. We find that this model doesn't classify well for two labels 1 and 2. Moreover, when we check this model with the public-test, it brings the result as not good as we expected with 63.87%. 2. Logistic Regression (LR) Logistic regression is basically a supervised classification algorithm. In a classification problem, the target variable(or output), can take only discrete values for a given set of features(or inputs) [14]. We have applied it to this problem as follows: Firstly, we use the TfidfVectorizer tool to convert text into feature vectors that are used as input for the model. Then, we used the Logistic Regression model to predict the classification results. When checking it on training datasets, we achieved accuracy, precision, recall, and F1-score rates of 94.17%, 88.87%, 55.54%, and 64.15% respectively. We also try submitting this model's result on the system, the result is worse we thought with 51.15% 3. Gated Recurrent Units (GRU) The Recurrent Neural Network (RNN) handles the variable-length sequence by having a recurrent hidden state whose activation at each time is dependent on that of the previous time [15]. The GRU is a variant of RNN and it only has two inputs. We have used it into this problem as follows: We have used it with word embeddings Fasttext [16]. First, we use Tokenizer() for sequences because GRU is good at processing long sequences. Then, we have applied this model to the problem. We achieved accuracy, precision, recall, and F1-score rates of 94.61%, 67.12%, 59.66%, and 64.15% respectively, on the training dataset. When we check this model with the public dataset, it brings the result quite good with 65.01% 4. Bidirectional Long Short-Term Memory (Bi-LSTM) The LSTM is a famous variant of RNN. The Bidirectional Long Short Term Memory can be trained using all available input information in the past and future of a specific time frame. We have applied it with word embeddings Fasttext [16] and baomoi.vn.model.txt [17]. As follows: As well as the GRU, we also used GloVe Embedding for sequences and applied this model for the problem. When using word embeddings baomoi.vn.model.txt [17], we achieved accuracy, precision, recall, and F1-score rates of 93.26%, 90.74%, 50.30%, and 53.62% respectively, on the training dataset. The same with word embeddings Fasttext [16], we achieved accuracy, precision, recall, and F1-score rates of 95.67%, 85.61%, 67.36%, and 73.84% respectively, on the training dataset. We find that when combining the Bi-LSTM with fasttext will bring the result better. When we check it with the public dataset, it brings the result good with 71.43% Experiments ::: Experimental Results After conducting experiments on many models, we obtained the following results on public-test, shown in Table TABREF16. We achieved the best result with Bi-LSTM, ranking the 2nd of the scoreboard on the public-test set shown in Table TABREF17. 
However, our result ranks the 6th of the scoreboard on the private-test set. Conclusion and Future Work In this paper, we have presented our approach to address Vietnamese hate speech detection task proposed at the VLSP Shared Task 2019. We develop the system using Bidirectional Long Short Memory for classifying three different labels in this task. We participate in this and evaluate the performance of our system on this dataset. As a result, our result is 71.43% of F1-score, ranking the 2nd of the scoreboard on the public-test set. In the future work, we plan to address this problem in different ways to enhance the performance of this task. We will investigate experiments both in traditional machine learning and types of deep learning for this problem. In addition, we also analyze experimental results on this task to choose the efficient approach such as the hybrid approach which combines machine learning and rule-based approaches to boost the result of detecting hate speech on Vietnamese social media text. Acknowledgment We would like to thank the VLSP Shared Task 2019 organizers for their really hard work and providing the dataset of Vietnamese Hate Speech Detection on social networks for our experiments.
Unanswerable
39a450ac15688199575798e72a2cc016ef4316b5
39a450ac15688199575798e72a2cc016ef4316b5_0
Q: How much performance improvements they achieve on SQuAD? Text: Introduction Machine reading comprehension (MRC) is a challenging task: the goal is to have machines read a text passage and then answer any question about the passage. This task is an useful benchmark to demonstrate natural language understanding, and also has important applications in e.g. conversational agents and customer service support. It has been hypothesized that difficult MRC problems require some form of multi-step synthesis and reasoning. For instance, the following example from the MRC dataset SQuAD BIBREF0 illustrates the need for synthesis of information across sentences and multiple steps of reasoning: $Q$ : What collection does the V&A Theator & Performance galleries hold? $P$ : The V&A Theator & Performance galleries opened in March 2009. ... They hold the UK's biggest national collection of material about live performance. To infer the answer (the underlined portion of the passage $P$ ), the model needs to first perform coreference resolution so that it knows “They” refers “V&A Theator”, then extract the subspan in the direct object corresponding to the answer. This kind of iterative process can be viewed as a form of multi-step reasoning. Several recent MRC models have embraced this kind of multi-step strategy, where predictions are generated after making multiple passes through the same text and integrating intermediate information in the process. The first models employed a predetermined fixed number of steps BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Later, shen2016reasonet proposed using reinforcement learning to dynamically determine the number of steps based on the complexity of the question. Further, shen2017empirical empirically showed that dynamic multi-step reasoning outperforms fixed multi-step reasoning, which in turn outperforms single-step reasoning on two distinct MRC datasets (SQuAD and MS MARCO). In this work, we derive an alternative multi-step reasoning neural network for MRC. During training, we fix the number of reasoning steps, but perform stochastic dropout on the answer module (final layer predictions). During decoding, we generate answers based on the average of predictions in all steps, rather than the final step. We call this a stochastic answer network (SAN) because the stochastic dropout is applied to the answer module; albeit simple, this technique significantly improves the robustness and overall accuracy of the model. Intuitively this works because while the model successively refines its prediction over multiple steps, each step is still trained to generate the same answer; we are performing a kind of stochastic ensemble over the model's successive prediction refinements. Stochastic prediction dropout is illustrated in Figure 1 . Proposed model: SAN The machine reading comprehension (MRC) task as defined here involves a question $Q=\lbrace q_0, q_1, ..., q_{m-1}\rbrace $ and a passage $P=\lbrace p_0, p_1, ..., p_{n-1}\rbrace $ and aims to find an answer span $A=\lbrace a_{start}, a_{end}\rbrace $ in $P$ . We assume that the answer exists in the passage $P$ as a contiguous text string. Here, $m$ and $n$ denote the number of tokens in $Q$ and $P$ , respectively. The learning algorithm for reading comprehension is to learn a function $f(Q, P) \rightarrow A$ . The training data is a set of the query, passage and answer tuples $P=\lbrace p_0, p_1, ..., p_{n-1}\rbrace $0 . We now describe our model from the ground up. 
The main contribution of this work is the answer module, but in order to understand what goes into this module, we will start by describing how $Q$ and $P$ are processed by the lower layers. Note the lower layers also have some novel variations that are not used in previous work. As shown in Figure 2 , our model contains four different layers to capture different concept of representations. The detailed description of our model is provided as follows. Lexicon Encoding Layer. The purpose of the first layer is to extract information from $Q$ and $P$ at the word level and normalize for lexical variants. A typical technique to obtain lexicon embedding is concatenation of its word embedding with other linguistic embedding such as those derived from Part-Of-Speech (POS) tags. For word embeddings, we use the pre-trained 300-dimensional GloVe vectors BIBREF5 for the both $Q$ and $P$ . Following chen2017reading, we use three additional types of linguistic features for each token $p_i$ in the passage $P$ : In summary, each token $p_i$ in the passage is represented as a 600-dimensional vector and each token $q_j$ is represented as a 300-dimensional vector. Due to different dimensions for the passages and questions, in the next layer two different bidirectional LSTM (BiLSTM) BIBREF6 may be required to encode the contextual information. This, however, introduces a large number of parameters. To prevent this, we employ an idea inspired by BIBREF7 : use two separate two-layer position-wise Feed-Forward Networks (FFN), $FFN(x)=W_2 ReLU(W_1 x +b_1) + b_2$ , to map both the passage and question lexical encodings into the same number of dimensions. Note that this FFN has fewer parameters compared to a BiLSTM. Thus, we obtain the final lexicon embeddings for the tokens in $Q$ as a matrix $E^q \in \mathbb {R}^{d \times m}$ and tokens in $P$ as $E^p\in \mathbb {R}^{d \times n}$ . Contextual Encoding Layer. Both passage and question use a shared two-layers BiLSTM as the contextual encoding layer, which projects the lexicon embeddings to contextual embeddings. We concatenate a pre-trained 600-dimensional CoVe vectors BIBREF8 trained on German-English machine translation dataset, with the aforementioned lexicon embeddings as the final input of the contextual encoding layer, and also with the output of the first contextual encoding layer as the input of its second encoding layer. To reduce the parameter size, we use a maxout layer BIBREF9 at each BiLSTM layer to shrink its dimension. By a concatenation of the outputs of two BiLSTM layers, we obtain $H^q\in \mathbb {R}^{2d \times m}$ as representation of $Q$ and $H^p\in \mathbb {R}^{2d \times n}$ as representation of $P$ , where $d$ is the hidden size of the BiLSTM. Memory Generation Layer. In the memory generation layer, We construct the working memory, a summary of information from both $Q$ and $P$ . First, a dot-product attention is adopted like in BIBREF7 to measure the similarity between the tokens in $Q$ and $P$ . Instead of using a scalar to normalize the scores as in BIBREF7 , we use one layer network to transform the contextual information of both $Q$ and $P$ : $$C=dropout(f_{attention}(\hat{H}^q, \hat{H}^p)) \in \mathbb {R}^{m \times n}\\$$ (Eq. 8) $C$ is an attention matrix. Note that $\hat{H^q}$ and $\hat{H^p}$ is transformed from $H^q$ and $H^p$ by one layer neural network $ReLU(W_3x)$ , respectively. 
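A compact PyTorch sketch of two pieces just described may be useful: the position-wise FFN of the lexicon encoding layer and the attention matrix C built from the transformed contextual encodings. The softmax normalization over question positions is our reading of "dot-product attention" and is not spelled out in the text; all dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionWiseFFN(nn.Module):
    """FFN(x) = W2 ReLU(W1 x + b1) + b2, applied to every token independently."""
    def __init__(self, d_in, d_out, d_hidden=128):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_hidden)
        self.w2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):                      # x: (seq_len, d_in)
        return self.w2(torch.relu(self.w1(x)))

def attention_matrix(h_q, h_p, w3, dropout_p=0.4):
    """C = dropout(f_attention(ReLU(W3 Hq), ReLU(W3 Hp))), shape (m, n).

    h_q: (m, 2d) question encodings, h_p: (n, 2d) passage encodings,
    w3: a shared one-layer transform (e.g. nn.Linear).
    """
    q_hat = torch.relu(w3(h_q))                # (m, d')
    p_hat = torch.relu(w3(h_p))                # (n, d')
    scores = q_hat @ p_hat.t()                 # (m, n) dot-product similarities
    # Normalizing over question positions is an assumption on our part.
    return F.dropout(torch.softmax(scores, dim=0), p=dropout_p)
```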
Next, we gather all the information on passages by a simple concatenation of its contextual information $H^p$ and its question-aware representation $H^q \cdot C$ : $$U^p = concat(H^p, H^qC) \in \mathbb {R}^{4d \times n}$$ (Eq. 9) Typically, a passage may contain hundred of tokens, making it hard to learn the long dependencies within it. Inspired by BIBREF10 , we apply a self-attended layer to rearrange the information $U^p$ as: $$\hat{U}^p = U^p drop_{diag}(f_{attention}(U^p, U^p)).$$ (Eq. 10) In other words, we first obtain an $n \times n$ attention matrix with $U^p$ onto itself, apply dropout, then multiply this matrix with $U^p$ to obtain an updated $\hat{U}^p$ . Instead of using a penalization term as in BIBREF10 , we dropout the diagonal of the similarity matrix forcing each token in the passage to align to other tokens rather than itself. At last, the working memory is generated by using another BiLSTM based on all the information gathered: $$M=BiLSTM([U^p; \hat{U}^p])$$ (Eq. 11) where the semicolon mark $;$ indicates the vector/matrix concatenation operator. Answer module. There is a Chinese proverb that says: “wisdom of masses exceeds that of any individual." Unlike other multi-step reasoning models, which only uses a single output either at the last step or some dynamically determined final step, our answer module employs all the outputs of multiple step reasoning. Intuitively, by applying dropout, it avoids a “step bias problem" (where models places too much emphasis one particular step's predictions) and forces the model to produce good predictions at every individual step. Further, during decoding, we reuse wisdom of masses instead of individual to achieve a better result. We call this method “stochastic prediction dropout" because dropout is being applied to the final predictive distributions. Formally, our answer module will compute over $T$ memory steps and output the answer span. This module is a memory network and has some similarities to other multi-step reasoning networks: namely, it maintains a state vector, one state per step. At the beginning, the initial state $s_0$ is the summary of the $Q$ : $s_0=\sum _j \alpha _j H^q_{j}$ , where $\alpha _j = \frac{exp(w_4 \cdot H^q_j)}{\sum _{j^{\prime }}exp(w_4 \cdot H^q_{j^{\prime }})}$ . At time step $t$ in the range of $\lbrace 1, 2, ..., T-1\rbrace $ , the state is defined by $s_t = GRU(s_{t-1}, x_t)$ . Here, $x_t$ is computed from the previous state $s_{t-1}$ and memory $s_0$0 : $s_0$1 and $s_0$2 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step $s_0$3 . $$P_t^{begin} = softmax(s_tW_6M)$$ (Eq. 12) $$P_t^{end} = softmax([s_t; \sum _j P_{t,j}^{begin}M_j]W_7M).$$ (Eq. 13) From a pair of begin and end points, the answer string can be extracted from the passage. However, rather than output the results (start/end points) from the final step (which is fixed at $T-1$ as in Memory Networks or dynamically determined as in ReasoNet), we utilize all of the $T$ outputs by averaging the scores: $$P^{begin} = avg([P_0^{begin}, P_1^{begin}, ..., P_{T-1}^{begin}])$$ (Eq. 14) $$P^{end} = avg([P_0^{end}, P_1^{end}, ..., P_{T-1}^{end}])$$ (Eq. 15) Each $P_t^{begin}$ or $P_t^{end}$ is a multinomial distribution over $\lbrace 1,\ldots ,n\rbrace $ , so the average distribution is straightforward to compute. During training, we apply stochastic dropout to before the above averaging operation. 
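The averaging with stochastic prediction dropout can be written in a few lines. The sketch below drops whole steps' begin/end distributions with probability 0.4 during training (the rate used in the paper's experiments) while keeping at least one step active, and averages all T steps at decoding time; tensor shapes and the PyTorch framing are illustrative assumptions.

```python
import torch

def average_predictions(step_distributions, dropout_rate=0.4, training=True):
    """step_distributions: (T, n) per-step distributions (P_t^begin or P_t^end).

    Training: randomly drop entire steps, keeping at least one, then average.
    Decoding: average all T steps as in Eqs. 14-15.
    """
    T = step_distributions.shape[0]
    if training:
        keep = torch.rand(T) >= dropout_rate
        if not keep.any():                        # ensure one active step
            keep[torch.randint(T, (1,))] = True
        return step_distributions[keep].mean(dim=0)
    return step_distributions.mean(dim=0)
```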
For example, as illustrated in Figure 1 , we randomly delete several steps' predictions in Equations 14 and 15 so that $P^{begin}$ might be $avg([P_1^{begin}, P_3^{begin}])$ and $P^{end}$ might be $avg([P_0^{end}, P_3^{end}, P_{4}^{end}])$ . The use of averaged predictions and dropout during training improves robustness. Our stochastic prediction dropout is similar in motivation to the dropout introduced by BIBREF11 . The difference is that theirs is dropout at the intermediate node-level, whereas ours is dropout at the final layer-level. Dropout at the node-level prevents correlation between features. Dropout at the final layer level, where randomness is introduced to the averaging of predictions, prevents our model from relying exclusively on a particular step to generate correct output. We used a dropout rate of 0.4 in experiments. Experiment Setup Dataset: We evaluate on the Stanford Question Answering Dataset (SQuAD) BIBREF0 . This contains about 23K passages and 100K questions. The passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowdsourcing. The crowdsourced workers are asked to read a passage (a paragraph), come up with questions, then mark the answer span. All results are on the official development set, unless otherwise noted. Two evaluation metrics are used: Exact Match (EM), which measures the percentage of span predictions that matched any one of the ground truth answer exactly, and Macro-averaged F1 score, which measures the average overlap between the prediction and the ground truth answer. Implementation details: The spaCy tool is used to tokenize the both passages and questions, and generate lemma, part-of-speech and named entity tags. We use 2-layer BiLSTM with $d=128$ hidden units for both passage and question encoding. The mini-batch size is set to 32 and Adamax BIBREF12 is used as our optimizer. The learning rate is set to 0.002 at first and decreased by half after every 10 epochs. We set the dropout rate for all the hidden units of LSTM, and the answer module output layer to 0.4. To prevent degenerate output, we ensure that at least one step in the answer module is active during training. Results The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module: The main results in terms of EM and F1 are shown in Table 1 . We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models. Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM. SAN also outperforms a 5-step memory net with averaging, which implies averaging predictions is not the only thing that led to SAN's superior results; indeed, stochastic prediction dropout is an effective technique. The K-best oracle results is shown in Figure 3 . The K-best spans are computed by ordering the spans according the their probabilities $P^{begin} \times P^{end}$ . We limit K in the range 1 to 4 and then pick the span with the best EM or F1 as oracle. SAN also outperforms the other models in terms of K-best oracle scores. Impressively, these models achieve human performance at $K=2$ for EM and $K=3$ for F1. Finally, we compare our results with other top models in Table 2 . Note that all the results in Table 2 are taken from the published papers. 
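For completeness, the span scoring behind the K-best oracle reported below — ordering candidate spans by $P^{begin} \times P^{end}$ with K up to 4 — can be sketched as follows. The maximum span length cutoff is an assumption we add to keep the enumeration cheap; it is not specified in the text.

```python
def k_best_spans(p_begin, p_end, k=4, max_len=30):
    """Rank answer spans (i, j), i <= j, by p_begin[i] * p_end[j]."""
    n = len(p_begin)
    candidates = []
    for i in range(n):
        for j in range(i, min(i + max_len, n)):
            candidates.append((p_begin[i] * p_end[j], i, j))
    candidates.sort(reverse=True)
    return candidates[:k]
```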
We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity. Note that the best-performing model BIBREF14 used a large-scale language model as an extra contextual embedding, which gave a significant improvement (+4.3% dev F1). We expect significant improvements if we add this to SAN in future work. How robust are the results? We are interested in whether the proposed model is sensitive to different random initial conditions. Table 3 shows the development set scores of SAN trained from initialization with different random seeds. We observe that the SAN results are consistently strong regardless of the 10 different initializations. For example, the mean EM score is 76.131 and the lowest EM score is 75.922, both of which still outperform the 75.355 EM of the Dynamic step ReasoNet in Table 1 . We are also interested in how sensitive are the results to the number of reasoning steps, which is a fixed hyper-parameter. Since we are using dropout, a natural question is whether we can extend the number of steps to an extremely large number. Table 4 shows the development set scores for $T=1$ to $T=10$ . We observe that there is a gradual improvement as we increase $T=1$ to $T=5$ , but after 5 steps the improvements have saturated. In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standard deviation of 0.142 and a spread of 0.426 (for EM), we believe that the $T=10$ result does not statistically differ from the $T=5$ result. In summary, we think it is useful to perform some approximate hyper-parameter tuning for the number of steps, but it is not necessary to find the exact optimal value. Finally, we test SAN on two Adversarial SQuAD datasets, AddSent and AddOneSent BIBREF22 , where the passages contain auto-generated adversarial distracting sentences to fool computer systems that are developed to answer questions about the passages. For example, AddSent is constructed by adding sentences that look similar to the question, but do not actually contradict the correct answer. AddOneSent is constructed by appending a random human-approved sentence to the passage. We evaluate the single SAN model (i.e., the one presented in Table 2 ) on both AddSent and AddOneSent. The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN's superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent. Is it possible to use different numbers of steps in test vs. train? For practical deployment scenarios, prediction speed at test time is an important criterion. Therefore, one question is whether SAN can train with, e.g. $T=5$ steps but test with $T=1$ steps. Table 6 shows the results of a SAN trained on $T=5$ steps, but tested with different number of steps. As expected, the results are best when $T$ matches during training and test; however, it is important to note that small numbers of steps $T=1$ and $T=2$ nevertheless achieve strong results. For example, prediction at $T=1$ achieves 75.58, which outperforms a standard 1-step model (75.14 EM) as in Table 1 that has approximate equivalent prediction time. How does the training time compare? The average training time per epoch is comparable: our implementation running on a GTX Titan X is 22 minutes for 5-step memory net, 30 minutes for ReasoNet, and 24 minutes for SAN. 
The learning curve is shown in Figure 4 . We observe that all systems improve at approximately the same rate up to 10 or 15 epochs. However, SAN continues to improve afterwards as other models start to saturate. This observation is consistent with previous works using dropout BIBREF11 . We believe that while training time per epoch is similar between SAN and other models, it is recommended to train SAN for more epochs in order to achieve gains in EM/F1. How does SAN perform by question type? To see whether SAN performs well on a particular type of question, we divided the development set by questions type based on their respective Wh-word, such as “who" and “where". The score breakdown by F1 is shown in Figure 5 . We observe that SAN seems to outperform other models uniformly across all types. The only exception is the Why questions, but there is too little data to derive strong conclusions. Experiments results on MS MARCO MS MARCO BIBREF27 is a large scale real-word RC dataset which contains 100,100 (100K) queries collected from anonymized user logs from the Bing search engine. The characteristic of MS MARCO is that all the questions are real user queries and passages are extracted from real web documents. For each query, approximate 10 passages are extracted from public web documents. The answers are generated by humans. The data is partitioned into a 82,430 training, a 10,047 development and 9,650 test tuples. The evaluation metrics are BLEU BIBREF28 and ROUGE-L BIBREF29 due to its free-form text answer style. To apply the same RC model, we search for a span in MS MARCO's passages that maximizes the ROUGE-L score with the raw free-form answer. It has an upper bound of 93.45 BLEU and 93.82 ROUGE-L on the development set. The MS MARCO dataset contains multiple passages per query. Our model as shown in Figure 2 is developed to generate answer from a single passage. Thus, we need to extend it to handle multiple passages. Following BIBREF13 , we take two steps to generate an answer to a query $Q$ from $J$ passages, $P^1, ..., P^J$ . First, we run SAN on every ( $P^j, Q$ ) pair, generating $J$ candidate answer spans, one from each passage. Then, we multiply the SAN score of each candidate answer span with its relevance score $r(P^j, Q)$ assigned by a passage ranker, and output the span with the maximum score as the answer. In our experiments, we use the passage ranker described in BIBREF30 . The ranker is trained on the same MS MARCO training data, and achieves 37.1 p@1 on the development set. The results in Table 7 show that SAN outperforms V-Net BIBREF31 and becomes the new state of the art. Related Work The recent big progress on MRC is largely due to the availability of the large-scale datasets BIBREF0 , BIBREF27 , BIBREF32 , BIBREF1 , since it is possible to train large end-to-end neural network models. In spite of the variety of model structures and attenion types BIBREF33 , BIBREF34 , BIBREF35 , BIBREF21 , BIBREF13 , BIBREF19 , a typical neural network MRC model first maps the symbolic representation of the documents and questions into a neural space, then search answers on top of it. We categorize these models into two groups based on the difference of the answer module: single-step and multi-step reasoning. The key difference between the two is what strategies are applied to search the final answers in the neural space. A single-step model matches the question and document only once and produce the final answers. 
It is simple yet efficient and can be trained using the classical back-propagation algorithm, thus it is adopted by most systems BIBREF34 , BIBREF21 , BIBREF19 , BIBREF18 , BIBREF36 , BIBREF37 , BIBREF17 . However, since humans often solve question answering tasks by re-reading and re-digesting the document multiple times before reaching the final answers (this may be based on the complexity of the questions/documents), it is natural to devise an iterative way to find answers as multi-step reasoning. Pioneered by BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , who used a predetermined fixed number of reasoning steps, Shen et al shen2016reasonet, shen2017empirical showed that multi-step reasoning outperforms single-step ones and dynamic multi-step reasoning further outperforms the fixed multi-step ones on two distinct MRC datasets (SQuAD and MS MARCO). But these models have to be trained using reinforcement learning methods, e.g., policy gradient, which are tricky to implement due to the instability issue. Our model is different in that we fix the number of reasoning steps, but perform stochastic dropout to prevent step bias. Further, our model can also be trained by using the back-propagation algorithm, which is simple and yet efficient. Conclusion We introduce Stochastic Answer Networks (SAN), a simple yet robust model for machine reading comprehension. The use of stochastic dropout in training and averaging in test at the answer module leads to robust improvements on SQuAD, outperforming both fixed step memory networks and dynamic step ReasoNet. We further empirically analyze the properties of SAN in detail. The model achieves results competitive with the state-of-the-art on the SQuAD leaderboard, as well as on the Adversarial SQuAD and MS MARCO datasets. Due to the strong connection between the proposed model with memory networks and ReasoNet, we would like to delve into the theoretical link between these models and its training algorithms. Further, we also would like to explore SAN on other tasks, such as text classification and natural language inference for its generalization in the future. Acknowledgments We thank Pengcheng He, Yu Wang and Xinying Song for help to set up dockers. We also thank Pranav Samir Rajpurkar for help on SQuAD evaluations, and the anonymous reviewers for valuable discussions and comments.
Compared to the baselines in Table 1, SAN shows an improvement of 1.096% in EM and 0.689% in F1. Compared to other published SQuAD results (Table 2), SAN is ranked second.

de015276dcde4e7d1d648c6e31100ec80f61960f
de015276dcde4e7d1d648c6e31100ec80f61960f_0
Q: Do the authors perform experiments using their proposed method? Text: Introduction If you're good at replying to a single request, are you also likely to be good at doing dialogue? Much current work seems to assume that the answer to this question is yes, in that it attempts a scaling up from single pairs of utterance plus response to longer dialogues: See, e.g., the work on neural chatbots following on from BIBREF0, where the main evaluation metric is “next utterance retrieval”; and on visual dialogue BIBREF1, which views itself as a natural extension of visual question answering BIBREF2. If you assume, however, that dialogue crucially is a joint project between its participants in a way that single exchanges are not, you're likely to put more focus on coordination phenomena BIBREF3, but may end up with settings that combine multiple language capabilities in ways that current methods cannot yet seem to capture. (See, for example, the dialogues collected in BIBREF4.) In this short paper, we contribute a type of setting that introduces such coordination phenomena, while still allowing for control of the complexity of the resulting interaction. Visual Dialogue as Example of the Scaling Up Approach Figure FIGREF2 shows an example interaction with the original Visual Dialogue system BIBREF1. The competence of the system is impressive from a multimodal grounding perspective — it gets right several questions aiming at different aspects of the image. It is also clear, however, that this is a modest step beyond single-shot visual question answering BIBREF2. It seems that here the (human) questioner is doing all the work of keeping the dialogue alive, and there is little that suggest that the answerer is keeping any state about the dialogue. Later work by BIBREF6 on the “visual dialogue” dataset BIBREF1 indeed identified co-reference in the questions as the main issue that distinguishes this setting from one-shot question answering. One shortcoming of this setting—that the questioner was not provided with a good reason for why they are asking questions in the first place—was addressed in some related work: In the GuessWhat? setting introduced by BIBREF7 at around the same time as Visual Dialogue, a questioner is asking polar questions about an image, with the goal of identifying an object known only to the questioner. In the ALICE variant of the visual dialogue setting, the questioner is asking questions with the later goal in mind of identifying the image from a set also containing distractor images BIBREF8. These variants of the general setting provide purpose to the questioner, but not to the answerer, which is the target of the modelling effort; and, crucially, it does not give the dialogue a joint purpose, a shared sense of semantic ownership of the interaction, which is a central feature of most genres of human interaction BIBREF3. Coming back to the visual dialogue setting, it can be assumed that the crowd workers that created the original data did try to orient themselves to the usual maxims that govern conversational behaviour. However, being constrained by the rigid roles of questioner and answerer, and with the perceptual task being so easy for them, a need for dealing with miscommunication never arose for them and hence no such strategies can be learned from that data. 
That this is missing from the resulting agents can easily been shown in cases where something goes wrong, but normal repair mechanisms BIBREF9 are not available, as in the example interaction we created shown in Figure FIGREF5. Agreement Games In Herbert Clark's (BIBREF3) model of dialogue, the mutual need for ensuring understanding—“sufficient to current purposes”—is the main structuring force in dialogue. As a metaphor for this interaction management process, Clark uses the notion of a “secondary track”, on which the constant negotiation of this understanding happens. This can be done through quite subtle methods, such as simply producing a continuation that displays through its fit an understanding of the previous utterance, as well as through specially designed markers (such as feedback utterances like “uhu”). This model has been influential in the design of spoken dialogue systems BIBREF13, BIBREF14, where it has been incorporated in the design of dialogue state update rules. In the data-driven era, however, it seems to have become less well known, and many datasets almost appear as to be designed in such a way as to limit the possibility of grounding interactions. The idea behind our setting of “agreement games” is to make this secondary track more prominent and hence more easy to pick up from the data, by making reaching mutual understanding on the answer to the game question an explicit goal. Or, in Clark's term, the grounding criterion for answering the question is raised so as to make reaching mutual understanding on it an explicit, rather than as normally implicit, goal. The representational challenge is that it is an abstract object—the understanding of the discourse—that is jointly constructed, purely through verbal actions. Agreement Games ::: More formally An Agreement Game is a dialogue game with two regular participants, $\mathcal {P} = \lbrace P_1, P_2\rbrace $, and a disinterested third participant, $N$ (for Nature). $N$ poses a question $Q$ to the players $\mathcal {P}$, and provides them with information $I$ required to answer the question; possibly split up over the players. If $I$ contains visual information, we call the game a Grounded Agreement Game. The players can exchange messages in an unrestricted way. The game ends when one of the players explicitly proposes an answer $A$ and the other player explicity agrees with the proposal. As the answer $A$ will be based on a construal of $I$, the agreement on $A$ is also an agreement on that construal. Optionally, a reward can be given to the players after they have provided their joint answer, tied to some measure of quality of $A$. We illustrate the concept by discussing some instantiations that we have recently experimented with. Some Examples ::: The MeetUp Game In the MeetUp game BIBREF17, BIBREF18, the two participants are presented with an environment through which they can (separately) navigate and which is represented to static photographs or real indoor scenes (e.g., a picture of a bedroom, a kitchen, etc.) Their goal is to meet up in the same room, of a type previously told to them. (E.g., they might be told at the start of the game: “your goal is to meet up in a room of type kitchen.”) As the positions of the players are not represented graphically, the only way they can be sure of whether they have reached that goal is by conversing (via chat messages). Once they have come to the conclusion that they are in the same room, they can end the game by each sending a special signal. 
If they are indeed in such a winning constellation, they will receive a bonus. Unlike the Visual Dialogue setting discussed above, this setting ensures informational symmetry between the participants (both have access to the same type of information; but not the same information, as they can't “see” each other). More importantly, however, the constraint that the game only ends if they both agree ensures a “committment symmetry”, where the success of the game must be ensured by both participants. The design also provides for a clear “relevance place” at which an opportunity arises for semantic negotiation, namely, before the final decision is made. An example of this is shown in the example below. (The number in the parentheses indicate the time, relative to the beginning of the interaction, when the utterance was made.) . B (00:00:34): okay I think I'm there if I understand utility room B (00:00:42): It has a washer and dryer A (00:00:46): I was wondering too. This is sorta like a laundry room. A (00:00:55): This has pet bowl on the floor below a window. B (00:01:00): ok... let us keep looking A (00:01:22): And a small kids looking suit hanging on the wall. And a big banner above the window. B (00:01:33): Are you saying a utility room is like a laundry room? B (00:02:00): let me find you A (00:02:07): Google says, a room equipped with appliances for washing and other domestic work. A (00:02:09): So I think so. Some Examples ::: The MatchIt Game The MatchIt Game (Ilinykh et al., forthcoming) is a yet further simplified visual game. Here, the goal simply is to decide whether you and your partner are both looking at the same image (of the same genre as in MeetUp). In that sense, it is a reduction of the MeetUP game to the final stage, taking out the navigation aspect. As example SECREF12 shows, this can similarly lead to meta-semantic interaction, where classifications are revised. As SECREF12 shows, even in cases where a decision can be reached quickly, there can be an explicit mutual confirmation step, before the (silent) decision signal is sent. . B (00:00:25): white kitchen? A (00:00:25): im in a bathroom B (00:00:28): ah B (00:00:32): well wait B (00:00:38): there is something that looks like a big bath B (00:00:44): is it all white? A (00:00:54): yes its white and I see a bit of a kitchen A (00:01:11): yes B (00:01:11): are you sure it's a bathroom lol A (00:01:16): no its not a bathroom haha . A (00:00:24): i see stairs B (00:00:25): I see a staircase with a bike with wicker basket at the bottom of the staircase B (00:00:31): do you have a bike? A (00:00:39): no bike B: (00:00:46): okay..it is different A (00:00:54): yes Some Examples ::: The Concept Learning Game A third setting that we have explored BIBREF19 brings conceptual negotiation more clearly into the foreground. In that game, the players are presented with images of birds of particular species and are tasked with coming up with a description of common properties. Again, the final answer has to be approved by both participants. As SECREF13 shows, this can lead to an explicit negotiation of conceptual content. . Conclusions We have argued that some prominent current dialogue settings lack room for the occurence of coordination phenomena prevalent in natural dialogue. We have shown a simple condition that brings out the need for coordination in an explicit way, but still can easily be added to controlled (and controllable) dialogue settings.
Yes
56836afc57cae60210fa1e5294c88e40bb10cc0e
56836afc57cae60210fa1e5294c88e40bb10cc0e_0
Q: What NLP tasks do the authors evaluate feed-forward networks on? Text: Introduction Deep and recurrent neural networks with large network capacity have become increasingly accurate for challenging language processing tasks. For example, machine translation models have been able to attain impressive accuracies, with models that use hundreds of millions BIBREF0 , BIBREF1 or billions BIBREF2 of parameters. These models, however, may not be feasible in all computational settings. In particular, models running on mobile devices are often constrained in terms of memory and computation. Long Short-Term Memory (LSTM) models BIBREF3 have achieved good results with small memory footprints by using character-based input representations: e.g., the part-of-speech tagging models of gillick2016brnn have only roughly 900,000 parameters. Latency, however, can still be an issue with LSTMs, due to the large number of matrix multiplications they require (eight per LSTM cell): kimrush2016 report speeds of only 8.8 words/second when running a two-layer LSTM translation system on an Android phone. Feed-forward neural networks have the potential to be much faster. In this paper, we show that small feed-forward networks can achieve results at or near the state-of-the-art on a variety of natural language processing tasks, with an order of magnitude speedup over an LSTM-based approach. We begin by introducing the network model structure and the character-based representations we use throughout all tasks (§ "Small Feed-Forward Network Models" ). The four tasks that we address are: language identification (Lang-ID), part-of-speech (POS) tagging, word segmentation, and preordering for translation. In order to use feed-forward networks for structured prediction tasks, we use transition systems BIBREF4 , BIBREF5 with feature embeddings as proposed by chen-manning:2014:EMNLP, and introduce two novel transition systems for the last two tasks. We focus on budgeted models and ablate four techniques (one on each task) for improving accuracy for a given memory budget: We achieve results at or near state-of-the-art with small ( $< 3$ MB) models on all four tasks. Small Feed-Forward Network Models The network architectures are designed to limit the memory and runtime of the model. Figure 1 illustrates the model architecture: Memory needs are dominated by the embedding matrix sizes ( $\sum _g V_g D_g$ , where $V_g$ and $D_g$ are the vocabulary sizes and dimensions respectively for each feature group $g$ ), while runtime is strongly influenced by the hidden layer dimensions. Experiments We experiment with small feed-forward networks for four diverse NLP tasks: language identification, part-of-speech tagging, word segmentation, and preordering for statistical machine translation. Language Identification Recent shared tasks on code-switching BIBREF14 and dialects BIBREF15 have generated renewed interest in language identification. We restrict our focus to single language identification across diverse languages, and compare to the work of baldwin2010language on predicting the language of Wikipedia text in 66 languages. For this task, we obtain the input $\mathbf {h}_0$ by separately averaging the embeddings for each n-gram length ( $N=[1,4]$ ), as summation did not produce good results. Table 1 shows that we outperform the low-memory nearest-prototype model of baldwin2010language. Their nearest neighbor model is the most accurate but its memory scales linearly with the size of the training data. 
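A minimal sketch of the Lang-ID input construction described above: character n-grams for n = 1..4 are embedded, the embeddings are averaged separately per n-gram length, and the concatenated result feeds a small feed-forward classifier over 66 languages. The feature hashing (Python's built-in hash stands in for the real scheme), the bucket count, and the hidden size are illustrative assumptions; the production model is the open-source CLD3 mentioned in the next paragraph.

```python
import torch
import torch.nn as nn

class LangID(nn.Module):
    """Average hashed character n-gram embeddings per n-gram length, then classify."""
    def __init__(self, n_buckets=4096, embed_dim=16, n_langs=66, max_n=4):
        super().__init__()
        self.embeds = nn.ModuleList(
            nn.EmbeddingBag(n_buckets, embed_dim, mode="mean") for _ in range(max_n))
        self.hidden = nn.Linear(max_n * embed_dim, 128)
        self.out = nn.Linear(128, n_langs)

    def forward(self, text):
        groups = []
        for n, embed in enumerate(self.embeds, start=1):
            # Python's hash() is a stand-in for the real feature hashing.
            ids = [hash(text[i:i + n]) % embed.num_embeddings
                   for i in range(len(text) - n + 1)]
            ids = torch.tensor(ids or [0]).unsqueeze(0)
            groups.append(embed(ids))                 # mean-pooled n-gram embedding
        h0 = torch.cat(groups, dim=1)                 # concatenation of per-n averages
        return self.out(torch.relu(self.hidden(h0)))  # language scores
```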
Moreover, we can apply quantization to the embedding matrix without hurting prediction accuracy: it is better to use less precision for each dimension, but to use more dimensions. Our subsequent models all use quantization. There is no noticeable variation in processing speed when performing dequantization on-the-fly at inference time. Our 16-dim Lang-ID model runs at 4450 documents/second (5.6 MB of text per second) on the preprocessed Wikipedia dataset. These techniques back the open-source Compact Language Detector v3 (CLD3) that runs in Google Chrome browsers. Our experimental Lang-ID model uses the same overall architecture as CLD3, but uses a simpler feature set, less involved preprocessing, and covers fewer languages. POS Tagging We apply our model as an unstructured classifier to predict a POS tag for each token independently, and compare its performance to that of the byte-to-span (BTS) model BIBREF16 . BTS is a 4-layer LSTM network that maps a sequence of bytes to a sequence of labeled spans, such as tokens and their POS tags. Both approaches limit model size by using small input vocabularies: byte values in the case of BTS, and hashed character n-grams and (optionally) cluster ids in our case. It is well known that word clusters can be powerful features in linear models for a variety of tasks BIBREF17 , BIBREF18 . Here, we show that they can also be useful in neural network models. However, naively introducing word cluster features drastically increases the amount of memory required, as a word-to-cluster mapping file with hundreds of thousands of entries can be several megabytes on its own. By representing word clusters with a Bloom map BIBREF19 , a key-value based generalization of Bloom filters, we can reduce the space required by a factor of $\sim $ 15 and use 300KB to (approximately) represent the clusters for 250,000 word types. In order to compare against the monolingual setting of gillick2016brnn, we train models for the same set of 13 languages from the Universal Dependency treebanks v1.1 BIBREF20 corpus, using the standard predefined splits. As shown in Table 2 , our best models are 0.3% more accuate on average across all languages than the BTS monolingual models, while using 6x fewer parameters and 36x fewer FLOPs. The cluster features play an important role, providing a 15% relative reduction in error over our vanilla model, but also increase the overall size. Halving all feature embedding dimensions (except for the cluster features) still gives a 12% reduction in error and trims the overall size back to 1.1x the vanilla model, staying well under 1MB in total. This halved model configuration has a throughput of 46k tokens/second, on average. Two potential advantages of BTS are that it does not require tokenized input and has a more accurate multilingual version, achieving 95.85% accuracy. From a memory perspective, one multilingual BTS model will take less space than separate FF models. However, from a runtime perspective, a pipeline of our models doing language identification, word segmentation, and then POS tagging would still be faster than a single instance of the deep LSTM BTS model, by about 12x in our FLOPs estimate. Segmentation Word segmentation is critical for processing Asian languages where words are not explicitly separated by spaces. Recently, neural networks have significantly improved segmentation accuracy BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . 
We use a structured model based on the transition system in Table 3 , and similar to the one proposed by zhang-clark2007. We conduct the segmentation experiments on the Chinese Treebank 6.0 with the recommended data splits. No external resources or pretrained embeddings are used. Hashing was detrimental to quality in our preliminary experiments, hence we do not use it for this task. To learn an embedding for unknown characters, we cast characters occurring only once in the training set to a special symbol. Because we are not using hashing here, we need to be careful about the size of the input vocabulary. The neural network with its non-linearity is in theory able to learn bigrams by conjoining unigrams, but it has been shown that explicitly using character bigram features leads to better accuracy BIBREF21 , BIBREF26 . zhang-zhang-fu:2016:P16-1 suggests that embedding manually specified feature conjunctions further improves accuracy (`zhang-zhang-fu:2016:P16-1-combo' in Table 4 ). However, such embeddings could easily lead to a model size explosion and thus are not considered in this work. The results in Table 4 show that spending our memory budget on small bigram embeddings is more effective than on larger character embeddings, in terms of both accuracy and model size. Our model featuring bigrams runs at 110KB of text per second, or 39k tokens/second. Preordering Preordering source-side words into the target-side word order is a useful preprocessing task for statistical machine translation BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 . We propose a novel transition system for this task (Table 5 ), so that we can repeatedly apply a small network to produce these permutations. Inspired by a non-projective parsing transition system BIBREF31 , the system uses a swap action to permute spans. The system is sound for permutations: any derivation will end with all of the input words in a permuted order, and complete: all permutations are reachable (use shift and swap operations to perform a bubble sort, then append $n-1$ times to form a single span). For training and evaluation, we use the English-Japanese manual word alignments from nakagawa2015efficient. For preordering, we experiment with either spending all of our memory budget on reordering, or spending some of the memory budget on features over predicted POS tags, which also requires an additional neural network to predict these tags. Full feature templates are in the supplementary material. As the POS tagger network uses features based on a three word window around the token, another possibility is to add all of the features that would have affected the POS tag of a token to the reorderer directly. Table 6 shows results with or without using the predicted POS tags in the preorderer, as well as including the features used by the tagger in the reorderer directly and only training the downstream task. The preorderer that includes a separate network for POS tagging and then extracts features over the predicted tags is more accurate and smaller than the model that includes all the features that contribute to a POS tag in the reorderer directly. This pipeline processes 7k tokens/second when taking pretokenized text as input, with the POS tagger accounting for 23% of the computation time. Conclusions This paper shows that small feed-forward networks are sufficient to achieve useful accuracies on a variety of tasks. In resource-constrained environments, speed and memory are important metrics to optimize as well as accuracies. 
While large and deep recurrent models are likely to be the most accurate whenever they can be afforded, feed-foward networks can provide better value in terms of runtime and memory, and should be considered a strong baseline. Acknowledgments We thank Kuzman Ganchev, Fernando Pereira, and the anonymous reviewers for their useful comments. Supplementary Material Quantization Details The values comprising a generic embedding matrix $\mathbf {E} \in \mathbb {R}^{V \times D}$ are ordinarily stored with 32-bit floating-point precision in our implementation. For quantization, we first calculate a scale factor $s_i$ for each embedding vector $\mathbf {e}_i$ as $s_i=\frac{1}{b-1}\max _j \left|e_{ij}\right|.$ Each weight $e_{ij}$ is then quantized into an 8-bit integer as $q_{ij}=\lfloor \frac{1}{2} + \frac{e_{ij}}{s_i} + b\rfloor ,$ where the bias $b=128$ . Hence, the number of bits required to store the embedding matrix is reduced by a factor of 4, in exchange for storing the $V$ additional scale values. At inference time, the embeddings are dequantized on-the-fly. FLOPs Calculation The product of $\mathbf {A} \in \mathbb {R}^{P \times Q}$ and $\mathbf {b} \in \mathbb {R}^Q$ involves $P(2Q-1)$ FLOPs, and our single ReLu hidden layer requires performing this operation once per timestep ( $P$ = $M$ , $Q$ = $H_0$ ). Here, $H_0$ denotes the size of the embedding vector $\mathbf {h}_0$ , which equals 408, 464 and 260 for our respective POS models as ordered in Table 2 . In contrast, each LSTM layer requires eight products per timestep, and the BTS model has four layers ( $P$ = $Q$ =320). The particular sequence-to-sequence representation scheme of gillick2016brnn requires at least four timesteps to produce a meaningful output: the individual input byte(s), and a start, length and label of the predicted span. A single timestep is therefore a relaxed lower bound on the number of FLOPs needed for BTS inference. Word Clusters The word clusters we use are for the 250k most frequent words from a large unannotated corpus that was clustered into 256 classes using the distributed Exchange algorithm BIBREF36 and the procedure described in Appendix A of tackstrom2012clusters. The space required to store them in a Bloom map is calculated using the formula derived by talbot2008bloom: each entry requires $1.23*(\log \frac{1}{\epsilon } + H)$ bits, where $H$ is the entropy of the distribution on the set of values, and $\epsilon =2^{-E}$ , with $E$ the number of error bits employed. We use 0 error bits and assume a uniform distribution for the 256 values, i.e. $H=8$ , hence we need 9.84 bits per entry, or 300KB for the 250k entries. Lang-ID Details In our language identification evaluation, the 1,2,3,4-gram embedding vectors each have 6 or 16 dimensions, depending on the experimental setting. Their hashed vocabulary sizes ( $V_g$ ) are 100, 1000, 5000, and 5000, respectively. The hidden layer size is fixed at $M$ =208. We preprocess data by removing non-alphabetic characters and pieces of markup text (i.e., anything located between $<$ and $>$ , including the brackets). At test time, if this results in an empty string, we skip the markup removal, and if that still results in an empty string, we process the original string. This procedure is an artefact of the Wikipedia dataset, where some documents contain only punctuation or trivial HTML code, yet we must make predictions for them to render the results directly comparable to the literature. 
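The quantization equations and the FLOPs count given above translate directly into a short NumPy sketch (toy embedding matrix; rows are assumed non-zero so every per-row scale is well defined):

import numpy as np

def quantize(E, b=128):
    s = np.abs(E).max(axis=1) / (b - 1)             # per-row scale s_i
    q = np.floor(0.5 + E / s[:, None] + b)          # values fall in [1, 255]
    return q.astype(np.uint8), s

def dequantize(q, s, b=128):
    return (q.astype(np.float32) - b) * s[:, None]  # done on-the-fly at inference time

def matvec_flops(P, Q):
    return P * (2 * Q - 1)                          # one (P x Q) matrix-vector product

E = np.random.randn(1000, 16).astype(np.float32)    # toy embedding matrix
q, s = quantize(E)
print(np.abs(E - dequantize(q, s)).max())           # reconstruction error is at most s_i / 2
print(matvec_flops(320, 408))                       # hidden layer of the first POS model (M=320, H_0=408)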
POS Details The Small FF model in the comparison to BTS uses 2,3,4-grams and some byte unigrams (see feature templates in Table vii ). The n-grams have embedding sizes of 16 and the byte unigrams get 4 dimensions. In our $\frac{1}{2}$ -dimension setting, the aforementioned dimensions are halved to 8 and 2. Cluster features get embedding vectors of size 8. The hashed feature vocabularies for n-grams are 500, 200, and 4000, respectively. The hidden layer size is fixed at $M$ =320. Segmentation Details Feature templates used in segmentation experiments are listed in Table viii . Besides, we define length feature to be the number of characters between top of $\sigma $ and the front of $\beta $ , this maximum feature value is clipped to 100. The length feature is used in all segmentation models, and the embedding dimension is set to 6. We set the cutoff for both character and character-bigrams to 2 in order to learn unknown character/bigram embeddings. The hidden layer size is fixed at $M$ =256. Preordering Details The feature templates for the preorderer look at the top four spans on the stack and the first four spans in the buffer; for each span, the feature templates look at up to the first two words and last two words within the span. The “vanilla” variant of the preorderer includes character n-grams, word bytes, and whether the span has ever participated in a swap transition. The POS features are the predicted tags for the words in these positions. Table ix shows the full feature templates for the preorderer.
language identification, part-of-speech tagging, word segmentation, and preordering for statistical machine translation
6147846520a3dc05b230241f2ad6d411d614e24c
6147846520a3dc05b230241f2ad6d411d614e24c_0
Q: What are three challenging tasks authors evaluated their sequentially aligned representations? Text: Introduction As time passes, language usage changes. For example, the names `Bert' and `Elmo' would only rarely make an appearance prior to 2018 in the context of scientific writing. After the publication of BERT BIBREF0 and ELMo BIBREF1, however, usage has increased in frequency. In the context of named entities on Twitter, it is also likely that these names would be tagged as person prior to 2018, and are now more likely to refer to an artefact. As such, their part-of-speech tags will also differ. Evidently, evolution of language usage affects multiple natural language processing (NLP) tasks and models based on data from one point in time cannot be expected to operate for an extended period of time. In order to become more robust to language evolution, data should be collected at multiple points in time. We consider a dynamic learning paradigm where one makes predictions for data points from the current time-step given labelled data points from previous time-steps. As time increments, data points from the current step are labelled and new unlabelled data points are observed. This setting occurs in natural language processing in, for instance, the prediction of paper acceptance to conferences BIBREF2 or named entity recognition from yearly data dumps of Twitter BIBREF3. Changes in language usage cause a data drift between time-steps and some way of controlling for the shift between time-steps is necessary. In this paper, we apply a domain adaptation technique to correct for shifts. Domain adaptation is a furtive area of research within machine learning that deals with learning from training data drawn from one data-generating distribution (source domain) and generalizing to test data drawn from another, different data-generating distribution (target domain) BIBREF4. We are interested in whether a sequence of adaptations can compensate for the data drift caused by shifts in the meaning of words or features across time. Given that linguistic tokens are embedded in some vector space using neural language models, we observe that in time-varying dynamic tasks, the drift causes token embeddings to occupy different parts of embedding space over consecutive time-steps. We want to avoid the computational expense of re-training a neural network every time-step. Instead, in each time-step, we map linguistic tokens using the same pre-trained language model (a "BERT" network BIBREF0) and align the resulting embeddings using a second procedure called subspace alignment BIBREF5. We apply subspace alignment sequentially: find the principal components in each time-step and linearly transform the components from the previous step to match the current step. A classifier trained on the aligned embeddings from the previous step will be more suited to classify embeddings in the current step. We show that sequential subspace alignment (SSA) yields substantial improvements in three challenging tasks: paper acceptance prediction on the PeerRead data set BIBREF2; Named Entity Recognition on the Broad Twitter Corpus BIBREF3; and rumour stance detection on the RumourEval 2019 data set BIBREF6. These tasks are chosen to vary in terms of domains, timescales, and the granularity of the linguistic units.In addition to evaluating SSA, we include two technical contributions as we extend the method both to allow for time series of unbounded length and to consider instance similarities between classes. 
The best-performing SSA methods proposed here are semi-supervised, but require only between 2 and 10 annotated data points per class from the test year for successful alignment. Crucially, the best proposed SSA models outperform baselines utilising more data, including the whole data set. Subspace Alignment Suppose we embed words from a named entity recognition task, where artefacts should be distinguished from persons. Figure FIGREF1 shows scatterplots with data collected at two different time-points, say 2017 (top; source domain) and 2018 (bottom; target domain). Red points are examples of artefacts embedded in this space and blue points are examples of persons. We wish to classify the unknown points (black) from 2018 using the known points from 2018 and the known points from 2017. As can be seen, the data from 2017 is not particularly relevant to classification of data from 2018, because the red and blue point clouds do not match. In other words, a classifier trained to discriminate red from blue in 2017 would make many mistakes when applied directly to the data from 2018, partly because words such as 'Bert' and 'Elmo' have changed from being person to also being artefacts. To make the source data from 2017 relevant – and reap the benefits of having more data – we wish to align source and target data points. Subspace Alignment ::: Unsupervised subspace alignment Unsupervised alignment extracts a set of bases from each data set and transforms the source components such that they match the target components BIBREF5. Let $C_{\cal S}$ be the principal components of the source data $X_{t-1}$ and $C_{\cal T}$ be the components of the target data set $X_t$. The optimal linear transformation matrix is found by minimising the difference between the transformed source components and the target components: where $\Vert \cdot \Vert _{F}$ denotes the Frobenius norm. Note that we left-multiplied both terms in the norm with the same matrix $C_{\cal S}^{\top }$ and that due to orthonormality of the principal components, $C_{\cal S}^{\top } C_{\cal S}$ is the identity and drops out. Source data $X_{t-1}$ is aligned to target data by first mapping it onto its own principal components and then applying the transformation matrix, $X_{t-1} C_{\cal S} M^{*}$. Target data $X_t$ is also projected onto its target components, $X_t C_{\cal T}$. The alignment is performed on the $d$ largest principal components, i.e. a subspace of the embedding. Keeping $d$ small avoids the otherwise high computational expense of eigendecomposition in high-dimensional data. Unsupervised alignment will only match the total structure of both data sets. Therefore, global shifts between domains can be accounted for, but not local shifts. Figure FIGREF1 is an example of a setting with local shifts, i.e. red and blue classes are shifted differently. Performing unsupervised alignment on this setting would fail. Figure FIGREF2 (left middle) shows the source data (left) aligned to the target data (right) in an unsupervised fashion. Note that although the total data sets match, the classes (red and blue ellipses) are not matched. Subspace Alignment ::: Semi-supervised subspace alignment In semi-supervised alignment, one performs subspace alignment per class. As such, at least 1 target label per class needs to be available. However, even then, with only 1 target label per class, we would only be able to find 1 principal component. 
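Before developing the semi-supervised variant further, the unsupervised procedure above can be written out as a short NumPy sketch (toy shapes; features are mean-centred before projecting, and M = C_S^T C_T is the closed-form minimiser of the Frobenius objective):

import numpy as np

def components(X, d):
    Xc = X - X.mean(axis=0)                         # centre, then take the top-d principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                                 # shape (D, d)

def unsupervised_align(X_src, X_tgt, d=10):
    C_s, C_t = components(X_src, d), components(X_tgt, d)
    M = C_s.T @ C_t                                 # optimal linear transformation M*
    Z_src = (X_src - X_src.mean(axis=0)) @ C_s @ M  # aligned source
    Z_tgt = (X_tgt - X_tgt.mean(axis=0)) @ C_t      # projected target
    return Z_src, Z_tgt

X_prev, X_curr = np.random.randn(200, 768), np.random.randn(150, 768)
Z_prev, Z_curr = unsupervised_align(X_prev, X_curr) # train on (Z_prev, labels), test on Z_curr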
To allow for the estimation of more components, we provisionally label all target samples using a 1-nearest-neighbour classifier, starting from the given target labels. Using pseudo-labelled target samples, we estimate $d$ components. Now, the optimal linear transformation matrix for each class can be found with an equivalent procedure as in Equation DISPLAY_FORM4: Afterwards, we transform the source samples of each class $X_{t-1}^k$ through the projection onto class-specific components $C_{{\cal S},k}$ and the optimal transformation: $X_{t-1}^{k} C_{{\cal S}, k} M_{k}^{*}$. Additionally, we centre each transformed source class on the corresponding target class. Figure FIGREF2 (right middle) shows the source documents transformed through semi-supervised alignment. Now, the classes match the classes of the target data. Subspace Alignment ::: Extending SSA to Unbounded Time Semi-supervised alignment allows for aligning two time steps, $t_1$ and $t_2$, to a joint space $t^{\prime }_{1,2}$. However, when considering a further alignment to another time step $t_3$, this can not trivially be mapped, since the joint space $t^{\prime }_{1,2}$ necessarily has a lower dimensionality. Observing that two independently aligned spaces, $t^{\prime }_{1,2}$ and $t^{\prime }_{2,3}$, do have the same dimensionality, we further learn a new alignment between the two, resulting in the joint space of $t^{\prime }_{1,2}$ and $t^{\prime }_{2,3}$, namely $t^{\prime \prime }_{1,2,3}$. Although this is seemingly straight-forward, there is no guarantee that $t^{\prime }_{1,2}$ and $t^{\prime }_{2,3}$ will be coherent with one another, in the same way that two word embedding spaces trained with different algorithms might also differ in spite of having the same dimensionality. This issue is partially taken care of by using semi-supervised alignment which takes class labels into account when learning the 'deeper' alignment $t^{\prime \prime }$. We further find that it is beneficial to also take the similarities between samples into account when aligning. Subspace Alignment ::: Considering Sample Similarities between Classes Since intermediary spaces, such as $t^{\prime }_{1,2}$ and $t^{\prime }_{2,3}$, do not necessarily share the same semantic properties, we add a step to the semi-supervised alignment procedure. Given that the initial unaligned spaces do encode similarities between instances, we run the $k$-means clustering algorithm ($k=5$) to give us some course-grained indication of instance similarities in the original embedding space. This cluster ID is passed to SSA, resulting in an alignment which both attempts to match classes across time steps, in addition to instance similarities. Hence, even though $t^{\prime }_{1,2}$ and $t^{\prime }_{2,3}$ are not necessarily semantically coherent, an alignment to $t^{\prime \prime }_{1,2,3}$ is made possible. Experimental Setup In the past year, several approaches to pre-training representations on language modelling based on transformer architectures BIBREF7 have been proposed. These models essentially use a multi-head self-attention mechanism in order to learn representations which are able to attend directly to any part of a sequence. Recent work has shown that such contextualised representations pre-trained on language modelling tasks offer highly versatile representations which can be fine-tuned on seemingly any given task BIBREF1, BIBREF0, BIBREF8, BIBREF9. 
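The class-wise, semi-supervised variant then adds only the 1-NN pseudo-labelling, a per-class transformation M_k and the centring step. A compact sketch follows (assumptions: a single global target projection is used for the final classifier, and each class has enough pseudo-labelled target points to estimate d components):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def components(X, d):                               # as in the previous sketch
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:d].T

def semi_supervised_align(X_src, y_src, X_tgt, X_few, y_few, d=10):
    # Provisionally label every target sample by 1-NN from the few labelled target points.
    pseudo = KNeighborsClassifier(n_neighbors=1).fit(X_few, y_few).predict(X_tgt)
    C_t = components(X_tgt, d)
    Z_tgt = (X_tgt - X_tgt.mean(axis=0)) @ C_t
    Z_src = np.zeros((len(X_src), d))
    for k in np.unique(y_src):
        s, t = y_src == k, pseudo == k
        C_sk, C_tk = components(X_src[s], d), components(X_tgt[t], d)
        Z = (X_src[s] - X_src[s].mean(axis=0)) @ C_sk @ (C_sk.T @ C_tk)   # class-specific M_k
        Z += Z_tgt[t].mean(axis=0) - Z.mean(axis=0)                       # centre on the target class
        Z_src[s] = Z
    return Z_src, Z_tgt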
In line with the recommendations from experiments on fine-tuning representations BIBREF10, we use a frozen BERT to extract a consistent task-agnostic representation. Using a frozen BERT with subsequent subspace alignment allows us to avoid re-training a neural network each time-step while still working in an embedding learned by a neural language model. It also allows us to test the effectiveness of SSA without the confounding influence of representation updates. Experimental Setup ::: Three Tasks. We consider three tasks representing a broad selection of natural language understanding scenarios: paper acceptance prediction based on the PeerRead data set BIBREF2, Named Entity Recognition (NER) based on the Broad Twitter Corpus BIBREF3, and author stance prediction based on the RumEval-19 data set BIBREF6. These tasks were chosen so as to represent i) different textual domains, across ii) differing time scales, and iii) operating at varying levels of linguistic granularity. As we are dealing with dynamical learning, the vast majority of NLP data sets can unfortunately not be used since they do not include time stamps. Paper Acceptance Prediction The PeerRead data set contains papers from ten years of arXiv history, as well as papers and reviews from major AI and NLP conferences BIBREF2. From the perspective of evaluating our method, the arXiv sub-set of this data set offers the possibility of evaluating our method while adapting to ten years of history. This is furthermore the only subset of the data annotated with both timestamps and with a relatively balanced accept/reject annotation. As arXiv naturally contains both accepted and rejected papers, this acceptance status has been assigned based on BIBREF11 who match arXiv submissions to bibliographic entries in DBLP, and additionally defining acceptance as having been accepted to major conferences, and not to workshops. This results in a data set of nearly 12,000 papers, from which we use the raw abstract text as input to our system. The first three years were filtered out due to containing very few papers. We use the standard train/test splits supplied with the data set. BIBREF2 show that it is possible to predict paper acceptance status at major conferences at above baseline levels. Our intuition in applying SSA to this problem, is that the topic of a paper is likely to bias acceptance to certain conferences across time. For instance, it is plausible that the likelihood of a neural paper being accepted to an NLP conference before and after 2013 differs wildly. Hence, we expect that our model will, to some extent, represent the topic of an article, and that this will lend itself nicely to SSA. Paper Acceptance Prediction ::: Model We use the pre-trained bert-base-uncased model as the base for our paper acceptance prediction model. Following the approach of BIBREF0, we take the final hidden state (i.e., the output of the transformer) corresponding to the special [CLS] token of an input sequence to be our representation of a paper, as this has aggregated information through the sequence (Figure FIGREF14). This gives us a $d$-dimensional representation of each document, where $d=786$. In all of the experiments for this task, we train an SVM with an RBF kernel on these representations, either with or without SSA depending on the setting. Paper Acceptance Prediction ::: Experiments & Results We set up a series of experiments where we observe past data, and evaluate on present data. 
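A sketch of this document pipeline using the Hugging Face transformers and scikit-learn libraries (train_texts, train_labels and test_texts are hypothetical placeholders; the original tooling may differ):

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()     # frozen: no fine-tuning

def embed(texts):
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state[:, 0].numpy()                    # 768-dim final hidden state at [CLS]

clf = SVC(kernel="rbf")                                           # with or without SSA applied to the embeddings
clf.fit(embed(train_texts), train_labels)                         # train_texts / train_labels: hypothetical
preds = clf.predict(embed(test_texts))                            # test_texts: hypothetical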
We compare both unsupervised and semi-supervised subspace alignment, with several strong baselines. The baselines represent cases in which we have access to more data, and consist of training our model on either all data, on the same year as the evaluation year, and on the previous year. In our alignment settings, we only observe data from the previous year, and apply subspace alignment. This is a different task than presented by BIBREF2, as we evaluate paper acceptance for papers in the present. Hence, our scores are not directly comparable to theirs. One parameter which significantly influences performance, is the number of labelled data points we use for learning the semi-supervised subspace alignment. We tuned this hyperparameter on the development set, finding an increasing trend. Using as few as 2 tuning points per class yielded an increase in performance in some cases (Figure FIGREF16). Our results are shown in Table TABREF10, using 10 tuning samples per class. With unsupervised subspace alignment, we observe relatively unstable results – in one exceptional case, namely testing on 2010, unsupervised alignment is as helpful as semi-supervised alignment. Semi-supervised alignment, however, yields consistent improvements in performance across the board. It is especially promising that adapting from past data outperforms training on all available data, as well as training on the actual in-domain data. This highlights the importance of controlling for data drift due to language evolution. It shows that this signal can be taken advantage of to increase performance on present data with only a small amount of annotated data. We further find that using several past time steps in the Unbounded condition is generally helpful, as is using instance similarities in the alignment. Named Entity Recognition The Broad Twitter Corpus contains tweets annotated with named entities, collected between the years 2009 and 2014 BIBREF3. However, as only a handful of tweets are collected before 2012, we focus our analysis on the final three years of this period (i.e. two test years). The corpus includes diverse data, annotated in part via crowdsourcing and in part by experts. The inventory of tags in their tag scheme is relatively small, including Person, Location, and Organisation. To the best of our knowledge no one has evaluated on this corpus either in general or per year, and so we cannot compare with previous work. In the case of NER, we expect the adaptation step of our model to capture the fact that named entities may change their meaning across time (e.g. the example with ”Bert” and ”BERT” in Figure FIGREF1). This is related to work showing temporal drift of topics BIBREF12. Named Entity Recognition ::: Model Since casing is typically an important feature in NER, we use the pre-trained bert-base-cased model as our base for NER. For each token, we extract its contextualised representation from BERT, before applying SSA. As BIBREF0 achieve state-of-the-art results without conditioning the predicted tag sequence on surrounding tags (as would be the case with a CRF, for example), we also opt for this simpler architecture. The resulting contextualised representations are therefore passed to an MLP with a single hidden layer (200 hidden units, ReLU activation), before predicting NER tags. We train the MLP over 5 epochs using the Adam optimiser BIBREF13. 
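A sketch of the tagging head just described (frozen per-token BERT vectors fed to a single-hidden-layer MLP with 200 ReLU units, no CRF; the tag-set size and the toy batch are assumptions):

import torch
import torch.nn as nn

class TokenTagger(nn.Module):
    def __init__(self, dim=768, hidden=200, n_tags=7):            # n_tags: e.g. BIO over PER/LOC/ORG plus O
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, n_tags))

    def forward(self, token_vecs):                                # (n_tokens, 768) from frozen bert-base-cased
        return self.mlp(token_vecs)                               # independent per-token tag scores, no CRF

model = TokenTagger()
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()
batches = [(torch.randn(32, 768), torch.randint(0, 7, (32,)))]    # toy stand-in for the real data
for epoch in range(5):                                            # 5 epochs, as described above
    for vecs, tags in batches:
        opt.zero_grad()
        loss_fn(model(vecs), tags).backward()
        opt.step()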
Named Entity Recognition ::: Experiments & Results As with previous experiments, we compare unsupervised and semi-supervised subspace alignment with baselines corresponding to using all data, data from the same year as the evaluation year, and data from the previous year. For each year, we divide the data into 80/10/10 splits for training, development, and test. Results on the two test years 2013 and 2014 are shown in Table TABREF17. In the case of NER, we do not observe any positive results for unsupervised subspace alignment. In the case of semi-supervised alignment, however, we find increased performance as compared to training on the previous year, and compared to training on all data. This shows that learning an alignment from just a few data points can help the model to generalise from past data. However, unlike our previous experiments, results are somewhat better when given access to the entire set of training data from the test year itself in the case of NER. The fact that training on only 2013 and evaluating on the same year does not work well can be explained by the fact that the amount of data available for 2013 is only 10% of that for 2012. The identical results for the unbounded extension is because aligning from a single time step renders this irrelevant. SDQC Stance Classification The RumourEval-2019 data set consists of roughly 5500 tweets collected for 8 events surrounding well-known incidents, such as the Charlie Hebdo shooting in Paris BIBREF6. Since the shared task test set is not available, we split the training set into a training, dev and test part based on rumours (one rumour will be training data with a 90/10 split for development and another rumour will be the test data, with a few samples labelled). For Subtask A, tweets are annotated with stances, denoting whether it is in the category Support, Deny, Query, or Comment (SDQC). Each rumour only lasts a couple of days, but the total data set spans years, from August 2014 to November 2016. We regard each rumour as a time-step and adapt from the rumour at time $t$-1 to the rumour at time $t$. We note that this setting is more difficult than the previous two due to the irregular time intervals. We disregard the rumour ebola-essien as it has too few samples per class. SDQC Stance Classification ::: Model For this task, we use the same modelling approach as described for paper acceptance prediction. This method is also suitable here, since we simply require a condensed representation of a few sentences on which to base our temporal adaptation and predictions. In the last iteration of the task, the winning system used hand-crafted features to achieve a high performance BIBREF14. Including these would complicate SSA, so we opt for this simpler architecture instead. We use the shorter time-scale of approximately weeks rather than years as rumours can change rapidly BIBREF15. SDQC Stance Classification ::: Experiments & Results In this experiment, we start with the earliest rumour and adapt to the next rumour in time. As before, we run the following baselines: training on all available labelled data (i.e. all previous rumours and the labelled data for the current rumour), training on the labelled data from the current rumour (designated as `same') and training on the labelled data from the previous rumour. We perform both unsupervised and semi-supervised alignment using data from the previous rumour. We label 5 samples per class for each rumour. 
In this data set, there is a large class imbalance, with a large majority of comment tweets and few support or deny tweets. To address this, we over-sample the minority classes. Afterwards, a SVM with RBF is trained and we test on unlabelled tweets for the current rumour. Table TABREF23 shows the performance of the baselines and the two alignment procedures. As with the previous tasks, semi-supervised alignment generally helps, except for in the charliehebdo rumour. Analysis and Discussion We have shown that sequential subspace alignment is useful across natural language processing tasks. For the PeerRead data set we were particularly successful. This might be explained by the fact that the topic of a paper is a simple feature for SSA to pick up on, while being predictive of a paper's acceptance chances. For NER, on the other hand, named entities can change in less predictable ways across time, proving a larger challenge for our approach. For SDQC, we were successful in cases where the tweets are nicely clustered by class. For instance, where both rumours are about terrorist attacks, many of the support tweets were headlines from reputable newspaper agencies. These agencies structure tweets in a way that is consistently dissimilar from comments and queries. The effect of our unbounded time extension boosts results on the PeerRead data set, as the data stretches across a range of years. In the case of NER, however, this extension is excessive as only two time steps are available. In the case of SDQC, the lack of improvement could be due to the irregular time intervals, making it hard to learn consistent mappings from rumour to rumour. Adding instance similarity clustering aids alignment, since considering sample similarities across classes is important over longer time scales. Analysis and Discussion ::: Example of Aligning Tweets Finally, we set up the following simplified experiment to investigate the effect of alignment on SDQC data. First, we consider the rumour charliehebdo, where we picked the following tweet: px Support:France: 10 people dead after shooting at HQ of satirical weekly newspaper #CharlieHebdo, according to witnesses <URL> px It has been labeled to be in support of the veracity of the rumour. We will consider the scenario where we use this tweet and others involving the charliehebdo incident to predict author stance in the rumour germanwings-crash. Before alignment, the following 2 germanwings-crash tweets are among the nearest neighbours in the embedding space: px Query: @USER @USER if they had, it’s likely the descent rate would’ve been steeper and the speed not reduce, no ? px Comment: @USER Praying for the families and friends of those involved in crash. I'm so sorry for your loss. The second tweet is semantically similar (both are on the topic of tragedy), but the other is unrelated. Note that the news agency tweet differs from the comment and query tweets in that it stems from a reputable source, mentions details and includes a reference. After alignment, the charliehebdo tweet has the following 2 nearest neighbours: px Support: “@USER: 148 passengers were on board #GermanWings Airbus A320 which has crashed in the southern French Alps <URL>” px Support: Report: Co-Pilot Locked Out Of Cockpit Before Fatal Plane Crash <URL> #Germanwings <URL> Now, both neighbours are of the support class. This example shows that semi-supervised alignment maps source tweets from one class close to target tweets of the same class. 
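The neighbour inspection used in this example can be reproduced with a small helper (a sketch; the query and target vectors are the frozen BERT embeddings, before or after alignment, and the names in the usage comment are hypothetical):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbours(query_vec, target_vecs, target_texts, k=2):
    nn_index = NearestNeighbors(n_neighbors=k).fit(target_vecs)
    _, idx = nn_index.kneighbors(np.asarray(query_vec).reshape(1, -1))
    return [target_texts[i] for i in idx[0]]

# e.g. neighbours(source_tweet_vec, germanwings_vecs, germanwings_texts)   # hypothetical variable names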
Analysis and Discussion ::: Limitations A necessary assumption in both unsupervised and semi-supervised subspace alignment is that the data of each class is clustered in the embedding space. In other words, that most embedded tokens lie closer to other embedded tokens of the same class than to embedded tokens of another class. If this is not the case, then aligning to a few labelled samples of class $k$ does not mean that the embedded source tokens are aligned to any other target points of class $k$. This assumption is violated if, for instance, people only discuss one aspect of a rumour on day one and discuss several aspects of a rumour simultaneously on day two. One would observe a single cluster of token embeddings for supporters of the rumour initially and several clusters at a later time-step. There is no unique solution to aligning a single cluster to several clusters. Additionally, if the few samples labeled in the current time-step (for semi-supervised alignment) are falsely labeled or their label is ambiguous (e.g. a tweet that could equally be labeled as query or deny), then it is possible that the source data is aligned to the wrong target data point cloud. It is important that the few labeled tokens actually represent their classes. This is a common requirement in semi-supervised learning and is not specific to sequential alignment of text representations. Analysis and Discussion ::: Related Work The temporal nature of data can have a significant impact in natural language processing tasks. For instance, BIBREF16 compare a number of approaches to diachronic word embeddings, and detection of semantic shifts across time. For instance, such representations can be used to uncover changes of word meanings, or senses of new words altogether BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Other work has investigated changes in the usage of parts of speech across time BIBREF22. BIBREF23 investigate the changing meanings and associations of words across time, in the perspective of language change. By learning time-aware embeddings, they are able to outperform standard word representation learning algorithms, and can discover, e.g., equivalent technologies through time. BIBREF24 show that lexical features can change their polarity across time, which can have a significant impact in sentiment analysis. BIBREF12 show that associating topics with continuous distributions of timestamps yields substantial improvements in terms of topic prediction and interpretation of trends. Temporal effects in NLP have also been studied in the context of scientific journals, for instance in the context of emerging themes and viewpoints BIBREF25, BIBREF26, and in terms of topic modelling on news corpora across time BIBREF27. Finally, in the context of rumour stance classification, BIBREF28 show that temporal information as a feature in addition to textual content offers an improvement in results. While this previous work has highlighted the extent to which language change across time is relevant for NLP, we present a concrete approach to taking advantage of this change. Nonetheless, these results could inspire more specialized forms of sequential adaptation for specific tasks. Unsupervised subspace alignment has been used in computer vision to adapt between various types of representations of objects, such as high-definition photos, online retail images and illustrations BIBREF5, BIBREF29. Alignment is not restricted to linear transformations, but can be made non-linear through kernelisation BIBREF30. 
An extension to semi-supervised alignment has been done for images BIBREF31, but not in the context of classification of text embeddings or domain adaptation on a sequential basis. Conclusions In this paper, we introduced sequential subspace alignment (SSA) for natural language processing (NLP), which allows for improved generalisation from past to present data. Experimental evidence shows that this method is useful across diverse NLP tasks, in various temporal settings ranging from weeks to years, and for word-level and document-level representations. The best-performing SSA method, aligning sub-spaces in a semi-supervised way, outperforms simply training on all data with no alignment.
paper acceptance prediction, Named Entity Recognition (NER), author stance prediction
99cf494714c67723692ad1279132212db29295f3
99cf494714c67723692ad1279132212db29295f3_0
Q: What is the difference in findings of Buck et al? It looks like the same conclusion was mentioned in Buck et al.. Text: Introduction BIBREF0 propose a reinforcement learning framework for question answering, called active question answering (ActiveQA), that aims to improve answering by systematically perturbing input questions (cf. BIBREF1 ). Figure 1 depicts the generic agent-environment framework. The agent (AQA) interacts with the environment (E) in order to answer a question ( $q_0$ ). The environment includes a question answering system (Q&A), and emits observations and rewards. A state $s_t$ at time $t$ is the sequence of observations and previous actions generated starting from $q_0$ : $s_t=x_0,u_0,x_1,\ldots ,u_{t-1},x_t$ , where $x_i$ includes the question asked ( $q_{i}$ ), the corresponding answer returned by the QA system ( $a_i$ ), and possibly additional information such as features or auxiliary tasks. The agent includes an action scoring component (U), which produced and action $u_t$ by deciding whether to submit a new question to the environment or to return a final answer. Formally, $u_t\in \mathcal {Q}\cup \mathcal {A}$ , where $s_t$0 is the set of all possible questions, and $s_t$1 is the set of all possible answers. The agent relies on a question reformulation system (QR), that provides candidate follow up questions, and on an answer ranking system (AR), which scores the answers contained in $s_t$2 . Each answer returned is assigned a reward. The objective is to maximize the expected reward over a set of questions. BIBREF0 present a simplified version of this system with three core components: a question reformulator, an off-the-shelf black box QA system, and a candidate answer selection model. The question reformulator is trained with policy gradient BIBREF2 to optimize the F1 score of the answers returned by the QA system to the question reformulations in place of the original question. The reformulator is implemented as a sequence-to-sequence model of the kind used for machine translation BIBREF3 , BIBREF4 . When generating question reformulations, the action-space is equal to the size of the vocabulary, typically $16k$ sentence pieces. Due to this large number of actions we warm start the reformulation policy with a monolingual sequence-to-sequence model that performs generic paraphrasing. This model is trained using the zero-shot translation technique BIBREF5 on a large multilingual parallel corpus BIBREF6 , followed by regular supervised learning on a smaller monolingual corpus of questions BIBREF7 . The reformulation and selection models form a trainable agent that seeks the best answers from the QA system. The reformulator proposes $N$ versions $q_i$ of the input question $q_0$ and passes them to the environment, which provides $N$ corresponding answers, $a_i$ . The selection model scores each triple $(q_0,q_i,a_i)$ and returns the top-scoring candidate. Crucially, the agent may only query the environment with natural language questions. Thus, ActiveQA involves a machine-machine communication process inspired by the human-machine communication that takes place when users interact with digital services during information seeking tasks. For example, while searching for information on a search engine users tend to adopt a keyword-like `queryese' style of questioning. The AQA agent proves effective at reformulating questions on SearchQA BIBREF8 , a large dataset of complex questions from the Jeopardy! game. 
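A minimal sketch of the AQA inference loop described above (reformulate, environment and score are hypothetical stand-ins for the sequence-to-sequence reformulator, the black-box QA environment and the answer-selection model):

def active_qa(q0, reformulate, environment, score, n=20):
    rewrites = reformulate(q0, n)                              # N natural-language reformulations of q0
    candidates = [(qi, environment(qi)) for qi in rewrites]    # the agent may only ask natural-language questions
    _, best_answer = max(candidates, key=lambda c: score(q0, c[0], c[1]))
    return best_answer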
For this task BiDAF is chosen for the environment BIBREF9 , a deep network built for QA which has produced state-of-the-art results. Compared to a QA system that forms the environment using only the original questions, AQA outperforms this baseline by a wide margin, 11.4% absolute F1, thereby reducing the gap between machine (BiDAF) and human performance by 66%. Here we perform a qualitative analysis of this communication process to better understand what kind of language the agent has learned. We find that while optimizing its reformulations to adapt to the language of the QA system, AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations. These include term re-weighting (tf-idf), expansion and morphological simplification/stemming. We hypothesize that the explanation of this behaviour is that current machine comprehension tasks primarily require ranking of short textual snippets, thus incentivizing relevance more than deep language understanding. Analysis of the Agent's Language We analyze input questions and reformulations on the $12k$ example development partition of the SearchQA dataset. Our goal is to gain insights on how the agent's language evolves during training via policy gradient. It is important to note that in the SearchQA dataset the original Jeopardy! clues have been preprocessed by lower-casing and stop word removal. The resulting preprocessed clues that form the sources (inputs) for the sequence-to-sequence reformulation model resemble more keyword-based search queries than grammatical questions. For example, the clue Gandhi was deeply influenced by this count who wrote "War and Peace" is simplified to gandhi deeply influenced count wrote war peace. The Language of SearchQA Questions Figure 2 summarizes statistics of the questions and rewrites which may shed some light on how the language changes. The (preprocessed) SearchQA questions contain 9.6 words on average. They contain few repeated terms, computed as the mean term frequency (TF) per question. The average is 1.03, but for most of the queries TF is 1.0, i.e. no repetitions. We also compute the median document frequency (DF) per query, where a document is the context from which the answer is selected. DF gives a measure of how informative the question terms are. The Language of the Base NMT Model We first consider the top hypothesis generated by the pre-trained NMT reformulation system, before reinforcement learning (Base-NMT). This system is trained with full supervision, using a large multilingual and a small monolingual dataset. The Base-NMT rewrites differ greatly from their sources. They are shorter, 6.3 words on average, and have even fewer repeated terms (1.01). Interestingly, these reformulations are mostly syntactically well-formed questions. For example, the clue above becomes Who influenced count wrote war?. Base-NMT improves structural language quality by properly reinserting dropped function words and wh-phrases. We also verified the increased fluency by using a large language model and found that the Base-NMT rewrites are 50% more likely than the original questions. The bottom right hand plot in Figure 2 summarizes the language model distributions (LM WordLogP). The plot shows the average per-token language model negative log probabilities; a lower score indicates greater fluency. Although the distributions overlap to a great extent due to the large variance across questions, the differences in means are significant. 
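The term statistics reported in this section are straightforward to compute (a sketch assuming whitespace tokenisation, with each answer context treated as one document):

from collections import Counter
from statistics import mean, median

def question_stats(questions, contexts):
    df = Counter(t for ctx in contexts for t in set(ctx.split()))   # document frequency per term
    rows = []
    for q in questions:
        tf = Counter(q.split())
        rows.append((mean(tf.values()),                  # mean TF; 1.0 means no repeated terms
                     median(df[t] for t in tf)))         # median DF over the question's terms
    return rows

These are the quantities behind the TF, DF and fluency comparisons summarised in Figure 2.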
While more fluent, the Base-NMT rewrites involve rarer terms, as indicated by the decrease in DF. This is probably due to a domain mismatch between SearchQA and the NMT training corpus. The Language of the AQA Agent We next consider the top hypothesis generated by the AQA question reformulator (AQA-QR) after the policy gradient training. The AQA-QR rewrites are those whose corresponding answers are evaluated as AQA Top Hyp. in BIBREF0 . Note, these single rewrites alone outperform the original SearchQA queries by a small margin (+2% on test). We analyze the top hypothesis instead of the final output of the full AQA agent to avoid confounding effects from the answer selection step. These rewrites look different from both the Base-NMT and the SearchQA ones. For the example above AQA-QR's top hypothesis is What is name gandhi gandhi influence wrote peace peace?. Surprisingly, 99.8% start with the prefix What is name. The second most frequent is What country is (81 times), followed by What is is (70) and What state (14). This is puzzling as it happens only for 9 Base-NMT rewrites, and never in the original SearchQA questions. We speculate it might be related to the fact that virtually all answers involve names, of named entities (Micronesia) or generic concepts (pizza). AQA-QR's rewrites are visibly less fluent than both the SearchQA and the Base-MT counterparts. In terms of language model probability they are less likely than both SearchQA and Base-NMT. However, they have more repeated terms (1.2 average TF), are significantly longer (11.9) than the Base-NMT initialization and contain more informative context terms (lower DF) than SearchQA questions. Additionally, AQA-QR's reformulations contain morphological variants in 12.5% of cases. The number of questions that contain multiple tokens with the same stem doubles from SearchQA to AQA-QR. Singular forms are preferred over plurals. Morphological simplification is useful because it increases the chance that a word variant in the question matches the context. Conclusions: Rediscovering IR? Recently, BIBREF10 trained chatbots that negotiate via language utterances in order to complete a task. They report that the agent's language diverges from human language if there is no incentive for fluency in the reward function. Our findings seem related. The fact that the questions reformulated by AQA do not resemble natural language is not due to the keyword-like SearchQA input questions, because Base-NMT is capable of producing fluent questions from the same input. AQA learns to re-weight terms by focusing on informative (lower DF) terms while increasing term frequency (TF) via duplication. At the same time it learns to modify surface forms in ways akin to stemming and morphological analysis. Some of the techniques seem to adapt also to the specific properties of current deep QA architectures such as character-based modelling and attention. Sometimes AQA learns to generate semantically nonsensical, novel, surface term variants; e.g., it might transform the adjective dense to densey. The only justification for this is that such forms can be still exploited by the character-based BiDAF question encoder. Finally, repetitions can directly increase the chances of alignment in the attention components. We hypothesize that there is no incentive for the model to use human language due to the nature of the task. AQA learns to ask BiDAF questions by optimizing a language that increases the likelihood of BiDAF extracting the right answer. 
BIBREF11 argue that reading comprehension systems are not capable of significant language understanding and fail easily in adversarial settings. We suspect that current machine comprehension tasks involve mostly simple pattern matching and relevance modelling. As a consequence deep QA systems behave as sophisticated ranking systems trained to sort snippets of text from the context. As such, they resemble document retrieval systems which incentivizes the (re-)discovery of IR techniques that have been successful for decades BIBREF12 .
AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations
85e45b37408bb353c6068ba62c18e516d4f67fe9
85e45b37408bb353c6068ba62c18e516d4f67fe9_0
Q: What is the baseline? Text: Introduction It is natural to think of NLP tasks existing in a hierarchy, with each task building upon the previous tasks. For example, Part of Speech (POS) is known to be an extremely strong feature for Noun Phrase Chunking, and downstream tasks such as greedy Language Modeling (LM) can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions. Conversely, information about downstream tasks should also provide information that aids generalisation for junior downstream tasks, a form of semi-supervised learning. Arguably, there is a two-way relationship between each pair of tasks. Following work such as sogaard2016deep, that exploits such hierarchies in a fully supervised setting, we represent this hierarchical relationship within the structure of a multi-task Recurrent Neural Network (RNN), where junior tasks in the hierarchy are supervised on inner layers and the parameters are jointly optimised during training. Joint optimisation within a hierarchical network acts as a form of regularisation in two ways: first, it forces the network to learn general representations within the parameters of the shared hidden layers BIBREF0 ; second, there is a penalty on the supervised junior layers for forming a representation and making predictions that are inconsistent with senior tasks. Intuitively, we can see how this can be beneficial - when humans receive new information from one task that is inconsistent with with our internal representation of a junior task we update both representations to maintain a coherent view of the world. By incorporating an unsupervised auxiliary task (e.g. plank2016multilingual) as the most senior layer we can use this structure for semi-supervised learning - the error on the unsupervised tasks penalises junior tasks when their representations and predictions are not consistent. It is the aim of this paper to demonstrate that organising a network in such a way can improve performance. To that end, although we do not achieve state of the art results, we see a small but consistent performance improvement against a baseline. A diagram of our model can be seen in Figure 1 . Our Contributions: Linguistically Motivated Task Hierarchies When we speak and understand language we are arguably performing many different linguistic tasks at once. At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information, but this requires us to do lower-level tasks like understanding the syntactic and semantic roles of the words we choose in a specific context. This paper seeks to examine the POS tagging, Chunking and Language Modeling hierarchy and demonstrate that, by developing an algorithm that both exploits this structure and optimises all three jointly, we can improve performance. Motivating our Choice of Tasks In the original introductory paper to Noun Phrase Chunking, abney1991parsing, Chunking is motivated by describing a three-phase process - first, you read the words and assign a Part of Speech tag, you then use a ‘Chunker’ to group these words together into chunks depending on the context and the Parts of Speech, and finally you build a parse tree on top of the chunks. The parallels between this linguistic description of parsing and our architecture are clear; first, we build a prediction for POS, we then use this prediction to assist in parsing by Chunk, which we then use for greedy Language Modeling. 
In this hierarchy, we consider Language Modeling as auxiliary - designed to improve performance on POS and Chunking, and so therefore results are not presented for this task. Our Model In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep. Work such as mirowski-vlachos:2015:ACL-IJCNLP in which incorporating syntactic dependencies improves performance, demonstrates the benefits of incorporating junior tasks in prediction. Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks. A diagram of our network can be seen in Figure 1 . Supervision of Multiple Tasks Our model has 3 sources of error signals - one for each task. Since each task is categorical we use the discrete cross entropy to calculate the loss for each task: $ H(p, q) = - \sum _{i}^{n_{labels}} p(label_i) \ log \ q(label_i) $ Where $n_{labels}$ is the number of labels in the task, $q(label_i)$ is the probability of label $i$ under the predicted distribution, and $p(label_i)$ is the probability of label $i$ in the true distribution (in this case, a one-hot vector). During training with fully supervised data (POS, Chunk and Language Modeling), we optimise the mean cross entropy: $ Loss(x,y) = \frac{1}{n} \sum _{i}^{n} H(y, f_{task_i}(x)) $ Where $f_{task_i}(x)$ is the predicted distribution on task number $i$ from our model. When labels are missing, we drop the associated cross entropy terms from the loss, and omit the cross entropy calculation from the forward pass. Bi-Directional RNNs Our network is a Bi-Directional Recurrent Neural Network (Bi-RNN) (schuster1997bidirectional) with Gated Recurrent Units (GRUs) (cho2014properties, chung2014empirical). In a Bi-Directional RNN we run left-to-right through the sentence, and then we run right-to-left. This gives us two hidden states at time step t - one from the left-to-right pass, and one from the right-to-left pass. These are then combined to provide a probability distribution for the tag token conditioned on all of the other words in the sentence. Implementation Details During training we alternate batches of data with POS and Chunk and Language Model labels with batches of just Language Modeling according to some probability $ 0 < \gamma < 1$ . We train our model using the ADAM (kingma2014adam) optimiser for 100 epochs, where one epoch corresponds to one pass through the labelled data. We train in batch sizes of $32\times 32$ . Data Sets We present our experiments on two data sets - CoNLL 2000 Chunking data set (tjong2000introduction) which is derived from the Penn Tree Bank newspaper text (marcus1993building), and the Genia biomedical corpus (kim2003genia), derived from biomedical article abstracts. These two data sets were chosen since they perform differently under the same classifiers BIBREF1 . The unlabelled data for semi-supervised learning for newspaper text is the Penn Tree Bank, and for biomedical text it a custom data set of Pubmed abstracts. 
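A sketch of the joint objective with missing labels (PyTorch; task names and label-set sizes are illustrative): cross-entropy terms are simply dropped for tasks that are unlabelled in a batch, matching the description above.

import torch
import torch.nn.functional as F

def multitask_loss(logits, labels):
    # `logits`: task -> (batch, n_labels) scores; `labels` only has entries for the annotated tasks.
    losses = [F.cross_entropy(logits[t], labels[t]) for t in logits if t in labels]
    return sum(losses) / len(losses)

pos, chk, lm = torch.randn(4, 45), torch.randn(4, 23), torch.randn(4, 10000)
full = multitask_loss({"pos": pos, "chunk": chk, "lm": lm},
                      {"pos": torch.randint(0, 45, (4,)),
                       "chunk": torch.randint(0, 23, (4,)),
                       "lm": torch.randint(0, 10000, (4,))})
semi = multitask_loss({"pos": pos, "chunk": chk, "lm": lm},     # unlabelled batch: LM term only
                      {"lm": torch.randint(0, 10000, (4,))})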
Baseline Results We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer. We also present results for our hierarchical model where there is no training on unlabelled data (but there is the LM) and confirm previous results that arranging tasks in a hierarchy improves performance. Results for both models can be seen for POS in Table 2 and for Chunk in Table 1 . Semi-Supervised Experiments Experiments showing the effects of our semi-supervised learning regime on models initialised both with and without pre-trained word embeddings can be seen in Tables 3 and 4 . In models without pre-trained word embeddings we see a significant improvement associated with the semi-supervised regime. However, we observe that for models with pre-trained word embeddings, the positive impact of semi-supervised learning is less significant. This is likely due to the fact some of the regularities learned using the language model are already contained within the embedding. In fact, the training schedule of SENNA is similar to that of neural language modelling (collobert2011natural). Two other points are worthy of mention in the experiments with 100 % of the training data. First, the impact of semi-supervised learning on biomedical data is significantly less than on newspaper data. This is likely due to the smaller overlap between vocabularies in the training set and vocabularies in the test set. Second, the benefits for POS are smaller than they are for Chunking - this is likely due to the POS weights being more heavily regularised by receiving gradients from both the Chunking and Language Modeling loss. Finally, we run experiments with only a fraction of the training data to see whether our semi-supervised approach makes our models more robust (Tables 3 and 4 ). Here, we find variable but consistent improvement in the performance of our tasks even at 1 % of the original training data. Label Embeddings Our model structure includes an embedding layer between each task. This layer allows us to learn low-dimensional vector representations of labels, and expose regularities in a way similar to e.g. mikolov2013distributed. We demonstrate this in Figure 2 where we present a T-SNE visualisation of our label embeddings for Chunking and observe clusters along the diagonal. Conclusions & Further Work In this paper we have demonstrated two things: a way to use hierarchical neural networks to conduct semi-supervised learning and the associated performance improvements, and a way to learn low-dimensional embeddings of labels. Future work would investigate how to address Catastrophic Forgetting BIBREF2 (the problem in Neural Networks of forgetting previous tasks when training on a new task), which leads to the requirement for the mix parameter $\gamma $ in our algorithm, and prevents such models such as ours from scaling to larger supervised task hierarchies where the training data may be various and disjoint.
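A visualisation like the one in Figure 2 can be reproduced from the learned label-embedding matrix with an off-the-shelf t-SNE projection. The snippet below is only a sketch: it assumes the Chunk label embedding is exposed as an `nn.Embedding` weight matrix (the layer here is a stand-in) and is not tied to the authors' plotting code.

```python
import torch.nn as nn
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# stand-in for the trained model's Chunk label embedding
chunk_label_emb = nn.Embedding(num_embeddings=23, embedding_dim=16)

vectors = chunk_label_emb.weight.detach().cpu().numpy()      # (n_labels, emb_dim)
coords = TSNE(n_components=2, perplexity=5, init="random").fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for i, (x, y) in enumerate(coords):
    plt.annotate(str(i), (x, y))                 # mark each point with its tag index
plt.title("t-SNE of learned Chunk label embeddings")
plt.show()
```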
A multi-task architecture inspired by yang2016multi, in which there are no explicit connections between tasks and the only shared parameters are in the hidden layer.
f4e1d2276d3fc781b686d2bb44eead73e06fbf3f
f4e1d2276d3fc781b686d2bb44eead73e06fbf3f_0
Q: What is the unsupervised task in the final layer? Text: Introduction It is natural to think of NLP tasks existing in a hierarchy, with each task building upon the previous tasks. For example, Part of Speech (POS) is known to be an extremely strong feature for Noun Phrase Chunking, and downstream tasks such as greedy Language Modeling (LM) can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions. Conversely, information about downstream tasks should also provide information that aids generalisation for junior downstream tasks, a form of semi-supervised learning. Arguably, there is a two-way relationship between each pair of tasks. Following work such as sogaard2016deep, that exploits such hierarchies in a fully supervised setting, we represent this hierarchical relationship within the structure of a multi-task Recurrent Neural Network (RNN), where junior tasks in the hierarchy are supervised on inner layers and the parameters are jointly optimised during training. Joint optimisation within a hierarchical network acts as a form of regularisation in two ways: first, it forces the network to learn general representations within the parameters of the shared hidden layers BIBREF0 ; second, there is a penalty on the supervised junior layers for forming a representation and making predictions that are inconsistent with senior tasks. Intuitively, we can see how this can be beneficial - when humans receive new information from one task that is inconsistent with with our internal representation of a junior task we update both representations to maintain a coherent view of the world. By incorporating an unsupervised auxiliary task (e.g. plank2016multilingual) as the most senior layer we can use this structure for semi-supervised learning - the error on the unsupervised tasks penalises junior tasks when their representations and predictions are not consistent. It is the aim of this paper to demonstrate that organising a network in such a way can improve performance. To that end, although we do not achieve state of the art results, we see a small but consistent performance improvement against a baseline. A diagram of our model can be seen in Figure 1 . Our Contributions: Linguistically Motivated Task Hierarchies When we speak and understand language we are arguably performing many different linguistic tasks at once. At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information, but this requires us to do lower-level tasks like understanding the syntactic and semantic roles of the words we choose in a specific context. This paper seeks to examine the POS tagging, Chunking and Language Modeling hierarchy and demonstrate that, by developing an algorithm that both exploits this structure and optimises all three jointly, we can improve performance. Motivating our Choice of Tasks In the original introductory paper to Noun Phrase Chunking, abney1991parsing, Chunking is motivated by describing a three-phase process - first, you read the words and assign a Part of Speech tag, you then use a ‘Chunker’ to group these words together into chunks depending on the context and the Parts of Speech, and finally you build a parse tree on top of the chunks. 
The parallels between this linguistic description of parsing and our architecture are clear; first, we build a prediction for POS, we then use this prediction to assist in parsing by Chunk, which we then use for greedy Language Modeling. In this hierarchy, we consider Language Modeling as auxiliary - designed to improve performance on POS and Chunking, and so therefore results are not presented for this task. Our Model In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep. Work such as mirowski-vlachos:2015:ACL-IJCNLP in which incorporating syntactic dependencies improves performance, demonstrates the benefits of incorporating junior tasks in prediction. Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks. A diagram of our network can be seen in Figure 1 . Supervision of Multiple Tasks Our model has 3 sources of error signals - one for each task. Since each task is categorical we use the discrete cross entropy to calculate the loss for each task: $ H(p, q) = - \sum _{i}^{n_{labels}} p(label_i) \ log \ q(label_i) $ Where $n_{labels}$ is the number of labels in the task, $q(label_i)$ is the probability of label $i$ under the predicted distribution, and $p(label_i)$ is the probability of label $i$ in the true distribution (in this case, a one-hot vector). During training with fully supervised data (POS, Chunk and Language Modeling), we optimise the mean cross entropy: $ Loss(x,y) = \frac{1}{n} \sum _{i}^{n} H(y, f_{task_i}(x)) $ Where $f_{task_i}(x)$ is the predicted distribution on task number $i$ from our model. When labels are missing, we drop the associated cross entropy terms from the loss, and omit the cross entropy calculation from the forward pass. Bi-Directional RNNs Our network is a Bi-Directional Recurrent Neural Network (Bi-RNN) (schuster1997bidirectional) with Gated Recurrent Units (GRUs) (cho2014properties, chung2014empirical). In a Bi-Directional RNN we run left-to-right through the sentence, and then we run right-to-left. This gives us two hidden states at time step t - one from the left-to-right pass, and one from the right-to-left pass. These are then combined to provide a probability distribution for the tag token conditioned on all of the other words in the sentence. Implementation Details During training we alternate batches of data with POS and Chunk and Language Model labels with batches of just Language Modeling according to some probability $ 0 < \gamma < 1$ . We train our model using the ADAM (kingma2014adam) optimiser for 100 epochs, where one epoch corresponds to one pass through the labelled data. We train in batch sizes of $32\times 32$ . Data Sets We present our experiments on two data sets - CoNLL 2000 Chunking data set (tjong2000introduction) which is derived from the Penn Tree Bank newspaper text (marcus1993building), and the Genia biomedical corpus (kim2003genia), derived from biomedical article abstracts. 
These two data sets were chosen since they perform differently under the same classifiers BIBREF1 . The unlabelled data for semi-supervised learning for newspaper text is the Penn Tree Bank, and for biomedical text it a custom data set of Pubmed abstracts. Baseline Results We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer. We also present results for our hierarchical model where there is no training on unlabelled data (but there is the LM) and confirm previous results that arranging tasks in a hierarchy improves performance. Results for both models can be seen for POS in Table 2 and for Chunk in Table 1 . Semi-Supervised Experiments Experiments showing the effects of our semi-supervised learning regime on models initialised both with and without pre-trained word embeddings can be seen in Tables 3 and 4 . In models without pre-trained word embeddings we see a significant improvement associated with the semi-supervised regime. However, we observe that for models with pre-trained word embeddings, the positive impact of semi-supervised learning is less significant. This is likely due to the fact some of the regularities learned using the language model are already contained within the embedding. In fact, the training schedule of SENNA is similar to that of neural language modelling (collobert2011natural). Two other points are worthy of mention in the experiments with 100 % of the training data. First, the impact of semi-supervised learning on biomedical data is significantly less than on newspaper data. This is likely due to the smaller overlap between vocabularies in the training set and vocabularies in the test set. Second, the benefits for POS are smaller than they are for Chunking - this is likely due to the POS weights being more heavily regularised by receiving gradients from both the Chunking and Language Modeling loss. Finally, we run experiments with only a fraction of the training data to see whether our semi-supervised approach makes our models more robust (Tables 3 and 4 ). Here, we find variable but consistent improvement in the performance of our tasks even at 1 % of the original training data. Label Embeddings Our model structure includes an embedding layer between each task. This layer allows us to learn low-dimensional vector representations of labels, and expose regularities in a way similar to e.g. mikolov2013distributed. We demonstrate this in Figure 2 where we present a T-SNE visualisation of our label embeddings for Chunking and observe clusters along the diagonal. Conclusions & Further Work In this paper we have demonstrated two things: a way to use hierarchical neural networks to conduct semi-supervised learning and the associated performance improvements, and a way to learn low-dimensional embeddings of labels. Future work would investigate how to address Catastrophic Forgetting BIBREF2 (the problem in Neural Networks of forgetting previous tasks when training on a new task), which leads to the requirement for the mix parameter $\gamma $ in our algorithm, and prevents such models such as ours from scaling to larger supervised task hierarchies where the training data may be various and disjoint.
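As a structural sketch of the network described in the Model section above — one shared Bi-GRU layer, per-task supervision on successive layers, low-dimensional label embeddings passed upward, and skip connections from the shared layer into the senior layers — the following PyTorch module may help. Layer sizes, the softmax-then-embed treatment of junior predictions, and the Language Modeling target convention are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    """POS -> Chunk -> LM supervised on successive layers of a Bi-GRU stack."""

    def __init__(self, vocab, n_pos, n_chunk, emb=100, hid=100, label_emb=16):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.shared = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.pos_head = nn.Linear(2 * hid, n_pos)
        # senior layers receive an embedding of the junior prediction plus a
        # skip connection from the shared hidden layer
        self.pos_label_emb = nn.Linear(n_pos, label_emb)
        self.chunk_rnn = nn.GRU(label_emb + 2 * hid, hid,
                                batch_first=True, bidirectional=True)
        self.chunk_head = nn.Linear(2 * hid, n_chunk)
        self.chunk_label_emb = nn.Linear(n_chunk, label_emb)
        self.lm_rnn = nn.GRU(label_emb + 2 * hid, hid,
                             batch_first=True, bidirectional=True)
        # LM head sketched as predicting one word label per position; the exact
        # LM target convention (e.g. next word) is an assumption
        self.lm_head = nn.Linear(2 * hid, vocab)

    def forward(self, words):                        # words: (batch, seq_len)
        h, _ = self.shared(self.word_emb(words))     # (batch, seq_len, 2*hid)
        pos_logits = self.pos_head(h)
        pos_in = torch.cat([self.pos_label_emb(pos_logits.softmax(-1)), h], -1)
        c, _ = self.chunk_rnn(pos_in)
        chunk_logits = self.chunk_head(c)
        chunk_in = torch.cat([self.chunk_label_emb(chunk_logits.softmax(-1)), h], -1)
        l, _ = self.lm_rnn(chunk_in)
        return pos_logits, chunk_logits, self.lm_head(l)
```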
Language Modeling
bf2ebc9bbd4cbdf8922c051f406effc97fd16e54
bf2ebc9bbd4cbdf8922c051f406effc97fd16e54_0
Q: How many supervised tasks are used? Text: Introduction It is natural to think of NLP tasks existing in a hierarchy, with each task building upon the previous tasks. For example, Part of Speech (POS) is known to be an extremely strong feature for Noun Phrase Chunking, and downstream tasks such as greedy Language Modeling (LM) can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions. Conversely, information about downstream tasks should also provide information that aids generalisation for junior downstream tasks, a form of semi-supervised learning. Arguably, there is a two-way relationship between each pair of tasks. Following work such as sogaard2016deep, that exploits such hierarchies in a fully supervised setting, we represent this hierarchical relationship within the structure of a multi-task Recurrent Neural Network (RNN), where junior tasks in the hierarchy are supervised on inner layers and the parameters are jointly optimised during training. Joint optimisation within a hierarchical network acts as a form of regularisation in two ways: first, it forces the network to learn general representations within the parameters of the shared hidden layers BIBREF0 ; second, there is a penalty on the supervised junior layers for forming a representation and making predictions that are inconsistent with senior tasks. Intuitively, we can see how this can be beneficial - when humans receive new information from one task that is inconsistent with with our internal representation of a junior task we update both representations to maintain a coherent view of the world. By incorporating an unsupervised auxiliary task (e.g. plank2016multilingual) as the most senior layer we can use this structure for semi-supervised learning - the error on the unsupervised tasks penalises junior tasks when their representations and predictions are not consistent. It is the aim of this paper to demonstrate that organising a network in such a way can improve performance. To that end, although we do not achieve state of the art results, we see a small but consistent performance improvement against a baseline. A diagram of our model can be seen in Figure 1 . Our Contributions: Linguistically Motivated Task Hierarchies When we speak and understand language we are arguably performing many different linguistic tasks at once. At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information, but this requires us to do lower-level tasks like understanding the syntactic and semantic roles of the words we choose in a specific context. This paper seeks to examine the POS tagging, Chunking and Language Modeling hierarchy and demonstrate that, by developing an algorithm that both exploits this structure and optimises all three jointly, we can improve performance. Motivating our Choice of Tasks In the original introductory paper to Noun Phrase Chunking, abney1991parsing, Chunking is motivated by describing a three-phase process - first, you read the words and assign a Part of Speech tag, you then use a ‘Chunker’ to group these words together into chunks depending on the context and the Parts of Speech, and finally you build a parse tree on top of the chunks. The parallels between this linguistic description of parsing and our architecture are clear; first, we build a prediction for POS, we then use this prediction to assist in parsing by Chunk, which we then use for greedy Language Modeling. 
In this hierarchy, we consider Language Modeling as auxiliary - designed to improve performance on POS and Chunking, and so therefore results are not presented for this task. Our Model In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep. Work such as mirowski-vlachos:2015:ACL-IJCNLP in which incorporating syntactic dependencies improves performance, demonstrates the benefits of incorporating junior tasks in prediction. Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks. A diagram of our network can be seen in Figure 1 . Supervision of Multiple Tasks Our model has 3 sources of error signals - one for each task. Since each task is categorical we use the discrete cross entropy to calculate the loss for each task: $ H(p, q) = - \sum _{i}^{n_{labels}} p(label_i) \ log \ q(label_i) $ Where $n_{labels}$ is the number of labels in the task, $q(label_i)$ is the probability of label $i$ under the predicted distribution, and $p(label_i)$ is the probability of label $i$ in the true distribution (in this case, a one-hot vector). During training with fully supervised data (POS, Chunk and Language Modeling), we optimise the mean cross entropy: $ Loss(x,y) = \frac{1}{n} \sum _{i}^{n} H(y, f_{task_i}(x)) $ Where $f_{task_i}(x)$ is the predicted distribution on task number $i$ from our model. When labels are missing, we drop the associated cross entropy terms from the loss, and omit the cross entropy calculation from the forward pass. Bi-Directional RNNs Our network is a Bi-Directional Recurrent Neural Network (Bi-RNN) (schuster1997bidirectional) with Gated Recurrent Units (GRUs) (cho2014properties, chung2014empirical). In a Bi-Directional RNN we run left-to-right through the sentence, and then we run right-to-left. This gives us two hidden states at time step t - one from the left-to-right pass, and one from the right-to-left pass. These are then combined to provide a probability distribution for the tag token conditioned on all of the other words in the sentence. Implementation Details During training we alternate batches of data with POS and Chunk and Language Model labels with batches of just Language Modeling according to some probability $ 0 < \gamma < 1$ . We train our model using the ADAM (kingma2014adam) optimiser for 100 epochs, where one epoch corresponds to one pass through the labelled data. We train in batch sizes of $32\times 32$ . Data Sets We present our experiments on two data sets - CoNLL 2000 Chunking data set (tjong2000introduction) which is derived from the Penn Tree Bank newspaper text (marcus1993building), and the Genia biomedical corpus (kim2003genia), derived from biomedical article abstracts. These two data sets were chosen since they perform differently under the same classifiers BIBREF1 . The unlabelled data for semi-supervised learning for newspaper text is the Penn Tree Bank, and for biomedical text it a custom data set of Pubmed abstracts. 
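The training schedule in Implementation Details — alternating fully labelled batches with Language-Model-only batches with probability gamma, optimised with ADAM — can be sketched as below. The batch formats, the loss handling, and the reading of gamma as a per-step probability of drawing an unlabelled batch are assumptions, not the released training script.

```python
import random
import torch
import torch.nn.functional as F

def train(model, labelled_batches, unlabelled_batches, gamma=0.5, epochs=100, lr=1e-3):
    """Alternate labelled batches with LM-only batches with probability gamma."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                    # one epoch = one pass over labelled data
        for words, pos, chunk, lm in labelled_batches:
            if random.random() < gamma:        # draw an unlabelled (LM-only) batch instead
                words, lm = random.choice(unlabelled_batches)
                pos = chunk = None
            pos_hat, chunk_hat, lm_hat = model(words)
            losses = [F.cross_entropy(y_hat.reshape(-1, y_hat.size(-1)), y.reshape(-1))
                      for y_hat, y in ((pos_hat, pos), (chunk_hat, chunk), (lm_hat, lm))
                      if y is not None]        # drop cross-entropy terms for missing labels
            loss = torch.stack(losses).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
```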
Baseline Results We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer. We also present results for our hierarchical model where there is no training on unlabelled data (but there is the LM) and confirm previous results that arranging tasks in a hierarchy improves performance. Results for both models can be seen for POS in Table 2 and for Chunk in Table 1 . Semi-Supervised Experiments Experiments showing the effects of our semi-supervised learning regime on models initialised both with and without pre-trained word embeddings can be seen in Tables 3 and 4 . In models without pre-trained word embeddings we see a significant improvement associated with the semi-supervised regime. However, we observe that for models with pre-trained word embeddings, the positive impact of semi-supervised learning is less significant. This is likely due to the fact some of the regularities learned using the language model are already contained within the embedding. In fact, the training schedule of SENNA is similar to that of neural language modelling (collobert2011natural). Two other points are worthy of mention in the experiments with 100 % of the training data. First, the impact of semi-supervised learning on biomedical data is significantly less than on newspaper data. This is likely due to the smaller overlap between vocabularies in the training set and vocabularies in the test set. Second, the benefits for POS are smaller than they are for Chunking - this is likely due to the POS weights being more heavily regularised by receiving gradients from both the Chunking and Language Modeling loss. Finally, we run experiments with only a fraction of the training data to see whether our semi-supervised approach makes our models more robust (Tables 3 and 4 ). Here, we find variable but consistent improvement in the performance of our tasks even at 1 % of the original training data. Label Embeddings Our model structure includes an embedding layer between each task. This layer allows us to learn low-dimensional vector representations of labels, and expose regularities in a way similar to e.g. mikolov2013distributed. We demonstrate this in Figure 2 where we present a T-SNE visualisation of our label embeddings for Chunking and observe clusters along the diagonal. Conclusions & Further Work In this paper we have demonstrated two things: a way to use hierarchical neural networks to conduct semi-supervised learning and the associated performance improvements, and a way to learn low-dimensional embeddings of labels. Future work would investigate how to address Catastrophic Forgetting BIBREF2 (the problem in Neural Networks of forgetting previous tasks when training on a new task), which leads to the requirement for the mix parameter $\gamma $ in our algorithm, and prevents such models such as ours from scaling to larger supervised task hierarchies where the training data may be various and disjoint.
Two (POS tagging and Chunking).
c13fe4064df0cfebd0538f29cb13e917fc5c3be0
c13fe4064df0cfebd0538f29cb13e917fc5c3be0_0
Q: What is the network architecture? Text: Introduction It is natural to think of NLP tasks existing in a hierarchy, with each task building upon the previous tasks. For example, Part of Speech (POS) is known to be an extremely strong feature for Noun Phrase Chunking, and downstream tasks such as greedy Language Modeling (LM) can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions. Conversely, information about downstream tasks should also provide information that aids generalisation for junior downstream tasks, a form of semi-supervised learning. Arguably, there is a two-way relationship between each pair of tasks. Following work such as sogaard2016deep, that exploits such hierarchies in a fully supervised setting, we represent this hierarchical relationship within the structure of a multi-task Recurrent Neural Network (RNN), where junior tasks in the hierarchy are supervised on inner layers and the parameters are jointly optimised during training. Joint optimisation within a hierarchical network acts as a form of regularisation in two ways: first, it forces the network to learn general representations within the parameters of the shared hidden layers BIBREF0 ; second, there is a penalty on the supervised junior layers for forming a representation and making predictions that are inconsistent with senior tasks. Intuitively, we can see how this can be beneficial - when humans receive new information from one task that is inconsistent with with our internal representation of a junior task we update both representations to maintain a coherent view of the world. By incorporating an unsupervised auxiliary task (e.g. plank2016multilingual) as the most senior layer we can use this structure for semi-supervised learning - the error on the unsupervised tasks penalises junior tasks when their representations and predictions are not consistent. It is the aim of this paper to demonstrate that organising a network in such a way can improve performance. To that end, although we do not achieve state of the art results, we see a small but consistent performance improvement against a baseline. A diagram of our model can be seen in Figure 1 . Our Contributions: Linguistically Motivated Task Hierarchies When we speak and understand language we are arguably performing many different linguistic tasks at once. At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information, but this requires us to do lower-level tasks like understanding the syntactic and semantic roles of the words we choose in a specific context. This paper seeks to examine the POS tagging, Chunking and Language Modeling hierarchy and demonstrate that, by developing an algorithm that both exploits this structure and optimises all three jointly, we can improve performance. Motivating our Choice of Tasks In the original introductory paper to Noun Phrase Chunking, abney1991parsing, Chunking is motivated by describing a three-phase process - first, you read the words and assign a Part of Speech tag, you then use a ‘Chunker’ to group these words together into chunks depending on the context and the Parts of Speech, and finally you build a parse tree on top of the chunks. The parallels between this linguistic description of parsing and our architecture are clear; first, we build a prediction for POS, we then use this prediction to assist in parsing by Chunk, which we then use for greedy Language Modeling. 
In this hierarchy, we consider Language Modeling as auxiliary - designed to improve performance on POS and Chunking, and so therefore results are not presented for this task. Our Model In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep. Work such as mirowski-vlachos:2015:ACL-IJCNLP in which incorporating syntactic dependencies improves performance, demonstrates the benefits of incorporating junior tasks in prediction. Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks. A diagram of our network can be seen in Figure 1 . Supervision of Multiple Tasks Our model has 3 sources of error signals - one for each task. Since each task is categorical we use the discrete cross entropy to calculate the loss for each task: $ H(p, q) = - \sum _{i}^{n_{labels}} p(label_i) \ log \ q(label_i) $ Where $n_{labels}$ is the number of labels in the task, $q(label_i)$ is the probability of label $i$ under the predicted distribution, and $p(label_i)$ is the probability of label $i$ in the true distribution (in this case, a one-hot vector). During training with fully supervised data (POS, Chunk and Language Modeling), we optimise the mean cross entropy: $ Loss(x,y) = \frac{1}{n} \sum _{i}^{n} H(y, f_{task_i}(x)) $ Where $f_{task_i}(x)$ is the predicted distribution on task number $i$ from our model. When labels are missing, we drop the associated cross entropy terms from the loss, and omit the cross entropy calculation from the forward pass. Bi-Directional RNNs Our network is a Bi-Directional Recurrent Neural Network (Bi-RNN) (schuster1997bidirectional) with Gated Recurrent Units (GRUs) (cho2014properties, chung2014empirical). In a Bi-Directional RNN we run left-to-right through the sentence, and then we run right-to-left. This gives us two hidden states at time step t - one from the left-to-right pass, and one from the right-to-left pass. These are then combined to provide a probability distribution for the tag token conditioned on all of the other words in the sentence. Implementation Details During training we alternate batches of data with POS and Chunk and Language Model labels with batches of just Language Modeling according to some probability $ 0 < \gamma < 1$ . We train our model using the ADAM (kingma2014adam) optimiser for 100 epochs, where one epoch corresponds to one pass through the labelled data. We train in batch sizes of $32\times 32$ . Data Sets We present our experiments on two data sets - CoNLL 2000 Chunking data set (tjong2000introduction) which is derived from the Penn Tree Bank newspaper text (marcus1993building), and the Genia biomedical corpus (kim2003genia), derived from biomedical article abstracts. These two data sets were chosen since they perform differently under the same classifiers BIBREF1 . The unlabelled data for semi-supervised learning for newspaper text is the Penn Tree Bank, and for biomedical text it a custom data set of Pubmed abstracts. 
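As a small worked check of the cross-entropy formula above: with a one-hot true distribution p, H(p, q) reduces to the negative log-probability of the gold label, which is exactly what the standard library cross entropy computes. The snippet is purely illustrative.

```python
import torch
import torch.nn.functional as F

def cross_entropy_onehot(q_logits, gold_index):
    """H(p, q) = -sum_i p(label_i) log q(label_i), with p one-hot at gold_index."""
    q = torch.log_softmax(q_logits, dim=-1)          # log q(label_i)
    p = torch.zeros_like(q)
    p[gold_index] = 1.0                              # one-hot true distribution
    return -(p * q).sum()

logits = torch.tensor([2.0, 0.5, -1.0])
h = cross_entropy_onehot(logits, gold_index=0)
# identical to the library form used in practice:
assert torch.allclose(h, F.cross_entropy(logits.unsqueeze(0), torch.tensor([0])))
```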
Baseline Results We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer. We also present results for our hierarchical model where there is no training on unlabelled data (but there is the LM) and confirm previous results that arranging tasks in a hierarchy improves performance. Results for both models can be seen for POS in Table 2 and for Chunk in Table 1 . Semi-Supervised Experiments Experiments showing the effects of our semi-supervised learning regime on models initialised both with and without pre-trained word embeddings can be seen in Tables 3 and 4 . In models without pre-trained word embeddings we see a significant improvement associated with the semi-supervised regime. However, we observe that for models with pre-trained word embeddings, the positive impact of semi-supervised learning is less significant. This is likely due to the fact some of the regularities learned using the language model are already contained within the embedding. In fact, the training schedule of SENNA is similar to that of neural language modelling (collobert2011natural). Two other points are worthy of mention in the experiments with 100 % of the training data. First, the impact of semi-supervised learning on biomedical data is significantly less than on newspaper data. This is likely due to the smaller overlap between vocabularies in the training set and vocabularies in the test set. Second, the benefits for POS are smaller than they are for Chunking - this is likely due to the POS weights being more heavily regularised by receiving gradients from both the Chunking and Language Modeling loss. Finally, we run experiments with only a fraction of the training data to see whether our semi-supervised approach makes our models more robust (Tables 3 and 4 ). Here, we find variable but consistent improvement in the performance of our tasks even at 1 % of the original training data. Label Embeddings Our model structure includes an embedding layer between each task. This layer allows us to learn low-dimensional vector representations of labels, and expose regularities in a way similar to e.g. mikolov2013distributed. We demonstrate this in Figure 2 where we present a T-SNE visualisation of our label embeddings for Chunking and observe clusters along the diagonal. Conclusions & Further Work In this paper we have demonstrated two things: a way to use hierarchical neural networks to conduct semi-supervised learning and the associated performance improvements, and a way to learn low-dimensional embeddings of labels. Future work would investigate how to address Catastrophic Forgetting BIBREF2 (the problem in Neural Networks of forgetting previous tasks when training on a new task), which leads to the requirement for the mix parameter $\gamma $ in our algorithm, and prevents such models such as ours from scaling to larger supervised task hierarchies where the training data may be various and disjoint.
A multi-task Bi-Directional Recurrent Neural Network with GRUs: one shared hidden layer, after which each successive task (POS, then Chunking, then the unsupervised Language Modeling task) is supervised on the next layer, with a low-dimensional label-embedding layer between tasks and skip connections from the hidden layer to the senior supervised layers.
6adde6bc3e27a32eac5daa57d30ab373f77690be
6adde6bc3e27a32eac5daa57d30ab373f77690be_0
Q: Is the proposed model more sensitive than previous context-aware models too? Text: Introduction Despite its rapid adoption by academia and industry and its recent success BIBREF0 , neural machine translation has been found largely incapable of exploiting additional context other than the current source sentence. This incapability stems from the fact that larger-context machine translation systems tend to ignore additional context, such as previous sentences and associated images. Much of recent efforts have gone into building a novel network architecture that can better exploit additional context however without much success BIBREF1 , BIBREF2 , BIBREF3 . In this paper, we approach the problem of larger-context neural machine translation from the perspective of “learning” instead. We propose to explicitly encourage the model to exploit additional context by assigning a higher log-probability to a translation paired with a correct context than to that paired with an incorrect one. We design this regularization term to be applied at token, sentence and batch levels to cope with the fact that the benefit from additional context may differ from one level to another. Our experiments on document-level translation using a modified transformer BIBREF4 reveal that the model trained using the proposed learning algorithm is indeed sensitive to the context, contrarily to some previous works BIBREF1 . We also see a small improvement in terms of overall quality (measured in BLEU). These two observations together suggest that the proposed approach is a promising direction toward building an effective larger-context neural translation model. Background: Larger-Context Neural Machine Translation A larger-context neural machine translation system extends upon the conventional neural machine translation system by incorporating the context $C$ , beyond a source sentence $X$ , when translating into a sentence $Y$ in the target language. In the case of multimodal machine translation, this additional context is an image which the source sentence $X$ describes. In the case of document-level machine translation, the additional context $C$ may include other sentences in a document in which the source sentence $X$ appears. Such a larger-context neural machine translation system consists of an encoder $f^C$ that encodes the additional context $C$ into a set of vector representations that are combined with those extracted from the source sentence $X$ by the original encoder $f^X$ . These vectors are then used by the decoder $X$0 to compute the conditional distribution over the target sequences $X$1 in the autoregressive paradigm, i.e., $X$2 where $\theta $ is a collection of all the parameters in the neural translation model. $f^X$ and $g$ are often implemented as neural networks, such as recurrent networks with attention BIBREF5 , convolutional networks BIBREF6 and self-attention BIBREF7 . Training is often done by maximizing the log-likelihood given a set of training triplets $\mathcal {D}^{(tr)}=\mathcal {X}^{(tr)}*\mathcal {Y}^{(tr)}*\mathcal {C}^{(tr)}=\left\lbrace (X_1, Y_1, C_1), \ldots , (X_N, Y_N, C_N)\right\rbrace $ . The log-likelihood is defined as $$\mathcal {L}(\theta ; \mathcal {D}) = \frac{1}{N}\sum _{n=1}^N \sum _{t=1}^{T_n} \log p(y_t^n|y_{<t}^n, X_N, C_N).$$ (Eq. 1) Once training is done, it is a standard practice to use beam search to find a translation that approximately maximizes $ \sum _{t=1}^T \log p(y_t|y_{<t}, X, C). 
$ Existing approaches to larger-context neural translation Existing approaches to larger-context neural machine translation have mostly focused on either modifying the input or the network architecture. BIBREF8 concatenate the previous source sentence to the current source sentence, which was followed by BIBREF9 who also concatenate the previous target sentence. BIBREF2 explore various concatenation strategies when the additional context is an image. Other groups have proposed various modifications to the existing neural translation systems BIBREF10 , BIBREF11 , BIBREF4 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 in the case of document-level translation, while using usual maximum likelihood learning. BIBREF16 on the other hand introduces a discriminator that forces the network to improve signal-to-noise ratio in the additional context. In parallel, there have been many proposals on novel network architectures for multi-modal translation BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In personalized translation, BIBREF21 bias the output distribution according to the context. All these previous efforts are clearly distinguished from our work in that our approach focuses entirely on a learning algorithm and is agnostic to the underlying network architecture. Learning to use the context In this paper, we focus on “learning” rather than a network architecture. Instead of coming up with a new architecture that facilitates larger-context translation, our goal is to come up with a learning algorithm that can be used with any underlying larger-context neural machine translation system. Neutral, useful and harmful context To do so, we first notice that by the law of total probability, $$p_\theta (y_t|y_{<t}, X) =& \sum _C p_{\theta } (y_t|y_{<t}, X, C) p(C|X) \nonumber \\ =& \mathbb {E}_{C\sim C|X} \left[ p_{\theta } (y_t|y_{<t}, X, C) \right]$$ (Eq. 3) As such, over the entire distribution of contexts $C$ given a source $X$ , the additional context is overall “neutral”. When the context $C$ is used, there are two cases. First, the context may be “useful”. In this case, the model can assign a better probability to a correct target token $y^*_t$ when the context was provided than when it was not: $p_{\theta }(y^*_t|y_{<t}, X,C) > p_{\theta }(y^*_t|y_{<t}, X)$ . On the other hand, the additional context can certainly be used harmfully: $p_{\theta }(y^*_t|y_{<t}, X,C) < p_{\theta }(y^*_t|y_{<t}, X)$ . Although these “neutral”, “useful” and “harmful” behaviours are defined at the token level, we can easily extend them to various levels by defining the following score functions: $ &\text{(token) }& s^{\text{tok}}(y_t|\cdot ) = \log p_{\theta }(y^*_t|\cdot ), \\ &\text{(sent.) }& s^{\text{sent}}(Y|\cdot ) = \sum _{t=1}^T \log p_{\theta }(y^*_t|y_{<t}, \cdot ), \\ &\text{(data) }& s^{\text{data}}(\mathcal {Y}|\cdot ) = \sum _{Y \in \mathcal {Y}} s^{\text{sent}}(Y|\cdot ). $ Context regularization With these scores defined at three different levels, we propose to regularize learning to encourage a neural translation system to prefer using the context in a useful way. 
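The token-, sentence- and data-level scores just defined are sums of gold-token log-probabilities at increasing granularity. A minimal sketch, assuming the decoder already provides per-position log-probabilities over the target vocabulary:

```python
import torch

def token_scores(log_probs, gold):
    """s_tok for every position: log p(y*_t | y_<t, ...).

    log_probs: (seq_len, vocab) log-probabilities from the decoder
    gold:      (seq_len,) indices of the reference tokens y*_t
    """
    return log_probs.gather(1, gold.unsqueeze(1)).squeeze(1)

def sentence_score(log_probs, gold):
    """s_sent = sum_t log p(y*_t | y_<t, ...)."""
    return token_scores(log_probs, gold).sum()

def data_score(batch):
    """s_data = sum over sentences of s_sent; batch is a list of (log_probs, gold)."""
    return sum(sentence_score(lp, g) for lp, g in batch)
```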
Our regularization term works at all three levels–tokens, sentences and the entire data– and is based on a margin ranking loss BIBREF22 : $$&\mathcal {R}(\theta ; \mathcal {D}) = \\ \nonumber & \alpha _d \left[ \left(\sum _{n=1}^N T_n\right)\delta _d - s^{\text{data}}(\mathcal {Y}|\mathcal {X},\mathcal {C}) + s^{\text{data}}(\mathcal {Y}|\mathcal {X})\right]_+ \\ \nonumber & + \alpha _s \sum _{n=1}^N \left[T_n \delta _s - s^{\text{sent}}(Y_n|X_n, C_n) \right. \\ \nonumber & \left. \qquad \qquad \qquad \qquad + s^{\text{sent}}(Y_n|X_n) \right]_+ \\ \nonumber & + \alpha _\tau \sum _{n=1}^N \sum _{t=1}^{T_n} \left[\delta _\tau - s^{\text{tok}}(y^n_t|y^n_{<t}, X_n, C_n) \right. \\ \nonumber & \left. \qquad \qquad \qquad \qquad + s^{\text{tok}}(y^n_t|y^n_{<t}, X_n) \right]_+,$$ (Eq. 5) where $\alpha _d$ , $\alpha _s$ and $\alpha _\tau $ are the regularization strengths at the data-, sentence- and token-level. $\delta _d$ , $\delta _s$ and $\delta _\tau $ are corresponding margin values. The proposed regularization term explicitly encourages the usefulness of the additional context at all the levels. We use the margin ranking loss to only lightly bias the model to use the context in a useful way but not necessarily force it to fully rely on the context, as it is expected that most of the necessary information is already contained in the source $X$ and that the additional context $C$ only provides a little complementary information. Estimating context-less scores It is not trivial to compute the score when the context was missing based on Eq. ( 3 ), as it requires (1) the access to $p(C|X)$ and (2) the intractable marginalization over all possible $C$ . In this paper, we explore the simplest strategy of approximating $p(C|X)$ with the data distribution of sentences $p_{\text{data}}(C)$ . We assume that the context $C$ is independently distributed from the source $X$ , i.e., $p(C|X)=p(C)$ and that the context $C$ follows the data distribution. This allows us to approximate the expectation by uniformly selecting $M$ training contexts at random: $ s(\cdot |\cdot ) = \log p(\cdot |\cdot ) \approx \log \frac{1}{M} \sum _{m=1}^M p(\cdot |\cdot , C_m), $ where $C^m$ is the $m$ -th sample. A better estimation of $p(C|X)$ is certainly possible. One such approach would be to use a larger-context recurrent language model by BIBREF23 . Another possible approach is to use an off-the-shelf retrieval engine to build a non-parametric sampler. We leave the investigation of these alternatives to the future. An intrinsic evaluation metric The conditions for “neutral”, “useful” and “harmful” context also serve as bases on which we can build an intrinsic evaluation metric of a larger-context neural machine translation system. We propose this metric by observing that, for a well-trained larger-context translation system, $ \Delta ^{\mathcal {D}}(\theta ) = s(\mathcal {Y}|\mathcal {X},\mathcal {C}; \theta ) - s(\mathcal {Y}|\mathcal {X}; \theta ) > 0, $ while it would be 0 for a larger-context model that completely ignores the additional context. We compute this metric over the validation set using the sample-based approximation scheme from above. Alternatively, we may compute the difference in BLEU ( $\Delta _{BLEU}^{\mathcal {D}}(\theta )$ ) over the validation or test data. These metrics are complementary to others that evaluate specific discourse phenomena on specially designed test sets BIBREF9 . Result and Analysis In Table 1 , we present the translation quality (in BLEU) of the four variants. 
We make a number of observations. First, the use of previous sentence (c) does not improve over the baseline (a–b) when the larger-context model was trained only to maximize the log-likelihood ( 1 ). We furthermore see that the translation quality of the larger-context model only marginally degrades even when the incorrectly paired previous sentence was given instead ( $\Delta _{BLEU}^{\mathcal {D}^{test}}(\theta ) = 0.40$ ), implying that this model largely ignores the previous sentence. Second, we observe that the larger-context model improves upon the baselines, trained either without any additional context (a) or with purely random context (b), when it was trained with the proposed regularization term (d). The evaluation metric $\Delta _{BLEU}^{\mathcal {D}^{test}}(\theta )$ is also significantly larger than 0, suggesting the effectiveness of the proposed regularization term in encouraging the model to focus on the additional context. In Fig. 1 , we contrast the translation qualities (measured in BLEU) between having the correctly paired (LC) and incorrectly paired (LC+Rand) previous sentences. The sentences in the validation set were sorted according to the difference $s^{\text{sent}}(Y|X,C) - s^{\text{sent}}(Y|X)$ , and we report the cumulative BLEU scores. The gap is large for those sentences that were deemed by the larger-context model to benefit from the additional context. This match between the score difference (which uses the reference translation) and the actual translation quality further confirms the validity of the proposed approach. Conclusion We proposed a novel regularization term for encouraging a larger-context machine translation model to focus more on the additional context using a multi-level pair-wise ranking loss. The proposed learning approach is generally applicable to any network architecture. Our empirical evaluation demonstrates that a larger-context translation model trained by the proposed approach indeed becomes more sensitive to the additional context and outperforms a context-less baseline. We believe this work is an encouraging first step toward developing a better context-aware learning algorithm for larger-context machine translation. We identify three future directions; (1) a better context distribution $p(C|X)$ , (2) efficient evaluation of the context-less scores, and (3) evaluation using other tasks, such as multi-modal translation. Acknowledgments SJ thanks NSERC. KC thanks support by AdeptMind, eBay, TenCent, NVIDIA and CIFAR. This work was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Electronics (Improving Deep Learning using Latent Structure). Larger-Context Transformer A shared 6-layer transformer encoder is used to independently encode an additional context $C$ and a source sentence $X$ . $ c &= \text{TransformerEnc}_6(C) \\ x &= \text{TransformerEnc}_6(X) $ Using $x$ as queries ( $q$ ), a multi-head attention mechanism attends to $c$ as key-values ( $k,v$ ). The input and output are merged through a gate. The final source representation is obtained through a feed-forward module (FF) used in typical transformer layers. $ \hat{x}_c &= \text{Attn}(q=x; k,v=c) \\ g &= \text{Linear}([x;\hat{x}_c]) \\ x_c &= \text{FF}(g\cdot \text{Dropout}(\hat{x}_c) + (1-g)\cdot x) $ We use a standard 6-layer transformer decoder which attends to $x_c$ . $ p_\theta (y_t | y_{<t}, X, C) = \text{TransformerDec}_6(y_{<t}, x_c), $
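The regularization term of Eq. (5) above is a sum of hinge (margin ranking) terms at the token, sentence and data levels. A minimal sketch follows; the with- and without-context token scores are assumed to be precomputed (in practice the context-less scores would come from the sample-based approximation described in the text), and the alpha/delta names mirror the equation rather than any released configuration.

```python
import torch

def context_regularizer(s_tok_ctx, s_tok_noctx, lengths,
                        alpha=(1.0, 1.0, 1.0), delta=(1.0, 1.0, 1.0)):
    """Multi-level margin ranking loss of Eq. (5).

    s_tok_ctx / s_tok_noctx: lists over sentences of (T_n,) token scores
                             computed with and without the context C_n.
    lengths: list of sentence lengths T_n.
    """
    a_d, a_s, a_t = alpha
    d_d, d_s, d_t = delta

    def hinge(x):                                   # [.]_+
        return torch.clamp(x, min=0.0)

    tok = sum(hinge(d_t - c + n).sum() for c, n in zip(s_tok_ctx, s_tok_noctx))
    sent = sum(hinge(T * d_s - c.sum() + n.sum())
               for c, n, T in zip(s_tok_ctx, s_tok_noctx, lengths))
    total_T = sum(lengths)
    data = hinge(total_T * d_d
                 - sum(c.sum() for c in s_tok_ctx)
                 + sum(n.sum() for n in s_tok_noctx))
    return a_d * data + a_s * sent + a_t * tok
```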
Unanswerable
90ad8d7ee27192b89ffcfa4a68302f370e6333a8
90ad8d7ee27192b89ffcfa4a68302f370e6333a8_0
Q: In what ways the larger context is ignored for the models that do consider larger context? Text: Introduction Despite its rapid adoption by academia and industry and its recent success BIBREF0 , neural machine translation has been found largely incapable of exploiting additional context other than the current source sentence. This incapability stems from the fact that larger-context machine translation systems tend to ignore additional context, such as previous sentences and associated images. Much of recent efforts have gone into building a novel network architecture that can better exploit additional context however without much success BIBREF1 , BIBREF2 , BIBREF3 . In this paper, we approach the problem of larger-context neural machine translation from the perspective of “learning” instead. We propose to explicitly encourage the model to exploit additional context by assigning a higher log-probability to a translation paired with a correct context than to that paired with an incorrect one. We design this regularization term to be applied at token, sentence and batch levels to cope with the fact that the benefit from additional context may differ from one level to another. Our experiments on document-level translation using a modified transformer BIBREF4 reveal that the model trained using the proposed learning algorithm is indeed sensitive to the context, contrarily to some previous works BIBREF1 . We also see a small improvement in terms of overall quality (measured in BLEU). These two observations together suggest that the proposed approach is a promising direction toward building an effective larger-context neural translation model. Background: Larger-Context Neural Machine Translation A larger-context neural machine translation system extends upon the conventional neural machine translation system by incorporating the context $C$ , beyond a source sentence $X$ , when translating into a sentence $Y$ in the target language. In the case of multimodal machine translation, this additional context is an image which the source sentence $X$ describes. In the case of document-level machine translation, the additional context $C$ may include other sentences in a document in which the source sentence $X$ appears. Such a larger-context neural machine translation system consists of an encoder $f^C$ that encodes the additional context $C$ into a set of vector representations that are combined with those extracted from the source sentence $X$ by the original encoder $f^X$ . These vectors are then used by the decoder $X$0 to compute the conditional distribution over the target sequences $X$1 in the autoregressive paradigm, i.e., $X$2 where $\theta $ is a collection of all the parameters in the neural translation model. $f^X$ and $g$ are often implemented as neural networks, such as recurrent networks with attention BIBREF5 , convolutional networks BIBREF6 and self-attention BIBREF7 . Training is often done by maximizing the log-likelihood given a set of training triplets $\mathcal {D}^{(tr)}=\mathcal {X}^{(tr)}*\mathcal {Y}^{(tr)}*\mathcal {C}^{(tr)}=\left\lbrace (X_1, Y_1, C_1), \ldots , (X_N, Y_N, C_N)\right\rbrace $ . The log-likelihood is defined as $$\mathcal {L}(\theta ; \mathcal {D}) = \frac{1}{N}\sum _{n=1}^N \sum _{t=1}^{T_n} \log p(y_t^n|y_{<t}^n, X_N, C_N).$$ (Eq. 1) Once training is done, it is a standard practice to use beam search to find a translation that approximately maximizes $ \sum _{t=1}^T \log p(y_t|y_{<t}, X, C). 
$ Existing approaches to larger-context neural translation Existing approaches to larger-context neural machine translation have mostly focused on either modifying the input or the network architecture. BIBREF8 concatenate the previous source sentence to the current source sentence, which was followed by BIBREF9 who also concatenate the previous target sentence. BIBREF2 explore various concatenation strategies when the additional context is an image. Other groups have proposed various modifications to the existing neural translation systems BIBREF10 , BIBREF11 , BIBREF4 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 in the case of document-level translation, while using usual maximum likelihood learning. BIBREF16 on the other hand introduces a discriminator that forces the network to improve signal-to-noise ratio in the additional context. In parallel, there have been many proposals on novel network architectures for multi-modal translation BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In personalized translation, BIBREF21 bias the output distribution according to the context. All these previous efforts are clearly distinguished from our work in that our approach focuses entirely on a learning algorithm and is agnostic to the underlying network architecture. Learning to use the context In this paper, we focus on “learning” rather than a network architecture. Instead of coming up with a new architecture that facilitates larger-context translation, our goal is to come up with a learning algorithm that can be used with any underlying larger-context neural machine translation system. Neutral, useful and harmful context To do so, we first notice that by the law of total probability, $$p_\theta (y_t|y_{<t}, X) =& \sum _C p_{\theta } (y_t|y_{<t}, X, C) p(C|X) \nonumber \\ =& \mathbb {E}_{C\sim C|X} \left[ p_{\theta } (y_t|y_{<t}, X, C) \right]$$ (Eq. 3) As such, over the entire distribution of contexts $C$ given a source $X$ , the additional context is overall “neutral”. When the context $C$ is used, there are two cases. First, the context may be “useful”. In this case, the model can assign a better probability to a correct target token $y^*_t$ when the context was provided than when it was not: $p_{\theta }(y^*_t|y_{<t}, X,C) > p_{\theta }(y^*_t|y_{<t}, X)$ . On the other hand, the additional context can certainly be used harmfully: $p_{\theta }(y^*_t|y_{<t}, X,C) < p_{\theta }(y^*_t|y_{<t}, X)$ . Although these “neutral”, “useful” and “harmful” behaviours are defined at the token level, we can easily extend them to various levels by defining the following score functions: $ &\text{(token) }& s^{\text{tok}}(y_t|\cdot ) = \log p_{\theta }(y^*_t|\cdot ), \\ &\text{(sent.) }& s^{\text{sent}}(Y|\cdot ) = \sum _{t=1}^T \log p_{\theta }(y^*_t|y_{<t}, \cdot ), \\ &\text{(data) }& s^{\text{data}}(\mathcal {Y}|\cdot ) = \sum _{Y \in \mathcal {Y}} s^{\text{sent}}(Y|\cdot ). $ Context regularization With these scores defined at three different levels, we propose to regularize learning to encourage a neural translation system to prefer using the context in a useful way. 
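The useful / neutral / harmful distinction above amounts to comparing the gold-token log-probabilities computed with and without the context; a small illustrative helper follows (the tolerance threshold is our own choice):

```python
def classify_context_effect(logp_with_ctx, logp_without_ctx, tol=1e-6):
    """Tag each target position as 'useful', 'harmful' or 'neutral'.

    logp_with_ctx / logp_without_ctx: (seq_len,) tensors of log p(y*_t | ...)
    computed with and without the additional context C.
    """
    diff = logp_with_ctx - logp_without_ctx
    tags = []
    for d in diff.tolist():
        if d > tol:
            tags.append("useful")      # context raised the gold-token probability
        elif d < -tol:
            tags.append("harmful")     # context lowered it
        else:
            tags.append("neutral")
    return tags
```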
Our regularization term works at all three levels–tokens, sentences and the entire data– and is based on a margin ranking loss BIBREF22 : $$&\mathcal {R}(\theta ; \mathcal {D}) = \\ \nonumber & \alpha _d \left[ \left(\sum _{n=1}^N T_n\right)\delta _d - s^{\text{data}}(\mathcal {Y}|\mathcal {X},\mathcal {C}) + s^{\text{data}}(\mathcal {Y}|\mathcal {X})\right]_+ \\ \nonumber & + \alpha _s \sum _{n=1}^N \left[T_n \delta _s - s^{\text{sent}}(Y_n|X_n, C_n) \right. \\ \nonumber & \left. \qquad \qquad \qquad \qquad + s^{\text{sent}}(Y_n|X_n) \right]_+ \\ \nonumber & + \alpha _\tau \sum _{n=1}^N \sum _{t=1}^{T_n} \left[\delta _\tau - s^{\text{tok}}(y^n_t|y^n_{<t}, X_n, C_n) \right. \\ \nonumber & \left. \qquad \qquad \qquad \qquad + s^{\text{tok}}(y^n_t|y^n_{<t}, X_n) \right]_+,$$ (Eq. 5) where $\alpha _d$ , $\alpha _s$ and $\alpha _\tau $ are the regularization strengths at the data-, sentence- and token-level. $\delta _d$ , $\delta _s$ and $\delta _\tau $ are corresponding margin values. The proposed regularization term explicitly encourages the usefulness of the additional context at all the levels. We use the margin ranking loss to only lightly bias the model to use the context in a useful way but not necessarily force it to fully rely on the context, as it is expected that most of the necessary information is already contained in the source $X$ and that the additional context $C$ only provides a little complementary information. Estimating context-less scores It is not trivial to compute the score when the context was missing based on Eq. ( 3 ), as it requires (1) the access to $p(C|X)$ and (2) the intractable marginalization over all possible $C$ . In this paper, we explore the simplest strategy of approximating $p(C|X)$ with the data distribution of sentences $p_{\text{data}}(C)$ . We assume that the context $C$ is independently distributed from the source $X$ , i.e., $p(C|X)=p(C)$ and that the context $C$ follows the data distribution. This allows us to approximate the expectation by uniformly selecting $M$ training contexts at random: $ s(\cdot |\cdot ) = \log p(\cdot |\cdot ) \approx \log \frac{1}{M} \sum _{m=1}^M p(\cdot |\cdot , C_m), $ where $C^m$ is the $m$ -th sample. A better estimation of $p(C|X)$ is certainly possible. One such approach would be to use a larger-context recurrent language model by BIBREF23 . Another possible approach is to use an off-the-shelf retrieval engine to build a non-parametric sampler. We leave the investigation of these alternatives to the future. An intrinsic evaluation metric The conditions for “neutral”, “useful” and “harmful” context also serve as bases on which we can build an intrinsic evaluation metric of a larger-context neural machine translation system. We propose this metric by observing that, for a well-trained larger-context translation system, $ \Delta ^{\mathcal {D}}(\theta ) = s(\mathcal {Y}|\mathcal {X},\mathcal {C}; \theta ) - s(\mathcal {Y}|\mathcal {X}; \theta ) > 0, $ while it would be 0 for a larger-context model that completely ignores the additional context. We compute this metric over the validation set using the sample-based approximation scheme from above. Alternatively, we may compute the difference in BLEU ( $\Delta _{BLEU}^{\mathcal {D}}(\theta )$ ) over the validation or test data. These metrics are complementary to others that evaluate specific discourse phenomena on specially designed test sets BIBREF9 . Result and Analysis In Table 1 , we present the translation quality (in BLEU) of the four variants. 
We make a number of observations. First, the use of previous sentence (c) does not improve over the baseline (a–b) when the larger-context model was trained only to maximize the log-likelihood ( 1 ). We furthermore see that the translation quality of the larger-context model only marginally degrades even when the incorrectly paired previous sentence was given instead ( $\Delta _{BLEU}^{\mathcal {D}^{test}}(\theta ) = 0.40$ ), implying that this model largely ignores the previous sentence. Second, we observe that the larger-context model improves upon the baselines, trained either without any additional context (a) or with purely random context (b), when it was trained with the proposed regularization term (d). The evaluation metric $\Delta _{BLEU}^{\mathcal {D}^{test}}(\theta )$ is also significantly larger than 0, suggesting the effectiveness of the proposed regularization term in encouraging the model to focus on the additional context. In Fig. 1 , we contrast the translation qualities (measured in BLEU) between having the correctly paired (LC) and incorrectly paired (LC+Rand) previous sentences. The sentences in the validation set were sorted according to the difference $s^{\text{sent}}(Y|X,C) - s^{\text{sent}}(Y|X)$ , and we report the cumulative BLEU scores. The gap is large for those sentences that were deemed by the larger-context model to benefit from the additional context. This match between the score difference (which uses the reference translation) and the actual translation quality further confirms the validity of the proposed approach. Conclusion We proposed a novel regularization term for encouraging a larger-context machine translation model to focus more on the additional context using a multi-level pair-wise ranking loss. The proposed learning approach is generally applicable to any network architecture. Our empirical evaluation demonstrates that a larger-context translation model trained by the proposed approach indeed becomes more sensitive to the additional context and outperforms a context-less baseline. We believe this work is an encouraging first step toward developing a better context-aware learning algorithm for larger-context machine translation. We identify three future directions; (1) a better context distribution $p(C|X)$ , (2) efficient evaluation of the context-less scores, and (3) evaluation using other tasks, such as multi-modal translation. Acknowledgments SJ thanks NSERC. KC thanks support by AdeptMind, eBay, TenCent, NVIDIA and CIFAR. This work was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Electronics (Improving Deep Learning using Latent Structure). Larger-Context Transformer A shared 6-layer transformer encoder is used to independently encode an additional context $C$ and a source sentence $X$ . $ c &= \text{TransformerEnc}_6(C) \\ x &= \text{TransformerEnc}_6(X) $ Using $x$ as queries ( $q$ ), a multi-head attention mechanism attends to $c$ as key-values ( $k,v$ ). The input and output are merged through a gate. The final source representation is obtained through a feed-forward module (FF) used in typical transformer layers. $ \hat{x}_c &= \text{Attn}(q=x; k,v=c) \\ g &= \text{Linear}([x;\hat{x}_c]) \\ x_c &= \text{FF}(g\cdot \text{Dropout}(\hat{x}_c) + (1-g)\cdot x) $ We use a standard 6-layer transformer decoder which attends to $x_c$ . $ p_\theta (y_t | y_{<t}, X, C) = \text{TransformerDec}_6(y_{<t}, x_c), $
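The appendix description above can be sketched compactly in PyTorch; the module below is an illustrative reconstruction under assumptions (model width, number of heads, and a sigmoid on the gate are not specified in the text), not the authors' released code, and it covers only the encoder-side merge that the standard decoder then attends to.

import torch
import torch.nn as nn

class GatedContextMerge(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=6, dropout=0.1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           dropout=dropout, batch_first=True)
        # shared 6-layer encoder applied independently to X and C
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)  # g = sigmoid(Linear([x; x_hat_c])), sigmoid assumed
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.dropout = nn.Dropout(dropout)

    def forward(self, src_emb, ctx_emb):
        x = self.encoder(src_emb)   # encode source sentence X
        c = self.encoder(ctx_emb)   # encode additional context C with the same encoder
        x_hat, _ = self.attn(query=x, key=c, value=c)  # source attends to context
        g = torch.sigmoid(self.gate(torch.cat([x, x_hat], dim=-1)))
        # gated mix of context-aware and plain source representations, then FF
        return self.ff(g * self.dropout(x_hat) + (1 - g) * x)

Usage would be x_c = GatedContextMerge()(src_emb, ctx_emb) on embedded source and context of shape (batch, length, d_model), with x_c then fed to the decoder.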
Unanswerable
ff814793387c8f3b61f09b88c73c00360a22a60e
ff814793387c8f3b61f09b88c73c00360a22a60e_0
Q: Does the latent dialogue state heklp their model? Text: Introduction Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phonecall. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels. Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information. This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs) which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability. Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied. This paper is organized as follows. Section "Model description" describes the model, and Section "Related work" compares the model to related work. Section "Supervised learning evaluation I" applies HCNs to the bAbI dialog dataset BIBREF0 . Section "Supervised learning evaluation II" then applies the method to real customer support domains at our company. Section "Reinforcement learning illustration" illustrates how HCNs can be optimized with reinforcement learning, and Section "Conclusion" concludes. Model description At a high level, the four components of a Hybrid Code Network are a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1 . The cycle begins when the user provides an utterance, as text (step 1). The utterance is featurized in several ways. First, a bag of words vector is formed (step 2). Second, an utterance embedding is formed, using a pre-built utterance embedding model (step 3). Third, an entity extraction module identifies entity mentions (step 4) – for example, identifying “Jennifer Jones” as a <name> entity. 
The text and entity mentions are then passed to “Entity tracking” code provided by the developer (step 5), which grounds and maintains entities – for example, mapping the text “Jennifer Jones” to a specific row in a database. This code can optionally return an “action mask”, indicating actions which are permitted at the current timestep, as a bit vector. For example, if a target phone number has not yet been identified, the API action to place a phone call may be masked. It can also optionally return “context features” which are features the developer thinks will be useful for distinguishing among actions, such as which entities are currently present and which are absent. The feature components from steps 1-5 are concatenated to form a feature vector (step 6). This vector is passed to an RNN, such as a long short-term memory (LSTM) BIBREF1 or gated recurrent unit (GRU) BIBREF2 . The RNN computes a hidden state (vector), which is retained for the next timestep (step 8), and passed to a dense layer with a softmax activation, with output dimension equal to the number of distinct system action templates (step 9). Thus the output of step 9 is a distribution over action templates. Next, the action mask is applied as an element-wise multiplication, and the result is normalized back to a probability distribution (step 10) – this forces non-permitted actions to take on probability zero. From the resulting distribution (step 11), an action is selected (step 12). When RL is active, exploration is required, so in this case an action is sampled from the distribution; when RL is not active, the best action should be chosen, and so the action with the highest probability is always selected. The selected action is next passed to “Entity output” developer code that can substitute in entities (step 13) and produce a fully-formed action – for example, mapping the template “<city>, right?” to “Seattle, right?”. In step 14, control branches depending on the type of the action: if it is an API action, the corresponding API call in the developer code is invoked (step 15) – for example, to render rich content to the user. APIs can act as sensors and return features relevant to the dialog, so these can be added to the feature vector in the next timestep (step 16). If the action is text, it is rendered to the user (step 17), and cycle then repeats. The action taken is provided as a feature to the RNN in the next timestep (step 18). Related work Broadly there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection policy, and language generation BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks BIBREF12 , trained with supervised learning followed by reinforcement learning BIBREF13 . In these works, the policy has not been recurrent – i.e., the policy depends on the state tracker to summarize observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. 
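Steps 6–12 of the model description above amount to a masked softmax over action templates on top of a recurrent layer; the following PyTorch sketch is an illustration under assumed dimensions and module choices, not the implementation used in the paper.

import torch
import torch.nn as nn

class HCNPolicy(nn.Module):
    def __init__(self, feat_dim, n_actions, hidden=128):
        super().__init__()
        self.cell = nn.LSTMCell(feat_dim, hidden)  # step 7: recurrent layer
        self.out = nn.Linear(hidden, n_actions)    # step 9: dense layer over action templates

    def step(self, bow, utt_emb, context_feats, prev_action, mask, state, explore):
        # step 6: concatenate the feature components into one vector
        x = torch.cat([bow, utt_emb, context_feats, prev_action], dim=-1)
        h, c = self.cell(x, state)                   # step 8: hidden state carried to next turn
        probs = torch.softmax(self.out(h), dim=-1)   # step 9: distribution over templates
        probs = probs * mask                         # step 10: zero out non-permitted actions
        probs = probs / probs.sum(dim=-1, keepdim=True)
        if explore:                                  # step 12: sample when RL is active,
            action = torch.multinomial(probs, 1)     #          otherwise take the argmax
        else:
            action = probs.argmax(dim=-1, keepdim=True)
        return action, probs, (h, c)

Here state is the (h, c) pair from the previous turn (or None at the start of a dialog), and mask is the bit vector returned by the developer's entity-tracking code.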
For learning efficiency, HCNs use an external light-weight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section "Supervised learning evaluation II" below, we demonstrate an HCN-based dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN. The second, more recent line of work applies recurrent neural networks (RNNs) to learn “end-to-end” models, which map from an observable dialog history directly to a sequence of output words BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . These systems can be applied to task-oriented domains by adding special “API call” actions, enumerating database output as a sequence of tokens BIBREF0 , then learning an RNN using Memory Networks BIBREF27 , gated memory networks BIBREF28 , query reduction networks BIBREF29 , and copy-augmented networks BIBREF30 . In each of these architectures, the RNN learns to manipulate entity values, for example by saving them in a memory. Output is produced by generating a sequence of tokens (or ranking all possible surface forms), which can also draw from this memory. HCNs also use an RNN to accumulate dialog state and choose actions. However, HCNs differ in that they use developer-provided action templates, which can contain entity references, such as “<city>, right?”. This design reduce learning complexity, and also enable the software to limit which actions are available via an action mask, at the expense of developer effort. To further reduce learning complexity in a practical system, entities are tracked separately, outside the the RNN, which also allows them to be substituted into action templates. Also, past end-to-end recurrent models have been trained using supervised learning, whereas we show how HCNs can also be trained with reinforcement learning. Supervised learning evaluation I In this section we compare HCNs to existing approaches on the public “bAbI dialog” dataset BIBREF0 . This dataset includes two end-to-end dialog learning tasks, in the restaurant domain, called task5 and task6. Task5 consists of synthetic, simulated dialog data, with highly regular user behavior and constrained vocabulary. Dialogs include a database access action which retrieves relevant restaurants from a database, with results included in the dialog transcript. We test on the “OOV” variant of Task5, which includes entity values not observed in the training set. Task6 draws on human-computer dialog data from the second dialog state tracking challenge (DSTC2), where usability subjects (crowd-workers) interacted with several variants of a spoken dialog system BIBREF31 . Since the database from DSTC2 was not provided, database calls have been inferred from the data and inserted into the dialog transcript. Example dialogs are provided in the Appendix Sections "bAbI Task5 example dialog" and "bAbI Task6 example dialog" . To apply HCNs, we wrote simple domain-specific software, as follows. First, for entity extraction (step 4 in Figure 1 ), we used a simple string match, with a pre-defined list of entity names – i.e., the list of restaurants available in the database. 
Second, in the context update (step 5), we wrote simple logic for tracking entities: when an entity is recognized in the user input, it is retained by the software, over-writing any previously stored value. For example, if the price “cheap” is recognized in the first turn, it is retained as price=cheap. If “expensive” is then recognized in the third turn, it over-writes “cheap” so the code now holds price=expensive. Third, system actions were templatized: for example, system actions of the form “prezzo is a nice restaurant in the west of town in the moderate price range” all map to the template “<name> is a nice restaurant in the <location> of town in the <price> price range”. This results in 16 templates for Task5 and 58 for Task6. Fourth, when database results are received into the entity state, they are sorted by rating. Finally, an action mask was created which encoded common-sense dependencies. These are implemented as simple if-then rules based on the presence of entity values: for example, only allow an API call if pre-conditions are met; only offer a restaurant if database results have already been received; do not ask for an entity if it is already known; etc. For Task6, we noticed that the system can say that no restaurants match the current query without consulting the database (for an example dialog, see Section "bAbI Task6 example dialog" in the Appendix). In a practical system this information would be retrieved from the database and not encoded in the RNN. So, we mined the training data and built a table of search queries known to yield no results. We also added context features that indicated the state of the database – for example, whether there were any restaurants matching the current query. The complete set of context features is given in Appendix Section "Task5 and Task6 context features" . Altogether this code consisted of about 250 lines of Python. We then trained an HCN on the training set, employing the domain-specific software described above. We selected an LSTM for the recurrent layer BIBREF1 , with the AdaDelta optimizer BIBREF32 . We used the development set to tune the number of hidden units (128), and the number of epochs (12). Utterance embeddings were formed by averaging word embeddings, using a publicly available 300-dimensional word embedding model trained using word2vec on web data BIBREF33 . The word embeddings were static and not updated during LSTM training. In training, each dialog formed one minibatch, and updates were done on full rollouts (i.e., non-truncated back propagation through time). The training loss was categorical cross-entropy. Further low-level implementation details are in the Appendix Section "Model implementation details" . We ran experiments with four variants of our model: with and without the utterance embeddings, and with and without the action mask (Figure 1 , steps 3 and 6 respectively). Following past work, we report average turn accuracy – i.e., for each turn in each dialog, present the (true) history of user and system actions to the network and obtain the network's prediction as a string of characters. The turn is correct if the string matches the reference exactly, and incorrect if not. We also report dialog accuracy, which indicates if all turns in a dialog are correct. We compare to four past end-to-end approaches BIBREF0 , BIBREF28 , BIBREF30 , BIBREF29 . We emphasize that past approaches have applied purely sequence-to-sequence models, or (as a baseline) purely programmed rules BIBREF0 . 
By contrast, Hybrid Code Networks are a hybrid of hand-coded rules and learned models. Results are shown in Table 1 . Since Task5 is synthetic data generated using rules, it is possible to obtain perfect accuracy using rules (line 1). The addition of domain knowledge greatly simplifies the learning task and enables HCNs to also attain perfect accuracy. On Task6, rules alone fare poorly, whereas HCNs outperform past learned models. We next examined learning curves, training with increasing numbers of dialogs. To guard against bias in the ordering of the training set, we averaged over 5 runs, randomly permuting the order of the training dialogs in each run. Results are in Figure 2 . In Task5, the action mask and utterance embeddings substantially reduce the number of training dialogs required (note the horizontal axis scale is logarithmic). For Task6, the benefits of the utterance embeddings are less clear. An error analysis showed that there are several systematic differences between the training and testing sets. Indeed, DSTC2 intentionally used different dialog policies for the training and test sets, whereas our goal is to mimic the policy in the training set. Nonetheless, these tasks are the best public benchmark we are aware of, and HCNs exceed performance of existing sequence-to-sequence models. In addition, they match performance of past models using an order of magnitude less data (200 vs. 1618 dialogs), which is crucial in practical settings where collecting realistic dialogs for a new domain can be expensive. Supervised learning evaluation II We now turn to comparing with purely hand-crafted approaches. To do this, we obtained logs from our company's text-based customer support dialog system, which uses a sophisticated rule-based dialog manager. Data from this system is attractive for evaluation because it is used by real customers – not usability subjects – and because its rule-based dialog manager was developed by customer support professionals at our company, and not the authors. This data is not publicly available, but we are unaware of suitable human-computer dialog data in the public domain which uses rules. Customers start using the dialog system by entering a brief description of their problem, such as “I need to update my operating system”. They are then routed to one of several hundred domains, where each domain attempts to resolve a particular problem. In this study, we collected human-computer transcripts for the high-traffic domains “reset password” and “cannot access account”. We labeled the dialog data as follows. First, we enumerated unique system actions observed in the data. Then, for each dialog, starting from the beginning, we examined each system action, and determined whether it was “correct”. Here, correct means that it was the most appropriate action among the set of existing system actions, given the history of that dialog. If multiple actions were arguably appropriate, we broke ties in favor of the existing rule-based dialog manager. Example dialogs are provided in the Appendix Sections "Forgot password example dialog" and "Account access example dialog" . If a system action was labeled as correct, we left it as-is and continued to the next system action. If the system action was not correct, we replaced it with the correct system action, and discarded the rest of the dialog, since we do not know how the user would have replied to this new system action. 
The resulting dataset contained a mixture of complete and partial dialogs, containing only correct system actions. We partitioned this set into training and test dialogs. Basic statistics of the data are shown in Table 2 . In this domain, no entities were relevant to the control flow, and there was no obvious mask logic since any question could follow any question. Therefore, we wrote no domain-specific software for this instance of the HCN, and relied purely on the recurrent neural network to drive the conversation. The architecture and training of the RNN was the same as in Section "Supervised learning evaluation I" , except that here we did not have enough data for a validation set, so we instead trained until we either achieved 100% accuracy on the training set or reached 200 epochs. To evaluate, we observe that conventional measures like average dialog accuracy unfairly penalize the system used to collect the dialogs – in our case, the rule-based system. If the system used for collection makes an error at turn $t$ , the labeled dialog only includes the sub-dialog up to turn $t$ , and the system being evaluated off-line is only evaluated on that sub-dialog. In other words, in our case, reporting dialog accuracy would favor the HCN because it would be evaluated on fewer turns than the rule-based system. We therefore use a comparative measure that examines which method produces longer continuous sequences of correct system actions, starting from the beginning of the dialog. Specifically, we report $\Delta P = \frac{C(\text{HCN-win}) - C(\text{rule-win})}{C(\text{all})}$ , where $C(\text{HCN-win})$ is the number of test dialogs where the rule-based approach output a wrong action before the HCN; $C(\text{rule-win})$ is the number of test dialogs where the HCN output a wrong action before the rule-based approach; and $C(\text{all})$ is the number of dialogs in the test set. When $\Delta P > 0$ , there are more dialogs in which HCNs produce longer continuous sequences of correct actions starting from the beginning of the dialog. We run all experiments 5 times, each time shuffling the order of the training set. Results are in Figure 3 . HCNs exceed performance of the existing rule-based system after about 30 dialogs. In these domains, we have a further source of knowledge: the rule-based dialog managers themselves can be used to generate example “sunny-day” dialogs, where the user provides purely expected inputs. From each rule-based controller, synthetic dialogs were sampled to cover each expected user response at least once, and added to the set of labeled real dialogs. This resulted in 75 dialogs for the “Forgot password” domain, and 325 for the “Can't access account” domain. Training was repeated as described above. Results are also included in Figure 3 , with the suffix “sampled”. In the “Can't access account” domain, the sampled dialogs yield a large improvement, probably because the flow chart for this domain is large, so the sampled dialogs increase coverage. The gain in the “forgot password” domain is present but smaller. In summary, HCNs can out-perform production-grade rule-based systems with a reasonable number of labeled dialogs, and adding synthetic “sunny-day” dialogs improves performance further. Moreover, unlike existing pipelined approaches to dialog management that rely on an explicit state tracker, this HCN used no explicit state tracker, highlighting an advantage of the model. 
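The comparative measure $\Delta P$ can be computed directly from the index of the first wrong action in each test dialog; a small sketch, assuming those indices (or None when a dialog contains no error) are already available for both policies:

def delta_p(first_error_hcn, first_error_rule):
    # first_error_*: one entry per test dialog; the turn index of the first
    # wrong action, or None if every action in the dialog was correct.
    hcn_wins = rule_wins = 0
    for e_hcn, e_rule in zip(first_error_hcn, first_error_rule):
        e_hcn = float('inf') if e_hcn is None else e_hcn
        e_rule = float('inf') if e_rule is None else e_rule
        if e_rule < e_hcn:      # rule-based system errs first -> counts for the HCN
            hcn_wins += 1
        elif e_hcn < e_rule:    # HCN errs first -> counts for the rule-based system
            rule_wins += 1
    return (hcn_wins - rule_wins) / len(first_error_hcn)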
Reinforcement learning illustration In the previous sections, supervised learning (SL) was applied to train the LSTM to mimic dialogs provided by the system developer. Once a system operates at scale, interacting with a large number of users, it is desirable for the system to continue to learn autonomously using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called a reward; the agent explores different sequences of actions in different situations, and makes adjustments so as to maximize the expected discounted sum of rewards, which is called the return, denoted $G$ . For optimization, we selected a policy gradient approach BIBREF34 , which has been successfully applied to dialog systems BIBREF35 , robotics BIBREF36 , and the board game Go BIBREF37 . In policy gradient-based RL, a model $\pi $ is parameterized by $\mathbf {w}$ and outputs a distribution from which actions are sampled at each timestep. At the end of a trajectory – in our case, dialog – the return $G$ for that trajectory is computed, and the gradients of the probabilities of the actions taken with respect to the model weights are computed. The weights are then adjusted by taking a gradient step proportional to the return: $$\mathbf {w} \leftarrow \mathbf {w} + \alpha ( \sum _t \triangledown _{\mathbf {w}} \log \pi (a_t|\mathbf {h_t};\mathbf {w}) ) ( G - b ) $$ (Eq. 14) where $\alpha $ is a learning rate; $a_t$ is the action taken at timestep $t$ ; $\mathbf {h_t}$ is the dialog history at time $t$ ; $G$ is the return of the dialog; $\triangledown _{\mathbf {x}} F$ denotes the Jacobian of $F$ with respect to $\mathbf {x}$ ; $b$ is a baseline described below; and $\pi $ is the LSTM – i.e., a stochastic policy which outputs a distribution over $a_t$ given a dialog history $\mathbf {h_t}$ , parameterized by weights $\mathbf {w}$ . The baseline $b$ is an estimate of the average return of the current policy, estimated on the last 100 dialogs using weighted importance sampling. Intuitively, “better” dialogs receive a positive gradient step, making the actions selected more likely; and “worse” dialogs receive a negative gradient step, making the actions selected less likely. SL and RL correspond to different methods of updating weights, so both can be applied to the same network. However, there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set. If not, we re-run SL gradient steps on the training set until the model reproduces the training set. Note that this approach allows new training dialogs to be added at any time during RL optimization. We illustrate RL optimization on a simulated dialog task in the name dialing domain. In this system, a contact's name may have synonyms (“Michael” may also be called “Mike”), and a contact may have more than one phone number, such as “work” or “mobile”, which may in turn have synonyms like “cell” for “mobile”. This domain has a database of names and phone numbers taken from the Microsoft personnel directory, 5 entity types – firstname, nickname, lastname, phonenumber, and phonetype – and 14 actions, including 2 API call actions. Simple entity logic was coded, which retains the most recent copy of recognized entities. A simple action mask suppresses impossible actions, such as placing a phonecall before a phone number has been retrieved from the database. Example dialogs are provided in Appendix Section "Name dialing example dialogs" .
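The update in Eq. ( 14 ) corresponds to a standard REINFORCE step with a baseline; a minimal sketch, assuming the log-probabilities of the actions taken during one dialog have been collected as tensors and that a PyTorch optimizer stands in for the plain learning-rate step of the equation:

import torch

def reinforce_update(optimizer, logps_taken, G, baseline):
    # logps_taken: list of log pi(a_t | h_t; w) values collected during one dialog
    # G: scalar return of the dialog; baseline: running estimate of the average return
    loss = -(G - baseline) * torch.stack(logps_taken).sum()
    optimizer.zero_grad()
    loss.backward()    # gradients of sum_t log pi(a_t | h_t; w)
    optimizer.step()   # one step in the direction of alpha * grad * (G - b)

The negative sign simply turns the gradient ascent of Eq. ( 14 ) into a loss that the optimizer minimizes.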
To perform optimization, we created a simulated user. At the start of a dialog, the simulated user randomly selected a name and phone type, including names and phone types not covered by the dialog system. When speaking, the simulated user can use the canonical name or a nickname; usually answers questions but can ignore the system; can provide additional information not requested; and can give up. The simulated user was parameterized by around 10 probabilities, set by hand. We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning. For the RNN in the HCN, we again used an LSTM with AdaDelta, this time with 32 hidden units. RL policy updates are made after each dialog. Since a simulated user was employed, we did not have real user utterances, and instead relied on context features, omitting bag-of-words and utterance embedding features. We first evaluate RL by randomly initializing an LSTM, and begin RL optimization. After 10 RL updates, we freeze the policy, and run 500 dialogs with the user simulation to measure task completion. We repeat all of this for 100 runs, and report average performance. In addition, we also report results by initializing the LSTM using supervised learning on the training set, consisting of 1, 2, 5, or 10 dialogs sampled randomly from the training set, then running RL as described above. Results are in Figure 4 . Although RL alone can find a good policy, pre-training with just a handful of labeled dialogs improves learning speed dramatically. Additional experiments, not shown for space, found that ablating the action mask slowed training, agreeing with BIBREF6 . Finally, we conduct a further experiment where we sample 10 training dialogs, then add one to the training set just before RL dialog 0, 100, 200, ... , 900. Results are shown in Figure 4 . This shows that SL dialogs can be introduced as RL is in progress – i.e., that it is possible to interleave RL and SL. This is an attractive property for practical systems: if a dialog error is spotted by a developer while RL is in progress, it is natural to add a training dialog to the training set. Conclusion This paper has introduced Hybrid Code Networks for end-to-end learning of task-oriented dialog systems. HCNs support a separation of concerns where procedural knowledge and constraints can be expressed in software, and the control flow is learned. Compared to existing end-to-end approaches, HCNs afford more developer control and require less training data, at the expense of a small amount of developer effort. Results in this paper have explored three different dialog domains. On a public benchmark in the restaurants domain, HCNs exceeded performance of purely learned models. Results in two troubleshooting domains exceeded performance of a commercially deployed rule-based system. Finally, in a name-dialing domain, results from dialog simulation show that HCNs can also be optimized with a mixture of reinforcement and supervised learning. 
In future work, we plan to extend HCNs by incorporating lines of existing work, such as integrating the entity extraction step into the neural network BIBREF38 , adding richer utterance embeddings BIBREF39 , and supporting text generation BIBREF14 . We will also explore using HCNs with automatic speech recognition (ASR) input, for example by forming features from n-grams of the ASR n-best results BIBREF40 . Of course, we also plan to deploy the model in a live dialog system. More broadly, HCNs are a general model for stateful control, and we would be interested to explore applications beyond dialog systems – for example, in NLP medical settings or human-robot NL interaction tasks, providing domain constraints are important for safety; and in resource-poor settings, providing domain knowledge can amplify limited data. Model implementation details The RNN was specified using Keras version 0.3.3, with back-end computation in Theano version 0.8.0.dev0 BIBREF42 , BIBREF41 . The Keras model specification is given below. The input variable obs includes all features from Figure 1 step 6 except for the previous action (step 18) and the action mask (step 6, top-most vector). # Given: # obs_size, action_size, nb_hidden g = Graph() g.add_input( name='obs', input_shape=(None, obs_size) ) g.add_input( name='prev_action', input_shape=(None, action_size) ) g.add_input( name='avail_actions', input_shape=(None, action_size) ) g.add_node( LSTM( n_hidden, return_sequences=True, activation='tanh', ), name='h1', inputs=[ 'obs', 'prev_action', 'avail_actions' ] ) g.add_node( TimeDistributedDense( action_size, activation='softmax', ), name='h2', input='h1' ) g.add_node( Activation( activation=normalize, ), name='action', inputs=['h2','avail_actions'], merge_mode='mul', create_output=True ) g.compile( optimizer=Adadelta(clipnorm=1.), sample_weight_modes={ 'action': 'temporal' }, loss={ 'action':'categorical_crossentropy' } ) Model sizes are given in Table 3 . Example dialogs are given below for each of the 5 dialog systems. For space and readability, the entity tags that appear in the user and system sides of the dialogs have been removed – for example, Call <name>Joan</name> is shown as Call Joan. bAbI Task5 example dialog good morning hello what can i help you with today i'd like to book a table with italian food i'm on it [silence] where should it be in paris how many people would be in your party for six people please which price range are looking for in a cheap price range please ok let me look into some options for you [silence] api_call italian paris six cheap [database results returned] [silence] what do you think of this option: resto_madrid_cheap_spanish_8stars no this does not work for me sure let me find an other option for you [silence] what do you think of this option: resto_madrid_cheap_spanish_6stars do you have something else sure let me find an other option for you [silence] what do you think of this option: resto_madrid_cheap_spanish_1stars it's perfect great let me do the reservation may i have the phone number of the restaurant here it is resto_madrid_cheap_spanish_1stars_phone thanks is there anything i can help you with no thank you you're welcome bAbI Task6 example dialog In the system turn marked [*], the system provided venue information without first making an API call. Hello , welcome to the Cambridge restaurant system . You can ask for restaurants by area , price range or food type . How may I help you ? 
im looking for canapes food in any price range I am sorry but there is no canapes restaurant that matches your request [*] how about european api_call european R_location R_price [database results returned] [silence] michaelhouse_cafe is a great restaurant serving european food and it is in the expensive price range food an What part of town do you have in mind? any any part of town michaelhouse_cafe serves european food . may i have the address and phone number please The phone number of michaelhouse_cafe is michaelhouse_cafe_phone thank you good bye you are welcome Task5 and Task6 context features Context features used in Section "Supervised learning evaluation I" are in Table 4 . Forgot password example dialog Some system actions have been shortened for space. How do I recover my password Okay, you don't need to remember your password, we can reset it. Would you like to try that? Yes To reset your password, please see this URL ... Was that helpful? I don't have a flashdrive Let's connect you to a person who can help you Account access example dialog Some system actions have been shortened for space. cannot sign in to my email Please check caps lock is off, and that your email is spelled correctly. Are you able to access your account now? no If you still can't sign in, please go the the following page and check that the email server is running ok... Is everything up and running? yes You need to reset your password - here's how ... Did this solve your problem? still cannot get in Did you receive the security code? yes Check you entered the right numbers. Did this solve your problem? no Have you seen a message that your profile is temporarily blocked? just your message of update on sept 15 Did you turn on two-step verification recently? whats that? Sorry, I can't help. Rephrase your question, or click below to talk to a person. Name dialing example dialogs How can I help you? Call Joe There's more than one person named Joe. Can you say their full name? Call Joe Adamson SavePhonetypeavail() Calling Joe Adamson, work PlaceCall() How can I help you? Call Sally on her home Sorry, I don't have a home number for Sally Smith. I only have a work phone. Do you want to call that number? no Oh, sorry about that. Goodbye.
Yes
059acc270062921ad27ee40a77fd50de6f02840a
059acc270062921ad27ee40a77fd50de6f02840a_0
Q: Do the authors test on datasets other than bAbl? Text: Introduction Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phonecall. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels. Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information. This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs) which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability. Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied. This paper is organized as follows. Section "Model description" describes the model, and Section "Related work" compares the model to related work. Section "Supervised learning evaluation I" applies HCNs to the bAbI dialog dataset BIBREF0 . Section "Supervised learning evaluation II" then applies the method to real customer support domains at our company. Section "Reinforcement learning illustration" illustrates how HCNs can be optimized with reinforcement learning, and Section "Conclusion" concludes. Model description At a high level, the four components of a Hybrid Code Network are a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1 . The cycle begins when the user provides an utterance, as text (step 1). The utterance is featurized in several ways. First, a bag of words vector is formed (step 2). Second, an utterance embedding is formed, using a pre-built utterance embedding model (step 3). Third, an entity extraction module identifies entity mentions (step 4) – for example, identifying “Jennifer Jones” as a <name> entity. 
The text and entity mentions are then passed to “Entity tracking” code provided by the developer (step 5), which grounds and maintains entities – for example, mapping the text “Jennifer Jones” to a specific row in a database. This code can optionally return an “action mask”, indicating actions which are permitted at the current timestep, as a bit vector. For example, if a target phone number has not yet been identified, the API action to place a phone call may be masked. It can also optionally return “context features” which are features the developer thinks will be useful for distinguishing among actions, such as which entities are currently present and which are absent. The feature components from steps 1-5 are concatenated to form a feature vector (step 6). This vector is passed to an RNN, such as a long short-term memory (LSTM) BIBREF1 or gated recurrent unit (GRU) BIBREF2 . The RNN computes a hidden state (vector), which is retained for the next timestep (step 8), and passed to a dense layer with a softmax activation, with output dimension equal to the number of distinct system action templates (step 9). Thus the output of step 9 is a distribution over action templates. Next, the action mask is applied as an element-wise multiplication, and the result is normalized back to a probability distribution (step 10) – this forces non-permitted actions to take on probability zero. From the resulting distribution (step 11), an action is selected (step 12). When RL is active, exploration is required, so in this case an action is sampled from the distribution; when RL is not active, the best action should be chosen, and so the action with the highest probability is always selected. The selected action is next passed to “Entity output” developer code that can substitute in entities (step 13) and produce a fully-formed action – for example, mapping the template “<city>, right?” to “Seattle, right?”. In step 14, control branches depending on the type of the action: if it is an API action, the corresponding API call in the developer code is invoked (step 15) – for example, to render rich content to the user. APIs can act as sensors and return features relevant to the dialog, so these can be added to the feature vector in the next timestep (step 16). If the action is text, it is rendered to the user (step 17), and cycle then repeats. The action taken is provided as a feature to the RNN in the next timestep (step 18). Related work Broadly there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection policy, and language generation BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks BIBREF12 , trained with supervised learning followed by reinforcement learning BIBREF13 . In these works, the policy has not been recurrent – i.e., the policy depends on the state tracker to summarize observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. 
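The “Entity tracking” developer code of step 5 above can be as simple as a dictionary plus a few if-then rules; the class below is a toy illustration with made-up entity and action names, not code from the paper's domains.

class EntityTracker:
    # Illustrative developer-provided code for step 5: grounds entity mentions,
    # keeps the most recent value, and exposes an action mask and context features.
    def __init__(self, action_names):
        self.entities = {}             # e.g. {'name': 'Jennifer Jones', 'phonetype': 'work'}
        self.action_names = action_names

    def update(self, mentions):
        # keep the most recent value for each recognized entity type
        for ent_type, value in mentions.items():
            self.entities[ent_type] = value

    def action_mask(self):
        # simple if-then rules over which entities are currently present
        mask = {a: 1 for a in self.action_names}
        if 'phonenumber' not in self.entities:
            mask['api_place_call'] = 0   # cannot place a call without a number
        if 'name' in self.entities:
            mask['ask_name'] = 0         # do not ask for an entity that is already known
        return [mask[a] for a in self.action_names]

    def context_features(self):
        # binary indicators of which entities are present or absent
        return [1 if e in self.entities else 0
                for e in ('name', 'phonenumber', 'phonetype')]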
For learning efficiency, HCNs use an external light-weight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section "Supervised learning evaluation II" below, we demonstrate an HCN-based dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN. The second, more recent line of work applies recurrent neural networks (RNNs) to learn “end-to-end” models, which map from an observable dialog history directly to a sequence of output words BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . These systems can be applied to task-oriented domains by adding special “API call” actions, enumerating database output as a sequence of tokens BIBREF0 , then learning an RNN using Memory Networks BIBREF27 , gated memory networks BIBREF28 , query reduction networks BIBREF29 , and copy-augmented networks BIBREF30 . In each of these architectures, the RNN learns to manipulate entity values, for example by saving them in a memory. Output is produced by generating a sequence of tokens (or ranking all possible surface forms), which can also draw from this memory. HCNs also use an RNN to accumulate dialog state and choose actions. However, HCNs differ in that they use developer-provided action templates, which can contain entity references, such as “<city>, right?”. This design reduce learning complexity, and also enable the software to limit which actions are available via an action mask, at the expense of developer effort. To further reduce learning complexity in a practical system, entities are tracked separately, outside the the RNN, which also allows them to be substituted into action templates. Also, past end-to-end recurrent models have been trained using supervised learning, whereas we show how HCNs can also be trained with reinforcement learning. Supervised learning evaluation I In this section we compare HCNs to existing approaches on the public “bAbI dialog” dataset BIBREF0 . This dataset includes two end-to-end dialog learning tasks, in the restaurant domain, called task5 and task6. Task5 consists of synthetic, simulated dialog data, with highly regular user behavior and constrained vocabulary. Dialogs include a database access action which retrieves relevant restaurants from a database, with results included in the dialog transcript. We test on the “OOV” variant of Task5, which includes entity values not observed in the training set. Task6 draws on human-computer dialog data from the second dialog state tracking challenge (DSTC2), where usability subjects (crowd-workers) interacted with several variants of a spoken dialog system BIBREF31 . Since the database from DSTC2 was not provided, database calls have been inferred from the data and inserted into the dialog transcript. Example dialogs are provided in the Appendix Sections "bAbI Task5 example dialog" and "bAbI Task6 example dialog" . To apply HCNs, we wrote simple domain-specific software, as follows. First, for entity extraction (step 4 in Figure 1 ), we used a simple string match, with a pre-defined list of entity names – i.e., the list of restaurants available in the database. 
Second, in the context update (step 5), we wrote simple logic for tracking entities: when an entity is recognized in the user input, it is retained by the software, over-writing any previously stored value. For example, if the price “cheap” is recognized in the first turn, it is retained as price=cheap. If “expensive” is then recognized in the third turn, it over-writes “cheap” so the code now holds price=expensive. Third, system actions were templatized: for example, system actions of the form “prezzo is a nice restaurant in the west of town in the moderate price range” all map to the template “<name> is a nice restaurant in the <location> of town in the <price> price range”. This results in 16 templates for Task5 and 58 for Task6. Fourth, when database results are received into the entity state, they are sorted by rating. Finally, an action mask was created which encoded common-sense dependencies. These are implemented as simple if-then rules based on the presence of entity values: for example, only allow an API call if pre-conditions are met; only offer a restaurant if database results have already been received; do not ask for an entity if it is already known; etc. For Task6, we noticed that the system can say that no restaurants match the current query without consulting the database (for an example dialog, see Section "bAbI Task6 example dialog" in the Appendix). In a practical system this information would be retrieved from the database and not encoded in the RNN. So, we mined the training data and built a table of search queries known to yield no results. We also added context features that indicated the state of the database – for example, whether there were any restaurants matching the current query. The complete set of context features is given in Appendix Section "Task5 and Task6 context features" . Altogether this code consisted of about 250 lines of Python. We then trained an HCN on the training set, employing the domain-specific software described above. We selected an LSTM for the recurrent layer BIBREF1 , with the AdaDelta optimizer BIBREF32 . We used the development set to tune the number of hidden units (128), and the number of epochs (12). Utterance embeddings were formed by averaging word embeddings, using a publicly available 300-dimensional word embedding model trained using word2vec on web data BIBREF33 . The word embeddings were static and not updated during LSTM training. In training, each dialog formed one minibatch, and updates were done on full rollouts (i.e., non-truncated back propagation through time). The training loss was categorical cross-entropy. Further low-level implementation details are in the Appendix Section "Model implementation details" . We ran experiments with four variants of our model: with and without the utterance embeddings, and with and without the action mask (Figure 1 , steps 3 and 6 respectively). Following past work, we report average turn accuracy – i.e., for each turn in each dialog, present the (true) history of user and system actions to the network and obtain the network's prediction as a string of characters. The turn is correct if the string matches the reference exactly, and incorrect if not. We also report dialog accuracy, which indicates if all turns in a dialog are correct. We compare to four past end-to-end approaches BIBREF0 , BIBREF28 , BIBREF30 , BIBREF29 . We emphasize that past approaches have applied purely sequence-to-sequence models, or (as a baseline) purely programmed rules BIBREF0 . 
By contrast, Hybrid Code Networks are a hybrid of hand-coded rules and learned models. Results are shown in Table 1 . Since Task5 is synthetic data generated using rules, it is possible to obtain perfect accuracy using rules (line 1). The addition of domain knowledge greatly simplifies the learning task and enables HCNs to also attain perfect accuracy. On Task6, rules alone fare poorly, whereas HCNs outperform past learned models. We next examined learning curves, training with increasing numbers of dialogs. To guard against bias in the ordering of the training set, we averaged over 5 runs, randomly permuting the order of the training dialogs in each run. Results are in Figure 2 . In Task5, the action mask and utterance embeddings substantially reduce the number of training dialogs required (note the horizontal axis scale is logarithmic). For Task6, the benefits of the utterance embeddings are less clear. An error analysis showed that there are several systematic differences between the training and testing sets. Indeed, DSTC2 intentionally used different dialog policies for the training and test sets, whereas our goal is to mimic the policy in the training set. Nonetheless, these tasks are the best public benchmark we are aware of, and HCNs exceed performance of existing sequence-to-sequence models. In addition, they match performance of past models using an order of magnitude less data (200 vs. 1618 dialogs), which is crucial in practical settings where collecting realistic dialogs for a new domain can be expensive. Supervised learning evaluation II We now turn to comparing with purely hand-crafted approaches. To do this, we obtained logs from our company's text-based customer support dialog system, which uses a sophisticated rule-based dialog manager. Data from this system is attractive for evaluation because it is used by real customers – not usability subjects – and because its rule-based dialog manager was developed by customer support professionals at our company, and not the authors. This data is not publicly available, but we are unaware of suitable human-computer dialog data in the public domain which uses rules. Customers start using the dialog system by entering a brief description of their problem, such as “I need to update my operating system”. They are then routed to one of several hundred domains, where each domain attempts to resolve a particular problem. In this study, we collected human-computer transcripts for the high-traffic domains “reset password” and “cannot access account”. We labeled the dialog data as follows. First, we enumerated unique system actions observed in the data. Then, for each dialog, starting from the beginning, we examined each system action, and determined whether it was “correct”. Here, correct means that it was the most appropriate action among the set of existing system actions, given the history of that dialog. If multiple actions were arguably appropriate, we broke ties in favor of the existing rule-based dialog manager. Example dialogs are provided in the Appendix Sections "Forgot password example dialog" and "Account access example dialog" . If a system action was labeled as correct, we left it as-is and continued to the next system action. If the system action was not correct, we replaced it with the correct system action, and discarded the rest of the dialog, since we do not know how the user would have replied to this new system action. 
The resulting dataset contained a mixture of complete and partial dialogs, containing only correct system actions. We partitioned this set into training and test dialogs. Basic statistics of the data are shown in Table 2 . In this domain, no entities were relevant to the control flow, and there was no obvious mask logic since any question could follow any question. Therefore, we wrote no domain-specific software for this instance of the HCN, and relied purely on the recurrent neural network to drive the conversation. The architecture and training of the RNN was the same as in Section "Supervised learning evaluation I" , except that here we did not have enough data for a validation set, so we instead trained until we either achieved 100% accuracy on the training set or reached 200 epochs. To evaluate, we observe that conventional measures like average dialog accuracy unfairly penalize the system used to collect the dialogs – in our case, the rule-based system. If the system used for collection makes an error at turn $t$ , the labeled dialog only includes the sub-dialog up to turn $t$ , and the system being evaluated off-line is only evaluated on that sub-dialog. In other words, in our case, reporting dialog accuracy would favor the HCN because it would be evaluated on fewer turns than the rule-based system. We therefore use a comparative measure that examines which method produces longer continuous sequences of correct system actions, starting from the beginning of the dialog. Specifically, we report $\Delta P = \frac{C(\text{HCN-win}) - C(\text{rule-win})}{C(\text{all})}$ , where $C(\text{HCN-win})$ is the number of test dialogs where the rule-based approach output a wrong action before the HCN; $C(\text{rule-win})$ is the number of test dialogs where the HCN output a wrong action before the rule-based approach; and $C(\text{all})$ is the number of dialogs in the test set. When $\Delta P > 0$ , there are more dialogs in which HCNs produce longer continuous sequences of correct actions starting from the beginning of the dialog. We run all experiments 5 times, each time shuffling the order of the training set. Results are in Figure 3 . HCNs exceed performance of the existing rule-based system after about 30 dialogs. In these domains, we have a further source of knowledge: the rule-based dialog managers themselves can be used to generate example “sunny-day” dialogs, where the user provides purely expected inputs. From each rule-based controller, synthetic dialogs were sampled to cover each expected user response at least once, and added to the set of labeled real dialogs. This resulted in 75 dialogs for the “Forgot password” domain, and 325 for the “Can't access account” domain. Training was repeated as described above. Results are also included in Figure 3 , with the suffix “sampled”. In the “Can't access account” domain, the sampled dialogs yield a large improvement, probably because the flow chart for this domain is large, so the sampled dialogs increase coverage. The gain in the “forgot password” domain is present but smaller. In summary, HCNs can out-perform production-grade rule-based systems with a reasonable number of labeled dialogs, and adding synthetic “sunny-day” dialogs improves performance further. Moreover, unlike existing pipelined approaches to dialog management that rely on an explicit state tracker, this HCN used no explicit state tracker, highlighting an advantage of the model. 
Reinforcement learning illustration In the previous sections, supervised learning (SL) was applied to train the LSTM to mimic dialogs provided by the system developer. Once a system operates at scale, interacting with a large number of users, it is desirable for the system to continue to learn autonomously using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called a reward; the agent explores different sequences of actions in different situations, and makes adjustments so as to maximize the expected discounted sum of rewards, which is called the return, denoted $G$. For optimization, we selected a policy gradient approach BIBREF34 , which has been successfully applied to dialog systems BIBREF35 , robotics BIBREF36 , and the board game Go BIBREF37 . In policy gradient-based RL, a model $\pi $ is parameterized by $\mathbf {w}$ and outputs a distribution from which actions are sampled at each timestep. At the end of a trajectory – in our case, dialog – the return $G$ for that trajectory is computed, and the gradients of the probabilities of the actions taken with respect to the model weights are computed. The weights are then adjusted by taking a gradient step proportional to the return: $$\mathbf {w} \leftarrow \mathbf {w} + \alpha ( \sum _t \triangledown _{\mathbf {w}} \log \pi (a_t|\mathbf {h_t};\mathbf {w}) ) ( G - b ) $$ (Eq. 14) where $\alpha $ is a learning rate; $a_t$ is the action taken at timestep $t$; $\mathbf {h_t}$ is the dialog history at time $t$; $G$ is the return of the dialog; $\triangledown _{\mathbf {x}} F$ denotes the Jacobian of $F$ with respect to $\mathbf {x}$; $b$ is a baseline described below; and $\pi $ is the LSTM – i.e., a stochastic policy which outputs a distribution over $a_t$ given a dialog history $\mathbf {h_t}$, parameterized by weights $\mathbf {w}$. The baseline $b$ is an estimate of the average return of the current policy, estimated on the last 100 dialogs using weighted importance sampling. Intuitively, “better” dialogs receive a positive gradient step, making the actions selected more likely; and “worse” dialogs receive a negative gradient step, making the actions selected less likely. SL and RL correspond to different methods of updating weights, so both can be applied to the same network. However, there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set. If not, we re-run SL gradient steps on the training set until the model reproduces the training set. Note that this approach allows new training dialogs to be added at any time during RL optimization. We illustrate RL optimization on a simulated dialog task in the name dialing domain. In this system, a contact's name may have synonyms (“Michael” may also be called “Mike”), and a contact may have more than one phone number, such as “work” or “mobile”, which may in turn have synonyms like “cell” for “mobile”. This domain has a database of names and phone numbers taken from the Microsoft personnel directory, 5 entity types – firstname, nickname, lastname, phonenumber, and phonetype – and 14 actions, including 2 API call actions. Simple entity logic was coded, which retains the most recent copy of recognized entities. A simple action mask suppresses impossible actions, such as placing a phonecall before a phone number has been retrieved from the database. Example dialogs are provided in Appendix Section "Name dialing example dialogs" .
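The update in Eq. 14 can be sketched as follows. To keep the example self-contained, a linear-softmax policy stands in for the LSTM so that the gradient of log pi has a closed form; in the paper the same update is applied to the recurrent network's weights. All names are illustrative, not the authors' code.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_update(w, trajectory, G, b, alpha=0.1):
    """One policy-gradient step: w += alpha * (sum_t grad log pi(a_t|h_t; w)) * (G - b).

    w          : (n_features, n_actions) weights of a linear-softmax policy
    trajectory : list of (feature_vector, action_index) pairs for one dialog
    G, b       : return of the dialog and baseline (average-return estimate)
    """
    grad = np.zeros_like(w)
    for h, a in trajectory:
        p = softmax(w.T @ h)                 # pi(. | h; w)
        one_hot = np.zeros_like(p)
        one_hot[a] = 1.0
        grad += np.outer(h, one_hot - p)     # grad_w log pi(a | h) for a linear-softmax policy
    return w + alpha * grad * (G - b)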
To perform optimization, we created a simulated user. At the start of a dialog, the simulated user randomly selected a name and phone type, including names and phone types not covered by the dialog system. When speaking, the simulated user can use the canonical name or a nickname; usually answers questions but can ignore the system; can provide additional information not requested; and can give up. The simulated user was parameterized by around 10 probabilities, set by hand. We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning. For the RNN in the HCN, we again used an LSTM with AdaDelta, this time with 32 hidden units. RL policy updates are made after each dialog. Since a simulated user was employed, we did not have real user utterances, and instead relied on context features, omitting bag-of-words and utterance embedding features. We first evaluate RL by randomly initializing an LSTM, and begin RL optimization. After 10 RL updates, we freeze the policy, and run 500 dialogs with the user simulation to measure task completion. We repeat all of this for 100 runs, and report average performance. In addition, we also report results by initializing the LSTM using supervised learning on the training set, consisting of 1, 2, 5, or 10 dialogs sampled randomly from the training set, then running RL as described above. Results are in Figure 4 . Although RL alone can find a good policy, pre-training with just a handful of labeled dialogs improves learning speed dramatically. Additional experiments, not shown for space, found that ablating the action mask slowed training, agreeing with BIBREF6 . Finally, we conduct a further experiment where we sample 10 training dialogs, then add one to the training set just before RL dialog 0, 100, 200, ... , 900. Results are shown in Figure 4 . This shows that SL dialogs can be introduced as RL is in progress – i.e., that it is possible to interleave RL and SL. This is an attractive property for practical systems: if a dialog error is spotted by a developer while RL is in progress, it is natural to add a training dialog to the training set. Conclusion This paper has introduced Hybrid Code Networks for end-to-end learning of task-oriented dialog systems. HCNs support a separation of concerns where procedural knowledge and constraints can be expressed in software, and the control flow is learned. Compared to existing end-to-end approaches, HCNs afford more developer control and require less training data, at the expense of a small amount of developer effort. Results in this paper have explored three different dialog domains. On a public benchmark in the restaurants domain, HCNs exceeded performance of purely learned models. Results in two troubleshooting domains exceeded performance of a commercially deployed rule-based system. Finally, in a name-dialing domain, results from dialog simulation show that HCNs can also be optimized with a mixture of reinforcement and supervised learning. 
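The return definition and the interleaving of SL with RL described above can be made concrete with the sketch below. It is a schematic, not the authors' implementation: fits_training_set (does the current policy reproduce every action in the labeled dialogs?) and sl_epoch (one supervised pass over the training set) are hypothetical callables.

def dialog_return(success, num_system_turns, gamma=0.95):
    """G = 0 for failed dialogs, gamma**(T-1) for successful ones."""
    return gamma ** (num_system_turns - 1) if success else 0.0

def rl_step_with_sl_guard(policy, rl_update, training_dialogs,
                          fits_training_set, sl_epoch, max_sl_epochs=200):
    """After each RL gradient step, re-run SL until the policy again
    reproduces the (possibly updated) supervised training set."""
    rl_update(policy)
    epochs = 0
    while not fits_training_set(policy, training_dialogs) and epochs < max_sl_epochs:
        sl_epoch(policy, training_dialogs)
        epochs += 1
    return policy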
In future work, we plan to extend HCNs by incorporating lines of existing work, such as integrating the entity extraction step into the neural network BIBREF38 , adding richer utterance embeddings BIBREF39 , and supporting text generation BIBREF14 . We will also explore using HCNs with automatic speech recognition (ASR) input, for example by forming features from n-grams of the ASR n-best results BIBREF40 . Of course, we also plan to deploy the model in a live dialog system. More broadly, HCNs are a general model for stateful control, and we would be interested to explore applications beyond dialog systems – for example, in NLP medical settings or human-robot NL interaction tasks, providing domain constraints are important for safety; and in resource-poor settings, providing domain knowledge can amplify limited data. Model implementation details The RNN was specified using Keras version 0.3.3, with back-end computation in Theano version 0.8.0.dev0 BIBREF42 , BIBREF41 . The Keras model specification is given below. The input variable obs includes all features from Figure 1 step 6 except for the previous action (step 18) and the action mask (step 6, top-most vector). # Given: # obs_size, action_size, nb_hidden g = Graph() g.add_input( name='obs', input_shape=(None, obs_size) ) g.add_input( name='prev_action', input_shape=(None, action_size) ) g.add_input( name='avail_actions', input_shape=(None, action_size) ) g.add_node( LSTM( n_hidden, return_sequences=True, activation='tanh', ), name='h1', inputs=[ 'obs', 'prev_action', 'avail_actions' ] ) g.add_node( TimeDistributedDense( action_size, activation='softmax', ), name='h2', input='h1' ) g.add_node( Activation( activation=normalize, ), name='action', inputs=['h2','avail_actions'], merge_mode='mul', create_output=True ) g.compile( optimizer=Adadelta(clipnorm=1.), sample_weight_modes={ 'action': 'temporal' }, loss={ 'action':'categorical_crossentropy' } ) Model sizes are given in Table 3 . Example dialogs are given below for each of the 5 dialog systems. For space and readability, the entity tags that appear in the user and system sides of the dialogs have been removed – for example, Call <name>Joan</name> is shown as Call Joan. bAbI Task5 example dialog good morning hello what can i help you with today i'd like to book a table with italian food i'm on it [silence] where should it be in paris how many people would be in your party for six people please which price range are looking for in a cheap price range please ok let me look into some options for you [silence] api_call italian paris six cheap [database results returned] [silence] what do you think of this option: resto_madrid_cheap_spanish_8stars no this does not work for me sure let me find an other option for you [silence] what do you think of this option: resto_madrid_cheap_spanish_6stars do you have something else sure let me find an other option for you [silence] what do you think of this option: resto_madrid_cheap_spanish_1stars it's perfect great let me do the reservation may i have the phone number of the restaurant here it is resto_madrid_cheap_spanish_1stars_phone thanks is there anything i can help you with no thank you you're welcome bAbI Task6 example dialog In the system turn marked [*], the system provided venue information without first making an API call. Hello , welcome to the Cambridge restaurant system . You can ask for restaurants by area , price range or food type . How may I help you ? 
im looking for canapes food in any price range I am sorry but there is no canapes restaurant that matches your request [*] how about european api_call european R_location R_price [database results returned] [silence] michaelhouse_cafe is a great restaurant serving european food and it is in the expensive price range food an What part of town do you have in mind? any any part of town michaelhouse_cafe serves european food . may i have the address and phone number please The phone number of michaelhouse_cafe is michaelhouse_cafe_phone thank you good bye you are welcome Task5 and Task6 context features Context features used in Section "Supervised learning evaluation I" are in Table 4 . Forgot password example dialog Some system actions have been shortened for space. How do I recover my password Okay, you don't need to remember your password, we can reset it. Would you like to try that? Yes To reset your password, please see this URL ... Was that helpful? I don't have a flashdrive Let's connect you to a person who can help you Account access example dialog Some system actions have been shortened for space. cannot sign in to my email Please check caps lock is off, and that your email is spelled correctly. Are you able to access your account now? no If you still can't sign in, please go the the following page and check that the email server is running ok... Is everything up and running? yes You need to reset your password - here's how ... Did this solve your problem? still cannot get in Did you receive the security code? yes Check you entered the right numbers. Did this solve your problem? no Have you seen a message that your profile is temporarily blocked? just your message of update on sept 15 Did you turn on two-step verification recently? whats that? Sorry, I can't help. Rephrase your question, or click below to talk to a person. Name dialing example dialogs How can I help you? Call Joe There's more than one person named Joe. Can you say their full name? Call Joe Adamson SavePhonetypeavail() Calling Joe Adamson, work PlaceCall() How can I help you? Call Sally on her home Sorry, I don't have a home number for Sally Smith. I only have a work phone. Do you want to call that number? no Oh, sorry about that. Goodbye.
No
6a9eb407be6a459dc976ffeae17bdd8f71c8791c
6a9eb407be6a459dc976ffeae17bdd8f71c8791c_0
Q: What is the reward model for the reinforcement learning appraoch? Text: Introduction Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phonecall. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels. Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information. This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs) which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability. Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied. This paper is organized as follows. Section "Model description" describes the model, and Section "Related work" compares the model to related work. Section "Supervised learning evaluation I" applies HCNs to the bAbI dialog dataset BIBREF0 . Section "Supervised learning evaluation II" then applies the method to real customer support domains at our company. Section "Reinforcement learning illustration" illustrates how HCNs can be optimized with reinforcement learning, and Section "Conclusion" concludes. Model description At a high level, the four components of a Hybrid Code Network are a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1 . The cycle begins when the user provides an utterance, as text (step 1). The utterance is featurized in several ways. First, a bag of words vector is formed (step 2). Second, an utterance embedding is formed, using a pre-built utterance embedding model (step 3). Third, an entity extraction module identifies entity mentions (step 4) – for example, identifying “Jennifer Jones” as a <name> entity. 
The text and entity mentions are then passed to “Entity tracking” code provided by the developer (step 5), which grounds and maintains entities – for example, mapping the text “Jennifer Jones” to a specific row in a database. This code can optionally return an “action mask”, indicating actions which are permitted at the current timestep, as a bit vector. For example, if a target phone number has not yet been identified, the API action to place a phone call may be masked. It can also optionally return “context features” which are features the developer thinks will be useful for distinguishing among actions, such as which entities are currently present and which are absent. The feature components from steps 1-5 are concatenated to form a feature vector (step 6). This vector is passed to an RNN, such as a long short-term memory (LSTM) BIBREF1 or gated recurrent unit (GRU) BIBREF2 . The RNN computes a hidden state (vector), which is retained for the next timestep (step 8), and passed to a dense layer with a softmax activation, with output dimension equal to the number of distinct system action templates (step 9). Thus the output of step 9 is a distribution over action templates. Next, the action mask is applied as an element-wise multiplication, and the result is normalized back to a probability distribution (step 10) – this forces non-permitted actions to take on probability zero. From the resulting distribution (step 11), an action is selected (step 12). When RL is active, exploration is required, so in this case an action is sampled from the distribution; when RL is not active, the best action should be chosen, and so the action with the highest probability is always selected. The selected action is next passed to “Entity output” developer code that can substitute in entities (step 13) and produce a fully-formed action – for example, mapping the template “<city>, right?” to “Seattle, right?”. In step 14, control branches depending on the type of the action: if it is an API action, the corresponding API call in the developer code is invoked (step 15) – for example, to render rich content to the user. APIs can act as sensors and return features relevant to the dialog, so these can be added to the feature vector in the next timestep (step 16). If the action is text, it is rendered to the user (step 17), and cycle then repeats. The action taken is provided as a feature to the RNN in the next timestep (step 18). Related work Broadly there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection policy, and language generation BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks BIBREF12 , trained with supervised learning followed by reinforcement learning BIBREF13 . In these works, the policy has not been recurrent – i.e., the policy depends on the state tracker to summarize observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. 
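Steps 6–12 of the cycle amount to a small amount of glue code around the recurrent network. The sketch below assumes an rnn_step callable that maps a feature vector and the previous hidden state to unmasked action probabilities and a new hidden state (the LSTM plus softmax layer of steps 7–9); the feature names and shapes are illustrative.

import numpy as np

def select_action(rnn_step, hidden, bow, utt_emb, entity_feats,
                  context_feats, prev_action_onehot, action_mask, explore=False):
    features = np.concatenate([bow, utt_emb, entity_feats,
                               context_feats, prev_action_onehot])   # step 6
    probs, hidden = rnn_step(features, hidden)                       # steps 7-9
    probs = probs * action_mask                                      # step 10: zero out masked actions
    probs = probs / probs.sum()                                      # renormalize to a distribution
    if explore:                            # RL active: sample for exploration
        action = int(np.random.choice(len(probs), p=probs))
    else:                                  # otherwise take the highest-probability action
        action = int(np.argmax(probs))
    return action, hidden                                            # steps 11-12 (+ state for step 8)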
For learning efficiency, HCNs use an external light-weight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section "Supervised learning evaluation II" below, we demonstrate an HCN-based dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN. The second, more recent line of work applies recurrent neural networks (RNNs) to learn “end-to-end” models, which map from an observable dialog history directly to a sequence of output words BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . These systems can be applied to task-oriented domains by adding special “API call” actions, enumerating database output as a sequence of tokens BIBREF0 , then learning an RNN using Memory Networks BIBREF27 , gated memory networks BIBREF28 , query reduction networks BIBREF29 , and copy-augmented networks BIBREF30 . In each of these architectures, the RNN learns to manipulate entity values, for example by saving them in a memory. Output is produced by generating a sequence of tokens (or ranking all possible surface forms), which can also draw from this memory. HCNs also use an RNN to accumulate dialog state and choose actions. However, HCNs differ in that they use developer-provided action templates, which can contain entity references, such as “<city>, right?”. This design reduce learning complexity, and also enable the software to limit which actions are available via an action mask, at the expense of developer effort. To further reduce learning complexity in a practical system, entities are tracked separately, outside the the RNN, which also allows them to be substituted into action templates. Also, past end-to-end recurrent models have been trained using supervised learning, whereas we show how HCNs can also be trained with reinforcement learning. Supervised learning evaluation I In this section we compare HCNs to existing approaches on the public “bAbI dialog” dataset BIBREF0 . This dataset includes two end-to-end dialog learning tasks, in the restaurant domain, called task5 and task6. Task5 consists of synthetic, simulated dialog data, with highly regular user behavior and constrained vocabulary. Dialogs include a database access action which retrieves relevant restaurants from a database, with results included in the dialog transcript. We test on the “OOV” variant of Task5, which includes entity values not observed in the training set. Task6 draws on human-computer dialog data from the second dialog state tracking challenge (DSTC2), where usability subjects (crowd-workers) interacted with several variants of a spoken dialog system BIBREF31 . Since the database from DSTC2 was not provided, database calls have been inferred from the data and inserted into the dialog transcript. Example dialogs are provided in the Appendix Sections "bAbI Task5 example dialog" and "bAbI Task6 example dialog" . To apply HCNs, we wrote simple domain-specific software, as follows. First, for entity extraction (step 4 in Figure 1 ), we used a simple string match, with a pre-defined list of entity names – i.e., the list of restaurants available in the database. 
Second, in the context update (step 5), we wrote simple logic for tracking entities: when an entity is recognized in the user input, it is retained by the software, over-writing any previously stored value. For example, if the price “cheap” is recognized in the first turn, it is retained as price=cheap. If “expensive” is then recognized in the third turn, it over-writes “cheap” so the code now holds price=expensive. Third, system actions were templatized: for example, system actions of the form “prezzo is a nice restaurant in the west of town in the moderate price range” all map to the template “<name> is a nice restaurant in the <location> of town in the <price> price range”. This results in 16 templates for Task5 and 58 for Task6. Fourth, when database results are received into the entity state, they are sorted by rating. Finally, an action mask was created which encoded common-sense dependencies. These are implemented as simple if-then rules based on the presence of entity values: for example, only allow an API call if pre-conditions are met; only offer a restaurant if database results have already been received; do not ask for an entity if it is already known; etc. For Task6, we noticed that the system can say that no restaurants match the current query without consulting the database (for an example dialog, see Section "bAbI Task6 example dialog" in the Appendix). In a practical system this information would be retrieved from the database and not encoded in the RNN. So, we mined the training data and built a table of search queries known to yield no results. We also added context features that indicated the state of the database – for example, whether there were any restaurants matching the current query. The complete set of context features is given in Appendix Section "Task5 and Task6 context features" . Altogether this code consisted of about 250 lines of Python. We then trained an HCN on the training set, employing the domain-specific software described above. We selected an LSTM for the recurrent layer BIBREF1 , with the AdaDelta optimizer BIBREF32 . We used the development set to tune the number of hidden units (128), and the number of epochs (12). Utterance embeddings were formed by averaging word embeddings, using a publicly available 300-dimensional word embedding model trained using word2vec on web data BIBREF33 . The word embeddings were static and not updated during LSTM training. In training, each dialog formed one minibatch, and updates were done on full rollouts (i.e., non-truncated back propagation through time). The training loss was categorical cross-entropy. Further low-level implementation details are in the Appendix Section "Model implementation details" . We ran experiments with four variants of our model: with and without the utterance embeddings, and with and without the action mask (Figure 1 , steps 3 and 6 respectively). Following past work, we report average turn accuracy – i.e., for each turn in each dialog, present the (true) history of user and system actions to the network and obtain the network's prediction as a string of characters. The turn is correct if the string matches the reference exactly, and incorrect if not. We also report dialog accuracy, which indicates if all turns in a dialog are correct. We compare to four past end-to-end approaches BIBREF0 , BIBREF28 , BIBREF30 , BIBREF29 . We emphasize that past approaches have applied purely sequence-to-sequence models, or (as a baseline) purely programmed rules BIBREF0 . 
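The domain-specific software is deliberately simple. The sketch below illustrates the two pieces described above — entity tracking that keeps the most recent value, and an if-then action mask — for a restaurant-style domain; the slot names, action names, and preconditions are illustrative stand-ins for the roughly 250 lines of Python used in the experiments.

def update_entities(state, recognized):
    """Retain the most recently recognized value for each entity type
    (e.g. 'expensive' over-writes an earlier 'cheap' for the price slot)."""
    new_state = dict(state)
    new_state.update(recognized)
    return new_state

def action_mask(state, actions):
    """1 = allowed, 0 = suppressed, from simple common-sense preconditions."""
    required_slots = ('cuisine', 'location', 'party_size', 'price')
    mask = []
    for a in actions:
        if a == 'api_call' and not all(s in state for s in required_slots):
            mask.append(0)    # only allow an API call once its pre-conditions are met
        elif a == 'offer_restaurant' and 'db_results' not in state:
            mask.append(0)    # only offer a restaurant after database results arrive
        elif a.startswith('ask_') and a[len('ask_'):] in state:
            mask.append(0)    # do not ask for an entity that is already known
        else:
            mask.append(1)
    return mask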
By contrast, Hybrid Code Networks are a hybrid of hand-coded rules and learned models. Results are shown in Table 1 . Since Task5 is synthetic data generated using rules, it is possible to obtain perfect accuracy using rules (line 1). The addition of domain knowledge greatly simplifies the learning task and enables HCNs to also attain perfect accuracy. On Task6, rules alone fare poorly, whereas HCNs outperform past learned models. We next examined learning curves, training with increasing numbers of dialogs. To guard against bias in the ordering of the training set, we averaged over 5 runs, randomly permuting the order of the training dialogs in each run. Results are in Figure 2 . In Task5, the action mask and utterance embeddings substantially reduce the number of training dialogs required (note the horizontal axis scale is logarithmic). For Task6, the benefits of the utterance embeddings are less clear. An error analysis showed that there are several systematic differences between the training and testing sets. Indeed, DSTC2 intentionally used different dialog policies for the training and test sets, whereas our goal is to mimic the policy in the training set. Nonetheless, these tasks are the best public benchmark we are aware of, and HCNs exceed performance of existing sequence-to-sequence models. In addition, they match performance of past models using an order of magnitude less data (200 vs. 1618 dialogs), which is crucial in practical settings where collecting realistic dialogs for a new domain can be expensive. Supervised learning evaluation II We now turn to comparing with purely hand-crafted approaches. To do this, we obtained logs from our company's text-based customer support dialog system, which uses a sophisticated rule-based dialog manager. Data from this system is attractive for evaluation because it is used by real customers – not usability subjects – and because its rule-based dialog manager was developed by customer support professionals at our company, and not the authors. This data is not publicly available, but we are unaware of suitable human-computer dialog data in the public domain which uses rules. Customers start using the dialog system by entering a brief description of their problem, such as “I need to update my operating system”. They are then routed to one of several hundred domains, where each domain attempts to resolve a particular problem. In this study, we collected human-computer transcripts for the high-traffic domains “reset password” and “cannot access account”. We labeled the dialog data as follows. First, we enumerated unique system actions observed in the data. Then, for each dialog, starting from the beginning, we examined each system action, and determined whether it was “correct”. Here, correct means that it was the most appropriate action among the set of existing system actions, given the history of that dialog. If multiple actions were arguably appropriate, we broke ties in favor of the existing rule-based dialog manager. Example dialogs are provided in the Appendix Sections "Forgot password example dialog" and "Account access example dialog" . If a system action was labeled as correct, we left it as-is and continued to the next system action. If the system action was not correct, we replaced it with the correct system action, and discarded the rest of the dialog, since we do not know how the user would have replied to this new system action. 
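The turn- and dialog-accuracy measures reported in Table 1 can be computed directly from predicted and reference action strings, as described earlier. A minimal sketch, assuming each dialog is a list of (predicted, reference) pairs compared by exact string match:

def turn_and_dialog_accuracy(dialogs):
    """Returns (turn accuracy, dialog accuracy)."""
    turn_flags = [pred == ref for d in dialogs for pred, ref in d]
    dialog_flags = [all(pred == ref for pred, ref in d) for d in dialogs]
    return (sum(turn_flags) / len(turn_flags),
            sum(dialog_flags) / len(dialog_flags))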
The resulting dataset contained a mixture of complete and partial dialogs, containing only correct system actions. We partitioned this set into training and test dialogs. Basic statistics of the data are shown in Table 2 . In this domain, no entities were relevant to the control flow, and there was no obvious mask logic since any question could follow any question. Therefore, we wrote no domain-specific software for this instance of the HCN, and relied purely on the recurrent neural network to drive the conversation. The architecture and training of the RNN was the same as in Section "Supervised learning evaluation I" , except that here we did not have enough data for a validation set, so we instead trained until we either achieved 100% accuracy on the training set or reached 200 epochs. To evaluate, we observe that conventional measures like average dialog accuracy unfairly penalize the system used to collect the dialogs – in our case, the rule-based system. If the system used for collection makes an error at turn $t$ , the labeled dialog only includes the sub-dialog up to turn $t$ , and the system being evaluated off-line is only evaluated on that sub-dialog. In other words, in our case, reporting dialog accuracy would favor the HCN because it would be evaluated on fewer turns than the rule-based system. We therefore use a comparative measure that examines which method produces longer continuous sequences of correct system actions, starting from the beginning of the dialog. Specifically, we report $\Delta P = \frac{C(\text{HCN-win}) - C(\text{rule-win})}{C(\text{all})}$ , where $C(\text{HCN-win})$ is the number of test dialogs where the rule-based approach output a wrong action before the HCN; $C(\text{rule-win})$ is the number of test dialogs where the HCN output a wrong action before the rule-based approach; and $C(\text{all})$ is the number of dialogs in the test set. When $\Delta P > 0$ , there are more dialogs in which HCNs produce longer continuous sequences of correct actions starting from the beginning of the dialog. We run all experiments 5 times, each time shuffling the order of the training set. Results are in Figure 3 . HCNs exceed performance of the existing rule-based system after about 30 dialogs. In these domains, we have a further source of knowledge: the rule-based dialog managers themselves can be used to generate example “sunny-day” dialogs, where the user provides purely expected inputs. From each rule-based controller, synthetic dialogs were sampled to cover each expected user response at least once, and added to the set of labeled real dialogs. This resulted in 75 dialogs for the “Forgot password” domain, and 325 for the “Can't access account” domain. Training was repeated as described above. Results are also included in Figure 3 , with the suffix “sampled”. In the “Can't access account” domain, the sampled dialogs yield a large improvement, probably because the flow chart for this domain is large, so the sampled dialogs increase coverage. The gain in the “forgot password” domain is present but smaller. In summary, HCNs can out-perform production-grade rule-based systems with a reasonable number of labeled dialogs, and adding synthetic “sunny-day” dialogs improves performance further. Moreover, unlike existing pipelined approaches to dialog management that rely on an explicit state tracker, this HCN used no explicit state tracker, highlighting an advantage of the model. 
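Since every experiment is run 5 times with the training dialogs shuffled each time, a small driver like the following is assumed around training and evaluation; train_and_evaluate is a hypothetical callable that trains an HCN on dialogs in the given order and returns the test metric (for example, Delta-P).

import random

def averaged_over_shuffled_runs(train_dialogs, train_and_evaluate, n_runs=5, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(n_runs):
        order = list(train_dialogs)
        rng.shuffle(order)                    # permute the order of the training set
        scores.append(train_and_evaluate(order))
    return sum(scores) / len(scores)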
Reinforcement learning illustration In the previous sections, supervised learning (SL) was applied to train the LSTM to mimic dialogs provided by the system developer. Once a system operates at scale, interacting with a large number of users, it is desirable for the system to continue to learn autonomously using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called a reward; the agent explores different sequences of actions in different situations, and makes adjustments so as to maximize the expected discounted sum of rewards, which is called the return, denoted $G$. For optimization, we selected a policy gradient approach BIBREF34 , which has been successfully applied to dialog systems BIBREF35 , robotics BIBREF36 , and the board game Go BIBREF37 . In policy gradient-based RL, a model $\pi $ is parameterized by $\mathbf {w}$ and outputs a distribution from which actions are sampled at each timestep. At the end of a trajectory – in our case, dialog – the return $G$ for that trajectory is computed, and the gradients of the probabilities of the actions taken with respect to the model weights are computed. The weights are then adjusted by taking a gradient step proportional to the return: $$\mathbf {w} \leftarrow \mathbf {w} + \alpha ( \sum _t \triangledown _{\mathbf {w}} \log \pi (a_t|\mathbf {h_t};\mathbf {w}) ) ( G - b ) $$ (Eq. 14) where $\alpha $ is a learning rate; $a_t$ is the action taken at timestep $t$; $\mathbf {h_t}$ is the dialog history at time $t$; $G$ is the return of the dialog; $\triangledown _{\mathbf {x}} F$ denotes the Jacobian of $F$ with respect to $\mathbf {x}$; $b$ is a baseline described below; and $\pi $ is the LSTM – i.e., a stochastic policy which outputs a distribution over $a_t$ given a dialog history $\mathbf {h_t}$, parameterized by weights $\mathbf {w}$. The baseline $b$ is an estimate of the average return of the current policy, estimated on the last 100 dialogs using weighted importance sampling. Intuitively, “better” dialogs receive a positive gradient step, making the actions selected more likely; and “worse” dialogs receive a negative gradient step, making the actions selected less likely. SL and RL correspond to different methods of updating weights, so both can be applied to the same network. However, there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set. If not, we re-run SL gradient steps on the training set until the model reproduces the training set. Note that this approach allows new training dialogs to be added at any time during RL optimization. We illustrate RL optimization on a simulated dialog task in the name dialing domain. In this system, a contact's name may have synonyms (“Michael” may also be called “Mike”), and a contact may have more than one phone number, such as “work” or “mobile”, which may in turn have synonyms like “cell” for “mobile”. This domain has a database of names and phone numbers taken from the Microsoft personnel directory, 5 entity types – firstname, nickname, lastname, phonenumber, and phonetype – and 14 actions, including 2 API call actions. Simple entity logic was coded, which retains the most recent copy of recognized entities. A simple action mask suppresses impossible actions, such as placing a phonecall before a phone number has been retrieved from the database. Example dialogs are provided in Appendix Section "Name dialing example dialogs" .
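The text specifies that the baseline b is estimated on the last 100 dialogs with weighted importance sampling, but does not give the estimator; the sketch below shows the standard weighted (self-normalized) importance sampling form, which is one natural reading of that description. The per-dialog record fields — the observed return and the summed log-probabilities of the taken actions under the current and the data-generating policies — are assumptions.

import math

def wis_baseline(recent_dialogs):
    """Weighted importance sampling estimate of the current policy's average return.

    recent_dialogs: the last (up to) 100 dialogs, each a dict with keys
    'return', 'logp_current', 'logp_behavior'."""
    weights = [math.exp(d['logp_current'] - d['logp_behavior']) for d in recent_dialogs]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * d['return'] for w, d in zip(weights, recent_dialogs)) / total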
To perform optimization, we created a simulated user. At the start of a dialog, the simulated user randomly selected a name and phone type, including names and phone types not covered by the dialog system. When speaking, the simulated user can use the canonical name or a nickname; usually answers questions but can ignore the system; can provide additional information not requested; and can give up. The simulated user was parameterized by around 10 probabilities, set by hand. We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning. For the RNN in the HCN, we again used an LSTM with AdaDelta, this time with 32 hidden units. RL policy updates are made after each dialog. Since a simulated user was employed, we did not have real user utterances, and instead relied on context features, omitting bag-of-words and utterance embedding features. We first evaluate RL by randomly initializing an LSTM, and begin RL optimization. After 10 RL updates, we freeze the policy, and run 500 dialogs with the user simulation to measure task completion. We repeat all of this for 100 runs, and report average performance. In addition, we also report results by initializing the LSTM using supervised learning on the training set, consisting of 1, 2, 5, or 10 dialogs sampled randomly from the training set, then running RL as described above. Results are in Figure 4 . Although RL alone can find a good policy, pre-training with just a handful of labeled dialogs improves learning speed dramatically. Additional experiments, not shown for space, found that ablating the action mask slowed training, agreeing with BIBREF6 . Finally, we conduct a further experiment where we sample 10 training dialogs, then add one to the training set just before RL dialog 0, 100, 200, ... , 900. Results are shown in Figure 4 . This shows that SL dialogs can be introduced as RL is in progress – i.e., that it is possible to interleave RL and SL. This is an attractive property for practical systems: if a dialog error is spotted by a developer while RL is in progress, it is natural to add a training dialog to the training set. Conclusion This paper has introduced Hybrid Code Networks for end-to-end learning of task-oriented dialog systems. HCNs support a separation of concerns where procedural knowledge and constraints can be expressed in software, and the control flow is learned. Compared to existing end-to-end approaches, HCNs afford more developer control and require less training data, at the expense of a small amount of developer effort. Results in this paper have explored three different dialog domains. On a public benchmark in the restaurants domain, HCNs exceeded performance of purely learned models. Results in two troubleshooting domains exceeded performance of a commercially deployed rule-based system. Finally, in a name-dialing domain, results from dialog simulation show that HCNs can also be optimized with a mixture of reinforcement and supervised learning. 
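A toy version of the hand-built user simulator is sketched below. The behaviors follow the description above (canonical name vs. nickname, occasionally ignoring the system, volunteering extra information, giving up), but the specific probabilities, utterance wordings, and class interface are illustrative.

import random

class SimulatedCaller:
    def __init__(self, directory, p_nickname=0.3, p_ignore=0.1,
                 p_extra=0.2, p_giveup=0.05, seed=None):
        self.rng = random.Random(seed)
        self.directory = directory   # list of (name, nickname, phonetype) tuples
        self.p_nickname, self.p_ignore = p_nickname, p_ignore
        self.p_extra, self.p_giveup = p_extra, p_giveup

    def start_dialog(self):
        name, nickname, phonetype = self.rng.choice(self.directory)
        self.goal = {'name': name, 'nickname': nickname, 'phonetype': phonetype}

    def respond(self, system_utterance):
        if self.rng.random() < self.p_giveup:
            return 'never mind'                       # give up
        if self.rng.random() < self.p_ignore:
            return 'hm?'                              # ignore the system's question
        who = (self.goal['nickname'] if self.rng.random() < self.p_nickname
               else self.goal['name'])
        reply = 'call ' + who
        if self.rng.random() < self.p_extra:          # unrequested extra information
            reply += ' on their ' + self.goal['phonetype']
        return reply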
In future work, we plan to extend HCNs by incorporating lines of existing work, such as integrating the entity extraction step into the neural network BIBREF38 , adding richer utterance embeddings BIBREF39 , and supporting text generation BIBREF14 . We will also explore using HCNs with automatic speech recognition (ASR) input, for example by forming features from n-grams of the ASR n-best results BIBREF40 . Of course, we also plan to deploy the model in a live dialog system. More broadly, HCNs are a general model for stateful control, and we would be interested to explore applications beyond dialog systems – for example, in NLP medical settings or human-robot NL interaction tasks, providing domain constraints are important for safety; and in resource-poor settings, providing domain knowledge can amplify limited data. Model implementation details The RNN was specified using Keras version 0.3.3, with back-end computation in Theano version 0.8.0.dev0 BIBREF42 , BIBREF41 . The Keras model specification is given below. The input variable obs includes all features from Figure 1 step 6 except for the previous action (step 18) and the action mask (step 6, top-most vector). # Given: # obs_size, action_size, nb_hidden g = Graph() g.add_input( name='obs', input_shape=(None, obs_size) ) g.add_input( name='prev_action', input_shape=(None, action_size) ) g.add_input( name='avail_actions', input_shape=(None, action_size) ) g.add_node( LSTM( n_hidden, return_sequences=True, activation='tanh', ), name='h1', inputs=[ 'obs', 'prev_action', 'avail_actions' ] ) g.add_node( TimeDistributedDense( action_size, activation='softmax', ), name='h2', input='h1' ) g.add_node( Activation( activation=normalize, ), name='action', inputs=['h2','avail_actions'], merge_mode='mul', create_output=True ) g.compile( optimizer=Adadelta(clipnorm=1.), sample_weight_modes={ 'action': 'temporal' }, loss={ 'action':'categorical_crossentropy' } ) Model sizes are given in Table 3 . Example dialogs are given below for each of the 5 dialog systems. For space and readability, the entity tags that appear in the user and system sides of the dialogs have been removed – for example, Call <name>Joan</name> is shown as Call Joan. bAbI Task5 example dialog good morning hello what can i help you with today i'd like to book a table with italian food i'm on it [silence] where should it be in paris how many people would be in your party for six people please which price range are looking for in a cheap price range please ok let me look into some options for you [silence] api_call italian paris six cheap [database results returned] [silence] what do you think of this option: resto_madrid_cheap_spanish_8stars no this does not work for me sure let me find an other option for you [silence] what do you think of this option: resto_madrid_cheap_spanish_6stars do you have something else sure let me find an other option for you [silence] what do you think of this option: resto_madrid_cheap_spanish_1stars it's perfect great let me do the reservation may i have the phone number of the restaurant here it is resto_madrid_cheap_spanish_1stars_phone thanks is there anything i can help you with no thank you you're welcome bAbI Task6 example dialog In the system turn marked [*], the system provided venue information without first making an API call. Hello , welcome to the Cambridge restaurant system . You can ask for restaurants by area , price range or food type . How may I help you ? 
im looking for canapes food in any price range I am sorry but there is no canapes restaurant that matches your request [*] how about european api_call european R_location R_price [database results returned] [silence] michaelhouse_cafe is a great restaurant serving european food and it is in the expensive price range food an What part of town do you have in mind? any any part of town michaelhouse_cafe serves european food . may i have the address and phone number please The phone number of michaelhouse_cafe is michaelhouse_cafe_phone thank you good bye you are welcome Task5 and Task6 context features Context features used in Section "Supervised learning evaluation I" are in Table 4 . Forgot password example dialog Some system actions have been shortened for space. How do I recover my password Okay, you don't need to remember your password, we can reset it. Would you like to try that? Yes To reset your password, please see this URL ... Was that helpful? I don't have a flashdrive Let's connect you to a person who can help you Account access example dialog Some system actions have been shortened for space. cannot sign in to my email Please check caps lock is off, and that your email is spelled correctly. Are you able to access your account now? no If you still can't sign in, please go the the following page and check that the email server is running ok... Is everything up and running? yes You need to reset your password - here's how ... Did this solve your problem? still cannot get in Did you receive the security code? yes Check you entered the right numbers. Did this solve your problem? no Have you seen a message that your profile is temporarily blocked? just your message of update on sept 15 Did you turn on two-step verification recently? whats that? Sorry, I can't help. Rephrase your question, or click below to talk to a person. Name dialing example dialogs How can I help you? Call Joe There's more than one person named Joe. Can you say their full name? Call Joe Adamson SavePhonetypeavail() Calling Joe Adamson, work PlaceCall() How can I help you? Call Sally on her home Sorry, I don't have a home number for Sally Smith. I only have a work phone. Do you want to call that number? no Oh, sorry about that. Goodbye.
Reward of 1 for successfully completing the task, discounted by the number of system turns (return G = 0.95^(T-1)), and reward of 0 when the dialog fails
cacb83e15e160d700db93c3f67c79a11281d20c5
cacb83e15e160d700db93c3f67c79a11281d20c5_0
Q: Does this paper propose a new task that others can try to improve performance on? Text: Introduction and Related Work Social norms are informal understandings that govern human behavior. They serve as the basis for our beliefs and expectations about others, and are instantiated in human-human conversation through verbal and nonverbal behaviors BIBREF0 , BIBREF1 . There is considerable body of work on modeling socially normative behavior in intelligent agent-based systems BIBREF2 , BIBREF3 , aiming to facilitate lifelike conversations with human users. Violating such social norms and impoliteness in the conversation, on the other hand, have also been demonstrated to positively affect certain aspects of the social interaction. For instance, BIBREF4 suggests impoliteness may challenge rapport in strangers but it is also an indicator of built relationship among friends. The literature on social psychology BIBREF5 shows that the task of managing interpersonal bond like rapport requires management of face which, in turn, relies on behavioral expectation, which are allied with social norms early in a relationship, and become more interpersonally determined as the relationship proceeds. BIBREF6 advanced the arguments by proposing that with the increasing knowledge of one another, more general norms may be purposely violated in order to accommodate each other's behavior expectation. Moreover, they proposed that such kind of social norm violation in fact reinforce the sense of in-group connectedness. Finally in BIBREF7 , the authors discovered the effect of temporally co-occurring smile and social norm violation that signal high interpersonal rapport. Thus, we believe that recognizing the phenomena of social norm violation in dialog can contribute important insights into understanding the interpersonal dynamics that unfold between the interlocutors. Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . For instance, BIBREF8 trained a series of bigram language models to quantify the violation of social norms in users' posts on an online community by leveraging cross-entropy value, or the deviation of word sequences predicted by the language model and their usage by the user. However, their models were trained on written-language instead of natural face-face dialog corpus. Another kind of social norm violation was examined by BIBREF10 , who developed a classifier to identify specific types of sarcasm in tweets. They utilized a bootstrapping algorithm to automatically extract lists of positive sentiment phrases and negative situation phrases from given sarcastic tweets, which were in turn leveraged to recognize sarcasm in an SVM classifier. However, no contextual information was considered in this work. BIBREF11 understood the nature of social norm violation in dialog by correlating it with associated observable verbal, vocal and visual cues. By leveraging their findings and statistical machine learning techniques, they built a computational model for automatic recognition. While they preserved short-term temporal contextual information in the model, this study avoided dealing with sparsity of the social norm violation phenomena by under-sampling the negative-class instances to make a balanced dataset. 
Motivated by theoretical rationale and prior empirical findings concerning the relationship between violation social norm and interpersonal dynamics, in the current work, we take a step towards addressing the above limitations and our contributions are two-fold: (1)We quantitatively evaluate the contribution of long-term temporal contextual information on detecting violation of social norm. (2)We incorporate this understanding to our computational model for automatic recognizing social norm violation by leveraging the power of recurrent neural network on modeling the long-term temporal dependencies. Data and Annotation Reciprocal peer tutoring data was collected from 12 American English-speaking dyads (6 friends and 6 strangers; 6 boys and 6 girls), with a mean age of 13 years, who interacted for 5 hourly sessions over as many weeks (a total of 60 sessions, and 5400 minutes of data), tutoring one another in algebra. Each session began with a period of getting to know one another, after which the first tutoring period started, followed by another small social interlude, a second tutoring period with role reversal between the tutor and tutee, and then the final social time. We assessed our automatic recognition of social norm violation against this corpus annotated for those strategies. Inter-rater reliability (IRR) for the social norm violation that computed via Krippendorff's alpha was 0.75. IRR for visual behavior was 0.89 for eye gaze, 0.75 for smile count (how many smiles occur), 0.64 for smile duration and 0.99 for head nod. Table 1 shows statistics of our corpus. Below we discuss the definition of social norm violation. Ground Truth: Social norm violations are behaviors or actions that go against general socially acceptable and stereotypical behaviors. In a first pass, we coded whether a clause was a social norm violation. In a second pass, if a social norm violation, we differentiated: (1) breaking the conversational rules of the experiment (e.g. off-task talk during tutoring session, insulting the experimenter or the experiment, etc); (2) face threatening acts (e.g. criticizing, teasing, or insulting, etc); (3) referring to one's own or the other person's social norm violations or general social norm violations (e.g. referring to the need to get back to focusing on work, or to the other person being verbally annoying etc). Social norms are culturally-specific, and so we judged a social norm violation by the impact it had on the listener (e.g. shock, specific reference to the behavior as a violation, etc.). Model and Experiment In this section, our objective was to build a computational model for detecting social norm violation. Towards this end, we first took each clause, the smallest units that can express a complete proposition, as the prediction unit. Next, inspired from the thorough analysis in BIBREF11 , we extracted verbal and visual features of the speaker that were highly correlated to social norm violation clauses, with rare threshold being set to 20. Verbal features included LIWC features BIBREF12 that helped in categorization of words used during usage of social norm violation, bigrams, part of speech bigrams and word-part of speech pairs from the speaker's clauses. Visual features included head node, smile and eye gaze information of the speaker. In total there were 3782 features per clause. Models We treated a dialog $D$ as a sequence of clauses $c_0, ... c_T$ , where $T$ was the number of clauses in the $D$ . 
Each clause $c_i$ was a tuple $([w^i_0, ...w^i_m], e_i)$, where $[w^i_0, ...w^i_m]$ were the $m$ words in the clause $c_i$, and $e_i$ was the corresponding meta information such as the relationship of the dyad and nonverbal behavior during the generation of the clause. The handcrafted feature vector of size 3782 was denoted as $f_i$, and could be viewed as a mapping function from a clause $c_i$ to a vector $f_i \in \mathbb {R}^{3782}$. Meanwhile, each clause was associated with a binary label $y_i$ that indicates the ground truth of whether $c_i$ is a violation of social norm. Eventually, the goal was to model $p(y_t \mid c_0, ... c_t)$, the conditional distribution over whether the latest clause was a violation of social norm, given the entire history of the dialog. We first trained an L2-regularized logistic regression model using the proposed verbal and visual features $f_i$ as inputs (leftmost in Figure 1). This model serves as our baseline. Past empirical results suggest two possible hypotheses for improving model performance: (1) improvement in the clause-level representation, and (2) inclusion of contextual information for prediction. Therefore, we designed Local/Global-Context models to test these hypotheses. The Local-Context recurrent neural network (RNN) models the context inside a clause at the word level by encoding word embeddings of size 300 in a clause $c_i$ sequentially using a Long Short-Term Memory (LSTM) cell of size 300. The mechanism of the LSTM is defined as: $$\left[ \begin{matrix} i_t \\ f_t \\ o_t \\ j_t \end{matrix} \right] = \left[ \begin{matrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{matrix} \right] W [h_{t-1}, x_t], \qquad c_t = f_t \odot c_{t-1} + i_t \odot j_t, \qquad h_t = o_t \odot \tanh (c_t)$$ We treated the last hidden LSTM output $h^i_m$ as the clause embedding and concatenated it with the corresponding meta information vector $e_i$. The combined vector was linearly transformed and then fed into a softmax function. Next, our Global-Context RNN investigated the influence of clause-level context in detecting social norm violation, by using LSTM cells to model the long-term temporal dependencies. For a fair comparison, we used the same hand-crafted feature $f_i$ used in the logistic regression model as the representation of clause $c_i$. As shown in Figure 1 , we first obtained a linear embedding $emb_i = W_{e} f_i + b_i$ of $f_i$, of size 150. Then $emb_i$ was used as the input to an LSTM of size 600. The hidden output $h_i$ at each time step was fed into a multilayer perceptron (MLP) with 1 hidden layer of size 100. We applied 50% dropout regularization BIBREF13 at the input/output of the LSTM and the MLP hidden layer for better generalization. Finally, the model was optimized w.r.t. the cross-entropy loss. A further challenge was the length of each dialog. The average number of clauses in a training dialog was 817.8, which made it computationally intractable to backpropagate through the entire sequence. Therefore, truncated backpropagation through time (TBPTT) BIBREF14 was used by unrolling the network for 20 steps. The final LSTM state of each batch was fed into the next batch as the initial state. Experiment Result We observed that the Global-Context RNN with 2 LSTM layers outperformed the other models, as shown in Table 2. First, by comparing the logistic regression model with our best model, the result indicates the strong predictive power of long-term temporal contextual information on the task of detecting social norm violation in dialog.
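A sketch of the Global-Context RNN in modern Keras is given below. The layer sizes (150-dimensional linear clause embedding, 600-unit LSTM, 100-unit MLP hidden layer, 50% dropout, two LSTM layers for the best configuration) follow the text; the framework, optimizer, hidden-layer activation, and the handling of truncated backpropagation (feeding 20-step chunks with carried-over LSTM state, not shown) are assumptions rather than the authors' implementation.

from tensorflow.keras import layers, models

def build_global_context_rnn(feat_dim=3782, emb_dim=150, lstm_units=600,
                             mlp_hidden=100, num_lstm_layers=2):
    clause_feats = layers.Input(shape=(None, feat_dim))              # (batch, time, 3782)
    x = layers.TimeDistributed(layers.Dense(emb_dim))(clause_feats)  # linear clause embedding emb_i
    x = layers.Dropout(0.5)(x)                                       # dropout at LSTM input
    for _ in range(num_lstm_layers):
        x = layers.LSTM(lstm_units, return_sequences=True)(x)
    x = layers.Dropout(0.5)(x)                                       # dropout at LSTM output
    x = layers.TimeDistributed(layers.Dense(mlp_hidden, activation='relu'))(x)
    x = layers.Dropout(0.5)(x)                                       # dropout at MLP hidden layer
    out = layers.TimeDistributed(layers.Dense(1, activation='sigmoid'))(x)
    model = models.Model(clause_feats, out)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model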
On the other hand, the Local-Context RNN model did not achieve a significant improvement over logistic regression in overall performance, which suggests that the clause representation learned during training is less effective than the hand-crafted features derived from linguistic knowledge. One potential reason for this result is that the training set is too small to learn a generic clause representation. Conclusion and Future Work In this work, we began by stating our interest in quantitatively measuring the contribution of long-term temporal contextual information to detecting social norm violation in discourse. We then leveraged the power of recurrent neural networks for modeling long-term temporal dependencies. Inspired by hand-crafted multimodal features derived from qualitative and quantitative analysis in earlier empirical studies, we developed a Global-Context RNN model to detect social norm violation in human dialog. This model will play a central role in building socially-aware agents capable of understanding the interpersonal dynamics that unfold in an interaction, which is, in turn, essential for better adapting to the interpersonal relationship felt by their users. To serve this goal, our future work will build a generative model of social norm violation, which will move agents towards more realistic understanding, reasoning, and generation of human behavior. We begin to model those aspects of human-human interaction that are not only helpful to human-agent collaboration, but also sustain aspects of what we cherish most in being human.
No, there has been previous work on recognizing social norm violation.
33957fde72f9082a5c11844e7c47c58f8029c4ae
33957fde72f9082a5c11844e7c47c58f8029c4ae_0
Q: What knowledge base do they use? Text: Introduction Semantic parsing is the task of mapping a phrase in natural language onto a formal query in some fixed schema, which can then be executed against a knowledge base (KB) BIBREF0 , BIBREF1 . For example, the phrase “Who is the president of the United States?” might be mapped onto the query $\lambda (x).$ $\textsc {/government/president\_of}$ ( $x$ , $\textsc {USA}$ ), which, when executed against Freebase BIBREF2 , returns $\textsc {Barack Obama}$ . By mapping phrases to executable statements, semantic parsers can leverage large, curated sources of knowledge to answer questions BIBREF3 . This benefit comes with an inherent limitation, however—semantic parsers can only produce executable statements within their manually-produced schema. There is no query against Freebase that can answer questions like “Who are the Democratic front-runners in the US election?”, as Freebase does not encode information about front-runners. Semantic parsers trained for Freebase fail on these kinds of questions. To overcome this limitation, recent work has proposed methods for open vocabulary semantic parsing, which replace a formal KB with a probabilistic database learned from a text corpus. In these methods, language is mapped onto queries with predicates derived directly from the text itself BIBREF4 , BIBREF5 . For instance, the question above might be mapped to $\lambda (x).$ $\textit {president\_of}$ ( $x$ , $\textsc {USA}$ ). This query is not executable against any KB, however, and so open vocabulary semantic parsers must learn execution models for the predicates found in the text. They do this with a distributional approach similar to word embedding methods, giving them broad coverage, but lacking access to the large, curated KBs available to traditional semantic parsers. Prior work in semantic parsing, then, has either had direct access to the information in a knowledge base, or broad coverage over all of natural language using the information in a large corpus, but not both. In this work, we show how to combine these two approaches by incorporating KB information into open vocabulary semantic parsing models. Our key insight is that formal KB queries can be converted into features that can be added to the learned execution models of open vocabulary semantic parsers. This conversion allows open vocabulary models to use the KB fact $\textsc {/government/president\_of}$ ( $\textsc {BarackObama}$ , $\textsc {USA}$ ) when scoring $\textit {president\_of}$ ( $\textsc {BarackObama}$ , $\textsc {USA}$ ), without requiring the model to map the language onto a single formal statement. Crucially, this featurization also allows the model to use these KB facts even when they only provide partial information about the language being modeled. For example, knowing that an entity is a $\textsc {politician}$ is very helpful information for deciding whether that entity is a front-runner. Our approach, outlined in Figure 1 , effectively learns the meaning of a word as a distributional vector plus a weighted combination of Freebase queries, a considerably more expressive representation than those used by prior work. 
While this combination is the main contribution of our work, we also present some small improvements that allow open vocabulary semantic parsing models to make better use of KB information when it is available: improving the logical forms generated by the semantic parser, and employing a simple technique from related work for generating candidate entities from the KB. We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%. Open vocabulary semantic parsing In this section, we briefly describe the current state-of-the-art model for open vocabulary semantic parsing, introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary. Instead of mapping text to Freebase queries, as done by a traditional semantic parser, their method parses text to a surface logical form with predicates derived directly from the words in the text (see Figure 1 ). Next, a distribution over denotations for each predicate is learned using a matrix factorization approach similar to that of Riedel et al. riedel-2013-mf-universal-schema. This distribution is concisely represented using a probabilistic database, which also enables efficient probabilistic execution of logical form queries. The matrix factorization has two sets of parameters: each category or relation has a learned $k$ -dimensional embedding $\theta $ , and each entity or entity pair has a learned $k$ -dimensional embedding $\phi $ . The probability assigned to a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is given by: $ p(c(e)) &= \sigma ( \theta _c^T \phi _e ) \\ p(r(e_1, e_2)) &= \sigma ( \theta _r^T \phi _{(e_1, e_2)} ) $ The probability of a predicate instance is the sigmoided inner product of the corresponding predicate and entity embeddings. Predicates with nearby embeddings will have similar distributions over the entities in their denotation. The parameters $\theta $ and $\phi $ are learned using a query ranking objective that optimizes them to rank entities observed in the denotation of a logical form above unobserved entities. Given the trained predicate and entity parameters, the system is capable of efficiently computing the marginal probability that an entity is an element of a logical form's denotation using approximate inference algorithms for probabilistic databases. The model presented in this section is purely distributional, with predicate and entity models that draw only on co-occurrence information found in a corpus. In the following sections, we show how to augment this model with information contained in large, curated KBs such as Freebase. Converting Freebase queries to features Our key insight is that the executable queries used by traditional semantic parsers can be converted into features that provide KB information to the execution models of open vocabulary semantic parsers. Here we show how this is done. Traditional semantic parsers map words onto distributions over executable queries, select one to execute, and return sets of entities or entity pairs from a KB as a result. Instead of executing a single query, we can simply execute all possible queries and use an entity's (or entity pair's) membership in each set as a feature in our predicate models. 
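As a rough illustration of using query membership as features, the toy sketch below "executes" a few hand-made queries as entity sets and featurizes an entity by its membership in each set; the query names and denotations are invented for illustration and are not actual Freebase queries.

```python
# Illustrative only: a handful of "executed queries", each denoting a set of
# entities, and a featurizer that records membership in each set.
query_denotations = {
    "type:politician": {"BarackObama", "AngelaMerkel"},
    "/government/president_of(x, USA)": {"BarackObama"},
    "type:city": {"Rome"},
}

def membership_features(entity):
    """Binary feature vector: 1 if the entity is in the query's denotation."""
    return {query: int(entity in denotation)
            for query, denotation in query_denotations.items()}

print(membership_features("BarackObama"))
# {'type:politician': 1, '/government/president_of(x, USA)': 1, 'type:city': 0}
```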
There are two problems with this approach: (1) the set of all possible queries is intractably large, so we need a mechanism similar to a semantic parser's lexicon to select a small set of queries for each word; and (2) executing hundreds or thousands of queries at runtime for each predicate and entity is not computationally tractable. To solve these problems, we use a graph-based technique called subgraph feature extraction (SFE) BIBREF6 . Subgraph feature extraction SFE is a technique for generating feature matrices over node pairs in graphs with labeled edges. When the graph corresponds to a formal KB such as Freebase, the features generated by SFE are isomorphic to statements in the KB schema BIBREF7 . This means that we can use SFE to generate a feature vector for each entity (or entity pair) which succinctly captures the set of all statements in whose denotations the entity (or entity pair) appears. Using this feature vector as part of the semantic parser's entity models solves problem (2) above, and performing feature selection for each predicate solves problem (1). Some example features extracted by SFE are shown in Figure 2 . For entity pairs, these features include the sequence of edges (or paths) connecting the nodes corresponding to the entity pair. For entities, these features include the set of paths connected to the node, optionally including the node at the end of the path. Note the correspondence between these features and Freebase queries: the path $\langle $ $\textsc {designed}$ $\rightarrow $ $\textsc {located\_in}$ $\rangle $ can be executed as a query against Freebase, returning a set of (architect, location) entity pairs, where the architect designed a structure in the location. ( $\textsc {Palladio}$ , $\textsc {Italy}$ ) is one such entity pair, so this pair has a feature value of 1 for this query. Feature selection The feature vectors produced by SFE contain tens of millions of possible formal statements. Out of these tens of millions of formal statements, only a handful represent relevant Freebase queries for any particular predicate. We therefore select a small number of statements to consider for each learned predicate in the open vocabulary semantic parser. We select features by first summing the entity and entity pair feature vectors seen with each predicate in the training data. For example, the phrase “Italian architect Andrea Palladio” is considered a positive training example for the predicate instances $\textit {architect}(\textsc {Palladio})$ and $\textit {architect\_N/N}(\textsc {Italy}, \textsc {Palladio})$ . We add the feature vectors for $\textsc {Palladio}$ and ( $\textsc {Italy}$ , $\textsc {Palladio}$ ) to the feature counts for the predicates $\textit {architect}$ and $\textit {architect\_N/N}$ , respectively. This gives a set of counts $\textsc {count}(\pi )$ , $\textsc {count}(f)$ , and $\textsc {count}(\pi \wedge f)$ , for each predicate $\pi $ and feature $f$ . The features are then ranked by PMI for each predicate by computing $\frac{\textsc {count}(\pi \wedge f)}{\textsc {count}(\pi )\,\textsc {count}(f)}$ . 
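A minimal sketch of this feature-selection step, assuming the PMI-style ranking given above (the count-based data structures, the threshold, and the function names are illustrative, not the authors' code):

```python
from collections import Counter, defaultdict

def rank_features_by_pmi(training_instances, min_count=2):
    """training_instances: iterable of (predicate, features) pairs, where
    features is the set of formal (SFE) features that fire for the entity
    or entity pair in that instance.  Returns, per predicate, features
    ranked by the PMI-style score count(p ^ f) / (count(p) * count(f))."""
    pred_counts = Counter()
    feat_counts = Counter()
    joint_counts = defaultdict(Counter)
    for predicate, features in training_instances:
        pred_counts[predicate] += 1
        for f in features:
            feat_counts[f] += 1
            joint_counts[predicate][f] += 1

    ranked = {}
    for predicate, joint in joint_counts.items():
        scores = {
            f: joint[f] / (pred_counts[predicate] * feat_counts[f])
            for f in joint
            if feat_counts[f] >= min_count  # drop low-frequency features
        }
        ranked[predicate] = sorted(scores, key=scores.get, reverse=True)
    return ranked
```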
After removing low-frequency features, we pick a fixed number of features with the highest PMI values for each predicate to use in our model. Combined predicate models Here we present our approach to incorporating KB information into open vocabulary semantic parsers. Having described how we use SFE to generate features corresponding to statements in a formal schema, adding these features to the models described in Section "Subgraph feature extraction" is straightforward. We saw in Section "Subgraph feature extraction" that open vocabulary semantic parsers learn distributional vectors for each category, relation, entity and entity pair. We augment these vectors with the feature vectors described in Section "Converting Freebase queries to features" . Each category and relation receives a weight $\omega $ for each selected Freebase query, and each entity and entity pair has an associated feature vector $\psi $ . The truth probability of a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is thus given by: $ p(c(e)) &= \sigma ( \theta _c^T \phi _e + \omega _c^T \psi _c(e)) \\ p(r(e_1, e_2)) &= \sigma ( \theta _r^T \phi _{(e_1, e_2)} + \omega _r^T \psi _r(e_1, e_2) ) $ In these equations, $\theta $ and $\phi $ are learned predicate and entity embeddings, as described in Section "Subgraph feature extraction" . The second term in the sum represents our new features and their learned weights. $\psi _c(e)$ and $\psi _r(e_1, e_2)$ are SFE feature vectors for each entity and entity pair; a different set of features is chosen for each predicate $c$ and $r$ , as described in Section "Making full use of KB information" . $\omega _c$ and $\omega _r$ are learned weights for these features. In our model, there are now three sets of parameters to be learned: (1) $\theta $ , low-dimensional distributional vectors trained for each predicate; (2) $\phi $ , low-dimensional distributional vectors trained for each entity and entity pair; and (3) $\omega $ , weights associated with the selected formal SFE features for each predicate. All of these parameters are optimized jointly, using the same method described in Section "Subgraph feature extraction" . Note here that each SFE feature corresponds to a query over the formal schema, defining a set of entities (or entity pairs). The associated feature weight measures the likelihood that an entity in this set is also in the denotation of the surface predicate. Our models include many such features for each surface predicate, effectively mapping each surface predicate onto a weighted combination of Freebase queries. Making full use of KB information In addition to improving predicate models, as just described, adding KB information to open vocabulary semantic parsers suggests two other simple improvements: (1) using more specific logical forms, and (2) generating candidate entities from the KB. Logical form generation Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary generate logical forms from natural language statements by computing a syntactic CCG parse, then applying a collection of rules to produce logical forms. However, their logical form analyses do not model noun-mediated relations well. For example, given the phrase “Italian architect Andrea Palladio,” their system's logical form would include the relation $\textit {N/N}(\textsc {Italy}, \textsc {Palladio})$ . 
Here, the $\textit {N/N}$ predicate represents a generic noun modifier relation; however, this relation is too vague for the predicate model to accurately learn its denotation. A similar problem occurs with prepositions and possessives, e.g., it is similarly hard to learn the denotation of the predicate $\textit {of}$ . Our system improves the analysis of noun-mediated relations by simply including the noun in the predicate name. In the architect example above, our system produces the relation $\textit {architect\_N/N}$ . It does this by concatenating all intervening noun modifiers between two entity mentions and including them in the predicate name; for example, “Illinois attorney general Lisa Madigan” produces the predicate $\textit {attorney\_general\_N/N}$ . We similarly improve the analyses of prepositions and possessives to include the head noun. For example, “Barack Obama, president of the U.S.” produces the predicate instance $\textit {president\_of}(\textsc {Barack Obama}, \textsc {U.S.})$ , and “Rome, Italy's capital” produces the predicate $\textit {^{\prime }s\_capital}$ . This process generates more specific predicates that more closely align with the KB facts that we make available to the predicate models. Candidate entity generation A key benefit of our predicate models is that they are able to assign scores to entity pairs that were never seen in the training data. Distributional models have no learned vectors for these entity pairs and therefore assume $p(r(e_1,e_2)) = 0$ for unseen entity pairs $(e_1,e_2)$ . This limits the recall of these models when applied to question answering, as entity pairs will not have been observed for many correct, but rare entity answers. In contrast, because our models have access to a large KB, the formal component of the model can always give a score to any entity pair in the KB. This allows our model to considerably improve question answering performance on rare entities. It would be computationally intractable to consider all Freebase entities as answers to queries, and so we use a simple candidate entity generation technique to consider only a small set of likely entities for a given query. We first find all entities in the query, and consider as candidates any entity that has either been seen at training time with a query entity or is directly connected to a query entity in Freebase. This candidate entity generation is common practice for recent question answering models over Freebase BIBREF8 , though, for the reasons stated above, it has not been used previously in open vocabulary semantic parsing models. Evaluation We evaluate our open-vocabulary semantic parser on a fill-in-the-blank natural language query task. Each test example is a natural language phrase containing at least two Freebase entities, one of which is held out. The system must propose a ranked list of Freebase entities to fill in the blank left by the held out entity, and the predicted entities are then judged manually for correctness. We compare our proposed models, which combine distributional and formal elements, with a purely distributional baseline from prior work. All of the data and code used in these experiments is available at http://github.com/allenai/open_vocab_semparse. Data Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. 
In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations. We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries. Models We compare three models in our experiments: (1) the distributional model of Krishnamurthy and Mitchell, described in Section "Subgraph feature extraction" , which is the current state-of-the-art method for open vocabulary semantic parsing; (2) a formal model (new to this work), where the distributional parameters $\theta $ and $\phi $ in Section "Combined predicate models" are fixed at zero; and (3) the combined model described in Section "Combined predicate models" (also new to this work). In each of these models, we used vectors of size 300 for all embeddings. Except where noted, all experiments use our modified logical forms (Section "Evaluation" ) and our entity proposal mechanism (Section "Related work" ). We do not compare against any traditional semantic parsers, as more than half of the questions in our dataset are not answerable by Freebase queries, and so are out of scope for those parsers BIBREF5 . Methodology Given a fill-in-the-blank query such as “Italian architect ”, each system produces a ranked list of 100 candidate entities. To compare the output of the systems, we follow a pooled evaluation protocol commonly used in relation extraction and information retrieval BIBREF11 , BIBREF12 . We take the top 30 predictions from each system and manually annotate whether they are correct, and use those annotations to compute the average precision (AP) and reciprocal rank (RR) of each system on the query. Average precision is defined as $\frac{1}{m}\sum ^m_{k=1} \mathrm {Prec}(k) \times \mathrm {Correct}(k)$ , where $\mathrm {Prec}(k)$ is the precision at rank $k$ , $\mathrm {Correct}(k)$ is an indicator function for whether the $k$ th answer is correct, and $m$ is number of returned answers (up to 100 in this evaluation). AP is equivalent to calculating the area under a precision-recall curve. Reciprocal rank is computed by first finding the rank $r$ of the first correct prediction made by a system. Reciprocal rank is then $\frac{1}{r}$ , ranging from 1 (if the first prediction is correct) to 0 (if there is no correct answer returned). In the tables below we report mean average precision (MAP) and mean reciprocal rank (MRR), averaged over all of the queries in the test set. We also report a weighted version of MAP, where the AP of each query is scaled by the number of annotated correct answers to the query (shown as W-MAP in the tables for space considerations). 
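For concreteness, the two per-query metrics can be computed directly from the definitions above; the sketch below assumes a list of correctness judgments over the ranked answers and leaves pooling and annotation details aside.

```python
def average_precision(correct):
    """correct: list of booleans over the ranked answers (up to 100).
    AP = (1/m) * sum_k Prec(k) * Correct(k), with m the number of
    returned answers, as defined in the text."""
    m = len(correct)
    if m == 0:
        return 0.0
    hits, total = 0, 0.0
    for k, is_correct in enumerate(correct, start=1):
        if is_correct:
            hits += 1
            total += hits / k          # Prec(k) * Correct(k)
    return total / m

def reciprocal_rank(correct):
    """1/r for the first correct prediction, 0 if none is correct."""
    for k, is_correct in enumerate(correct, start=1):
        if is_correct:
            return 1.0 / k
    return 0.0

# MAP and MRR are means of these per-query values; weighted MAP additionally
# scales each query's AP by its number of annotated correct answers.
```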
Results We first show the effect of the new logical forms introduced in Section "Evaluation" . As can be seen in Table 1 , with our improved logical forms, all models are better able to capture the semantics of language. This improvement is most pronounced in the formal models, which have more capacity to get specific features from Freebase with the new logical forms. As our logical forms give all models better performance, the remaining experiments we present all use these logical forms. We next show the improvement gained by using the simple candidate entity generation outlined in Section "Related work" . By simply appending the list of connected entities in Freebase to the end of the rankings returned by the distributional model, MAP improves by 40% (see Table 2 ). The connectedness of an entity pair in Freebase is very informative, especially for rare entities that are not seen together during training. Table 3 shows a comparison between the semantic parsing models on the development set. As can be seen, the combined model significantly improves performance over prior work, giving a relative gain in weighted MAP of 29%. Table 4 shows that these improvements are consistent on the final test set, as well. The performance improvement seen by the combined model is actually larger on this set, with gains on our metrics ranging from 50% to 87%. On both of these datasets, the difference in MAP between the combined model and the distributional model is statistically significant (by a paired permutation test, $p < 0.05$ ). The differences between the combined model and the formal model, and between the formal model and the distributional model, are not statistically significant, as each method has certain kinds of queries that it performs well on. Only the combined model is able to consistently outperform the distributional model on all kinds of queries. Discussion Our model tends to outperform the distributional model on queries containing predicates with exact or partial correlates in Freebase. For example, our model obtains nearly perfect average precision on the queries “French newspaper ” and “Israeli prime minister ,” both of which can be exactly expressed in Freebase. The top features for $\textit {newspaper}$ ( $x$ ) all indicate that $x$ has type $\textsc {newspaper}$ in Freebase, and the top features for $\textit {newspaper\_N/N}$ ( $x$ , $y$ ) indicate that $y$ is a newspaper, and that $x$ is either the circulation area of $y$ or the language of $y$ . The model also performs well on queries with partial Freebase correlates, such as “Microsoft head honcho ”, “The United States, 's closest ally”, and “Patriots linebacker ,” although with somewhat lower average precision. The high weight features in these cases tend to provide useful hints, even though there is no direct correlate; for example, the model learns that “honchos” are people, and that they tend to be CEOs and film producers. There are also some areas where our model can be improved. First, in some cases, the edge sequence features used by the model are not expressive enough to identify the correct relation in Freebase. An example of this problem is the “linebacker” example above, where the features for $\textit {linebacker\_N/N}$ can capture which athletes play for which teams, but not the positions of those athletes. Second, our model can under-perform on predicates with no close mapping to Freebase. 
An example where this problem occurs is the query “ is a NASA mission.” Third, there remains room to further improve the logical forms produced by the semantic parser, specifically for multi-word expressions. One problem occurs with multi-word noun modifiers, e.g., “Vice president Al Gore” is mapped to $\textit {vice}(\textsc {Al Gore}) \wedge \textit {president}(\textsc {Al Gore})$ . Another problem is that there is no back-off with multi-word relations. For example, the predicate $\textit {head\_honcho\_N/N}$ was never seen in the training data, so it is replaced with $\textit {unknown}$ ; however, it would be better to replace it with $\textit {honcho\_N/N}$ , which was seen in the training data. Finally, although using connected entities in Freebase as additional candidates during inference is helpful, it often over- or under-generates candidates. A more tailored, per-query search process could improve performance. Related work There is an extensive literature on building semantic parsers to answer questions against a KB BIBREF1 , BIBREF3 , BIBREF13 , BIBREF14 . Some of this work has used surface (or ungrounded) logical forms as an intermediate representation, similar to our work BIBREF15 , BIBREF16 , BIBREF8 , BIBREF17 . The main difference between our work and these techniques is that they map surface logical forms to a single executable Freebase query, while we learn execution models for the surface logical forms directly, using a weighted combination of Freebase queries as part of the model. None of these prior works can assign meaning to language that is not directly representable in the KB schema. Choi, Kwiatkowski and Zettlemoyer choi-2015-semantic-parsing-partial-ontologies presented an information extraction system that performs a semantic parse of open-domain text, recognizing when a predicate cannot be mapped to Freebase. However, while they recognize when a predicate is not mappable to Freebase, they do not attempt to learn execution models for those predicates, nor can they answer questions using those predicates. Yao and Van Durme yao-2014-info-extraction-freebase-qa and Dong et al. dong-2015-freebase-qa-mccnn proposed question answering models that use similar features to those used in this work. However, they did not produce semantic parses of language, instead using methods that are non-compositional and do not permit complex queries. Finally, learning probabilistic databases in an open vocabulary semantic parser has a strong connection with KB completion. In addition to SFE BIBREF6 , our work draws on work on embedding the entities and relations in a KB BIBREF12 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , as well as work on graph-based methods for reasoning with KBs BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Our combination of embedding methods with graph-based methods in this paper is suggestive of how one could combine the two in methods for KB completion. Initial work exploring this direction has already been done by Toutanova and Chen toutanova-2015-observed-vs-latent-kbc. Conclusion Prior work in semantic parsing has either leveraged large knowledge bases to answer questions, or used distributional techniques to gain broad coverage over all of natural language. In this paper, we have shown how to gain both of these benefits by converting the queries generated by traditional semantic parsers into features which are then used in open vocabulary semantic parsing models. 
We presented a technique to do this conversion in a way that is scalable using graph-based feature extraction methods. Our combined model achieved relative gains of over 50% in mean average precision and mean reciprocal rank versus a purely distributional approach. We also introduced a better mapping from surface text to logical forms, and a simple method for using a KB to find candidate entities during inference. Taken together, the methods introduced in this paper improved mean average precision on our task from .163 to .370, a 127% relative improvement over prior work. This work suggests a new direction for semantic parsing research. Existing semantic parsers map language to a single KB query, an approach that successfully leverages a KB's predicate instances, but is fundamentally limited by its schema. In contrast, our approach maps language to a weighted combination of queries plus a distributional component; this approach is capable of representing a much broader class of concepts while still using the KB when it is helpful. Furthermore, it is capable of using the KB even when the meaning of the language cannot be exactly represented by a KB predicate, which is a common occurrence. We believe that this kind of approach could significantly expand the applicability of semantic parsing techniques to more complex domains where the assumptions of traditional techniques are too limiting. We are actively exploring applying these techniques to science question answering BIBREF26 , for example, where existing KBs provide only partial coverage of the questions.
Freebase
1c4cd22d6eaefffd47b93c2124f6779a06d2d9e1
1c4cd22d6eaefffd47b93c2124f6779a06d2d9e1_0
Q: How big is their dataset? Text: Introduction Semantic parsing is the task of mapping a phrase in natural language onto a formal query in some fixed schema, which can then be executed against a knowledge base (KB) BIBREF0 , BIBREF1 . For example, the phrase “Who is the president of the United States?” might be mapped onto the query $\lambda (x).$ $\textsc {/government/president\_of}$ ( $x$ , $\textsc {USA}$ ), which, when executed against Freebase BIBREF2 , returns $\textsc {Barack Obama}$ . By mapping phrases to executable statements, semantic parsers can leverage large, curated sources of knowledge to answer questions BIBREF3 . This benefit comes with an inherent limitation, however—semantic parsers can only produce executable statements within their manually-produced schema. There is no query against Freebase that can answer questions like “Who are the Democratic front-runners in the US election?”, as Freebase does not encode information about front-runners. Semantic parsers trained for Freebase fail on these kinds of questions. To overcome this limitation, recent work has proposed methods for open vocabulary semantic parsing, which replace a formal KB with a probabilistic database learned from a text corpus. In these methods, language is mapped onto queries with predicates derived directly from the text itself BIBREF4 , BIBREF5 . For instance, the question above might be mapped to $\lambda (x).$ $\textit {president\_of}$ ( $x$ , $\textsc {USA}$ ). This query is not executable against any KB, however, and so open vocabulary semantic parsers must learn execution models for the predicates found in the text. They do this with a distributional approach similar to word embedding methods, giving them broad coverage, but lacking access to the large, curated KBs available to traditional semantic parsers. Prior work in semantic parsing, then, has either had direct access to the information in a knowledge base, or broad coverage over all of natural language using the information in a large corpus, but not both. In this work, we show how to combine these two approaches by incorporating KB information into open vocabulary semantic parsing models. Our key insight is that formal KB queries can be converted into features that can be added to the learned execution models of open vocabulary semantic parsers. This conversion allows open vocabulary models to use the KB fact $\textsc {/government/president\_of}$ ( $\textsc {BarackObama}$ , $\textsc {USA}$ ) when scoring $\textit {president\_of}$ ( $\textsc {BarackObama}$ , $\textsc {USA}$ ), without requiring the model to map the language onto a single formal statement. Crucially, this featurization also allows the model to use these KB facts even when they only provide partial information about the language being modeled. For example, knowing that an entity is a $\textsc {politician}$ is very helpful information for deciding whether that entity is a front-runner. Our approach, outlined in Figure 1 , effectively learns the meaning of a word as a distributional vector plus a weighted combination of Freebase queries, a considerably more expressive representation than those used by prior work. While this combination is the main contribution of our work, we also present some small improvements that allow open vocabulary semantic parsing models to make better use of KB information when it is available: improving the logical forms generated by the semantic parser, and employing a simple technique from related work for generating candidate entities from the KB. 
We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%. Open vocabulary semantic parsing In this section, we briefly describe the current state-of-the-art model for open vocabulary semantic parsing, introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary. Instead of mapping text to Freebase queries, as done by a traditional semantic parser, their method parses text to a surface logical form with predicates derived directly from the words in the text (see Figure 1 ). Next, a distribution over denotations for each predicate is learned using a matrix factorization approach similar to that of Riedel et al. riedel-2013-mf-universal-schema. This distribution is concisely represented using a probabilistic database, which also enables efficient probabilistic execution of logical form queries. The matrix factorization has two sets of parameters: each category or relation has a learned $k$ -dimensional embedding $\theta $ , and each entity or entity pair has a learned $k$ -dimensional embedding $\phi $ . The probability assigned to a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is given by: $ p(c(e)) &= \sigma ( \theta _c^T \phi _e ) \\ p(r(e_1, e_2)) &= \sigma ( \theta _r^T \phi _{(e_1, e_2)} ) $ The probability of a predicate instance is the sigmoided inner product of the corresponding predicate and entity embeddings. Predicates with nearby embeddings will have similar distributions over the entities in their denotation. The parameters $\theta $ and $\phi $ are learned using a query ranking objective that optimizes them to rank entities observed in the denotation of a logical form above unobserved entities. Given the trained predicate and entity parameters, the system is capable of efficiently computing the marginal probability that an entity is an element of a logical form's denotation using approximate inference algorithms for probabilistic databases. The model presented in this section is purely distributional, with predicate and entity models that draw only on co-occurrence information found in a corpus. In the following sections, we show how to augment this model with information contained in large, curated KBs such as Freebase. Converting Freebase queries to features Our key insight is that the executable queries used by traditional semantic parsers can be converted into features that provide KB information to the execution models of open vocabulary semantic parsers. Here we show how this is done. Traditional semantic parsers map words onto distributions over executable queries, select one to execute, and return sets of entities or entity pairs from a KB as a result. Instead of executing a single query, we can simply execute all possible queries and use an entity's (or entity pair's) membership in each set as a feature in our predicate models. There are two problems with this approach: (1) the set of all possible queries is intractably large, so we need a mechanism similar to a semantic parser's lexicon to select a small set of queries for each word; and (2) executing hundreds or thousands of queries at runtime for each predicate and entity is not computationally tractable. To solve these problems, we use a graph-based technique called subgraph feature extraction (SFE) BIBREF6 . 
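To give a concrete picture of the kind of features this technique produces before the detailed description that follows, here is a toy sketch that enumerates edge-label sequences (paths) between an entity pair in a tiny labeled graph; the graph, the intermediate node, and the depth-limited search are illustrative assumptions, not the actual SFE algorithm.

```python
# Toy KB graph: node -> list of (edge_label, neighbor) pairs.
graph = {
    "Palladio": [("designed", "Villa_Capra")],
    "Villa_Capra": [("located_in", "Italy")],
    "Italy": [],
}

def path_features(graph, source, target, max_len=3):
    """Enumerate edge-label sequences along simple paths from source to
    target, e.g. ('designed', 'located_in') for (Palladio, Italy)."""
    features = []

    def walk(node, labels, visited):
        if node == target and labels:
            features.append(tuple(labels))
            return
        if len(labels) == max_len:
            return
        for label, nxt in graph.get(node, []):
            if nxt not in visited:
                walk(nxt, labels + [label], visited | {nxt})

    walk(source, [], {source})
    return features

print(path_features(graph, "Palladio", "Italy"))  # [('designed', 'located_in')]
```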
Subgraph feature extraction SFE is a technique for generating feature matrices over node pairs in graphs with labeled edges. When the graph corresponds to a formal KB such as Freebase, the features generated by SFE are isomorphic to statements in the KB schema BIBREF7 . This means that we can use SFE to generate a feature vector for each entity (or entity pair) which succinctly captures the set of all statements in whose denotations the entity (or entity pair) appears. Using this feature vector as part of the semantic parser's entity models solves problem (2) above, and performing feature selection for each predicate solves problem (1). Some example features extracted by SFE are shown in Figure 2 . For entity pairs, these features include the sequence of edges (or paths) connecting the nodes corresponding to the entity pair. For entities, these features include the set of paths connected to the node, optionally including the node at the end of the path. Note the correspondence between these features and Freebase queries: the path $\langle $ $\textsc {designed}$ $\rightarrow $ $\textsc {located\_in}$ $\rangle $ can be executed as a query against Freebase, returning a set of (architect, location) entity pairs, where the architect designed a structure in the location. ( $\textsc {Palladio}$ , $\textsc {Italy}$ ) is one such entity pair, so this pair has a feature value of 1 for this query. Feature selection The feature vectors produced by SFE contain tens of millions of possible formal statements. Out of these tens of millions of formal statements, only a handful represent relevant Freebase queries for any particular predicate. We therefore select a small number of statements to consider for each learned predicate in the open vocabulary semantic parser. We select features by first summing the entity and entity pair feature vectors seen with each predicate in the training data. For example, the phrase “Italian architect Andrea Palladio” is considered a positive training example for the predicate instances $\textit {architect}(\textsc {Palladio})$ and $\textit {architect\_N/N}(\textsc {Italy}, \textsc {Palladio})$ . We add the feature vectors for $\textsc {Palladio}$ and ( $\textsc {Italy}$ , $\textsc {Palladio}$ ) to the feature counts for the predicates $\textit {architect}$ and $\textit {architect\_N/N}$ , respectively. This gives a set of counts $\textsc {count}(\pi )$ , $\textsc {count}(f)$ , and $\textsc {count}(\pi \wedge f)$ , for each predicate $\pi $ and feature $f$ . The features are then ranked by PMI for each predicate by computing $\frac{\textsc {count}(\pi \wedge f)}{\textsc {count}(\pi )\,\textsc {count}(f)}$ . After removing low-frequency features, we pick a fixed number of features with the highest PMI values for each predicate to use in our model. Combined predicate models Here we present our approach to incorporating KB information into open vocabulary semantic parsers. Having described how we use SFE to generate features corresponding to statements in a formal schema, adding these features to the models described in Section "Subgraph feature extraction" is straightforward. 
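A minimal sketch of the kind of combined score this produces, anticipating the equations given next: a distributional inner product plus a weighted sum over the predicate's selected KB-query features (the dimensions, random values, and number of selected features below are illustrative only).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combined_prob(theta_p, phi_e, omega_p, psi_e):
    """p = sigmoid(theta_p . phi_e + omega_p . psi_e): a distributional term
    plus a weighted sum over the predicate's selected KB-query features."""
    return sigmoid(theta_p @ phi_e + omega_p @ psi_e)

k, n_features = 300, 100                      # embedding size; selected KB queries per predicate (assumed)
rng = np.random.default_rng(0)
theta_p = rng.normal(size=k)                  # distributional predicate vector
phi_e = rng.normal(size=k)                    # distributional entity(-pair) vector
omega_p = rng.normal(size=n_features)         # learned weights over selected Freebase queries
psi_e = rng.integers(0, 2, size=n_features)   # binary membership features from the KB

print(combined_prob(theta_p, phi_e, omega_p, psi_e))
```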
We saw in Section "Subgraph feature extraction" that open vocabulary semantic parsers learn distributional vectors for each category, relation, entity and entity pair. We augment these vectors with the feature vectors described in Section "Converting Freebase queries to features" . Each category and relation receives a weight $\omega $ for each selected Freebase query, and each entity and entity pair has an associated feature vector $\psi $ . The truth probability of a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is thus given by: $ p(c(e)) &= \sigma ( \theta _c^T \phi _e + \omega _c^T \psi _c(e)) \\ p(r(e_1, e_2)) &= \sigma ( \theta _r^T \phi _{(e_1, e_2)} + \omega _r^T \psi _r(e_1, e_2) ) $ In these equations, $\theta $ and $\phi $ are learned predicate and entity embeddings, as described in Section "Subgraph feature extraction" . The second term in the sum represents our new features and their learned weights. $\psi _c(e)$ and $\psi _r(e_1, e_2)$ are SFE feature vectors for each entity and entity pair; a different set of features is chosen for each predicate $c$ and $r$ , as described in Section "Making full use of KB information" . $\omega _c$ and $\omega _r$ are learned weights for these features. In our model, there are now three sets of parameters to be learned: (1) $\theta $ , low-dimensional distributional vectors trained for each predicate; (2) $\phi $ , low-dimensional distributional vectors trained for each entity and entity pair; and (3) $\omega $ , weights associated with the selected formal SFE features for each predicate. All of these parameters are optimized jointly, using the same method described in Section "Subgraph feature extraction" . Note here that each SFE feature corresponds to a query over the formal schema, defining a set of entities (or entity pairs). The associated feature weight measures the likelihood that an entity in this set is also in the denotation of the surface predicate. Our models include many such features for each surface predicate, effectively mapping each surface predicate onto a weighted combination of Freebase queries. Making full use of KB information In addition to improving predicate models, as just described, adding KB information to open vocabulary semantic parsers suggests two other simple improvements: (1) using more specific logical forms, and (2) generating candidate entities from the KB. Logical form generation Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary generate logical forms from natural language statements by computing a syntactic CCG parse, then applying a collection of rules to produce logical forms. However, their logical form analyses do not model noun-mediated relations well. For example, given the phrase “Italian architect Andrea Palladio,” their system's logical form would include the relation $\textit {N/N}(\textsc {Italy}, \textsc {Palladio})$ . Here, the $\textit {N/N}$ predicate represents a generic noun modifier relation; however, this relation is too vague for the predicate model to accurately learn its denotation. A similar problem occurs with prepositions and possessives, e.g., it is similarly hard to learn the denotation of the predicate $\textit {of}$ . Our system improves the analysis of noun-mediated relations by simply including the noun in the predicate name. In the architect example above, our system produces the relation $\textit {architect\_N/N}$ . 
It does this by concatenating all intervening noun modifiers between two entity mentions and including them in the predicate name; for example, “Illinois attorney general Lisa Madigan” produces the predicate $\textit {attorney\_general\_N/N}$ . We similarly improve the analyses of prepositions and possessives to include the head noun. For example, “Barack Obama, president of the U.S.” produces the predicate instance $\textit {president\_of}(\textsc {Barack Obama}, \textsc {U.S.})$ , and “Rome, Italy's capital” produces the predicate $\textit {^{\prime }s\_capital}$ . This process generates more specific predicates that more closely align with the KB facts that we make available to the predicate models. Candidate entity generation A key benefit of our predicate models is that they are able to assign scores to entity pairs that were never seen in the training data. Distributional models have no learned vectors for these entity pairs and therefore assume $p(r(e_1,e_2)) = 0$ for unseen entity pairs $(e_1,e_2)$ . This limits the recall of these models when applied to question answering, as entity pairs will not have been observed for many correct, but rare entity answers. In contrast, because our models have access to a large KB, the formal component of the model can always give a score to any entity pair in the KB. This allows our model to considerably improve question answering performance on rare entities. It would be computationally intractable to consider all Freebase entities as answers to queries, and so we use a simple candidate entity generation technique to consider only a small set of likely entities for a given query. We first find all entities in the query, and consider as candidates any entity that has either been seen at training time with a query entity or is directly connected to a query entity in Freebase. This candidate entity generation is common practice for recent question answering models over Freebase BIBREF8 , though, for the reasons stated above, it has not been used previously in open vocabulary semantic parsing models. Evaluation We evaluate our open-vocabulary semantic parser on a fill-in-the-blank natural language query task. Each test example is a natural language phrase containing at least two Freebase entities, one of which is held out. The system must propose a ranked list of Freebase entities to fill in the blank left by the held out entity, and the predicted entities are then judged manually for correctness. We compare our proposed models, which combine distributional and formal elements, with a purely distributional baseline from prior work. All of the data and code used in these experiments is available at http://github.com/allenai/open_vocab_semparse. Data Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. 
After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations. We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries. Models We compare three models in our experiments: (1) the distributional model of Krishnamurthy and Mitchell, described in Section "Subgraph feature extraction" , which is the current state-of-the-art method for open vocabulary semantic parsing; (2) a formal model (new to this work), where the distributional parameters $\theta $ and $\phi $ in Section "Combined predicate models" are fixed at zero; and (3) the combined model described in Section "Combined predicate models" (also new to this work). In each of these models, we used vectors of size 300 for all embeddings. Except where noted, all experiments use our modified logical forms (Section "Evaluation" ) and our entity proposal mechanism (Section "Related work" ). We do not compare against any traditional semantic parsers, as more than half of the questions in our dataset are not answerable by Freebase queries, and so are out of scope for those parsers BIBREF5 . Methodology Given a fill-in-the-blank query such as “Italian architect ”, each system produces a ranked list of 100 candidate entities. To compare the output of the systems, we follow a pooled evaluation protocol commonly used in relation extraction and information retrieval BIBREF11 , BIBREF12 . We take the top 30 predictions from each system and manually annotate whether they are correct, and use those annotations to compute the average precision (AP) and reciprocal rank (RR) of each system on the query. Average precision is defined as $\frac{1}{m}\sum ^m_{k=1} \mathrm {Prec}(k) \times \mathrm {Correct}(k)$ , where $\mathrm {Prec}(k)$ is the precision at rank $k$ , $\mathrm {Correct}(k)$ is an indicator function for whether the $k$ th answer is correct, and $m$ is number of returned answers (up to 100 in this evaluation). AP is equivalent to calculating the area under a precision-recall curve. Reciprocal rank is computed by first finding the rank $r$ of the first correct prediction made by a system. Reciprocal rank is then $\frac{1}{r}$ , ranging from 1 (if the first prediction is correct) to 0 (if there is no correct answer returned). In the tables below we report mean average precision (MAP) and mean reciprocal rank (MRR), averaged over all of the queries in the test set. We also report a weighted version of MAP, where the AP of each query is scaled by the number of annotated correct answers to the query (shown as W-MAP in the tables for space considerations). Results We first show the effect of the new logical forms introduced in Section "Evaluation" . As can be seen in Table 1 , with our improved logical forms, all models are better able to capture the semantics of language. This improvement is most pronounced in the formal models, which have more capacity to get specific features from Freebase with the new logical forms. As our logical forms give all models better performance, the remaining experiments we present all use these logical forms. 
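The candidate entity generation described earlier (entities seen with a query entity at training time, plus entities directly connected to it in Freebase) can be sketched as a simple set union; the data structures and the exclusion of the query entities themselves are assumptions.

```python
def candidate_entities(query_entities, seen_with, kb_neighbors):
    """query_entities: entities mentioned in the query.
    seen_with[e]: entities observed with e in the training data.
    kb_neighbors[e]: entities directly connected to e in Freebase."""
    candidates = set()
    for e in query_entities:
        candidates |= seen_with.get(e, set())
        candidates |= kb_neighbors.get(e, set())
    candidates -= set(query_entities)  # excluding the query entities themselves is an assumption
    return candidates

# Example: for the query "Italian architect ____", query_entities = {"Italy"}.
```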
We next show the improvement gained by using the simple candidate entity generation outlined in Section "Related work" . By simply appending the list of connected entities in Freebase to the end of the rankings returned by the distributional model, MAP improves by 40% (see Table 2 ). The connectedness of an entity pair in Freebase is very informative, especially for rare entities that are not seen together during training. Table 3 shows a comparison between the semantic parsing models on the development set. As can be seen, the combined model significantly improves performance over prior work, giving a relative gain in weighted MAP of 29%. Table 4 shows that these improvements are consistent on the final test set, as well. The performance improvement seen by the combined model is actually larger on this set, with gains on our metrics ranging from 50% to 87%. On both of these datasets, the difference in MAP between the combined model and the distributional model is statistically significant (by a paired permutation test, $p < 0.05$ ). The differences between the combined model and the formal model, and between the formal model and the distributional model, are not statistically significant, as each method has certain kinds of queries that it performs well on. Only the combined model is able to consistently outperform the distributional model on all kinds of queries. Discussion Our model tends to outperform the distributional model on queries containing predicates with exact or partial correlates in Freebase. For example, our model obtains nearly perfect average precision on the queries “French newspaper ” and “Israeli prime minister ,” both of which can be exactly expressed in Freebase. The top features for $\textit {newspaper}$ ( $x$ ) all indicate that $x$ has type $\textsc {newspaper}$ in Freebase, and the top features for $\textit {newspaper\_N/N}$ ( $x$ , $y$ ) indicate that $y$ is a newspaper, and that $x$ is either the circulation area of $y$ or the language of $y$ . The model also performs well on queries with partial Freebase correlates, such as “Microsoft head honcho ”, “The United States, 's closest ally”, and “Patriots linebacker ,” although with somewhat lower average precision. The high weight features in these cases tend to provide useful hints, even though there is no direct correlate; for example, the model learns that “honchos” are people, and that they tend to be CEOs and film producers. There are also some areas where our model can be improved. First, in some cases, the edge sequence features used by the model are not expressive enough to identify the correct relation in Freebase. An example of this problem is the “linebacker” example above, where the features for $\textit {linebacker\_N/N}$ can capture which athletes play for which teams, but not the positions of those athletes. Second, our model can under-perform on predicates with no close mapping to Freebase. An example where this problem occurs is the query “ is a NASA mission.” Third, there remains room to further improve the logical forms produced by the semantic parser, specifically for multi-word expressions. One problem occurs with multi-word noun modifiers, e.g., “Vice president Al Gore” is mapped to $\textit {vice}(\textsc {Al Gore}) \wedge \textit {president}(\textsc {Al Gore})$ . Another problem is that there is no back-off with multi-word relations. 
For example, the predicate $\textit {head\_honcho\_N/N}$ was never seen in the training data, so it is replaced with $\textit {unknown}$ ; however, it would be better to replace it with $\textit {honcho\_N/N}$ , which was seen in the training data. Finally, although using connected entities in Freebase as additional candidates during inference is helpful, it often over- or under-generates candidates. A more tailored, per-query search process could improve performance. Related work There is an extensive literature on building semantic parsers to answer questions against a KB BIBREF1 , BIBREF3 , BIBREF13 , BIBREF14 . Some of this work has used surface (or ungrounded) logical forms as an intermediate representation, similar to our work BIBREF15 , BIBREF16 , BIBREF8 , BIBREF17 . The main difference between our work and these techniques is that they map surface logical forms to a single executable Freebase query, while we learn execution models for the surface logical forms directly, using a weighted combination of Freebase queries as part of the model. None of these prior works can assign meaning to language that is not directly representable in the KB schema. Choi, Kwiatkowski and Zettlemoyer choi-2015-semantic-parsing-partial-ontologies presented an information extraction system that performs a semantic parse of open-domain text, recognizing when a predicate cannot be mapped to Freebase. However, while they recognize when a predicate is not mappable to Freebase, they do not attempt to learn execution models for those predicates, nor can they answer questions using those predicates. Yao and Van Durme yao-2014-info-extraction-freebase-qa and Dong et al. dong-2015-freebase-qa-mccnn proposed question answering models that use similar features to those used in this work. However, they did not produce semantic parses of language, instead using methods that are non-compositional and do not permit complex queries. Finally, learning probabilistic databases in an open vocabulary semantic parser has a strong connection with KB completion. In addition to SFE BIBREF6 , our work draws on work on embedding the entities and relations in a KB BIBREF12 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , as well as work on graph-based methods for reasoning with KBs BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Our combination of embedding methods with graph-based methods in this paper is suggestive of how one could combine the two in methods for KB completion. Initial work exploring this direction has already been done by Toutanova and Chen toutanova-2015-observed-vs-latent-kbc. Conclusion Prior work in semantic parsing has either leveraged large knowledge bases to answer questions, or used distributional techniques to gain broad coverage over all of natural language. In this paper, we have shown how to gain both of these benefits by converting the queries generated by traditional semantic parsers into features which are then used in open vocabulary semantic parsing models. We presented a technique to do this conversion in a way that is scalable using graph-based feature extraction methods. Our combined model achieved relative gains of over 50% in mean average precision and mean reciprocal rank versus a purely distributional approach. We also introduced a better mapping from surface text to logical forms, and a simple method for using a KB to find candidate entities during inference. 
Taken together, the methods introduced in this paper improved mean average precision on our task from .163 to .370, a 127% relative improvement over prior work. This work suggests a new direction for semantic parsing research. Existing semantic parsers map language to a single KB query, an approach that successfully leverages a KB's predicate instances, but is fundamentally limited by its schema. In contrast, our approach maps language to a weighted combination of queries plus a distributional component; this approach is capable of representing a much broader class of concepts while still using the KB when it is helpful. Furthermore, it is capable of using the KB even when the meaning of the language cannot be exactly represented by a KB predicate, which is a common occurrence. We believe that this kind of approach could significantly expand the applicability of semantic parsing techniques to more complex domains where the assumptions of traditional techniques are too limiting. We are actively exploring applying these techniques to science question answering BIBREF26 , for example, where existing KBs provide only partial coverage of the questions.
3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing
2122bd05c03dde098aa17e36773e1ac7b6011969
2122bd05c03dde098aa17e36773e1ac7b6011969_0
Q: What task do they evaluate on? Text: Introduction Semantic parsing is the task of mapping a phrase in natural language onto a formal query in some fixed schema, which can then be executed against a knowledge base (KB) BIBREF0 , BIBREF1 . For example, the phrase “Who is the president of the United States?” might be mapped onto the query $\lambda (x).$ $\textsc {/government/president\_of}$ ( $x$ , $\textsc {USA}$ ), which, when executed against Freebase BIBREF2 , returns $\textsc {Barack Obama}$ . By mapping phrases to executable statements, semantic parsers can leverage large, curated sources of knowledge to answer questions BIBREF3 . This benefit comes with an inherent limitation, however—semantic parsers can only produce executable statements within their manually-produced schema. There is no query against Freebase that can answer questions like “Who are the Democratic front-runners in the US election?”, as Freebase does not encode information about front-runners. Semantic parsers trained for Freebase fail on these kinds of questions. To overcome this limitation, recent work has proposed methods for open vocabulary semantic parsing, which replace a formal KB with a probabilistic database learned from a text corpus. In these methods, language is mapped onto queries with predicates derived directly from the text itself BIBREF4 , BIBREF5 . For instance, the question above might be mapped to $\lambda (x).$ $\textit {president\_of}$ ( $x$ , $\textsc {USA}$ ). This query is not executable against any KB, however, and so open vocabulary semantic parsers must learn execution models for the predicates found in the text. They do this with a distributional approach similar to word embedding methods, giving them broad coverage, but lacking access to the large, curated KBs available to traditional semantic parsers. Prior work in semantic parsing, then, has either had direct access to the information in a knowledge base, or broad coverage over all of natural language using the information in a large corpus, but not both. In this work, we show how to combine these two approaches by incorporating KB information into open vocabulary semantic parsing models. Our key insight is that formal KB queries can be converted into features that can be added to the learned execution models of open vocabulary semantic parsers. This conversion allows open vocabulary models to use the KB fact $\textsc {/government/president\_of}$ ( $\textsc {BarackObama}$ , $\textsc {USA}$ ) when scoring $\textit {president\_of}$ ( $\textsc {BarackObama}$ , $\textsc {USA}$ ), without requiring the model to map the language onto a single formal statement. Crucially, this featurization also allows the model to use these KB facts even when they only provide partial information about the language being modeled. For example, knowing that an entity is a $\textsc {politician}$ is very helpful information for deciding whether that entity is a front-runner. Our approach, outlined in Figure 1 , effectively learns the meaning of a word as a distributional vector plus a weighted combination of Freebase queries, a considerably more expressive representation than those used by prior work. 
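To make the preceding idea concrete: the representation just described amounts to scoring a predicate instance with a sigmoid over a distributional dot product plus a weighted sum of binary Freebase-query features (this is formalized later in the combined predicate models). The sketch below is only an illustration; the NumPy setup, the toy sizes and the random placeholder parameters are assumptions, not the released implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combined_prob(theta, phi, omega, psi):
    """Score a predicate instance as sigmoid(theta . phi + omega . psi).

    theta, phi: learned distributional embeddings for the predicate and the
    entity (or entity pair).
    omega: learned weights over the Freebase queries selected for the predicate.
    psi: binary vector marking which of those queries return the entity (pair).
    """
    return sigmoid(theta @ phi + omega @ psi)

rng = np.random.default_rng(0)
k, n_queries = 300, 100                        # toy sizes, assumed for illustration
theta, phi = rng.normal(size=k), rng.normal(size=k)
omega = rng.normal(size=n_queries)
psi = rng.integers(0, 2, size=n_queries).astype(float)
print(combined_prob(theta, phi, omega, psi))   # e.g. p(president_of(BarackObama, USA))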
While this combination is the main contribution of our work, we also present some small improvements that allow open vocabulary semantic parsing models to make better use of KB information when it is available: improving the logical forms generated by the semantic parser, and employing a simple technique from related work for generating candidate entities from the KB. We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%. Open vocabulary semantic parsing In this section, we briefly describe the current state-of-the-art model for open vocabulary semantic parsing, introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary. Instead of mapping text to Freebase queries, as done by a traditional semantic parser, their method parses text to a surface logical form with predicates derived directly from the words in the text (see Figure 1 ). Next, a distribution over denotations for each predicate is learned using a matrix factorization approach similar to that of Riedel et al. riedel-2013-mf-universal-schema. This distribution is concisely represented using a probabilistic database, which also enables efficient probabilistic execution of logical form queries. The matrix factorization has two sets of parameters: each category or relation has a learned $k$ -dimensional embedding $\theta $ , and each entity or entity pair has a learned $k$ -dimensional embedding $\phi $ . The probability assigned to a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is given by: $ p(c(e)) &= \sigma ( \theta _c^T \phi _e ) \\ p(r(e_1, e_2)) &= \sigma ( \theta _r^T \phi _{(e_1, e_2)} ) $ The probability of a predicate instance is the sigmoided inner product of the corresponding predicate and entity embeddings. Predicates with nearby embeddings will have similar distributions over the entities in their denotation. The parameters $\theta $ and $\phi $ are learned using a query ranking objective that optimizes them to rank entities observed in the denotation of a logical form above unobserved entities. Given the trained predicate and entity parameters, the system is capable of efficiently computing the marginal probability that an entity is an element of a logical form's denotation using approximate inference algorithms for probabilistic databases. The model presented in this section is purely distributional, with predicate and entity models that draw only on co-occurrence information found in a corpus. In the following sections, we show how to augment this model with information contained in large, curated KBs such as Freebase. Converting Freebase queries to features Our key insight is that the executable queries used by traditional semantic parsers can be converted into features that provide KB information to the execution models of open vocabulary semantic parsers. Here we show how this is done. Traditional semantic parsers map words onto distributions over executable queries, select one to execute, and return sets of entities or entity pairs from a KB as a result. Instead of executing a single query, we can simply execute all possible queries and use an entity's (or entity pair's) membership in each set as a feature in our predicate models. 
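As a concrete picture of what "execute all possible queries and use membership as features" would mean, here is a toy sketch; the query names, the mock result sets and the dictionary representation are assumptions made purely for illustration.

def membership_features(entity_pair, queries):
    """Binary feature vector: 1 if the pair is in a query's result set, else 0.

    queries: dict mapping a query name to the set of entity pairs it returns
    when executed against the KB.
    """
    return {name: int(entity_pair in results) for name, results in queries.items()}

# Mock results of two executable Freebase queries (hypothetical names).
queries = {
    "designed->located_in": {("Palladio", "Italy"), ("Gehry", "Spain")},
    "/government/president_of": {("BarackObama", "USA")},
}
print(membership_features(("Palladio", "Italy"), queries))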
There are two problems with this approach: (1) the set of all possible queries is intractably large, so we need a mechanism similar to a semantic parser's lexicon to select a small set of queries for each word; and (2) executing hundreds or thousands of queries at runtime for each predicate and entity is not computationally tractable. To solve these problems, we use a graph-based technique called subgraph feature extraction (SFE) BIBREF6 . Subgraph feature extraction SFE is a technique for generating feature matrices over node pairs in graphs with labeled edges. When the graph corresponds to a formal KB such as Freebase, the features generated by SFE are isomorphic to statements in the KB schema BIBREF7 . This means that we can use SFE to generate a feature vector for each entity (or entity pair) which succinctly captures the set of all statements in whose denotations the entity (or entity pair) appears. Using this feature vector as part of the semantic parser's entity models solves problem (2) above, and performing feature selection for each predicate solves problem (1). Some example features extracted by SFE are shown in Figure 2 . For entity pairs, these features include the sequence of edges (or paths) connecting the nodes corresponding to the entity pair. For entities, these features include the set of paths connected to the node, optionally including the node at the end of the path. Note the correspondence between these features and Freebase queries: the path $\langle $ $\textsc {designed}$ $\rightarrow $ $\textsc {located\_in}$ $\rangle $ can be executed as a query against Freebase, returning a set of (architect, location) entity pairs, where the architect designed a structure in the location. ( $\textsc {Palladio}$ , $\textsc {Italy}$ ) is one such entity pair, so this pair has a feature value of 1 for this query. Feature selection The feature vectors produced by SFE contain tens of millions of possible formal statements. Out of these tens of millions of formal statements, only a handful represent relevant Freebase queries for any particular predicate. We therefore select a small number of statements to consider for each learned predicate in the open vocabulary semantic parser. We select features by first summing the entity and entity pair feature vectors seen with each predicate in the training data. For example, the phrase “Italian architect Andrea Palladio” is considered a positive training example for the predicate instances $\textit {architect}(\textsc {Palladio})$ and $\textit {architect\_N/N}(\textsc {Italy}, \textsc {Palladio})$ . We add the feature vectors for $\textsc {Palladio}$ and ( $\textsc {Italy}$ , $\textsc {Palladio}$ ) to the feature counts for the predicates $\textit {architect}$ and $\textit {architect\_N/N}$ , respectively. This gives a set of counts $\textsc {count}(\pi )$ , $\textsc {count}(f)$ , and $\textsc {count}(\pi \wedge f)$ , for each predicate $\pi $ and feature $f$ . The features are then ranked by PMI for each predicate by computing $\textsc {count}(\pi \wedge f) / (\textsc {count}(\pi ) \times \textsc {count}(f))$ . 
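The counting and PMI ranking just described, together with the low-frequency filtering and top-feature selection mentioned immediately below, can be sketched as follows; the data structures, the min_count threshold and the top_k parameter are assumptions used only for illustration.

from collections import Counter, defaultdict

def select_features(training_instances, entity_features, min_count=2, top_k=100):
    """Rank SFE features per predicate by count(pi & f) / (count(pi) * count(f)).

    training_instances: (predicate, entity-or-entity-pair) pairs from training data.
    entity_features: maps each entity (pair) to the set of formal SFE features
    (Freebase queries) whose denotation contains it.
    """
    pred_count, feat_count = Counter(), Counter()
    joint_count = defaultdict(Counter)
    for pred, entity in training_instances:
        feats = entity_features.get(entity, set())
        pred_count[pred] += 1
        feat_count.update(feats)
        joint_count[pred].update(feats)
    selected = {}
    for pred, joint in joint_count.items():
        scored = [(joint[f] / (pred_count[pred] * feat_count[f]), f)
                  for f in joint if joint[f] >= min_count]   # drop rare features
        selected[pred] = [f for _, f in sorted(scored, reverse=True)[:top_k]]
    return selected

entity_features = {                      # hypothetical SFE features
    "Palladio": {"type:architect", "type:person", "nationality->Italy"},
    ("Italy", "Palladio"): {"located_in<-designed", "language_of"},
}
instances = [("architect", "Palladio"), ("architect_N/N", ("Italy", "Palladio"))]
print(select_features(instances, entity_features, min_count=1, top_k=2))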
After removing low-frequency features, we pick a fixed number of features with the highest PMI values for each predicate to use in our model. Combined predicate models Here we present our approach to incorporating KB information into open vocabulary semantic parsers. Having described how we use SFE to generate features corresponding to statements in a formal schema, adding these features to the models described in Section "Subgraph feature extraction" is straightforward. We saw in Section "Subgraph feature extraction" that open vocabulary semantic parsers learn distributional vectors for each category, relation, entity and entity pair. We augment these vectors with the feature vectors described in Section "Converting Freebase queries to features" . Each category and relation receives a weight $\omega $ for each selected Freebase query, and each entity and entity pair has an associated feature vector $\psi $ . The truth probability of a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is thus given by: $p(c(e)) = \sigma ( \theta _c^T \phi _e + \omega _c^T \psi _c(e))$ and $p(r(e_1, e_2)) = \sigma ( \theta _r^T \phi _{(e_1, e_2)} + \omega _r^T \psi _r(e_1, e_2) )$ . In these equations, $\theta $ and $\phi $ are learned predicate and entity embeddings, as described in Section "Subgraph feature extraction" . The second term in the sum represents our new features and their learned weights. $\psi _c(e)$ and $\psi _r(e_1, e_2)$ are SFE feature vectors for each entity and entity pair; a different set of features is chosen for each predicate $c$ and $r$ , as described in Section "Making full use of KB information" . $\omega _c$ and $\omega _r$ are learned weights for these features. In our model, there are now three sets of parameters to be learned: (1) $\theta $ , low-dimensional distributional vectors trained for each predicate; (2) $\phi $ , low-dimensional distributional vectors trained for each entity and entity pair; and (3) $\omega $ , weights associated with the selected formal SFE features for each predicate. All of these parameters are optimized jointly, using the same method described in Section "Subgraph feature extraction" . Note here that each SFE feature corresponds to a query over the formal schema, defining a set of entities (or entity pairs). The associated feature weight measures the likelihood that an entity in this set is also in the denotation of the surface predicate. Our models include many such features for each surface predicate, effectively mapping each surface predicate onto a weighted combination of Freebase queries. Making full use of KB information In addition to improving predicate models, as just described, adding KB information to open vocabulary semantic parsers suggests two other simple improvements: (1) using more specific logical forms, and (2) generating candidate entities from the KB. Logical form generation Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary generate logical forms from natural language statements by computing a syntactic CCG parse, then applying a collection of rules to produce logical forms. However, their logical form analyses do not model noun-mediated relations well. For example, given the phrase “Italian architect Andrea Palladio,” their system's logical form would include the relation $\textit {N/N}(\textsc {Italy}, \textsc {Palladio})$ . 
Here, the $\textit {N/N}$ predicate represents a generic noun modifier relation; however, this relation is too vague for the predicate model to accurately learn its denotation. A similar problem occurs with prepositions and possessives, e.g., it is similarly hard to learn the denotation of the predicate $\textit {of}$ . Our system improves the analysis of noun-mediated relations by simply including the noun in the predicate name. In the architect example above, our system produces the relation $\textit {architect\_N/N}$ . It does this by concatenating all intervening noun modifiers between two entity mentions and including them in the predicate name; for example, “Illinois attorney general Lisa Madigan” produces the predicate $\textit {attorney\_general\_N/N}$ . We similarly improve the analyses of prepositions and possessives to include the head noun. For example, “Barack Obama, president of the U.S.” produces the predicate instance $\textit {president\_of}(\textsc {Barack Obama}, \textsc {U.S.})$ , and “Rome, Italy's capital” produces the predicate $\textit {^{\prime }s\_capital}$ . This process generates more specific predicates that more closely align with the KB facts that we make available to the predicate models. Candidate entity generation A key benefit of our predicate models is that they are able to assign scores to entity pairs that were never seen in the training data. Distributional models have no learned vectors for these entity pairs and therefore assume $p(r(e_1,e_2)) = 0$ for unseen entity pairs $(e_1,e_2)$ . This limits the recall of these models when applied to question answering, as entity pairs will not have been observed for many correct, but rare entity answers. In contrast, because our models have access to a large KB, the formal component of the model can always give a score to any entity pair in the KB. This allows our model to considerably improve question answering performance on rare entities. It would be computationally intractable to consider all Freebase entities as answers to queries, and so we use a simple candidate entity generation technique to consider only a small set of likely entities for a given query. We first find all entities in the query, and consider as candidates any entity that has either been seen at training time with a query entity or is directly connected to a query entity in Freebase. This candidate entity generation is common practice for recent question answering models over Freebase BIBREF8 , though, for the reasons stated above, it has not been used previously in open vocabulary semantic parsing models. Evaluation We evaluate our open-vocabulary semantic parser on a fill-in-the-blank natural language query task. Each test example is a natural language phrase containing at least two Freebase entities, one of which is held out. The system must propose a ranked list of Freebase entities to fill in the blank left by the held out entity, and the predicted entities are then judged manually for correctness. We compare our proposed models, which combine distributional and formal elements, with a purely distributional baseline from prior work. All of the data and code used in these experiments is available at http://github.com/allenai/open_vocab_semparse. Data Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. 
In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations. We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries. Models We compare three models in our experiments: (1) the distributional model of Krishnamurthy and Mitchell, described in Section "Subgraph feature extraction" , which is the current state-of-the-art method for open vocabulary semantic parsing; (2) a formal model (new to this work), where the distributional parameters $\theta $ and $\phi $ in Section "Combined predicate models" are fixed at zero; and (3) the combined model described in Section "Combined predicate models" (also new to this work). In each of these models, we used vectors of size 300 for all embeddings. Except where noted, all experiments use our modified logical forms (Section "Evaluation" ) and our entity proposal mechanism (Section "Related work" ). We do not compare against any traditional semantic parsers, as more than half of the questions in our dataset are not answerable by Freebase queries, and so are out of scope for those parsers BIBREF5 . Methodology Given a fill-in-the-blank query such as “Italian architect ”, each system produces a ranked list of 100 candidate entities. To compare the output of the systems, we follow a pooled evaluation protocol commonly used in relation extraction and information retrieval BIBREF11 , BIBREF12 . We take the top 30 predictions from each system and manually annotate whether they are correct, and use those annotations to compute the average precision (AP) and reciprocal rank (RR) of each system on the query. Average precision is defined as $\frac{1}{m}\sum ^m_{k=1} \mathrm {Prec}(k) \times \mathrm {Correct}(k)$ , where $\mathrm {Prec}(k)$ is the precision at rank $k$ , $\mathrm {Correct}(k)$ is an indicator function for whether the $k$ th answer is correct, and $m$ is number of returned answers (up to 100 in this evaluation). AP is equivalent to calculating the area under a precision-recall curve. Reciprocal rank is computed by first finding the rank $r$ of the first correct prediction made by a system. Reciprocal rank is then $\frac{1}{r}$ , ranging from 1 (if the first prediction is correct) to 0 (if there is no correct answer returned). In the tables below we report mean average precision (MAP) and mean reciprocal rank (MRR), averaged over all of the queries in the test set. We also report a weighted version of MAP, where the AP of each query is scaled by the number of annotated correct answers to the query (shown as W-MAP in the tables for space considerations). 
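The two per-query metrics defined above are easy to compute from a ranked prediction list and its manual correctness judgments; a minimal sketch follows (the boolean-list input format is an assumption).

def average_precision(correct):
    """AP = (1/m) * sum_k Prec(k) * Correct(k), with m = number of returned answers.

    correct[i] is True if the i-th ranked prediction was judged correct.
    """
    hits, total = 0, 0.0
    for k, is_correct in enumerate(correct, start=1):
        if is_correct:
            hits += 1
            total += hits / k          # Prec(k) is accumulated only at correct answers
    return total / len(correct) if correct else 0.0

def reciprocal_rank(correct):
    for k, is_correct in enumerate(correct, start=1):
        if is_correct:
            return 1.0 / k
    return 0.0

judgments = [True, False, True, False]  # toy annotations for one query
print(average_precision(judgments), reciprocal_rank(judgments))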
Results We first show the effect of the new logical forms introduced in Section "Evaluation" . As can be seen in Table 1 , with our improved logical forms, all models are better able to capture the semantics of language. This improvement is most pronounced in the formal models, which have more capacity to get specific features from Freebase with the new logical forms. As our logical forms give all models better performance, the remaining experiments we present all use these logical forms. We next show the improvement gained by using the simple candidate entity generation outlined in Section "Related work" . By simply appending the list of connected entities in Freebase to the end of the rankings returned by the distributional model, MAP improves by 40% (see Table 2 ). The connectedness of an entity pair in Freebase is very informative, especially for rare entities that are not seen together during training. Table 3 shows a comparison between the semantic parsing models on the development set. As can be seen, the combined model significantly improves performance over prior work, giving a relative gain in weighted MAP of 29%. Table 4 shows that these improvements are consistent on the final test set, as well. The performance improvement seen by the combined model is actually larger on this set, with gains on our metrics ranging from 50% to 87%. On both of these datasets, the difference in MAP between the combined model and the distributional model is statistically significant (by a paired permutation test, $p < 0.05$ ). The differences between the combined model and the formal model, and between the formal model and the distributional model, are not statistically significant, as each method has certain kinds of queries that it performs well on. Only the combined model is able to consistently outperform the distributional model on all kinds of queries. Discussion Our model tends to outperform the distributional model on queries containing predicates with exact or partial correlates in Freebase. For example, our model obtains nearly perfect average precision on the queries “French newspaper ” and “Israeli prime minister ,” both of which can be exactly expressed in Freebase. The top features for $\textit {newspaper}$ ( $x$ ) all indicate that $x$ has type $\textsc {newspaper}$ in Freebase, and the top features for $\textit {newspaper\_N/N}$ ( $x$ , $y$ ) indicate that $y$ is a newspaper, and that $x$ is either the circulation area of $y$ or the language of $x$0 . The model also performs well on queries with partial Freebase correlates, such as “Microsoft head honcho ”, “The United States, 's closest ally”, and “Patriots linebacker ,” although with somewhat lower average precision. The high weight features in these cases tend to provide useful hints, even though there is no direct correlate; for example, the model learns that “honchos” are people, and that they tend to be CEOs and film producers. There are also some areas where our model can be improved. First, in some cases, the edge sequence features used by the model are not expressive enough to identify the correct relation in Freebase. An example of this problem is the “linebacker” example above, where the features for $\textit {linebacker\_N/N}$ can capture which athletes play for which teams, but not the positions of those athletes. Second, our model can under-perform on predicates with no close mapping to Freebase. 
An example where this problem occurs is the query “ is a NASA mission.” Third, there remains room to further improve the logical forms produced by the semantic parser, specifically for multi-word expressions. One problem occurs with multi-word noun modifiers, e.g., “Vice president Al Gore” is mapped to $\textit {vice}(\textsc {Al Gore}) \wedge \textit {president}(\textsc {Al Gore})$ . Another problem is that there is no back-off with multi-word relations. For example, the predicate $\textit {head\_honcho\_N/N}$ was never seen in the training data, so it is replaced with $\textit {unknown}$ ; however, it would be better to replace it with $\textit {honcho\_N/N}$ , which was seen in the training data. Finally, although using connected entities in Freebase as additional candidates during inference is helpful, it often over- or under-generates candidates. A more tailored, per-query search process could improve performance. Related work There is an extensive literature on building semantic parsers to answer questions against a KB BIBREF1 , BIBREF3 , BIBREF13 , BIBREF14 . Some of this work has used surface (or ungrounded) logical forms as an intermediate representation, similar to our work BIBREF15 , BIBREF16 , BIBREF8 , BIBREF17 . The main difference between our work and these techniques is that they map surface logical forms to a single executable Freebase query, while we learn execution models for the surface logical forms directly, using a weighted combination of Freebase queries as part of the model. None of these prior works can assign meaning to language that is not directly representable in the KB schema. Choi, Kwiatkowski and Zettlemoyer choi-2015-semantic-parsing-partial-ontologies presented an information extraction system that performs a semantic parse of open-domain text, recognizing when a predicate cannot be mapped to Freebase. However, while they recognize when a predicate is not mappable to Freebase, they do not attempt to learn execution models for those predicates, nor can they answer questions using those predicates. Yao and Van Durme yao-2014-info-extraction-freebase-qa and Dong et al. dong-2015-freebase-qa-mccnn proposed question answering models that use similar features to those used in this work. However, they did not produce semantic parses of language, instead using methods that are non-compositional and do not permit complex queries. Finally, learning probabilistic databases in an open vocabulary semantic parser has a strong connection with KB completion. In addition to SFE BIBREF6 , our work draws on work on embedding the entities and relations in a KB BIBREF12 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , as well as work on graph-based methods for reasoning with KBs BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Our combination of embedding methods with graph-based methods in this paper is suggestive of how one could combine the two in methods for KB completion. Initial work exploring this direction has already been done by Toutanova and Chen toutanova-2015-observed-vs-latent-kbc. Conclusion Prior work in semantic parsing has either leveraged large knowledge bases to answer questions, or used distributional techniques to gain broad coverage over all of natural language. In this paper, we have shown how to gain both of these benefits by converting the queries generated by traditional semantic parsers into features which are then used in open vocabulary semantic parsing models. 
We presented a technique to do this conversion in a way that is scalable using graph-based feature extraction methods. Our combined model achieved relative gains of over 50% in mean average precision and mean reciprocal rank versus a purely distributional approach. We also introduced a better mapping from surface text to logical forms, and a simple method for using a KB to find candidate entities during inference. Taken together, the methods introduced in this paper improved mean average precision on our task from .163 to .370, a 127% relative improvement over prior work. This work suggests a new direction for semantic parsing research. Existing semantic parsers map language to a single KB query, an approach that successfully leverages a KB's predicate instances, but is fundamentally limited by its schema. In contrast, our approach maps language to a weighted combination of queries plus a distributional component; this approach is capable of representing a much broader class of concepts while still using the KB when it is helpful. Furthermore, it is capable of using the KB even when the meaning of the language cannot be exactly represented by a KB predicate, which is a common occurrence. We believe that this kind of approach could significantly expand the applicability of semantic parsing techniques to more complex domains where the assumptions of traditional techniques are too limiting. We are actively exploring applying these techniques to science question answering BIBREF26 , for example, where existing KBs provide only partial coverage of the questions.
Fill-in-the-blank natural language questions
1d6c42e3f545d55daa86bea6fabf0b1c52a93bbb
1d6c42e3f545d55daa86bea6fabf0b1c52a93bbb_0
Q: Do some pretraining objectives perform better than others for sentence level understanding tasks? Text: Introduction State-of-the-art models for natural language processing (NLP) tasks like translation, question answering, and parsing include components intended to extract representations for the meaning and contents of each input sentence. These sentence encoder components are typically trained directly for the target task at hand. This approach can be effective on data rich tasks and yields human performance on some narrowly-defined benchmarks BIBREF1 , BIBREF2 , but it is tenable only for the few NLP tasks with millions of examples of training data. This has prompted interest in pretraining for sentence encoding: There is good reason to believe it should be possible to exploit outside data and training signals to effectively pretrain these encoders, both because they are intended to primarily capture sentence meaning rather than any task-specific skill, and because we have seen dramatic successes with pretraining in the related domains of word embeddings BIBREF3 and image encoders BIBREF4 . More concretely, four recent papers show that pretrained sentence encoders can yield very strong performance on NLP tasks. First, BIBREF5 show that a BiLSTM encoder from a neural machine translation (MT) system can be effectively reused elsewhere. BIBREF6 , BIBREF0 , and BIBREF7 show that various kinds of encoder pretrained in an unsupervised fashion through generative language modeling (LM) are effective as well. Each paper uses its own evaluation methods, though, making it unclear which pretraining task is most effective or whether multiple pretraining tasks can be productively combined; in the related setting of sentence-to-vector encoding, multitask learning with multiple labeled datasets has yielded a robust state of the art BIBREF8 . This paper attempts to systematically address these questions. We train reusable sentence encoders on 17 different pretraining tasks, several simple baselines, and several combinations of these tasks, all using a single model architecture and procedure for pretraining and transfer, inspired by ELMo. We then evaluate each of these encoders on the nine target language understanding tasks in the GLUE benchmark BIBREF9 , yielding a total of 40 sentence encoders and 360 total trained models. We then measure correlation in performance across target tasks and plot learning curves evaluating the effect of training data volume on each pretraining and target tasks. Looking to the results of this experiment, we find that language modeling is the most effective single pretraining task we study, and that multitask learning during pretraining can offer further gains and a new state-of-the-art among fixed sentence encoders. We also, however, find reasons to worry that ELMo-style pretraining, in which we pretrain a model and use it on target tasks with no further fine-tuning, is brittle and seriously limiting: (i) Trivial baseline representations do nearly as well as the best pretrained encoders, and the margins between substantially different pretraining tasks can be extremely small. (ii) Different target tasks differ dramatically on what kinds of pretraining they benefit most from, and multitask pretraining is not sufficient to circumvent this problem and offer general-purpose pretrained encoders. 
Related Work Work toward learning reusable sentence encoders can be traced back at least as far as the multitask model of BIBREF10 , but has seen a recent surge in progress with the successes of CoVe BIBREF5 , ULMFit BIBREF6 , ELMo BIBREF0 , and the Transformer LM BIBREF7 . However, each uses a different model and dataset from the others, so while these works serve as existence proofs that effective reusable sentence encoders are possible, they do not address the question of what task or tasks should be used to create them. The revival of interest in sentence encoder pretraining is recent enough that relatively little has been done to understand the relative merits of these models, though two exceptions stand out. In unpublished work, BIBREF11 offer an analysis of the relative strengths of translation and language modeling using a single architecture and training dataset. They find that encoders trained as language models reliably uncover the most syntactic structure, even when they are trained on a strict subset of the data used for a comparable translation model. BIBREF12 offer a deeper investigation of model design issues for ELMo, showing that all of the standard architectures for sentence encoding can be effectively pretrained with broadly similar performance, and that all learn reasonably good representations of the morphological and syntactic properties of sentences. There has been a great deal of work on sentence-to-vector encoding, a setting in which the pretrained encoder produces a fixed-size vector representation for each input sentence BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . These vectors are potentially useful for tasks that require fast similarity-based matching of sentences, but using them to replace sentence encoders trained in the conventional way on a given target text classification task does not reliably yield state-of-the art performance on that task BIBREF8 . Multitask representation learning in NLP in general has been well studied, and again can be traced back at least as far as BIBREF10 . For example, BIBREF18 show promising results from the combination of translation and parsing, BIBREF8 show the benefits of multitask learning in sentence-to-vector encoding, and BIBREF19 and BIBREF20 offer studies of when multitask learning is helpful for lower-level NLP tasks. Pretraining Tasks Our main experiment compares encoders pretrained on a large number of tasks and task combinations, where a task is a dataset–objective function pair. This section lists these tasks, which we select either to serve as baselines or because they have shown promise in outside prior work, especially prior work on sentence-to-vector encoding. Appendix "Additional Pretraining Task Details" includes additional details on how we implemented some of these tasks, and names tasks we evaluated but left out. Models and Training Procedures We implement our models using the AllenNLP toolkit BIBREF37 , aiming to build the simplest architecture that could be reasonably expected to perform well on the target tasks under study. The design of the models roughly follows that used in the GLUE baselines and ELMo. Results Table 1 shows results on the GLUE dev set for all our pretrained encoders, each with and without the pretrained ELMo BiLSTM layers ( $^E$ ). The N/A baselines are untrained encoders with random intialization. The Single-Task baselines are aggregations of results from nine GLUE runs: The result in this row for a given GLUE task uses the encoder pretrained on only that task. 
For consistency with other runs, we treat the pretraining task and the target task as two separate tasks in all cases (including here) and give them separate task-specific parameters, despite the fact that they use identical data. We use $^S$ and $^C$ to distinguish the sequence-to-sequence and classification versions of the Reddit task, respectively. To comply with GLUE's limits on test set access, we evaluated only three of our pretrained encoders on test data. These reflect our best models with and without the use of the pretrained ELMo encoder, and with and without the use of GLUE data during pretraining. For discussion of our limited hyperparameter tuning, see above. For roughly-comparable GLUE results in prior work, see BIBREF9 or https://www.gluebenchmark.com; we omit them here in the interest of space. The limited size of a US Letter page prevent us from including these baselines in this table. As of writing, the best test result using a comparable frozen pretrained encoder is 68.9 from BIBREF9 for a model similar to our GLUE $^E$ multitask model, and the best overall result is 72.8 from BIBREF7 with a model that is fine-tuned in its entirety for each target task. While not feasible to run each setting multiple times, we estimate the variance of the GLUE score by re-running the random encoder and MNLI pretraining setups with and without ELMo with different random seeds. Across five runs, we recorded $\sigma =0.4$ for the random encoder (N/A in table), and $\sigma =0.2$ for MNLI $^E$ . This variation is substantial but not so high as to render results meaningless. For the explicitly adversarial WNLI dataset BIBREF28 , only one of our models reached even the most frequent class performance of 56.3. In computing average and test set performances, we replace model predictions with the most frequent label to simulate the better performance achievable by choosing not to model that task. Looking to other target tasks, the grammar-related CoLA task benefits dramatically from ELMo pretraining: The best result without language model pretraining is less than half the result achieved with such pretraining. In contrast, the meaning-oriented textual similarity benchmark STS sees good results with several kinds of pretraining, but does not benefit substantially from the use of ELMo. Comparing pretraining tasks in isolation without ELMo, language modeling performs best, followed by MNLI. The remaining pretraining tasks yield performance near that of the random baseline. Even when training directly on each target task (Single-Task in table), we get less than a one point gain over this simple baseline. Adding ELMo yielded improvements in performance across all pretraining tasks. MNLI and English–German translation perform best in this setting, with SkipThought, Reddit classification, and DisSent also outperforming the ELMo-augmented random baseline. With ELMo, a multitask model performs best, but without it, all three multitask models are tied or outperformed by models trained on one of their constituent tasks, suggesting that our approach to multitask learning is not reliably able to produce models that productively use the knowledge taught by each training task. However, of the two non-ELMo models that perform best on the development data, the multitask model generalizes better than the single-task model on test data for tasks like STS where the test set contains new out-of-domain data. 
Conclusions This paper presents a systematic comparison of tasks and task-combinations for the pretraining of sentence-level BiLSTM encoders like those seen in ELMo and CoVe. With 40 pretraining tasks and task combinations (not counting many more ruled out early) and nine target tasks, this represents a far more comprehensive study than any seen on this problem to date. Our chief positive results are perhaps unsurprising: Language modeling works well as a pretraining task, and no other single task is consistently better. Multitask pretraining can produce results better than any single task can, and sets a new state-of-the-art among comparable models. Target task performance continues to improve with the addition of more language model data, even at large scales, suggesting that further work scaling up language model pretraining is warranted. However, a closer look at our results suggests that the pretrain-and-freeze paradigm that underlies ELMo and CoVe might not be a sound platform for future work: Some trivial baselines do strikingly well, the margins between pretraining tasks are small, and some pretraining configurations (such as MNLI $^E$ ) yield better performance with less data. This suggests that we may be nearing an upper bound on the performance that can be reached with methods like these. In addition, different tasks benefit from different forms of pretraining to a striking degree—with correlations between target tasks often low or negative—and multitask pretraining tasks fail to reliably produce models better than their best individual components. This suggests that if truly general-purpose sentence encoders are possible, our current methods cannot produce them. While further work on language modeling seems straightforward and worthwhile, the author(s) of this paper believe that the future of this line of work will require a better understanding of the ways in which neural network target task models can benefit from outside knowledge and data, and new methods for pretraining and transfer learning to allow them to do so. Hyperparameters and Optimization Details See Section UID18 for general comments on hyperparameter tuning. Multitask Learning Methods Our multitask learning experiments have three somewhat distinctive properties: (i) We mix tasks with very different amounts of training data—at the extreme, under 1,000 examples for WNLI, and over 1,000,000,000 examples from LM BWB. (ii) Our goal is to optimize the quality of the shared encoder, not the performance of any one of the tasks in the multitask mix. (iii) We mix a relatively large number of tasks, up to eighteen at once in some conditions. These conditions make it challenging but important to avoid overfitting or underfitting any of our tasks. Relatively little work has been done on this problem, so we conduct a small experiment here. All our experiments use the basic paradigm of randomly sampling a new task to train on at each step, and we experiment with two hyperparameters that can be used to control over- and underfitting: The probability with which we sample each task and the weight with which we scale the loss for each task. Our experiments follow the setup in Appendix "Hyperparameters and Optimization Details" , and do not use the ELMo BiLSTM. Diagnostic Set Results Table 6 , below, shows results on the four coarse-grained categories of the GLUE diagnostic set for all our pretraining experiments. 
This set consists of about 1000 expert-constructed examples in NLI format meant to isolate a range of relevant phenomena. Results use the target task classifier trained on the MNLI training set. No model achieves performance anywhere close to human-level performance, suggesting that either none of our pretrained models extract features that are suitable for robust reasoning over text, or that the MNLI training set and the MNLI target-task model are not able to exploit any such features that exist. See Section "Analysis and Discussion" for further discussion. While no model achieves near-human performance, the use of ELMo and other forms of unsupervised pretraining appears to be helpful on examples that highlight world knowledge and lexical-semantic knowledge, and less so on examples that highlight complex logical reasoning patterns or alternations in sentence structure. This relative weakness on sentence structure is somewhat surprising given the finding in BIBREF11 that language model pretraining is helpful for tasks involving sentence structure.
Yes
480e10e5a1b9c0ae9f7763b7611eeae9e925096b
480e10e5a1b9c0ae9f7763b7611eeae9e925096b_0
Q: Did the authors try stacking multiple convolutional layers? Text: Introduction Large-scale knowledge bases (KBs), such as YAGO BIBREF0 , Freebase BIBREF1 and DBpedia BIBREF2 , are usually databases of triples representing the relationships between entities in the form of fact (head entity, relation, tail entity) denoted as (h, r, t), e.g., (Melbourne, cityOf, Australia). These KBs are useful resources in many applications such as semantic searching and ranking BIBREF3 , BIBREF4 , BIBREF5 , question answering BIBREF6 , BIBREF7 and machine reading BIBREF8 . However, the KBs are still incomplete, i.e., missing a lot of valid triples BIBREF9 , BIBREF10 . Therefore, much research work has been devoted towards knowledge base completion or link prediction to predict whether a triple (h, r, t) is valid or not BIBREF11 . Many embedding models have proposed to learn vector or matrix representations for entities and relations, obtaining state-of-the-art (SOTA) link prediction results BIBREF12 . In these embedding models, valid triples obtain lower implausibility scores than invalid triples. Let us take the well-known embedding model TransE BIBREF13 as an example. In TransE, entities and relations are represented by $k$ -dimensional vector embeddings. TransE employs a transitional characteristic to model relationships between entities, in which it assumes that if (h, r, t) is a valid fact, the embedding of head entity $h$ plus the embedding of relation $r$ should be close to the embedding of tail entity $t$ , i.e. $v_h$ + $v_r$ $\approx $ $v_t$ (here, $v_h$ , $v_r$ and $h$0 are embeddings of $h$1 , $h$2 and $h$3 respectively). That is, a TransE score $h$4 of the valid triple (h, r, t) should be close to 0 and smaller than a score $h$5 of an invalid triple (h', r', t'). The transitional characteristic in TransE also implies the global relationships among same dimensional entries of $h$6 , $h$7 and $h$8 . Other transition-based models extend TransE to additionally use projection vectors or matrices to translate head and tail embeddings into the relation vector space, such as: TransH BIBREF14 , TransR BIBREF15 , TransD BIBREF16 , STransE BIBREF17 and TranSparse BIBREF18 . Furthermore, DISTMULT BIBREF19 and ComplEx BIBREF20 use a tri-linear dot product to compute the score for each triple. Recent research has shown that using relation paths between entities in the KBs could help to get contextual information for improving KB completion performance BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . See other embedding models for KB completion in BIBREF26 . Recently, convolutional neural networks (CNNs), originally designed for computer vision BIBREF27 , have significantly received research attention in natural language processing BIBREF28 , BIBREF29 . CNN learns non-linear features to capture complex relationships with a remarkably less number of parameters compared to fully connected neural networks. Inspired from the success in computer vision, BIBREF30 proposed ConvE—the first model applying CNN for the KB completion task. In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer. Different filters of the same $3\times 3$ shape are operated over the input matrix to output feature map tensors. These feature map tensors are then vectorized and mapped into a vector via a linear transformation. Then this vector is computed with $v_t$ via a dot product to return a score for (h, r, t). 
See a formal definition of the ConvE score function in Table 1 . It is worth noting that ConvE focuses on the local relationships among different dimensional entries in each of $v_h$ or $v_r$ , i.e., ConvE does not observe the global relationships among same dimensional entries of an embedding triple ( $v_h$ , $v_r$ , $v_t$ ), so that ConvE ignores the transitional characteristic in transition-based models, which is one of the most useful intuitions for the task. In this paper, we present ConvKB—an embedding model which proposes a novel use of CNN for the KB completion task. In ConvKB, each entity or relation is associated with an unique $k$ -dimensional embedding. Let $v_h$ , $v_r$ and $v_t$ denote $k$ -dimensional embeddings of $h$ , $r$ and $t$ , respectively. For each triple (h, r, t), the corresponding triple of $k$ -dimensional embeddings ( $v_h$ , $v_h$0 , $v_h$1 ) is represented as a $v_h$2 input matrix. This input matrix is fed to the convolution layer where different filters of the same $v_h$3 shape are used to extract the global relationships among same dimensional entries of the embedding triple. That is, these filters are repeatedly operated over every row of the input matrix to produce different feature maps. The feature maps are concatenated into a single feature vector which is then computed with a weight vector via a dot product to produce a score for the triple (h, r, t). This score is used to infer whether the triple (h, r, t) is valid or not. Our contributions in this paper are as follows: Proposed ConvKB model A knowledge base $\mathcal {G}$ is a collection of valid factual triples in the form of (head entity, relation, tail entity) denoted as $(h, r, t)$ such that $h, t \in \mathcal {E}$ and $r \in \mathcal {R}$ where $\mathcal {E}$ is a set of entities and $\mathcal {R}$ is a set of relations. Embedding models aim to define a score function $f$ giving an implausibility score for each triple $(h, r, t)$ such that valid triples receive lower scores than invalid triples. Table 1 presents score functions in previous SOTA models. We denote the dimensionality of embeddings by $k$ such that each embedding triple ( $v_h$ , $v_r$ , $v_t$ ) are viewed as a matrix $A = [v_h,v_r,v_t] \in \mathbb {R}^{k\times 3}$ . And $A_{i,:} \in \mathbb {R}^{1\times 3}$ denotes the $i$ -th row of $A$ . Suppose that we use a filter $\omega \in \mathbb {R}^{1\times 3}$ operated on the convolution layer. $\omega $ is not only aimed to examine the global relationships between same dimensional entries of the embedding triple ( $v_h$0 , $v_h$1 , $v_h$2 ), but also to generalize the transitional characteristics in the transition-based models. $v_h$3 is repeatedly operated over every row of $v_h$4 to finally generate a feature map $v_h$5 as: $$v_i = g\left(\omega \cdot {A_{i,:}} + b\right) \nonumber $$ (Eq. 4) where $b \in \mathbb {R}$ is a bias term and $g$ is some activation function such as ReLU. Our ConvKB uses different filters $\in \mathbb {R}^{1\times 3}$ to generate different feature maps. Let ${\Omega }$ and $\tau $ denote the set of filters and the number of filters, respectively, i.e. $\tau = |{\Omega }|$ , resulting in $\tau $ feature maps. These $\tau $ feature maps are concatenated into a single vector $\in \mathbb {R}^{\tau k\times 1}$ which is then computed with a weight vector ${w} \in \mathbb {R}^{\tau k\times 1}$ via a dot product to give a score for the triple $(h, r, t)$ . Figure 1 illustrates the computation process in ConvKB. 
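Before the formal definition given below, the computation just described can be written out in a few lines of NumPy: stack the three embeddings into a k x 3 matrix, apply each 1 x 3 filter row-wise with a ReLU activation, concatenate the resulting feature maps, and take a dot product with the weight vector. This is only an illustrative sketch with random placeholder parameters (and the bias term omitted); it is not the authors' implementation.

import numpy as np

def convkb_score(v_h, v_r, v_t, filters, w):
    """Score a triple: concatenate ReLU feature maps from 1x3 filters, dot with w.

    filters: array of shape (tau, 3), one filter per feature map.
    w: weight vector of shape (tau * k,).
    """
    A = np.stack([v_h, v_r, v_t], axis=1)          # k x 3 input matrix
    feature_maps = np.maximum(A @ filters.T, 0.0)  # k x tau; entry (i, j) is g(omega_j . A[i, :])
    return feature_maps.T.reshape(-1) @ w          # concatenate maps, then dot with w

rng = np.random.default_rng(0)
k, tau = 50, 4                        # toy sizes; the paper tunes k and the number of filters
v_h, v_r, v_t = rng.normal(size=(3, k))
filters = rng.normal(size=(tau, 3))
w = rng.normal(size=tau * k)
print(convkb_score(v_h, v_r, v_t, filters, w))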
Formally, we define the ConvKB score function $f$ as follows: $$f(h,r,t) = \mathsf {concat}\left(g\left([v_h,v_r,v_t]\ast {\Omega }\right)\right)\cdot {w} \nonumber $$ (Eq. 6) where ${\Omega }$ and ${w}$ are shared parameters, independent of $h$ , $r$ and $t$ ; $\ast $ denotes a convolution operator; and $\mathsf {concat}$ denotes a concatenation operator. If we only use one filter $\omega $ (i.e. using $\tau =1$ ) with a fixed bias term $b=0$ and the activation function $g(x)=|x|$ or $g(x)=x^2$ , and fix $\omega = [1, 1, -1]$ and ${w} = \textbf {1}$ during training, ConvKB reduces to the plain TransE model BIBREF13 . So our ConvKB model can be viewed as an extension of TransE to further model global relationships. We use the Adam optimizer BIBREF32 to train ConvKB by minimizing the loss function $\mathcal {L}$ BIBREF20 with $L_2$ regularization on the weight vector ${w}$ of the model: $$\mathcal {L} = \sum _{(h,r,t) \in \lbrace \mathcal {G} \cup \mathcal {G}^{\prime }\rbrace } \log \left(1 + \exp \left(l_{(h,r,t)} \cdot f\left(h,r,t\right)\right)\right) + \frac{\lambda }{2}\Vert {w}\Vert ^2_2$$ in which $l_{(h,r,t)} = 1$ for $(h,r,t)\in \mathcal {G}$ and $l_{(h,r,t)} = -1$ for $(h,r,t)\in \mathcal {G}^{\prime }$ ; here $\mathcal {G}^{\prime }$ is a collection of invalid triples generated by corrupting valid triples in $\mathcal {G}$ . Datasets We evaluate ConvKB on two benchmark datasets: WN18RR BIBREF30 and FB15k-237 BIBREF31 . WN18RR and FB15k-237 are correspondingly subsets of two common datasets WN18 and FB15k BIBREF13 . As noted by BIBREF31 , WN18 and FB15k are easy because they contain many reversible relations. So knowing relations are reversible allows us to easily predict the majority of test triples, e.g. state-of-the-art results on both WN18 and FB15k are obtained by using a simple reversal rule as shown in BIBREF30 . Therefore, WN18RR and FB15k-237 are created to not suffer from this reversible relation problem in WN18 and FB15k, for which the knowledge base completion task is more realistic. Table 2 presents the statistics of WN18RR and FB15k-237. Evaluation protocol In the KB completion or link prediction task BIBREF13 , the purpose is to predict a missing entity given a relation and another entity, i.e., inferring $h$ given $(r, t)$ or inferring $t$ given $(h, r)$ . The results are calculated based on ranking the scores produced by the score function $f$ on test triples. Following BIBREF13 , for each valid test triple $(h, r, t)$ , we replace either $h$ or $t$ by each of the other entities in $\mathcal {E}$ to create a set of corrupted triples. We use the “Filtered” setting protocol BIBREF13 , i.e., not taking any corrupted triples that appear in the KB into account. We rank the valid test triple and corrupted triples in ascending order of their scores. We employ three common evaluation metrics: mean rank (MR), mean reciprocal rank (MRR), and Hits@10 (i.e., the proportion of the valid test triples ranking in top 10 predictions). Lower MR, higher MRR or higher Hits@10 indicate better performance. Training protocol We use the common Bernoulli trick BIBREF14 , BIBREF15 to generate the head or tail entities when sampling invalid triples. We also use entity and relation embeddings produced by TransE to initialize entity and relation embeddings in ConvKB. 
We employ a TransE implementation available at: https://github.com/datquocnguyen/STransE. We train TransE for 3,000 epochs, using a grid search of hyper-parameters: the dimensionality of embeddings $k \in \lbrace 50, 100\rbrace $ , SGD learning rate $\in \lbrace 1e^{-4}, 5e^{-4}, 1e^{-3}, 5e^{-3}\rbrace $ , $\mathit {l}_1$ -norm or $\mathit {l}_2$ -norm, and margin $\gamma \in \lbrace 1, 3, 5, 7\rbrace $ . The highest Hits@10 scores on the validation set are when using $\mathit {l}_1$ -norm, learning rate at $5e^{-4}$ , $\gamma $ = 5 and $k$ = 50 for WN18RR, and using $\mathit {l}_1$ -norm, learning rate at $\in \lbrace 1e^{-4}, 5e^{-4}, 1e^{-3}, 5e^{-3}\rbrace $0 , $\in \lbrace 1e^{-4}, 5e^{-4}, 1e^{-3}, 5e^{-3}\rbrace $1 = 1 and k = 100 for FB15k-237. To learn our model parameters including entity and relation embeddings, filters $\omega $ and the weight vector ${w}$ , we use Adam BIBREF32 and select its initial learning rate $\in \lbrace 5e^{-6}, 1e^{-5}, 5e^{-5}, 1e^{-4}, 5e^{-4}\rbrace $ . We use ReLU as the activation function $g$ . We fix the batch size at 256 and set the $L_2$ -regularizer $\lambda $ at 0.001 in our objective function. The filters $\omega $ are initialized by a truncated normal distribution or by $[0.1, 0.1, -0.1]$ . We select the number of filters $\tau \in \lbrace 50, 100, 200, 400, 500\rbrace $ . We run ConvKB up to 200 epochs and use outputs from the last epoch for evaluation. The highest Hits@10 scores on the validation set are obtained when using $k$ = 50, ${w}$0 , the truncated normal distribution for filter initialization, and the initial learning rate at ${w}$1 on WN18RR; and k = 100, ${w}$2 , ${w}$3 for filter initialization, and the initial learning rate at ${w}$4 on FB15k-237. Main experimental results Table 3 compares the experimental results of our ConvKB model with previous published results, using the same experimental setup. Table 3 shows that ConvKB obtains the best MR and highest Hits@10 scores on WN18RR and also the highest MRR and Hits@10 scores on FB15k-237. ConvKB does better than the closely related model TransE on both experimental datasets, especially on FB15k-237 where ConvKB gains significant improvements of $347-257 = 90$ in MR (which is about 26% relative improvement) and $0.396 - 0.294 = 0.102$ in MRR (which is 34+% relative improvement), and also obtains $51.7 - 46.5 = 5.2$ % absolute improvement in Hits@10. Previous work shows that TransE obtains very competitive results BIBREF21 , BIBREF38 , BIBREF20 , BIBREF25 . However, when comparing the CNN-based embedding model ConvE with other models, BIBREF30 did not experiment with TransE. We reconfirm previous findings that TransE in fact is a strong baseline model, e.g., TransE obtains better MR and Hits@10 than ConvE on WN18RR. ConvKB obtains better scores than ConvE on both datasets (except MRR on WN18RR and MR on FB15k-237), thus showing the usefulness of taking transitional characteristics into accounts. In particular, on FB15k-237, ConvKB achieves improvements of $0.394-0.316 = 0.078$ in MRR (which is about 25% relative improvement) and $51.7 - 49.1 = 2.6$ % in Hits@10, while both ConvKB and ConvE produce similar MR scores. ConvKB also obtains 25% relatively higher MRR score than the relation path-based model KB $_{LRN}$ on FB15k-237. In addition, ConvKB gives better Hits@10 than KB $_{LRN}$ , however, KB $_{LRN}$ gives better MR than ConvKB. We plan to extend ConvKB with relation path information to obtain better link prediction performance in future work. 
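The MR, MRR and Hits@10 figures above follow the filtered ranking protocol described in the evaluation protocol section; a sketch of that computation is given below. The score_fn interface and the toy data are assumptions; a real evaluation would loop over every entity in the KB.

def filtered_ranking(test_triples, entities, known_triples, score_fn, hits_at=10):
    """Compute MR, MRR and Hits@k, filtering out corrupted triples that are known facts.

    score_fn(h, r, t) returns an implausibility score (lower = more plausible).
    known_triples: all valid triples from train/valid/test, used for filtering.
    """
    ranks = []
    for h, r, t in test_triples:
        for corrupt_head in (True, False):
            candidates = [(h, r, t)]
            for e in entities:
                triple = (e, r, t) if corrupt_head else (h, r, e)
                if triple == (h, r, t) or triple in known_triples:
                    continue                      # "Filtered" setting: skip other known facts
                candidates.append(triple)
            ranked = sorted(candidates, key=lambda x: score_fn(*x))
            ranks.append(ranked.index((h, r, t)) + 1)
    n = len(ranks)
    return (sum(ranks) / n,                            # MR
            sum(1.0 / rank for rank in ranks) / n,     # MRR
            sum(rank <= hits_at for rank in ranks) / n)  # Hits@k

entities = ["Melbourne", "Sydney", "Australia"]
gold = {("Melbourne", "cityOf", "Australia")}
score = lambda h, r, t: 0.0 if (h, r, t) in gold else 1.0  # toy scorer
print(filtered_ranking(list(gold), entities, gold, score, hits_at=1))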
Conclusion In this paper, we propose a novel embedding model, ConvKB, for the knowledge base completion task. ConvKB applies a convolutional neural network to explore the global relationships among the same-dimensional entries of the entity and relation embeddings, so that ConvKB generalizes the transitional characteristics of the transition-based embedding models. Experimental results show that our model ConvKB outperforms other state-of-the-art models on the two benchmark datasets WN18RR and FB15k-237. Our code is available at: https://github.com/daiquocnguyen/ConvKB. We also plan to extend ConvKB to new applications where data can be formulated in the form of triples. For example, inspired by the work of BIBREF39 on search personalization, we can apply ConvKB to model user-oriented relationships between submitted queries and documents returned by search engines, i.e., modeling triple representations (query, user, document).

Acknowledgments This research was partially supported by the Australian Research Council (ARC) Discovery Grant Project DP160103934.
No
056fc821d1ec1e8ca5dc958d14ea389857b1a299
056fc821d1ec1e8ca5dc958d14ea389857b1a299_0
Q: How many feature maps are generated for a given triple?
3 feature maps for a given triple
974868e4e22f14766bcc76dc4927a7f2795dcd5e
974868e4e22f14766bcc76dc4927a7f2795dcd5e_0
Q: How does the number of parameters compare to other knowledge base completion models?
Unanswerable