Columns: id (string, length 40), pid (string, length 42), input (string, 8.37k–169k characters), output (string, 1–1.63k characters)
c84590ba32df470a7c5343d8b99e541b217f10cf
c84590ba32df470a7c5343d8b99e541b217f10cf_0
Q: What datasets are used? Text: Introduction The detection of offensive language has become an important topic as the online community has grown, and so too has the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated by a desire to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths both to avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1. In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and to the “Build it Break it” approach that subsequently adapted it for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers”, were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations. We show that such an approach provides increasingly robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13. Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community. Related Work The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13. To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, BIBREF5 recently compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. 
The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, the Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examines only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems move beyond classification of singular utterances and use contextual information to help identify offensive language. Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best-performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large pre-trained transformer model, which has been shown to perform well on many downstream NLP tasks BIBREF17. The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using human breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower-quality model cannot. Nevertheless, algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset. As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25, where language-to-action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker's. We did not gamify our approach in this way, but our approach still has commonalities in the round-based improvement of models through crowdworker interaction. Baselines: Wikipedia Toxic Comments In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results. Baselines: Wikipedia Toxic Comments ::: Wikipedia Toxic Comments The Wikipedia Toxic Comments dataset (WTC) was collected in a joint effort by the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data was extracted from Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. 
We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15, which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into an offensive class while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% is dedicated to the train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4. Baselines: Wikipedia Toxic Comments ::: Models We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base". The transformer is composed of 12 layers with a hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it to the fastText classifier BIBREF27, for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus derived from Wikipedia. A linear layer is then applied on top to yield a binary classification. Baselines: Wikipedia Toxic Comments ::: Experiments We compare the two aforementioned models with BIBREF26, who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted-F1, i.e. the sum of the F1 scores of each class weighted by their frequency in the dataset. We also report the F1 of the offensive class, which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 always refers to the offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted-F1 is closer to the F1 score of the safe class, while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently. Build it Break it Fix it Method In order to train models that are robust to adversarial behavior, we posit that it is crucial to collect and train on data that was collected in an adversarial manner. We propose the following automated build it, break it, fix it algorithm: Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$. Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive. Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks. Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again. See Figure FIGREF6 for a visualization of this process. 
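To make the loop above concrete, the following Python sketch outlines the procedure as described; the training and crowdsourcing helpers are passed in as hypothetical callables rather than being the authors' released implementation.

```python
# Minimal sketch of the build it / break it / fix it loop described above.
# `train_classifier` and `collect_adversarial_round` are hypothetical callables
# standing in for model training and crowdsourced data collection.

def build_break_fix(train_classifier, collect_adversarial_round,
                    wtc_data, num_rounds=3, examples_per_round=1000):
    # Build it: A_0 is the BERT-based baseline trained on Wikipedia Toxic Comments.
    models = [train_classifier(train_data=[wtc_data])]
    adversarial_rounds = []

    for i in range(1, num_rounds + 1):
        # Break it: crowdworkers submit messages that the deployed models
        # (A_0 and A_{i-1}) label as safe but that the workers consider offensive.
        broken = collect_adversarial_round(
            models_to_beat=[models[0], models[-1]],
            target_size=examples_per_round,
        )
        adversarial_rounds.append(broken)

        # Fix it: retrain on WTC plus all adversarial rounds collected so far,
        # so that A_i is trained on rounds n <= i.
        models.append(train_classifier(train_data=[wtc_data] + adversarial_rounds))

    return models
```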
Build it Break it Fix it Method ::: Break it Details ::: Definition of offensive Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum. Build it Break it Fix it Method ::: Break it Details ::: Crowdworker Task We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate them to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9. Build it Break it Fix it Method ::: Break it Details ::: Models to Break During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive. Build it Break it Fix it Method ::: Fix it Details During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round. The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks. Single-Turn Task We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history. Single-Turn Task ::: Data Collection ::: Adversarial Collection We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method. Single-Turn Task ::: Data Collection ::: Standard Collection In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method – which we refer to as the standard method – we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same. 
In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$. Single-Turn Task ::: Data Collection ::: Task Formulation Details Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we also add safe examples to it. The “safe data" is composed of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30, which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before. For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks. Single-Turn Task ::: Data Collection ::: Model Training Details Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter, which is also tuned on the validation set. Finally, after training the weights with the cross-entropy loss, we also adjust the final bias using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets, respectively. For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$. Single-Turn Task ::: Experimental Results We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained. Single-Turn Task ::: Experimental Results ::: Break it Phase Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations. We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data; such examples contain no individually offensive words, and their offensive nature is only captured by reading the entire sentence. Other examples require sophisticated world knowledge, such as the fact that many cultures consider eating cats to be offensive. 
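As a rough illustration of the surface statistics just mentioned (the obscenity rate and the frequency of the token “not”), the sketch below computes them over a list of collected examples; the obscenity word list is a placeholder, since the authors' list is not reproduced here.

```python
# Sketch of the surface statistics discussed above: the fraction of examples
# containing a listed obscenity and the fraction containing the token "not".
# OBSCENITY_LIST is a placeholder for the list of common English obscenities
# and other bad words used by the authors (not reproduced here).

import re

OBSCENITY_LIST = {"..."}  # placeholder word list

def tokenize(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def surface_stats(examples):
    n = len(examples)
    has_obscenity = sum(bool(tokenize(t) & OBSCENITY_LIST) for t in examples)
    has_not = sum("not" in tokenize(t) for t in examples)
    return {"% with obscenity": 100.0 * has_obscenity / n,
            "% with 'not'": 100.0 * has_not / n}
```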
To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge. We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix. Single-Turn Task ::: Experimental Results ::: Fix it Phase Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$. Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior. Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind. Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase. Multi-Turn Task In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?" Multi-Turn Task ::: Task Implementation To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. 
The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier. We collected 3000 offensive examples in this manner. As in the single-turn set-up, we combine this data with safe examples with a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30. Multi-Turn Task ::: Models To measure the impact of the context, we train models on this dataset with and without the given context. We use the fastText and the BERT-based model described in Section SECREF3. In addition, we build a BERT-based model variant that splits the last utterance (to be classified) and the rest of the history into two dialogue segments. Each segment is assigned an embedding and the input provided to the transformer is the sum of the word embedding and the segment embedding, replicating the setup of the Next Sentence Prediction task that is used in the training of BERT BIBREF17. Multi-Turn Task ::: Experimental Results ::: Break it Phase During data collection, we observed that workers had an easier time bypassing the classifiers than in the single-turn set-up. See Table TABREF27. In the single-turn set-up, the task at hand gets harder with each round – the average score of the crowdworkers decreases from $4.56$ in round 1 to $1.6$ in round 3. Despite the fact that we are using our best single-turn classifier in the multi-turn set-up ($A_3$), the task becomes easier: the average score per round is $2.89$. This is because the workers are often able to use contextual information to suggest something offensive rather than say something offensive outright. See examples of submitted messages in Table TABREF29. Having context also allows one to express something offensive more efficiently: the messages supplied by workers in the multi-turn setting were significantly shorter on average; see Table TABREF21. Multi-Turn Task ::: Experimental Results ::: Fix it Phase During training, we multi-tasked the multi-turn adversarial task with the Wikipedia Toxic Comments task as well as the single-turn adversarial and standard tasks. We average the results of our best models from five different training runs. The results of these experiments are given in Table TABREF31. As we observed during the training of our baselines in Section SECREF3, the fastText model architecture is ill-equipped for this task relative to our BERT-based architectures. The fastText model performs worse given the dialogue context (an average of 23.56 offensive-class F1 relative to 37.1) than without, likely because its bag-of-embeddings representation is too simple to take the context into account. We see the opposite with our BERT-based models, indicating that more complex models are able to effectively use the contextual information to detect whether the response is safe or offensive. 
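A minimal sketch of the two-segment input described above, using the Hugging Face BERT tokenizer purely for illustration (the authors' exact implementation may differ): encoding the dialogue history and the utterance to classify as a sentence pair assigns them different segment (token type) ids, mirroring the Next Sentence Prediction input format.

```python
# Sketch: encode (dialogue history, utterance to classify) as two BERT segments.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

history = "Should I hurt myself?"
response = "Yes, you should definitely do it!"

# Sentence-pair encoding yields [CLS] history [SEP] response [SEP], with
# token_type_ids 0 for the history segment and 1 for the response segment.
encoded = tokenizer(history, response, return_tensors="pt", truncation=True)
print(encoded["input_ids"].shape, encoded["token_type_ids"][0].tolist())
```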
With the simple BERT-based architecture (that does not split the context and the utterance into separate segments), we observe an average of a 3.7 point increase in offensive-class F1 with the addition of context. When we use segments to separate the context from the utterance we are trying to classify, we observe an average of a 7.4 point increase in offensive-class F1. Thus, it appears that the use of contextual information to identify offensive language is critical to making these systems robust, and improving the model architecture to take account of this has large impact. Conclusion We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account. In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31. Additional Experimental Results ::: Additional Break It Phase Results Additional results regarding the crowdworkers' ability to “beat" the classifiers are reported in Table TABREF38. In particular, we report the percent of messages sent by the crowdsource workers that were marked safe and offensive by both $A_0$ and $A_{i-1}$. We note that very infrequently ($<1\%$ of the time) a message was marked offensive by $A_0$ but safe by $A_{i-1}$, showing that $A_0$ was relatively ineffective at catching adversarial behavior. In Table TABREF39, we report the categorization of examples into classes of offensive language from the blind human annotation of round 1 of the single-turn adversarial and standard data. We observe that in the adversarial set-up, there were fewer examples of bullying language but more examples targeting a protected class. Additional Experimental Results ::: Additional Fix It Phase Results We report F1, precision, and recall for the offensive class, as well as weighted-F1 for models $S_i$ and $A_i$ on the single-turn standard and adversarial tasks in Table TABREF41. Data Collection Interface Details During the adversarial data collection, we asked users to generate a message that “[the user believes] is not ok but that our system marks as ok," using the definition of “ok" and “not ok" described in the paper (i.e. “ok to send in a friendly conversation with someone you just met online"). In order to generate a variety of responses, during the single-turn adversarial collection, we provided users with a topic to base their response on 50% of the time. 
The topics were pulled from a set of 1365 crowd-sourced open-domain dialogue topics. Examples include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger. Users were able to earn up to five points per round, with two tries for each point (to allow them to get a sense of the models' weaknesses). Users were informed of their score after each message, and provided with bonuses for good effort. The points did not affect the user's compensation, but rather, were provided as a way of gamifying the data collection, as this has been shown to increase data quality BIBREF25. Please see an example image of the chat interface in Figure FIGREF42.
The Wikipedia Toxic Comments dataset
88e9e5ad0e4c369b15d81a4e18f7d12ff8fa9f1b
88e9e5ad0e4c369b15d81a4e18f7d12ff8fa9f1b_0
Q: Is the origin of the dialogues in corpus some video game and what game is that? Text: Introduction The recent adoption of deep learning methods in natural language generation (NLG) for dialogue systems resulted in an explosion of neural data-to-text generation models, which depend on large training data. These are typically trained on one of the few parallel corpora publicly available, in particular the E2E BIBREF0 and the WebNLG BIBREF1 datasets. Crowdsourcing large NLG datasets tends to be a costly and time-consuming process, making it impractical outside of task-oriented dialogue systems. At the same time, current neural NLG models struggle to replicate the high language diversity of the training sentences present in these large datasets, and instead they learn to produce the same generic type of sentences as with considerably less training data BIBREF2, BIBREF3, BIBREF4. Motivated by the rising interest in open-domain dialogue systems and conversational agents, we present ViGGO – a smaller but more comprehensive dataset in the video game domain, introducing several generalizable dialogue acts (DAs), making it more suitable for training versatile and more conversational NLG models. The dataset provides almost 7K pairs of structured meaning representations (MRs) and crowdsourced reference utterances about more than 100 video games. Table TABREF2 lists three examples. Video games are a vast entertainment topic that can naturally be discussed in a casual conversation, similar to movies and music, yet in the dialogue systems community it does not enjoy popularity anywhere close to that of the latter two topics BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. Restaurants have served as the go-to topic in data-to-text NLG for decades, as they offer a sufficiently large set of various attributes and corresponding values to talk about. While they certainly can be a topic of a casual conversation, the existing restaurant datasets BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 are geared more toward a task-oriented dialogue where a system tries to narrow down a restaurant based on the user's preferences and ultimately give a recommendation. Our new video game dataset is designed to be more conversational, and to thus enable neural models to produce utterances more suitable for an open-domain dialogue system. Even the most recent addition to the publicly available restaurant datasets for data-to-text NLG, the E2E dataset BIBREF0, suffers from the lack of a conversational aspect. It has become popular, thanks to its unprecedented size and multiple reference utterances per MR, for training end-to-end neural models, yet it only provides a single DA type. In contrast with the E2E dataset, ViGGO presents utterances of 9 different DAs. Other domains have been represented by task-oriented datasets with multiple DA types, for example the Hotel, Laptop, and TV datasets BIBREF16, BIBREF17. Nevertheless, the DAs in these datasets vary greatly in complexity, and their distribution is thus heavily skewed, typically with two or three similar DAs comprising almost the entire dataset. In our video game dataset, we omitted simple DAs, in particular those that do not require any slots, such as greetings or short prompts, and focused on a set of substantial DAs only. 
The main contribution of our work is thus a new parallel data-to-text NLG corpus that (1) is more conversational, rather than information seeking or question answering, and thus more suitable for an open-domain dialogue system, (2) represents a new, unexplored domain which, however, has excellent potential for application in conversational agents, and (3) has high-quality, manually cleaned human-produced utterances. The ViGGO Dataset ViGGO features more than 100 different video game titles, whose attributes were harvested using free API access to two of the largest online video game databases: IGDB and GiantBomb. Using these attributes, we generated a set of 2,300 structured MRs. The human reference utterances for the generated MRs were then crowdsourced using vetted workers on the Amazon Mechanical Turk (MTurk) platform BIBREF18, resulting in 6,900 MR-utterance pairs altogether. With the goal of creating a clean, high-quality dataset, we strived to obtain reference utterances with correct mentions of all slots in the corresponding MR through post-processing. The ViGGO Dataset ::: Meaning Representations The MRs in the ViGGO dataset range from 1 to 8 slot-value pairs, and the slots come from a set of 14 different video game attributes. Table TABREF6 details how these slots may be distributed across the 9 different DAs. The inform DA, represented by 3,000 samples, is the most prevalent one, as the average number of slots it contains is significantly higher than that of all the other DAs. Figure FIGREF7 visualizes the MR length distribution across the entire dataset. The slots can be classified into 5 general categories covering most types of information MRs typically convey in data-to-text generation scenarios: Boolean, Numeric, Scalar, Categorical, and List. The first 4 categories are common in other NLG datasets, such as E2E, Laptop, TV, and Hotel, while the List slots are unique to ViGGO. List slots have values which may comprise multiple items from a discrete list of possible items. The ViGGO Dataset ::: Utterances With neural language generation in mind, we crowdsourced 3 reference utterances for each MR so as to provide the models with the information about how the same content can be realized in multiple different ways. At the same time, this allows for a more reliable automatic evaluation by comparing the generated utterances with a set of different references each, covering a broader spectrum of correct ways of expressing the content given by the MR. The raw data, however, contains a significant amount of noise, as is inevitable when crowdsourcing. We therefore created and enforced a robust set of heuristics and regular expressions to account for typos, grammatical errors, undesirable abbreviations, unsolicited information, and missing or incorrect slot realizations. The ViGGO Dataset ::: Data Collection The crowdsourcing of utterances on MTurk took place in three stages. After collecting one third of the utterances, we identified a pool of almost 30 workers who wrote the most diverse and natural-sounding sentences in the context of video games. We then filtered out all utterances of poor quality and had the qualified workers write new ones for the corresponding inputs. Finally, the remaining two thirds of utterances were completed by these workers exclusively. For each DA we created a separate task in order to minimize the workers' confusion. 
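For concreteness, a ViGGO-style MR can be thought of as a dialogue act plus a set of slot-value pairs; the flat serialization and the specific slot values below are illustrative assumptions rather than the dataset's exact syntax.

```python
# Illustrative representation of a ViGGO-style MR: a dialogue act plus
# 1-8 slot-value pairs drawn from the 14 video game attributes. The
# serialization format and the concrete values are assumptions.

from dataclasses import dataclass
from typing import Dict

@dataclass
class MeaningRepresentation:
    da: str                  # one of the 9 dialogue acts, e.g. "inform"
    slots: Dict[str, str]    # slot-value pairs

    def serialize(self) -> str:
        inner = ", ".join(f"{k}[{v}]" for k, v in self.slots.items())
        return f"{self.da}({inner})"

mr = MeaningRepresentation(
    da="inform",
    slots={
        "name": "Little Nightmares",
        "developer": "Tarsier Studios",
        "genres": "adventure, puzzle",          # a List slot: comma-separated items
        "player_perspective": "side view",
    },
)
print(mr.serialize())
```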
The instructions contained several different examples, as well as counter-examples, and they situated the DA in the context of a hypothetical conversation. The video game attributes to be used were provided for the workers in the form of a table, with their order shuffled so as to avoid any kind of bias. Further details on the data collection and cleaning are included in the Appendix. The ViGGO Dataset ::: Train/Validation/Test Split Despite the fact that the ViGGO dataset is not very large, we strived to make the test set reasonably challenging. To this end, we ensured that, after delexicalizing the name and the developer slots, there were no common MRs between the train set and either of the validation or test set. We maintained a similar MR length and slot distribution across the three partitions. The distribution of DA types, on the other hand, is skewed slightly toward fewer inform DA instances and a higher proportion of the less prevalent DAs in the validation and test sets (see Figure FIGREF11). With the exact partition sizes indicated in the diagram, the final ratio of samples is approximately $7.5:1:1.5$. The ViGGO Dataset ::: ViGGO vs. E2E Our new dataset was constructed under different constraints than the E2E dataset. First, in ViGGO we did not allow any omissions of slot mentions, as those are not justifiable for data-to-text generation with no previous context, and it makes the evaluation ambiguous. Second, the MRs in ViGGO are grounded by real video game data, which can encourage richer and more natural-sounding reference utterances. While ViGGO is only 13% the size of the E2E dataset, the lexical diversity of its utterances is 77% of that in the E2E dataset, as indicated by the “delexicalized vocabulary” column in Table TABREF13. Part of the reason naturally is the presence of additional DAs in ViGGO, and therefore we also indicate the statistics in Table TABREF13 for the inform samples only. The average inform utterance length in ViGGO turns out to be over 30% greater, in terms of both words and sentences per utterance. Finally, we note that, unlike the E2E dataset, our test set does not place any specific emphasis on longer MRs. While the average number of slots per MR in the inform DAs are comparable to the E2E dataset, in general the video game MRs are significantly shorter. This is by design, as shorter, more focused responses are more conversational than consistently dense utterances. Baseline System Evaluation The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model. For decoding we employ beam search of width 10 ($\alpha = 1.0$). The generated candidates are then reranked according to the heuristically determined slot coverage score. Before training the model on the ViGGO dataset, we confirmed on the E2E dataset that it performed on par with, or even slightly better than, the strong baseline models from the E2E NLG Challenge, namely, TGen BIBREF20 and Slug2Slug BIBREF21. Baseline System Evaluation ::: Automatic Metrics We evaluate our model's performance on the ViGGO dataset using the following standard NLG metrics: BLEU BIBREF22, METEOR BIBREF23, ROUGE-L BIBREF24, and CIDEr BIBREF25. Additionally, with our heuristic slot error rate (SER) metric we approximate the percentage of failed slot realizations (i.e., missed, incorrect, or hallucinated) across the test set. The results are shown in Table TABREF16. 
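A highly simplified sketch of the slot-coverage reranking and the heuristic SER metric described above; the real heuristics rely on per-slot rules and regexes, so the plain substring matching here only approximates the missed-slot case and ignores incorrect or hallucinated values.

```python
# Simplified slot-coverage reranking and slot error rate (SER) heuristic.

def slot_coverage(utterance: str, slots: dict) -> float:
    """Fraction of slot values whose surface form appears in the utterance."""
    text = utterance.lower()
    realized = sum(str(value).lower() in text for value in slots.values())
    return realized / max(len(slots), 1)

def rerank(candidates, slots):
    # Beam-search candidates are reranked by slot coverage; sorted() is stable,
    # so ties keep the original beam order.
    return sorted(candidates, key=lambda c: slot_coverage(c, slots), reverse=True)

def slot_error_rate(pairs):
    """Approximate SER: share of slots not realized across (utterance, slots) pairs."""
    total = sum(len(slots) for _, slots in pairs)
    missed = sum(
        sum(str(v).lower() not in utt.lower() for v in slots.values())
        for utt, slots in pairs
    )
    return missed / max(total, 1)
```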
Baseline System Evaluation ::: Human Evaluation We let two expert annotators with no prior knowledge of the ViGGO dataset evaluate the outputs of our model. Their task was to rate 240 shuffled utterances (120 generated utterances and 120 human references) each on naturalness and coherence using a 5-point Likert scale. We define naturalness as a measure of how much one would expect to encounter an utterance in a conversation with a human, as opposed to sounding robotic, while coherence measures its grammaticality and fluency. Out of the 120 MRs in each partition, 40 were of the inform type, with the other 8 DAs represented by 10 samples each. In addition to that, we had the annotators rate a sample of 80 utterances from the E2E dataset (40 generated and 40 references) as a sort of a baseline for the human evaluation. With both datasets, our model's outputs were highly rated on both naturalness and coherence (see Table TABREF18). The scores for the ViGGO utterances were overall higher than those for the E2E ones, which we understand as an indication of the video game data being more fluent and conversational. At the same time, we observed that the utterances generated by our model tended to score higher than the reference utterances, though significantly more so for the E2E dataset. This is likely a consequence of the ViGGO dataset being cleaner and less noisy than the E2E dataset. In an additional evaluation of ViGGO, we asked the annotators to classify the utterance samples into the 9 DA groups. For this task they were provided with a brief description of each DA type. The annotators identified the DA incorrectly in only 7% of the samples, which we interpret as a confirmation that our DAs are well-defined. Most of the mistakes can be ascribed to the inherent similarity of the recommend and the suggest DA, as well as to our model often generating give_opinion utterances that resemble the inform ones. Baseline System Evaluation ::: Qualitative Analysis Among all 9 DAs, the one posing the greatest challenge for our model was give_opinion, due to its high diversity of reference utterances. Despite the occasional incoherence, it learned to produce rich and sensible utterances, for instance “Little Nightmares is a pretty good game. Tarsier Studios is a talented developer and the side view perspective makes it easy to play.”. Since our baseline model does not implement any form of a copy mechanism, it fails on instances with out-of-vocabulary terms, such as the values of the specifier slot in the test set. These, in fact, account for almost half of the errors indicated by the SER metric in Table TABREF16. Therefore, more robust models have good potential for improving on our scores. Discussion In Table TABREF20 we demonstrate how the 9 DAs of the ViGGO dataset can support a natural multi-turn exchange on the topic of video games, as a part of a longer casual conversation on different topics. One caveat of using a language generator trained on this dataset in a dialogue system as-is is that multiple subsequent turns discussing the same video game would be repeating its full name. ViGGO was designed for grounded generation but without context, and therefore it is up to the dialogue manager to ensure that pronouns are substituted for the names whenever it would sound more natural in a dialogue. Alternately, the dataset can easily be augmented with automatically constructed samples which omit the name slot in the MR and replace the name with a pronoun in the reference utterance. 
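A minimal sketch of the augmentation suggested above: drop the name slot from the MR and substitute a pronoun for the game's name in the reference utterance. The (dialogue act, slot dict) representation and the choice of “it” as the pronoun are simplifying assumptions, and capitalization fix-up is omitted.

```python
# Sketch: omit the name slot and replace the game's name with a pronoun.
import re

def augment_with_pronoun(da: str, slots: dict, reference: str, pronoun: str = "it"):
    if "name" not in slots:
        return None  # nothing to substitute
    name = slots["name"]
    new_slots = {k: v for k, v in slots.items() if k != "name"}
    new_reference = re.sub(re.escape(name), pronoun, reference, flags=re.IGNORECASE)
    return da, new_slots, new_reference

print(augment_with_pronoun(
    "give_opinion",
    {"name": "Little Nightmares", "rating": "good"},
    "Little Nightmares is a pretty good game.",
))
# -> ('give_opinion', {'rating': 'good'}, 'it is a pretty good game.')
```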
Conclusion In this paper we presented a new parallel corpus for data-to-text NLG, which contains 9 dialogue acts, making it more conversational than other similar datasets. The crowdsourced utterances were thoroughly cleaned in order to obtain high-quality human references, which we hope will support the recent trend in research to train neural models on small but high-quality data, like humans can. This could possibly be achieved by transferring fundamental knowledge from larger available corpora, such as the E2E dataset, but perhaps by other, completely new, methods. Appendix ::: Additional ViGGO Dataset Examples In Table TABREF22 we present one example of each DA in the ViGGO dataset, including the examples given in Table TABREF2. Appendix ::: Slot Categories In Section SECREF5 we mentioned that the slots in the ViGGO dataset can be classified into 5 general categories. Here we provide more detailed descriptions of the categories: Boolean – binary value, such as “yes”/“no” or “true”/“false” (e.g., has_multiplayer or available_on_steam), Numeric – value is a number or contains number(s) as the salient part (e.g., release_year or exp_release_date), Scalar – values are on a distinct scale (e.g., rating or esrb), Categorical – takes on virtually any value, typically coming from a certain category, such as names or types (e.g., name or developer), List – similar to categorical, where the value can, however, consist of multiple individual items (e.g., genres or player_perspective). Note that in ViGGO the items in the value of a List slot are comma-separated, and therefore the individual items must not contain a comma. There are no restrictions as to whether the values are single-word or multi-word in any of the categories. Appendix ::: Data Collection When generating the MRs for the inform DA, we fixed the slot ratios: the name and genres slots were mandatory in every MR, the player_perspective and release_year were enforced in about half of the MRs, while the remaining slots are present in about 25% of the MRs. At the same time we imposed two constraints on the slot combinations: (1) whenever one of the Steam, Linux or Mac related boolean slots is present in an MR, the platforms slot must be included too, and (2) whenever either of the Linux or Mac slots was picked for an MR, the other one was automatically added too. These two constraints were introduced so as to encourage reference utterances with natural aggregations and contrast relations. The remaining 8 DAs, however, contain significantly fewer slots each (see Table TABREF6). We therefore decided to have the MTurk workers select 5 unique slot combinations for each given video game before writing the corresponding utterances. Since for these DAs we collected less data, we tried to ensure in this way that we have a sufficient number of samples for those slot combinations that are most natural to be mentioned in each of the DAs. While fixing mandatory slots for each DA, we instructed the workers to choose 1 or 2 additional slots depending on the task. The data collection for MRs with only 1 additional slot and for those with 2 was performed separately, so as to prevent workers from taking the easy way out by always selecting just a single slot, given the option. Leaving the slot selection to crowdworkers yields a frequency distribution of all slot combinations, which presumably indicates the suitability of different slots to be mentioned together in a sentence. 
This meta-information can be made use of in a system's dialogue manager to sample from the observed slot combination distributions instead of sampling randomly or hard-coding the combinations. Figure FIGREF30 shows the distributions of the 8 slot pairs most commonly mentioned together in different DAs. These account for 53% of the selections among the 6 DAs that can take 2 additional slots besides the mandatory ones. We can observe some interesting trends in the distributions, such as that the developer + release_year combination was the most frequent one in the confirm DA, while fairly rare in most of the other DAs. This might be because this pair of a game's attributes is arguably the next best identifier of a game after its name. Appendix ::: Dataset Cleaning A large proportion of the raw data collected contained typos and various errors, as is inevitable when crowdsourcing. We took the following three steps to clean the data. First, we used regular expressions to enforce several standardization policies regarding special characters, punctuation, and the correction of undesired abbreviations/misspellings of standard domain-specific terms (e.g., we would change terms like “Play station” or “PS4” to the uniform “PlayStation”). At the same time, we removed or enforced hyphens uniformly in certain terms, for example, “single-player”. Although phrases such as “first person” should correctly have a hyphen when used as adjective, the turkers used this rule very inconsistently. In order to avoid model outputs being penalized during the evaluation by the arbitrary choice of a hyphen presence or absence in the reference utterances, we decided to remove the hyphen in all such phrases regardless of the noun/adjective use. Second, we developed an extensive set of heuristics to identify slot-related errors. This process revealed the vast majority of missing or incorrect slot mentions, which we subsequently fixed according to the corresponding MRs. Turkers would sometimes also inject a piece of information which was not present in the MR, some of which is not even represented by any of the slots, e.g., plot or main characters. We remove this extraneous information from the utterances so as to avoid confusing the neural model. This step thus involved certain manual work and was thus performed jointly with the third step. Finally, we further resolved the remaining typos, grammatical errors, and unsolicited information. Appendix ::: Model Parameters Even though on the small datasets we work with we do not necessarily expect the Transformer model to perform better than recurrent neural networks, we chose this model for its significantly faster training, without sacrificing the performance. For our experiments a small 2-layer Transformer with 8 heads proved to be sufficient. The input tokens are encoded into embeddings of size 256, and the target sequences were truncated to 60 tokens. The model performed best with dropout values of 0.2. For training of the Transformer models we used the Adam optimizer with a custom learning rate schedule including a brief linear warm-up and a cosine decay.
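A sketch of a learning-rate schedule with a brief linear warm-up followed by a cosine decay, as described above; the warm-up length, peak rate, and use of PyTorch's LambdaLR are illustrative choices, not the authors' exact settings.

```python
# Sketch: linear warm-up followed by cosine decay, implemented with LambdaLR.
import math
import torch

model = torch.nn.Linear(256, 256)  # stand-in for the small 2-layer Transformer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_steps = 500       # assumed values for illustration
total_steps = 20000

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)                # linear warm-up to peak lr
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))  # cosine decay to 0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(total_steps):
    # forward/backward pass and optimizer.step() would go here
    scheduler.step()
```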
No
14e259a312e653f8fc0d52ca5325b43c3bdfb968
14e259a312e653f8fc0d52ca5325b43c3bdfb968_0
Q: Is any data-to-text generation model trained on this new corpus, what are the results? Text: Introduction The recent adoption of deep learning methods in natural language generation (NLG) for dialogue systems resulted in an explosion of neural data-to-text generation models, which depend on large training data. These are typically trained on one of the few parallel corpora publicly available, in particular the E2E BIBREF0 and the WebNLG BIBREF1 datasets. Crowdsourcing large NLG datasets tends to be a costly and time-consuming process, making it impractical outside of task-oriented dialogue systems. At the same time, current neural NLG models struggle to replicate the high language diversity of the training sentences present in these large datasets, and instead they learn to produce the same generic type of sentences as with considerably less training data BIBREF2, BIBREF3, BIBREF4. Motivated by the rising interest in open-domain dialogue systems and conversational agents, we present ViGGO – a smaller but more comprehensive dataset in the video game domain, introducing several generalizable dialogue acts (DAs), making it more suitable for training versatile and more conversational NLG models. The dataset provides almost 7K pairs of structured meaning representations (MRs) and crowdsourced reference utterances about more than 100 video games. Table TABREF2 lists three examples. Video games are a vast entertainment topic that can naturally be discussed in a casual conversation, similar to movies and music, yet in the dialogue systems community it does not enjoy popularity anywhere close to that of the latter two topics BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. Restaurants have served as the go-to topic in data-to-text NLG for decades, as they offer a sufficiently large set of various attributes and corresponding values to talk about. While they certainly can be a topic of a casual conversation, the existing restaurant datasets BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 are geared more toward a task-oriented dialogue where a system tries to narrow down a restaurant based on the user's preferences and ultimately give a recommendation. Our new video game dataset is designed to be more conversational, and to thus enable neural models to produce utterances more suitable for an open-domain dialogue system. Even the most recent addition to the publicly available restaurant datasets for data-to-text NLG, the E2E dataset BIBREF0, suffers from the lack of a conversational aspect. It has become popular, thanks to its unprecedented size and multiple reference utterances per MR, for training end-to-end neural models, yet it only provides a single DA type. In contrast with the E2E dataset, ViGGO presents utterances of 9 different DAs. Other domains have been represented by task-oriented datasets with multiple DA types, for example the Hotel, Laptop, and TV datasets BIBREF16, BIBREF17. Nevertheless, the DAs in these datasets vary greatly in complexity, and their distribution is thus heavily skewed, typically with two or three similar DAs comprising almost the entire dataset. In our video game dataset, we omitted simple DAs, in particular those that do not require any slots, such as greetings or short prompts, and focused on a set of substantial DAs only. 
The main contribution of our work is thus a new parallel data-to-text NLG corpus that (1) is more conversational, rather than information seeking or question answering, and thus more suitable for an open-domain dialogue system, (2) represents a new, unexplored domain which, however, has excellent potential for application in conversational agents, and (3) has high-quality, manually cleaned human-produced utterances. The ViGGO Dataset ViGGO features more than 100 different video game titles, whose attributes were harvested using free API access to two of the largest online video game databases: IGDB and GiantBomb. Using these attributes, we generated a set of 2,300 structured MRs. The human reference utterances for the generated MRs were then crowdsourced using vetted workers on the Amazon Mechanical Turk (MTurk) platform BIBREF18, resulting in 6,900 MR-utterance pairs altogether. With the goal of creating a clean, high-quality dataset, we strived to obtain reference utterances with correct mentions of all slots in the corresponding MR through post-processing. The ViGGO Dataset ::: Meaning Representations The MRs in the ViGGO dataset range from 1 to 8 slot-value pairs, and the slots come from a set of 14 different video game attributes. Table TABREF6 details how these slots may be distributed across the 9 different DAs. The inform DA, represented by 3,000 samples, is the most prevalent one, as the average number of slots it contains is significantly higher than that of all the other DAs. Figure FIGREF7 visualizes the MR length distribution across the entire dataset. The slots can be classified into 5 general categories covering most types of information MRs typically convey in data-to-text generation scenarios: Boolean, Numeric, Scalar, Categorical, and List. The first 4 categories are common in other NLG datasets, such as E2E, Laptop, TV, and Hotel, while the List slots are unique to ViGGO. List slots have values which may comprise multiple items from a discrete list of possible items. The ViGGO Dataset ::: Utterances With neural language generation in mind, we crowdsourced 3 reference utterances for each MR so as to provide the models with the information about how the same content can be realized in multiple different ways. At the same time, this allows for a more reliable automatic evaluation by comparing the generated utterances with a set of different references each, covering a broader spectrum of correct ways of expressing the content given by the MR. The raw data, however, contains a significant amount of noise, as is inevitable when crowdsourcing. We therefore created and enforced a robust set of heuristics and regular expressions to account for typos, grammatical errors, undesirable abbreviations, unsolicited information, and missing or incorrect slot realizations. The ViGGO Dataset ::: Data Collection The crowdsourcing of utterances on MTurk took place in three stages. After collecting one third of the utterances, we identified a pool of almost 30 workers who wrote the most diverse and natural-sounding sentences in the context of video games. We then filtered out all utterances of poor quality and had the qualified workers write new ones for the corresponding inputs. Finally, the remaining two thirds of utterances were completed by these workers exclusively. For each DA we created a separate task in order to minimize the workers' confusion. 
The instructions contained several different examples, as well as counter-examples, and they situated the DA in the context of a hypothetical conversation. The video game attributes to be used were provided for the workers in the form of a table, with their order shuffled so as to avoid any kind of bias. Further details on the data collection and cleaning are included in the Appendix. The ViGGO Dataset ::: Train/Validation/Test Split Despite the fact that the ViGGO dataset is not very large, we strived to make the test set reasonably challenging. To this end, we ensured that, after delexicalizing the name and the developer slots, there were no common MRs between the train set and either of the validation or test set. We maintained a similar MR length and slot distribution across the three partitions. The distribution of DA types, on the other hand, is skewed slightly toward fewer inform DA instances and a higher proportion of the less prevalent DAs in the validation and test sets (see Figure FIGREF11). With the exact partition sizes indicated in the diagram, the final ratio of samples is approximately $7.5:1:1.5$. The ViGGO Dataset ::: ViGGO vs. E2E Our new dataset was constructed under different constraints than the E2E dataset. First, in ViGGO we did not allow any omissions of slot mentions, as those are not justifiable for data-to-text generation with no previous context, and it makes the evaluation ambiguous. Second, the MRs in ViGGO are grounded by real video game data, which can encourage richer and more natural-sounding reference utterances. While ViGGO is only 13% the size of the E2E dataset, the lexical diversity of its utterances is 77% of that in the E2E dataset, as indicated by the “delexicalized vocabulary” column in Table TABREF13. Part of the reason naturally is the presence of additional DAs in ViGGO, and therefore we also indicate the statistics in Table TABREF13 for the inform samples only. The average inform utterance length in ViGGO turns out to be over 30% greater, in terms of both words and sentences per utterance. Finally, we note that, unlike the E2E dataset, our test set does not place any specific emphasis on longer MRs. While the average number of slots per MR in the inform DAs are comparable to the E2E dataset, in general the video game MRs are significantly shorter. This is by design, as shorter, more focused responses are more conversational than consistently dense utterances. Baseline System Evaluation The NLG model we use to establish a baseline for this dataset is a standard Transformer-based BIBREF19 sequence-to-sequence model. For decoding we employ beam search of width 10 ($\alpha = 1.0$). The generated candidates are then reranked according to the heuristically determined slot coverage score. Before training the model on the ViGGO dataset, we confirmed on the E2E dataset that it performed on par with, or even slightly better than, the strong baseline models from the E2E NLG Challenge, namely, TGen BIBREF20 and Slug2Slug BIBREF21. Baseline System Evaluation ::: Automatic Metrics We evaluate our model's performance on the ViGGO dataset using the following standard NLG metrics: BLEU BIBREF22, METEOR BIBREF23, ROUGE-L BIBREF24, and CIDEr BIBREF25. Additionally, with our heuristic slot error rate (SER) metric we approximate the percentage of failed slot realizations (i.e., missed, incorrect, or hallucinated) across the test set. The results are shown in Table TABREF16. 
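A small sketch of the train/validation/test check described above: delexicalize the name and developer slots and verify that no delexicalized MR from a held-out partition also occurs in the train set. The (dialogue act, slot dict) representation of an MR is an assumption for illustration.

```python
# Sketch: overlap check between partitions after delexicalizing name/developer.

DELEX_SLOTS = {"name", "developer"}

def delex_key(da, slots):
    items = tuple(sorted(
        (k, "<%s>" % k.upper() if k in DELEX_SLOTS else v) for k, v in slots.items()
    ))
    return (da, items)

def check_split(train_mrs, heldout_mrs):
    train_keys = {delex_key(da, slots) for da, slots in train_mrs}
    overlap = [mr for mr in heldout_mrs if delex_key(*mr) in train_keys]
    assert not overlap, f"{len(overlap)} held-out MRs also occur in the train set"
```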
Baseline System Evaluation ::: Human Evaluation We let two expert annotators with no prior knowledge of the ViGGO dataset evaluate the outputs of our model. Their task was to rate 240 shuffled utterances (120 generated utterances and 120 human references) each on naturalness and coherence using a 5-point Likert scale. We define naturalness as a measure of how much one would expect to encounter an utterance in a conversation with a human, as opposed to sounding robotic, while coherence measures its grammaticality and fluency. Out of the 120 MRs in each partition, 40 were of the inform type, with the other 8 DAs represented by 10 samples each. In addition to that, we had the annotators rate a sample of 80 utterances from the E2E dataset (40 generated and 40 references) as a sort of a baseline for the human evaluation. With both datasets, our model's outputs were highly rated on both naturalness and coherence (see Table TABREF18). The scores for the ViGGO utterances were overall higher than those for the E2E ones, which we understand as an indication of the video game data being more fluent and conversational. At the same time, we observed that the utterances generated by our model tended to score higher than the reference utterances, though significantly more so for the E2E dataset. This is likely a consequence of the ViGGO dataset being cleaner and less noisy than the E2E dataset. In an additional evaluation of ViGGO, we asked the annotators to classify the utterance samples into the 9 DA groups. For this task they were provided with a brief description of each DA type. The annotators identified the DA incorrectly in only 7% of the samples, which we interpret as a confirmation that our DAs are well-defined. Most of the mistakes can be ascribed to the inherent similarity of the recommend and the suggest DA, as well as to our model often generating give_opinion utterances that resemble the inform ones. Baseline System Evaluation ::: Qualitative Analysis Among all 9 DAs, the one posing the greatest challenge for our model was give_opinion, due to its high diversity of reference utterances. Despite the occasional incoherence, it learned to produce rich and sensible utterances, for instance “Little Nightmares is a pretty good game. Tarsier Studios is a talented developer and the side view perspective makes it easy to play.”. Since our baseline model does not implement any form of a copy mechanism, it fails on instances with out-of-vocabulary terms, such as the values of the specifier slot in the test set. These, in fact, account for almost half of the errors indicated by the SER metric in Table TABREF16. Therefore, more robust models have good potential for improving on our scores. Discussion In Table TABREF20 we demonstrate how the 9 DAs of the ViGGO dataset can support a natural multi-turn exchange on the topic of video games, as a part of a longer casual conversation on different topics. One caveat of using a language generator trained on this dataset in a dialogue system as-is is that multiple subsequent turns discussing the same video game would be repeating its full name. ViGGO was designed for grounded generation but without context, and therefore it is up to the dialogue manager to ensure that pronouns are substituted for the names whenever it would sound more natural in a dialogue. Alternately, the dataset can easily be augmented with automatically constructed samples which omit the name slot in the MR and replace the name with a pronoun in the reference utterance. 
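A minimal sketch of that name-omission augmentation is given below, assuming MRs are held as slot-value dictionaries and that substituting the game's name with "it" is acceptable for illustration; real data would need additional care with capitalization, possessives, and partial name mentions.

```python
import re

def pronominalize(slots, utterance, pronoun="it"):
    # Build an augmented sample: drop the 'name' slot from the MR and
    # replace mentions of the game's name in the reference with a pronoun.
    name = slots.get("name")
    if not name or name not in utterance:
        return None  # nothing to augment
    new_slots = {k: v for k, v in slots.items() if k != "name"}
    new_utt = re.sub(re.escape(name), pronoun, utterance)
    # Crude fix-up: capitalize the pronoun where it now starts a sentence.
    new_utt = re.sub(r"(^|[.!?]\s+)" + re.escape(pronoun) + r"\b",
                     lambda m: m.group(1) + pronoun.capitalize(), new_utt)
    return new_slots, new_utt

slots = {"name": "Little Nightmares", "rating": "good"}
utterance = "Little Nightmares is a pretty good game."
print(pronominalize(slots, utterance))
# -> ({'rating': 'good'}, 'It is a pretty good game.')
```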
Conclusion In this paper we presented a new parallel corpus for data-to-text NLG, which contains 9 dialogue acts, making it more conversational than other similar datasets. The crowdsourced utterances were thoroughly cleaned in order to obtain high-quality human references, which we hope will support the recent trend in research to train neural models on small but high-quality data, like humans can. This could possibly be achieved by transferring fundamental knowledge from larger available corpora, such as the E2E dataset, but perhaps by other, completely new, methods. Appendix ::: Additional ViGGO Dataset Examples In Table TABREF22 we present one example of each DA in the ViGGO dataset, including the examples given in Table TABREF2. Appendix ::: Slot Categories In Section SECREF5 we mentioned that the slots in the ViGGO dataset can be classified into 5 general categories. Here we provide more detailed descriptions of the categories: Boolean – binary value, such as “yes”/“no” or “true”/“false” (e.g., has_multiplayer or available_on_steam), Numeric – value is a number or contains number(s) as the salient part (e.g., release_year or exp_release_date), Scalar – values are on a distinct scale (e.g., rating or esrb), Categorical – takes on virtually any value, typically coming from a certain category, such as names or types (e.g., name or developer), List – similar to categorical, where the value can, however, consist of multiple individual items (e.g., genres or player_perspective). Note that in ViGGO the items in the value of a List slot are comma-separated, and therefore the individual items must not contain a comma. There are no restrictions as to whether the values are single-word or multi-word in any of the categories. Appendix ::: Data Collection When generating the MRs for the inform DA, we fixed the slot ratios: the name and genres slots were mandatory in every MR, the player_perspective and release_year were enforced in about half of the MRs, while the remaining slots are present in about 25% of the MRs. At the same time we imposed two constraints on the slot combinations: (1) whenever one of the Steam, Linux or Mac related boolean slots is present in an MR, the platforms slot must be included too, and (2) whenever either of the Linux or Mac slots was picked for an MR, the other one was automatically added too. These two constraints were introduced so as to encourage reference utterances with natural aggregations and contrast relations. The remaining 8 DAs, however, contain significantly fewer slots each (see Table TABREF6). We therefore decided to have the MTurk workers select 5 unique slot combinations for each given video game before writing the corresponding utterances. Since for these DAs we collected less data, we tried to ensure in this way that we have a sufficient number of samples for those slot combinations that are most natural to be mentioned in each of the DAs. While fixing mandatory slots for each DA, we instructed the workers to choose 1 or 2 additional slots depending on the task. The data collection for MRs with only 1 additional slot and for those with 2 was performed separately, so as to prevent workers from taking the easy way out by always selecting just a single slot, given the option. Leaving the slot selection to crowdworkers yields a frequency distribution of all slot combinations, which presumably indicates the suitability of different slots to be mentioned together in a sentence. 
This meta-information can be made use of in a system's dialogue manager to sample from the observed slot combination distributions instead of sampling randomly or hard-coding the combinations. Figure FIGREF30 shows the distributions of the 8 slot pairs most commonly mentioned together in different DAs. These account for 53% of the selections among the 6 DAs that can take 2 additional slots besides the mandatory ones. We can observe some interesting trends in the distributions, such as that the developer + release_year combination was the most frequent one in the confirm DA, while fairly rare in most of the other DAs. This might be because this pair of a game's attributes is arguably the next best identifier of a game after its name. Appendix ::: Dataset Cleaning A large proportion of the raw data collected contained typos and various errors, as is inevitable when crowdsourcing. We took the following three steps to clean the data. First, we used regular expressions to enforce several standardization policies regarding special characters, punctuation, and the correction of undesired abbreviations/misspellings of standard domain-specific terms (e.g., we would change terms like “Play station” or “PS4” to the uniform “PlayStation”). At the same time, we removed or enforced hyphens uniformly in certain terms, for example, “single-player”. Although phrases such as “first person” should correctly have a hyphen when used as adjective, the turkers used this rule very inconsistently. In order to avoid model outputs being penalized during the evaluation by the arbitrary choice of a hyphen presence or absence in the reference utterances, we decided to remove the hyphen in all such phrases regardless of the noun/adjective use. Second, we developed an extensive set of heuristics to identify slot-related errors. This process revealed the vast majority of missing or incorrect slot mentions, which we subsequently fixed according to the corresponding MRs. Turkers would sometimes also inject a piece of information which was not present in the MR, some of which is not even represented by any of the slots, e.g., plot or main characters. We remove this extraneous information from the utterances so as to avoid confusing the neural model. This step thus involved certain manual work and was thus performed jointly with the third step. Finally, we further resolved the remaining typos, grammatical errors, and unsolicited information. Appendix ::: Model Parameters Even though on the small datasets we work with we do not necessarily expect the Transformer model to perform better than recurrent neural networks, we chose this model for its significantly faster training, without sacrificing the performance. For our experiments a small 2-layer Transformer with 8 heads proved to be sufficient. The input tokens are encoded into embeddings of size 256, and the target sequences were truncated to 60 tokens. The model performed best with dropout values of 0.2. For training of the Transformer models we used the Adam optimizer with a custom learning rate schedule including a brief linear warm-up and a cosine decay.
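As a worked illustration of that schedule, here is a small framework-agnostic sketch of a linear warm-up followed by a cosine decay; the peak learning rate, warm-up length, and floor value are hypothetical, as the paper does not report them.

```python
import math

def lr_at_step(step, total_steps, warmup_steps=500, peak_lr=1e-3, floor_lr=0.0):
    # Linear warm-up from 0 to peak_lr over warmup_steps, then a cosine decay
    # from peak_lr down to floor_lr over the remaining steps. All values here
    # are illustrative; the paper does not report its exact hyperparameters.
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return floor_lr + 0.5 * (peak_lr - floor_lr) * (1.0 + math.cos(math.pi * progress))

# Typical use: query the schedule before every optimizer update, e.g.
# lr = lr_at_step(global_step, total_steps), and assign it to the Adam optimizer.
```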
Yes, a Transformer-based seq2seq model is evaluated, with average BLEU 0.519, METEOR 0.388, ROUGE-L 0.631, CIDEr 2.531, and SER 2.55%.
e93b4a15b54d139b768d5913fb5fd1aed8ab25da
e93b4a15b54d139b768d5913fb5fd1aed8ab25da_0
Q: How the authors made sure that corpus is clean despite being crowdsourced? Text: Introduction The recent adoption of deep learning methods in natural language generation (NLG) for dialogue systems resulted in an explosion of neural data-to-text generation models, which depend on large training data. These are typically trained on one of the few parallel corpora publicly available, in particular the E2E BIBREF0 and the WebNLG BIBREF1 datasets. Crowdsourcing large NLG datasets tends to be a costly and time-consuming process, making it impractical outside of task-oriented dialogue systems. At the same time, current neural NLG models struggle to replicate the high language diversity of the training sentences present in these large datasets, and instead they learn to produce the same generic type of sentences as with considerably less training data BIBREF2, BIBREF3, BIBREF4. Motivated by the rising interest in open-domain dialogue systems and conversational agents, we present ViGGO – a smaller but more comprehensive dataset in the video game domain, introducing several generalizable dialogue acts (DAs), making it more suitable for training versatile and more conversational NLG models. The dataset provides almost 7K pairs of structured meaning representations (MRs) and crowdsourced reference utterances about more than 100 video games. Table TABREF2 lists three examples. Video games are a vast entertainment topic that can naturally be discussed in a casual conversation, similar to movies and music, yet in the dialogue systems community it does not enjoy popularity anywhere close to that of the latter two topics BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. Restaurants have served as the go-to topic in data-to-text NLG for decades, as they offer a sufficiently large set of various attributes and corresponding values to talk about. While they certainly can be a topic of a casual conversation, the existing restaurant datasets BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 are geared more toward a task-oriented dialogue where a system tries to narrow down a restaurant based on the user's preferences and ultimately give a recommendation. Our new video game dataset is designed to be more conversational, and to thus enable neural models to produce utterances more suitable for an open-domain dialogue system. Even the most recent addition to the publicly available restaurant datasets for data-to-text NLG, the E2E dataset BIBREF0, suffers from the lack of a conversational aspect. It has become popular, thanks to its unprecedented size and multiple reference utterances per MR, for training end-to-end neural models, yet it only provides a single DA type. In contrast with the E2E dataset, ViGGO presents utterances of 9 different DAs. Other domains have been represented by task-oriented datasets with multiple DA types, for example the Hotel, Laptop, and TV datasets BIBREF16, BIBREF17. Nevertheless, the DAs in these datasets vary greatly in complexity, and their distribution is thus heavily skewed, typically with two or three similar DAs comprising almost the entire dataset. In our video game dataset, we omitted simple DAs, in particular those that do not require any slots, such as greetings or short prompts, and focused on a set of substantial DAs only. 
manually cleaned human-produced utterances
993ee7de848ab6adfe02fa728b3a2c896238859b
993ee7de848ab6adfe02fa728b3a2c896238859b_0
Q: Do they build a generative probabilistic language model for sign language? Text: Motivation and goals The main motivation here is to equip Sign Language (SL) with software and foster implementation as available tools for SL are paradoxically limited in such digital times. For example, translation assisting software would help respond to the high demand for accessible content and information. But equivalent text-to-text software relies on source and target written forms to work, whereas similar SL support seems impossible without major revision of the typical user interface. Video integration seems natural for any kind of SL-related application, and many SL users make it a preferred storage format, including for personal notes. Some go as far as to say it serves enough not to need a written form altogether. The claim is that video provided an acceptable substitute for textual input, and that technology and its availability today compensates what might have been a problem up to recently—virtually everybody now has a camera in their pocket at all times. But when producing a small mistake in a film take, correcting it requires repeating it entirely. One might pull up video editing software to cut and stitch pieces together if lucky enough that the mistake can be resolved in this way, but this requires more than language skills and leads to discontinuous signing (cuts, fade-in/out effects, etc.). For better use and integration in software, we are looking for a way of processing content comparable to word processors, in other words allowing to draft and organise discourse by editing pieces of their contents and to move them around. We call this an editable representation. Also, when searching through video contents, finding data requires both scanning it and playing parts of it repeatedly, the search iterations being based on some memory of the contents. It is analogous to tape recordings when looking for, say, a forgotten rime in a song. It requires an alternation of winding the tape forward or backward, and listening from the new position on for at least enough time to make a decision on the direction of the next search iteration. The later replacing CDs partially reduced this problem as they used to (assuming that they are past technology) have direct access to song (“track”)—and more rarely sub-track “index”—beginnings. But nothing outside of the indexed entries could be found without falling back on some form of manual scan–play process. Similarly, video indexing with time-tagged labels can help access key features of a video, but only tagged content can then be found. There is no possibility of arbitrarily precise data oversight allowing to target and focus any detail. Indexing contents therefore only partially solves the general search problem, and moreover requires to be built beforehand. We are looking for a representation with which data sets can be directly scanned and searched through, both to remove the need for separate indexing, and not to restrict searches to what has been indexed. We call this a queryable representation. Besides, we have been modelling SL for about a decade, mostly pursuing the goal of Sign synthesis. Our progress led to AZee, which has produced increasingly promising results and is now being used to animate avatars BIBREF0 , BIBREF1 , BIBREF2 . We come back to it in a later section, but it is enough for now to say that it is a form of editable and queryable representation of SL. 
It also allows rendering anonymous SL animation, which is a significant advantage over users filming themselves if contents is to be made public. We call this a synthesisable representation, and would like to retain this property as much as possible. However, AZee is a formal model, only readable if the system is learnt, and its abstraction layers understood. It is not readable in the sense that it would be shaped to match human intuition and recognition of patterns. Given the goal of implementation for human users of the language, we are aiming at a representation that is also user/human-oriented, facilitating content manipulation through software GUIs. That said, we also take the three following statements to be true: This naturally brings us close to the question of a writing system for SL. In this paper, we study writing systems in general, and the existing or possible parallels for SL. We do though acknowledge that the purposes of a writing system encompass a lot more than use in software: note taking, thus ability to handwrite, institutional/legal archiving... Plus, we have enough hindsight on writing today to understand that any design choice for a writing system can yield unforeseen problems a long time afterwards, e.g. dyslexia is reduced when redrawing letters with more contrastive features. We therefore wish to keep clear of any claim regarding what SL writing should be shaping into, or how it should socially be developing, as much as to feed the scientific discussion on SL writing and digitalisation. Writing systems First, not all human languages have a standard written form known and put in practice by its users. Second, the languages that do (or did) are all vocal languages. No SL has such system, known and adopted, but some forms have been proposed. This section presents the scripts for vocal languages; we talk about those for signed languages in the following section. Categories It is common to distinguish language writing systems in two groups, depending on whether its basic symbols encode meaning, or things to pronounce. In the latter case, a combination of several is normally required before meaning can emerge. A system of the former group is called logographic; the latter phonographic. Chinese is the most known example of logographic system: “” is a character meaning “bright”, “” another meaning “rain”. They happen to be pronounced “míng” and “yǔ” respectively, but this is not conveyed by the symbols in a systematic way. The reader must know it to read them out. On the other side is Spanish for example, whose Latin alphabet is used to encode sounds (phonemes) to be pronounced. For example “p” encodes the phoneme /p/, and the character string “pelo” the phonemic sequence /pelɔ/. It happens to mean “hair”, but it is not what is captured by the written symbols. The reader needs to know it to understand the sequence. Zooming closer on this rather coarse dichotomy, we see a finer continuum in the level of linguistic abstraction applied. On the side of symbols encoding meaning, we first reference ideographic systems. They encode full/fixed meanings with symbols like that in fig. FIGREF8 , whose interpretation is “do not smoke in this area”. But they are not the written form of any specific oral language, thus we consider them out of our scope. A logographic system composes linguistic elements on a level that we have seen is at least morphemic, e.g. equivalent to words like “rain”. Then, phonographic systems do not necessarily jump straight to encoding phonemes. 
There exists various strategies to encode sounds on a sub-morphemic level. Syllabaries like the Japanese hiragana and katakana have minimal units that stand for a full syllable weight in time (mora) each, e.g. the consonant–vowel combinations “” for /ka/ in hiragana, “” for /mi/ in katakana, etc. Abugidas like the Devanagari script used to write Hindi also encode syllabic rhythm but each syllable character reveals the combined phonemes, e.g. “” /ku/ = base glyph “” /k(a)/ + the diacritic for vowel /u/. Alphabets encode sequences of phonemes, with no syllabic marking. Full alphabets give equal status to vowels and consonants like in most European languages, whereas abjads mark only the consonantal base on top of which vowel marking is mainly optional or partial, usually because imposed by the grammatical role or context, e.g. Arabic. Featural systems like the Korean Hangul also encode phonemes, but even reveal the phonetic (articulatory) features to combine to produce them, e.g. plausive+labial+unvoiced. Noticeably, the Hangul script also groups the phonemes to reflect syllabic structure. The progression is one of decreasing level of linguistic abstraction, which more or less follows the order in which respective systems have tended to appear. The first ones in the history of writing were mostly logographic, whereas most new ones emerging are phonographic. Implications In a purely phonographic system, one: must learn the mapping between sounds (or sound features) and written symbols; is able to read out (pronounce) any written input and write down anything heard without understanding; has to know more of the language to make sense of text; uses a relatively small number of symbols all together. Spanish is an almost perfectly phonographic system in this sense. For example, once you have learnt the following bidirectional mappings: “m” is pronounced /m/, “a” /a/, “n” /n/ and “o” /ɔ/, you can read “mano” to pronounce /manɔ/ and write the former when hearing the latter. But the fact that the sequence means “hand” is still out of reach until you learn the language. In a purely logographic system, one would observe symmetric properties: learn direct mappings between written symbols and their meaning; be able to make sense of input text and write down concepts without speaking the language; knowing the language is required to read text out loud or write down oral input; a large number of symbols constitute the script. In the Chinese writing system, usually classified as logographic at first, “” is an example of a non ambiguous logographic character meaning “rain”. It is not necessary to know how to pronounce it to interpret it, and indeed dialects may not all agree on an identical pronunciation for it. But any capable reader will interpret it correctly. Note that even in the “pure” systems assumed in this section, nothing precludes ambiguities in text, which are ubiquitous in language. If phonographic, writing encodes sounds, but similar sounds can bear multiple and distinct meanings, a phenomenon called homophony. For example in English, the sequence “just” is composed of 4 ordered graphemes standing for the respective phonemes /dʒ/, /ʌ/, /s/ and /t/, together pronounced in sequence as /dʒʌst/ and meaning both “equitable” and “only/merely”. The same written–pronounced form is interpretable either way, and the ambiguity will remain as long as context allows. The logographic counterpart to homophony is pure synonymy, i.e. different sounds with undistinguishable meaning. 
An exclusively logographic system would write such instances in the same way. However, such candidates are rather rare as they will likely carry some nuance at times thus not qualify as identical meanings. Moreover, being pronounced differently is almost always enough to justify different written forms. This is what we mean when we oppose “writing a language” to ideographic pictograms, mentioned and discarded further up. A script is the written form of a language, including its various entries and possibilities for nuances. Neither complete nor fully-exclusive systems In general, no system in use possesses all properties of a given class, in writing and reading directions. In English, many homophones have different spellings. For instance “night” and “knight” are both possible written forms of /naɪt/, a phenomenon called heterography. Conversely, different possible pronunciations are sometimes written with identical forms, e.g. the letter sequence “minute” in “this will take a minute of your time” (meaning: 60 seconds) vs. “only a minute fraction of the total will be lost” (meaning: very little), respectively pronounced /mɪnɪt/ and /maɪnjuːt/. They are called heteronyms. In other writing systems first classified as phonographic: French “a” vs. “à”, both pronounced /a/, mark the difference between a conjugated auxiliary verb and a preposition; Japanese “”, normally standing for /ha/, is actually /wa/ when it is the topic marker (grammatical function for a particle); German “du hast” vs. “du hasst”, both read /duhast/, are formed from inflections of the different verbs “haben” (to have) and “hassen” (to hate) respectively; etc. In all examples above, it is the meaning or function that justifies a distinction in writing or pronunciation, which is not a natural property of the phonographic approach. Some languages are known to have a very high grapheme-to-phoneme and phoneme-to-grapheme correspondence like Finnish or Croatian, but this still often has to exclude things like loan words. Also, punctuation marks and number digits, part of the script as much as the letters, encode lots of meaning and very little or no pronunciation cues. What is more, these scripts consistently separate words with a space, which we argue is alone a highly functional (non-phonographic) feature of the script, as nothing allows to know where the spaces must go on a purely phonemic basis. The system best classified as logographic, Chinese, also has comparable irregularities. For example, a character typically has several reading–meaning pairs. Also, it allows to write pronunciations, which enables transcription of foreign place names for example. On a lower level, characters are often themselves composed of pieces including a phonological clue. For example “”, meaning “crazy, insane” and pronounced “fēng”, combines the key “” (denoting illness) and the character “” (pronounced “fēng”). The latter is a pronunciation hint (here identical to the target) for the whole character; its meaning (“wind”) is irrelevant. These are not natural features of a logographic system. Japanese even famously mixes systems right from the start: it involves the two distinct syllabaries hiragana and katakana, plus kanji characters borrowed from the Chinese script. Used conventionally, hiragana marks grammatical particles and verb or adjective endings, katakana loan words and sometimes emphasis, kanji normally capturing the remaining lexical units. All three systems read out with the same set of syllables. 
The first two are phonographic and encode them directly with a one-to-one mapping; whereas kanji is as logographic as can be said about written Chinese. So a mix seems always to be present. The two extreme categories, phonographic and logographic, are mostly fantasised, all systems instead showing features of both sides, in variable proportions. Finally, it must also be noted that a lot is not written, and left to be compensated, even in the most featural phonographic system. Features like stress can be essential to pronunciation (i.e. mandatory articulations), but never written. This is true for English. Most short vowels in Arabic, though one could write them, are left for the reader to infer from semantic context, recognise from the written context (surrounding letters and spaces). SL writing As we stated in our introduction, some SL users question the need or benefit of a writing system for Sign Language. They sometimes argue that video captures the subtle articulations of the speaker, whereas any transcription would come with a decrease in precision. Sometimes, writing is even seen as an intrusive feature of vocal language culture, if not a threat. We have explained why video should not be considered sufficient for software processing, but would argue further that it is simply wrong to equate cameras and pencils. A very good and articulate multi-fold discussion is provided by Grushkin on this topic BIBREF3 , in the first half of his article. Designed systems There have been various attempts to devise SL writing systems, from personal propositions with local use to others reaching enough popularity to see some discussion on their potential future as actual writing systems. SignWriting BIBREF4 is by a significant margin the most visible one. It takes the form of a string of pictures, each representing a sign and encoding its basic parameters as attributed to Stokoe BIBREF5 . That is, a hand shape is written for each active hand, by means of a base symbol filled according to its orientation in space (facing forward, towards the signer, etc.). Hand location is given through relative positions of the symbols in the drawing, movement and contact are shown with arrows and other diacritics. Facial expression can be specified in a circle representing the head, as well as shoulder line. Figure FIGREF32 a gives an example of a single sign involving one hand with an extended index configuration, a contact with the temple, a facial expression and a repeated manual rotation. An interesting feature of this system, also a voucher of its relative popularity, is that it has been subject to experiments in deaf classes in several places, in particular to assess how it could be learnt by early language users. It is also the only one to have been granted a Unicode block. Other systems have been proposed more or less following the same underlying principles, e.g. Si5s and its ASLwrite fork (fig. FIGREF32 a), which have been subject to strong promotional efforts. We have also encountered a system developed by a teacher at INJS, a Parisian institution for deaf education in Sign Language. She wished to give students a way to write signs even if they did not know (or have) a written French equivalent. She called it “signographie” (fig. FIGREF32 b). It is interesting to mention because it is also taught and used in class by both teachers and students to support educational communication, though in a different fashion to the SignWriting experiments. We will be referring to this again in the next section. 
The choice and style of the graphics differ in the systems listed above, which is relevant when implementing some of the functions of writing. For example, standard SignWriting requires colouring zones to show hand orientations, which can be uneasy with a pencil. But they all otherwise share the same features, capabilities and composition rules. So in terms of system features and classification, there is no essential difference between them. Alternatively, and actually before SignWriting was born, a more linear approach had been used, which still has representatives. Bébian BIBREF6 , Stokoe BIBREF5 and Friedman BIBREF7 all separated the manual parameters and linearised them in script, resulting in character sequences, each looking more like words made of letters and covering what would form a single picture in a system of the previous group. With the advent of technology, computers and data processing, more scripts came out falling in this same category of linearised scripts, given how easier it was to design fonts, rely on common input devices like keyboards and display TrueType sequences in word processors. Various such scripts have been proposed since (see figure FIGREF36 ), intended with more or less international coverage: the generic HamNoSys BIBREF8 , BIBREF9 , SignFont BIBREF10 and its follow-up ASL-phabet BIBREF11 for ASL, ELIS BIBREF12 and SEL BIBREF13 for LIBRAS. HamNoSys is certainly the most popular fontified script for SL. It has been used as the main means of representing signs in academic papers by several scholars, even doing away with drawings or pictures at times. More impressively yet, it was implemented, through its XML adaptation SiGML, as the primary input format to an avatar animation software after the turn of the millennium BIBREF14 , BIBREF15 . This is a unique property that was never successfully reached by any other, and which is relevant to one of our goals here (synthesisable). All of the systems mentioned above encode minimal forms to articulate and combine, though the granularity of the minimal forms are variable. They show features such as hand shapes (coarse grain), finger bending (finer grain), hand locations, mouth gestures, and may include more abstract features like symmetry or repetition instead of duplicated symbols. In every case it is a description of what must be articulated, i.e. form features. None of them write anything directly mapped to meaning without an indication of the form. To learn the system is to learn to articulate the symbols, and it is then possible to do so without understanding what is read. In this sense they are all phonographic systems. Also, they all assume a segmentation on the same level as the one that linguists use to gloss signed input. It is the level usually called “lexical”, i.e. of dictionary entries (“signs”) and other non-lexicalised productions such as classifier placements or movements. The latter use the signing space in a way that is more semantically relevant, but they are nonetheless written following the same composition rules in the scripts. In every system, these units are stringed one after the other in a linear sequence, as illustrated in fig. FIGREF38 . At this point we point out that we found no justification for these design features. They seem mostly taken for granted as a starting point, whereas we argue they can be questioned, especially in the light of the wide panel of other known written scripts. For example, why not a single logographic property? 
Without implying that things should be different at this point, we will at least be showing that they can in many respects, while thorough exploration of the alternate paths has not taken place. Most of the scholarly work we found discussing SL writing systems either take a phonographic goal for granted without even mentioning the distinction BIBREF16 , BIBREF17 , or do talk about the duality only to evacuate logography and favour phonography with no compelling reason to do so BIBREF18 , BIBREF19 . It is probable that this is largely due to a double cultural influence. Firstly, the systems above originate from Western cultures with dominant Indo-European languages all written via phonemic systems juxtaposing whitespace-separated lexical units in linear text. Secondly, the dominant SL theories in the last five decades have been inspired by parametric description of signs in Stokoe's sense, which is rooted in phonology, phoneme inventories and minimal pairs. Every system presented so far assumes such parametric composition, and chooses to represent it in some way. Grushkin BIBREF3 must be one of the rare authors to present Chinese logography seriously and discuss its benefits and drawbacks. He even reports on findings telling that deaf Chinese readers have less difficulty reading logographs than the English do strings of alphabetical letters, which is a door wide open on logography for SL writing. Yet somehow he too ends up closing it, advocating what he calls an “alphabetical” (in our terms here, phonographic) paradigm, ultimately to facilitate literacy in English. After such an admirable plea to equip SL with writing, and so eloquently explaining the need to empower SL and the Deaf culture with an autonomous system (e.g. rejecting any sort of glossing, etc.), we find his last call rather surprising. At least two major differences with the dominant scripts stand out though, which we analyse as coming more or less directly from the difference in the physical channel, because they have to do with simultaneity and iconicity. The first one comes from the fact that within a lexical sign, phonemic composition is simultaneous and not reducible to a sequence, like “just” is the concatenation of /dʒ/+/ʌ/+/s/+/t/. This has forced to choose between two strategies, each breaking something of the alphabetical idea of continuous phonemic sequence. The symbols to be articulated simultaneously are either: packed in a planar arrangement to form one complex unit, as shown boxed in the top line of figure FIGREF38 —the equivalence between the sequence of symbols and that of the production is retained, but the units of the sequence are no more each a minimal phonographic unit; or linearised, and some form of spacing takes place to separate the flattened units (bottom line in the figure)—the sequence of symbols is then to be segmented on two different levels, one of which is no more an account of the production sequence. Incidentally and likely for the same cultural reasons as above, scripts generally follow the left-to-right writing direction (arrows in fig. FIGREF38 ). An exception is SignWriting, which now prefers a vertical top-down direction. The second major difference with common writing systems is about the symbols themselves. Most of the systems (and all the major ones) have embedded some iconicity in the graphics, i.e. a resemblance between the symbols and the way to articulate them. For example, the “5” hand shape (flat hand, fingers spread) is drawn in HamNoSys, in SignWriting, in ASLwrite... 
They are iconic of the form to produce, involving 5 countable fingers. From what we can tell, most writing systems of vocal languages have actually started this way. Some Chinese logographic characters are even still reminiscent of that fact, e.g. (rain). But they have gradually been abstracted, simplified and conventionalised over time, often giving rise to new or altered meanings, and it is fair to say that writing systems today are not iconic. Whether it is natural for a system to lose iconicity over time, and whether or not there is a different case to be made for Sign Language, we at least call out this difference for now as the notion always has special relevance in Sign Language studies. Spontaneous productions All the scripts presented in the previous section were designed systems, i.e. sets of rules intended to be complete (covering everything deemed necessary) and consistent (identical events captured with identical representations). Aside from those developments, in the few years leading to our present questions, we encountered other uses of pen and paper aimed at SL representation without relying entirely on dominant (“foreign”) language support. First, many SL users taking notes of signed input or preparing signed speeches resort to graphics to represent the original or intended signed production. Whether to capture a sign, path, movement, space location or meaningful relationship between elements of the discourse, graphical schemes found sufficient to express the production are for these users naturally preferred as the added cognitive search for words or phrases in a second language becomes unnecessary. A second example is in teaching deaf students in signing environments. Teachers and deaf education experts encourage the use of visual material for deaf education, even if SL is not the only language used in the programme. At INJS, we met teachers that have pushed this idea further than, say, explanatory diagrams to teach new concepts. According to them, students should be able to turn in work in a written form, the official written language is a foreign one, and Sign Language is best captured with drawings. So in addition to signographie (see § SECREF30 ), they allow the students to draw SL the way they feel it should, provided they understand the signing that motivated the drawing. The school has kindly agreed to share a few of those productions with us. Figure FIGREF43 shows one of the pages of a piece of homework. The third use case is that of text-to-Sign translation. Professionals are taught to draw “deverbalising diagrams” as a first step from input texts to capture all of what must be delivered in SL (the meaning) while enabling to work without the texts (the source form) afterwards, so as to produce a semantically equivalent discourse in SL (the target form) in a way that is detached from the original foreign input. We have begun to work with LSF professionals on possible software assistance to this deverbalising task. We give an example of diagram in fig. FIGREF45 . These diagrams usually lie somewhere on a continuum, already observed by Athané BIBREF20 , between: semantic representations, which capture meaning regardless of any language in which it could be expressed; and what we shall call verbalising diagrams (VD) henceforth, i.e. drawings laid out in such way that they can be followed directly to produce well-formed SL discourse. Fig. FIGREF48 is primarily an example of the first kind. It looks more like an educational diagram than inspired by SL particularly. 
In the case of translation, such a representation will come from a pure deverbalising effort, and will often be annotated in a second step with numbers to order pieces in a Sign-logical way for SL production, though this step is not always easy. The example figure FIGREF45 is closer to the second end of the continuum, as it produces its own SL-inspired information sequence (the number annotations only confirm the natural flow of its contents), and every piece seems to mirror the way to express its meaning in SL. Given our interest in writing SL specifically, this article will preferably work with the second kind (VD). Unfortunately, these are all local or personal productions, usually intended for short-term use and discarded afterwards. No archive or data compilation exists of such diagrams whereas, after looking at the few shared with us, we came to observe much more consistency and expressiveness than what even their own authors seemed willing to grant them. We therefore believe science ought to take a better look at them, and have begun building a corpus of such diagrams to this effect, aligned with their signed equivalents. The collection involves various linguistic profiles (e.g. nativeness) and uses (e.g. translation, note taking, authoring), elicitation material (e.g. text, video) and genres (e.g. story, definition), etc. It is currently in progress, and we will present the corpus in detail in subsequent publications. But the data collected so far already provides discrete examples of recurrent observations, some of which we wish to expose. In this type of graphical representation, meaning plays a major part in what is written, and on first look a lot more so than form. Let us first look at the atomic level in more detail, i.e. the smallest, non-breakable symbols populating the diagrams. What we observe is: all participants drew dogs to mean the pet animal; none drew what body articulators should do to sign “dog”. The same applies for most icons on the collected pages. Without knowing the language, one can be told—or in this case even guess—what these symbols mean and understand them regardless of how to sign them, and nothing from those symbols tells how to sign them for sure. In this sense they make diagrams lean towards the logographic category of scripts, should they be recognised as such. Although, on this same atomic level, examples of units representing the signed form (and not only its meaning) are found in three circumstances: illustrative/depicting units, e.g. fig. FIGREF52 a, which represents a jaw drop meaning astonishment and to be reproduced as a form (a kind of short role shift), or fig. FIGREF52 b, which represents the path followed by a mouse underground and whose geometry (wiggling forward then popping out straight up) must be redrawn in the signing space; high salience of form over ease of representation ratio, e.g. fig. FIGREF52 c representing the sign for the notion “the most important, dominant”, which involves a movement of hand shape “thumb-up” (LSF “1”) reaching the top of the other “flat hand”—it is clearly a drawing of the form to articulate, easier to capture in a drawing than the rather abstract notion it conveys; the special case where authors knew a phonographic system, and that it was shared with the potential/intended reader, like in the INJS environment where the teacher's signographie was proposed to the students for use in their diagrams—for example fig. 
FIGREF52 d, encoding the LSF sign for “result” in that system, was found used as a section title. It is yet to note that all forms in case (1) and many of case (2) are iconic, hence represent their own meaning in some way, which undermines the proposition of a phonographic status for these units. Also, out of the 29 A4-sized pages full of drawings satisfying the premisses of case (3), we only count 11 instances of signographie, which tells us that even in the case of an available phonographic system, the preference for meaning is not overturned. We therefore observe that for atomic elements there is spontaneous preference of users for logography, though phonography is not avoided at all costs. This is a significant difference with the current offer in designed systems, which are exclusively phonographic. Outside of the atomic level, meaning also plays a strong role. Relationships with various arities are shown by linking the participating diagram entities with relative positions, lines and arrows of different styles, sometimes tagged. They represent semantic relations, often in a way similar to semantic graphs BIBREF22 . Figure FIGREF55 is an example of a directional relation between two people, one helping the other, represented by an arrow tagged with a (French) word meaning “help”. It clearly represents the semantic relationship between the two. It is tempting to extend the hypothesis of inclination to meaning to the whole process of diagram drawing. But a lot in the end has to do with form too, again for reasons rooted in iconicity. A high proportion of the diagrams' layout choices not only serve legibility purposes or the needs of a 2d projection on the paper. They also perfectly reflect the spatial arrangements observable in the original signed discourse if any, and in the later productions when the diagram is “read”. For example, figure fig:VD-INJS-jumelage is a student's diagram representing a discourse signed by the INJS teacher who gave the assignment, about exchange trips between two schools including theirs. Figure FIGREF57 shows three relevant moments of the original video, in order of appearance: (a) while she anchors the ASD institution, class and teacher on her left; (b) while the French counterpart is anchored on her right; (c) while she explains that letters were sent both ways between the two schools. It is clear that the student captured the exact same layout in the diagram. Similarly, the full page given in figure FIGREF58 identifies Alice Cogswell (though misspelt) as the 3rd child of a family of 4, who became deaf after an illness. A reader with just enough SL will easily see how this directly maps from the frontal vertical plane of the signing space, with the scene developing from top (parents) to bottom (the focused child). The distinction between meaning and form as the target of the representation is therefore often difficult to make, and we would argue even nonsensical in many cases. By definition, iconicity confounds the two. Thus when it is involved, form is likely identifiable as meaning, and representing one likely represents the other too. Meanwhile, whether they capture form, meaning or both, the symbols used are overwhelmingly iconic, i.e. bear resemblance with what they represent. In this case there is a clear convergence of almost all approaches mentioned up to here: they favour iconic symbols in the script. 
We consider this an interesting finding: all SL major scripts, including the designed and the spontaneous ones, use iconic symbols whereas none can be categorised as such in vocal language scripts. Contrarily to the scripts presented earlier though, there is no systematic level of reading where a sequence is to be segmented in ordered units. Although parts of them happen to be ordered in some places, the diagrams are essentially two-dimensional. Consequently, it is difficult to raise the question of a direction of reading, or at least to produce any conjecture at this point. Linking to a formal description After observing a first set of verbalising diagrams produced by multiple people, and multiple diagrams for each person, we found recurrent graphical strategies to capture language components. And something struck us even more yet: the ease with which those systematic mappings between graphical forms and meaning could be expressed in AZee. This section explores these new waters a little deeper. AZee AZee is a framework to represent SL in a way that is both linguistically relevant and formal, in other words unambiguously interpretable by both humans, e.g. for linguistic accounts of phenomena, and computers, e.g. for synthesis. We have published about it enough to avoid too long a diversion here BIBREF23 , BIBREF24 , BIBREF0 , but this section summarises the key elements and properties of the model, on which we build our next proposition. AZee is an umbrella term for: the general approach to SL description, summarised below, based on production rules and free synchronisation of the whole body articulator set; a programming language to formalise those rules; the software tool able to compile correctly formed input and generate sign scores, then usable by external software to synthesise and render animations. The entire approach is built around the duality between observable form and semantic function, and aimed at bridging them together. To do so for a given SL, production rules are formalised, each associating necessary forms to an identified semantic function. For example in LSF, the semantic function “dog” is associated the form shown in fig. FIGREF63 a. This association allows to define a production rule which when applied generates a signing score specifying the gestures/movements (forms) to articulate to mean “dog”. Notably, all of this is done with no level distinction such as morphology vs. lexicon. The fact that the result of “dog” is often categorised as a lexical production is irrelevant as far as the model is concerned. A rule can be parametrised if parts of its form depend on an interpretable piece of context. The meaning “surgical cut, scar between points INLINEFORM0 and INLINEFORM1 on the body” is associated a form which includes a movement between INLINEFORM2 and INLINEFORM3 . Fig. FIGREF63 b gives an example with both points on the abdomen. The rule is parametrised with the two point arguments accordingly, and specifies a resulting score whose description depends on those arguments. Later applying the rule to two (meaningful and only given in context) points of the body will automatically generate the appropriate form, in accordance with those points. Parameters can be of any type defined by AZee (geometric vector, numerical, left/right body side...). In particular, rules can be parametrised with signing score arguments to allow recursive use of production rule applications as arguments for others. 
For example, the semantic function “it is generally agreed that INLINEFORM0 ” produces a signing score whose specification is a mouth gesture (lip pout) synchronised with INLINEFORM1 with a time offset (see fig. FIGREF67 ). We have elsewhere called this function “non-subjectivity” BIBREF0 . This all together allows to generate functional expressions, which when evaluated produce complex utterances nesting scores in one another. For example, expression (E1) below combines 4 rules and evaluates to the signing score given in fig. FIGREF69 . info-about(dog(), non-subjectivity(nice-kind())) In the expression: “info-about( INLINEFORM0 , INLINEFORM1 )” means “ INLINEFORM2 is the (focused) information about INLINEFORM3 ” and produces the score in fig. FIGREF68 ; “nice-kind” has a self-explanatory name and produces the form shown in fig. FIGREF63 c when applied. For legibility, AZee expressions are often represented as trees where child nodes are the nested expressions used as arguments of the parent in the right order. For (E1): [.info-about dog [.non-subjectivity nice-kind ] ] It was observed that recursively combined rules produce scores that can be interpreted as a whole as the combination of their respective interpretations. For example, the reader might already have interpreted (E1) as meaning “dogs are [generally thought to be] nice”, which is what one interprets from the production scored in fig. FIGREF69 (to the extent that such constructed examples allow out of context). Therefore while AZee trees look like syntactic trees, they are rather comparable to semantic representations because unlike syntactic trees, every rule node carries meaning (or would not exist at all). Bridging VD to AZee The quest for AZee production rules in LSF has now been going on for a few years. And with no claim of it being complete yet, our current set usually allows to almost cover monologues of the informative type such as news reports. This makes us consider the approach as worth pursuing and the state of the rule set, if not definite, solid enough to entrust. Now as we hinted while introducing the section, patterns in the collected diagrams were found, which could easily be expressed with identified AZee rules. Let us list a few. We do not give counts or statistics as they will not be meaningful at this stage, but we do give a few of the clear qualitative tendencies. We have already mentioned trivial examples while discussing the tendency for logography on what we called the atomic level. For example, the drawing of a dog corresponds to the rule “dog”. But other patterns arise on higher levels. An example of a repeated pattern is the use of context bars: a piece of the drawing INLINEFORM0 (which can itself combine multiple pieces) is “followed” by a straight line separating it from a second piece INLINEFORM1 . The already shown fig. FIGREF45 contains an example of this feature. The overall interpretation is that information INLINEFORM2 is focused, but given in the context of INLINEFORM3 . In the SL equivalent (source or result), the portion corresponding to INLINEFORM4 is always signed first, and followed by INLINEFORM5 . We often make the same interpretation of the following colour change pattern: a scene INLINEFORM0 is drawn, on top of which a piece of drawing INLINEFORM1 is superimposed in a different colour, with INLINEFORM2 being signed after INLINEFORM3 in the SL production. Fig. FIGREF52 b is an example of such colour change. 
The two animal entities set up a context scene (initial positions) in which the blue arrow (movement of the mouse) is the focused event. Fig. FIGREF43 exhibits the same pattern on a larger scale: the exchange of letters drawn in red is the focused information, and occurs between two sides set up as context by the rest of the drawing. And the same goes for fig. FIGREF58 . Of course this is limited to cases where colour was available; some informants have indeed chosen to use a single colour. Also, we found other uses of colour change, but focusing/highlighting a piece INLINEFORM4 in a contextualising scene INLINEFORM5 is a frequent one. The noteworthy thing here is that both colour highlighting and context bars have a direct mapping to the one-rule AZee expression “context( INLINEFORM6 , INLINEFORM7 )”, which means “ INLINEFORM8 in context INLINEFORM9 ”. Another repeated pattern is when two pieces INLINEFORM0 and INLINEFORM1 are drawn side by side with more or less similar sizes and an equal sign (“=”) in between, as shown in fig. FIGREF72 . These instances match the AZee expression “info-about( INLINEFORM2 , INLINEFORM3 )”, already introduced above. A graphical pattern using bullet lists (e.g. fig. FIGREF73 ) also emerged for exhaustive (closed) enumerations of simultaneously true/applicable items, which has its AZee rule “each-of”... Such regularities keep surfacing in the diagrams. We must investigate them further and try our observations statistically on the full corpus to come. It will exceed 200 drawings, and keep growing as more informants might still wish to contribute afterwards. But to summarise, at this point we make the two following observations: spontaneous verbalising diagrams and AZee trees are both in essence close to semantic representations shaped for production in the target SL; they share common structuring and composition elements to represent the meaning. This apparent proximity made us want to explore the possibility of a bridge between the two types of representation. Let us first acknowledge an interesting symmetry between them. First, even in the form of trees, AZee expressions are of a mathematical/formal nature, in other words friendly only to those familiar with the model, not human-oriented drawings easy to draw and decipher. On the contrary, VDs are graphical objects spontaneously used by many who wish to put SL in some form of writing. Thus they can only be viewed as accessible to humans, and considering our goal, as a way to ease the interface between users and software. Conversely, VDs do not provide full access to the forms to produce to read them. Whereas, unlike formal semantic representations like conceptual dependency BIBREF26 , semantic graphs BIBREF22 or more theoretical concepts like “interlingua” which are intended to be detached from any specific language, every AZee expression produces definite forms. That is to say that given any representation in AZee, a computer program can automatically generate the corresponding sign score. In the terms defined in our introduction, it is synthesisable, which is a desired property for our editable form. This property comes from the fact that when building an AZee rule set, any abstraction of observed signed forms behind semantically relevant rules is done by embedding a link to the factorised forms inside the abstracted rule. So the forms are hidden in subsequent expressions invoking the rule, but retrievable to produce a result. We have elsewhere called this building “from the target and back” BIBREF27 . 
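To make the synthesisable, recursive nature of such expressions more concrete, the following is a minimal sketch of how an expression like (E1) could be held as a tree of rule applications and evaluated bottom-up into a score. It is only an illustration under our own assumptions: the Rule and Expr classes, their method names and the placeholder string "scores" are hypothetical and do not reproduce the actual AZee language or software.

```python
# Minimal sketch (not the actual AZee implementation) of holding a recursive
# expression such as (E1) info-about(dog(), non-subjectivity(nice-kind()))
# as a tree of rule applications and evaluating it bottom-up.
# Rule/Expr and the string "scores" are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    name: str
    # Maps already-evaluated argument scores to the score of this application;
    # in real AZee this would build a timed articulation score, not a string.
    produce: Callable[..., str]

@dataclass
class Expr:
    rule: Rule
    args: List["Expr"] = field(default_factory=list)

    def evaluate(self) -> str:
        # Children first, then the parent rule: mirrors the nesting of scores
        # described for (E1).
        return self.rule.produce(*[a.evaluate() for a in self.args])

dog = Rule("dog", lambda: "<score: sign DOG>")
nice_kind = Rule("nice-kind", lambda: "<score: sign NICE-KIND>")
non_subj = Rule("non-subjectivity", lambda s: f"<score: lip pout over {s}>")
info_about = Rule("info-about", lambda topic, info: f"<score: {topic} then {info}>")

e1 = Expr(info_about, [Expr(dog), Expr(non_subj, [Expr(nice_kind)])])
print(e1.evaluate())  # nested score for "dogs are generally thought to be nice"
```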
Looking to translate forward from VD to AZee would currently be an ill-defined task as we have seen that VD is all but non-standard. Instead, we propose to follow the idea above and build a graphical tool kit back from AZee, in other words to define graphics, icons, symbol layouts, etc. for AZee rules and structures we already know exist for sure. Then like no AZee rule exists without an associated interpretation and form description, no graphics will be made available without an associated AZee equivalent—which itself comes with meaning and forms to produce. So the Ariadne's thread leading to the ultimate forms will always be preserved. A new type of editable SL representation The simplest plan to start building back from AZee without losing coverage is to assign a graphical form to every possible node of an AZee tree. Such node is of either kind below: a rule node referring to a named production rule: this is the most expected case, and indeed that of all nodes in example (E1); a native node containing an AZee expression to build or reference basic/geometric objects like a numerical value or a point in space or on the body: for example, two native leaf nodes would come under “scar, surgical cut” as arguments for the parent rule node. On top of this, here are two additional recommendations we wish to follow: the first tentative graphics for an AZee target should be close to any spontaneous regularity already observed in the diagrams, as it should maximise intuitive reading and minimise the difference with VD; for atomic symbols or icons, prefer Unicode characters, as their code points are secured across future fonts and systems, and many are available, all the more so as emojis are entering the standard set. For example, the rule “dog” can be assigned the atomic symbol (a) in fig. FIGREF81 , and “nice-kind” the symbol (b). They are both standard characters, with respective code points U+1F415 and U+1F493. Furthermore, rule “non-subjectivity” can be graphically represented with the addition of, say, a tick mark over its argument INLINEFORM0 as shown in (c), and template “info-about( INLINEFORM1 , INLINEFORM2 )” with the arrangement (d) observed in our data. Accounting for the recursive nature of AZee expressions, diagrams for INLINEFORM3 , INLINEFORM4 and INLINEFORM5 can themselves be diagrams of complex expressions, which creates the possibility of recursive diagrams. Combining these graphical operations, expression (E1) would then be encoded as shown in figure FIGREF82 . This is very similar to the way arithmetic expressions are written in the standard math script, as their elements are operators and atomic elements nested in one another to form one recursive structure. That is, figure FIGREF82 is to (E1) as the written expression INLINEFORM0 is to the recursive structure below, representing operator and argument nesting levels: [.square-root [.fraction [.sum INLINEFORM0 INLINEFORM1 ] INLINEFORM2 ] ] More graphics can later be defined for larger AZee templates, i.e. AZee sub-trees of more than a single node, for constructions that are frequent and semantically salient enough. For example, figures FIGREF57 a and FIGREF57 b anchor a discourse INLINEFORM0 on the left-hand side ( INLINEFORM1 is signed with hands and shoulder line turned to the left), then another INLINEFORM2 on the right. 
This is a frequent form for comparison or opposition of some sort between INLINEFORM3 and INLINEFORM4 , for which a possible AZee tree is: [.each-of [.localised-discourse Lssp INLINEFORM0 ] [.localised-discourse Rssp INLINEFORM1 ] ] This template could be abstracted as a whole into a binary graphic operator to further abstract the combination into something that directly makes sense to the users like “opposition of INLINEFORM0 and INLINEFORM1 ”. For example, we could represent it as shown in figure FIGREF84 . Following the same principle, we can provide more and more useful abstractions of AZee templates into compact graphical representations. At any rate, we see how this develops into a planar (2d) representation with a recursive structure, equivalent in content to an AZee expression but more helpful in appearance to human apprehension. This completes the list of properties initially expressed as our goals for use in software. Namely, our proposition is: editable, as pieces can be copy-pasted and edited like formulae in many math editors; queryable, as the input structure can be parsed by a computer and its contents searched; human-readable, because it is graphical, and the graphics chosen to be iconic and resemble what humans already produce spontaneously; synthesisable, as they are equivalent to synthesisable AZee expressions, as explained in § SECREF70 . The prospect opened here is that of SL editing software that enables saving, modifying and sharing content, enables quick search functionalities like the now ubiquitous “Ctrl+F” shortcut in applications, and is linkable to signing avatars for regular (oral) SL rendering. Prospect for writing SL As we said right at the beginning, we expect users of a Sign Language to want software input to match their own written practice if they have one some day. Thus we now want to consider our above proposition outside of its intended scope of software integration, and characterise its properties in the midst of the scripts mentioned so far. For a better grasp of what such proposition would turn out like, we have written the full AZee tree for a signed version of the short story La bise et le soleil, and encoded each node with a symbol as proposed, with only a few approximations or assumptions when AZee coverage was still limited (unexplored language phenomena). Figure FIGREF90 shows the result for the ten pieces of the overall 80-second performance. The actual look of the writing shown here here is outside of our present consideration. The symbols and line styles put out are all only tentative, if not mere dummies to instantiate our theoretical approach as it can develop. We even leave out explaining the encoded tree, since our interest at this point is only to characterise the type of script we are dealing with, and some of its properties. The first question of logography vs. phonography proves tricky if we state the fact that except for native nodes, every glyph refers to a form–meaning association. In other words then, what is written is not either a form or a meaning, but necessarily both. For example, the tick mark suggested in fig. FIGREF81 c maps to both the “generally found true” interpretation and the lip pout form, jointly. From this angle, we would have to question the logo–phono dichotomy at its roots. But while symbols could in theory be arbitrary, they never appear to be so in spontaneous VD, rather as we noticed, they systematically appear as iconic of something. 
So if we keep supporting inclusion of observed VD symbols in our script, we are led to prefer iconic symbols over abstract ones. As we have already mentioned, this compares to many proposed systems for SL, and similarly contrasts with the writing systems used in the vocal world. Then, the question is raised of what they should be iconic of, and the two-way characterisation of the script makes sense again if we consider the iconic references chosen for its symbols. The nature and structure of the script seems to invalidate the initial question, but the iconicity put into its glyphs reenables it. For example, the tick mark proposed in figure FIGREF81 c is more iconic of its meaning, but could instead be written as lips to indicate the associated form. We have already touched on this while trying to characterise VD as a type of script in section SECREF42 . We have seen that in most of spontaneous VD, glyphs were iconic of the meaning, while they would still occasionally refer to form. For instance it is possible that lips be favoured over the tick mark, as the choice can fall under list item list-item:salient-form-abstract-meaning. Whatever the ultimate choices, it seems unlikely that such system grow into the 100% phonographic state of the other designed propositions. By analogy with VD, we rather predict a mix with logographic references, if not even a dominant number of them. As we have shown in § SECREF24 , unbalanced mixes of phonography with logography are a typical feature of the world's writing systems. This time our system would therefore be no different. The issue of directionality must also be revisited, because no linear layout is assumed to begin with. We do note some directional reading of operators with arity greater than one and whose reversal would seem unnatural (depending on the subject's culture and literacy) or ambiguous (an arrow can be reversed, but not an equal sign). But they are local issues related to specific operators used in the script, not a property of the script design itself which would organise, say, a whole page systematically. Instead, it is 2-dimensional, or planar. What is more, it gives an account of the embeddings of the written discourse pieces nested in one another, which allows to qualify the system as inherently recursive. This scripting layout is identical to that of the preferred mathematical script, which lays out a planar recursive structure, including local directionality for specific operations. So these last two properties would be rather new if ported to a natural language script, but are not alien to scripting, handwriting or reading practice. Conclusion In this paper, we created an editable, queryable, human-readable and synthesisable representation to implement in SL-related software. It builds on two complementary grounds: The point of sourcing from VD is to tap into already existing practice, increase intuitive use and human apprehension of the system, and to inspire the graphical layout of the resulting diagrams. The point of AZee is to take advantage of the formal background, and cancel the subjectivity that is intrinsic to VD's personal and non-standardised practice. By building backwards from the formal base, we also guarantee synthesisable input. We now wish to try out our proposition by implementing and demonstrating a simple software editor. In a final section, we have opened the prospect of seeing our proposition used outside of the restricted scope of software interaction, in particular as a writing system. 
We have compared its properties to the other writing paradigms presented, whether existing for vocal languages or designed for SLs. We have shown that it fits key characteristics of writing systems (mixing phonography and logography), while exhibiting comparatively new properties (iconic, planar, recursive). We hope that, after further development, SL users will eventually be tempted to test it: doing so will at the very least help us shape what is implemented in software, and will also feed the important and difficult reflection on SL writing and literacy. Whether it will actually prove robust and adaptable to all the uses signers may require of a writing system in the future is, of course, not yet possible to tell. Acknowledgements We express warm thanks to INJS (Paris, France) and Interpretis (Toulouse, France) for sharing their pupils' and translators' diagram productions, as well as for the time spent discussing their use and purpose in their professions.
No
43ee69902a5fc1e3c7bacc4456d3f779c45a911d
43ee69902a5fc1e3c7bacc4456d3f779c45a911d_0
Q: Does CLSTM have any benefits over BERT? Text: Introduction Documents have sequential structure at different hierarchical levels of abstraction: a document is typically composed of a sequence of sections that have a sequence of paragraphs, a paragraph is essentially a sequence of sentences, each sentence has sequences of phrases that are comprised of a sequence of words, etc. Capturing this hierarchical sequential structure in a language model (LM) BIBREF0 can potentially give the model more predictive accuracy, as we have seen in previous work BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . A useful aspect of text that can be utilized to improve the performance of LMs is long-range context. For example, let us consider the following three text segments: 1) Sir Ahmed Salman Rushdie is a British Indian novelist and essayist. He is said to combine magical realism with historical fiction. 2) Calvin Harris & HAIM combine their powers for a magical music video. 3) Herbs have enormous magical power, as they hold the earth's energy within them. Consider an LM that is trained on a dataset having the example sentences given above — given the word “magical”, what should be the most likely next word according to the LM: realism, music, or power? In this example, that would depend on the longer-range context of the segment in which the word “magical” occurs. One way in which the context can be captured succinctly is by using the topic of the text segment (e.g., topic of the sentence, paragraph). If the context has the topic “literature”, the most likely next word should be “realism”. This observation motivated us to explore the use of topics of text segments to capture hierarchical and long-range context of text in LMs. In this paper, we consider Long-Short Term Memory (LSTM) models BIBREF6 , a specific kind of Recurrent Neural Networks (RNNs). The LSTM model and its different variants have achieved impressive performance in different sequence learning problems in speech, image, music and text analysis BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , where it is useful in capturing long-range dependencies in sequences. LSTMs substantially improve our ability to handle long-range dependencies, though they still have some limitations in this regard BIBREF16 , BIBREF1 . RNN-based language models (RNN-LMs) were proposed by Mikolov et al. BIBREF17 , and in particular the variant using LSTMs was introduced by Sundermeyer et al. BIBREF18 . In this paper, we work with LSTM-based LMs. Typically LSTMs used for language modeling consider only words as features. Mikolov et al. BIBREF19 proposed a conditional RNN-LM for adding context — we extend this approach of using context in RNN-LMs to LSTMs, train the LSTM models on large-scale data, and propose new tasks beyond next work prediction. We incorporate contextual features (namely, topics based on different segments of text) into the LSTM model, and call the resulting model Contextual LSTM (CLSTM). In this work we evaluate how adding contextual features in the CLSTM improves the following tasks: 1) Word prediction: Given the words and topic seen so far in the current sentence, predict the most likely next word. This task is important for sentence completion in applications like predictive keyboard, where long-range context can improve word/phrase prediction during text entry on a mobile phone. 2) Next sentence selection: Given a sequence of sentences, find the most likely next sentence from a set of candidates. 
This is an important task in question/answering, where topic can be useful in selecting the best answer from a set of template answers. This task is also relevant in other applications like Smart Reply BIBREF20 , for predicting the best response to an email from a set of candidate responses. 3) Sentence topic prediction: Given the words and topic of the current sentence, predict the topic of the next sentence. We consider two scenarios: (a) where we don't know the words of the next sentence, (b) where we know the words of the next sentence. Scenario (a) is relevant for applications where we don't know the words of a user's next utterance, e.g., while predicting the topic of response of the user of a dialog system, which is useful in knowing the intent of the user; in scenario (b) we try to predict the topic/intent of an utterance, which is common in a topic modeling task. The main contributions of this paper are as follows: 1) We propose a new Contextual LSTM (CLSTM) model, and demonstrate how it can be useful in tasks like word prediction, next sentence scoring and sentence topic prediction – our experiments show that incorporating context into an LSTM model (via the CLSTM) gives improvements compared to a baseline LSTM model. This can have potential impact for a wide variety of NLP applications where these tasks are relevant, e.g. sentence completion, question/answering, paraphrase generation, dialog systems. 2) We trained the CLSTM (and the corresponding baseline LSTM) models on two large-scale document corpora: English documents in Wikipedia, and a recent snapshot of English Google News documents. The vocabulary we handled in the modeling here was also large: 130K words for Wikipedia, 100K for Google news. Our experiments and analysis demonstrate that the CLSTM model that combines the power of topics with word-level features yields significant performance gains over a strong baseline LSTM model that uses only word-level features. For example, in the next sentence selection task, CLSTM gets a performance improvement of 21% and 18% respectively over the LSTM model on the English Wikipedia and Google News datasets. 3) We show initial promising results with a model where we learn the thought embedding in an unsupervised manner through the model structure, instead of using supervised extraneous topic as side information (details in Section "Using Unsupervised Topic Signals" ). Related Work There are various approaches that try to fit a generative model for full documents. These include models that capture the content structure using Hidden Markov Models (HMMs) BIBREF21 , or semantic parsing techniques to identify the underlying meanings in text segments BIBREF22 . Hierarchical models have been used successfully in many applications, including hierarchical Bayesian models BIBREF23 , BIBREF24 , hierarchical probabilistic models BIBREF25 , hierarchical HMMs BIBREF26 and hierarchical CRFs BIBREF27 . As mentioned in Section "Model" , RNN-based language models (RNN-LMs) were proposed by Mikolov et al. BIBREF17 , and the variant using LSTMs was introduced by Sundermeyer et al. BIBREF18 – in this paper, we work with LSTM-based LMs. Mikolov et al. BIBREF19 proposed a conditional RNN-LM for adding context — we extend this approach of using context in RNN-LMs to LSTMs. Recent advances in deep learning can model hierarchical structure using deep belief networks BIBREF28 , BIBREF5 , BIBREF29 , BIBREF30 , especially using a hierarchical recurrent neural network (RNN) framework. 
In Clockwork RNNs BIBREF31 the hidden layer is partitioned into separate modules, each processing inputs at its own individual temporal granularity. Connectionist Temporal Classification or CTC BIBREF32 does not explicitly segment the input in the hidden layer – it instead uses a forward-backward algorithm to sum over all possible segments, and determines the normalized probability of the target sequence given the input sequence. Other approaches include a hybrid NN-HMM model BIBREF33 , where the temporal dependency is handled by an HMM and the dependency between adjacent frames is handled by a neural net (NN). In this model, each node of the convolutional hidden layer corresponds to a higher-level feature. Some NN models have also used context for modeling text. Paragraph vectors BIBREF34 , BIBREF35 propose an unsupervised algorithm that learns a latent variable from a sample of words from the context of a word, and uses the learned latent context representation as an auxiliary input to an underlying skip-gram or Continuous Bag-of-words (CBOW) model. Another model that uses the context of a word infers the Latent Dirichlet Allocation (LDA) topics of the context before a word and uses those to modify an RNN model predicting the word BIBREF19 . Tree-structured LSTMs BIBREF36 , BIBREF5 extend chain-structured LSTMs to the tree structure and propose a principled approach of considering long-distance interaction over hierarchies, e.g., language or image parse structures. Convolution networks have been used for multi-level text understanding, starting from character-level inputs all the way to abstract text concepts BIBREF37 . Skip thought vectors have also been used to train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage BIBREF38 . Other related work includes Document Context Language models BIBREF39 , where the authors have multi-level recurrent neural network language models that incorporate context from within a sentence and from previous sentences. Lin et al. BIBREF40 use a hierarchical RNN structure for document-level as well as sentence-level modeling – they evaluate their models using word prediction perplexity, as well as an approach of coherence evaluation by trying to predict sentence-level ordering in a document. In this work, we explore the use of long-range hierarchical signals (e.g., sentence level or paragraph level topic) for text analysis using an LSTM-based sequence model, on large-scale data — to the best of our knowledge this kind of contextual LSTM model, which models the context using a 2-level LSTM architecture, has not been trained before at scale on text data for the NLP tasks mentioned in Section "Model" . Word Prediction Of the three different tasks outlined in Section "Model" , we focus first on the word prediction task, where the goal is to predict the next word in a sentence given the words and context (captured via topic) seen previously. Let $s_i$ be the $i^{th}$ sentence in a sequence of sentences, $w_{i,j}$ be the $j^{th}$ word of sentence $s_i$ , $n_i$ be the number of words in $s_i$ , and $w_{i,j} \ldots w_{i,k}$ indicate the sequence of words from word $j$ to word $k$ in sentence $i$ . Note that sentence $s_i$ is equivalent to the sequence of words $w_{i,0} \ldots w_{i,n_i-1}$ . Let $T$ be the random variable denoting the topic – it is computed based on a particular subsequence of words seen from the first word of the sequence ( $w_{0,0}$ ) to the current word ( $w_{i,j}$ ).
This topic can be based on the current sentence segment (i.e., $T = Topic(w_{i,0} \ldots w_{i,j})$ ), or the previous sentence (i.e., $T = Topic(w_{i-1,0} \ldots w_{i-1,n_{i-1}-1})$ ), etc. Details regarding the topic computation are outlined in Section "Results on Google News data" . Using this notation, the word prediction task in our case can be specified as follows: given a model with parameters $\Theta $ , words $w_{0,0} \ldots w_{i,j}$ and the topic $T$ computed from a subsequence of the words from the beginning of the sequence, find the next word $w_{i,j+1}$ that maximizes the probability: $P(w_{i,j+1} | w_{0,0} \ldots w_{i,j}, T, \Theta )$ . Model For our approach, as explained before, we introduce the power of context into a standard LSTM model. LSTM is a recurrent neural network that is useful for capturing long-range dependencies in sequences. The LSTM model has multiple LSTM cells, where each LSTM cell models the digital memory in a neural network. It has gates that allow the LSTM to store and access information over time. For example, the input/output gates control cell input/output, while the forget gate controls the state of the cell. The word-prediction LSTM model was implemented in the large-scale distributed Google Brain framework BIBREF41 . The model takes words encoded in 1-hot encoding from the input, converts them to an embedding vector, and consumes the word vectors one at a time. The model is trained to predict the next word, given a sequence of words already seen. The core algorithm used to train the LSTM parameters is BPTT BIBREF42 , using a softmax layer that uses the id of the next word as the ground truth. To adapt the LSTM cell that takes words to a CLSTM cell that takes as input both words and topics, we modify the equations representing the operations of the LSTM cell BIBREF43 to add the topic vector $T$ to the input gate, forget gate, cell and output gate ( $T$ is the embedding of the discrete topic vector). In each of the following equations, the term in bold is the modification made to the original LSTM equation. $$i_t &=& \sigma (W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i + \mathbf {W_{Ti} T}) \nonumber \\ f_t &=& \sigma (W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f + \mathbf {W_{Ti} T}) \nonumber \\ c_t &=& f_t c_{t-1} + i_t \tanh (W_{xc} x_t + W_{hc} h_{t-1} + b_c + \mathbf {W_{Ti} T}) \nonumber \\ o_t &=& \sigma (W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_{t} + b_o + \mathbf {W_{Ti} T}) \nonumber \\ h_t &=& o_t \tanh (c_t) $$ (Eq. 2) In these equations $i$ , $f$ and $o$ are the input gate, forget gate and output gate respectively, $x$ is the input, $b$ is the bias term, $c$ is the cell memory, and $h$ is the output. As an example, consider the input gate equation: $$i_t =& \sigma (W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i) \nonumber \\ =& \sigma ([W_{xi} \ W_{hi} \ W_{ci} \ 1] [x_t \ h_{t-1} \ c_{t-1} \ b_i]^T) $$ (Eq. 3) When we add the topic signal $T$ to the input gate, the equation is modified to: $$i_t =& \sigma (W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i + W_{Ti} T) \nonumber \\ =& \sigma ([W_{xi} \ W_{Ti} \ W_{hi} \ W_{ci} \ 1] [x_t \ T \ h_{t-1} \ c_{t-1} \ b_i]^T) $$ (Eq. 4) Comparing the last two equations, Equations 3 and 4 , we see that having a topic vector $T$ added into the CLSTM cell is equivalent to considering a composite input $[x_t \ T]$ to the LSTM cell that concatenates the word embedding and topic embedding vectors. This approach of concatenating topic and word embeddings in the input worked better in practice than other strategies for combining topics with words.
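Since Equations 3 and 4 show that adding the topic term to every gate is equivalent to feeding the concatenation of word and topic embeddings to a standard LSTM cell, a CLSTM step can be sketched with an off-the-shelf recurrent cell. The PyTorch sketch below is our own minimal illustration and not the original Google Brain implementation; the word and topic embedding sizes are assumptions, while the hidden size, vocabulary size and topic count roughly follow the figures reported in the paper.

```python
# Minimal PyTorch sketch (not the original implementation) of a CLSTM step:
# adding W_Ti * T to each gate is equivalent to feeding the concatenation
# [x_t, T] to a standard LSTM cell (Equations 3-4). Embedding sizes are
# hypothetical; vocab/topic/hidden sizes approximate the paper's setup.
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, vocab_size=130_000, num_topics=1600,
                 word_dim=256, topic_dim=64, hidden_dim=1024):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.topic_emb = nn.Embedding(num_topics, topic_dim)
        self.cell = nn.LSTMCell(word_dim + topic_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)  # softmax over next word

    def forward(self, word_ids, topic_ids):
        # word_ids, topic_ids: (batch, seq_len); one topic id per time step,
        # e.g. the topic of the sentence prefix seen so far.
        batch, seq_len = word_ids.shape
        h = word_ids.new_zeros(batch, self.cell.hidden_size, dtype=torch.float)
        c = torch.zeros_like(h)
        logits = []
        for t in range(seq_len):
            x = torch.cat([self.word_emb(word_ids[:, t]),
                           self.topic_emb(topic_ids[:, t])], dim=-1)
            h, c = self.cell(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (batch, seq_len, vocab_size)
```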
Figure 1 shows the schematic figure of a CLSTM model that considers both word and topic input vectors. Note that we add the topic input to each LSTM cell since each LSTM cell can potentially have a different topic. For example, when the topic is based on the sentence segment seen so far (see Section UID10 ), the topic is based on the current sentence prefix — so, each LSTM cell can potentially have a different topic. Note that in some setups each LSTM cell in a layer could have the same topic, e.g., when the topic is derived from the words in the previous sentence. We trained a baseline LSTM model on the words of $A_i$ , $B_i$ and $C_i$ to predict the words of $D_i$ . The CLSTM model uses words from $A_i$ , $B_i$ , $C_i$ , and topics of $A_i$ , $B_i$ , $C_i$ and $B_i$0 , to predict the words of $B_i$1 . Note that in this case we can use the topic of $B_i$2 since all the candidate next sentences are given as input in the next sentence scoring task. For 1024 hidden units, the perplexity of the baseline LSTM model after convergence of model training is 27.66, while the perplexity of the CLSTM model at convergence is 24.81. This relative win of 10.3% in an intrinsic evaluation measure (like perplexity) was the basis for confidence in expecting good performance when using this CLSTM model for the next sentence scoring task. For the sentence topic prediction task, we determined through ablation experiments that the unrolled model architecture, where each sentence in a paragraph is modeled by a separate LSTM model, has better performance than the rolled-up model architecture used for word prediction where the sentences in a paragraph are input to a single LSTM. HTM: Supervised Topic Labels The topics of the text segments can be estimated using different unsupervised methods (e.g., clustering) or supervised methods (e.g., hierarchical classification). For the word prediction task we use HTM, a hierarchical topic model for supervised classification of text into a hierarchy of topic categories, based on the Google Rephil large-scale clustering tool BIBREF44 . There are about 750 categories at the leaf level of the HTM topic hierarchy. Given a segment of text, HTM gives a probability distribution over the categories in the hierarchy, including both leaf and intermediate categories. We currently choose highest probability topic as the most-likely category of the text segment. Experiments We trained different types of CLSTM models for the word prediction task. The different types of features used in the different CLSTM models are shown schematically in Figure 2 . The hierarchical features that we used in different variants of the word prediction model are: PrevSentTopic = TopicID of the topic computed based on all the words of the previous sentence, i.e., $T = Topic(w_{i-1,0} \ldots w_{i-1,n_{i-1}-1})$ . SentSegTopic = TopicID of the topic computed based on the words of the current sentence prefix until the current word, i.e., $T = Topic(w_{i,0} \ldots w_{i,j})$ . ParaSegTopic = TopicID of the topic computed based on the paragraph prefix until the current word, i.e., $T = Topic(w_{0,0} \ldots w_{i,j})$ . where $T$ is defined in Section "Word Prediction" . For our experiments, we used the whole English corpus from Wikipedia (snapshot from 2014/09/17). There were 4.7 million documents in the Wikipedia dataset, which we randomly divided into 3 parts: 80% was used as train, 10% as validation and 10% as test set. 
Some relevant statistics of the train, test and validation data sets of the Wikipedia corpus are given in Table 1 . We created the vocabulary from the words in the training data, filtering out words that occurred less than a particular threshold count in the total dataset (threshold was 200 for Wikipedia). This resulted in a vocabulary with 129K unique terms, giving us an out-of-vocabulary rate of 3% on the validation dataset. For different types of text segments (e.g., segment, sentence, paragraph) in the training data, we queried HTM and got the most likely topic category. That gave us a total of $\approx $ 1600 topic categories in the dataset. We trained different CLSTM models with different feature variants till convergence, and evaluated their perplexity on the holdout test data. Here are some key observations about the results (details in Table 2 ): 1) The “Word + SentSegTopic + ParaSegTopic” CLSTM model is the best model, getting the best perplexity. This particular LSTM model uses both sentence-level and paragraph-level topics as features, implying that both local and long-range context is important for getting the best performance. 2) When current segment topic is present, the topic of the previous sentence does not matter. 3) As we increased the number of hidden units, the performance started improving. However, beyond 1024 hidden units, there were diminishing returns — the gain in performance was out-weighed by the substantial increase in computational overhead. Note that we also trained a distributed n-gram model with “stupid backoff” smoothing BIBREF45 on the Wikipedia dataset, and it gave a perplexity of $\approx $ 80 on the validation set. We did not train a n-gram model with Knesner-Ney (KN) smoothing on the Wikipedia data, but on the Google News data (from a particular snapshot) the KN smoothed n-gram model gave a perplexity of 74 (using 5-grams). Note that we were not able to compare our CLSTM models to other existing techniques for integrating topic information into LSTM models (e.g., Mikolov et al. BIBREF19 ), since we didn't have access to implementations of these approaches that can scale to the vocabulary sizes ( $\approx $ 100K) and dataset sizes we worked with (e.g., English Wikipedia, Google News snapshot). Hence, we used a finely-tuned LSTM model as a baseline, which we also trained at scale on these datasets. In our experiments we used the output of HTM as the topic of each sentence. Ideally we would associate a “supervised topic” with each sentence (e.g., the supervision provided by human raters). However, due to the difficulty of getting such human ratings at scale, we used the HTM model to find topics for the sentences. Note that the HTM model is trained on human ratings. We trained 2 baseline models on this dataset. The Word model uses the words of the current sentence to predict the topic of the next sentence – it determines how well we can predict the topic of the next sentence, given the words of the current sentence. We also trained another baseline model, SentTopic, which uses the sentence topic of the current sentence to predict the topic of the next sentence – the performance of this model will give us an idea of the inherent difficulty of the task of topic prediction. We trained a CLSTM model (Word+SentTopic) that uses both words and topic of the current sentence to predict the topic of the next sentence. Figure 2 shows the hierarchical features used in the CLSTM model. We trained all models with different number of hidden units: 256, 512, 1024. 
Each model was trained till convergence. Table 4 shows the comparison of the perplexity of the different models. The CLSTM model beats the baseline SentTopic model by more than 12%, showing that using hierarchical features is useful for the task of sentence topic prediction too. Next Sentence Selection We next focus on the next sentence scoring task, where we are given a sequence of sentences and the goal is to find the most probable next sentence from a set of candidate sentences. An example of this task is shown in Figure 3 . The task can be stated as follows: given a model with parameters $\Theta $ , a sequence of $p-1$ sentences $s_0 \ldots s_{p-2}$ (with their corresponding topics $T_0 \ldots T_{p-2}$ ), find the most likely next sentence $s_{p-1}$ from a candidate set of next sentences $S$ , such that: $ s_{p-1} = \arg \max _{s \in S} P(s | s_0 \ldots s_{p-2}, T_0 \ldots T_{p-2}, \Theta ). $ Problem Instantiation Suppose we are given a set of sequences, where each sequence consists of 4 sentences (i.e., we consider $p$ =4). Let each sequence be $S_i = <A_i B_i C_i D_i>$ , and the set of sequences be $\lbrace S_1,\ldots ,S_k\rbrace $ . Given the prefix $A_i B_i C_i$ of the sequence $S_i$ as context (which we will denote to be $Context_i$ ), we consider the task of correctly identifying the next sentence $D_i$ from a candidate set of sentences: $\lbrace D_0, D_1,\ldots , D_{k-1}\rbrace $ . For each sequence $S_i$ , we compute the accuracy of identifying the next sentence correctly. The accuracy of the model in detecting the correct next sentence is computed over the set of sequences $\lbrace S_1,\ldots ,S_k\rbrace $ . Approach We train LSTM and CLSTM models specifically for the next sentence prediction task. Given the context $Context_i$ , the models find the $D_i$ among the set $\lbrace D_0 \ldots D_{k-1}\rbrace $ that gives the maximum (normalized) score, defined as follows: $$\forall i, score = \frac{P(D_i | Context_i)}{\frac{1}{k} \sum _{j=0}^{k-1} P(D_i | Context_j)}$$ (Eq. 21) In the above score, the conditional probability terms are estimated using inference on the LSTM and CLSTM models. In the numerator, the probability of the word sequence in $D_i$ , given the prefix context $Context_i$ , is estimated by running inference on a model whose state is already seeded by the sequence $A_iB_iC_i$ (as shown in Figure 4 ). The normalizer term $\frac{1}{k} \sum _{j=0}^{k-1} P(D_i | Context_j)$ in the denominator of Equation 21 is the point estimate of the marginal probability $P(D_i)$ computed over the set of sequences, where the prior probability of each prefix context is assumed equal, i.e., $P(Context_j) = \frac{1}{k}, j \in [0, k-1]$ . The normalizer term adjusts the score to account for the popularity of a sentence $D_i$ that naturally has a high marginal probability $P(D_i)$ — we do not allow the popularity of $D_i$ to lead to a high score. Note that for task of next sentence scoring, it's ok to use words of the next sentence when selecting the “best” next sentence. This is because in the task, the possible alternatives are all provided to the model, and the main goal of the model is scoring the alternatives and selecting the best one. This setting is seen in some real-world applications, e.g., predicting the best response to an email from a set of candidate responses BIBREF20 . Experimental Results We ran next sentence scoring experiments with a dataset generated from the test set of the corpora. We divide the test dataset into 100 non-overlapping subsets. 
To create the dataset for next sentence scoring, we did the following: (a) sample 50 sentence sequences $<A_i B_i C_i D_i>$ from 50 separate paragraphs, randomly sampled from 1 subset of the test set – we call this a block; (b) consider 100 such blocks in the next sentence scoring dataset. So, overall there are 5000 sentence sequences in the final dataset. For each sequence prefix $A_i B_i C_i$ , the model has to choose the best next sentence $D_i$ from the competing set of next sentences. The average accuracy of the baseline LSTM model on this dataset is $52\%$ , while the average accuracy of the CLSTM model using word + sentence-level topic features is $63\%$ (as shown in Table 3 ). So the CLSTM model has an average improvement of $21\%$ over the LSTM model on this dataset. Note that on this task, the average accuracy of a random predictor that randomly picks the next sentence from a set of candidate sentences would be $2\%$ . We also ran other experiments, where the negatives (i.e., 49 other sentences in the set of 50) were not chosen randomly — in one case we considered all the 50 sentences to come from the same HTM topic, making the task of selecting the best sentence more difficult. In this case, as expected, the gain from using the context in CLSTM was larger — the CLSTM model gave larger improvement over the baseline LSTM model than in the case of having a random set of negatives. Error Analysis Figures 5 - 7 analyze different types of errors made by the LSTM and the CLSTM models, using samples drawn from the test dataset. Sentence Topic Prediction The final task we consider is the following: if we are given the words and the topic of the current sentence, can we predict the topic of the next sentence? This is an interesting problem for dialog systems, where we ask the question: given the utterance of a speaker, can we predict the topic of their next utterance? This can be used in various applications in dialog systems, e.g., intent modeling. The sentence topic prediction problem can be formulated as follows: given a model with parameters $\Theta $ , words in the sentence $s_i$ and corresponding topic $T_i$ , find the next sentence topic $T_{i+1}$ that maximizes the following probability – $P(T_{i+1} | s_i, T_i, \Theta )$ . Note that in this case we train a model to predict the topic target instead of the joint word/topic target, since we empirically determined that training a model with a joint target gave lower accuracy in predicting the topic compared to a model that only tries to predict the topic as a target. Comparison to BOW-DNN baseline For the task of sentence topic prediction, we also compared the CLSTM model to a Bag-of-Words Deep Neural Network (BOW-DNN) baseline BIBREF46 . The BOW-DNN model extracts bag of words from the input text, and a DNN layer is used to extract higher-level features from the bag of words. For this experiment, the task setup we considered was slightly different in order to facilitate more direct comparison. The goal was to predict the topic of the next sentence, given words of the next sentence. The BOW-DNN model was trained only on word features, and got a test set perplexity of 16.5 on predicting the sentence topic. The CLSTM model, trained on word and topic-level features, got a perplexity of 15.3 on the same test set using 1024 hidden units, thus outperforming the BOW-DNN model by 7.3%. 
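As a minimal sketch of the normalised next-sentence score of Equation 21, the function below picks the best candidate for a given context; the `log_prob(candidate, context)` helper is a hypothetical placeholder standing in for inference on the trained LSTM/CLSTM whose state has been seeded with the context sentences, and is not part of the authors' code.

```python
# Sketch of the normalised scoring in Equation 21, assuming a placeholder
# log_prob(candidate, context) that runs (C)LSTM inference with the model
# state seeded by the prefix A_i B_i C_i.
import math
from typing import Callable, Sequence

def pick_next_sentence(contexts: Sequence[str],
                       candidates: Sequence[str],
                       log_prob: Callable[[str, str], float],
                       i: int) -> int:
    """Return the index of the best next sentence for context i."""
    k = len(contexts)
    best_j, best_score = -1, -math.inf
    for j, cand in enumerate(candidates):
        # P(D_j | Context_i)
        numer = math.exp(log_prob(cand, contexts[i]))
        # Point estimate of the marginal P(D_j), assuming a uniform prior over
        # the k prefix contexts; for long sentences one would stay in log space
        # (e.g. a log-sum-exp) to avoid underflow.
        denom = sum(math.exp(log_prob(cand, contexts[m])) for m in range(k)) / k
        score = numer / denom  # discounts candidates that are likely everywhere
        if score > best_score:
            best_j, best_score = j, score
    return best_j
```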
Using Unsupervised Topic Signals In our experiments with topic features, we have so far considered supervised topic categories obtained from an extraneous source (namely, HTM). One question arises: if we do not use extraneous topics to summarize long-range context, would we get any improvements in performance with unsupervised topic signals? To answer this question, we experimented with “thought embeddings” that are intrinsically generated from the previous context. Here, the thought embedding from the previous LSTM is used as the topic feature in the current LSTM (as shown in Figure 8 ), when making predictions of the topic of the next sentence – we call this context-based thought embedding the “thought vector”. In our approach, the thought vector inferred from the LSTM encoding of sentence $n-1$ is used as a feature for the LSTM for sentence $n$ , in a recurrent fashion. Note that the LSTMs for each sentence in Figure 8 are effectively connected into one long chain, since we don't reset the hidden state at the end of each sentence — so the LSTM for the current sentence has access to the LSTM state of the previous sentence (and hence indirectly to its topic). But we found that directly adding the topic of the previous sentence to all the LSTM cells of the current sentence is beneficial, since it constraints all the current LSTM cells during training and explicitly adds a bias to the model. Our experiments showed that it's beneficial to denoise the thought vector signal using a low-dimensional embedding, by adding roundoff-based projection. Initial experiments using thought vector for sentence-topic prediction look promising. A CLSTM model that used word along with thought vector (PrevSentThought feature in the model) from the previous sentence as features gave a 3% improvement in perplexity compared to a baseline LSTM model that used only words as features. Table 5 shows the detailed results. When we used thought vectors, our results improved over using a word-only model but fell short of a CLSTM model that used both words and context topics derived from HTM. In the future, we would like to do more extensive experiments using better low-dimensional projections (e.g., using clustering or bottleneck mechanisms), so that we can get comparable performance to supervised topic modeling approaches like HTM. Another point to note — we have used HTM as a topic model in our experiments as that was readily available to us. However, the CLSTM model can also use other types of context topic vectors generated by different kinds of topic modeling approaches, e.g., LDA, KMeans. Results on Google News data We also ran experiments on a sample of documents taken from a recent (2015/07/06) snapshot of the internal Google News English corpus. This subset had 4.3 million documents, which we divided into train, test and validation datasets. Some relevant statistics of the datasets are given in Table 6 . We filtered out words that occurred less than 100 times, giving us a vocabulary of 100K terms. We trained the baseline LSTM and CLSTM models for the different tasks, each having 1024 hidden units. Here are the key results: 1) Word prediction task: LSTM using only words as features had perplexity of $\approx $ 37. CLSTM improves on LSTM by $\approx $ 2%, using words, sentence segment topics and paragraph sentence topics. 2) Next sentence selection task: LSTM gave an accuracy of $\approx $ 39%. CLSTM had an accuracy of $\approx $ 46%, giving a 18% improvement on average. 
3) Next sentence topic prediction task: LSTM using only current sentence topic as feature gave perplexity of $\approx$ 5. CLSTM improves on LSTM by $\approx$ 9%, using word and current sentence topic as features. As we see, we get similar improvements of the CLSTM model over the LSTM model for both the Wikipedia and Google News datasets, for each of the chosen NLP tasks. Conclusions We have shown how using contextual features in a CLSTM model can be beneficial for different NLP tasks like word prediction, next sentence selection and topic prediction. For the word prediction task CLSTM improves on state-of-the-art LSTM by 2-3% on perplexity, for the next sentence selection task CLSTM improves on LSTM by $\approx$ 20% on accuracy on average, while for the topic prediction task CLSTM improves on state-of-the-art LSTM by $\approx$ 10% (and improves on BOW-DNN by $\approx$ 7%). These gains are all quite significant and we get similar gains on the Google News dataset (Section "Results on Google News data"), which shows the generalizability of our approach. Initial results using unsupervised topic signals with thought vectors, instead of supervised topic models, are promising. The gains obtained by using the context in the CLSTM model have major implications for performance improvements in multiple important NLP applications, ranging from sentence completion, question/answering, and paraphrase generation to different applications in dialog systems. Future Work Our initial experiments on using unsupervised thought vectors for capturing long-range context in CLSTM models gave promising results. A natural extension of the thought vector model in Figure 8 is a model that has a connection between the hidden layers, to be able to model the “continuity of thought”. Figure 9 shows one such hierarchical LSTM (HLSTM) model, which has a 2-level hierarchy: a lower-level LSTM for modeling the words in a sentence, and a higher-level LSTM for modeling the sentences in a paragraph. The thought vector connection from the LSTM cell in layer $n$ to the LSTM cells in layer $n-1$ (corresponding to the next sentence) enables concepts from the previous context to be propagated forward, enabling the “thought” vector of a sentence to influence words of the next sentence. The connection between the sentence-level hidden nodes also allows the model to capture the continuity of thought. We would like to experiment with this model in the future. We would also like to explore the benefits of contextual features in other applications of language modeling, e.g., generating better paraphrases by using word and topic features. Another interesting application could be using topic-level signals in conversation modeling, e.g., using Dialog Acts as a topic-level feature for next utterance prediction. Acknowledgments: We would like to thank Ray Kurzweil, Geoffrey Hinton, Dan Bikel, Lukasz Kaiser and Javier Snaider for useful feedback on this work. We would also like to thank Louis Shao and Yun-hsuan Sung for help in running some experiments.
Unanswerable
702e2d02c25a2f3f6b1be8ad3d448b502b8ced9c
702e2d02c25a2f3f6b1be8ad3d448b502b8ced9c_0
Q: How do they obtain human generated policies? Text: Introduction Humans in general find it relatively easy to have chat-like conversations that are both coherent and engaging at the same time. While not all human chat is engaging, it is arguably coherent BIBREF0 , and it can cover large vocabularies across a wide range of conversational topics. In addition, each contribution by a partner conversant may exhibit multiple sentences, such as greeting+question or acknowledgement+statement+question. The topics raised in a conversation may go back and forth without losing coherence. All of these phenomena represent big challenges for current data-driven chatbots. We present a novel approach for chatbot training based on the reinforcement learning BIBREF1 , unsupervised learning BIBREF2 and deep learning BIBREF3 paradigms. In contrast to other learning approaches for Deep Reinforcement Learning chatbots that rely on partially labelled dialogue data BIBREF4 , BIBREF5 , our approach assumes only unlabelled data. Our learning scenario is as follows: given a dataset of human-human dialogues in raw text (without any manually provided labels), an ensemble of Deep Reinforcement Learning (DRL) agents take the role of one of the two partner conversants in order to learn to select human-like sentences when exposed to both human-like and non-human-like sentences. In our learning scenario the agent-environment interactions consist of agent-data interactions – there is no user simulator as in task-oriented dialogue systems BIBREF6 , BIBREF7 . During each verbal contribution and during training, the DRL agents This process—illustrated in Figure FIGREF6 —is carried out iteratively until the end of a dialogue for as many dialogues as necessary, i.e. until there is no further improvement in the agents' performance. During each verbal contribution at test time, the agent exhibiting the highest predictive dialogue reward is selected for human-agent interactions. This article makes the following contributions to neural-based chatbots: In the next two sections, 2 and 3, we review related work on neural-based chatbots and provide related background on deep reinforcement learning. Then we describe our proposed approach and methodology in section 4. This is followed by a comprehensive set of automatic and human evaluations in section 5, which use (i) a dataset of chitchat conversations, and (ii) human ratings of human-chatbot dialogues. Section 6 draws conclusions and discusses avenues for future research. Background A reinforcement learning agent induces its behaviour from interacting with an environment through trial and error, where situations (representations of sentences in a dialogue history) are mapped to actions (follow-up sentences) by maximising a long-term reward signal. Such an agent is typically characterised by: (i) a finite set of states INLINEFORM0 that describe all possible situations in the environment; (ii) a finite set of actions INLINEFORM1 to change in the environment from one situation to another; (iii) a state transition function INLINEFORM2 that specifies the next state INLINEFORM3 for having taken action INLINEFORM4 in the current state INLINEFORM5 ; (iv) a reward function INLINEFORM6 that specifies a numerical value given to the agent for taking action INLINEFORM7 in state INLINEFORM8 and transitioning to state INLINEFORM9 ; and (v) a policy INLINEFORM10 that defines a mapping from states to actions BIBREF1 , BIBREF29 . 
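As a minimal illustration of these five components in the chatbot setting (a sketch over a hypothetical environment and policy interface, not the system described in this article):

```python
def run_episode(env, policy, gamma=0.99):
    """One dialogue episode: states are dialogue-history representations,
    actions are follow-up sentence choices, rewards score human-likeness."""
    state = env.reset()                          # initial dialogue history
    discounted_return, discount, done = 0.0, 1.0, False
    while not done:
        action = policy(state)                   # pi: state -> follow-up sentence (action)
        state, reward, done = env.step(action)   # state transition + numerical reward
        discounted_return += discount * reward
        discount *= gamma
    return discounted_return
```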
The goal of a reinforcement learning agent is to find an optimal policy by maximising its cumulative discounted reward defined as DISPLAYFORM0 where function INLINEFORM0 represents the maximum sum of rewards INLINEFORM1 discounted by factor INLINEFORM2 at each time step. While a reinforcement learning agent takes actions with probability INLINEFORM3 during training, it selects the best action at test time according to DISPLAYFORM0 A deep reinforcement learning agent approximates INLINEFORM0 using a multi-layer neural network BIBREF30 . The INLINEFORM1 function is parameterised as INLINEFORM2 , where INLINEFORM3 are the parameters or weights of the neural network (recurrent neural network in our case). Estimating these weights requires a dataset of learning experiences INLINEFORM4 (also referred to as `experience replay memory'), where every experience is described as a tuple INLINEFORM5 . Inducing a INLINEFORM6 function consists in applying Q-learning updates over minibatches of experience INLINEFORM7 drawn uniformly at random from the full dataset INLINEFORM8 . This process is implemented in learning algorithms using Deep Q-Networks (DQN) such as those described in BIBREF30 , BIBREF31 , BIBREF32 , and the following section describes a DQN-based algorithm for human-chatbot interaction. Proposed Approach This section explains the main components of Figure FIGREF6 as follows. Motivated by BIBREF33 , we first describe the ensemble of Deep Reinforcement Learning (DRL) agents, we then explain how to conceive a finite set of dialogue actions from raw text, and finally we describe how to assign dialogue rewards for training DRL-based chatbots. Ensemble of DRL Chatbots We assume that all deep reinforcement learning agents in our ensemble use the same neural network architecture and learning algorithm. They only differ in the portion of data used for training and consequently the weights in their trained models—see BIBREF34 , BIBREF35 for alternative approaches. Our agents aim to maximise their cumulative reward over time according to DISPLAYFORM0 where INLINEFORM0 is the numerical reward given at time step INLINEFORM1 for choosing action INLINEFORM2 in state INLINEFORM3 , INLINEFORM4 is a discounting factor, and INLINEFORM5 is the optimal action-value function using weights INLINEFORM6 in the neural network of chatbot INLINEFORM7 . During training, a DRL agent will choose actions in a probabilistic manner in order to explore new INLINEFORM8 pairs for discovering better rewards or to exploit already learnt values—with a reduced level of exploration over time and an increased level of exploitation over time. During testing, our ensemble-based DRL chatbot will choose the best actions INLINEFORM9 according to DISPLAYFORM0 where INLINEFORM0 is a trajectory of state-action pairs of chatbot INLINEFORM1 , and INLINEFORM2 is a function that predicts the dialogue reward of chatbot INLINEFORM3 as in BIBREF36 . Given the set of trajectories for all agents—where each agent takes its own decisions and updates its environment states accordingly—the agent with the highest predictive reward is selected, i.e. the one with the least amount of errors in the interaction. Our DRL agents implement the procedure above using a generalisation of DQN-based methods BIBREF30 , BIBREF31 , BIBREF32 —see Algorithm SECREF15 , explained as follows. 
After initialising replay memory INLINEFORM0 with learning experience INLINEFORM1 , dialogue history INLINEFORM2 with sentences INLINEFORM3 , action-value function INLINEFORM4 and target action-value function INLINEFORM5 , we sample a training dialogue from our data of human-human conversations (lines 1-4). Once a conversation starts, it is mapped to its corresponding sentence embedding representation, i.e. `sentence vectors' as described in Section SECREF26 (lines 5-6). Then a set of candidate responses is generated including (1) the true human response and (2) a set of randomly chosen responses (distractors). The candidate responses are clustered as described in the next section and the resulting actions are taken into account by the agent for action selection (lines 8-10). Once an action is chosen, it is conveyed to the environment, a reward is observed as described at the end of this section, and the agent's partner response is observed in order to update the dialogue history INLINEFORM0 (lines 11-14). In response to the update above, the new sentence embedding representation is extracted from INLINEFORM0 for updating the replay memory INLINEFORM1 with experience INLINEFORM2 (lines 15-16). Then a minibatch of experiences INLINEFORM0 is sampled from INLINEFORM1 for updating weights INLINEFORM2 according to the error derived from the difference between the target value INLINEFORM3 and the predicted value INLINEFORM4 (see lines 18 and 20), which is based on the following weight updates: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 is a learning rate hyperparameter. The target action-value function INLINEFORM0 and environment state INLINEFORM1 are updated accordingly (lines 21-22), and this iterative procedure continues until convergence. ChatDQN Learning [1] Initialise Deep Q-Networks with replay memory INLINEFORM0 , dialogue history INLINEFORM1 , action-value function INLINEFORM2 with random weights INLINEFORM3 , and target action-value functions INLINEFORM4 with INLINEFORM5 Initialise clustering model from training dialogue data Sample a training dialogue (human-human sentences) Append first sentence to dialogue history INLINEFORM6 INLINEFORM7 sentence embedding representation of INLINEFORM8 Generate noisy candidate response sentences INLINEFORM9 INLINEFORM10 Execute chosen clustered action INLINEFORM11 Observe human-likeness dialogue reward INLINEFORM12 Observe environment response (agent's partner) Append agent and environment responses to INLINEFORM13 INLINEFORM14 sentence embedding representation of INLINEFORM15 Append learning experience INLINEFORM16 to INLINEFORM17 Sample random minibatch INLINEFORM18 from INLINEFORM19 INLINEFORM20 Set INLINEFORM21 Gradient descent step on INLINEFORM22 with respect to INLINEFORM23 Reset INLINEFORM24 every INLINEFORM25 steps INLINEFORM26 INLINEFORM27 end of dialogue Reset dialogue history INLINEFORM28 convergence Sentence and Dialogue Clustering Actions in reinforcement learning chatbots correspond to sentences, and their size is infinite assuming all possible combinations of word sequences in a given language. This is especially true in the case of open-ended conversations that make use of large vocabularies, as opposed to task-oriented conversations that make use of smaller (restricted) vocabularies. A clustered action is a group of sentences sharing a similar or related meaning via sentence vectors derived from word embeddings BIBREF37 , BIBREF38 . 
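The target and gradient step of the ChatDQN algorithm above (lines 18 and 20) follow the standard DQN update; below is a simplified PyTorch sketch with hypothetical networks and minibatch tensors, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Q-learning step over a replay minibatch.

    batch: (states, actions, rewards, next_states, dones) tensors,
    where dones is 1.0 at the end of a dialogue and 0.0 otherwise.
    """
    states, actions, rewards, next_states, dones = batch

    # predicted value Q(s, a; theta) for the clustered actions actually taken
    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # target y = r at episode end, else r + gamma * max_a' Q(s', a'; theta^-)
    with torch.no_grad():
        next_max = target_net(next_states).max(dim=1).values
        target = rewards + gamma * next_max * (1.0 - dones)

    loss = F.mse_loss(q_pred, target)   # error between target and prediction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                    # gradient descent step on theta
    return loss.item()

# target_net weights are copied from q_net periodically, as in the reset step above.
```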
We represent sentences via their mean word vectors—similarly as in Deep Averaging Networks BIBREF39 —denoted as INLINEFORM0 , where INLINEFORM1 is the vector of coefficients of word INLINEFORM2 , INLINEFORM3 is the number of words in sentence INLINEFORM4 , and INLINEFORM5 is the embedding vector of sentence INLINEFORM6 . Similarly, a clustered dialogue is a group of conversations sharing a similar or related topic(s) via their clustered actions. We represent dialogues via their clustered actions. Dialogue clustering in this way can be seen as a two-stage approach, where sentences are clustered in the first step and dialogues are clustered in the second step. In our proposed approach, each DRL agent is trained on a cluster of dialogues. While there are multiple ways of selecting features for clustering and also multiple clustering algorithms, the following requirements arise for chatbots: (1) unlabelled data due to human-human dialogues in raw text (this makes it difficult to evaluate the goodness of clustering features and algorithms), and (2) scalability to clustering a large set of data points (especially in the case of sentences, which are substantially different between them due to their open-ended nature). Given a set of data points INLINEFORM0 and a similarity metric INLINEFORM1 , the task is to find a set of INLINEFORM2 groups with a clustering algorithm. In our case each data point INLINEFORM3 corresponds to a dialogue or a sentence. For scalability purposes, we use the K-Means++ algorithm BIBREF40 and the Euclidean distance INLINEFORM4 with INLINEFORM5 dimensions, and consider INLINEFORM6 as a hyperparameter – though other clustering algorithms and distance metrics can be used with our approach. In this way, a trained sentence clustering model assigns a cluster ID INLINEFORM7 to features INLINEFORM8 , where the number of actions (in a DRL agent) refers to the number of sentence clusters, i.e. INLINEFORM9 . Human-Likeness Rewards Specifying reward functions in reinforcement learning dialogue agents is often a difficult aspect. We propose to derive rewards from human-human dialogues by assigning positive values to contextualised responses seen in the data, and negative values to randomly chosen responses due to lacking coherence (also referred to as `non-human-like responses') – see example in Tables TABREF29 and TABREF30 . An episode or dialogue reward can thus be computed as INLINEFORM0 , where index INLINEFORM1 refers to the dialogue in focus, index INLINEFORM2 to the dialogue turn in focus, and INLINEFORM3 is given according to DISPLAYFORM0 Table TABREF29 shows an example of a well rewarded dialogue (without distortions) and Table TABREF30 shows an example of a poorly rewarded dialogue (with distortions). Other dialogues can exhibit similar dialogue rewards or something in between (ranging between INLINEFORM0 and INLINEFORM1 ), depending on the amount of distortions—the higher the amount of distortions the lower the dialogue reward. We employ the algorithm described in BIBREF36 for generating dialogues with varying amounts of distortions (i.e. different degrees of human-likeness), which we use for training and testing reward prediction models using supervised regression. Given our extended dataset INLINEFORM0 with (noisy) dialogue histories INLINEFORM1 represented with sequences of sentence vectors, the goal is to predict dialogue scores INLINEFORM2 as accurately as possible. 
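A minimal sketch of this human-likeness reward (the +1/−1 per-turn scheme described above, where a flag marks whether a turn kept the contextualised human response or a random substitute):

```python
def turn_reward(is_true_response: bool) -> int:
    """+1 for a contextualised (human-like) response seen in the data,
    -1 for a randomly substituted, non-human-like response."""
    return 1 if is_true_response else -1

def dialogue_reward(turn_flags) -> int:
    """Episode reward for dialogue i = sum of its per-turn rewards."""
    return sum(turn_reward(flag) for flag in turn_flags)

# Example: a 5-turn dialogue with two distorted turns scores 3 - 2 = 1.
print(dialogue_reward([True, True, False, True, False]))  # -> 1
```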
Alternative and automatically derived values between -1 and +1 are also possible but considered as future work. Section SECREF67 provides an evaluation of our reward function and its correlation with human judgement. We show that albeit simple, our reward function is highly correlated with our judges' ratings. Methodology Our proposed approach can be summarised through the following methodology: Collect or adopt a dataset of human-human dialogues (as in SECREF39 ) Design or adopt a suitable reward function (as in SECREF27 ) Train a neural regressor for predicting dialogue rewards (as in BIBREF36 ) Perform sentence and dialogue clustering in order to define the action set and training datasets (as in SECREF26 ) Train a Deep Reinforcement Learning agent per dialogue cluster (as described in SECREF15 ) Test the ensemble of agents together with the predictor of dialogue rewards (as in SECREF51 and SECREF67 ), and iterate from Step 1 if needed Deploy your trained chatbot subject to satisfactory results in Step 6 Data We used the Persona-Chat dataset, stats are shown in Table TABREF41 . Experimental Setting Our agents' states model dialogue histories as sequences of sentence vectors—using GloVe-based BIBREF38 mean word vectors BIBREF39 —with pre-trained embeddings. All our experiments use a 2-layer Gated Recurrent Unit (GRU) neural network BIBREF42 . At each time step INLINEFORM1 in the dialogue history, the first hidden layer generates a hidden state INLINEFORM2 as follows: DISPLAYFORM0 where INLINEFORM0 refers to a set of sentence vectors of the dialogue history, INLINEFORM1 is a reset gate that decides how much of the previous state to forget, INLINEFORM2 is an update gate that decides how much to update its activation, INLINEFORM3 is an internal state, INLINEFORM4 and INLINEFORM5 are the Sigmoid and hyperbolic Tangent functions (respectively), INLINEFORM6 and INLINEFORM7 are learnt weights, and INLINEFORM8 refers to the element-wise multiplication. If the equations above are summarised as INLINEFORM9 we get the following output action taking into account both hidden layers in our neural net: INLINEFORM10 , where INLINEFORM11 and INLINEFORM12 . While a small number of sentence clusters may result in actions being assigned to potentially the same cluster, a larger number of sentence clusters would mitigate the problem, but the larger the number of clusters the larger the computational cost—i.e. more parameters in the neural net. Table TABREF45 shows example outputs of our sentence clustering using 100 clusters on our training data. A manual inspection showed that while clustered sentences sometimes do not seem very similar, they made a lot of sense and they produced reasonable outputs. Our human evaluation (see Section SECREF67 ) confirms this. All our experiments use INLINEFORM0 due to a reasonable compromise between system performance and computational expense. The purpose of our second clustering model is to split our original training data into a group of data subsets, one subset for each ChatDQN agent in our ensemble. We explored different numbers of clusters (20, 50, 100) and noted that the larger the number of clusters the (substantially) higher the computational expense . We chose 100 clusters for our experiments due to higher average episode rewards of cluster-based agents than non-cluster-based ones. Figure FIGREF46 shows visualisations of our sentence and dialogue clustering using 100 clusters on our training data of 17.8K data points. 
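The two-stage clustering used to define actions and to split the training dialogues can be sketched as follows, assuming mean-word-vector sentence embeddings and K-Means++ as described above; representing a dialogue by the histogram of its sentence-cluster IDs is one simple choice and not necessarily the authors' exact feature.

```python
import numpy as np
from sklearn.cluster import KMeans

def sentence_vector(tokens, embeddings, dim=100):
    """Mean word vector of a sentence; dim must match the embedding size
    (used only for sentences with no in-vocabulary words)."""
    vecs = [embeddings[w] for w in tokens if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def two_stage_clustering(dialogues, embeddings, n_sentence_clusters=100, n_dialogue_clusters=100):
    """Stage 1: cluster all sentences (cluster IDs = clustered actions).
    Stage 2: cluster dialogues represented by their sentence-cluster histograms."""
    sentences = [s for d in dialogues for s in d]
    X = np.vstack([sentence_vector(s.split(), embeddings) for s in sentences])
    sent_km = KMeans(n_clusters=n_sentence_clusters, init="k-means++", n_init=10).fit(X)

    # one histogram of sentence-cluster IDs per dialogue
    H = np.zeros((len(dialogues), n_sentence_clusters))
    idx = 0
    for i, d in enumerate(dialogues):
        for _ in d:
            H[i, sent_km.labels_[idx]] += 1
            idx += 1
    dial_km = KMeans(n_clusters=n_dialogue_clusters, init="k-means++", n_init=10).fit(H)
    return sent_km, dial_km  # each DRL agent is then trained on one dialogue cluster
```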
A manual inspection was not as straightforward as analysing sentences due to the large variation of open-ended sets of sentences—see next section for further results. Automatic Evaluation We compared three DQN-based algorithms (DQN BIBREF30 , Double DQN BIBREF31 and Dueling DQN BIBREF32 ) in order to choose a baseline single agent and the learning algorithm for our ensemble of agents. The goal of each agent is to choose the human-generated sentences (actions) out of a set of candidate responses (20 available at each dialogue turn). Figure FIGREF52 (left) shows learning curves for these three learning algorithms, where we can observe that all agents indeed improve their performance (in terms of average episode reward) over time. It can also be observed that DQN and Double DQN performed similarly, and that Dueling DQN was outperformed by its counterpart algorithms. Due to its simplicity, we thus opted for using DQN as our main algorithm for the remainder of our experiments. Figure FIGREF52 (right) shows the performance of 100 ChatDQN agents (one per dialogue cluster), where we also observe that all agents improve their performance over time. It can be noted however that the achieved average episode reward of INLINEFORM0 -1 is much greater than that of the single agent corresponding to INLINEFORM1 -5.5. Additional experiments reported that the lower the number of clusters the lower the average episode reward during training. We thus opted for using 100 dialogue clusters in the remainder of our experiments. We analysed the performance of our agents further by using the test set of 999 totally unseen dialogues during training. We clustered the test set using our trained dialogue clustering model in order to assess the goodness of each agent in dialogues that were similar but not the same. The box plots in Figure FIGREF55 report the performance of our DRL agents according to the following metrics while tested on training data and test data: Avg. Episode Reward, Avg. F1 score, Avg. Recall@1, and Average Recall@5. One can quickly observe the striking performance gap between testing on training data vs. testing on test data. This can be interpreted as ChatDQN agents being able to learn well how to select actions on training data, but not being able to replicate the same behaviour on test data. This may not be surprising given that only 720 sentences (out of 263,862 training sentences and 15,586 test sentences) are shared between both sets, and it is presumably a realistic scenario seen that even humans rarely use the exact same sentences in multiple conversations. On the one hand our results also suggest that our training dataset is rather modest, and that a larger dataset is needed for improved performance. On the other hand, our results help us to raise the question `Can chitchat chatbots with reasonable performance be trained on modest datasets— i.e. with thousands of dialogues instead of millions?' If so, the generalisation abilities of chatbots need to be improved in future work. If not, large (or very large) datasets should receive more attention in future work on neural-based chatbots. 
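For reference, the ranking metrics reported in the box plots above (Recall@1 and Recall@5) can be computed as in the following sketch, where `scores` are an agent's scores over the 20 candidates of a turn and `true_idx` marks the human response.

```python
def recall_at_k(scores, true_idx, k):
    """1.0 if the true response is among the k highest-scoring candidates."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return float(true_idx in ranked[:k])

def average_recall_at_k(all_scores, all_true, k):
    return sum(recall_at_k(s, t, k) for s, t in zip(all_scores, all_true)) / len(all_true)

# Example: Recall@1 and Recall@5 over two turns with 20 candidates each.
turns = [([0.1] * 19 + [0.9], 19), ([0.9] + [0.1] * 19, 5)]
print(average_recall_at_k([s for s, _ in turns], [t for _, t in turns], 1))  # 0.5
print(average_recall_at_k([s for s, _ in turns], [t for _, t in turns], 5))  # 0.5
```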
Finally, we compared the performance of 5 dialogue agents on 999 dialogues with 20 candidate sentences at every dialogue turn: Upper Bound, which corresponds to the true human sentences in the test dataset; Lower Bound, which selects a sentence randomly from other dialogues than the one in focus; Ensemble, which selects a sentence using 100 agents trained on clustered dialogues as described in section SECREF4 – the agent in focus is chosen using a regressor as predictor of dialogue reward INLINEFORM0 using a similar neural net as the ChatDQN agents except for the final layer having one node and using Batch Normalisation BIBREF44 between hidden layers as in BIBREF36 ; Single Agent, which selects a sentence using a single ChatDQN agent trained on the whole training set; and Seq2Seq, which selects a sentence using a 2-layer LSTM recurrent neural net with attention – from the Parlai framework (http://www.parl.ai) BIBREF21 , trained using the same data as the agents above. Table TABREF66 shows the results of our automatic evaluation, where the ensemble of ChatDQN agents performed substantially better than the single agent and Seq2Seq model. Human Evaluation In addition to our results above, we carried out a human evaluation using 15 human judges. Each judge was given a form of consent for participating in the study, and was asked to rate 500 dialogues (100 core dialogues—from the test dataset—with 5 different agent responses, dialogues presented in random order) according to the following metrics: Fluency (Is the dialogue naturally articulated as written by a human?), Engagingness (Is the dialogue interesting and pleasant to read?), and Consistency (without contradictions across sentences). This resulted in INLINEFORM0 ratings from all judges. Figure FIGREF70 shows an example dialogue with ratings ranging from 1=strongly disagree to 5=strongly agree. Figure FIGREF71 shows average ratings (and corresponding error bars) per conversational agent and per metric. As expected, the Upper Bound agent achieved the best scores and the Lower Bound agent the lowest scores. The ranking of our agents in Table TABREF66 is in agreement with the human evaluation, where the Ensemble agent outperforms the Seq2Seq agent, and the latter outperforms Single Agent. The difference in performance between the Ensemble agent and the Seq2Seq agent is significant at INLINEFORM0 for the Fluency metric and at INLINEFORM1 for the other metrics (Engagingness and Consistency)—based on a two-tailed Wilcoxon Signed Rank Test. Furthermore, we analysed the predictive power of dialogue rewards, derived from our reward function, against human ratings on test data. This analysis revealed positive high correlations between them as shown in Figure FIGREF72 . These scatter plots show data points of test dialogues (the X-axes include Gaussian noise drawn from INLINEFORM0 for better visualisation), which obtained Pearson correlation scores between 0.90 and 0.91 for all metrics (Fluency, Engagingness and Consistency). This is in favour of our proposed reward function and supports its application to training open-ended dialogue agents. Conclusions and Future Work We present a novel approach for training Deep Reinforcement Learning (DRL) chatbots. It uses an ensemble of 100 DRL agents based on clustered dialogues, clustered actions, and rewards derived from human-human dialogues without any manual annotations. 
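The two statistical analyses described above can be reproduced with standard SciPy routines; the sketch below runs on synthetic rating arrays, not the authors' evaluation data.

```python
import numpy as np
from scipy.stats import pearsonr, wilcoxon

rng = np.random.default_rng(0)
predicted_rewards = rng.normal(size=100)                              # reward-function scores per dialogue
human_ratings = predicted_rewards + rng.normal(scale=0.3, size=100)   # e.g. fluency ratings

r, p = pearsonr(predicted_rewards, human_ratings)                     # correlation of rewards vs. ratings
print(f"Pearson r={r:.2f} (p={p:.3g})")

ensemble_ratings = rng.normal(loc=3.2, size=100)                      # paired per-dialogue ratings
seq2seq_ratings = rng.normal(loc=2.9, size=100)
stat, p = wilcoxon(ensemble_ratings, seq2seq_ratings)                 # two-sided signed-rank test
print(f"Wilcoxon statistic={stat:.1f} (p={p:.3g})")
```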
The task of the agents is to learn to choose human-like actions (sentences) out of candidate responses including human generated and randomly chosen sentences. Our ensemble trains specialised agents with particular dialogue strategies according to their dialogue clusters. At test time, the agent with the highest predicted reward is used during a dialogue. Experimental results using chitchat dialogue data report that DRL agents learn human-like dialogue policies when tested on training data, but their generalisation ability in a test set of unseen dialogues (with mostly unseen sentences, only 4.62% seen sentences to be precise) remains a key challenge for future research in this field. As part of our study, we found the following: Future work can investigate further the proposed learning approach for improved generalisation in test dialogues. Some research avenues are as follows.
derive rewards from human-human dialogues by assigning positive values to contextualised responses seen in the data, and negative values to randomly chosen responses due to lacking coherence
a83a351539fb0b6acb5bdee32323dd924f4fd1e7
a83a351539fb0b6acb5bdee32323dd924f4fd1e7_0
Q: How many agents do they ensemble over? Text: (identical to the article in the previous entry)
100
b8ffb81e74c1c1ad552051aca8741b0141ae6e97
b8ffb81e74c1c1ad552051aca8741b0141ae6e97_0
Q: What is the task of slot filling? Text: Introduction Coreference resolution systems group noun phrases (mentions) that refer to the same entity into the same chain. Mentions can be full names (e.g., John Miller), pronouns (e.g., he), demonstratives (e.g., this), comparatives (e.g., the first) or descriptions of the entity (e.g. the 40-year-old) BIBREF0 . Although coreference resolution has been a research focus for several years, systems are still far away from being perfect. Nevertheless, there are many tasks in natural language processing (NLP) which would benefit from coreference information, such as information extraction, question answering or summarization BIBREF1 . In BIBREF2 , for example, we showed that coreference information can also be incorporated into word embedding training. In general, coreference resolution systems can be used as a pre-processing step or as a part of a pipeline of different modules. Slot Filling is an information extraction task which has become popular in the last years BIBREF3 . It is a shared task organized by the Text Analysis Conference (TAC). The task aims at extracting information about persons, organizations or geo-political entities from a large collection of news, web and discussion forum documents. An example is “Steve Jobs” for the slot “X founded Apple”. Thinking of a text passage like “Steve Jobs was an American businessman. In 1976, he co-founded Apple”, it is clear that coreference resolution can play an important role for finding the correct slot filler value. In this study, we investigate how coreference resolution could help to improve performance on slot filling and which challenges exist. Furthermore, we present how we pre-processed the TAC source corpus with a coreference resolution system in order to be able to run the slot filling system more efficiently. In addition to this paper, we also publish the results of this pre-processing since it required long computation time and much resources. Related work The slot filling task has been organized since 2009. The top ranked systems of the last years achieved F1 scores of 37.28 (2013) BIBREF4 , 36.77 (2014) BIBREF5 and 31.48 (2015). In 2015, the task has been merged with the Cold Start track of the same conference. This led to several changes in the number of relations, the evaluation documents and the outputs expected from the systems BIBREF6 . Previous studies and error analyses have shown that coreference resolution is an important component to increase the recall of slot filling systems BIBREF7 , BIBREF8 , BIBREF9 . analysis2012 identified coreference failures as the second most frequent error source of slot filling systems (after inference failures). In most cases, nominal anaphors were not resolved correctly. analysisRecall investigated possible causes of recall loss in a slot filling system. They described that coreference resolution provided higher recall but might be inefficient since it requires a lot of time and resources. Moreover, they argued that the overall results of a slot filling system might be better without coreference resolution since it can have a negative impact on precision. In contrast, our experiments in this study show that the increased number of true positives when using coreference resolution has a much higher impact on the final results. For coping with the problem of time-consuming coreference resolution, we prepared and publish KBPchains, a coreference resource for slot filling. 
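Coreference chains in a resource of this kind can be represented as document-offset spans; the layout below is a hypothetical illustration (the released format may differ), together with the kind of per-corpus statistics reported later in Table 2.

```python
from collections import namedtuple

# Hypothetical layout: one mention = (document id, character start, character end, surface text)
Mention = namedtuple("Mention", "doc_id start end text")

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them", "his", "hers", "its", "their"}

def chain_statistics(chains):
    """chains: list of coreference chains, each a list of Mention spans."""
    return {
        "chains": len(chains),
        "singletons": sum(1 for c in chains if len(c) == 1),
        "pronoun_mentions": sum(1 for c in chains for m in c if m.text.lower() in PRONOUNS),
        "identical_mention_chains": sum(1 for c in chains if len({m.text.lower() for m in c}) == 1),
    }

example = [
    [Mention("doc1", 0, 10, "Steve Jobs"), Mention("doc1", 45, 47, "he")],
    [Mention("doc1", 60, 65, "Apple")],
]
print(chain_statistics(example))
# {'chains': 2, 'singletons': 1, 'pronoun_mentions': 1, 'identical_mention_chains': 1}
```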
Slot filling task The main idea of slot filling is to extend a knowledge base by extracting pre-defined relations between (named) entities from text data. Systems are provided with a large collection of text documents and a query file including entities and the relations to find in the text. As output, they have to provide the second argument for each relation. For entity “Apple” and relation “org:founded_by”, for example, the systems need to extract “Steve Jobs”, “Steve Wozniak” and “Ronald Wayne” along with text passages for justification. This task combines several NLP challenges like information retrieval, information extraction, relation classification and knowledge inference. Until 2014, the slot filling shared task included 41 relations (25 for persons and 16 for organizations) BIBREF3 . Since 2015, these relations have been extended to all possible inverse relations which introduced a new query entity type (geo-political entity) and augmented the set of relations to 64 (27 for persons, 20 for organizations and 17 for geo-political entities) BIBREF6 . Table 1 provides exemplary relations for the different entity types. The input for a slot filling system is an xml query containing the name and type of the entity, an exemplary occurence of the entity in the document collection and the slot to be filled. The expected output of the system contains, i.a., a provenance for the slot filler in the document collection, the slot filler itself, the type of the filler ( $\in {PER, ORG, GPE, STRING}$ ), its offsets in the document collection, as well as a confidence value of the system. The document collection from which the slot fillers should be extracted is quite large: until 2014, it consisted of about 2.1 million documents, in 2015 the number was reduced to about 50,000 documents. The documents comprise newswire, web and discussion forum texts. Therefore, the slot filling task is more than relation extraction for pre-defined relations: It also includes challenges like information retrieval and coping with different genres. Most slot filling systems are a pipeline of different components, such as query expansion, information retrieval, candidate extraction, candidate classification and postprocessing. Figure 1 depicts a typical system. We performed a detailed analysis of the errors of these components and found that one of the most important sources of error is failure of coreference resolution in the candidate extraction step. Coreference resolution for slot filling In our study, we have identified two main reasons why coreference resolution can improve slot filling performance. The first reason is that both arguments of a relation can be pronouns referring to the entity or filler in question. Consider the relation “per:parents” and the sentence “Bill is the father of Jane.” Both entities “Bill” and “Jane” might have been mentioned in sentences before and could now be replaced by pronouns: “He is the father of Jane”, “Bill is her father” or “He is her father”. If a slot filling system only extracts sentences with appearances of the full name of a person, it might miss many relevant sentences which can reduce the recall of the whole system drastically. As analysisRecall pointed out, the recall losses cannot be recovered by subsequent pipeline modules. 
The second reason is that coreference resolution can provide slot fillers “for free”: if a phrase like “The Hawaii-born” is coreferent to the entity in question, it not only provides an additional sentence with information about the entity but also directly yields the location of birth (without the need for classification). Similar phrases can provide the age, a title or the religion of a person, or the location of the headquarters of an organization. Coreference resource As motivated above, coreference information is a very important resource for participants of the slot filling task or related knowledge base population tasks on the same documents. Since we found that the coreference resolution component is one of the bottlenecks that considerably slow down our slot filling pipeline, we have pre-processed the TAC source corpus by tagging its documents using Stanford CoreNLP BIBREF10 . We call this resource of coreference chains KBPchains and share it (in the form of document-offset spans) on our website. Although CoreNLP is publicly available, KBPchains will save researchers much time and many resources (cf. analysisRecall, who mentioned the need for efficient coreference resolution when processing the large slot filling corpora). Table 2 lists statistics about the extracted coreference chains and their mentions. In addition to the minimum, maximum, average and median numbers of chains per document, mentions per chain and words per mention, we also report the number of mentions which are pronouns, the number of singletons (chains consisting of only one mention) and the number of chains with only identical mentions. Analysis of coreference resolution errors Coreference resolution systems produce acceptable results but are still far from perfect. In an analysis of the results of Stanford CoreNLP on the TAC source corpus in the context of our slot filling system, we found the following flaws to be most prominent: wrongly linked pronoun chains, unlinked pronoun chains, and no recognition of coreferent phrases like “the 42-year-old”, “the author” or “the California-based company”. In the following, we describe the effect of these failures on the slot filling system. Wrongly linked pronoun chains. If a pronoun chain is wrongly linked to the entity in question, all sentences with pronouns of this chain will be extracted as sentences containing information about the entity. This increases the number of falsely extracted sentences and, as a result, also the number of possible filler candidates. All those false positive filler candidates will be passed to the candidate evaluation module and can easily lead to a lower precision in the final output (either because the candidate evaluation also makes a wrong decision, or because – in the worst case – the relation in question holds between the pronoun and the filler candidate but not between the entity in question and the filler candidate). Unlinked pronoun chains. If a coreference chain consists of only pronouns without any entity mention, the slot filling system cannot decide to which entity it belongs and will omit it. If the pronouns of the chain are coreferent to the entity in question, the chance that the slot filling system misses information that is relevant to the slot in question is quite high. As a result, the recall of the end-to-end system will be reduced. A solution to this problem could be a post-processing of these unlinked pronoun chains, a challenge we will investigate in the future. No recognition of nominal anaphors. 
Phrases like “the 42-year-old” or “the California-based company” may occur directly after a sentence mentioning the entity in question but are often not recognized as being coreferent to it. However, if they do refer to this entity, they first contain potentially relevant information themselves (such as the age of a person). Second, the sentence in which they appear could mention additional information about the entity. Omitting these sentences and these phrases can therefore reduce the recall of the slot filling system. In our system, we cope with these cases by explicitly looking for such phrases in the sentence following a mention of the entity in question. Additional findings. We also performed a manual analysis of the extracted coreference chains in ten randomly chosen documents. Experiments with end-to-end system In order to investigate the impact of coreference resolution on slot filling empirically, we perform end-to-end experiments on the TAC evaluation data from 2015. Our system with coreference resolution was one of the top-performing systems in the official 2015 evaluations BIBREF11 . It follows the pipeline shown in Figure 1 . For a more detailed description of its components, see BIBREF11 . Table 3 shows its results with (+) and without (-) coreference resolution in the candidate extraction component. The number of true positives is reduced considerably (from 361 to 321) when the system does not use coreference information. The number of false positives is also lower, but the final results show that the impact of the number of true positives is larger, since it affects both precision and recall: the F1 score drops by more than 6 points when omitting coreference resolution. To conclude, in order to provide the classification and postprocessing modules with a recall as high as possible, coreference resolution is a crucial part of the system. Despite the errors identified in Section "Analysis of coreference resolution errors" , an automatic coreference system still performs well enough to improve performance on slot filling. Conclusion In this work, we analyzed the impact of coreference resolution on the NLP task of slot filling. We showed that coreference information improves slot filling system performance and outlined the most important challenges we discovered in an analysis of coreference resolution errors. Since the TAC source corpus is very large, we will publish KBPchains, a resource containing the coreference chains which we have extracted automatically. Acknowledgments Heike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing, and this research is supported by this fellowship. This work was also supported by DFG (grant SCHU 2246/4-2).
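The error analysis above mentions a heuristic that explicitly looks for nominal anaphors such as “the 42-year-old” in the sentence following an entity mention. A possible shape of such a heuristic is sketched below; the patterns and the one-sentence window are illustrative choices of mine, not the exact rules of the described system.

```python
import re

# Hypothetical patterns for nominal anaphors that often carry slot-relevant facts
# (age, profession, headquarters location, ...).
ANAPHOR_PATTERNS = [
    re.compile(r"\bthe \d{1,3}-year-old\b", re.IGNORECASE),                       # age
    re.compile(r"\b[Tt]he [A-Z][a-z]+-(?:born|based) (?:company|firm|man|woman)\b"),
    re.compile(r"\bthe (?:author|singer|actor|chairman|company)\b", re.IGNORECASE),
]

def nominal_anaphors(sentences, entity):
    """Yield (sentence_index, anaphor_text) for anaphor-like phrases that appear
    in the sentence directly following a sentence mentioning `entity`."""
    for i, sentence in enumerate(sentences[:-1]):
        if entity not in sentence:
            continue
        following = sentences[i + 1]
        for pattern in ANAPHOR_PATTERNS:
            for match in pattern.finditer(following):
                yield i + 1, match.group(0)

sentences = [
    "Steve Jobs presented the new device on Monday.",
    "The 56-year-old also announced a software update.",
]
print(list(nominal_anaphors(sentences, "Steve Jobs")))
# [(1, 'The 56-year-old')]
```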
The task aims at extracting information about persons, organizations or geo-political entities from a large collection of news, web and discussion forum documents.
21f615bf19253fc27ea838012bc088f4d10cdafd
21f615bf19253fc27ea838012bc088f4d10cdafd_0
Q: Do they report results only on English data? Text: Introduction Recent years have seen the proliferation of deceptive information online. With the increasing necessity to validate the information from the Internet, automatic fact checking has emerged as an important research topic. It is at the core of multiple applications, e.g., discovery of fake news, rumor detection in social media, information verification in question answering systems, detection of information manipulation agents, and assistive technologies for investigative journalism. At the same time, it touches many aspects, such as credibility of users and sources, information veracity, information verification, and linguistic aspects of deceptive language. In this paper, we present an approach to fact-checking with the following design principles: (i) generality, (ii) robustness, (iii) simplicity, (iv) reusability, and (v) strong machine learning modeling. Indeed, the system makes very few assumptions about the task, and looks for supportive information directly on the Web. Our system works fully automatically. It does not use any heavy feature engineering and can be easily used in combination with task-specific approaches as well, as a core subsystem. Finally, it combines the representational strength of recurrent neural networks with kernel-based classification. The system starts with a claim to verify. First, we automatically convert the claim into a query, which we execute against a search engine in order to obtain a list of potentially relevant documents. Then, we take both the snippets and the most relevant sentences in the full text of these Web documents, and we compare them to the claim. The features we use are dense representations of the claim, of the snippets and of related sentences from the Web pages, which we automatically train for the task using Long Short-Term Memory networks (LSTMs). We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. We feed all these representations as features, together with pairwise similarities, into a Support Vector Machine (SVM) classifier using an RBF kernel to classify the claim as True or False. Figure FIGREF1 presents a real example from one of the datasets we experiment with. The left-hand side of the figure contains a True example, while the right-hand side shows a False one. We show the original claims from snopes.com, the query generated by our system, and the information retrieved from the Web (most relevant snippet and text selection from the web page). The veracity of the claim can be inferred from the textual information. Our contributions can be summarized as follows: The remainder of this paper is organized as follows. Section SECREF2 introduces our method for fact checking claims using external sources. Section SECREF3 presents our experiments and discusses the results. Section SECREF4 describes an application of our approach to a different dataset and a slightly different task: fact checking in community question answering forums. Section SECREF5 presents related work. Finally, Section SECREF6 concludes and suggests some possible directions for future work. The Fact-Checking System Given a claim, our system searches for support information on the Web in order to verify whether the claim is likely to be true. The three steps in this process are (i) external support retrieval, (ii) text representation, and (iii) veracity prediction. 
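As a rough sketch of step (i), the snippet below builds a short keyword query from a claim by keeping content words ranked by idf and appending named entities, in the spirit of the query generation procedure detailed in the next section. spaCy stands in here for the POS tagger and for AlchemyAPI's entity recognizer, and the idf table is a toy placeholder (the paper computes idf on Wikipedia and Gigaword); both substitutions are assumptions made purely for illustration.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumption: this small English model is installed

# Stand-in idf scores; the real system computes idf on Wikipedia + Gigaword.
IDF = {"roosevelt": 9.1, "polio": 8.7, "diagnosed": 6.2, "president": 4.0, "was": 0.3}

def claim_to_query(claim, max_tokens=10):
    doc = nlp(claim)
    # Keep only nouns (incl. proper nouns), verbs and adjectives, ranked by idf.
    content = [t.text for t in doc if t.pos_ in {"NOUN", "PROPN", "VERB", "ADJ"}]
    content.sort(key=lambda w: IDF.get(w.lower(), 1.0), reverse=True)
    # Append named entities so multiword names survive intact.
    entities = [ent.text for ent in doc.ents]
    query = []
    for token in content + entities:
        if token not in query:
            query.append(token)
    return " ".join(query[:max_tokens])

print(claim_to_query("President Roosevelt was diagnosed with polio."))
```

The iterative relaxation step, dropping trailing tokens when a query returns no results, is omitted from this sketch.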
External Support Retrieval This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 . As we aim to develop a general-purpose fact checking system, we use an approach for query generation that does not incorporate any features that are specific to claim verification (e.g., no temporal indicators). We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable. Finally, if our query has returned no results, we iteratively relax it by dropping the final tokens one at a time. Text Representation Next, we build the representation of a claim and the corresponding snippets and Web pages. First, we calculate three similarities (a) between the claim and a snippet, or (b) between the claim and a Web page: (i) cosine with tf-idf, (ii) cosine over embeddings, and (iii) containment BIBREF1 . We calculate the embedding of a text as the average of the embeddings of its words; for this, we use pre-trained embeddings from GloVe BIBREF2 . Moreover, as a Web page can be long, we first split it into a set of rolling sentence triplets, then we calculate the similarities between the claim and each triplet, and we take the highest scoring triplet. Finally, as we have up to ten hits from the search engine, we take the maximum and also the average of the three similarities over the snippets and over the Web pages. We further use as features the embeddings of the claim, of the best-scoring snippet, and of the best-scoring sentence triplet from a Web page. We calculate these embeddings (i) as the average of the embeddings of the words in the text, and also (ii) using LSTM encodings, which we train for the task as part of a deep neural network (NN). We also use a task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN. Veracity Prediction Next, we build classifiers: neural network (NN), support vector machines (SVM), and a combination thereof (SVM+NN). The architecture of our NN is shown on top of Figure FIGREF7 . We have five LSTM sub-networks, one for each of the text sources from two search engines: Claim, Google Web page, Google snippet, Bing Web page, and Bing snippet. The claim is fed into the neural network as-is. As we can have multiple snippets, we only use the best-matching one as described above. Similarly, we only use a single best-matching triple of consecutive sentences from a Web page. We further feed the network with the similarity features described above. 
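A minimal sketch of the similarity features from the Text Representation step above: cosine over tf-idf vectors, cosine over averaged word embeddings, and containment, computed between the claim and every rolling triplet of consecutive sentences from a Web page, keeping the best-scoring triplet. The containment definition (token overlap normalized by claim length) and the selection by summed score are my simplifications; the paper additionally takes the maximum and the average of each similarity over all search hits.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def avg_embedding(tokens, vectors, dim=50):
    """Average pre-trained word vectors (e.g., GloVe); zeros if nothing is covered."""
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def containment(claim_tokens, text_tokens):
    # Assumed definition: token overlap normalized by the claim length.
    claim_set = set(claim_tokens)
    return len(claim_set & set(text_tokens)) / max(len(claim_set), 1)

def best_triplet_features(claim, page_sentences, vectors):
    """Score the claim against every rolling triplet of consecutive sentences
    and return the (tfidf_cos, emb_cos, containment) features of the best one."""
    triplets = [" ".join(page_sentences[i:i + 3])
                for i in range(max(len(page_sentences) - 2, 1))]
    tfidf = TfidfVectorizer().fit([claim] + triplets)
    claim_vec = tfidf.transform([claim])
    claim_emb = avg_embedding(claim.lower().split(), vectors)

    best = None
    for triplet in triplets:
        cos_tfidf = cosine_similarity(claim_vec, tfidf.transform([triplet]))[0, 0]
        cos_emb = float(cosine_similarity(
            claim_emb.reshape(1, -1),
            avg_embedding(triplet.lower().split(), vectors).reshape(1, -1))[0, 0])
        cont = containment(claim.lower().split(), triplet.lower().split())
        features = (cos_tfidf, cos_emb, cont)
        if best is None or sum(features) > sum(best):
            best = features
    return best

vectors = {"apple": np.ones(50), "founded": np.ones(50) * 0.5}   # toy embeddings
print(best_triplet_features(
    "Steve Jobs founded Apple",
    ["Jobs started Apple in a garage.",
     "He founded the company in 1976.",
     "Apple later became very large."],
    vectors))
```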
All these vectors are concatenated and fully connected to a much more compact hidden layer that captures the task-specific embeddings. This layer is connected to a softmax output unit to classify the claim as true or false. The bottom of Figure FIGREF7 represents the generic architecture of each of the LSTM components. The input text is transformed into a sequence of word embeddings, which is then passed to the bidirectional LSTM layer to obtain a representation for the full sequence. Our second classifier is an SVM with an RBF kernel. The input is the same as for the NN: word embeddings and similarities. However, the word embeddings this time are calculated by averaging rather than using a bi-LSTM. Finally, we combine the SVM with the NN by augmenting the input to the SVM with the values of the units in the hidden layer. This represents a task-specific embedding of the input example, and in our experiments it turned out to be quite helpful. Unlike in the SVM only model, this time we use the bi-LSTM embeddings as an input to the SVM. Ultimately, this yields a combination of deep learning and task-specific embeddings with RBF kernels. Dataset We used part of the rumor detection dataset created by BIBREF3 . While they analyzed a claim based on a set of potentially related tweets, we focus on the claim itself and on the use of supporting information from the Web. The dataset consists of 992 sets of tweets, 778 of which are generated starting from a claim on snopes.com, which ma2016detecting converted into a query. Another 214 sets of tweets are tweet clusters created by other researchers BIBREF4 , BIBREF5 with no claim behind them. ma2016detecting ignored the claim and did not release it as part of their dataset. We managed to find the original claim for 761 out of the 778 snopes.com-based clusters. Our final dataset consists of 761 claims from snopes.com, which span various domains including politics, local news, and fun facts. Each of the claims is labeled as factually true (34%) or as a false rumor (66%). We further split the data into 509 for training, 132 for development, and 120 for testing. As the original split for the dataset was not publicly available, and as we only used a subset of their data, we had to make a new training and testing split. Note that we ignored the tweets, as we wanted to focus on a complementary source of information: the Web. Moreover, ma2016detecting used manual queries, while we use a fully automatic method. Finally, we augmented the dataset with Web-retrieved snippets, Web pages, and sentence triplets from Web pages. Experimental Setup We tuned the architecture (i.e., the number of layers and their size) and the hyper-parameters of the neural network on the development dataset. The best configuration uses a bidirectional LSTM with 25 units. It further uses a RMSprop optimizer with 0.001 initial learning rate, L2 regularization with INLINEFORM0 =0.1, and 0.5 dropout after the LSTM layers. The size of the hidden layer is 60 with tanh activations. We use a batch of 32 and we train for 400 epochs. For the SVM model, we merged the development and the training dataset, and we then ran a 5-fold cross-validation with grid-search, looking for the best kernel and its parameters. We ended up selecting an RBF kernel with INLINEFORM0 and INLINEFORM1 0.01. Evaluation Metrics The evaluation metrics we use are P (precision), R (recall), and F INLINEFORM0 , which we calculate with respect to the false and to the true claims. 
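Returning to the veracity prediction step described above, the sketch below mirrors the general shape of the model: bidirectional LSTM branches (only two here, for the claim and the best snippet, instead of the five in the paper) concatenated with the similarity features into a compact hidden layer, whose activations are then fed, together with the similarities, to an RBF-kernel SVM. Layer sizes, hyper-parameters, the toy integer encoding of the text and the random training data are placeholders rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.svm import SVC

VOCAB, EMB_DIM, MAX_LEN, SIM_FEATS = 5000, 100, 30, 6

def bilstm_branch(name):
    inp = layers.Input(shape=(MAX_LEN,), name=name)
    x = layers.Embedding(VOCAB, EMB_DIM, mask_zero=True)(inp)
    x = layers.Bidirectional(layers.LSTM(25))(x)
    return inp, x

claim_in, claim_enc = bilstm_branch("claim")
snip_in, snip_enc = bilstm_branch("snippet")
sim_in = layers.Input(shape=(SIM_FEATS,), name="similarities")

hidden = layers.Dense(60, activation="tanh", name="task_embedding")(
    layers.Concatenate()([claim_enc, snip_enc, sim_in]))
output = layers.Dense(2, activation="softmax")(layers.Dropout(0.5)(hidden))

nn = Model([claim_in, snip_in, sim_in], output)
nn.compile(optimizer=tf.keras.optimizers.RMSprop(1e-3),
           loss="sparse_categorical_crossentropy")

# Toy data so the script runs end to end.
n = 64
X = [np.random.randint(1, VOCAB, size=(n, MAX_LEN)),
     np.random.randint(1, VOCAB, size=(n, MAX_LEN)),
     np.random.rand(n, SIM_FEATS)]
y = np.random.randint(0, 2, size=n)
nn.fit(X, y, epochs=2, batch_size=32, verbose=0)

# SVM+NN: augment the SVM input with the task-specific hidden-layer embedding.
embedder = Model(nn.inputs, nn.get_layer("task_embedding").output)
svm_features = np.hstack([X[2], embedder.predict(X, verbose=0)])
svm = SVC(kernel="rbf", C=10, gamma=0.01).fit(svm_features, y)   # C, gamma: placeholders
print(svm.score(svm_features, y))
```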
We further report AvgR (macro-average recall), AvgF INLINEFORM1 (macro-average F INLINEFORM2 ), and Acc (accuracy). Results Table TABREF14 shows the results on the test dataset. We can see that both the NN and the SVM models improve over the majority class baseline (all false rumors) by a sizable margin. Moreover, the NN consistently outperforms the SVM by a margin on all measures. Yet, adding the task-specific embeddings from the NN as features of the SVM yields overall improvements over both the SVM and the NN in terms of avgR, avgF INLINEFORM0 , and accuracy. We can see that both the SVM and the NN overpredict the majority class (false claims); however, the combined SVM+NN model is quite balanced between the two classes. Table TABREF22 compares the performance of the SVM with and without task-specific embeddings from the NN, when training on Web pages vs. snippets, returned by Google vs. Bing vs. both. The NN embeddings consistently help the SVM in all cases. Moreover, while the baseline SVM using snippets is slightly better than when using Web pages, there is almost no difference between snippets vs. Web pages when NN embeddings are added to the basic SVM. Finally, gathering external support from either Google or Bing makes practically no difference, and using the results from both together does not yield much further improvement. Thus, (i) the search engines already do a good job at generating relevant snippets, and one does not need to go and download the full Web pages, and (ii) the choice of a given search engine is not an important factor. These are good news for the practicality of our approach. Unfortunately, direct comparison with respect to BIBREF3 is not possible. First, we only use a subset of their examples: 761 out of 993 (see Section SECREF17 ), and we also have a different class distribution. More importantly, they have a very different formulation of the task: for them, the claim is not available as input (in fact, there has never been a claim for 21% of their examples); rather an example consists of a set of tweets retrieved using manually written queries. In contrast, our system is fully automatic and does not use tweets at all. Furthermore, their most important information source is the change in tweets volume over time, which we cannot use. Still, our results are competitive to theirs when they do not use temporal features. To put the results in perspective, we can further try to make an indirect comparison to the very recent paper by BIBREF6 . They also present a model to classify true vs. false claims extracted from snopes.com, by using information extracted from the Web. Their formulation of the task is the same as ours, but our corpora and label distributions are not the same, which makes a direct comparison impossible. Still, we can see that regarding overall classification accuracy they improve a baseline from 73.7% to 84.02% with their best model, i.e., a 39.2% relative error reduction. In our case, we go from 66.7% to 80.0%, i.e., an almost identical 39.9% error reduction. These results are very encouraging, especially given the fact that our model is much simpler than theirs regarding the sources of information used (they model the stance of the text, the reliability of the sources, the language style of the articles, and the temporal footprint). Application to cQA Next, we tested the generality of our approach by applying it to a different setup: fact-checking the answers in community question answering (cQA) forums. 
As this is a new problem, for which no dataset exists, we created one. We augmented with factuality annotations the cQA dataset from SemEval-2016 Task 3 (CQA-QA-2016) BIBREF7 . Overall, we annotated 249 question–answer, or INLINEFORM0 - INLINEFORM1 , pairs (from 71 threads): 128 factually true and 121 factually false answers. Each question in CQA-QA-2016 has a subject, a body, and meta information: ID, category (e.g., Education, and Moving to Qatar), date and time of posting, user name and ID. We selected only the factual questions such as “What is Ooredoo customer service number?”, thus filtering out all (i) socializing, e.g., “What was your first car?”, (ii) requests for opinion/advice/guidance, e.g., “Which is the best bank around??”, and (iii) questions containing multiple sub-questions, e.g., “Is there a land route from Doha to Abudhabi. If yes; how is the road and how long is the journey?” Next, we annotated for veracity the answers to the retained questions. Note that in CQA-QA-2016, each answer has a subject, a body, meta information (answer ID, user name and ID), and a judgment about how well it addresses the question of its thread: Good vs. Potentially Useful vs. Bad . We only annotated the Good answers. We further discarded answers whose factuality was very time-sensitive (e.g., “It is Friday tomorrow.”, “It was raining last week.”), or for which the annotators were unsure. We targeted very high quality, and thus we did not use crowdsourcing for the annotation, as pilot annotations showed that the task was very difficult and that it was not possible to guarantee that Turkers would do all the necessary verification, e.g., gathering evidence from trusted sources. Instead, all examples were first annotated independently by four annotators, and then each example was discussed in detail to come up with a final label. We ended up with 249 Good answers to 71 different questions, which we annotated for factuality: 128 Positive and 121 Negative examples. See Table TABREF26 for details. We further split our dataset into 185 INLINEFORM0 – INLINEFORM1 pairs for training, 31 for development, and 32 for testing, preserving the general positive:negative ratio, and making sure that the questions for the INLINEFORM2 – INLINEFORM3 pairs did not overlap between the splits. Figure FIGREF23 presents an excerpt of an example from the dataset, with one question and three answers selected from a longer thread. Answer INLINEFORM4 contains false information, while INLINEFORM5 and INLINEFORM6 are true, as can be checked on an official governmental website. We had to fit our system for this problem, as here we do not have claims, but a question and an answer. So, we constructed the query from the concatenation of INLINEFORM0 and INLINEFORM1 . Moreover, as Google and Bing performed similarly, we only report results using Google. We limited our run to snippets only, as we have found them rich enough above (see Section SECREF3 ). Also, we had a list of reputed and Qatar-related sources for the domain, and we limited our results to these sources only. This time, we had more options to calculate similarities compared to the rumors dataset: we can compare against INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 – INLINEFORM5 ; we chose to go with the latter. For the LSTM representations, we use both the question and the answer. Table TABREF27 shows the results on the cQA dataset. Once again, our models outperformed all baselines by a margin. 
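A small sketch of the cQA adaptation described above: the query is built from the concatenation of the question and the answer, and retrieved results are kept only if their domain appears on a whitelist of reputable, Qatar-related sources. The whitelist entries, the result format and the truncation-based query shortening are invented for illustration; the actual source list is not reproduced in the paper excerpt.

```python
from urllib.parse import urlparse

# Hypothetical whitelist; the actual list of reputable Qatar-related sources
# used in the paper is not reproduced here.
TRUSTED_DOMAINS = {"www.gov.qa", "www.moi.gov.qa", "dohanews.co"}

def cqa_query(question_subject, question_body, answer, max_tokens=10):
    """Build the search query from the concatenation of question and answer."""
    text = " ".join([question_subject, question_body, answer])
    # In the real system the tokens are ranked by idf and filtered by POS;
    # simple truncation stands in for that step here.
    return " ".join(text.split()[:max_tokens])

def filter_results(results):
    """Keep only search results whose URL belongs to a trusted domain."""
    return [r for r in results if urlparse(r["url"]).netloc in TRUSTED_DOMAINS]

results = [
    {"url": "https://www.moi.gov.qa/site/english", "snippet": "Exit permit rules ..."},
    {"url": "https://random-blog.example.com/post", "snippet": "I think you need ..."},
]
print(filter_results(results))
```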
This time, the predictions of all models are balanced between the two classes, which is probably due to the dataset being well balanced in general. The SVM model performs better than the NN by itself. This is due to the fact that the cQA dataset is significantly smaller than the rumor detection one. Thus, the neural network could not be trained effectively by itself. Nevertheless, the task-specific representations were useful and combining them with the SVM model yielded consistent improvements on all the measures once again. Related Work Journalists, online users, and researchers are well aware of the proliferation of false information on the Web, and topics such as information credibility and fact checking are becoming increasingly important as research directions. For example, there was a recent 2016 special issue of the ACM Transactions on Information Systems journal on Trust and Veracity of Information in Social Media BIBREF9 , there was a SemEval-2017 shared task on Rumor Detection BIBREF10 , and there is an upcoming lab at CLEF-2018 on Automatic Identification and Verification of Claims in Political Debates BIBREF11 . The credibility of contents on the Web has been questioned by researches for a long time. While in the early days the main research focus was on online news portals BIBREF12 , BIBREF13 , BIBREF14 , the interest has eventually shifted towards social media BIBREF4 , BIBREF15 , BIBREF6 , BIBREF16 , which are abundant in sophisticated malicious users such as opinion manipulation trolls, paid BIBREF17 or just perceived BIBREF18 , BIBREF19 , sockpuppets BIBREF20 , Internet water army BIBREF21 , and seminar users BIBREF22 . For instance, BIBREF23 studied the credibility of Twitter accounts (as opposed to tweet posts), and found that both the topical content of information sources and social network structure affect source credibility. Other work, closer to ours, aims at addressing credibility assessment of rumors on Twitter as a problem of finding false information about a newsworthy event BIBREF4 . This model considers user reputation, writing style, and various time-based features, among others. Other efforts have focused on news communities. For example, several truth discovery algorithms are combined in an ensemble method for veracity estimation in the VERA system BIBREF24 . They proposed a platform for end-to-end truth discovery from the Web: extracting unstructured information from multiple sources, combining information about single claims, running an ensemble of algorithms, and visualizing and explaining the results. They also explore two different real-world application scenarios for their system: fact checking for crisis situations and evaluation of trustworthiness of a rumor. However, the input to their model is structured data, while here we are interested in unstructured text as input. Similarly, the task defined by BIBREF25 combines three objectives: assessing the credibility of a set of posted articles, estimating the trustworthiness of sources, and predicting user's expertise. They considered a manifold of features characterizing language, topics and Web-specific statistics (e.g., review ratings) on top of a continuous conditional random fields model. In follow-up work, BIBREF26 proposed a model to support or refute claims from snopes.com and Wikipedia by considering supporting information gathered from the Web. They used the same task formulation for claims as we do, but different datasets. 
In yet another follow-up work, Popat:2017:TLE:3041021.3055133 proposed a complex model that considers stance, source reliability, language style, and temporal information. Our approach to fact checking is related: we verify facts on the Web. However, we use a much simpler and feature-light system, and a different machine learning model. Yet, our model performs very similarly to this latter work (even though a direct comparison is not possible as the datasets differ), which is a remarkable achievement given the fact that we consider less knowledge sources, we have a conceptually simpler model, and we have six times less training data than Popat:2017:TLE:3041021.3055133. Another important research direction is on using tweets and temporal information for checking the factuality of rumors. For example, BIBREF27 used temporal patterns of rumor dynamics to detect false rumors and to predict their frequency. BIBREF27 focused on detecting false rumors in Twitter using time series. They used the change of social context features over a rumor's life cycle in order to detect rumors at an early stage after they were broadcast. A more general approach for detecting rumors is explored by BIBREF3 , who used recurrent neural networks to learn hidden representations that capture the variation of contextual information of relevant posts over time. Unlike this work, we do not use microblogs, but we query the Web directly in search for evidence. Again, while direct comparison to the work of BIBREF3 is not possible, due to differences in dataset and task formulation, we can say that our framework is competitive when temporal information is not used. More importantly, our approach is orthogonal to theirs in terms of information sources used, and thus, we believe there is potential in combining the two approaches. In the context of question answering, there has been work on assessing the credibility of an answer, e.g., based on intrinsic information BIBREF28 , i.e., without any external resources. In this case, the reliability of an answer is measured by computing the divergence between language models of the question and of the answer. The spawn of community-based question answering Websites also allowed for the use of other kinds of information. Click counts, link analysis (e.g., PageRank), and user votes have been used to assess the quality of a posted answer BIBREF29 , BIBREF30 , BIBREF31 . Nevertheless, these studies address the answers' credibility level just marginally. Efforts to determine the credibility of an answer in order to assess its overall quality required the inclusion of content-based information BIBREF32 , e.g., verbs and adjectives such as suppose and probably, which cast doubt on the answer. Similarly, BIBREF33 used source credibility (e.g., does the document come from a government Website?), sentiment analysis, and answer contradiction compared to other related answers. Overall, credibility assessment for question answering has been mostly modeled at the feature level, with the goal of assessing the quality of the answers. A notable exception is the work of BIBREF34 , where credibility is treated as a task of its own right. Yet, note that credibility is different from factuality (our focus here) as the former is a subjective perception about whether a statement is credible, rather than verifying it as true or false as a matter of fact; still, these notions are often wrongly mixed in the literature. 
To the best of our knowledge, no previous work has targeted fact-checking of answers in the context of community Question Answering by gathering external support. Conclusions and Future Work We have presented and evaluated a general-purpose method for fact checking that relies on retrieving supporting information from the Web and comparing it to the claim using machine learning. Our method is lightweight in terms of features and can be very efficient because it shows good performance by only using the snippets provided by the search engines. The combination of the representational power of neural networks with the classification of kernel-based methods has proven to be crucial for making balanced predictions and obtaining good results. Overall, the strong performance of our model across two different fact-checking tasks confirms its generality and potential applicability for different domains and for different fact-checking task formulations. In future work, we plan to test the generality of our approach by applying it to these and other datasets in combination with complementary methods, e.g., those focusing on microblogs and temporal information in Twitter to make predictions about rumors BIBREF27 , BIBREF3 . We also want to explore the possibility of providing justifications for our predictions, and we plan to integrate our method into a real-world application. Acknowledgments This research was performed by the Arabic Language Technologies group at Qatar Computing Research Institute, HBKU, within the Interactive sYstems for Answer Search project (Iyas).
Yes
1ed006dde28f6946ad2f8bd204f61eda0059a515
1ed006dde28f6946ad2f8bd204f61eda0059a515_0
Q: Does this system improve on the SOTA? Text: Introduction Recent years have seen the proliferation of deceptive information online. With the increasing necessity to validate the information from the Internet, automatic fact checking has emerged as an important research topic. It is at the core of multiple applications, e.g., discovery of fake news, rumor detection in social media, information verification in question answering systems, detection of information manipulation agents, and assistive technologies for investigative journalism. At the same time, it touches many aspects, such as credibility of users and sources, information veracity, information verification, and linguistic aspects of deceptive language. In this paper, we present an approach to fact-checking with the following design principles: (i) generality, (ii) robustness, (iii) simplicity, (iv) reusability, and (v) strong machine learning modeling. Indeed, the system makes very few assumptions about the task, and looks for supportive information directly on the Web. Our system works fully automatically. It does not use any heavy feature engineering and can be easily used in combination with task-specific approaches as well, as a core subsystem. Finally, it combines the representational strength of recurrent neural networks with kernel-based classification. The system starts with a claim to verify. First, we automatically convert the claim into a query, which we execute against a search engine in order to obtain a list of potentially relevant documents. Then, we take both the snippets and the most relevant sentences in the full text of these Web documents, and we compare them to the claim. The features we use are dense representations of the claim, of the snippets and of related sentences from the Web pages, which we automatically train for the task using Long Short-Term Memory networks (LSTMs). We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. We feed all these representations as features, together with pairwise similarities, into a Support Vector Machine (SVM) classifier using an RBF kernel to classify the claim as True or False. Figure FIGREF1 presents a real example from one of the datasets we experiment with. The left-hand side of the figure contains a True example, while the right-hand side shows a False one. We show the original claims from snopes.com, the query generated by our system, and the information retrieved from the Web (most relevant snippet and text selection from the web page). The veracity of the claim can be inferred from the textual information. Our contributions can be summarized as follows: The remainder of this paper is organized as follows. Section SECREF2 introduces our method for fact checking claims using external sources. Section SECREF3 presents our experiments and discusses the results. Section SECREF4 describes an application of our approach to a different dataset and a slightly different task: fact checking in community question answering forums. Section SECREF5 presents related work. Finally, Section SECREF6 concludes and suggests some possible directions for future work. The Fact-Checking System Given a claim, our system searches for support information on the Web in order to verify whether the claim is likely to be true. The three steps in this process are (i) external support retrieval, (ii) text representation, and (iii) veracity prediction. 
External Support Retrieval This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 . As we aim to develop a general-purpose fact checking system, we use an approach for query generation that does not incorporate any features that are specific to claim verification (e.g., no temporal indicators). We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable. Finally, if our query has returned no results, we iteratively relax it by dropping the final tokens one at a time. Text Representation Next, we build the representation of a claim and the corresponding snippets and Web pages. First, we calculate three similarities (a) between the claim and a snippet, or (b) between the claim and a Web page: (i) cosine with tf-idf, (ii) cosine over embeddings, and (iii) containment BIBREF1 . We calculate the embedding of a text as the average of the embeddings of its words; for this, we use pre-trained embeddings from GloVe BIBREF2 . Moreover, as a Web page can be long, we first split it into a set of rolling sentence triplets, then we calculate the similarities between the claim and each triplet, and we take the highest scoring triplet. Finally, as we have up to ten hits from the search engine, we take the maximum and also the average of the three similarities over the snippets and over the Web pages. We further use as features the embeddings of the claim, of the best-scoring snippet, and of the best-scoring sentence triplet from a Web page. We calculate these embeddings (i) as the average of the embeddings of the words in the text, and also (ii) using LSTM encodings, which we train for the task as part of a deep neural network (NN). We also use a task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN. Veracity Prediction Next, we build classifiers: neural network (NN), support vector machines (SVM), and a combination thereof (SVM+NN). The architecture of our NN is shown on top of Figure FIGREF7 . We have five LSTM sub-networks, one for each of the text sources from two search engines: Claim, Google Web page, Google snippet, Bing Web page, and Bing snippet. The claim is fed into the neural network as-is. As we can have multiple snippets, we only use the best-matching one as described above. Similarly, we only use a single best-matching triple of consecutive sentences from a Web page. We further feed the network with the similarity features described above. 
All these vectors are concatenated and fully connected to a much more compact hidden layer that captures the task-specific embeddings. This layer is connected to a softmax output unit to classify the claim as true or false. The bottom of Figure FIGREF7 represents the generic architecture of each of the LSTM components. The input text is transformed into a sequence of word embeddings, which is then passed to the bidirectional LSTM layer to obtain a representation for the full sequence. Our second classifier is an SVM with an RBF kernel. The input is the same as for the NN: word embeddings and similarities. However, the word embeddings this time are calculated by averaging rather than using a bi-LSTM. Finally, we combine the SVM with the NN by augmenting the input to the SVM with the values of the units in the hidden layer. This represents a task-specific embedding of the input example, and in our experiments it turned out to be quite helpful. Unlike in the SVM only model, this time we use the bi-LSTM embeddings as an input to the SVM. Ultimately, this yields a combination of deep learning and task-specific embeddings with RBF kernels. Dataset We used part of the rumor detection dataset created by BIBREF3 . While they analyzed a claim based on a set of potentially related tweets, we focus on the claim itself and on the use of supporting information from the Web. The dataset consists of 992 sets of tweets, 778 of which are generated starting from a claim on snopes.com, which ma2016detecting converted into a query. Another 214 sets of tweets are tweet clusters created by other researchers BIBREF4 , BIBREF5 with no claim behind them. ma2016detecting ignored the claim and did not release it as part of their dataset. We managed to find the original claim for 761 out of the 778 snopes.com-based clusters. Our final dataset consists of 761 claims from snopes.com, which span various domains including politics, local news, and fun facts. Each of the claims is labeled as factually true (34%) or as a false rumor (66%). We further split the data into 509 for training, 132 for development, and 120 for testing. As the original split for the dataset was not publicly available, and as we only used a subset of their data, we had to make a new training and testing split. Note that we ignored the tweets, as we wanted to focus on a complementary source of information: the Web. Moreover, ma2016detecting used manual queries, while we use a fully automatic method. Finally, we augmented the dataset with Web-retrieved snippets, Web pages, and sentence triplets from Web pages. Experimental Setup We tuned the architecture (i.e., the number of layers and their size) and the hyper-parameters of the neural network on the development dataset. The best configuration uses a bidirectional LSTM with 25 units. It further uses a RMSprop optimizer with 0.001 initial learning rate, L2 regularization with INLINEFORM0 =0.1, and 0.5 dropout after the LSTM layers. The size of the hidden layer is 60 with tanh activations. We use a batch of 32 and we train for 400 epochs. For the SVM model, we merged the development and the training dataset, and we then ran a 5-fold cross-validation with grid-search, looking for the best kernel and its parameters. We ended up selecting an RBF kernel with INLINEFORM0 and INLINEFORM1 0.01. Evaluation Metrics The evaluation metrics we use are P (precision), R (recall), and F INLINEFORM0 , which we calculate with respect to the false and to the true claims. 
We further report AvgR (macro-average recall), AvgF INLINEFORM1 (macro-average F INLINEFORM2 ), and Acc (accuracy). Results Table TABREF14 shows the results on the test dataset. We can see that both the NN and the SVM models improve over the majority class baseline (all false rumors) by a sizable margin. Moreover, the NN consistently outperforms the SVM by a margin on all measures. Yet, adding the task-specific embeddings from the NN as features of the SVM yields overall improvements over both the SVM and the NN in terms of avgR, avgF INLINEFORM0 , and accuracy. We can see that both the SVM and the NN overpredict the majority class (false claims); however, the combined SVM+NN model is quite balanced between the two classes. Table TABREF22 compares the performance of the SVM with and without task-specific embeddings from the NN, when training on Web pages vs. snippets, returned by Google vs. Bing vs. both. The NN embeddings consistently help the SVM in all cases. Moreover, while the baseline SVM using snippets is slightly better than when using Web pages, there is almost no difference between snippets vs. Web pages when NN embeddings are added to the basic SVM. Finally, gathering external support from either Google or Bing makes practically no difference, and using the results from both together does not yield much further improvement. Thus, (i) the search engines already do a good job at generating relevant snippets, and one does not need to go and download the full Web pages, and (ii) the choice of a given search engine is not an important factor. These are good news for the practicality of our approach. Unfortunately, direct comparison with respect to BIBREF3 is not possible. First, we only use a subset of their examples: 761 out of 993 (see Section SECREF17 ), and we also have a different class distribution. More importantly, they have a very different formulation of the task: for them, the claim is not available as input (in fact, there has never been a claim for 21% of their examples); rather an example consists of a set of tweets retrieved using manually written queries. In contrast, our system is fully automatic and does not use tweets at all. Furthermore, their most important information source is the change in tweets volume over time, which we cannot use. Still, our results are competitive to theirs when they do not use temporal features. To put the results in perspective, we can further try to make an indirect comparison to the very recent paper by BIBREF6 . They also present a model to classify true vs. false claims extracted from snopes.com, by using information extracted from the Web. Their formulation of the task is the same as ours, but our corpora and label distributions are not the same, which makes a direct comparison impossible. Still, we can see that regarding overall classification accuracy they improve a baseline from 73.7% to 84.02% with their best model, i.e., a 39.2% relative error reduction. In our case, we go from 66.7% to 80.0%, i.e., an almost identical 39.9% error reduction. These results are very encouraging, especially given the fact that our model is much simpler than theirs regarding the sources of information used (they model the stance of the text, the reliability of the sources, the language style of the articles, and the temporal footprint). Application to cQA Next, we tested the generality of our approach by applying it to a different setup: fact-checking the answers in community question answering (cQA) forums. 
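The metrics listed above, per-class precision, recall and F1, the macro-averaged recall and F1 (AvgR, AvgF1), and accuracy, can be computed with scikit-learn as sketched below; the gold labels and predictions are invented toy values.

```python
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             recall_score, f1_score)

# Toy gold labels and predictions: 1 = true claim, 0 = false rumor.
gold = [0, 0, 0, 1, 1, 0, 1, 0]
pred = [0, 0, 1, 1, 0, 0, 1, 0]

# Per-class precision, recall, F1 (index 0 = false, index 1 = true).
p, r, f1, _ = precision_recall_fscore_support(gold, pred, labels=[0, 1])
print("P:", p, "R:", r, "F1:", f1)

# Macro-averaged recall (AvgR) and F1 (AvgF1), plus accuracy (Acc).
print("AvgR:", recall_score(gold, pred, average="macro"))
print("AvgF1:", f1_score(gold, pred, average="macro"))
print("Acc:", accuracy_score(gold, pred))
```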
As this is a new problem, for which no dataset exists, we created one. We augmented with factuality annotations the cQA dataset from SemEval-2016 Task 3 (CQA-QA-2016) BIBREF7 . Overall, we annotated 249 question–answer, or INLINEFORM0 - INLINEFORM1 , pairs (from 71 threads): 128 factually true and 121 factually false answers. Each question in CQA-QA-2016 has a subject, a body, and meta information: ID, category (e.g., Education, and Moving to Qatar), date and time of posting, user name and ID. We selected only the factual questions such as “What is Ooredoo customer service number?”, thus filtering out all (i) socializing, e.g., “What was your first car?”, (ii) requests for opinion/advice/guidance, e.g., “Which is the best bank around??”, and (iii) questions containing multiple sub-questions, e.g., “Is there a land route from Doha to Abudhabi. If yes; how is the road and how long is the journey?” Next, we annotated for veracity the answers to the retained questions. Note that in CQA-QA-2016, each answer has a subject, a body, meta information (answer ID, user name and ID), and a judgment about how well it addresses the question of its thread: Good vs. Potentially Useful vs. Bad . We only annotated the Good answers. We further discarded answers whose factuality was very time-sensitive (e.g., “It is Friday tomorrow.”, “It was raining last week.”), or for which the annotators were unsure. We targeted very high quality, and thus we did not use crowdsourcing for the annotation, as pilot annotations showed that the task was very difficult and that it was not possible to guarantee that Turkers would do all the necessary verification, e.g., gathering evidence from trusted sources. Instead, all examples were first annotated independently by four annotators, and then each example was discussed in detail to come up with a final label. We ended up with 249 Good answers to 71 different questions, which we annotated for factuality: 128 Positive and 121 Negative examples. See Table TABREF26 for details. We further split our dataset into 185 INLINEFORM0 – INLINEFORM1 pairs for training, 31 for development, and 32 for testing, preserving the general positive:negative ratio, and making sure that the questions for the INLINEFORM2 – INLINEFORM3 pairs did not overlap between the splits. Figure FIGREF23 presents an excerpt of an example from the dataset, with one question and three answers selected from a longer thread. Answer INLINEFORM4 contains false information, while INLINEFORM5 and INLINEFORM6 are true, as can be checked on an official governmental website. We had to fit our system for this problem, as here we do not have claims, but a question and an answer. So, we constructed the query from the concatenation of INLINEFORM0 and INLINEFORM1 . Moreover, as Google and Bing performed similarly, we only report results using Google. We limited our run to snippets only, as we have found them rich enough above (see Section SECREF3 ). Also, we had a list of reputed and Qatar-related sources for the domain, and we limited our results to these sources only. This time, we had more options to calculate similarities compared to the rumors dataset: we can compare against INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 – INLINEFORM5 ; we chose to go with the latter. For the LSTM representations, we use both the question and the answer. Table TABREF27 shows the results on the cQA dataset. Once again, our models outperformed all baselines by a margin. 
This time, the predictions of all models are balanced between the two classes, which is probably due to the dataset being well balanced in general. The SVM model performs better than the NN by itself. This is due to the fact that the cQA dataset is significantly smaller than the rumor detection one. Thus, the neural network could not be trained effectively by itself. Nevertheless, the task-specific representations were useful and combining them with the SVM model yielded consistent improvements on all the measures once again. Related Work Journalists, online users, and researchers are well aware of the proliferation of false information on the Web, and topics such as information credibility and fact checking are becoming increasingly important as research directions. For example, there was a recent 2016 special issue of the ACM Transactions on Information Systems journal on Trust and Veracity of Information in Social Media BIBREF9 , there was a SemEval-2017 shared task on Rumor Detection BIBREF10 , and there is an upcoming lab at CLEF-2018 on Automatic Identification and Verification of Claims in Political Debates BIBREF11 . The credibility of contents on the Web has been questioned by researches for a long time. While in the early days the main research focus was on online news portals BIBREF12 , BIBREF13 , BIBREF14 , the interest has eventually shifted towards social media BIBREF4 , BIBREF15 , BIBREF6 , BIBREF16 , which are abundant in sophisticated malicious users such as opinion manipulation trolls, paid BIBREF17 or just perceived BIBREF18 , BIBREF19 , sockpuppets BIBREF20 , Internet water army BIBREF21 , and seminar users BIBREF22 . For instance, BIBREF23 studied the credibility of Twitter accounts (as opposed to tweet posts), and found that both the topical content of information sources and social network structure affect source credibility. Other work, closer to ours, aims at addressing credibility assessment of rumors on Twitter as a problem of finding false information about a newsworthy event BIBREF4 . This model considers user reputation, writing style, and various time-based features, among others. Other efforts have focused on news communities. For example, several truth discovery algorithms are combined in an ensemble method for veracity estimation in the VERA system BIBREF24 . They proposed a platform for end-to-end truth discovery from the Web: extracting unstructured information from multiple sources, combining information about single claims, running an ensemble of algorithms, and visualizing and explaining the results. They also explore two different real-world application scenarios for their system: fact checking for crisis situations and evaluation of trustworthiness of a rumor. However, the input to their model is structured data, while here we are interested in unstructured text as input. Similarly, the task defined by BIBREF25 combines three objectives: assessing the credibility of a set of posted articles, estimating the trustworthiness of sources, and predicting user's expertise. They considered a manifold of features characterizing language, topics and Web-specific statistics (e.g., review ratings) on top of a continuous conditional random fields model. In follow-up work, BIBREF26 proposed a model to support or refute claims from snopes.com and Wikipedia by considering supporting information gathered from the Web. They used the same task formulation for claims as we do, but different datasets. 
In yet another follow-up work, Popat:2017:TLE:3041021.3055133 proposed a complex model that considers stance, source reliability, language style, and temporal information. Our approach to fact checking is related: we verify facts on the Web. However, we use a much simpler and feature-light system, and a different machine learning model. Yet, our model performs very similarly to this latter work (even though a direct comparison is not possible as the datasets differ), which is a remarkable achievement given the fact that we consider less knowledge sources, we have a conceptually simpler model, and we have six times less training data than Popat:2017:TLE:3041021.3055133. Another important research direction is on using tweets and temporal information for checking the factuality of rumors. For example, BIBREF27 used temporal patterns of rumor dynamics to detect false rumors and to predict their frequency. BIBREF27 focused on detecting false rumors in Twitter using time series. They used the change of social context features over a rumor's life cycle in order to detect rumors at an early stage after they were broadcast. A more general approach for detecting rumors is explored by BIBREF3 , who used recurrent neural networks to learn hidden representations that capture the variation of contextual information of relevant posts over time. Unlike this work, we do not use microblogs, but we query the Web directly in search for evidence. Again, while direct comparison to the work of BIBREF3 is not possible, due to differences in dataset and task formulation, we can say that our framework is competitive when temporal information is not used. More importantly, our approach is orthogonal to theirs in terms of information sources used, and thus, we believe there is potential in combining the two approaches. In the context of question answering, there has been work on assessing the credibility of an answer, e.g., based on intrinsic information BIBREF28 , i.e., without any external resources. In this case, the reliability of an answer is measured by computing the divergence between language models of the question and of the answer. The spawn of community-based question answering Websites also allowed for the use of other kinds of information. Click counts, link analysis (e.g., PageRank), and user votes have been used to assess the quality of a posted answer BIBREF29 , BIBREF30 , BIBREF31 . Nevertheless, these studies address the answers' credibility level just marginally. Efforts to determine the credibility of an answer in order to assess its overall quality required the inclusion of content-based information BIBREF32 , e.g., verbs and adjectives such as suppose and probably, which cast doubt on the answer. Similarly, BIBREF33 used source credibility (e.g., does the document come from a government Website?), sentiment analysis, and answer contradiction compared to other related answers. Overall, credibility assessment for question answering has been mostly modeled at the feature level, with the goal of assessing the quality of the answers. A notable exception is the work of BIBREF34 , where credibility is treated as a task of its own right. Yet, note that credibility is different from factuality (our focus here) as the former is a subjective perception about whether a statement is credible, rather than verifying it as true or false as a matter of fact; still, these notions are often wrongly mixed in the literature. 
To the best of our knowledge, no previous work has targeted fact-checking of answers in the context of community Question Answering by gathering external support. Conclusions and Future Work We have presented and evaluated a general-purpose method for fact checking that relies on retrieving supporting information from the Web and comparing it to the claim using machine learning. Our method is lightweight in terms of features and can be very efficient, as it achieves good performance using only the snippets provided by the search engines. The combination of the representational power of neural networks with the classification power of kernel-based methods has proven crucial for making balanced predictions and obtaining good results. Overall, the strong performance of our model across two different fact-checking tasks confirms its generality and its potential applicability to different domains and different fact-checking task formulations. In future work, we plan to test the generality of our approach by applying it to these and other datasets in combination with complementary methods, e.g., those focusing on microblogs and temporal information in Twitter to make predictions about rumors BIBREF27, BIBREF3. We also want to explore the possibility of providing justifications for our predictions, and we plan to integrate our method into a real-world application. Acknowledgments This research was performed by the Arabic Language Technologies group at Qatar Computing Research Institute, HBKU, within the Interactive sYstems for Answer Search project (Iyas).
Unanswerable
29d917cc38a56a179395d0f3a2416fca41a01659
29d917cc38a56a179395d0f3a2416fca41a01659_0
Q: How are the potentially relevant text fragments identified?
A query is generated from the claim by ranking its words by means of tf-idf and augmenting them with the named entities identified by IBM's AlchemyAPI; the resulting query of 5–10 tokens is executed against a search engine, and the snippets and URLs in the results are collected, skipping any result that points to a domain considered unreliable.
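As an illustration of this query-generation step, here is a minimal sketch in Python. It assumes a precomputed idf table (the paper builds one from a 2015 Wikipedia dump and the English Gigaword) and a list of named entities supplied by an external NER tool (the paper uses IBM's AlchemyAPI); the function name, the NLTK-based part-of-speech filtering, and the exact scoring details are illustrative rather than taken from the paper's code.

```python
from collections import Counter
import nltk  # assumes the punkt and averaged_perceptron_tagger resources are installed

CONTENT_PREFIXES = ("NN", "VB", "JJ")  # nouns, verbs, adjectives

def build_query(claim, idf, named_entities, max_len=10):
    """Sketch of query generation: keep content words, rank them by tf-idf,
    and augment the query with the claim's named entities.
    `idf` is a {term: idf} dict; `named_entities` is a list of strings."""
    tokens = nltk.word_tokenize(claim.lower())
    tagged = nltk.pos_tag(tokens)
    content = [w for w, tag in tagged
               if tag.startswith(CONTENT_PREFIXES) and w.isalnum()]
    tf = Counter(content)
    ranked = sorted(set(content), key=lambda w: tf[w] * idf.get(w, 0.0), reverse=True)
    query = ranked[:max_len]
    for ent in named_entities:  # augment with named entities
        if ent.lower() not in query:
            query.append(ent.lower())
    # If the engine returns no results, the paper relaxes the query by
    # dropping its final tokens one at a time.
    return " ".join(query[:max_len])
```

The returned keyword string is then sent to the search engine, whose snippets and URLs are collected as described in the answer above.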
ad4658c64056b6eddda00d3cbc55944ae01eb437
ad4658c64056b6eddda00d3cbc55944ae01eb437_0
Q: What algorithm and embedding dimensions are used to build the task-specific embeddings?
The task-specific embedding of the claim together with all the evidence about it is taken from the last hidden layer of the neural network: a fully connected layer of 60 tanh units that is fed by bidirectional LSTM encodings (25 units) of the claim and the retrieved texts, plus the similarity features.
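The sketch below shows how such a task-specific embedding can be pulled out of a trained Keras model and fed to an RBF-kernel SVM, in the spirit of the SVM+NN combination the paper describes. The layer name "task_embedding", the use of tf.keras, and the omission of the extra similarity features that the full system also passes to the SVM are assumptions made for illustration.

```python
from tensorflow import keras
from sklearn.svm import SVC

def svm_on_task_embeddings(trained_model, X_train, y_train, X_test):
    """Cut the trained network at its last hidden layer (assumed to be named
    "task_embedding"; 60 tanh units in the paper) and use its activations as
    features for an RBF-kernel SVM. The full system concatenates these with
    the similarity features and averaged text embeddings before classification."""
    encoder = keras.Model(inputs=trained_model.input,
                          outputs=trained_model.get_layer("task_embedding").output)
    train_feats = encoder.predict(X_train)
    test_feats = encoder.predict(X_test)
    svm = SVC(kernel="rbf")  # C and gamma are tuned by grid search in the paper
    svm.fit(train_feats, y_train)
    return svm.predict(test_feats)
```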
89b9e298993dbedd3637189c3f37c0c4791041a1
89b9e298993dbedd3637189c3f37c0c4791041a1_0
Q: What data is used to build the task-specific embeddings? Text: Introduction Recent years have seen the proliferation of deceptive information online. With the increasing necessity to validate the information from the Internet, automatic fact checking has emerged as an important research topic. It is at the core of multiple applications, e.g., discovery of fake news, rumor detection in social media, information verification in question answering systems, detection of information manipulation agents, and assistive technologies for investigative journalism. At the same time, it touches many aspects, such as credibility of users and sources, information veracity, information verification, and linguistic aspects of deceptive language. In this paper, we present an approach to fact-checking with the following design principles: (i) generality, (ii) robustness, (iii) simplicity, (iv) reusability, and (v) strong machine learning modeling. Indeed, the system makes very few assumptions about the task, and looks for supportive information directly on the Web. Our system works fully automatically. It does not use any heavy feature engineering and can be easily used in combination with task-specific approaches as well, as a core subsystem. Finally, it combines the representational strength of recurrent neural networks with kernel-based classification. The system starts with a claim to verify. First, we automatically convert the claim into a query, which we execute against a search engine in order to obtain a list of potentially relevant documents. Then, we take both the snippets and the most relevant sentences in the full text of these Web documents, and we compare them to the claim. The features we use are dense representations of the claim, of the snippets and of related sentences from the Web pages, which we automatically train for the task using Long Short-Term Memory networks (LSTMs). We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. We feed all these representations as features, together with pairwise similarities, into a Support Vector Machine (SVM) classifier using an RBF kernel to classify the claim as True or False. Figure FIGREF1 presents a real example from one of the datasets we experiment with. The left-hand side of the figure contains a True example, while the right-hand side shows a False one. We show the original claims from snopes.com, the query generated by our system, and the information retrieved from the Web (most relevant snippet and text selection from the web page). The veracity of the claim can be inferred from the textual information. Our contributions can be summarized as follows: The remainder of this paper is organized as follows. Section SECREF2 introduces our method for fact checking claims using external sources. Section SECREF3 presents our experiments and discusses the results. Section SECREF4 describes an application of our approach to a different dataset and a slightly different task: fact checking in community question answering forums. Section SECREF5 presents related work. Finally, Section SECREF6 concludes and suggests some possible directions for future work. The Fact-Checking System Given a claim, our system searches for support information on the Web in order to verify whether the claim is likely to be true. The three steps in this process are (i) external support retrieval, (ii) text representation, and (iii) veracity prediction. 
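To make the division of labour between the three steps explicit, the sketch below wires them together. Each stage is passed in as a callable, because the concrete components are only introduced in the following sections; all names here are illustrative rather than taken from the paper.

```python
def check_claim(claim, make_query, search, represent, embed, classify):
    """Three-step fact-checking pipeline: (i) external support retrieval,
    (ii) text representation, (iii) veracity prediction."""
    query = make_query(claim)                   # (i) turn the claim into a short query
    evidence = search(query)                    #     snippets + Web pages from the engine
    features = represent(claim, evidence)       # (ii) similarities and embeddings
    task_embedding = embed(claim, evidence)     #      task-specific embedding from the NN
    return classify(features, task_embedding)   # (iii) True or False
```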
External Support Retrieval This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 . As we aim to develop a general-purpose fact checking system, we use an approach for query generation that does not incorporate any features that are specific to claim verification (e.g., no temporal indicators). We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable. Finally, if our query has returned no results, we iteratively relax it by dropping the final tokens one at a time. Text Representation Next, we build the representation of a claim and the corresponding snippets and Web pages. First, we calculate three similarities (a) between the claim and a snippet, or (b) between the claim and a Web page: (i) cosine with tf-idf, (ii) cosine over embeddings, and (iii) containment BIBREF1 . We calculate the embedding of a text as the average of the embeddings of its words; for this, we use pre-trained embeddings from GloVe BIBREF2 . Moreover, as a Web page can be long, we first split it into a set of rolling sentence triplets, then we calculate the similarities between the claim and each triplet, and we take the highest scoring triplet. Finally, as we have up to ten hits from the search engine, we take the maximum and also the average of the three similarities over the snippets and over the Web pages. We further use as features the embeddings of the claim, of the best-scoring snippet, and of the best-scoring sentence triplet from a Web page. We calculate these embeddings (i) as the average of the embeddings of the words in the text, and also (ii) using LSTM encodings, which we train for the task as part of a deep neural network (NN). We also use a task-specific embedding of the claim together with all the above evidence about it, which comes from the last hidden layer of the NN. Veracity Prediction Next, we build classifiers: neural network (NN), support vector machines (SVM), and a combination thereof (SVM+NN). The architecture of our NN is shown on top of Figure FIGREF7 . We have five LSTM sub-networks, one for each of the text sources from two search engines: Claim, Google Web page, Google snippet, Bing Web page, and Bing snippet. The claim is fed into the neural network as-is. As we can have multiple snippets, we only use the best-matching one as described above. Similarly, we only use a single best-matching triple of consecutive sentences from a Web page. We further feed the network with the similarity features described above. 
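To make the three similarity measures concrete, here is a minimal sketch using scikit-learn, NumPy, and a pre-loaded GloVe dictionary; the tokenization, the unigram containment variant, and the library choices are assumptions, not the authors' code.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_cosine(claim: str, text: str) -> float:
    # (i) cosine similarity over tf-idf vectors of the two texts
    tfidf = TfidfVectorizer().fit([claim, text])
    vecs = tfidf.transform([claim, text])
    return float(cosine_similarity(vecs[0], vecs[1])[0, 0])

def avg_embedding(text: str, glove: dict, dim: int = 300) -> np.ndarray:
    # text embedding = average of its word embeddings (GloVe assumed pre-loaded)
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_cosine(claim: str, text: str, glove: dict) -> float:
    # (ii) cosine similarity over averaged word embeddings
    a, b = avg_embedding(claim, glove), avg_embedding(text, glove)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def containment(claim: str, text: str, n: int = 1) -> float:
    # (iii) containment: fraction of the claim's n-grams that also appear in the text
    def ngrams(s):
        toks = s.lower().split()
        return set(zip(*[toks[i:] for i in range(n)]))
    c, t = ngrams(claim), ngrams(text)
    return len(c & t) / len(c) if c else 0.0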
All these vectors are concatenated and fully connected to a much more compact hidden layer that captures the task-specific embeddings. This layer is connected to a softmax output unit to classify the claim as true or false. The bottom of Figure FIGREF7 represents the generic architecture of each of the LSTM components. The input text is transformed into a sequence of word embeddings, which is then passed to the bidirectional LSTM layer to obtain a representation for the full sequence. Our second classifier is an SVM with an RBF kernel. The input is the same as for the NN: word embeddings and similarities. However, the word embeddings this time are calculated by averaging rather than using a bi-LSTM. Finally, we combine the SVM with the NN by augmenting the input to the SVM with the values of the units in the hidden layer. This represents a task-specific embedding of the input example, and in our experiments it turned out to be quite helpful. Unlike in the SVM only model, this time we use the bi-LSTM embeddings as an input to the SVM. Ultimately, this yields a combination of deep learning and task-specific embeddings with RBF kernels. Dataset We used part of the rumor detection dataset created by BIBREF3 . While they analyzed a claim based on a set of potentially related tweets, we focus on the claim itself and on the use of supporting information from the Web. The dataset consists of 992 sets of tweets, 778 of which are generated starting from a claim on snopes.com, which ma2016detecting converted into a query. Another 214 sets of tweets are tweet clusters created by other researchers BIBREF4 , BIBREF5 with no claim behind them. ma2016detecting ignored the claim and did not release it as part of their dataset. We managed to find the original claim for 761 out of the 778 snopes.com-based clusters. Our final dataset consists of 761 claims from snopes.com, which span various domains including politics, local news, and fun facts. Each of the claims is labeled as factually true (34%) or as a false rumor (66%). We further split the data into 509 for training, 132 for development, and 120 for testing. As the original split for the dataset was not publicly available, and as we only used a subset of their data, we had to make a new training and testing split. Note that we ignored the tweets, as we wanted to focus on a complementary source of information: the Web. Moreover, ma2016detecting used manual queries, while we use a fully automatic method. Finally, we augmented the dataset with Web-retrieved snippets, Web pages, and sentence triplets from Web pages. Experimental Setup We tuned the architecture (i.e., the number of layers and their size) and the hyper-parameters of the neural network on the development dataset. The best configuration uses a bidirectional LSTM with 25 units. It further uses a RMSprop optimizer with 0.001 initial learning rate, L2 regularization with INLINEFORM0 =0.1, and 0.5 dropout after the LSTM layers. The size of the hidden layer is 60 with tanh activations. We use a batch of 32 and we train for 400 epochs. For the SVM model, we merged the development and the training dataset, and we then ran a 5-fold cross-validation with grid-search, looking for the best kernel and its parameters. We ended up selecting an RBF kernel with INLINEFORM0 and INLINEFORM1 0.01. Evaluation Metrics The evaluation metrics we use are P (precision), R (recall), and F INLINEFORM0 , which we calculate with respect to the false and to the true claims. 
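A minimal Keras sketch of a network in this spirit, using the hyper-parameter values reported above (25-unit bi-LSTMs, a 60-unit tanh hidden layer, dropout 0.5, RMSprop with learning rate 0.001, L2 with 0.1); the input lengths, vocabulary size, where exactly L2 and dropout are applied, and the use of Keras itself are assumptions rather than the authors' exact setup.
from tensorflow.keras import layers, models, optimizers, regularizers

MAX_LEN, VOCAB, EMB_DIM, N_SIM = 100, 20000, 300, 12   # assumed sizes

def text_branch(name: str):
    # One bi-LSTM encoder per text source (claim, Web pages, snippets).
    inp = layers.Input(shape=(MAX_LEN,), name=name)
    x = layers.Embedding(VOCAB, EMB_DIM)(inp)             # ideally GloVe-initialized
    x = layers.Bidirectional(layers.LSTM(25))(x)          # 25 LSTM units, as reported
    x = layers.Dropout(0.5)(x)                            # dropout after the LSTM layers
    return inp, x

sources = ["claim", "google_page", "google_snippet", "bing_page", "bing_snippet"]
branches = [text_branch(s) for s in sources]
sim_in = layers.Input(shape=(N_SIM,), name="similarities")

merged = layers.Concatenate()([enc for _, enc in branches] + [sim_in])
hidden = layers.Dense(60, activation="tanh",
                      kernel_regularizer=regularizers.l2(0.1))(merged)  # task-specific embedding
output = layers.Dense(2, activation="softmax")(hidden)    # true vs. false claim

model = models.Model(inputs=[inp for inp, _ in branches] + [sim_in], outputs=output)
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(inputs, labels, batch_size=32, epochs=400)

# The activations of `hidden` can then be fed, together with the original
# features, into an SVM with an RBF kernel to form the combined SVM+NN model.
task_embedding = models.Model(inputs=model.inputs, outputs=hidden)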
We further report AvgR (macro-average recall), AvgF INLINEFORM1 (macro-average F INLINEFORM2 ), and Acc (accuracy). Results Table TABREF14 shows the results on the test dataset. We can see that both the NN and the SVM models improve over the majority class baseline (all false rumors) by a sizable margin. Moreover, the NN consistently outperforms the SVM by a margin on all measures. Yet, adding the task-specific embeddings from the NN as features of the SVM yields overall improvements over both the SVM and the NN in terms of avgR, avgF INLINEFORM0 , and accuracy. We can see that both the SVM and the NN overpredict the majority class (false claims); however, the combined SVM+NN model is quite balanced between the two classes. Table TABREF22 compares the performance of the SVM with and without task-specific embeddings from the NN, when training on Web pages vs. snippets, returned by Google vs. Bing vs. both. The NN embeddings consistently help the SVM in all cases. Moreover, while the baseline SVM using snippets is slightly better than when using Web pages, there is almost no difference between snippets vs. Web pages when NN embeddings are added to the basic SVM. Finally, gathering external support from either Google or Bing makes practically no difference, and using the results from both together does not yield much further improvement. Thus, (i) the search engines already do a good job at generating relevant snippets, and one does not need to go and download the full Web pages, and (ii) the choice of a given search engine is not an important factor. These are good news for the practicality of our approach. Unfortunately, direct comparison with respect to BIBREF3 is not possible. First, we only use a subset of their examples: 761 out of 993 (see Section SECREF17 ), and we also have a different class distribution. More importantly, they have a very different formulation of the task: for them, the claim is not available as input (in fact, there has never been a claim for 21% of their examples); rather an example consists of a set of tweets retrieved using manually written queries. In contrast, our system is fully automatic and does not use tweets at all. Furthermore, their most important information source is the change in tweets volume over time, which we cannot use. Still, our results are competitive to theirs when they do not use temporal features. To put the results in perspective, we can further try to make an indirect comparison to the very recent paper by BIBREF6 . They also present a model to classify true vs. false claims extracted from snopes.com, by using information extracted from the Web. Their formulation of the task is the same as ours, but our corpora and label distributions are not the same, which makes a direct comparison impossible. Still, we can see that regarding overall classification accuracy they improve a baseline from 73.7% to 84.02% with their best model, i.e., a 39.2% relative error reduction. In our case, we go from 66.7% to 80.0%, i.e., an almost identical 39.9% error reduction. These results are very encouraging, especially given the fact that our model is much simpler than theirs regarding the sources of information used (they model the stance of the text, the reliability of the sources, the language style of the articles, and the temporal footprint). Application to cQA Next, we tested the generality of our approach by applying it to a different setup: fact-checking the answers in community question answering (cQA) forums. 
As this is a new problem, for which no dataset exists, we created one. We augmented with factuality annotations the cQA dataset from SemEval-2016 Task 3 (CQA-QA-2016) BIBREF7 . Overall, we annotated 249 question–answer, or INLINEFORM0 - INLINEFORM1 , pairs (from 71 threads): 128 factually true and 121 factually false answers. Each question in CQA-QA-2016 has a subject, a body, and meta information: ID, category (e.g., Education, and Moving to Qatar), date and time of posting, user name and ID. We selected only the factual questions such as “What is Ooredoo customer service number?”, thus filtering out all (i) socializing, e.g., “What was your first car?”, (ii) requests for opinion/advice/guidance, e.g., “Which is the best bank around??”, and (iii) questions containing multiple sub-questions, e.g., “Is there a land route from Doha to Abudhabi. If yes; how is the road and how long is the journey?” Next, we annotated for veracity the answers to the retained questions. Note that in CQA-QA-2016, each answer has a subject, a body, meta information (answer ID, user name and ID), and a judgment about how well it addresses the question of its thread: Good vs. Potentially Useful vs. Bad . We only annotated the Good answers. We further discarded answers whose factuality was very time-sensitive (e.g., “It is Friday tomorrow.”, “It was raining last week.”), or for which the annotators were unsure. We targeted very high quality, and thus we did not use crowdsourcing for the annotation, as pilot annotations showed that the task was very difficult and that it was not possible to guarantee that Turkers would do all the necessary verification, e.g., gathering evidence from trusted sources. Instead, all examples were first annotated independently by four annotators, and then each example was discussed in detail to come up with a final label. We ended up with 249 Good answers to 71 different questions, which we annotated for factuality: 128 Positive and 121 Negative examples. See Table TABREF26 for details. We further split our dataset into 185 INLINEFORM0 – INLINEFORM1 pairs for training, 31 for development, and 32 for testing, preserving the general positive:negative ratio, and making sure that the questions for the INLINEFORM2 – INLINEFORM3 pairs did not overlap between the splits. Figure FIGREF23 presents an excerpt of an example from the dataset, with one question and three answers selected from a longer thread. Answer INLINEFORM4 contains false information, while INLINEFORM5 and INLINEFORM6 are true, as can be checked on an official governmental website. We had to fit our system for this problem, as here we do not have claims, but a question and an answer. So, we constructed the query from the concatenation of INLINEFORM0 and INLINEFORM1 . Moreover, as Google and Bing performed similarly, we only report results using Google. We limited our run to snippets only, as we have found them rich enough above (see Section SECREF3 ). Also, we had a list of reputed and Qatar-related sources for the domain, and we limited our results to these sources only. This time, we had more options to calculate similarities compared to the rumors dataset: we can compare against INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 – INLINEFORM5 ; we chose to go with the latter. For the LSTM representations, we use both the question and the answer. Table TABREF27 shows the results on the cQA dataset. Once again, our models outperformed all baselines by a margin. 
This time, the predictions of all models are balanced between the two classes, which is probably due to the dataset being well balanced in general. The SVM model performs better than the NN by itself. This is due to the fact that the cQA dataset is significantly smaller than the rumor detection one. Thus, the neural network could not be trained effectively by itself. Nevertheless, the task-specific representations were useful and combining them with the SVM model yielded consistent improvements on all the measures once again. Related Work Journalists, online users, and researchers are well aware of the proliferation of false information on the Web, and topics such as information credibility and fact checking are becoming increasingly important as research directions. For example, there was a recent 2016 special issue of the ACM Transactions on Information Systems journal on Trust and Veracity of Information in Social Media BIBREF9 , there was a SemEval-2017 shared task on Rumor Detection BIBREF10 , and there is an upcoming lab at CLEF-2018 on Automatic Identification and Verification of Claims in Political Debates BIBREF11 . The credibility of contents on the Web has been questioned by researches for a long time. While in the early days the main research focus was on online news portals BIBREF12 , BIBREF13 , BIBREF14 , the interest has eventually shifted towards social media BIBREF4 , BIBREF15 , BIBREF6 , BIBREF16 , which are abundant in sophisticated malicious users such as opinion manipulation trolls, paid BIBREF17 or just perceived BIBREF18 , BIBREF19 , sockpuppets BIBREF20 , Internet water army BIBREF21 , and seminar users BIBREF22 . For instance, BIBREF23 studied the credibility of Twitter accounts (as opposed to tweet posts), and found that both the topical content of information sources and social network structure affect source credibility. Other work, closer to ours, aims at addressing credibility assessment of rumors on Twitter as a problem of finding false information about a newsworthy event BIBREF4 . This model considers user reputation, writing style, and various time-based features, among others. Other efforts have focused on news communities. For example, several truth discovery algorithms are combined in an ensemble method for veracity estimation in the VERA system BIBREF24 . They proposed a platform for end-to-end truth discovery from the Web: extracting unstructured information from multiple sources, combining information about single claims, running an ensemble of algorithms, and visualizing and explaining the results. They also explore two different real-world application scenarios for their system: fact checking for crisis situations and evaluation of trustworthiness of a rumor. However, the input to their model is structured data, while here we are interested in unstructured text as input. Similarly, the task defined by BIBREF25 combines three objectives: assessing the credibility of a set of posted articles, estimating the trustworthiness of sources, and predicting user's expertise. They considered a manifold of features characterizing language, topics and Web-specific statistics (e.g., review ratings) on top of a continuous conditional random fields model. In follow-up work, BIBREF26 proposed a model to support or refute claims from snopes.com and Wikipedia by considering supporting information gathered from the Web. They used the same task formulation for claims as we do, but different datasets. 
In yet another follow-up work, Popat:2017:TLE:3041021.3055133 proposed a complex model that considers stance, source reliability, language style, and temporal information. Our approach to fact checking is related: we verify facts on the Web. However, we use a much simpler and feature-light system, and a different machine learning model. Yet, our model performs very similarly to this latter work (even though a direct comparison is not possible as the datasets differ), which is a remarkable achievement given the fact that we consider less knowledge sources, we have a conceptually simpler model, and we have six times less training data than Popat:2017:TLE:3041021.3055133. Another important research direction is on using tweets and temporal information for checking the factuality of rumors. For example, BIBREF27 used temporal patterns of rumor dynamics to detect false rumors and to predict their frequency. BIBREF27 focused on detecting false rumors in Twitter using time series. They used the change of social context features over a rumor's life cycle in order to detect rumors at an early stage after they were broadcast. A more general approach for detecting rumors is explored by BIBREF3 , who used recurrent neural networks to learn hidden representations that capture the variation of contextual information of relevant posts over time. Unlike this work, we do not use microblogs, but we query the Web directly in search for evidence. Again, while direct comparison to the work of BIBREF3 is not possible, due to differences in dataset and task formulation, we can say that our framework is competitive when temporal information is not used. More importantly, our approach is orthogonal to theirs in terms of information sources used, and thus, we believe there is potential in combining the two approaches. In the context of question answering, there has been work on assessing the credibility of an answer, e.g., based on intrinsic information BIBREF28 , i.e., without any external resources. In this case, the reliability of an answer is measured by computing the divergence between language models of the question and of the answer. The spawn of community-based question answering Websites also allowed for the use of other kinds of information. Click counts, link analysis (e.g., PageRank), and user votes have been used to assess the quality of a posted answer BIBREF29 , BIBREF30 , BIBREF31 . Nevertheless, these studies address the answers' credibility level just marginally. Efforts to determine the credibility of an answer in order to assess its overall quality required the inclusion of content-based information BIBREF32 , e.g., verbs and adjectives such as suppose and probably, which cast doubt on the answer. Similarly, BIBREF33 used source credibility (e.g., does the document come from a government Website?), sentiment analysis, and answer contradiction compared to other related answers. Overall, credibility assessment for question answering has been mostly modeled at the feature level, with the goal of assessing the quality of the answers. A notable exception is the work of BIBREF34 , where credibility is treated as a task of its own right. Yet, note that credibility is different from factuality (our focus here) as the former is a subjective perception about whether a statement is credible, rather than verifying it as true or false as a matter of fact; still, these notions are often wrongly mixed in the literature. 
To the best of our knowledge, no previous work has targeted fact-checking of answers in the context of community Question Answering by gathering external support. Conclusions and Future Work We have presented and evaluated a general-purpose method for fact checking that relies on retrieving supporting information from the Web and comparing it to the claim using machine learning. Our method is lightweight in terms of features and can be very efficient because it shows good performance by only using the snippets provided by the search engines. The combination of the representational power of neural networks with the classification of kernel-based methods has proven to be crucial for making balanced predictions and obtaining good results. Overall, the strong performance of our model across two different fact-checking tasks confirms its generality and potential applicability for different domains and for different fact-checking task formulations. In future work, we plan to test the generality of our approach by applying it to these and other datasets in combination with complementary methods, e.g., those focusing on microblogs and temporal information in Twitter to make predictions about rumors BIBREF27 , BIBREF3 . We also want to explore the possibility of providing justifications for our predictions, and we plan to integrate our method into a real-world application. Acknowledgments This research was performed by the Arabic Language Technologies group at Qatar Computing Research Institute, HBKU, within the Interactive sYstems for Answer Search project (Iyas).
embedding of the claim, Web evidence
75773ee868c0429ccb913eceb367ff0782eeda8a
75773ee868c0429ccb913eceb367ff0782eeda8a_0
Q: Do they evaluate the syntactic parses? Text: Introduction Accurate and efficient semantic parsing is a long-standing goal in natural language processing. There are countless applications for methods that provide deep semantic analyses of sentences. Leveraging semantic information in text may provide improved algorithms for many problems in NLP, such as named entity recognition BIBREF0 , BIBREF1 , BIBREF2 , word sense disambiguation BIBREF3 , BIBREF4 , semantic role labeling BIBREF5 , co-reference resolution BIBREF6 , BIBREF7 , etc. A sufficiently expressive semantic parser may directly provide the solutions to many of these problems. Lower-level language processing tasks, such as those mentioned, may even benefit by incorporating semantic information, especially if the task can be solved jointly during semantic parsing. Knowledge plays a critical role in natural language understanding. The formalisms used by most semantic parsing approaches require an ontology of entities and predicates, with which the semantic content of sentences can be represented. Moreover, even seemingly trivial sentences may have a large number of ambiguous interpretations. Consider the sentence “She started the machine with the GPU,” for example. Without additional knowledge, such as the fact that “machine” can refer to computing devices that contain GPUs, or that computers generally contain devices such as GPUs, the reader cannot determine whether the GPU is part of the machine or if the GPU is a device that is used to start machines. The thesis underlying our research is that natural language understanding requires a belief system; that is, a large set of pre-existing beliefs related to the domain of discourse. Clearly, young children have many beliefs about the world when they learn language, and in fact, the process of learning language is largely one of learning to ground the meanings of words and sentences in these non-linguistically acquired beliefs. In some ways, the idea that language understanding requires a belief system is not new, as natural language researchers have been saying for years that background knowledge is essential to reducing ambiguity in sentence meanings BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . But despite this general acknowledgement of the importance of background knowledge, we see very few natural language understanding systems that actually employ a large belief system as the basis for comprehending sentence meanings, and for determining whether the meaning of a new sentence contradicts, extends, or is already present in its belief system. We present here a step in this direction: a probabilistic semantic parser that uses a large knowledge base (NELL) to form a prior probability distribution on the meanings of sentences it parses, and that "understands" each sentence either by identifying its existing beliefs that correspond to the sentence's meaning, or by creating new beliefs. More precisely, our semantic parser corresponds to a probabilistic generative model that assigns high probability to sentence semantic parses resulting in beliefs it already holds, lower prior probability to parses resulting in beliefs it does not hold but which are consistent with its more abstract knowledge about semantic types of arguments to different relations, and still lower prior probability to parses that contradict its beliefs about which entity types can participate in which relations. This work is only a first step. 
It is limited in that we currently use it to parse sentences with a simple noun-verb-noun syntax (e.g. "Horses eat hay."), and considers only factual assertions in declarative sentences. Its importance is that it introduces a novel approach in which the semantic parser (a) prefers sentence semantic parses that yield assertions it already believes, while (b) still allowing with lower prior probability sentence interpretations that yield new beliefs involving novel words, and (c) even allowing beliefs inconsistent with its background knowledge about semantic typing of different relations. We introduce algorithms for training the probabilistic grammar and producing parses with high posterior probability, given its prior beliefs and a new sentence. We present experimental evidence of the success and tractability of this approach for sentences with simple syntax, and evidence showing that the incorporated belief system, containing millions of beliefs, allows it to outperform state-of-the-art semantic parsers that do not hold such beliefs. Thus, we provide a principled, probabilistic approach to using a current belief system to guide semantic interpretation of new sentences which, in turn, can be used to augment and extend the belief system. We also argue that our approach can be extended to use the document-level context of a sentence as an additional source of background beliefs. For reasons including but not limited to performance and complexity, most modern parsers operate over tokens, such as words. While this has worked sufficiently well for many applications, this approach assumes that a tokenization preprocessing step produces the correct output. This is nontrivial in many languages, such as Chinese, Thai, Japanese, and Tibetic languages. In addition, a large portion of the English vocabulary is created from the combination of simpler morphemes, such as the words “build-er,” “in-describ-able,” “anti-modern-ist.” Moreover, language can be very noisy. Text messages, communication in social media, and real-world speech are but a few examples of noise obfuscating language. Standard algorithms for tokenization, lemmatization, and other preprocessing are oblivious to the underlying semantics, much less any background knowledge. Incorporating these components into a “joint parsing” framework will enable semantics and background knowledge to jointly inform lower-level processing of language. Our method couples semantics with syntax and other lower-level aspects of language, and can be guided by background knowledge via the semantic prior. We will demonstrate how this can be leveraged in our framework to model the morphology of individual verbs in a temporally-scoped relation extraction task. Semantic statements are the logical expressions that represent meaning in sentences. For example, the semantic statement turn_on_device(person:Ada, device:gpu_cluster) may be used to express the meaning of the sentence example given earlier. There are many languages or semantic formalisms that can be used to encode these logical forms: first-order logic with lambda calculus BIBREF12 , frame semantics BIBREF13 , abstract meaning representation BIBREF14 , dependency-based compositional semantics BIBREF15 , vector-space semantics BIBREF16 , BIBREF17 , for example. Our approach is flexible and does not require the use of a specific semantic formalism. In section "Hierarchical Dirichlet processes" , we review HDPs and describe the setting that we require to define our grammar. 
We present our approach in section UID17 to perform HDP inference in this new setting. In section "Generative semantic grammar" , we present the main generative process in our framework, and detail our application of the HDP. Although we present our model from a generative perspective, we show in the description of the framework that discriminative techniques can be integrated. Inference in our model is described in section "Inference" . There, we present a chart-driven agenda parser that can leverage the semantic prior to guide its search. Finally, in section "Results" , we evaluate our parser on two relation-extraction tasks: the first is a task to extract simple predicate-argument representations from SVO sentences, and the second is a temporally-scoped relation extraction task that demonstrates our parser's ability to model the morphology of individual words, leading to improved generalization performance over words. Moreover, we demonstrate that the inclusion of background knowledge from a knowledge base improves parsing performance on these tasks. The key contributions of this article are: Background Our model is an extension of context-free grammars (CFGs) BIBREF18 that couples syntax and semantics. To generate a sentence in our framework, the semantic statement is first drawn from a prior. A grammar then recursively constructs a syntax tree top-down, randomly selecting production rules from distributions that depend on the semantic statement. We present a particular incarnation of a grammar in this framework, where hierarchical Dirichlet processes (HDPs) BIBREF19 are used to select production rules randomly. The application of HDPs in our setting is novel, requiring a new inference technique. The use of the term “generative” does not refer to the Chomskian tradition of generative grammar BIBREF20 , although our approach does fall broadly within that framework. Rather, it refers to the fact that our model posits a probabilistic mechanism by which sentences are generated (by the speaker). Performing probabilistic inference under this model yields a parsing algorithm (the listener). This generative approach to modeling grammar underscores the duality between language generation and language understanding. Our grammar can be related to synchronous CFGs (SCFGs) BIBREF21 , which have been extended to perform semantic parsing BIBREF22 , BIBREF23 , BIBREF24 . However, in established use, SCFGs describe the generation of the syntactic and semantic components of sentences simultaneously, which makes the assumption that the induced probability distributions of the semantic and syntactic components factorize in a “parallel” manner. Our model instead describes the generation of the semantic component as a step with occurs prior to the syntactic component. This can be captured in SCFGs as a prior on the semantic start symbol, making no factorization assumptions on this prior. This is particularly useful when employing richer prior distributions on the semantics, such as a model of context or a knowledge base. Adaptor grammars BIBREF25 provide a framework that can jointly model the syntactic structure of sentences in addition to the morphologies of individual words BIBREF26 . Unlike previous work with adaptor grammars, our method couples syntax with semantics, and can be guided by background knowledge via the semantic prior. We will demonstrate how this can be leveraged in our framework to model the morphology of individual verbs in a temporally-scoped relation extraction task. 
Cohen10 show how to perform dependency grammar induction using adaptor grammars. While grammar induction in our framework constitutes an interesting research problem, we do not address it in this work. As in other parsing approaches, an equivalence can be drawn between our parsing problem and the problem of finding shortest paths in hypergraphs BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 . Our algorithm can then be understood as an application of $\textrm {A}^*$ search for the $k$ -best paths in a very large hypergraph. Our parser incorporates prior knowledge to guide its search, such as from an ontology and the set of beliefs in a knowledge base. Using this kind of approach, the parser can be biased to find context-appropriate interpretations in otherwise ambiguous or terse utterances. While systems such as DurrettK14, NakasholeM15, KimMoldovan1995, and Salloum09 use background knowledge about the semantic types of different noun phrases to improve their ability to perform entity linking, co-reference resolution, prepositional phrase attachment, information extraction, and question answering, and systems such as RatinovR12, DurrettK14, and ProkofyevTLVDC15 link noun phrases to Wikipedia entries to improve their ability to resolve co-references, these uses of background knowledge remain fragmentary. Krishnamurthy2014 developed a CCG parser that incorporates background knowledge from a knowledge base during training through distant supervision, but their method is not able to do so during parsing. Our parser can be trained once, and then applied to a variety of settings, each with a different context or semantic prior. Hierarchical Dirichlet processes A core component of our statistical model is the Dirichlet process (DP) BIBREF32 , which can be understood as a distribution over probability distributions. If a distribution $G$ is drawn from a DP, we can write $G\sim \text{DP}(\alpha ,H)$ , where the DP is characterized by two parameters: a concentration parameter $\alpha >0$ and a base distribution $H$ . The DP has the useful property that $\mathbb {E}[G] = H$ , and the concentration parameter $\alpha $ describes the “closeness” of $G$ to the base distribution $H$ . In typical use, a number of parameters $\theta _i$ are drawn from a discrete distribution $G$ , which is itself drawn from a Dirichlet process. The observations $y_i$ are drawn using the parameters $\theta _i$ from another distribution $F$ . This may be written as: $$G &\sim \text{DP}(\alpha , H), \\ \theta _1,\dots ,\theta _n &\sim G, \\ y_i &\sim F(\theta _i),$$ (Eq. 6) for $i=1,\hdots ,n$ . In our application, we will define $H$ to be a finite Dirichlet distribution and $F$ to be a categorical distribution. $G$ can be marginalized out in the model above, resulting in the Chinese restaurant process representation BIBREF33 : $$\phi _1, \phi _2, \dots &\sim H, \\ z_i &= {\left\lbrace \begin{array}{ll} j & \text{with probability } \frac{\#\lbrace k < i : z_k = j\rbrace }{\alpha + i - 1}, \\ j^{new} & \text{with probability } \frac{\alpha }{\alpha + i - 1}, \end{array}\right.} \\ \theta _i &= \phi _{z_i} \text{ for } i = 1, \dots , n, \\ y_i &\sim F(\theta _i),$$ (Eq. 7) where $z_1 = 1$ , $j^{new} = \max \lbrace z_1,\dots ,z_{i-1}\rbrace + 1$ is the indicator of a new table, and the quantity $\#\lbrace k < i : z_k = j\rbrace $ is the number of observations that were assigned to table $j$ .
The analogy is to imagine a restaurant where customers enter one at a time. Each customer chooses to sit at table $j$ with probability proportional to the number of people currently sitting at table $j$ , or at a new table $j^{new}$ with probability proportional to $\alpha $ . The $i^{th}$ customer's choice is represented as $z_i$ . As shown in later sections, this representation of the DP is amenable to inference using Markov chain Monte Carlo (MCMC) methods BIBREF34 , BIBREF35 . The hierarchical Dirichlet process (HDP) is an extension of the Dirichlet process for use in hierarchical modeling BIBREF19 . An advantage of this approach is that statistical strength can be shared across nodes that belong to the same subtree. In an HDP, every node $\textbf {n}$ in a fixed tree $T$ is associated with a distribution $G^\textbf {n}$ , and: $$G^\textbf {0} &\sim \text{DP}(\alpha ^{\textbf {0}}, H), \\ G^\textbf {n} &\sim \text{DP}(\alpha ^{\textbf {n}}, G^{\pi (\textbf {n})}),$$ (Eq. 8) where $\pi (\textbf {n})$ is the parent node of $\textbf {n}$ , and $\textbf {0}$ is the root of $T$ . In our application, the base distribution at the root $H$ is Dirichlet. We can draw observations $y_1,\hdots ,y_n$ from the HDP, given a sequence $x_1,\hdots ,x_n$ of $n$ paths from the root $\textbf {0}$ to a leaf: $$\theta _i &\sim G^{x_i}, \\ y_i &\sim F(\theta _i),$$ (Eq. 9) for $i=1,\hdots ,n$ . For notational brevity, we write this equivalently as $y_i\sim \text{HDP}(x_i,T)$ . Just as marginalizing the Dirichlet process yields the Chinese restaurant process, marginalizing the HDP yields the Chinese restaurant franchise (CRF). For every node in the HDP tree $\textbf {n} \in T$ , there is a “Chinese restaurant” consisting of an infinite number of tables. Every table $i$ in this restaurant at node $\textbf {n}$ is assigned to a table in the parent restaurant. The assignment variable $z_i^\textbf {n}$ is the index of the parent table to which table $i$ in node $\textbf {n}$ is assigned. $$\phi _1^\textbf {0}, \phi _2^\textbf {0}, \dots &\sim H, \\ \text{ for every node } \textbf {n} \in T, \hspace{14.22636pt} z_i^\textbf {n} &= {\left\lbrace \begin{array}{ll} j & \text{with probability } \propto n^{\pi (\textbf {n})}_j, \\ j^{new} & \text{with probability } \propto \alpha ^{\pi (\textbf {n})}, \end{array}\right.} \\ \phi _i^\textbf {n} &= \phi _{z_i^\textbf {n}}^{\pi (\textbf {n})},$$ (Eq. 10) where $\pi (\textbf {n})$ is the parent of node $\textbf {n}$ , and $n^{\pi (\textbf {n})}_j$ is the current number of customers at node $\pi (\textbf {n})$ sitting at table $j$ . We are mildly abusing notation here, since $n^{\pi (\textbf {n})}_j$ and $n^{\pi (\textbf {n})}$ refer to the number of customers at the time $z_i^\textbf {n}$ is drawn (which increases as additional $z_i^\textbf {n}$ are drawn). To draw the observation $y_i$ , we start with the leaf node at the end of the path $\textbf {n}$0 : $$\theta _i &= \phi ^{x_i}_k, \\ y_i &\sim F(\theta _i),$$ (Eq. 11) where $k - 1 = \#\lbrace j < i : x_j = x_i\rbrace $ is the number of previous observations drawn from node $x_i$ . Inference In this section, we describe our method for performing posterior inference in the HDP. Let $\mathbf {z} = \lbrace z^\textbf {n}_i : \textbf {n} \in T, i = 1,2,\hdots \rbrace $ be the set of table assignment variables in the HDP. 
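For intuition, a toy forward simulation of this seating process is sketched below; it is purely illustrative and is not the inference machinery used later in this section.
import random

def crp_assignments(n_customers: int, alpha: float, seed: int = 0):
    # Customer i joins table j with probability proportional to its occupancy,
    # or opens a new table with probability proportional to alpha.
    rng = random.Random(seed)
    counts, assignments = [], []
    for _ in range(n_customers):
        r = rng.uniform(0.0, sum(counts) + alpha)
        table, acc = len(counts), 0.0          # default: open a new table
        for j, c in enumerate(counts):
            acc += c
            if r < acc:
                table = j
                break
        if table == len(counts):
            counts.append(0)
        counts[table] += 1
        assignments.append(table)
    return assignments

# In the hierarchical (franchise) version, every occupied table at a child node
# is itself a "customer" of its parent's restaurant, so opening a new table at a
# child triggers a draw in the parent, recursively up to the root, where the
# table parameter is finally drawn from the base distribution H.
print(crp_assignments(10, alpha=1.0))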
If the distributions $H$ and $F$ are conditionally conjugate, as they are in our application, the $\mathbf {\phi }$ variables can be integrated out in closed form: $$p(\mathbf {z}|\mathbf {x},\mathbf {y}) = p(\mathbf {x}) p(\mathbf {z}) \int p(\mathbf {y}|\mathbf {x},\mathbf {z},\mathbf {\phi }) d\mathbf {\phi }.$$ (Eq. 13) The posterior $p(\mathbf {z}|\mathbf {x},\mathbf {y})$ is intractable to compute exactly, and so we approximate it by sampling. We obtain samples from $\mathbf {z}|\mathbf {x},\mathbf {y}$ by performing collapsed Gibbs sampling as described in section 5.1 of journals/jasa/Teh06: we repeatedly sample $\mathbf {z}$ from its conditional distribution, with $\mathbf {\phi }$ integrated out: $$z^\textbf {n}_i | \mathbf {x}, \mathbf {y}, z^\textbf {n}_{-i} = {\left\lbrace \begin{array}{ll} j &\text{with prob.} \propto \#\lbrace k\ne i : z^\textbf {n}_k = j\rbrace \cdot p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j), \\ j^{new} &\text{with prob.} \propto \alpha ^\textbf {n} \cdot p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j^{new}), \end{array}\right.} $$ (Eq. 14) where $y^\textbf {n}_i$ is the set of “descendant” observations of table $i$ in node $\textbf {n}$ (this includes observations assigned directly to the table, in addition to those assigned to tables further down in the hierarchy which themselves are assigned to this table), $y^\textbf {n}_{-i} = \mathbf {y} \setminus y^\textbf {n}_i$ is the set of all other observations, and $z^\textbf {n}_{-i} = \mathbf {z} \setminus z^\textbf {n}_i$ is the set of all other table assignment variables. Computing $p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j)$ is straightforward since we can follow the chain of table assignments to the root. Let $r^\textbf {n}_i$ be the root cluster assignment of the table $i$ at node $\textbf {n}$ . In fact, we found it advantageous for performance to keep track of the root cluster assignments $\mathbf {r}$ for every table in the hierarchy. Thus, when $r^\textbf {n}_i = k$ , it must be the case that $y^\textbf {n}_i$ were drawn from $F$ with parameter $\phi ^\textbf {0}_k$ . Computing $p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j^{new})$ requires marginalizing over the assignment of the new table $z^{\pi (\textbf {n})}_{j^{new}}$ : $$p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j^{new}) = &\sum _{k=1}^{m^{\pi (\textbf {n})}} \frac{n_k^{\pi (\textbf {n})}}{n^{\pi (\textbf {n})} + \alpha ^{\pi (\textbf {n})}} p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^{\pi (\textbf {n})}_{j^{new}} = k) \nonumber \\ &+ \frac{\alpha ^{\pi (\textbf {n})}}{n^{\pi (\textbf {n})} + \alpha ^{\pi (\textbf {n})}} p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^{\pi (\textbf {n})}_{j^{new}} = k^{new}),$$ (Eq. 15) where $m^{\pi (\textbf {n})}$ is the number of occupied tables at the node $\pi (\textbf {n})$ . At the root node $\pi (\textbf {n}) = \textbf {0}$ , the above probability is just the prior of $y^\textbf {n}_i$ . We observe that the above probabilities are linear functions of the likelihoods $p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, r^\textbf {n}_i = k)$ for various root cluster assignments $r^\textbf {n}_i = k$ .
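A minimal sketch of one such collapsed update for a single table-assignment variable, with the two likelihood terms abstracted as callables (all bookkeeping of descendant observations and root assignments is omitted; this is illustrative, not the authors' sampler).
import random

def resample_table(existing_counts, alpha, lik_existing, lik_new, rng):
    # Collapsed Gibbs step for one assignment variable (equation 14, schematically):
    # existing table j gets weight #{k != i : z_k = j} * p(y_i | ..., z_i = j),
    # and a new table gets weight alpha * p(y_i | ..., z_i = j_new).
    weights = [c * lik_existing(j) for j, c in enumerate(existing_counts)]
    weights.append(alpha * lik_new())
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r < acc:
            return j       # j == len(existing_counts) means "open a new table"
    return len(weights) - 1

rng = random.Random(0)
new_assignment = resample_table(
    existing_counts=[3, 1], alpha=0.5,
    lik_existing=lambda j: 0.2 if j == 0 else 0.05,
    lik_new=lambda: 0.1, rng=rng)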
Implemented naively, generating a single sample from equation 14 can take time linear in the number of clusters at the root, which would result in a quadratic-time algorithm for a single Gibbs iteration over all $\mathbf {z}$ . However, we can exploit sparsity in the root cluster assignment likelihoods to improve performance. When $H = \text{Dir}(\beta )$ is a Dirichlet distribution and $F$ is a categorical, then the collapsed root cluster assignment likelihood is: $$p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, r^\textbf {n}_i = k) = \frac{\prod _t \left( \beta _t + \#\lbrace t \in y^\textbf {0}_k\rbrace \right)^{(\#\lbrace t \in y^\textbf {n}_i\rbrace )}}{\left(\sum _t \beta _t + \# y^\textbf {0}_k \right)^{(\# y^\textbf {n}_i)}}.$$ (Eq. 16) Here, $a^{(b)}$ is the rising factorial $a(a + 1)(a + 2)\hdots (a + b - 1) = \frac{\Gamma (a + b)}{\Gamma (a)}$ , and $\#\lbrace t \in y^\textbf {n}_i\rbrace $ is the number of elements in $y^\textbf {n}_i$ with value $t$ . Notice that the denominator depends only on the sizes and not on the contents of $y^\textbf {n}_i$ and $y^\textbf {0}_k$ . Caching the denominator values for common sizes of $y^\textbf {n}_i$ and $y^\textbf {0}_k$ can allow the sampler to avoid needless recomputation. This is especially useful in our application since many of the tables at the root tend to be small. Similarly, observe that the numerator factor is 1 for values of $t$ where $a(a + 1)(a + 2)\hdots (a + b - 1) = \frac{\Gamma (a + b)}{\Gamma (a)}$0 . Thus, the time required to compute the above probability is linear in the number of unique elements of $a(a + 1)(a + 2)\hdots (a + b - 1) = \frac{\Gamma (a + b)}{\Gamma (a)}$1 , which can improve the scalability of our sampler. We perform the above computations in log space to avoid numerical overflow. In previous uses of the HDP, the paths $x_i$ are assumed to be fixed. For instance, in document modeling, the paths correspond to documents or predefined categories of documents. In our application, however, the paths may be random. In fact, we will later show that our parser heavily relies on the posterior predictive distribution over paths, where the paths correspond to semantic parses. More precisely, given a collection of training observations $\mathbf {y} = \lbrace y_1,\hdots ,y_n\rbrace $ with their paths $\mathbf {x} = \lbrace x_1,\hdots ,x_n\rbrace $ , we want to compute the probability of a new path $x^{new}$ given a new observation $y^{new}$ : $$p(x^{new}|y^{new},\mathbf {x},\mathbf {y}) &\propto p(x^{new}) \int p(y^{new}|\mathbf {z},x^{new}) p(\mathbf {z}|\mathbf {x},\mathbf {y}) d\mathbf {z}, \\ &\approx \frac{p(x^{new})}{N_{samples}} \sum _{\mathbf {z}^* \sim \mathbf {z}|\mathbf {x},\mathbf {y}} p(y^{new}|\mathbf {z}^*,x^{new}). $$ (Eq. 18) Once we have the posterior samples $\mathbf {z}^*$ , we can compute the quantity $p(y^{new}|\mathbf {z}^*,x^{new})$ by marginalizing over the table assignment for the new observation $y$ : $$p(y^{new}|\mathbf {z}^*,x^{new}) = &\sum _{j=1}^{m^{x^{new}}} \frac{n_j^{x^{new}}}{n^{x^{new}} + \alpha ^{x^{new}}} \hspace{2.84544pt} p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_j) \nonumber \\ &+ \frac{\alpha ^{x^{new}}}{n^{x^{new}} + \alpha ^{x^{new}}} \hspace{2.84544pt} p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_{j^{new}}).$$ (Eq. 
19) Here, $m^{x^{new}}$ is the number of occupied tables at node $x^{new}$ , $n^{x^{new}}_j$ is the number of customers sitting at table $j$ at node $x^{new}$ , and $n^{x^{new}}$ is the total number of customers at node $x^{new}$ . The first term $p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_j)$ can be computed since the $j^{th}$ table exists and is assigned to a table in its parent node, which in turn is assigned to a table in its parent node, and so on. We can follow the chain of table assignments to the root. In the second term, the observation is assigned to a new table, whose assignment is unknown, and so we marginalize again over the assignment in the parent node for this new table: $$p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_{j^{new}}) = &\sum _{j=1}^{m^{\pi (x^{new})}} \frac{n_j^{\pi (x^{new})}}{n^{\pi (x^{new})} + \alpha ^{\pi (x^{new})}} \hspace{5.69046pt} p\left(y^{new} \Big | \mathbf {z}^*, \theta ^{new} = \phi _j^{\pi (x^{new})}\right) \nonumber \\ &+ \frac{\alpha ^{\pi (x^{new})}}{n^{\pi (x^{new})} + \alpha ^{\pi (x^{new})}} \hspace{5.69046pt} p\left(y^{new} \Big | \mathbf {z}^*, \theta ^{new} = \phi _{j^{new}}^{\pi (x^{new})}\right),$$ (Eq. 20) where $\pi (x^{new})$ is the parent node of $x^{new}$ . Again, the probability in the first term can be computed as before, but the probability in the second term depends on the assignment of the new table, which is unknown. Thus, since it is possible that a new table will be created at every level in the hierarchy up to the root, we can apply this formula recursively. At the root $\textbf {0}$ , the probability $p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi _{j^{new}}^\textbf {0})$ is just the prior probability of $y^{new}$ . If the tree $T$ is small, it is straightforward to compute the quantity in equation for every path $x^{new}$ in the tree, using the method described above. In our application however, the size of $T$ depends on the size of the ontology, and may easily become very large. In this case, the naïve approach becomes computationally infeasible. As such, we develop an algorithm to incrementally find the $k$ best paths that maximize the quantity in equation . For sparse distributions, where most of the probability mass is concentrated in a small number of paths $x^{new}$ , this algorithm can effectively characterize the predictive distribution in equation 18 . The algorithm is essentially a search over nodes in the tree, starting at the root and descending the nodes of the tree $T$ , guided through paths of high probability. Each search state $\texttt {s}$ consists of the following fields: $\texttt {s.n}$ is the current position of the search in the tree. $\texttt {s.v}$ is an array of probability scores of length $N_{samples}$ . Each element in this array represents the probability of drawing the observation $y^{new}$ from the current node $\texttt {s.n}$ , and thus is identical to the probability of assigning $y^{new}$ to a new table at any child node of $\texttt {s.n}$ . This is useful to compute the quantity in equation using the recursive method as described above. The search is outlined in algorithm UID17 . 
We observe that the quantity in equation is a sum of independent functions, each being a linear combination of the terms $p(y^{new}|\mathbf {z}^*_i,\theta ^{new} = \phi _j^\textbf {n})$ over the tables available at node $\textbf {n}$ and the new table $p(y^{new}|\mathbf {z}^*_i,\theta ^{new} = \phi _{j^{new}}^\textbf {n})$ (this latter probability is stored in $\texttt {s.v}_i$ ). Thus, the upper bound on equation over all paths that pass through node $\texttt {s.n}$ is: $$\max _{\lbrace x^{new}:\texttt {s.n} \in x^{new}\rbrace } \frac{p(x^{new})}{N_{samples}} \sum _{i=1}^{N_{samples}} \max _{j=1,\hdots ,m^\texttt {s.n}} \left\lbrace p(y^{new}|\mathbf {z}^*_i,\theta ^{new}=\phi _j^\texttt {s.n}) , \texttt {s.v}_i \right\rbrace . $$ (Eq. 23) We sort elements in the priority queue using this expression. The search (algorithm UID17, which finds the $k$ best paths in the HDP that maximize the quantity in equation ) proceeds as follows. Initialize the priority queue with an initial state $\texttt {s}$ , where $\texttt {s.n} \leftarrow \textbf {0}$ (start at the root) and, for $i=1,\hdots ,N_{samples}$ , $\texttt {s.v}_i \leftarrow \sum _{j=1}^{m^\textbf {0}} \frac{n_j^\textbf {0}}{n^\textbf {0} + \alpha ^\textbf {0}} p(y^{new}|\mathbf {z}^*_i, \theta ^{new} = \phi _j^\textbf {0}) + \frac{\alpha ^\textbf {0}}{n^\textbf {0} + \alpha ^\textbf {0}} p(y^{new} | \mathbf {z}^*_i, \theta ^{new} = \phi ^\textbf {0}_{j^{new}})$ . Repeat until there are $k$ completed paths: pop a state $\texttt {s}$ from the priority queue; if $\texttt {s.n}$ is a leaf, complete the path $\texttt {s.n}$ with probability $\frac{p\lbrace x^{new} = \texttt {s.n}\rbrace }{N_{samples}} \sum _{i=1}^{N_{samples}} \texttt {s.v}_i$ ; otherwise, for every child node $\textbf {c}$ of $\texttt {s.n}$ , create a new search state $\texttt {s}^*$ with $\texttt {s}^*\texttt {.n} \leftarrow \textbf {c}$ and, for $i=1,\hdots ,N_{samples}$ , $\texttt {s}^*\texttt {.v}_i \leftarrow \sum _{j=1}^{m^\textbf {c}} \frac{n_j^\textbf {c}}{n^\textbf {c} + \alpha ^\textbf {c}} p(y^{new}|\mathbf {z}^*_i, \theta ^{new} = \phi _j^\textbf {c}) + \frac{\alpha ^\textbf {c}}{n^\textbf {c} + \alpha ^\textbf {c}} \texttt {s.v}_i$ , and push $\texttt {s}^*$ onto the priority queue with the key in equation 23 . As a result, once the algorithm has completed $k$ items, we are guaranteed that the search has found the $k$ best paths. Thus, an “iterator” data structure can be efficiently implemented using this algorithm, which returns paths $x^{new}$ in order of decreasing predictive probability, with the first item being optimal. The search algorithm can be modified for other representations of the HDP, and can be extended to the case where $H$ and $F$ are not conjugate. It may also be incorporated into a larger inference procedure to jointly infer the paths $x_i$ and the latent variables in the HDP. It is also straightforward to compute predictive probabilities where the path $x^{new}$ is restricted to a given subset of paths. To do so, the algorithm is restricted to only expand nodes that belong to paths in that subset. An important concern when performing inference with very large trees $T$ is that it is not feasible to explicitly store every node in memory. Fortunately, collapsed Gibbs sampling does not require storing nodes whose descendants have zero observations.
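The best-first loop itself can be sketched with a standard priority queue; `children`, `upper_bound`, and `path_score` below are schematic stand-ins for the tree structure, the bound in equation 23, and the completed-path probability, so this is an illustration of the search strategy rather than the authors' implementation.
import heapq
import itertools

def k_best_paths(root, children, upper_bound, path_score, k):
    # heapq is a min-heap, so keys are negated to pop the largest bound first;
    # the counter breaks ties without comparing node objects.
    counter = itertools.count()
    heap = [(-upper_bound(root), next(counter), root)]
    completed = []
    while heap and len(completed) < k:
        _, _, node = heapq.heappop(heap)
        kids = children(node)
        if not kids:
            # At a leaf the bound coincides with the actual path probability,
            # so the first k completed paths are the k best.
            completed.append((node, path_score(node)))
        else:
            for child in kids:
                heapq.heappush(heap, (-upper_bound(child), next(counter), child))
    return completed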
In addition, algorithm UID17 can be augmented to avoid storing these nodes, as well. To do so, we make the observation that for any node $\textbf {n} \in T$ in the tree whose descendants have no observations, $\textbf {n}$ will have zero occupied tables. Therefore, the probability $p(y^{new}|\mathbf {z}^*,x^{new}) = p(y^{new}|\mathbf {z}^*,\theta ^{new}=\phi ^\textbf {n}_{j^{new}})$ is identical for any path $x^{new}$ that passes through $\textbf {n}$ . Thus, when the search reaches node $\textbf {n}$ , it can simultaneously complete all paths $x^{new}$ that pass through $\textbf {n}$ , and avoid expanding nodes with zero observations among its descendants. As a result, we only need to explicitly store a number of nodes linear in the size of the training data, which enables practical inference with very large hierarchies. There is a caveat that arises when we wish to compute a joint predictive probability $p(x^{new}_1, \hdots , x^{new}_k | y^{new}_1, \hdots , y^{new}_k, \mathbf {x}, \mathbf {y})$ , where we have multiple novel observations. Re-writing equation 18 in this setting, we have: $$p(x^{new}_1, \hdots , x^{new}_k &| y^{new}_1, \hdots , y^{new}_k, \mathbf {x}, \mathbf {y}) \nonumber \\ &\propto p(\mathbf {x}^{new}) \int p(y^{new}_1,\hdots ,y^{new}_k|\mathbf {z}^*,\mathbf {x}^{new}) p(\mathbf {z}|\mathbf {x},\mathbf {y})d\mathbf {z}. $$ (Eq. 24) For the CRF, the joint likelihood $p(y^{new}_1,\hdots ,y^{new}_k|\mathbf {z}^*,\mathbf {x}^{new})$ does not factorize, since the observations are not independent (they are exchangeable). One workaround is to use a representation of the HDP where the joint likelihood factorizes, such as the direct assignment representation BIBREF19 . Another approach is to approximate the joint likelihood with the factorized likelihood. In our parser, we instead make the following approximation: $$p(y^{new}_1,\hdots ,y^{new}_k | \mathbf {x}^{new},\mathbf {x},\mathbf {y}) &= \prod _{i=1}^k p(y^{new}_i | y^{new}_1,\hdots ,y^{new}_{i-1}, \mathbf {x}^{new},\mathbf {x},\mathbf {y}) \\ &\approx \prod _{i=1}^k p(y^{new}_i | \mathbf {x}^{new},\mathbf {x},\mathbf {y}). $$ (Eq. 25) Substituting into equation 24 , we obtain: $$p(\mathbf {x}^{new} | \mathbf {y}^{new}, \mathbf {x}, \mathbf {y}) \propto p(\mathbf {x}^{new}) \prod _{i=1}^k \int p(y^{new}_i|\mathbf {z}^*,\mathbf {x}^{new}) p(\mathbf {z}|\mathbf {x},\mathbf {y})d\mathbf {z}.$$ (Eq. 26) When the size of the training data $(\mathbf {x},\mathbf {y})$ is large with respect to the test data $(\mathbf {x}^{new},\mathbf {y}^{new})$ , the approximation works well, which we also find to be the case in our experiments. Generative semantic grammar We present a generative model of text sentences. In this model, semantic statements are generated probabilistically from some higher-order process. Given each semantic statement, a formal grammar selects text phrases, which are concatenated to form the output sentence. We present the model such that it remains flexible with regard to the semantic formalism. Even though our grammar can be viewed as an extension of context-free grammars, it is important to note that our model of grammar is only conditionally context-free, given the semantic statement. Otherwise, if the semantic information is marginalized out, the grammar is sensitive to context. Definition Let $\mathcal {N}$ be a set of nonterminals, and let $\mathcal {W}$ be a set of terminals. 
Let $\mathbf {R}$ be a set of production rules which can be written in the form $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$ where $\textrm {A}\in \mathcal {N}$ and $\textrm {B}_1,\hdots ,\textrm {B}_k\in \mathcal {W}{2mu}\cup {2mu}\mathcal {N}$ . The tuple $(\mathcal {W},\mathcal {N},\mathbf {R})$ is a context-free grammar (CFG) BIBREF18 . We couple syntax with semantics by augmenting the production rules $\mathbf {R}$ . In every production rule $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$ in $\mathbf {R}$ , we assign to every right-hand side symbol $B_i$ a surjective operation $f_i : \mathcal {X}_A\mapsto \mathcal {X}_{B_i}$ that transforms semantic statements, where $\mathcal {X}_A$ is the set of semantic statements associated with the symbol $\textrm {A}$ and $\mathcal {X}_{B_i}$ is the set of semantic statements associated with the symbol $\textrm {B}_i$ . Intuitively, the operation describes how the semantic statement is “passed on” to the child nonterminals in the generative process. During parsing, these operations will describe how simpler semantic statements combine to form larger statements, enabling semantic compositionality. For example, suppose we have a semantic statement $x = \textit {has\_color(reptile:frog,color:green)}$ and the production rule $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$0 . We can pair the semantic operation $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$1 with the $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$2 in the right-hand side such that $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$3 selects the subject argument. Similarly, we can pair the semantic operation $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$4 with the $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$5 in the right-hand side such that $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$6 is the identity operation. The augmented production rule is $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$7 and the set of augmented rules is $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$8 . In parsing, we require the computation of the inverse of semantic operations, which is the preimage of a given semantic statement $\textrm {A} \rightarrow \textrm {B}_1 \hdots \textrm {B}_k$9 . Continuing the example above, $\mathbf {R}$0 returns a set that contains the statement $\mathbf {R}$1 in addition to statements like $\mathbf {R}$2 . To complete the definition of our grammar, we need to specify the method that, given a nonterminal $\mathrm {A} \in \mathcal {N}$ and a semantic statement $x \in \mathcal {X}_A$ , selects a production rule from the set of rules in $\mathbf {R}^*$ with the left-hand side nonterminal $A$ . To accomplish this, we define $\texttt {select}_{A,x}$ as a distribution over rules from $\mathbf {R}^*$ that has $\textrm {A}$ as its left-hand side, dependent on $x$ . We will later provide a number of example definitions of this $\texttt {select}_{A,x}$ distribution. Thus, a grammar in our framework is fully specified by the tuple $(\mathcal {W},\mathcal {N},\mathbf {R}^*,\texttt {select})$ . Note that other semantic grammar formalisms can be fit into this framework. For example, in categorical grammars, a lexicon describes the mapping from elementary components of language (such as words) to a syntactic category and a semantic meaning. 
Rules of inference are available to combine these lexical items into (tree-structured) derivations, eventually resulting in a syntactic and semantic interpretation of the full sentence BIBREF36, BIBREF37. In our framework, we imagine this process in reverse. The set $\mathcal {X}_S$ is the set of all derivable semantic statements with syntactic category $S$. The generative process begins by selecting one statement from this set $x \in \mathcal {X}_S$. Next, we consider all applications of the rules of inference that would yield $x$, with each unique application of an inference rule being equivalent to a production rule in our framework. We select one of these production rules according to our generative process and continue recursively. The items in the lexicon are equivalent to preterminal production rules in our framework. Thus, the generative process below describes a way to endow parses in categorical grammar with a probability measure. This can be used, for example, to extend earlier work on generative models with CCG BIBREF38, BIBREF39. Different choices of the $\texttt {select}$ distribution induce different probability distributions over parses. We do not see a straightforward way to fit linear or log-linear models over full parses into our framework, where a vector of features can be computed for each full parse BIBREF40, BIBREF41. This is due to our assumption that, given the semantic statement, the probability of a parse factorizes over the production rules used to construct that parse. However, the select distribution can be defined using linear and log-linear models, as we will describe in section "Selecting production rules". Generative process The process for generating sentences in this framework begins by drawing a semantic statement $x\in \mathcal {X}_S$ where $\textrm {S}$ is the root nonterminal. Thus, there is a prior distribution $p(x)$ for all $x\in \mathcal {X}_S$. Next, the syntax is generated top-down starting at $\textrm {S}$. We draw a production rule with $\textrm {S}$ as the left-hand side from $\texttt {select}_{S,x}$. The semantic transformation operations $f_i$ are applied to $x$ and the process is repeated for the right-hand side nonterminals. More concretely, we define the following operation $\texttt {expand}$, which takes two arguments: a symbol $\textrm {A}$ and a semantic statement $x$. The operation $\texttt {expand}(x, \textrm {A})$ proceeds as follows: if $\textrm {A} \in \mathcal {W}$ is a terminal, simply return the word $\textrm {A}$; otherwise, select a production rule of the form $\textrm {A}\rightarrow \textrm {B}_1 \hdots \textrm {B}_k$ by drawing $(\textrm {A}, \textrm {B}_1, \hdots , \textrm {B}_k, f_1, \hdots , f_k) \sim \texttt {select}_{\textrm {A},x}$, and return $\texttt {yield}(\texttt {expand}(f_1(x), \textrm {B}_1), \hdots , \texttt {expand}(f_k(x), \textrm {B}_k))$. The yield operation concatenates strings into a single output string. Then, the output sentence $y$ is generated simply by $y=\texttt {expand}(x, \textrm {S})$. Depending on the application, we may require that the generative process capitalizes the first letter of the output sentence, and/or appends terminating punctuation to the end. A noise model may also be appended to the generative process. The above algorithm may be easily extended to also return the full syntax tree. Selecting production rules There are many possible choices for the $\texttt {select}$ distribution. The most straightforward is to define a categorical distribution over the available production rules, and simply draw the selected rule from this distribution.
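As a concrete illustration, the following is a minimal sketch in Python of this naive, semantics-independent select distribution together with the expand recursion described above; the `Rule` representation and all names are illustrative choices of ours, not the paper's implementation.

```python
import random
from collections import namedtuple

# A production rule A -> B_1 ... B_k, with one semantic operation f_i per B_i.
Rule = namedtuple("Rule", ["lhs", "rhs", "ops"])

class CategoricalSelect:
    """Naive select: a fixed categorical over rules per nonterminal,
    ignoring the semantic statement x entirely."""

    def __init__(self):
        self.rules = {}    # lhs -> list of Rule
        self.weights = {}  # lhs -> list of float

    def add_rule(self, rule, weight=1.0):
        self.rules.setdefault(rule.lhs, []).append(rule)
        self.weights.setdefault(rule.lhs, []).append(weight)

    def sample(self, lhs, x):
        # x is accepted but deliberately unused: no semantic dependence.
        return random.choices(self.rules[lhs], weights=self.weights[lhs], k=1)[0]

def expand(x, symbol, select, terminals):
    """Top-down expansion; returns the yield as a list of words."""
    if symbol in terminals:
        return [symbol]
    rule = select.sample(symbol, x)
    words = []
    for op, child in zip(rule.ops, rule.rhs):
        words.extend(expand(op(x), child, select, terminals))
    return words
```

Replacing `CategoricalSelect.sample` with a draw that depends on $x$ is the only change required by the semantics-sensitive select distributions discussed next.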
The result would be a simple extension of probabilistic context-free grammars (PCFGs) that couples semantics with syntax. However, this would remove any dependence between the semantic statement and the production rule selection. To illustrate the importance of this dependence, consider generating a sentence with the semantic statement athlete_plays_sport(athlete:roger_federer,sport:tennis) using the grammar in figure 2 (the process is graphically depicted in figure 3 ). We start with the root nonterminal $\textrm {S}$ : We can only select the first production rule, and so we apply the semantic operation select_arg1 on the semantic statement to obtain athlete:roger_federer for the right-hand side nonterminal $\textrm {N}$ . We apply the semantic operation delete_arg1 to obtain athlete_plays_sport( $\cdot $ ,sport:tennis) for $\textrm {VP}$ . Expanding $\textrm {N}$ , we select a terminal symbol given the semantic statement athlete:roger_federer. Suppose “Andre Agassi” is returned. Now, we expand the $\textrm {VP}$ symbol. We draw from $\texttt {select}_{\textrm {VP}}$ to choose one of the two available production rules. Suppose the rule $\textrm {VP} \rightarrow \textrm {V \hspace{2.84544pt} N}$ is selected. Thus, we apply the identity operation for the $\textrm {V}$ nonterminal to obtain athlete_plays_sport( $\cdot $ ,sport:tennis). We similarly apply select_arg2 for the $\textrm {N}$ nonterminal to obtain sport:tennis. We expand the $\textrm {V}$ nonterminal, drawing from $\texttt {select}_{\textrm {V}}$ on the semantic statement athlete_plays_sport( $\cdot $ ,sport:tennis). Suppose “plays” is returned. Finally, we expand the $\textrm {N}$ nonterminal, drawing from $\texttt {select}_{\textrm {N}}$ with the statement sport:tennis. Suppose “tennis” is returned. We concatenate all returned strings to form the sentence “Andre Agassi plays tennis.” However, now consider generating another sentence with the same grammar for the statement athlete_plays_sport(athlete:roger_federer, sport:swimming). In UID32 of the above process, the select distribution would necessarily have to depend on the semantic statement. In English, the probability of observing a sentence of the form $\textrm {N} \hspace{2.84544pt} \textrm {V} \hspace{2.84544pt} \textrm {N}$ ('Rachmaninoff makes music') versus $\textrm {N} \hspace{2.84544pt} \textrm {V}$ ('Rachmaninoff composes') depends on the underlying semantic statement. To capture this dependence, we use HDPs to define the select distribution. Every nonterminal $\textrm {A}\in \mathcal {N}$ is associated with an HDP, and in order to fully specify the grammar, we need to specify the structure of each HDP tree. Let $T_A$ be the tree associated with the nonterminal $\textrm {A}$ . The model is flexible with how the trees are defined, but we construct trees with the following method. First, select $m$ discrete features $g_1,\hdots ,g_m$ where each $g_i : \mathcal {X} \mapsto \mathbb {Z}$ and $\mathbb {Z}$ is the set of integers. These features operate on semantic statements. For example, suppose we restrict the space of semantic statements to be the set of single predicate instances (triples). The relations in an ontology can be assigned unique integer indices, and so we may define a semantic feature as a function which simply returns the index of the predicate given a semantic statement. We construct the HDP tree $T_A$ starting with the root, we add a child node for every possible output of $g_1$ . 
We repeat the process recursively, constructing a complete tree of depth $m + 1$ . As an example, we will construct a tree for the nonterminal $\textrm {VP}$ for the example grammar in figure 2 . Suppose in our ontology, we have the predicates athlete_plays_sport and musician_plays_instrument, labeled 0 and 1, respectively. The ontology also contains the concepts athlete:roger_federer, sport:tennis, and sport:swimming, also labeled 0, 1, and 2, respectively. We define the first feature $g_1$ to return the predicate index. The second feature $g_2$ returns the index of the concept in the second argument of the semantic statement. The tree is constructed starting with the root, we add a child node for each predicate in the ontology: athlete_plays_sport and musician_plays_instrument. Next, for each child node, we add a grandchild node for every concept in the ontology: athlete:roger_federer, sport:tennis, and sport:swimming. The resulting tree $T_{VP}$ has depth 2, with a root node with 2 child nodes, and each child node has 3 grandchild nodes. This construction enables the select distribution for the nonterminal $\textrm {VP}$ to depend on the predicate and the second argument of the semantic statement. With the fully-specified HDPs and their corresponding trees, we have fully specified select. When sampling from $\texttt {select}_{\textrm {A},x}$ for the nonterminal $\textrm {A} \in \mathcal {N}$ and a semantic statement $x \in \mathcal {X}$ , we compute the $m$ semantic features for the given semantic statement: $g_1(x), g_2(x), \hdots , g_m(x)$ . This sequence of indices specifies a path from the root of the tree down to a leaf. We then simply draw a production rule observation from this leaf node, and return the result: $r \sim \text{HDP}(x, T_A) = \texttt {select}_{\textrm {A},x}$ . There are many other alternatives for defining the select distribution. For instance, a log-linear model can be used to learn dependence on a set of features. The HDP provides statistical advantages, smoothing the learned distributions, resulting in a model more robust to data sparsity issues. In order to describe inference in this framework, we must define additional concepts and notation. For a nonterminal $\textrm {A}\in \mathcal {N}$ , observe that the paths from the root to the leaves of its HDP tree induce a partition on the set of semantic statements $\mathcal {X}_A$ . More precisely, two semantic statements $x_1,x_2 \in \mathcal {X}_A$ belong to the same equivalence class if they correspond to the same path in an HDP tree.
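To make the routing of a semantic statement to a leaf of an HDP tree concrete, here is a small sketch in Python of the feature path $g_1(x),\hdots ,g_m(x)$ for the $\textrm {VP}$ example above; the integer encoding of statements and the specific feature functions are illustrative assumptions.

```python
from collections import namedtuple

# Toy triple encoding: predicate(arg1, arg2), all given integer ids.
# Predicates: 0 = athlete_plays_sport, 1 = musician_plays_instrument.
# Concepts:   0 = athlete:roger_federer, 1 = sport:tennis, 2 = sport:swimming.
Statement = namedtuple("Statement", ["predicate", "arg1", "arg2"])

# Semantic features for the VP tree: g_1 is the predicate index,
# g_2 is the index of the concept in the second argument.
features = [lambda s: s.predicate, lambda s: s.arg2]

def hdp_path(statement, features):
    """The root-to-leaf path in the HDP tree for this statement."""
    return tuple(g(statement) for g in features)

a = Statement(predicate=0, arg1=0, arg2=1)  # athlete_plays_sport(roger_federer, tennis)
b = Statement(predicate=0, arg1=7, arg2=1)  # hypothetical other first argument, same predicate and arg2
c = Statement(predicate=0, arg1=0, arg2=2)  # second argument is sport:swimming

# a and b fall in the same equivalence class; c does not.
assert hdp_path(a, features) == hdp_path(b, features)
assert hdp_path(a, features) != hdp_path(c, features)
```

Drawing a production rule from $\texttt {select}_{\textrm {VP},x}$ then amounts to sampling from the HDP leaf indexed by this path.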
No
11ed8c4d98a4e8994990edba54319efe9c6745f2
11ed8c4d98a4e8994990edba54319efe9c6745f2_0
Q: What knowledge bases do they use? Text: Introduction Accurate and efficient semantic parsing is a long-standing goal in natural language processing. There are countless applications for methods that provide deep semantic analyses of sentences. Leveraging semantic information in text may provide improved algorithms for many problems in NLP, such as named entity recognition BIBREF0 , BIBREF1 , BIBREF2 , word sense disambiguation BIBREF3 , BIBREF4 , semantic role labeling BIBREF5 , co-reference resolution BIBREF6 , BIBREF7 , etc. A sufficiently expressive semantic parser may directly provide the solutions to many of these problems. Lower-level language processing tasks, such as those mentioned, may even benefit by incorporating semantic information, especially if the task can be solved jointly during semantic parsing. Knowledge plays a critical role in natural language understanding. The formalisms used by most semantic parsing approaches require an ontology of entities and predicates, with which the semantic content of sentences can be represented. Moreover, even seemingly trivial sentences may have a large number of ambiguous interpretations. Consider the sentence “She started the machine with the GPU,” for example. Without additional knowledge, such as the fact that “machine” can refer to computing devices that contain GPUs, or that computers generally contain devices such as GPUs, the reader cannot determine whether the GPU is part of the machine or if the GPU is a device that is used to start machines. The thesis underlying our research is that natural language understanding requires a belief system; that is, a large set of pre-existing beliefs related to the domain of discourse. Clearly, young children have many beliefs about the world when they learn language, and in fact, the process of learning language is largely one of learning to ground the meanings of words and sentences in these non-linguistically acquired beliefs. In some ways, the idea that language understanding requires a belief system is not new, as natural language researchers have been saying for years that background knowledge is essential to reducing ambiguity in sentence meanings BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . But despite this general acknowledgement of the importance of background knowledge, we see very few natural language understanding systems that actually employ a large belief system as the basis for comprehending sentence meanings, and for determining whether the meaning of a new sentence contradicts, extends, or is already present in its belief system. We present here a step in this direction: a probabilistic semantic parser that uses a large knowledge base (NELL) to form a prior probability distribution on the meanings of sentences it parses, and that "understands" each sentence either by identifying its existing beliefs that correspond to the sentence's meaning, or by creating new beliefs. More precisely, our semantic parser corresponds to a probabilistic generative model that assigns high probability to sentence semantic parses resulting in beliefs it already holds, lower prior probability to parses resulting in beliefs it does not hold but which are consistent with its more abstract knowledge about semantic types of arguments to different relations, and still lower prior probability to parses that contradict its beliefs about which entity types can participate in which relations. This work is only a first step. 
It is limited in that we currently use it to parse sentences with a simple noun-verb-noun syntax (e.g. "Horses eat hay."), and considers only factual assertions in declarative sentences. Its importance is that it introduces a novel approach in which the semantic parser (a) prefers sentence semantic parses that yield assertions it already believes, while (b) still allowing with lower prior probability sentence interpretations that yield new beliefs involving novel words, and (c) even allowing beliefs inconsistent with its background knowledge about semantic typing of different relations. We introduce algorithms for training the probabilistic grammar and producing parses with high posterior probability, given its prior beliefs and a new sentence. We present experimental evidence of the success and tractability of this approach for sentences with simple syntax, and evidence showing that the incorporated belief system, containing millions of beliefs, allows it to outperform state-of-the-art semantic parsers that do not hold such beliefs. Thus, we provide a principled, probabilistic approach to using a current belief system to guide semantic interpretation of new sentences which, in turn, can be used to augment and extend the belief system. We also argue that our approach can be extended to use the document-level context of a sentence as an additional source of background beliefs. For reasons including but not limited to performance and complexity, most modern parsers operate over tokens, such as words. While this has worked sufficiently well for many applications, this approach assumes that a tokenization preprocessing step produces the correct output. This is nontrivial in many languages, such as Chinese, Thai, Japanese, and Tibetic languages. In addition, a large portion of the English vocabulary is created from the combination of simpler morphemes, such as the words “build-er,” “in-describ-able,” “anti-modern-ist.” Moreover, language can be very noisy. Text messages, communication in social media, and real-world speech are but a few examples of noise obfuscating language. Standard algorithms for tokenization, lemmatization, and other preprocessing are oblivious to the underlying semantics, much less any background knowledge. Incorporating these components into a “joint parsing” framework will enable semantics and background knowledge to jointly inform lower-level processing of language. Our method couples semantics with syntax and other lower-level aspects of language, and can be guided by background knowledge via the semantic prior. We will demonstrate how this can be leveraged in our framework to model the morphology of individual verbs in a temporally-scoped relation extraction task. Semantic statements are the logical expressions that represent meaning in sentences. For example, the semantic statement turn_on_device(person:Ada, device:gpu_cluster) may be used to express the meaning of the sentence example given earlier. There are many languages or semantic formalisms that can be used to encode these logical forms: first-order logic with lambda calculus BIBREF12 , frame semantics BIBREF13 , abstract meaning representation BIBREF14 , dependency-based compositional semantics BIBREF15 , vector-space semantics BIBREF16 , BIBREF17 , for example. Our approach is flexible and does not require the use of a specific semantic formalism. In section "Hierarchical Dirichlet processes" , we review HDPs and describe the setting that we require to define our grammar. 
We present our approach in section UID17 to perform HDP inference in this new setting. In section "Generative semantic grammar" , we present the main generative process in our framework, and detail our application of the HDP. Although we present our model from a generative perspective, we show in the description of the framework that discriminative techniques can be integrated. Inference in our model is described in section "Inference" . There, we present a chart-driven agenda parser that can leverage the semantic prior to guide its search. Finally, in section "Results" , we evaluate our parser on two relation-extraction tasks: the first is a task to extract simple predicate-argument representations from SVO sentences, and the second is a temporally-scoped relation extraction task that demonstrates our parser's ability to model the morphology of individual words, leading to improved generalization performance over words. Moreover, we demonstrate that the inclusion of background knowledge from a knowledge base improves parsing performance on these tasks. The key contributions of this article are: Background Our model is an extension of context-free grammars (CFGs) BIBREF18 that couples syntax and semantics. To generate a sentence in our framework, the semantic statement is first drawn from a prior. A grammar then recursively constructs a syntax tree top-down, randomly selecting production rules from distributions that depend on the semantic statement. We present a particular incarnation of a grammar in this framework, where hierarchical Dirichlet processes (HDPs) BIBREF19 are used to select production rules randomly. The application of HDPs in our setting is novel, requiring a new inference technique. The use of the term “generative” does not refer to the Chomskian tradition of generative grammar BIBREF20 , although our approach does fall broadly within that framework. Rather, it refers to the fact that our model posits a probabilistic mechanism by which sentences are generated (by the speaker). Performing probabilistic inference under this model yields a parsing algorithm (the listener). This generative approach to modeling grammar underscores the duality between language generation and language understanding. Our grammar can be related to synchronous CFGs (SCFGs) BIBREF21 , which have been extended to perform semantic parsing BIBREF22 , BIBREF23 , BIBREF24 . However, in established use, SCFGs describe the generation of the syntactic and semantic components of sentences simultaneously, which makes the assumption that the induced probability distributions of the semantic and syntactic components factorize in a “parallel” manner. Our model instead describes the generation of the semantic component as a step with occurs prior to the syntactic component. This can be captured in SCFGs as a prior on the semantic start symbol, making no factorization assumptions on this prior. This is particularly useful when employing richer prior distributions on the semantics, such as a model of context or a knowledge base. Adaptor grammars BIBREF25 provide a framework that can jointly model the syntactic structure of sentences in addition to the morphologies of individual words BIBREF26 . Unlike previous work with adaptor grammars, our method couples syntax with semantics, and can be guided by background knowledge via the semantic prior. We will demonstrate how this can be leveraged in our framework to model the morphology of individual verbs in a temporally-scoped relation extraction task. 
Cohen10 show how to perform dependency grammar induction using adaptor grammars. While grammar induction in our framework constitutes an interesting research problem, we do not address it in this work. As in other parsing approaches, an equivalence can be drawn between our parsing problem and the problem of finding shortest paths in hypergraphs BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 . Our algorithm can then be understood as an application of $\textrm {A}^*$ search for the $k$ -best paths in a very large hypergraph. Our parser incorporates prior knowledge to guide its search, such as from an ontology and the set of beliefs in a knowledge base. Using this kind of approach, the parser can be biased to find context-appropriate interpretations in otherwise ambiguous or terse utterances. While systems such as DurrettK14, NakasholeM15, KimMoldovan1995, and Salloum09 use background knowledge about the semantic types of different noun phrases to improve their ability to perform entity linking, co-reference resolution, prepositional phrase attachment, information extraction, and question answering, and systems such as RatinovR12, DurrettK14, and ProkofyevTLVDC15 link noun phrases to Wikipedia entries to improve their ability to resolve co-references, these uses of background knowledge remain fragmentary. Krishnamurthy2014 developed a CCG parser that incorporates background knowledge from a knowledge base during training through distant supervision, but their method is not able to do so during parsing. Our parser can be trained once, and then applied to a variety of settings, each with a different context or semantic prior. Hierarchical Dirichlet processes A core component of our statistical model is the Dirichlet process (DP) BIBREF32 , which can be understood as a distribution over probability distributions. If a distribution $G$ is drawn from a DP, we can write $G\sim \text{DP}(\alpha ,H)$ , where the DP is characterized by two parameters: a concentration parameter $\alpha >0$ and a base distribution $H$ . The DP has the useful property that $\mathbb {E}[G] = H$ , and the concentration parameter $\alpha $ describes the “closeness” of $G$ to the base distribution $H$ . In typical use, a number of parameters $\theta _i$ are drawn from a discrete distribution $G$ , which is itself drawn from a Dirichlet process. The observations $G\sim \text{DP}(\alpha ,H)$0 are drawn using the parameters $G\sim \text{DP}(\alpha ,H)$1 from another distribution $G\sim \text{DP}(\alpha ,H)$2 . This may be written as: $$G &\sim \text{DP}(\alpha , H), \\ \theta _1,\dots ,\theta _n &\sim G, \\ y_i &\sim F(\theta _i),$$ (Eq. 6) for $i=1,\hdots ,n$ . In our application, we will define $H$ to be a finite Dirichlet distribution and $F$ is a categorical distribution. $G$ can be marginalized out in the model above, resulting in the Chinese restaurant process representation BIBREF33 : $$\phi _1, \phi _2, \dots &\sim H, \\ z_i &= {\left\lbrace \begin{array}{ll} j & \text{with probability } \frac{\#\lbrace k < i : z_k = j\rbrace }{\alpha + i - 1}, \\ j^{new} & \text{with probability } \frac{\alpha }{\alpha + i - 1}, \end{array}\right.} \\ \theta _i &= \phi _{z_i} \text{ for } i = 1, \dots , n, \\ y_i &\sim F(\theta _i),$$ (Eq. 7) where $z_1 = 1$ , $j^{new} = \max \lbrace z_1,\dots ,z_{i-1}\rbrace + 1$ is the indicator of a new table, and the quantity $\#\lbrace k < i : z_k = j\rbrace $ is the number of observations that were assigned to table $j$ . 
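Since the Chinese restaurant process representation above is used repeatedly in what follows, a short simulation sketch may help; this is Python, and the base-distribution sampler is passed in by the caller as an assumed black box.

```python
import random

def chinese_restaurant_process(n, alpha, sample_from_H):
    """Draw n table assignments z_i and parameters theta_i via the CRP.

    sample_from_H() should return a fresh draw phi_j ~ H for a new table.
    """
    assignments = []  # z_i for each customer
    counts = []       # number of customers currently at each table
    phis = []         # parameter phi_j for each table
    for i in range(n):
        # Existing table j with prob. proportional to counts[j],
        # a new table with prob. proportional to alpha.
        weights = counts + [alpha]
        j = random.choices(range(len(weights)), weights=weights, k=1)[0]
        if j == len(counts):          # open a new table
            counts.append(0)
            phis.append(sample_from_H())
        counts[j] += 1
        assignments.append(j)
    thetas = [phis[j] for j in assignments]
    return assignments, thetas
```

For example, `chinese_restaurant_process(100, 1.0, random.random)` seats 100 customers with a uniform base distribution on [0, 1].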
The analogy is to imagine a restaurant where customers enter one at a time. Each customer chooses to sit at table $j$ with probability proportional to the number of people currently sitting at table $j$ , or at a new table $j^{new}$ with probability proportional to $\alpha $ . The $i^{th}$ customer's choice is represented as $z_i$ . As shown in later sections, this representation of the DP is amenable to inference using Markov chain Monte Carlo (MCMC) methods BIBREF34 , BIBREF35 . The hierarchical Dirichlet process (HDP) is an extension of the Dirichlet process for use in hierarchical modeling BIBREF19 . An advantage of this approach is that statistical strength can be shared across nodes that belong to the same subtree. In an HDP, every node $\textbf {n}$ in a fixed tree $T$ is associated with a distribution $G^\textbf {n}$ , and: $$G^\textbf {0} &\sim \text{DP}(\alpha ^{\textbf {0}}, H), \\ G^\textbf {n} &\sim \text{DP}(\alpha ^{\textbf {n}}, G^{\pi (\textbf {n})}),$$ (Eq. 8) where $\pi (\textbf {n})$ is the parent node of $\textbf {n}$ , and $\textbf {0}$ is the root of $T$ . In our application, the base distribution at the root $H$ is Dirichlet. We can draw observations $y_1,\hdots ,y_n$ from the HDP, given a sequence $x_1,\hdots ,x_n$ of $n$ paths from the root $\textbf {0}$ to a leaf: $$\theta _i &\sim G^{x_i}, \\ y_i &\sim F(\theta _i),$$ (Eq. 9) for $i=1,\hdots ,n$ . For notational brevity, we write this equivalently as $y_i\sim \text{HDP}(x_i,T)$ . Just as marginalizing the Dirichlet process yields the Chinese restaurant process, marginalizing the HDP yields the Chinese restaurant franchise (CRF). For every node in the HDP tree $\textbf {n} \in T$ , there is a “Chinese restaurant” consisting of an infinite number of tables. Every table $i$ in this restaurant at node $\textbf {n}$ is assigned to a table in the parent restaurant. The assignment variable $z_i^\textbf {n}$ is the index of the parent table to which table $i$ in node $\textbf {n}$ is assigned. $$\phi _1^\textbf {0}, \phi _2^\textbf {0}, \dots &\sim H, \\ \text{ for every node } \textbf {n} \in T, \hspace{14.22636pt} z_i^\textbf {n} &= {\left\lbrace \begin{array}{ll} j & \text{with probability } \propto n^{\pi (\textbf {n})}_j, \\ j^{new} & \text{with probability } \propto \alpha ^{\pi (\textbf {n})}, \end{array}\right.} \\ \phi _i^\textbf {n} &= \phi _{z_i^\textbf {n}}^{\pi (\textbf {n})},$$ (Eq. 10) where $\pi (\textbf {n})$ is the parent of node $\textbf {n}$ , and $n^{\pi (\textbf {n})}_j$ is the current number of customers at node $\pi (\textbf {n})$ sitting at table $j$ . We are mildly abusing notation here, since $n^{\pi (\textbf {n})}_j$ and $n^{\pi (\textbf {n})}$ refer to the number of customers at the time $z_i^\textbf {n}$ is drawn (which increases as additional $z_i^\textbf {n}$ are drawn). To draw the observation $y_i$ , we start with the leaf node at the end of the path $\textbf {n}$0 : $$\theta _i &= \phi ^{x_i}_k, \\ y_i &\sim F(\theta _i),$$ (Eq. 11) where $k - 1 = \#\lbrace j < i : x_j = x_i\rbrace $ is the number of previous observations drawn from node $x_i$ . Inference In this section, we describe our method for performing posterior inference in the HDP. Let $\mathbf {z} = \lbrace z^\textbf {n}_i : \textbf {n} \in T, i = 1,2,\hdots \rbrace $ be the set of table assignment variables in the HDP. 
If the distributions $H$ and $F$ are conditionally conjugate, as they are in our application, the $\mathbf {\phi }$ variables can be integrated out in closed form: $$p(\mathbf {z}|\mathbf {x},\mathbf {y}) = p(\mathbf {x}) p(\mathbf {z}) \int p(\mathbf {y}|\mathbf {x},\mathbf {z},\mathbf {\phi }) d\mathbf {\phi }.$$ (Eq. 13) The posterior $p(\mathbf {z}|\mathbf {x},\mathbf {y})$ is intractable to compute exactly, and so we approximate it by sampling. We obtain samples from $\mathbf {z}|\mathbf {x},\mathbf {y}$ by performing collapsed Gibbs sampling as described in section 5.1 of journals/jasa/Teh06: we repeatedly sample $\mathbf {z}$ from its conditional distribution, with $\mathbf {\phi }$ integrated out: $$z^\textbf {n}_i | \mathbf {x}, \mathbf {y}, z^\textbf {n}_{-i} = {\left\lbrace \begin{array}{ll} j &\text{with prob.} \propto \#\lbrace k\ne i : z^\textbf {n}_k = j\rbrace \cdot p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j), \\ j^{new} &\text{with prob.} \propto \alpha ^\textbf {n} \cdot p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j^{new}), \end{array}\right.} $$ (Eq. 14) where $y^\textbf {n}_i$ is the set of “descendant” observations of table $i$ in node $\textbf {n}$ (this includes observations assigned directly to the table, in addition to those assigned to tables further down in the hierarchy which themselves are assigned to this table), $y^\textbf {n}_{-i} = \mathbf {y} \setminus y^\textbf {n}_i$ is the set of all other observations, and $z^\textbf {n}_{-i} = \mathbf {z} \setminus z^\textbf {n}_i$ is the set of all other table assignment variables. Computing $p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j)$ is straightforward since we can follow the chain of table assignments to the root. Let $r^\textbf {n}_i$ be the root cluster assignment of the table $i$ at node $\textbf {n}$ . In fact, we found it advantageous for performance to keep track of the root cluster assignments $\mathbf {r}$ for every table in the hierarchy. Thus, when $i$0 , it must be the case that $i$1 were drawn from $i$2 with parameter $i$3 . Computing $p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j^{new})$ requires marginalizing over the assignment of the new table $z^{\pi (\textbf {n})}_{j^{new}}$ : $$p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^\textbf {n}_i = j^{new}) = &\sum _{k=1}^{m^{\pi (\textbf {n})}} \frac{n_k^{\pi (\textbf {n})}}{n^{\pi (\textbf {n})} + \alpha ^{\pi (\textbf {n})}} p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^{\pi (\textbf {n})}_{j^{new}} = k) \nonumber \\ &+ \frac{\alpha ^{\pi (\textbf {n})}}{n^{\pi (\textbf {n})} + \alpha ^{\pi (\textbf {n})}} p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, z^{\pi (\textbf {n})}_{j^{new}} = k^{new}),$$ (Eq. 15) where $m^{\pi (\textbf {n})}$ is the number of occupied tables at the node $\pi (\textbf {n})$ . At the root node $\pi (\textbf {n}) = \textbf {0}$ , the above probability is just the prior of $y^\textbf {n}_i$ . We observe that the above probabilities are linear functions of the likelihoods $p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, r^\textbf {n}_i = k)$ for various root cluster assignments $r^\textbf {n}_i = k$ . 
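The conditional in equation 14 translates directly into a single collapsed Gibbs update; the sketch below (Python) treats the likelihood terms as a black-box function and keeps only the table counts at one node, so it is illustrative rather than the bookkeeping used in the actual sampler.

```python
import random

def resample_table_assignment(counts, alpha, likelihood):
    """One collapsed Gibbs draw for a table assignment z_i at some node.

    counts[j]     -- number of other tables at this node assigned to parent table j
                     (i.e. #{k != i : z_k = j})
    likelihood(j) -- p(y_i | ..., z_i = j); call with j = len(counts) for a new table
    Returns the sampled index; len(counts) means "new table".
    """
    weights = [c * likelihood(j) for j, c in enumerate(counts)]
    weights.append(alpha * likelihood(len(counts)))  # new-table term
    return random.choices(range(len(weights)), weights=weights, k=1)[0]
```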
Implemented naively, generating a single sample from equation 14 can take time linear in the number of clusters at the root, which would result in a quadratic-time algorithm for a single Gibbs iteration over all $\mathbf {z}$ . However, we can exploit sparsity in the root cluster assignment likelihoods to improve performance. When $H = \text{Dir}(\beta )$ is a Dirichlet distribution and $F$ is a categorical, then the collapsed root cluster assignment likelihood is: $$p(y^\textbf {n}_i | \mathbf {x}, y^\textbf {n}_{-i}, z^\textbf {n}_{-i}, r^\textbf {n}_i = k) = \frac{\prod _t \left( \beta _t + \#\lbrace t \in y^\textbf {0}_k\rbrace \right)^{(\#\lbrace t \in y^\textbf {n}_i\rbrace )}}{\left(\sum _t \beta _t + \# y^\textbf {0}_k \right)^{(\# y^\textbf {n}_i)}}.$$ (Eq. 16) Here, $a^{(b)}$ is the rising factorial $a(a + 1)(a + 2)\hdots (a + b - 1) = \frac{\Gamma (a + b)}{\Gamma (a)}$ , and $\#\lbrace t \in y^\textbf {n}_i\rbrace $ is the number of elements in $y^\textbf {n}_i$ with value $t$ . Notice that the denominator depends only on the sizes and not on the contents of $y^\textbf {n}_i$ and $y^\textbf {0}_k$ . Caching the denominator values for common sizes of $y^\textbf {n}_i$ and $y^\textbf {0}_k$ can allow the sampler to avoid needless recomputation. This is especially useful in our application since many of the tables at the root tend to be small. Similarly, observe that the numerator factor is 1 for values of $t$ where $a(a + 1)(a + 2)\hdots (a + b - 1) = \frac{\Gamma (a + b)}{\Gamma (a)}$0 . Thus, the time required to compute the above probability is linear in the number of unique elements of $a(a + 1)(a + 2)\hdots (a + b - 1) = \frac{\Gamma (a + b)}{\Gamma (a)}$1 , which can improve the scalability of our sampler. We perform the above computations in log space to avoid numerical overflow. In previous uses of the HDP, the paths $x_i$ are assumed to be fixed. For instance, in document modeling, the paths correspond to documents or predefined categories of documents. In our application, however, the paths may be random. In fact, we will later show that our parser heavily relies on the posterior predictive distribution over paths, where the paths correspond to semantic parses. More precisely, given a collection of training observations $\mathbf {y} = \lbrace y_1,\hdots ,y_n\rbrace $ with their paths $\mathbf {x} = \lbrace x_1,\hdots ,x_n\rbrace $ , we want to compute the probability of a new path $x^{new}$ given a new observation $y^{new}$ : $$p(x^{new}|y^{new},\mathbf {x},\mathbf {y}) &\propto p(x^{new}) \int p(y^{new}|\mathbf {z},x^{new}) p(\mathbf {z}|\mathbf {x},\mathbf {y}) d\mathbf {z}, \\ &\approx \frac{p(x^{new})}{N_{samples}} \sum _{\mathbf {z}^* \sim \mathbf {z}|\mathbf {x},\mathbf {y}} p(y^{new}|\mathbf {z}^*,x^{new}). $$ (Eq. 18) Once we have the posterior samples $\mathbf {z}^*$ , we can compute the quantity $p(y^{new}|\mathbf {z}^*,x^{new})$ by marginalizing over the table assignment for the new observation $y$ : $$p(y^{new}|\mathbf {z}^*,x^{new}) = &\sum _{j=1}^{m^{x^{new}}} \frac{n_j^{x^{new}}}{n^{x^{new}} + \alpha ^{x^{new}}} \hspace{2.84544pt} p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_j) \nonumber \\ &+ \frac{\alpha ^{x^{new}}}{n^{x^{new}} + \alpha ^{x^{new}}} \hspace{2.84544pt} p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_{j^{new}}).$$ (Eq. 
19) Here, $m^{x^{new}}$ is the number of occupied tables at node $x^{new}$ , $n^{x^{new}}_j$ is the number of customers sitting at table $j$ at node $x^{new}$ , and $n^{x^{new}}$ is the total number of customers at node $x^{new}$ . The first term $p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_j)$ can be computed since the $j^{th}$ table exists and is assigned to a table in its parent node, which in turn is assigned to a table in its parent node, and so on. We can follow the chain of table assignments to the root. In the second term, the observation is assigned to a new table, whose assignment is unknown, and so we marginalize again over the assignment in the parent node for this new table: $$p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi ^{x^{new}}_{j^{new}}) = &\sum _{j=1}^{m^{\pi (x^{new})}} \frac{n_j^{\pi (x^{new})}}{n^{\pi (x^{new})} + \alpha ^{\pi (x^{new})}} \hspace{5.69046pt} p\left(y^{new} \Big | \mathbf {z}^*, \theta ^{new} = \phi _j^{\pi (x^{new})}\right) \nonumber \\ &+ \frac{\alpha ^{\pi (x^{new})}}{n^{\pi (x^{new})} + \alpha ^{\pi (x^{new})}} \hspace{5.69046pt} p\left(y^{new} \Big | \mathbf {z}^*, \theta ^{new} = \phi _{j^{new}}^{\pi (x^{new})}\right),$$ (Eq. 20) where $\pi (x^{new})$ is the parent node of $x^{new}$ . Again, the probability in the first term can be computed as before, but the probability in the second term depends on the assignment of the new table, which is unknown. Thus, since it is possible that a new table will be created at every level in the hierarchy up to the root, we can apply this formula recursively. At the root $\textbf {0}$ , the probability $p(y^{new} | \mathbf {z}^*, \theta ^{new} = \phi _{j^{new}}^\textbf {0})$ is just the prior probability of $y^{new}$ . If the tree $T$ is small, it is straightforward to compute the quantity in equation for every path $x^{new}$ in the tree, using the method described above. In our application however, the size of $T$ depends on the size of the ontology, and may easily become very large. In this case, the naïve approach becomes computationally infeasible. As such, we develop an algorithm to incrementally find the $k$ best paths that maximize the quantity in equation . For sparse distributions, where most of the probability mass is concentrated in a small number of paths $x^{new}$ , this algorithm can effectively characterize the predictive distribution in equation 18 . The algorithm is essentially a search over nodes in the tree, starting at the root and descending the nodes of the tree $T$ , guided through paths of high probability. Each search state $\texttt {s}$ consists of the following fields: $\texttt {s.n}$ is the current position of the search in the tree. $\texttt {s.v}$ is an array of probability scores of length $N_{samples}$ . Each element in this array represents the probability of drawing the observation $y^{new}$ from the current node $\texttt {s.n}$ , and thus is identical to the probability of assigning $y^{new}$ to a new table at any child node of $\texttt {s.n}$ . This is useful to compute the quantity in equation using the recursive method as described above. The search is outlined in algorithm UID17 . 
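Before deriving the bound used to order the queue, it may help to see the overall shape of the search; the sketch below (Python) treats the per-node probability computations and the upper bound as assumed callables, and only shows how states are expanded and completed in best-first order.

```python
import heapq

def k_best_paths(root, root_v, k, children, upper_bound, complete_score):
    """Best-first search over the HDP tree for the k highest-scoring paths.

    root_v               -- per-sample new-table probabilities at the root (the field s.v)
    children(n, v)       -- iterable of (child_node, child_v) pairs
    upper_bound(n, v)    -- upper bound on the score of any completed path through n
    complete_score(n, v) -- exact score of the completed path ending at leaf n
    """
    counter = 0                                   # tie-breaker for the heap
    queue = [(-upper_bound(root, root_v), counter, root, root_v)]
    completed = []
    while queue and len(completed) < k:
        _, _, node, v = heapq.heappop(queue)
        kids = list(children(node, v))
        if not kids:                              # leaf: the path is complete
            completed.append((node, complete_score(node, v)))
            continue
        for child, child_v in kids:
            counter += 1
            heapq.heappush(queue, (-upper_bound(child, child_v), counter, child, child_v))
    return completed
```

Because states are popped in order of an upper bound on any completion through them, the first $k$ completed leaves are guaranteed to be the $k$ best paths, which is the property exploited by algorithm UID17.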
We observe that the quantity in equation is a sum of independent functions, each being a linear combination of the terms $p(y^{new}|\mathbf {z}^*_i,\theta ^{new} = \phi _j^\textbf {n})$ over the tables available at node $\textbf {n}$ and the new table $p(y^{new}|\mathbf {z}^*_i,\theta ^{new} = \phi _{j^{new}}^\textbf {n})$ (this latter probability is stored in $\texttt {s.v}_i$ ). Thus, the upper bound on equation over all paths that pass through node $\texttt {s.n}$ is: $$\max _{\lbrace x^{new}:\texttt {s.n} \in x^{new}\rbrace } \frac{p(x^{new})}{N_{samples}} \sum _{i=1}^{N_{samples}} \max _{j=1,\hdots ,m^\texttt {s.n}} \left\lbrace p(y^{new}|\mathbf {z}^*_i,\theta ^{new}=\phi _j^\texttt {s.n}) , \texttt {s.v}_i \right\rbrace . $$ (Eq. 23) We sort elements in the priority queue using this expression. Fnfunction IfElseIfElseif elifelse Whilewhile{ Repeatrepeatuntil initialize priority queue with initial state $\texttt {s}$ $\texttt {s.n} \leftarrow \textbf {0}$ *[h]start at the root $\normalfont i=1,\hdots ,N_{samples},$ $\texttt {s.v}_i \leftarrow \sum _{j=1}^{m^\textbf {0}} \frac{n_j^\textbf {0}}{n^\textbf {0} + \alpha ^\textbf {0}} p(y^{new}|\mathbf {z}^*_i, \theta ^{new} = \phi _j^\textbf {0}) + \frac{\alpha ^\textbf {0}}{n^\textbf {0} + \alpha ^\textbf {0}} p(y^{new} | \mathbf {z}^*_i, \theta ^{new} = \phi ^\textbf {0}_{j^{new}})$ there are $k$ completed paths pop state s from the priority queue $\normalfont \texttt {s.n}$ is a leaf complete the path s.n with probability $\frac{p\lbrace x^{new} = \texttt {s.n}\rbrace }{N_{samples}} \sum _{i=1}^{N_{samples}} \texttt {s.v}_i$ child node $\normalfont \textbf {c}$ of $\normalfont \texttt {s.n}$ , create new search state $\texttt {s}^*$ $\texttt {s.n} \leftarrow \textbf {0}$0 $\texttt {s.n} \leftarrow \textbf {0}$1 $\texttt {s.n} \leftarrow \textbf {0}$2 push $\texttt {s.n} \leftarrow \textbf {0}$3 onto priority queue with key in equation 23 Search algorithm to find the $\texttt {s.n} \leftarrow \textbf {0}$4 best paths in the HDP that maximize the quantity in equation . As a result, once the algorithm has completed $\texttt {s.n} \leftarrow \textbf {0}$5 items, we are guaranteed that the search has found $\texttt {s.n} \leftarrow \textbf {0}$6 best paths. Thus, an “iterator” data structure can be efficiently implemented using this algorithm, which returns paths $\texttt {s.n} \leftarrow \textbf {0}$7 in order of decreasing predictive probability, with the first item being optimal. The search algorithm can be modified for other representations of the HDP, and can be extended to the case where $\texttt {s.n} \leftarrow \textbf {0}$8 and $\texttt {s.n} \leftarrow \textbf {0}$9 are not conjugate. It may also be incorporated into a larger inference procedure to jointly infer the paths $\normalfont i=1,\hdots ,N_{samples},$0 and the latent variables in the HDP. It is also straightforward to compute predictive probabilities where the path $\normalfont i=1,\hdots ,N_{samples},$1 is restricted to a subset of paths $\normalfont i=1,\hdots ,N_{samples},$2 : $\normalfont i=1,\hdots ,N_{samples},$3 . To do so, the algorithm is restricted to only expand nodes that belong to paths in $\normalfont i=1,\hdots ,N_{samples},$4 . An important concern when performing inference with very large trees $T$ is that it is not feasible to explicitly store every node in memory. Fortunately, collapsed Gibbs sampling does not require storing nodes whose descendants have zero observations. 
NELL
08cbc9b8a8df56ec7be626f89285a621e1350f63
08cbc9b8a8df56ec7be626f89285a621e1350f63_0
Q: Which dataset do they use? Text: Introduction An important property of human communication is that listeners can infer information beyond the literal meaning of an utterance. One well-studied type of inference is scalar inference BIBREF0, BIBREF1, whereby a listener who hears an utterance with a scalar item like some infers the negation of a stronger alternative with all: Chris ate some of the cookies. $\rightsquigarrow $ Chris ate some, but not all, of the cookies. Early accounts of scalar inferences (e.g., BIBREF2, BIBREF1, BIBREF3) considered them to arise by default unless specifically cancelled by the context. However, in a recent corpus study, degen2015investigating showed that there is much more variability in the strength of scalar inferences from some to not all than previously assumed. degen2015investigating further showed that this variability is not random and that several lexical, syntactic, and semantic/pragmatic features of context explain much of the variance in inference strength. Recent Bayesian game-theoretic models of pragmatic reasoning BIBREF4, BIBREF5, which are capable of integrating multiple linguistic cues with world knowledge, are able to correctly predict listeners' pragmatic inferences in many cases (e.g., BIBREF6, BIBREF7). These experimental and modeling results suggest that listeners integrate multiple linguistic and contextual cues in utterance interpretation, raising the question how listeners are able to draw these pragmatic inferences so quickly in interaction. This is an especially pressing problem considering that inference in Bayesian models of pragmatics is intractable when scaling up beyond toy domains to make predictions about arbitrary utterances. One possibility is that language users learn to use shortcuts to the inference (or lack thereof) by learning associations between the speaker's intention and surface-level cues present in the linguistic signal across many instances of encountering a scalar expression like some. In this work, we investigate whether it is possible to learn such associations between cues in the linguistic signal and speaker intentions by training neural network models to predict empirically elicited inference strength ratings from the linguistic input. In this enterprise we follow the recent successes of neural network models in predicting a range of linguistic phenomena such as long distance syntactic dependencies (e.g., BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15), semantic entailments (e.g., BIBREF16, BIBREF17), acceptability judgements BIBREF18, factuality BIBREF19, and, to some extent, speaker commitment BIBREF20. In particular, we ask: How well can a neural network sentence encoder learn to predict human inference strength judgments for utterances with some? To what extent does such a model capture the qualitative effects of hand-mined contextual features previously identified as influencing inference strength? To address these questions, we first compare the performance of neural models that differ in the underlying word embedding model (GloVe, ELMo, or BERT) and in the sentence embedding model (LSTM, LSTM$+$attention). We then probe the best model's behavior through a regression analysis, an analysis of attention weights, and an analysis of predictions on manually constructed minimal sentence pairs. The dataset We used the annotated dataset reported by degen2015investigating, a dataset of the utterances from the Switchboard corpus of telephone dialogues BIBREF21 that contain the word some. 
The dataset consists of 1,362 unique utterances with a noun phrase containing some (some-NP). For each example with a some-NP, degen2015investigating collected inference strength ratings from at least 10 participants recruited on Amazon's Mechanical Turk. Participants saw both the target utterance and ten utterances from the preceding discourse context. They then rated the similarity between the original utterance like (UNKREF8) and an utterance in which some was replaced with some, but not all like (UNKREF9), on a 7-point Likert scale with endpoints labeled “very different meaning” (1) and “same meaning” (7). Low similarity ratings thus indicate low inference strength, and high similarity ratings indicate high inference strength. I like, I like to read some of the philosophy stuff. I like, I like to read some, but not all, of the philosophy stuff. Using this corpus, degen2015investigating found that several linguistic and contextual factors influenced inference strength ratings, including the partitive form of, subjecthood, the previous mention of the embedded NP referent, determiner strength, and modification of the head noun. Partitive: (UNKREF10a-b) are example utterances from the corpus with and without partitive some-NPs, respectively. Values in parentheses indicate the mean inference strength rating for that item. On average, utterances with partitives yielded stronger inference ratings than ones without. I've seen some of them on repeat. (5.8) You sound like you have some small ones in the background. (1.5) Subjecthood: Utterances in which the some-NP appears in subject position, as in (UNKREF13a), yielded stronger inference ratings than utterances in which the some-NP appears in a different grammatical position, e.g., as a direct object as in (UNKREF13b). Some kids are really having it. (5.9) That would take some planning. (1.4) Previous mention: Discourse properties also have an effect on inference strength. A some-NP with a previously mentioned embedded NP referent yields stronger inferences than a some-NP whose embedded NP referent has not been previously mentioned. For example, (UNKREF16a) contains a some-NP in which them refers to previously mentioned Mission Impossible tape recordings, whereas planning in the some-NP in (UNKREF16b) has not been previously mentioned. I've seen some of them on repeats. (5.8) That would take some planning. (1.4) Modification: BIBREF22 also found a small effect of whether or not the head noun of the some-NP was modified such that some-NP with unmodified head nouns yielded slightly stronger inferences than those with modified head nouns. Determiner strength: Finally, the strength of the determiner some has traditionally been analyzed as having a weak, indefinite, non-presuppositional reading as well as a strong, quantificational, presuppositional reading BIBREF23, BIBREF24. While the weak/strong distinction has been notoriously hard to pin down BIBREF25, degen2015investigating used strength norms elicited independently for each item, which exploited the presuppositional nature of strong some: removing some (of) from utterances with weak some leads to higher ratings on a 7-point Likert scale from `different meaning' to `same meaning' than removing it from utterances with strong some. Items with stronger some – e.g., (UNKREF19a), strength 3.3 – yielded stronger inference ratings than items with weaker some – e.g., (UNKREF19b), strength 6.7. And some people don't vote. (5.2) Well, we could use some rain up here. 
(2.1) The quantitative findings from degen2015investigating are summarized in Figure FIGREF27, which shows in blue the regression coefficients for all predictors she considered (see the original paper for more detailed descriptions). For our experiments, we randomly split the dataset into a 70% training and 30% test set, resulting in 954 training items and 408 test items. Model The objective of the model is to predict mean inference strength rating $i$ given an utterance (a sequence of words) $U = \lbrace w_1,w_2,...,w_N\rbrace $. While the original participant ratings were on a Likert scale from 1 to 7, we rescale these values to the interval $[0,1]$. Figure FIGREF22 shows the overall model architecture. The model is a sentence classification model akin to the model proposed by BIBREF26. The model first embeds the utterance tokens using pre-trained embedding models, and then forms a sentence representation by passing the embedded tokens through a 2-layer bidirectional LSTM network (biLSTM) BIBREF27 with dropout BIBREF28 followed by a self-attention mechanisms that provides a weighted average of the hidden states of the top-most biLSTM hidden states. This sentence representation is then passed through a transformation layer with a sigmoid activation function, which outputs the predicted score in the interval $[0,1]$. We rescale this predicted value to fall in the original interval $[1,7]$. We evaluated three word embedding models: GloVe, a static pre-trained word embedding matrix BIBREF29, and pre-trained contextual word embedding models in the form of English ELMo BIBREF30, BIBREF31 and English BERT BIBREF32, BIBREF33 models. We used the 100d GloVe embeddings, and we evaluated the 768d uncased BERT-base and 1024d BERT-large models. Experiments ::: Training We used 5-fold cross-validation on the training data to optimize the following hyperparameters. Word embedding model: GloVe, ELMo, BERT-base, BERT-large. Output layer of word embedding models: $[1,3]$ for ELMo, $[1,12]$ for BERT-base, and $[1,24]$ for BERT-large. Number of training epochs: $[1,800]$. Dimension of LSTM hidden states: $\lbrace 100,200,400,800\rbrace $. Dropout rate in LSTM: $\lbrace 0.1,0.2,0.3,0.4\rbrace $. We first optimized the output layer of the word embedding model for each embedding model while keeping all other parameters fixed. We then optimized the other parameters for each embedding model by computing the average correlation between the model predictions and the human ratings across the five cross-validation folds. Architectural variants. We also evaluated all combinations of two architectural variants: First, we evaluated models in which we included the attention layer (LSTM+Attention) or simply used the final hidden state of the LSTM (LSTM) as a sentence representation. Second, since participants providing inference strength ratings also had access to the preceding conversational context, we also compared models that make predictions based only the target utterance with the some-NP and models that make predictions based on target utterances and the preceding conversational context. For the models using GloVe and ELMo, we prepended the conversational context to the target utterance to obtain a joint context and sentence embedding. 
For models using BERT, we made use of the fact that BERT had been trained to jointly embed two sentences or documents, and we obtained embeddings for the tokens in the target utterance by feeding the target utterance as the first document and the preceding context as the second document into the BERT encoder. For these models, we discarded the hidden states of the preceding context and only used the output of the BERT encoder for the tokens in the target utterance. Implementation details. We implemented the model in PyTorch BIBREF34. We trained the model using the Adam optimizer BIBREF35 with default parameters and a learning rate of 0.001, minimizing the mean squared error of the predicted ratings. In the no-context experiments, we truncated target utterances longer than 30 tokens, and in the experiments with context, we truncated the beginning of the preceding context such that the number of tokens did not exceed 150. Evaluation. We evaluated the model predictions in terms of their correlation $r$ with the human inference strength ratings. For the optimization of hyperparameters and architectural variants, we evaluated the model using 5-fold cross-validation. We then took the best set of parameters and trained a model on all the available training data and evaluated that model on the held-out data. Experiments ::: Tuning results We find that the attention layer improves predictions; that contextual word embeddings lead to better results than the static GloVe embeddings; and that including the conversational context does not improve predictions (see Appendix SECREF8, for learning curves of all models, and Section SECREF6, for a discussion of the role of conversational context). Otherwise, the model is quite insensitive to hyperparameter settings: neither the dimension of the hidden LSTM states nor the dropout rate had considerable effects on the prediction accuracy. We do find, however, that there are differences depending on the BERT and ELMo layer that we use as word representations. We find that higher layers work better than lower layers, suggesting that word representations that are influenced by other utterance tokens are helpful for this task. Based on these optimization runs, we chose the model with attention that uses the BERT-large embeddings but no conversational context for the subsequent experiments and analyses. Experiments ::: Test results Figure FIGREF26 shows the correlation between the best model according to the tuning runs (now trained on all training data) and the empirical ratings on the 408 held-out test items. As this plot shows, the model predictions fall within a close range of the empirical ratings for most of the items ($r=0.78$). Further, similarly as in the empirical data, there seem to be two clusters in the model predictions: one that includes lower ratings and one that includes higher ratings, corresponding to strong and weak scalar inferences, respectively. The only systematic deviation appears to be that the model does not predict any extreme ratings – almost all predictions are greater than 2 or less than 6, whereas the empirical ratings include some cases outside of this range. Overall, these results suggest that the model can learn to closely predict the strength of scalar inferences. 
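For reference, a minimal PyTorch sketch of the regression architecture described in the Model section above: pre-trained token embeddings feed a 2-layer biLSTM with dropout, an additive self-attention layer pools the top hidden states, and a sigmoid output is rescaled to the original rating interval. Dimensions, the attention parameterization, and the training snippet are illustrative assumptions rather than the authors' exact code.

```python
import torch
import torch.nn as nn

class InferenceStrengthRegressor(nn.Module):
    """Sketch: biLSTM encoder + additive self-attention pooling + sigmoid head."""
    def __init__(self, emb_dim=1024, hidden_dim=400, dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True,
                            bidirectional=True, dropout=dropout)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # one score per token (assumed form)
        self.out = nn.Linear(2 * hidden_dim, 1)

    def forward(self, embeddings, mask):
        # embeddings: (batch, seq_len, emb_dim) from a pre-trained model
        # (GloVe/ELMo/BERT), assumed precomputed here; mask: 1 for real tokens
        h, _ = self.lstm(embeddings)                        # (batch, seq, 2*hidden)
        scores = self.attn(h).squeeze(-1)                   # (batch, seq)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)               # attention weights
        sent = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # weighted average of states
        return torch.sigmoid(self.out(sent)).squeeze(-1)    # prediction in [0, 1]

# Training-step sketch: MSE against ratings rescaled from [1, 7] to [0, 1].
model = InferenceStrengthRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
emb = torch.randn(8, 30, 1024)        # placeholder for BERT-large token embeddings
mask = torch.ones(8, 30)
ratings = torch.rand(8)               # already rescaled to [0, 1]
opt.zero_grad()
loss = nn.functional.mse_loss(model(emb, mask), ratings)
loss.backward()
opt.step()
# At evaluation time, predictions are mapped back via 1 + 6 * model(emb, mask).
```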
However, this result by itself does not provide evidence that the model learned associations between linguistic cues and inference strength, since it could also be that, given the large number of parameters, the model learned spurious correlations independent of the empirically established cue-strength associations. To rule out the latter explanation, we probed the model's behavior in multiple ways, which we discuss next. Model behavior analyses Regression analysis. As a first analysis, we investigated whether the neural network model predictions explain (some of) the variance explained by the linguistic factors that modulate inference strength. To this end, we used a slightly simplified Bayesian implementation of the linear mixed-effects model by degen2015investigating using the brms BIBREF36 and STAN BIBREF37 packages and compared this original model to an extended model that included the output of the above NN model as a predictor. For this comparison, we investigated whether the magnitude of a predictor in the original model significantly decreased in the extended model that included the NN predictions, based on the reasoning that if the NN predictions already explain the variance previously explained by these manually coded predictors, then the original predictor should explain no or less additional variance. We approximated the probability that the magnitude of the coefficient in the extended model including the NN predictor is smaller than the coefficient in the original model, $P(|\beta _i^{extended}| < |\beta _i^{original}|)$, by sampling values for each coefficient from the distributions of the original and the extended models and comparing the magnitude of the sampled coefficients. We repeated this process 1,000,000 times and treated the simulated proportions as approximate probabilities. An issue with this analysis is that estimating the regression model only on the items in the held-out test set yields very wide credible intervals for some of the predictors–in particular for some of the interactions–since the model infers these values from very little data. We therefore performed this (and all subsequent) analyses on the entire data, and obtained the NN predictions through 6-fold cross-validation, so that the NN model always made predictions on data that it had not seen during training. This did yield the same qualitative results as the analyses only performed on the held-out test items (see Appendix SECREF9) but it also provided us with narrower credible intervals that highlight the differences between the coefficient estimates of the two models. Figure FIGREF27 shows the estimates of the coefficients in the original model and the extended model. We find that the NN predictions explain some or all of the variance originally explained by many of the manually coded linguistic features: the estimated magnitude of the predictors for partitive, determiner strength, linguistic mention, subjecthood, modification, utterance length, and two of the interaction terms decreased in the extended model. These results suggest that the NN model indeed learned associations between linguistic features and inference strength rather than only explaining variance caused by individual items. This is particularly true for the grammatical and lexical features; we find that the NN predictor explains most of the variance originally explained by the partitive, subjecthood, and modification predictors. 
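A sketch of that comparison step, assuming the posterior draws for a given coefficient have already been extracted from the original and extended brms/STAN fits (the arrays below are placeholders): draws are paired at random, and the proportion of pairs in which the extended model's coefficient is smaller in magnitude approximates $P(|\beta _i^{extended}| < |\beta _i^{original}|)$.

```python
import numpy as np

def prob_magnitude_decrease(beta_original, beta_extended, n_sim=1_000_000, seed=0):
    """Approximate P(|beta_extended| < |beta_original|) from posterior samples.

    beta_original, beta_extended: 1-D arrays of posterior draws for the same
    predictor in the original and NN-extended regression models (placeholders
    for whatever the fitted brms/STAN objects return).
    """
    rng = np.random.default_rng(seed)
    orig = rng.choice(beta_original, size=n_sim, replace=True)
    ext = rng.choice(beta_extended, size=n_sim, replace=True)
    return np.mean(np.abs(ext) < np.abs(orig))

# Toy illustration with fabricated draws (not the paper's estimates):
orig_draws = np.random.normal(0.8, 0.1, size=4000)
ext_draws = np.random.normal(0.3, 0.1, size=4000)
print(prob_magnitude_decrease(orig_draws, ext_draws))  # close to 1.0 here
```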
More surprisingly, the NN predictions also explain a lot of the variance originally explained by the determiner strength predictor, which was empirically determined by probing human interpretation and is not encoded explicitly in the surface form utterance. One potential explanation for this is that strong and weak some have different context distributions. For instance, weak some occurs in existential there constructions and with individual-level predicates, whereas strong some tends not to BIBREF23, BIBREF38, BIBREF39. Since pre-trained word embedding models capture a lot of distributional information, the NN model is presumably able to learn this association. Model behavior analyses ::: Attention weight analysis. As a second type of analysis, we analyzed the attention weights that the model used for combining the token embeddings to a sentence embedding. Attention weight analyses have been successfully used for inspecting and debugging model decisions (e.g., BIBREF40, BIBREF41, BIBREF42, BIBREF43; but see BIBREF44, and BIBREF45, for critical discussions of this approach). Based on these results, we expected the model to attend more to tokens that are relevant for making predictions. Given that many of the hand-mined features that predict inference strength occur within or in the vicinity of the some-NP, we should therefore expect the model to attend most to the some-NP. To test this, we first explored whether the model attended on average more to some than to other tokens in the same position. Further, we exploited the fact that subjects generally occur at the beginning of a sentence. If the model attends to the vicinity of the some-NP, the average attention weights should be higher on early positions for utterances with a subject some-NP compared to utterances with a non-subject some-NP, and conversely for late utterance positions. We thus compared the average attention weights for each position in the utterance across utterances with subject versus non-subject some-NPs. To make sure that any effects were not only driven by the attention weight of the some-tokens, we set the attention weights of the token corresponding to some to 0 and re-normalized the attention weights for this analysis. Further, since the attention weights are dependent on the number of tokens in the utterance, it is crucial that the average utterance length across the two compared groups be matched. We addressed this by removing outliers and limiting our analysis to utterances up to length 30 (1,028 utterances), which incidentally equalized the number of tokens across the two groups. (While these exclusions resulted in tiny quantitative difference in the average attention weights, the qualitative patterns are not affected.) The left panel of Figure FIGREF30 shows the average attention weight by position for some versus other tokens. The model assigns much higher weight to some. The center panel of Figure FIGREF30 shows the average attention weight by position for subject vs. non-subject some-NP utterances. The attention weights are generally higher for tokens early in the utterance, but the attention weights of utterances with a subject some-NP are on average higher for tokens early in the utterance compared to utterances with the some-NP in non-subject positions. Both of these findings provide evidence that the model assigns high weight to the tokens within and surrounding the some-NP. 
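A sketch of the positional comparison just described, assuming per-token attention weights are available for each utterance in a simple record format (the field names are hypothetical): the weight on the some token is zeroed out, the remaining weights are renormalized, and the weights are averaged by position separately for subject and non-subject some-NP utterances.

```python
import numpy as np

def mean_attention_by_position(utterances, max_len=30):
    """utterances: list of dicts with keys 'tokens', 'weights', 'subject_some' (assumed format)."""
    sums = {True: np.zeros(max_len), False: np.zeros(max_len)}
    counts = {True: np.zeros(max_len), False: np.zeros(max_len)}
    for u in utterances:
        toks = u["tokens"][:max_len]
        w = np.array(u["weights"][:max_len], dtype=float)
        some_idx = [i for i, t in enumerate(toks) if t.lower() == "some"]
        w[some_idx] = 0.0                      # drop the weight of 'some' itself
        if w.sum() > 0:
            w = w / w.sum()                    # renormalize the remaining weights
        key = bool(u["subject_some"])          # subject vs. non-subject some-NP
        sums[key][:len(w)] += w
        counts[key][:len(w)] += 1
    return {k: sums[k] / np.maximum(counts[k], 1) for k in sums}

# avg = mean_attention_by_position(annotated_utterances)  # hypothetical input
# avg[True] and avg[False] would then be compared position by position.
```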
In a more targeted analysis to assess whether the model learned to use the partitive cue, we examined whether the model assigned higher attention to the preposition of in partitive some-NPs compared to when of occurred elsewhere. As utterance length was again a potential confound, we conducted the analysis separately on the full set of utterances with raw attention weights and on a subset that included only utterances with at least two instances of of (128 utterances), in which we renormalized the weights of of-tokens to sum to 1. Results are shown in the right panel of Figure FIGREF30. The attention weights were higher for of tokens in partitive some-NPs, suggesting that the model learned an association between partitive of in some-NPs and inference strength. Model behavior analyses ::: Minimal pair analysis. As a final analysis, we constructed artificial minimal pairs that differed along several factors of interest and compared the model predictions. Such methods have been recently used to probe what kind of syntactic dependencies different types of recurrent neural network language models are capable of encoding (e.g., BIBREF12, BIBREF13, BIBREF47, BIBREF48, BIBREF14, BIBREF15), and also allow us to probe whether the model is sensitive to controlled changes in the input. We constructed a set of 25 initial sentences with some-NPs. For each sentence, we created 32 variants that differed in the following four properties of the some-NP: subjecthood, partitive, pre-nominal modification, and post-nominal modification. For the latter three features, we either included or excluded of the or the modifier, respectively. To manipulate subjecthood of the some-NP, we created variants in which some was either the determiner in the subject NP as in (UNKREF36a) or in the object-NP as in (UNKREF36b). We also created passive versions of each of these variants (UNKREF36c-d). Each set of sentences included a unique main verb, a unique pair of NPs, and unique modifiers. The full list of sentences can be found in Appendix SECREF10. Some (of the) (organic) farmers (in the mountains) milked the brown goats who graze on the meadows. The organic farmers in the mountains milked some (of the) (brown) goats (who graze on the meadows). The brown goats who graze on the meadows were milked by some (of the) (organic) farmers (in the mountains). Some (of the) (brown) goats (who graze on the meadows) were milked by the organic farmers in the mountains. Figure FIGREF41 shows the model predictions for the manually constructed sentences grouped by the presence of a partitive construction, the grammatical function of the some-NP, and the presence of a modifier. As in the natural dataset from BIBREF22, sentences with a partitive received higher predicted ratings than sentences without a partitive; sentences with subject some-NPs received higher predicted ratings than sentences with non-subject some-NPs; and sentences with a modified head noun in the some-NP received lower predictions than sentences with an unmodified some-NP. All these results provide additional evidence that the model learned the correct associations. This is particularly remarkable considering the train-test mismatch: the model was trained on noisy transcripts of spoken language that contained many disfluencies and repairs, and was subsequently tested on clean written sentences. 
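The 32 variants per item can be generated mechanically. Below is a sketch built around one hypothetical item modeled on the farmers/goats example (the real items are listed in Appendix SECREF10): the four frames (subject vs. object some-NP, each in an active and a passive version) are crossed with the presence or absence of the partitive, a pre-nominal modifier, and a post-nominal modifier.

```python
from itertools import product

# Each template marks the some-NP with {part} (partitive), {pre} (pre-nominal
# modifier), and {post} (post-nominal modifier); modifiers belong to that NP.
FRAMES = [
    ("subject some-NP, active",
     "Some {part}{pre}farmers{post} milked the brown goats.", "organic", "in the mountains"),
    ("object some-NP, active",
     "The organic farmers milked some {part}{pre}goats{post}.", "brown", "who graze on the meadows"),
    ("some-NP in by-phrase, passive",
     "The brown goats were milked by some {part}{pre}farmers{post}.", "organic", "in the mountains"),
    ("subject some-NP, passive",
     "Some {part}{pre}goats{post} were milked by the organic farmers.", "brown", "who graze on the meadows"),
]

def make_variants():
    sents = []
    for _, tmpl, premod, postmod in FRAMES:
        for part, pre, post in product([True, False], repeat=3):
            sents.append(tmpl.format(part="of the " if part else "",
                                     pre=premod + " " if pre else "",
                                     post=" " + postmod if post else ""))
    return sents  # 4 frames x 2^3 toggles = 32 variants per item

assert len(make_variants()) == 32
```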
Context, revisited In the tuning experiments above, we found that including the preceding conversational context in the input to the model did not improve or lowered prediction accuracy. At the same time, we found that the model is capable of making accurate predictions in most cases without taking the preceding context into account. Taken together, these results suggest either that the conversational context is not necessary and one can draw inferences from the target utterance alone, or that the model does not make adequate use of the preceding context. BIBREF22 did not systematically investigate whether the preceding conversational context was used by participants judging inference strength. To assess the extent to which the preceding context in this dataset affects inference strength, we re-ran her experiment, but without presenting participants with the preceding conversational context. If the context is irrelevant for drawing inferences, then mean inference strength ratings should be very similar across the two experiments, suggesting the model may have rightly learned to not utilize the context. If the presence of context affects inference strength, ratings should differ across experiments, suggesting that the model's method of integrating context is ill-suited to the task. The new, no-context ratings correlated with the original ratings ($r=0.68$, see Appendix SECREF11) but were overall more concentrated towards the center of the scale, suggesting that in many cases, participants who lacked information about the conversational context were unsure about the strength of the scalar inference. Since the original dataset exhibited more of a bi-modal distribution with fewer ratings at the center of the scale, this suggests that the broader conversational context contains important cues to scalar inferences. For our model, these results suggest that the representation of the conversational context is inadequate, which highlights the need for more sophisticated representations of linguistic contexts beyond the target utterance. We further find that the model trained on the original dataset is worse at predicting the no-context ratings ($r=0.66$) than the original ratings ($r=0.78$), which is not surprising considering the imperfect correlation between ratings across experiments, but also provides additional evidence that participants indeed behaved differently in the two experiments. Conclusion and future work We showed that neural network-based sentence encoders are capable of harnessing the linguistic signal to learn to predict human inference strength ratings from some to not all with high accuracy. Further, several model behavior analyses provided consistent evidence that the model learned associations between previously established linguistic features and the strength of scalar inferences. Taken together, these results suggest that it is possible to learn associations between linguistic features and scalar inferences from statistical input consisting of a relatively small set of utterances. In an analysis of the contribution of the conversational context, we found that humans make use of the preceding context whereas the models we considered failed to do so adequately. Considering the importance of context in drawing both scalar and other inferences in communication BIBREF0, BIBREF51, BIBREF52, BIBREF53, BIBREF54, BIBREF6, BIBREF7, the development of appropriate representations of larger context is an exciting avenue for future research. 
One further interesting line of research would be to extend this work to other pragmatic inferences. Recent experimental work has shown that inference strength is variable across scale and inference type BIBREF55, BIBREF56. We treated some as a case study in this work, but none of our modeling decisions are specific to some. It would be straightforward to train similar models for other types of inferences. Lastly, the fact that the attention weights provided insights into the model's decisions suggests possibilities for using neural network models for developing more precise theories of pragmatic language use. Our goal here was to investigate whether neural networks can learn associations for already established linguistic cues but it would be equally interesting to investigate whether such models could be used to discover new cues, which could then be verified in experimental and corpus work, potentially providing a novel model-driven approach to experimental and formal pragmatics. Hyperparameter tuning Figure FIGREF44 shows the learning curves averaged over the 5 cross-validation tuning runs for models using different word embeddings. As these plots show, the attention layer improves predictions; contextual word embeddings lead to better results than the static GloVe embeddings; and including the conversational context does not improve predictions and in some cases even lowers prediction accuracy. Regression analysis on held-out test data Figure FIGREF45 shows the estimates of the predictors in the original and extended Bayesian mixed-effects models estimated only on the held-out test data. We find the same qualitative effects as in Figure FIGREF27, but since these models were estimated on much less data (only 408 items), there is a lot of uncertainty in the estimates and therefore quantitative comparisons between the coefficients of the different models are less informative. List of manually constructed sentences Tables TABREF46 and TABREF47 show the 25 manually created sentences for the analyses described in the minimal pairs analysis in Section SECREF5. As described in the main text, we created 16 variants of the sentence with the some-NP in subject position (sentences in the left column), and 16 variants of the sentence with the some-NP in object position (sentences in the right column), yielding in total 800 examples. Results from no-context experiment Figure FIGREF48 shows the correlation between the mean inference strength ratings for each item in the experiment from BIBREF22 and the mean strength ratings from the new no-context experiment, discussed in Section SECREF6.
the annotated dataset reported by degen2015investigating, a dataset of the utterances from the Switchboard corpus of telephone dialogues BIBREF21 that contain the word some
6b4de7fef3a543215f16042ce6a29186bf84fea4
6b4de7fef3a543215f16042ce6a29186bf84fea4_0
Q: What pre-trained models did they compare to? Text: Introduction Pre-trained language Models (PLM) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on. Generally, most of PLMs focus on using attention mechanism BIBREF4 to represent the natural language, such as word-level attention for English and character-level attention for Chinese. Unlike English, in Chinese, words are not separated by explicit delimiters, which means that character is the smallest linguistic unit. However, in most cases, the semantic of single Chinese character is ambiguous. UTF8gbsn For example, in Table 1, using the attention over word 西山 is more intuitive than over the two individual characters 西 and 山. Moreover, previous work has shown that considering the word segmentation information can lead to better language understanding and accordingly benefits various Chines NLP tasks BIBREF5, BIBREF6, BIBREF7. All these factors motivate us to expand the character-level attention mechanism in Chinese PLM to represent attention over words . To this end, there are two main challenges. (1) How to seamlessly integrate word segmentation information into character-level attention module of PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by automatic segmentation tools BIBREF8 is another challenge. In this paper, we propose a new architecture, named Multi-source Word Alignd Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments BIBREF9, BIBREF10 have shown that readers are likely to pay approximate attention to each character in one Chinese word. Drawing inspiration from such finding, we introduce a novel word-aligned attention, which could aggregate attention weight of characters in one word into a unified value with the mixed pooling strategy BIBREF11. (2) For reducing segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters, and deploy a fusion function to pull together their disparate output. In this way, we can implicitly reduce the error caused by automatic annotation. Extensive experiments are conducted on various Chinese NLP datasets including named entity recognition, sentiment classification, sentence pair matching, natural language inference, etc. The results show that the proposed model brings another gain over BERT BIBREF1, ERNIE BIBREF2 and BERT-wwm BIBREF12, BIBREF13 in all the tasks. Methodology ::: Character-level Pre-trained Encoder The primary goal of this work is to inject the word segmentation knowledge into character-level Chinese PLM and enhance original models. Given the strong performance of recent deep transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder for our work, and the outputs $\mathbf {H}$ from the last layer of encoder are treated as the enriched contextual representations. Methodology ::: Word-aligned Attention Although the character-level PLM can well capture language knowledge from text, it neglects the semantic information expressed in the word level. 
Therefore we apply a word-aligned layer on top of the encoder to integrate the word boundary information into representation of character with the attention aggregation mechanism. For an input sequence with with $n$ characters $S=[c_1, c_2, ... , c_n]$, where $c_j$ denotes the $j$-th character, Chinese words segmentation tool $\pi $ is used to partition $S$ into non-overlapping word blocks: where $w_i = \lbrace c_s, c_{s+1}, ..., c_{s+l-1}\rbrace $ is the $i$-th segmented word of length $l$ and $s$ is the index of $w_i$'s first character in $S$. We apply the self-attention with the representations of all input characters to get the character-level attention score matrix $\textbf {A}_c \in \mathbb {R}^{n \times n}$. It can be formulated as: where $\textbf {Q}$ and $\textbf {K}$ are both equal to the collective representation $\textbf {H}$ at the last layer of the Chinese PLM, $\textbf {W}_k \in \mathbb {R}^{d\times d}$ and $\textbf {W}_q \in \mathbb {R}^{d\times d}$ are trainable parameters for projection. While $\textbf {A}_c$ models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atoms in the attention can better represent the semantics, as the literal meaning of each individual characters can be quite different from the implied meaning of the whole word, and the simple weighted sum in character-level cannot capture the semantic interaction between words. To this end, we propose to align $\textbf {A}_c$ in the word level and integrate the inner-word attention. For the sake of simplicity, we rewrite $\textbf {A}_c$ as $[\textbf {a}_c^1, \textbf {a}_c^2, ... ,\textbf {a}_c^n]$, where $\textbf {a}_c^i \in \mathbb {R}^n $ denotes the $i$-th row vector of $\textbf {A}_c$ and the attention score vector of the $i$-th character. Then we deploy $\pi $ to segment $\textbf {A}_c$ according to $\pi (S)$. For example, if $\pi (S) = [\lbrace c_1, c_2\rbrace , \lbrace c_3\rbrace , ...,\lbrace c_{n-1}, c_{n}\rbrace ]$, then In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention. Concretely, we first transform $\lbrace \textbf {a}_c^s,..., \textbf {a}_c^{s+l-1}\rbrace $ into one attention vector $\textbf {a}_w^i$ for $w_i$ with the mixed pooling strategy BIBREF11. Then we execute the piecewise up- mpling operation over each $\textbf {a}_w^i$ to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as follows: where $\lambda \in R^1 $ is a weighting trainable variable to balance the mean and max pooling, $\textbf {e}_l=[1,...,1]^T$ represents a $l$-dimensional all-ones vector, $l$ is the length of word $w_i$, $\textbf {e}_l \otimes \textbf {a}_w^i=[\textbf {a}_w^i,...,\textbf {a}_w^i]$ denotes the kronecker product operation between $\textbf {e}_l$ and $\textbf {a}_w^i$, $\hat{\textbf {A}}_c \in \mathbb {R}^{n \times n}$ is the aligned attention matrix. The Eq. (DISPLAY_FORM9-) can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity. 
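A minimal sketch of that word-aligned aggregation for a single attention row (a plain loop over word spans rather than the batched implementation, and with the mean/max mixing written as a simple convex combination, which is an assumption about the exact mixed-pooling form): each inner-word slice of the character-level attention vector is pooled into one value and then upsampled back over the word's character positions, so the output keeps the original length $n$.

```python
import torch

def word_aligned_row(a_c_row, word_spans, lam=0.5):
    """Align one row of A_c to the word boundaries produced by a CWS tool.

    a_c_row:    (n,) attention scores of one character over all n characters
    word_spans: list of (start, length) pairs describing pi(S)
    lam:        mixing weight between mean and max pooling (trainable in the
                model; the exact mixed-pooling parameterization is assumed here)
    """
    pieces = []
    for s, l in word_spans:
        block = a_c_row[s:s + l]                               # inner-word attention slice
        pooled = lam * block.mean() + (1 - lam) * block.max()  # mixed pooling
        pieces.append(pooled.expand(l))                        # piecewise upsampling to length l
    return torch.cat(pieces)                                   # aligned row, still length n

# Toy example: n = 5 characters segmented as {c1 c2}, {c3}, {c4 c5}.
row = torch.tensor([0.10, 0.30, 0.20, 0.25, 0.15])
spans = [(0, 2), (2, 1), (3, 2)]
print(word_aligned_row(row, spans))
```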
Finally, we get the enhanced character representation produced by word-aligned attention: where $\textbf {V} = \textbf {H}$, $\textbf {W}_v \in \mathbb {R}^{d\times d}$ is a trainable projection matrix. Besides, we also use multi-head attention BIBREF4 to capture information from different representation subspaces jointly, thus we have $K$ different aligned attention matrices $\hat{\textbf {A}}_c^k (1\le k\le K)$ and corresponding output $\hat{\textbf {H}}^k$. With multi-head attention architecture, the output can be expressed as follows: Methodology ::: Multi-source Word-aligned Attention As mentioned in Section SECREF1, our proposed word-aligned attention relies on the segmentation results of CWS tool $\pi $. Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance. In practice, The ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different $\pi $ may provide diverse $\pi (S)$ with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it’s natural to enhance the word-aligned attention layer with multi-source segmentation input. Formally, assume that there are $M$ popular CWS tools employed, we can obtain $M$ different representations $\overline{\textbf {H}}^1, ..., \overline{\textbf {H}}^M $ by Eq. DISPLAY_FORM11. Then we propose to fuse these semantically different representations as follows: where $\textbf {W}_g$ is the parameter matrix and $\tilde{\textbf {H}}$ is the final output of the MWA attention layer. Experiments ::: Experiments Setup To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words. We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix. Experiments ::: Experiment Results Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that significant difference ($p< 0.01$) exits between our model with baseline models. In detail, On the EC task, we observe 1.46% absolute improvement in F1 score over ERINE. SPM and NLI tasks can also gain benefits from our enhanced representation, achieving an absolute F1 increase of 0.68% and 0.55% over original models averagely. 
For the NER task, our method improves the performance of BERT by 1.54% and obtains an average improvement of 1.23% over all baselines. We attribute this sizable gain in NER to the nature of the task. Intuitively, Chinese NER is closely tied to word segmentation, and named entity boundaries are also word boundaries. Thus the boundary information supplied by the additional segmentation input provides better guidance for labeling each character, which is consistent with the conclusions in BIBREF6, BIBREF7. Experiments ::: Ablation Study To demonstrate the effectiveness of our multi-source fusion method in reducing the segmentation error introduced by CWS tools, we further carry out experiments on the EC task with different segmentation inputs. Table TABREF16 presents the results for the three segmentation inputs produced by the three CWS tools mentioned above. The results show that our model gives quite stable improvements regardless of the quality of the segmentation input. This again suggests the effectiveness of incorporating word segmentation information into character-level PLMs, and shows that employing multiple segmenters and fusing their outputs introduces richer segmentation information and reduces the impact of the segmentation errors that inevitably occur. Conclusion In this paper, we propose an effective architecture, Word-aligned Attention, that incorporates word segmentation information into character-based pre-trained language models and can be added to a variety of downstream NLP tasks as an extra layer during fine-tuning. We also employ multiple segmenters via the proposed Multi-source Word-aligned Attention to reduce segmentation error. The experimental results show the effectiveness of our method: compared to BERT, ERNIE and BERT-wwm, our model obtains substantial improvements on various NLP benchmarks. Although we mainly focus on Chinese PLMs in this paper, Word-aligned Attention could also be applied to word-piece tokenization in English NLP tasks. We are also considering applying this mechanism during pre-training, at different granularities, to capture multi-level language features.
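Since the fusion equation itself is elided above (only $\textbf {W}_g$ and the output $\tilde{\textbf {H}}$ are named), the following sketch shows one plausible reading, a learned gate that scores each segmenter-specific representation and combines the $M$ outputs as a weighted sum; the exact parameterization is a guess, not the authors' formula.

```python
import torch
import torch.nn as nn

class MultiSourceFusion(nn.Module):
    """Assumed form of the multi-source fusion: a single projection (stand-in for
    W_g) scores each segmenter-specific representation H^1..H^M, the scores are
    normalized across sources, and the fused output is the weighted sum."""
    def __init__(self, d_model):
        super().__init__()
        self.w_g = nn.Linear(d_model, 1, bias=False)

    def forward(self, reps):
        # reps: (M, batch, seq_len, d) -- one tensor per CWS tool, stacked
        gates = self.w_g(reps).squeeze(-1)              # (M, batch, seq_len)
        gates = torch.softmax(gates, dim=0)             # normalize across the M sources
        return (gates.unsqueeze(-1) * reps).sum(dim=0)  # fused representation H~

# Toy usage with M = 3 segmenters (e.g., thulac, ictclas, hanlp) and d = 768:
fusion = MultiSourceFusion(d_model=768)
reps = torch.randn(3, 2, 16, 768)
fused = fusion(reps)   # (2, 16, 768)
```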
BERT, ERNIE, and BERT-wwm
3a62dd5fece70f8bf876dcbb131223682e3c54b7
3a62dd5fece70f8bf876dcbb131223682e3c54b7_0
Q: How does the fusion method work? Text: Introduction Pre-trained language Models (PLM) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on. Generally, most of PLMs focus on using attention mechanism BIBREF4 to represent the natural language, such as word-level attention for English and character-level attention for Chinese. Unlike English, in Chinese, words are not separated by explicit delimiters, which means that character is the smallest linguistic unit. However, in most cases, the semantic of single Chinese character is ambiguous. UTF8gbsn For example, in Table 1, using the attention over word 西山 is more intuitive than over the two individual characters 西 and 山. Moreover, previous work has shown that considering the word segmentation information can lead to better language understanding and accordingly benefits various Chines NLP tasks BIBREF5, BIBREF6, BIBREF7. All these factors motivate us to expand the character-level attention mechanism in Chinese PLM to represent attention over words . To this end, there are two main challenges. (1) How to seamlessly integrate word segmentation information into character-level attention module of PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by automatic segmentation tools BIBREF8 is another challenge. In this paper, we propose a new architecture, named Multi-source Word Alignd Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments BIBREF9, BIBREF10 have shown that readers are likely to pay approximate attention to each character in one Chinese word. Drawing inspiration from such finding, we introduce a novel word-aligned attention, which could aggregate attention weight of characters in one word into a unified value with the mixed pooling strategy BIBREF11. (2) For reducing segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters, and deploy a fusion function to pull together their disparate output. In this way, we can implicitly reduce the error caused by automatic annotation. Extensive experiments are conducted on various Chinese NLP datasets including named entity recognition, sentiment classification, sentence pair matching, natural language inference, etc. The results show that the proposed model brings another gain over BERT BIBREF1, ERNIE BIBREF2 and BERT-wwm BIBREF12, BIBREF13 in all the tasks. Methodology ::: Character-level Pre-trained Encoder The primary goal of this work is to inject the word segmentation knowledge into character-level Chinese PLM and enhance original models. Given the strong performance of recent deep transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder for our work, and the outputs $\mathbf {H}$ from the last layer of encoder are treated as the enriched contextual representations. Methodology ::: Word-aligned Attention Although the character-level PLM can well capture language knowledge from text, it neglects the semantic information expressed in the word level. 
Therefore we apply a word-aligned layer on top of the encoder to integrate the word boundary information into representation of character with the attention aggregation mechanism. For an input sequence with with $n$ characters $S=[c_1, c_2, ... , c_n]$, where $c_j$ denotes the $j$-th character, Chinese words segmentation tool $\pi $ is used to partition $S$ into non-overlapping word blocks: where $w_i = \lbrace c_s, c_{s+1}, ..., c_{s+l-1}\rbrace $ is the $i$-th segmented word of length $l$ and $s$ is the index of $w_i$'s first character in $S$. We apply the self-attention with the representations of all input characters to get the character-level attention score matrix $\textbf {A}_c \in \mathbb {R}^{n \times n}$. It can be formulated as: where $\textbf {Q}$ and $\textbf {K}$ are both equal to the collective representation $\textbf {H}$ at the last layer of the Chinese PLM, $\textbf {W}_k \in \mathbb {R}^{d\times d}$ and $\textbf {W}_q \in \mathbb {R}^{d\times d}$ are trainable parameters for projection. While $\textbf {A}_c$ models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atoms in the attention can better represent the semantics, as the literal meaning of each individual characters can be quite different from the implied meaning of the whole word, and the simple weighted sum in character-level cannot capture the semantic interaction between words. To this end, we propose to align $\textbf {A}_c$ in the word level and integrate the inner-word attention. For the sake of simplicity, we rewrite $\textbf {A}_c$ as $[\textbf {a}_c^1, \textbf {a}_c^2, ... ,\textbf {a}_c^n]$, where $\textbf {a}_c^i \in \mathbb {R}^n $ denotes the $i$-th row vector of $\textbf {A}_c$ and the attention score vector of the $i$-th character. Then we deploy $\pi $ to segment $\textbf {A}_c$ according to $\pi (S)$. For example, if $\pi (S) = [\lbrace c_1, c_2\rbrace , \lbrace c_3\rbrace , ...,\lbrace c_{n-1}, c_{n}\rbrace ]$, then In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention. Concretely, we first transform $\lbrace \textbf {a}_c^s,..., \textbf {a}_c^{s+l-1}\rbrace $ into one attention vector $\textbf {a}_w^i$ for $w_i$ with the mixed pooling strategy BIBREF11. Then we execute the piecewise up- mpling operation over each $\textbf {a}_w^i$ to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as follows: where $\lambda \in R^1 $ is a weighting trainable variable to balance the mean and max pooling, $\textbf {e}_l=[1,...,1]^T$ represents a $l$-dimensional all-ones vector, $l$ is the length of word $w_i$, $\textbf {e}_l \otimes \textbf {a}_w^i=[\textbf {a}_w^i,...,\textbf {a}_w^i]$ denotes the kronecker product operation between $\textbf {e}_l$ and $\textbf {a}_w^i$, $\hat{\textbf {A}}_c \in \mathbb {R}^{n \times n}$ is the aligned attention matrix. The Eq. (DISPLAY_FORM9-) can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity. 
Finally, we get the enhanced character representation produced by word-aligned attention: where $\textbf {V} = \textbf {H}$, $\textbf {W}_v \in \mathbb {R}^{d\times d}$ is a trainable projection matrix. Besides, we also use multi-head attention BIBREF4 to capture information from different representation subspaces jointly, thus we have $K$ different aligned attention matrices $\hat{\textbf {A}}_c^k (1\le k\le K)$ and corresponding output $\hat{\textbf {H}}^k$. With multi-head attention architecture, the output can be expressed as follows: Methodology ::: Multi-source Word-aligned Attention As mentioned in Section SECREF1, our proposed word-aligned attention relies on the segmentation results of CWS tool $\pi $. Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance. In practice, The ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different $\pi $ may provide diverse $\pi (S)$ with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it’s natural to enhance the word-aligned attention layer with multi-source segmentation input. Formally, assume that there are $M$ popular CWS tools employed, we can obtain $M$ different representations $\overline{\textbf {H}}^1, ..., \overline{\textbf {H}}^M $ by Eq. DISPLAY_FORM11. Then we propose to fuse these semantically different representations as follows: where $\textbf {W}_g$ is the parameter matrix and $\tilde{\textbf {H}}$ is the final output of the MWA attention layer. Experiments ::: Experiments Setup To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words. We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix. Experiments ::: Experiment Results Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that significant difference ($p< 0.01$) exits between our model with baseline models. In detail, On the EC task, we observe 1.46% absolute improvement in F1 score over ERINE. SPM and NLI tasks can also gain benefits from our enhanced representation, achieving an absolute F1 increase of 0.68% and 0.55% over original models averagely. 
For the NER task, our method improves the performance of BERT by 1.54%, and obtains 1.23% improvement averagely over all baselines. We attribute such significant gain in NER to the particularity of this task. Intuitively, Chinese NER is correlated with word segmentation, and named entity boundaries are also word boundaries. Thus the potential boundary information presented by the additional segmentation input can provide a better guidance to label each character, which is consistent with the conclusion in BIBREF6, BIBREF7. Experiments ::: Ablation Study To demonstrate the effectiveness of our multi-source fusion method in reducing the segmentation error introduced by CWS tools, We further carry out experiments on the EC task with different segmentation inputs. Table TABREF16 presents the comprehensive results on the three segmentation inputs produced by three CWS tools aforementioned. Experimental results show that our model gives quite stable improvement no matter the segmentation input quality. This again suggests the effectiveness of incorporating word segmentation information into character-level PLMs. And by employing multiple segmenters and fusing them together could introduce richer segmentation information and reduce the impact of general existent segmentation error. Conclusion In this paper, we propose an effective architecture Word-aligned Attention to incorporate word segmentation information into character-based pre-trained language models, which is adopted to a variety of downstream NLP tasks as an extend layer in fine-tuned process. And we also employ more segmenters into via proposed Multi-source Word-aligned Attention for reducing segmentation error. The experimental results show the effectiveness of our method. Comparing to BERT, ERNIE and BERT-wwm, our model obtains substantial improvements on various NLP benchmarks. Although we mainly focused on Chinese PLM in this paper, our model would take advantage the capabilities of Word-aligned Attention for word-piece in English NLP task. We are also considering applying this model into pre-training language model for various Language Model task in different grain to capture multi-level language features.
An attention vector sequence is segmented into several subsequences, each representing the attention of one word; an appropriate aggregation module then fuses the inner-word character attention.
34fab25d9ceb9c5942daf4ebdab6c5dd4ff9d3db
34fab25d9ceb9c5942daf4ebdab6c5dd4ff9d3db_0
Q: What dataset did they use? Text: Introduction Pre-trained language Models (PLM) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on. Generally, most of PLMs focus on using attention mechanism BIBREF4 to represent the natural language, such as word-level attention for English and character-level attention for Chinese. Unlike English, in Chinese, words are not separated by explicit delimiters, which means that character is the smallest linguistic unit. However, in most cases, the semantic of single Chinese character is ambiguous. UTF8gbsn For example, in Table 1, using the attention over word 西山 is more intuitive than over the two individual characters 西 and 山. Moreover, previous work has shown that considering the word segmentation information can lead to better language understanding and accordingly benefits various Chines NLP tasks BIBREF5, BIBREF6, BIBREF7. All these factors motivate us to expand the character-level attention mechanism in Chinese PLM to represent attention over words . To this end, there are two main challenges. (1) How to seamlessly integrate word segmentation information into character-level attention module of PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by automatic segmentation tools BIBREF8 is another challenge. In this paper, we propose a new architecture, named Multi-source Word Alignd Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments BIBREF9, BIBREF10 have shown that readers are likely to pay approximate attention to each character in one Chinese word. Drawing inspiration from such finding, we introduce a novel word-aligned attention, which could aggregate attention weight of characters in one word into a unified value with the mixed pooling strategy BIBREF11. (2) For reducing segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters, and deploy a fusion function to pull together their disparate output. In this way, we can implicitly reduce the error caused by automatic annotation. Extensive experiments are conducted on various Chinese NLP datasets including named entity recognition, sentiment classification, sentence pair matching, natural language inference, etc. The results show that the proposed model brings another gain over BERT BIBREF1, ERNIE BIBREF2 and BERT-wwm BIBREF12, BIBREF13 in all the tasks. Methodology ::: Character-level Pre-trained Encoder The primary goal of this work is to inject the word segmentation knowledge into character-level Chinese PLM and enhance original models. Given the strong performance of recent deep transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder for our work, and the outputs $\mathbf {H}$ from the last layer of encoder are treated as the enriched contextual representations. Methodology ::: Word-aligned Attention Although the character-level PLM can well capture language knowledge from text, it neglects the semantic information expressed in the word level. 
Therefore we apply a word-aligned layer on top of the encoder to integrate the word boundary information into representation of character with the attention aggregation mechanism. For an input sequence with with $n$ characters $S=[c_1, c_2, ... , c_n]$, where $c_j$ denotes the $j$-th character, Chinese words segmentation tool $\pi $ is used to partition $S$ into non-overlapping word blocks: where $w_i = \lbrace c_s, c_{s+1}, ..., c_{s+l-1}\rbrace $ is the $i$-th segmented word of length $l$ and $s$ is the index of $w_i$'s first character in $S$. We apply the self-attention with the representations of all input characters to get the character-level attention score matrix $\textbf {A}_c \in \mathbb {R}^{n \times n}$. It can be formulated as: where $\textbf {Q}$ and $\textbf {K}$ are both equal to the collective representation $\textbf {H}$ at the last layer of the Chinese PLM, $\textbf {W}_k \in \mathbb {R}^{d\times d}$ and $\textbf {W}_q \in \mathbb {R}^{d\times d}$ are trainable parameters for projection. While $\textbf {A}_c$ models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atoms in the attention can better represent the semantics, as the literal meaning of each individual characters can be quite different from the implied meaning of the whole word, and the simple weighted sum in character-level cannot capture the semantic interaction between words. To this end, we propose to align $\textbf {A}_c$ in the word level and integrate the inner-word attention. For the sake of simplicity, we rewrite $\textbf {A}_c$ as $[\textbf {a}_c^1, \textbf {a}_c^2, ... ,\textbf {a}_c^n]$, where $\textbf {a}_c^i \in \mathbb {R}^n $ denotes the $i$-th row vector of $\textbf {A}_c$ and the attention score vector of the $i$-th character. Then we deploy $\pi $ to segment $\textbf {A}_c$ according to $\pi (S)$. For example, if $\pi (S) = [\lbrace c_1, c_2\rbrace , \lbrace c_3\rbrace , ...,\lbrace c_{n-1}, c_{n}\rbrace ]$, then In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention. Concretely, we first transform $\lbrace \textbf {a}_c^s,..., \textbf {a}_c^{s+l-1}\rbrace $ into one attention vector $\textbf {a}_w^i$ for $w_i$ with the mixed pooling strategy BIBREF11. Then we execute the piecewise up- mpling operation over each $\textbf {a}_w^i$ to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as follows: where $\lambda \in R^1 $ is a weighting trainable variable to balance the mean and max pooling, $\textbf {e}_l=[1,...,1]^T$ represents a $l$-dimensional all-ones vector, $l$ is the length of word $w_i$, $\textbf {e}_l \otimes \textbf {a}_w^i=[\textbf {a}_w^i,...,\textbf {a}_w^i]$ denotes the kronecker product operation between $\textbf {e}_l$ and $\textbf {a}_w^i$, $\hat{\textbf {A}}_c \in \mathbb {R}^{n \times n}$ is the aligned attention matrix. The Eq. (DISPLAY_FORM9-) can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity. 
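The aggregation step described above can be made concrete with a short sketch. The NumPy snippet below takes a character-level attention matrix A_c (assumed to come from the usual scaled dot-product self-attention) and a word segmentation given as (start, length) spans, applies mixed mean/max pooling inside each word, and repeats the pooled vector over the word's characters (the e_l ⊗ a_w upsampling). The function name, the span format, and the fixed scalar lam standing in for the trainable λ are illustrative choices, not the authors' implementation.

```python
import numpy as np

def word_aligned_attention(A_c, word_spans, lam=0.5):
    """Align a character-level attention matrix to word boundaries.

    A_c        : (n, n) character-level attention score matrix.
    word_spans : list of (start, length) pairs, one per segmented word,
                 covering all n characters without overlap.
    lam        : fixed mixing weight standing in for the trainable lambda.
    """
    A_hat = np.empty_like(A_c)
    for s, l in word_spans:
        block = A_c[s:s + l]                                  # inner-word attention rows
        a_w = lam * block.mean(axis=0) + (1 - lam) * block.max(axis=0)  # mixed pooling
        A_hat[s:s + l] = np.tile(a_w, (l, 1))                 # piecewise upsampling (e_l ⊗ a_w)
    return A_hat

# toy usage: 5 characters segmented as [c1 c2][c3][c4 c5]
A_c = np.random.rand(5, 5)
A_c = A_c / A_c.sum(axis=1, keepdims=True)                    # row-normalised scores
A_hat = word_aligned_attention(A_c, [(0, 2), (2, 1), (3, 2)])
print(A_hat.shape)  # (5, 5)
```

Broadcasting each pooled row back over the characters of its word keeps the matrix at n × n, which is what allows the layer to be dropped into an existing character-level PLM without changing downstream shapes, i.e. the "plug and play" property mentioned above.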
Finally, we get the enhanced character representation produced by word-aligned attention: where $\textbf {V} = \textbf {H}$, $\textbf {W}_v \in \mathbb {R}^{d\times d}$ is a trainable projection matrix. Besides, we also use multi-head attention BIBREF4 to capture information from different representation subspaces jointly, thus we have $K$ different aligned attention matrices $\hat{\textbf {A}}_c^k (1\le k\le K)$ and corresponding output $\hat{\textbf {H}}^k$. With multi-head attention architecture, the output can be expressed as follows: Methodology ::: Multi-source Word-aligned Attention As mentioned in Section SECREF1, our proposed word-aligned attention relies on the segmentation results of CWS tool $\pi $. Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance. In practice, The ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different $\pi $ may provide diverse $\pi (S)$ with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it’s natural to enhance the word-aligned attention layer with multi-source segmentation input. Formally, assume that there are $M$ popular CWS tools employed, we can obtain $M$ different representations $\overline{\textbf {H}}^1, ..., \overline{\textbf {H}}^M $ by Eq. DISPLAY_FORM11. Then we propose to fuse these semantically different representations as follows: where $\textbf {W}_g$ is the parameter matrix and $\tilde{\textbf {H}}$ is the final output of the MWA attention layer. Experiments ::: Experiments Setup To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words. We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix. Experiments ::: Experiment Results Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that significant difference ($p< 0.01$) exits between our model with baseline models. In detail, On the EC task, we observe 1.46% absolute improvement in F1 score over ERINE. SPM and NLI tasks can also gain benefits from our enhanced representation, achieving an absolute F1 increase of 0.68% and 0.55% over original models averagely. 
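The multi-source fusion equation itself is not reproduced in the text above (only the parameter matrix W_g is named), so the PyTorch sketch below should be read as one plausible instantiation rather than the authors' layer: a learned per-position gate scores each segmenter-specific representation and the outputs are combined by a softmax-weighted sum. The class and argument names are hypothetical.

```python
import torch
import torch.nn as nn

class MultiSourceFusion(nn.Module):
    """Illustrative fusion of M segmenter-specific representations."""
    def __init__(self, d_model):
        super().__init__()
        self.w_g = nn.Linear(d_model, 1)   # hypothetical gating parameters

    def forward(self, reps):
        # reps: list of M tensors, each of shape (batch, seq_len, d_model)
        H = torch.stack(reps, dim=2)                  # (batch, seq_len, M, d)
        scores = self.w_g(H).squeeze(-1)              # (batch, seq_len, M)
        alpha = torch.softmax(scores, dim=-1)         # per-position source weights
        return (alpha.unsqueeze(-1) * H).sum(dim=2)   # (batch, seq_len, d)

# toy usage with M = 3 segmenters (e.g. thulac, ictclas, hanlp)
fusion = MultiSourceFusion(d_model=768)
reps = [torch.randn(2, 16, 768) for _ in range(3)]
print(fusion(reps).shape)  # torch.Size([2, 16, 768])
```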
For the NER task, our method improves the performance of BERT by 1.54%, and obtains 1.23% improvement averagely over all baselines. We attribute such significant gain in NER to the particularity of this task. Intuitively, Chinese NER is correlated with word segmentation, and named entity boundaries are also word boundaries. Thus the potential boundary information presented by the additional segmentation input can provide a better guidance to label each character, which is consistent with the conclusion in BIBREF6, BIBREF7. Experiments ::: Ablation Study To demonstrate the effectiveness of our multi-source fusion method in reducing the segmentation error introduced by CWS tools, We further carry out experiments on the EC task with different segmentation inputs. Table TABREF16 presents the comprehensive results on the three segmentation inputs produced by three CWS tools aforementioned. Experimental results show that our model gives quite stable improvement no matter the segmentation input quality. This again suggests the effectiveness of incorporating word segmentation information into character-level PLMs. And by employing multiple segmenters and fusing them together could introduce richer segmentation information and reduce the impact of general existent segmentation error. Conclusion In this paper, we propose an effective architecture Word-aligned Attention to incorporate word segmentation information into character-based pre-trained language models, which is adopted to a variety of downstream NLP tasks as an extend layer in fine-tuned process. And we also employ more segmenters into via proposed Multi-source Word-aligned Attention for reducing segmentation error. The experimental results show the effectiveness of our method. Comparing to BERT, ERNIE and BERT-wwm, our model obtains substantial improvements on various NLP benchmarks. Although we mainly focused on Chinese PLM in this paper, our model would take advantage the capabilities of Word-aligned Attention for word-piece in English NLP task. We are also considering applying this model into pre-training language model for various Language Model task in different grain to capture multi-level language features.
weibo-100k, Ontonotes, LCQMC and XNLI
2c20426c003f7e3053f8e6c333f8bb744f6f31f8
2c20426c003f7e3053f8e6c333f8bb744f6f31f8_0
Q: What benchmarks did they experiment on? Text: Introduction Pre-trained language Models (PLM) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on. Generally, most of PLMs focus on using attention mechanism BIBREF4 to represent the natural language, such as word-level attention for English and character-level attention for Chinese. Unlike English, in Chinese, words are not separated by explicit delimiters, which means that character is the smallest linguistic unit. However, in most cases, the semantic of single Chinese character is ambiguous. UTF8gbsn For example, in Table 1, using the attention over word 西山 is more intuitive than over the two individual characters 西 and 山. Moreover, previous work has shown that considering the word segmentation information can lead to better language understanding and accordingly benefits various Chines NLP tasks BIBREF5, BIBREF6, BIBREF7. All these factors motivate us to expand the character-level attention mechanism in Chinese PLM to represent attention over words . To this end, there are two main challenges. (1) How to seamlessly integrate word segmentation information into character-level attention module of PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by automatic segmentation tools BIBREF8 is another challenge. In this paper, we propose a new architecture, named Multi-source Word Alignd Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments BIBREF9, BIBREF10 have shown that readers are likely to pay approximate attention to each character in one Chinese word. Drawing inspiration from such finding, we introduce a novel word-aligned attention, which could aggregate attention weight of characters in one word into a unified value with the mixed pooling strategy BIBREF11. (2) For reducing segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters, and deploy a fusion function to pull together their disparate output. In this way, we can implicitly reduce the error caused by automatic annotation. Extensive experiments are conducted on various Chinese NLP datasets including named entity recognition, sentiment classification, sentence pair matching, natural language inference, etc. The results show that the proposed model brings another gain over BERT BIBREF1, ERNIE BIBREF2 and BERT-wwm BIBREF12, BIBREF13 in all the tasks. Methodology ::: Character-level Pre-trained Encoder The primary goal of this work is to inject the word segmentation knowledge into character-level Chinese PLM and enhance original models. Given the strong performance of recent deep transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder for our work, and the outputs $\mathbf {H}$ from the last layer of encoder are treated as the enriched contextual representations. Methodology ::: Word-aligned Attention Although the character-level PLM can well capture language knowledge from text, it neglects the semantic information expressed in the word level. 
Therefore we apply a word-aligned layer on top of the encoder to integrate the word boundary information into representation of character with the attention aggregation mechanism. For an input sequence with with $n$ characters $S=[c_1, c_2, ... , c_n]$, where $c_j$ denotes the $j$-th character, Chinese words segmentation tool $\pi $ is used to partition $S$ into non-overlapping word blocks: where $w_i = \lbrace c_s, c_{s+1}, ..., c_{s+l-1}\rbrace $ is the $i$-th segmented word of length $l$ and $s$ is the index of $w_i$'s first character in $S$. We apply the self-attention with the representations of all input characters to get the character-level attention score matrix $\textbf {A}_c \in \mathbb {R}^{n \times n}$. It can be formulated as: where $\textbf {Q}$ and $\textbf {K}$ are both equal to the collective representation $\textbf {H}$ at the last layer of the Chinese PLM, $\textbf {W}_k \in \mathbb {R}^{d\times d}$ and $\textbf {W}_q \in \mathbb {R}^{d\times d}$ are trainable parameters for projection. While $\textbf {A}_c$ models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atoms in the attention can better represent the semantics, as the literal meaning of each individual characters can be quite different from the implied meaning of the whole word, and the simple weighted sum in character-level cannot capture the semantic interaction between words. To this end, we propose to align $\textbf {A}_c$ in the word level and integrate the inner-word attention. For the sake of simplicity, we rewrite $\textbf {A}_c$ as $[\textbf {a}_c^1, \textbf {a}_c^2, ... ,\textbf {a}_c^n]$, where $\textbf {a}_c^i \in \mathbb {R}^n $ denotes the $i$-th row vector of $\textbf {A}_c$ and the attention score vector of the $i$-th character. Then we deploy $\pi $ to segment $\textbf {A}_c$ according to $\pi (S)$. For example, if $\pi (S) = [\lbrace c_1, c_2\rbrace , \lbrace c_3\rbrace , ...,\lbrace c_{n-1}, c_{n}\rbrace ]$, then In this way, an attention vector sequence is segmented into several subsequences and each subsequence represents the attention of one word. Then, motivated by the psycholinguistic finding that readers are likely to pay approximate attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word character attention. Concretely, we first transform $\lbrace \textbf {a}_c^s,..., \textbf {a}_c^{s+l-1}\rbrace $ into one attention vector $\textbf {a}_w^i$ for $w_i$ with the mixed pooling strategy BIBREF11. Then we execute the piecewise up- mpling operation over each $\textbf {a}_w^i$ to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as follows: where $\lambda \in R^1 $ is a weighting trainable variable to balance the mean and max pooling, $\textbf {e}_l=[1,...,1]^T$ represents a $l$-dimensional all-ones vector, $l$ is the length of word $w_i$, $\textbf {e}_l \otimes \textbf {a}_w^i=[\textbf {a}_w^i,...,\textbf {a}_w^i]$ denotes the kronecker product operation between $\textbf {e}_l$ and $\textbf {a}_w^i$, $\hat{\textbf {A}}_c \in \mathbb {R}^{n \times n}$ is the aligned attention matrix. The Eq. (DISPLAY_FORM9-) can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity. 
Finally, we get the enhanced character representation produced by word-aligned attention: where $\textbf {V} = \textbf {H}$, $\textbf {W}_v \in \mathbb {R}^{d\times d}$ is a trainable projection matrix. Besides, we also use multi-head attention BIBREF4 to capture information from different representation subspaces jointly, thus we have $K$ different aligned attention matrices $\hat{\textbf {A}}_c^k (1\le k\le K)$ and corresponding output $\hat{\textbf {H}}^k$. With multi-head attention architecture, the output can be expressed as follows: Methodology ::: Multi-source Word-aligned Attention As mentioned in Section SECREF1, our proposed word-aligned attention relies on the segmentation results of CWS tool $\pi $. Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance. In practice, The ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different $\pi $ may provide diverse $\pi (S)$ with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it’s natural to enhance the word-aligned attention layer with multi-source segmentation input. Formally, assume that there are $M$ popular CWS tools employed, we can obtain $M$ different representations $\overline{\textbf {H}}^1, ..., \overline{\textbf {H}}^M $ by Eq. DISPLAY_FORM11. Then we propose to fuse these semantically different representations as follows: where $\textbf {W}_g$ is the parameter matrix and $\tilde{\textbf {H}}$ is the final output of the MWA attention layer. Experiments ::: Experiments Setup To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words. We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix. Experiments ::: Experiment Results Table TABREF14 shows the experiment measuring improvements from the MWA attention on test sets of four datasets. Generally, our method consistently outperforms all baselines on all of four tasks, which clearly indicates the advantage of introducing word segmentation information into the encoding of character sequences. Moreover, the Wilcoxon’s test shows that significant difference ($p< 0.01$) exits between our model with baseline models. In detail, On the EC task, we observe 1.46% absolute improvement in F1 score over ERINE. SPM and NLI tasks can also gain benefits from our enhanced representation, achieving an absolute F1 increase of 0.68% and 0.55% over original models averagely. 
For the NER task, our method improves the performance of BERT by 1.54%, and obtains 1.23% improvement averagely over all baselines. We attribute such significant gain in NER to the particularity of this task. Intuitively, Chinese NER is correlated with word segmentation, and named entity boundaries are also word boundaries. Thus the potential boundary information presented by the additional segmentation input can provide a better guidance to label each character, which is consistent with the conclusion in BIBREF6, BIBREF7. Experiments ::: Ablation Study To demonstrate the effectiveness of our multi-source fusion method in reducing the segmentation error introduced by CWS tools, We further carry out experiments on the EC task with different segmentation inputs. Table TABREF16 presents the comprehensive results on the three segmentation inputs produced by three CWS tools aforementioned. Experimental results show that our model gives quite stable improvement no matter the segmentation input quality. This again suggests the effectiveness of incorporating word segmentation information into character-level PLMs. And by employing multiple segmenters and fusing them together could introduce richer segmentation information and reduce the impact of general existent segmentation error. Conclusion In this paper, we propose an effective architecture Word-aligned Attention to incorporate word segmentation information into character-based pre-trained language models, which is adopted to a variety of downstream NLP tasks as an extend layer in fine-tuned process. And we also employ more segmenters into via proposed Multi-source Word-aligned Attention for reducing segmentation error. The experimental results show the effectiveness of our method. Comparing to BERT, ERNIE and BERT-wwm, our model obtains substantial improvements on various NLP benchmarks. Although we mainly focused on Chinese PLM in this paper, our model would take advantage the capabilities of Word-aligned Attention for word-piece in English NLP task. We are also considering applying this model into pre-training language model for various Language Model task in different grain to capture multi-level language features.
Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM), Natural Language Inference (NLI)
d1909ce77d09983aa1b3ab5c56e2458caefbd442
d1909ce77d09983aa1b3ab5c56e2458caefbd442_0
Q: What were the evaluation metrics used? Text: Introduction In this work, we investigate the problem of task-oriented dialogue in mixed-domain settings. Our work is related to two lines of research in Spoken Dialogue System (SDS), namely task-oriented dialogue system and multi-domain dialogue system. We briefly review the recent literature related to these topics as follows. Task-oriented dialogue systems are computer programs which can assist users to complete tasks in specific domains by understanding user requests and generating appropriate responses within several dialogue turns. Such systems are useful in domain-specific chatbot applications which help users find a restaurant or book a hotel. Conventional approach for building a task-oriented dialogue system is concerned with building a quite complex pipeline of many connected components. These components are usually independently developed which include at least four crucial modules: a natural language understanding module, a dialogue state tracking module, a dialogue policy learning module, and a answer generation module. Since these systems components are usually trained independently, their optimization targets may not fully align with the overall system evaluation criteria BIBREF0. In addition, such a pipeline system often suffers from error propagation where error made by upstream modules are accumuated and got amplified to the downstream ones. To overcome the above limitations of pipeline task-oriented dialogue systems, much research has focused recently in designing end-to-end learning systems with neural network-based models. One key property of task-oriented dialogue model is that it is required to reason and plan over multiple dialogue turns by aggregating useful information during the conversation. Therefore, sequence-to-sequence models such as the encoder-decoder based neural network models are proven to be suitable for both task-oriented and non-task-oriented systems. Serban et al. proposed to build end-to-end dialogue systems using generative hierarchical recurrent encoder-decoder neural network BIBREF1. Li et al. presented persona-based models which incorporate background information and speaking style of interlocutors into LSTM-based seq2seq network so as to improve the modeling of human-like behavior BIBREF2. Wen et al. designed an end-to-end trainable neural dialogue model with modularly connected components BIBREF3. Bordes et al. BIBREF4 proposed a task-oriented dialogue model using end-to-end memory networks. At the same time, many works explored different kinds of networks to model the dialogue state, such as copy-augmented networks BIBREF5, gated memory networks BIBREF6, query-regression networks BIBREF7. These systems do not perform slot-filling or user goal tracking; they rank and select a response from a set of response candidates which are conditioned on the dialogue history. One of the significant effort in developing end-to-end task-oriented systems is the recent Sequicity framework BIBREF8. This framework also relies on the sequence-to-sequence model and can be optimized with supervised or reinforcement learning. The Sequicity framework introduces the concept of belief span (bspan), which is a text span that tracks the dialogue states at each turn. In this framework, the task-oriented dialogue problem is decomposed into two stages: bspan generation and response generation. This framework has been shown to significantly outperform state-of-the-art pipeline-based methods. 
The second line of work in SDS that is related to this work is concerned with multi-domain dialogue systems. As presented above, one of the key components of a dialogue system is dialogue state tracking, or belief tracking, which maintains the states of conversation. A state is usually composed of user's goals, evidences and information which is accumulated along the sequence of dialogue turns. While the user's goal and evidences are extracted from user's utterances, the useful information is usually aggregated from external resources such as knowledge bases or dialogue ontologies. Such knowledge bases contain slot type and slot value entries in one or several predefined domains. Most approaches have difficulty scaling up with multiple domains due to the dependency of their model parameters on the underlying knowledge bases. Recently, Ramadan et al. BIBREF9 has introduced a novel approach which utilizes semantic similarity between dialogue utterances and knowledge base terms, allowing the information to be shared across domains. This method has been shown not only to scale well to multi-domain dialogues, but also outperform existing state-of-the-art models in single-domain tracking tasks. The problem that we are interested in this work is task-oriented dialogue in mixed-domain settings. This is different from the multi-domain dialogue problem above in several aspects, as follows: First, we investigate the phenomenon of alternating between different dialogue domains in subsequent dialogue turns, where each turn is defined as a pair of user question and machine answer. That is, the domains are mixed between turns. For example, in the first turn, the user requests some information of a restaurant; then in the second turn, he switches to the a different domain, for example, he asks about the weather at a specific location. In a next turn, he would either switch to a new domain or come back to ask about some other property of the suggested restaurant. This is a realistic scenario which usually happens in practical chatbot applications in our observations. We prefer calling this problem mixed-domain dialogue rather than multiple-domain dialogue. Second, we study the effect of the mixed-domain setting in the context of multi-domain dialogue approaches to see how they perform in different experimental scenarios. The main findings of this work include: A specialized state tracking component in multiple domains still plays an important role and gives better results than a state-of-the-art end-to-end task-oriented dialogue system. A combination of specialized state tracking system and an end-to-end task-oriented dialogue system is beneficial in mix-domain dialogue systems. Our hybrid system is able to improve the belief tracking accuracy of about 28% of average absolute point on a standard multi-domain dialogue dataset. These experimental results give some useful insights on data preparation and acquisition in the development of the chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications. The remainder of this paper is structured as follows. First, Section SECREF2 discusses briefly the two methods in building dialogue systems that our method relies on. Next, Section SECREF3 presents experimental settings and results. Finally, Section SECREF4 concludes the paper and gives some directions for future work. Methodology In this section, we present briefly two methods that we use in our experiments which have been mentioned in the previous section. 
The first method is the Sequicity framework and the second one is the state-of-the-art multi-domain dialogue state tracking approach. Methodology ::: Sequicity Figure FIGREF1 shows the architecture of the Sequicity framework as described in BIBREF8. In essence, in each turn, the Sequicity model first takes a bspan ($B_1$) and a response ($R_1$) which are determined in the previous step, and the current human question ($U_2$) to generate the current bspan. This bspan is then used together with a knowledge base to generate the corresponding machine answer ($R_2$), as shown in the right part of Figure FIGREF1. The left part of that figure shows an example dialogue in a mixed-domain setting (which will be explained in Section SECREF3). Methodology ::: Multi-domain Dialogue State Tracking Figure FIGREF8 shows the architecture of the multi-domain belief tracking with knowledge sharing as described in BIBREF9. This is the state-of-the-art belief tracker for multi-domain dialogue. This system encodes system responses with 3 bidirectional LSTM network and encodes user utterances with 3+1 bidirectional LSTM network. There are in total 7 independent LSTMs. For tracking domain, slot and value, it uses 3 corresponding LSTMs, either for system response or user utterance. There is one special LSTM to track the user affirmation. The semantic similarity between the utterances and ontology terms are learned and shared between domains through their embeddings in the same semantic space. Experiments In this section, we present experimental settings, different scenarios and results. We first present the datasets, then implementation settings, and finally obtained results. Experiments ::: Datasets We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset is created by the Wizard-of-Oz method BIBREF10 on Amazon Mechanical Turk platform. This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12. In this original dataset, each dialogue is of a single domain where all of its turns are on that domain. Each turn is composed of a sentence pair, one sentence is a user utterance, the other sentence is the corresponding machine response. A dialogue is a sequence of turns. To create mix-domain dialogues for our experiments, we make some changes in this dataset as follows: We keep the dialogues in the calendar domain as they are. We take a half of dialogues in the weather domain and a half of dialogues in the POI domain and mix their turns together, resulting in a dataset of mixed weather-POI dialogues. In this mixed-domain dialogue, there is a turn in the weather domain, followed by a turn in POI domain or vice versa. We call this dataset the sequential turn dataset. Since the start turn of a dialogue has a special role in triggering the learning systems, we decide to create another and different mixed-domain dataset with the following mixing method: The first turn and the last turn of each dialogue are kept as in their original. The internal turns are mixed randomly. We call this dataset the random turn dataset. Some statistics of these mixed-domain datasets are shown in the lower half of the Table TABREF12. 
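The two mixing procedures used to build the mixed-domain datasets are easy to pin down in code. The sketch below assumes each KVRET dialogue is already loaded as a list of (user utterance, system response) pairs; the exact pairing of weather and POI dialogues and the handling of unequal lengths are not specified in the text, so the choices here (zip-style interleaving, appending leftover turns, keeping the first and last turn of the first dialogue in the random variant) are illustrative.

```python
import itertools
import random

def mix_sequential(dlg_a, dlg_b):
    """Interleave the turns of two single-domain dialogues ('sequential turn' dataset).

    Each dialogue is a list of (user_utterance, system_response) pairs; trailing
    turns of the longer dialogue are appended at the end.
    """
    mixed = []
    for a, b in itertools.zip_longest(dlg_a, dlg_b):
        if a is not None:
            mixed.append(a)
        if b is not None:
            mixed.append(b)
    return mixed

def mix_random(dlg_a, dlg_b, seed=0):
    """Keep the first and last turn of dlg_a in place and shuffle the remaining
    turns of both dialogues in between ('random turn' dataset); one possible
    reading of the description above."""
    assert len(dlg_a) >= 2
    rng = random.Random(seed)
    middle = dlg_a[1:-1] + dlg_b
    rng.shuffle(middle)
    return [dlg_a[0]] + middle + [dlg_a[-1]]
```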
Experiments ::: Experimental Settings For the task-oriented Sequicity model, we keep the best parameter settings as reported in the original framework, on the same KVRET dataset BIBREF8. In particular, the hidden size of GRU unit is set to 50; the learning rate of Adam optimizer is 0.003. In addition to the original GRU unit, we also re-run this framework with simple RNN unit to compare the performance of different recurrent network types. The Sequicity tool is freely available for download. For the multi-domain belief tracker model, we set the hidden size of LSTM units to 50 as in the original model; word embedding size is 300 and number of training epochs is 100. The corresponding tool is also freely available for download. Experiments ::: Results Our experimental results are shown in Table TABREF21. The first half of the table contains results for task-oriented dialogue with the Sequicity framework with two scenarios for training data preparation. For each experiment, we run our models for 3 times and their scores are averaged as the final score. The mixed training scenario performs the mixing of both the training data, development data and the test data as described in the previous subsection. The non-mixed training scenario performs the mixing only on the development and test data, keeps the training data unmixed as in the original KVRET dataset. As in the Sequicity framework, we report entity match rate, BLEU score and Success F1 score. Entity match rate evaluates task completion, it determines if a system can generate all correct constraints to search the indicated entities of the user. BLEU score evaluates the language quality of generated responses. Success F1 balances the recall and precision rates of slot answers. For further details on these metrics, please refer to BIBREF8. In the first series of experiments, we evaluate the Sequicity framework on different mixing scenarios and different recurrent units (GRU or RNN), on two mixing methods (sequential turn or random turn), as described previously. We see that when the training data is kept unmixed, the match rates are better than those of the mixed training data. It is interesting to note that the GRU unit is much more sensitive with mixed data than the simple RNN unit with the corresponding absolute point drop of about 10%, compared to about 3.5%. However, the entity match rate is less important than the Success F1 score, where the GRU unit outperforms RNN in both sequential turn and random turn by a large margin. It is logical that if the test data are mixed but the training data are unmixed, we get lower scores than when both the training data and test data are mixed. The GRU unit is also better than the RNN unit on response generation in terms of BLEU scores. We also see that the task-oriented dialogue system has difficulty running on mixed-domain dataset; it achieves only about 75.62% of Success F1 in comparison to about 81.1% (as reported in the Sequicity paper, not shown in our table). Appendix SECREF5 shows some example dialogues generated automatically by our implemented system. In the second series of experiments, we evaluate the belief tracking components of two systems, the specialized multi-domain belief tracker and the Sequicity bspan component. As shown in the lower half of the Table TABREF21, Sequicity capability of belief tracking is much worse than that of the multi-domain belief tracker. 
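For readers unfamiliar with the evaluation metrics, the rough sketch below shows what slot-level Success F1 and a per-dialogue entity match rate amount to. The official definitions (including any value normalisation) are those of the Sequicity paper (BIBREF8), and BLEU is computed with a standard toolkit rather than reimplemented here; the function and argument names are illustrative.

```python
def success_f1(pred_slots, gold_slots):
    """Micro-averaged F1 over slot answers.

    pred_slots, gold_slots: lists of sets, one set of (slot, value) pairs per
    dialogue. A generic micro-F1; the official Success F1 may differ in details.
    """
    tp = fp = fn = 0
    for pred, gold in zip(pred_slots, gold_slots):
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def entity_match_rate(pred_constraints, gold_constraints):
    """Fraction of dialogues whose predicted search constraints exactly match
    the gold constraints (binary per dialogue)."""
    hits = sum(p == g for p, g in zip(pred_constraints, gold_constraints))
    return hits / len(gold_constraints)
```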
The slot accuracy gap between the tools is about 21.6%, the value accuracy gap is about 34.4%; that is a large average gap of 28% of accuracy. This result suggests a future work on combining a specialized belief tracking module with an end-to-end task-oriented dialogue system to improve further the performance of the overall dialogue system. Experiments ::: Error Analysis In this subsection, we present an example of erroneous mixed dialogue with multple turns. Table TABREF23 shows a dialogue in the test set where wrong generated responses of the Sequicity system are marked in bold font. In the first turn, the system predicts incorrectly the bspan, thus generates wrong slot values (heavy traffic and Pizza Hut). The word Pizza Hut is an arbitrary value selected by the system when it cannot capture the correct value home in the bspan. In the second turn, the machine is not able to capture the value this_week. This failure does not manifest immediately at this turn but it is accumulated to make a wrong answer at the third turn (monday instead of this_week). The third turn is of domain weather and the fourth turn is switched to domain POI. The bspan value cleveland is retained through cross domain, resulting in an error in the fourth turn, where cleveland is shown instead of home. This example demonstrates a weakness of the system when being trained on a mixed-domain dataset. In the fifth turn, since the system does not recognize the value fastest in the bspan, it generates a random and wrong value moderate traffic. Note that the generated answer of the sixth turn is correct despite of the wrong predicted bspan; however, it is likely that if the dialogue continues, this wrong bspan may result in more answer mistakes. In such situations, multi-domain belief tracker usually performs better at bspan prediction. Conclusion We have presented the problem of mixed-domain task-oriented dialogue and its empirical results on two datasets. We employ two state-of-the-art, publicly available tools, one is the Sequicity framework for task-oriented dialogue, and another is the multi-domain belief tracking system. The belief tracking capability of the specialized system is much better than that of the end-to-end system. We also show the difficulty of task-oriented dialogue systems on mixed-domain datasets through two series of experiments. These results give some useful insights in combining the approaches to improve the performance of a commercial chatbot platform which is under active development in our company. We plan to extend this current research and integrate its fruitful results into a future version of the platform. Example Dialogues The following is three example dialogues generated by our system. The first dialogue is in single-domain. The next two dialogues are in mixed-domains.
entity match rate, BLEU score, Success F1 score
fc3f0eb297b2308b99eb4661a510c9cdbb6ffba2
fc3f0eb297b2308b99eb4661a510c9cdbb6ffba2_0
Q: What is the size of the dataset? Text: Introduction In this work, we investigate the problem of task-oriented dialogue in mixed-domain settings. Our work is related to two lines of research in Spoken Dialogue System (SDS), namely task-oriented dialogue system and multi-domain dialogue system. We briefly review the recent literature related to these topics as follows. Task-oriented dialogue systems are computer programs which can assist users to complete tasks in specific domains by understanding user requests and generating appropriate responses within several dialogue turns. Such systems are useful in domain-specific chatbot applications which help users find a restaurant or book a hotel. Conventional approach for building a task-oriented dialogue system is concerned with building a quite complex pipeline of many connected components. These components are usually independently developed which include at least four crucial modules: a natural language understanding module, a dialogue state tracking module, a dialogue policy learning module, and a answer generation module. Since these systems components are usually trained independently, their optimization targets may not fully align with the overall system evaluation criteria BIBREF0. In addition, such a pipeline system often suffers from error propagation where error made by upstream modules are accumuated and got amplified to the downstream ones. To overcome the above limitations of pipeline task-oriented dialogue systems, much research has focused recently in designing end-to-end learning systems with neural network-based models. One key property of task-oriented dialogue model is that it is required to reason and plan over multiple dialogue turns by aggregating useful information during the conversation. Therefore, sequence-to-sequence models such as the encoder-decoder based neural network models are proven to be suitable for both task-oriented and non-task-oriented systems. Serban et al. proposed to build end-to-end dialogue systems using generative hierarchical recurrent encoder-decoder neural network BIBREF1. Li et al. presented persona-based models which incorporate background information and speaking style of interlocutors into LSTM-based seq2seq network so as to improve the modeling of human-like behavior BIBREF2. Wen et al. designed an end-to-end trainable neural dialogue model with modularly connected components BIBREF3. Bordes et al. BIBREF4 proposed a task-oriented dialogue model using end-to-end memory networks. At the same time, many works explored different kinds of networks to model the dialogue state, such as copy-augmented networks BIBREF5, gated memory networks BIBREF6, query-regression networks BIBREF7. These systems do not perform slot-filling or user goal tracking; they rank and select a response from a set of response candidates which are conditioned on the dialogue history. One of the significant effort in developing end-to-end task-oriented systems is the recent Sequicity framework BIBREF8. This framework also relies on the sequence-to-sequence model and can be optimized with supervised or reinforcement learning. The Sequicity framework introduces the concept of belief span (bspan), which is a text span that tracks the dialogue states at each turn. In this framework, the task-oriented dialogue problem is decomposed into two stages: bspan generation and response generation. This framework has been shown to significantly outperform state-of-the-art pipeline-based methods. 
The second line of work in SDS that is related to this work is concerned with multi-domain dialogue systems. As presented above, one of the key components of a dialogue system is dialogue state tracking, or belief tracking, which maintains the states of conversation. A state is usually composed of user's goals, evidences and information which is accumulated along the sequence of dialogue turns. While the user's goal and evidences are extracted from user's utterances, the useful information is usually aggregated from external resources such as knowledge bases or dialogue ontologies. Such knowledge bases contain slot type and slot value entries in one or several predefined domains. Most approaches have difficulty scaling up with multiple domains due to the dependency of their model parameters on the underlying knowledge bases. Recently, Ramadan et al. BIBREF9 has introduced a novel approach which utilizes semantic similarity between dialogue utterances and knowledge base terms, allowing the information to be shared across domains. This method has been shown not only to scale well to multi-domain dialogues, but also outperform existing state-of-the-art models in single-domain tracking tasks. The problem that we are interested in this work is task-oriented dialogue in mixed-domain settings. This is different from the multi-domain dialogue problem above in several aspects, as follows: First, we investigate the phenomenon of alternating between different dialogue domains in subsequent dialogue turns, where each turn is defined as a pair of user question and machine answer. That is, the domains are mixed between turns. For example, in the first turn, the user requests some information of a restaurant; then in the second turn, he switches to the a different domain, for example, he asks about the weather at a specific location. In a next turn, he would either switch to a new domain or come back to ask about some other property of the suggested restaurant. This is a realistic scenario which usually happens in practical chatbot applications in our observations. We prefer calling this problem mixed-domain dialogue rather than multiple-domain dialogue. Second, we study the effect of the mixed-domain setting in the context of multi-domain dialogue approaches to see how they perform in different experimental scenarios. The main findings of this work include: A specialized state tracking component in multiple domains still plays an important role and gives better results than a state-of-the-art end-to-end task-oriented dialogue system. A combination of specialized state tracking system and an end-to-end task-oriented dialogue system is beneficial in mix-domain dialogue systems. Our hybrid system is able to improve the belief tracking accuracy of about 28% of average absolute point on a standard multi-domain dialogue dataset. These experimental results give some useful insights on data preparation and acquisition in the development of the chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications. The remainder of this paper is structured as follows. First, Section SECREF2 discusses briefly the two methods in building dialogue systems that our method relies on. Next, Section SECREF3 presents experimental settings and results. Finally, Section SECREF4 concludes the paper and gives some directions for future work. Methodology In this section, we present briefly two methods that we use in our experiments which have been mentioned in the previous section. 
The first method is the Sequicity framework and the second one is the state-of-the-art multi-domain dialogue state tracking approach. Methodology ::: Sequicity Figure FIGREF1 shows the architecture of the Sequicity framework as described in BIBREF8. In essence, in each turn, the Sequicity model first takes a bspan ($B_1$) and a response ($R_1$) which are determined in the previous step, and the current human question ($U_2$) to generate the current bspan. This bspan is then used together with a knowledge base to generate the corresponding machine answer ($R_2$), as shown in the right part of Figure FIGREF1. The left part of that figure shows an example dialogue in a mixed-domain setting (which will be explained in Section SECREF3). Methodology ::: Multi-domain Dialogue State Tracking Figure FIGREF8 shows the architecture of the multi-domain belief tracking with knowledge sharing as described in BIBREF9. This is the state-of-the-art belief tracker for multi-domain dialogue. This system encodes system responses with 3 bidirectional LSTM network and encodes user utterances with 3+1 bidirectional LSTM network. There are in total 7 independent LSTMs. For tracking domain, slot and value, it uses 3 corresponding LSTMs, either for system response or user utterance. There is one special LSTM to track the user affirmation. The semantic similarity between the utterances and ontology terms are learned and shared between domains through their embeddings in the same semantic space. Experiments In this section, we present experimental settings, different scenarios and results. We first present the datasets, then implementation settings, and finally obtained results. Experiments ::: Datasets We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset is created by the Wizard-of-Oz method BIBREF10 on Amazon Mechanical Turk platform. This dataset includes dialogues in 3 domains: calendar, weather, navigation (POI) which is suitable for our mix-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12. In this original dataset, each dialogue is of a single domain where all of its turns are on that domain. Each turn is composed of a sentence pair, one sentence is a user utterance, the other sentence is the corresponding machine response. A dialogue is a sequence of turns. To create mix-domain dialogues for our experiments, we make some changes in this dataset as follows: We keep the dialogues in the calendar domain as they are. We take a half of dialogues in the weather domain and a half of dialogues in the POI domain and mix their turns together, resulting in a dataset of mixed weather-POI dialogues. In this mixed-domain dialogue, there is a turn in the weather domain, followed by a turn in POI domain or vice versa. We call this dataset the sequential turn dataset. Since the start turn of a dialogue has a special role in triggering the learning systems, we decide to create another and different mixed-domain dataset with the following mixing method: The first turn and the last turn of each dialogue are kept as in their original. The internal turns are mixed randomly. We call this dataset the random turn dataset. Some statistics of these mixed-domain datasets are shown in the lower half of the Table TABREF12. 
Experiments ::: Experimental Settings For the task-oriented Sequicity model, we keep the best parameter settings as reported in the original framework, on the same KVRET dataset BIBREF8. In particular, the hidden size of GRU unit is set to 50; the learning rate of Adam optimizer is 0.003. In addition to the original GRU unit, we also re-run this framework with simple RNN unit to compare the performance of different recurrent network types. The Sequicity tool is freely available for download. For the multi-domain belief tracker model, we set the hidden size of LSTM units to 50 as in the original model; word embedding size is 300 and number of training epochs is 100. The corresponding tool is also freely available for download. Experiments ::: Results Our experimental results are shown in Table TABREF21. The first half of the table contains results for task-oriented dialogue with the Sequicity framework with two scenarios for training data preparation. For each experiment, we run our models for 3 times and their scores are averaged as the final score. The mixed training scenario performs the mixing of both the training data, development data and the test data as described in the previous subsection. The non-mixed training scenario performs the mixing only on the development and test data, keeps the training data unmixed as in the original KVRET dataset. As in the Sequicity framework, we report entity match rate, BLEU score and Success F1 score. Entity match rate evaluates task completion, it determines if a system can generate all correct constraints to search the indicated entities of the user. BLEU score evaluates the language quality of generated responses. Success F1 balances the recall and precision rates of slot answers. For further details on these metrics, please refer to BIBREF8. In the first series of experiments, we evaluate the Sequicity framework on different mixing scenarios and different recurrent units (GRU or RNN), on two mixing methods (sequential turn or random turn), as described previously. We see that when the training data is kept unmixed, the match rates are better than those of the mixed training data. It is interesting to note that the GRU unit is much more sensitive with mixed data than the simple RNN unit with the corresponding absolute point drop of about 10%, compared to about 3.5%. However, the entity match rate is less important than the Success F1 score, where the GRU unit outperforms RNN in both sequential turn and random turn by a large margin. It is logical that if the test data are mixed but the training data are unmixed, we get lower scores than when both the training data and test data are mixed. The GRU unit is also better than the RNN unit on response generation in terms of BLEU scores. We also see that the task-oriented dialogue system has difficulty running on mixed-domain dataset; it achieves only about 75.62% of Success F1 in comparison to about 81.1% (as reported in the Sequicity paper, not shown in our table). Appendix SECREF5 shows some example dialogues generated automatically by our implemented system. In the second series of experiments, we evaluate the belief tracking components of two systems, the specialized multi-domain belief tracker and the Sequicity bspan component. As shown in the lower half of the Table TABREF21, Sequicity capability of belief tracking is much worse than that of the multi-domain belief tracker. 
The slot accuracy gap between the tools is about 21.6%, the value accuracy gap is about 34.4%; that is a large average gap of 28% of accuracy. This result suggests a future work on combining a specialized belief tracking module with an end-to-end task-oriented dialogue system to improve further the performance of the overall dialogue system. Experiments ::: Error Analysis In this subsection, we present an example of erroneous mixed dialogue with multple turns. Table TABREF23 shows a dialogue in the test set where wrong generated responses of the Sequicity system are marked in bold font. In the first turn, the system predicts incorrectly the bspan, thus generates wrong slot values (heavy traffic and Pizza Hut). The word Pizza Hut is an arbitrary value selected by the system when it cannot capture the correct value home in the bspan. In the second turn, the machine is not able to capture the value this_week. This failure does not manifest immediately at this turn but it is accumulated to make a wrong answer at the third turn (monday instead of this_week). The third turn is of domain weather and the fourth turn is switched to domain POI. The bspan value cleveland is retained through cross domain, resulting in an error in the fourth turn, where cleveland is shown instead of home. This example demonstrates a weakness of the system when being trained on a mixed-domain dataset. In the fifth turn, since the system does not recognize the value fastest in the bspan, it generates a random and wrong value moderate traffic. Note that the generated answer of the sixth turn is correct despite of the wrong predicted bspan; however, it is likely that if the dialogue continues, this wrong bspan may result in more answer mistakes. In such situations, multi-domain belief tracker usually performs better at bspan prediction. Conclusion We have presented the problem of mixed-domain task-oriented dialogue and its empirical results on two datasets. We employ two state-of-the-art, publicly available tools, one is the Sequicity framework for task-oriented dialogue, and another is the multi-domain belief tracking system. The belief tracking capability of the specialized system is much better than that of the end-to-end system. We also show the difficulty of task-oriented dialogue systems on mixed-domain datasets through two series of experiments. These results give some useful insights in combining the approaches to improve the performance of a commercial chatbot platform which is under active development in our company. We plan to extend this current research and integrate its fruitful results into a future version of the platform. Example Dialogues The following is three example dialogues generated by our system. The first dialogue is in single-domain. The next two dialogues are in mixed-domains.
3029
27c1c678d3862c7676320ca493537b03a9f0c77a
27c1c678d3862c7676320ca493537b03a9f0c77a_0
Q: What multi-domain dataset is used? Text: Introduction In this work, we investigate the problem of task-oriented dialogue in mixed-domain settings. Our work is related to two lines of research in Spoken Dialogue System (SDS), namely task-oriented dialogue system and multi-domain dialogue system. We briefly review the recent literature related to these topics as follows. Task-oriented dialogue systems are computer programs which can assist users to complete tasks in specific domains by understanding user requests and generating appropriate responses within several dialogue turns. Such systems are useful in domain-specific chatbot applications which help users find a restaurant or book a hotel. Conventional approach for building a task-oriented dialogue system is concerned with building a quite complex pipeline of many connected components. These components are usually independently developed which include at least four crucial modules: a natural language understanding module, a dialogue state tracking module, a dialogue policy learning module, and a answer generation module. Since these systems components are usually trained independently, their optimization targets may not fully align with the overall system evaluation criteria BIBREF0. In addition, such a pipeline system often suffers from error propagation where error made by upstream modules are accumuated and got amplified to the downstream ones. To overcome the above limitations of pipeline task-oriented dialogue systems, much research has focused recently in designing end-to-end learning systems with neural network-based models. One key property of task-oriented dialogue model is that it is required to reason and plan over multiple dialogue turns by aggregating useful information during the conversation. Therefore, sequence-to-sequence models such as the encoder-decoder based neural network models are proven to be suitable for both task-oriented and non-task-oriented systems. Serban et al. proposed to build end-to-end dialogue systems using generative hierarchical recurrent encoder-decoder neural network BIBREF1. Li et al. presented persona-based models which incorporate background information and speaking style of interlocutors into LSTM-based seq2seq network so as to improve the modeling of human-like behavior BIBREF2. Wen et al. designed an end-to-end trainable neural dialogue model with modularly connected components BIBREF3. Bordes et al. BIBREF4 proposed a task-oriented dialogue model using end-to-end memory networks. At the same time, many works explored different kinds of networks to model the dialogue state, such as copy-augmented networks BIBREF5, gated memory networks BIBREF6, query-regression networks BIBREF7. These systems do not perform slot-filling or user goal tracking; they rank and select a response from a set of response candidates which are conditioned on the dialogue history. One of the significant effort in developing end-to-end task-oriented systems is the recent Sequicity framework BIBREF8. This framework also relies on the sequence-to-sequence model and can be optimized with supervised or reinforcement learning. The Sequicity framework introduces the concept of belief span (bspan), which is a text span that tracks the dialogue states at each turn. In this framework, the task-oriented dialogue problem is decomposed into two stages: bspan generation and response generation. This framework has been shown to significantly outperform state-of-the-art pipeline-based methods. 
The second line of work in SDS that is related to this work is concerned with multi-domain dialogue systems. As presented above, one of the key components of a dialogue system is dialogue state tracking, or belief tracking, which maintains the states of the conversation. A state is usually composed of the user's goals, evidence and information which are accumulated along the sequence of dialogue turns. While the user's goals and evidence are extracted from the user's utterances, the useful information is usually aggregated from external resources such as knowledge bases or dialogue ontologies. Such knowledge bases contain slot type and slot value entries in one or several predefined domains. Most approaches have difficulty scaling up to multiple domains due to the dependency of their model parameters on the underlying knowledge bases. Recently, Ramadan et al. BIBREF9 have introduced a novel approach which utilizes the semantic similarity between dialogue utterances and knowledge base terms, allowing the information to be shared across domains. This method has been shown not only to scale well to multi-domain dialogues, but also to outperform existing state-of-the-art models in single-domain tracking tasks. The problem we address in this work is task-oriented dialogue in mixed-domain settings. This is different from the multi-domain dialogue problem above in several aspects, as follows: First, we investigate the phenomenon of alternating between different dialogue domains in subsequent dialogue turns, where each turn is defined as a pair of a user question and a machine answer. That is, the domains are mixed between turns. For example, in the first turn, the user requests some information about a restaurant; then in the second turn, he switches to a different domain, for example, by asking about the weather at a specific location. In a subsequent turn, he would either switch to a new domain or come back to ask about some other property of the suggested restaurant. In our observations, this is a realistic scenario which usually happens in practical chatbot applications. We prefer calling this problem mixed-domain dialogue rather than multiple-domain dialogue. Second, we study the effect of the mixed-domain setting in the context of multi-domain dialogue approaches to see how they perform in different experimental scenarios. The main findings of this work include: A specialized state tracking component in multiple domains still plays an important role and gives better results than a state-of-the-art end-to-end task-oriented dialogue system. A combination of a specialized state tracking system and an end-to-end task-oriented dialogue system is beneficial in mixed-domain dialogue systems. Our hybrid system is able to improve the belief tracking accuracy by about 28 absolute percentage points on average on a standard multi-domain dialogue dataset. These experimental results give some useful insights into data preparation and acquisition in the development of the chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications. The remainder of this paper is structured as follows. First, Section SECREF2 briefly discusses the two methods for building dialogue systems that our method relies on. Next, Section SECREF3 presents the experimental settings and results. Finally, Section SECREF4 concludes the paper and gives some directions for future work. Methodology In this section, we briefly present the two methods that we use in our experiments, which have been mentioned in the previous section.
The first method is the Sequicity framework and the second one is the state-of-the-art multi-domain dialogue state tracking approach. Methodology ::: Sequicity Figure FIGREF1 shows the architecture of the Sequicity framework as described in BIBREF8. In essence, in each turn, the Sequicity model first takes a bspan ($B_1$) and a response ($R_1$), which were determined in the previous step, and the current human question ($U_2$) to generate the current bspan. This bspan is then used together with a knowledge base to generate the corresponding machine answer ($R_2$), as shown in the right part of Figure FIGREF1. The left part of that figure shows an example dialogue in a mixed-domain setting (which will be explained in Section SECREF3). Methodology ::: Multi-domain Dialogue State Tracking Figure FIGREF8 shows the architecture of the multi-domain belief tracker with knowledge sharing as described in BIBREF9. This is the state-of-the-art belief tracker for multi-domain dialogue. This system encodes system responses with 3 bidirectional LSTM networks and encodes user utterances with 3+1 bidirectional LSTM networks. There are in total 7 independent LSTMs. For tracking domain, slot and value, it uses 3 corresponding LSTMs, either for the system response or the user utterance. There is one special LSTM to track the user affirmation. The semantic similarity between the utterances and ontology terms is learned and shared between domains through their embeddings in the same semantic space. Experiments In this section, we present the experimental settings, different scenarios and results. We first present the datasets, then the implementation settings, and finally the obtained results. Experiments ::: Datasets We use the publicly available dataset KVRET BIBREF5 in our experiments. This dataset was created by the Wizard-of-Oz method BIBREF10 on the Amazon Mechanical Turk platform. It includes dialogues in 3 domains: calendar, weather, navigation (POI), which makes it suitable for our mixed-domain dialogue experiments. There are 2,425 dialogues for training, 302 for validation and 302 for testing, as shown in the upper half of Table TABREF12. In this original dataset, each dialogue is of a single domain, where all of its turns are on that domain. Each turn is composed of a sentence pair: one sentence is a user utterance, the other sentence is the corresponding machine response. A dialogue is a sequence of turns. To create mixed-domain dialogues for our experiments, we make some changes to this dataset as follows: We keep the dialogues in the calendar domain as they are. We take half of the dialogues in the weather domain and half of the dialogues in the POI domain and mix their turns together, resulting in a dataset of mixed weather-POI dialogues. In these mixed-domain dialogues, a turn in the weather domain is followed by a turn in the POI domain or vice versa. We call this dataset the sequential turn dataset. Since the start turn of a dialogue has a special role in triggering the learning systems, we decided to create another, different mixed-domain dataset with the following mixing method: The first turn and the last turn of each dialogue are kept as in the original. The internal turns are mixed randomly. We call this dataset the random turn dataset. Some statistics of these mixed-domain datasets are shown in the lower half of Table TABREF12.
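The sketch below illustrates one plausible reading of the two mixing procedures. Pairing a single weather dialogue with a single POI dialogue, and keeping the first and last turn of the merged sequence rather than of each original dialogue, are simplifying assumptions; the actual corpus construction may pair and anchor the dialogues differently.

import random

def mix_sequential(weather_dialogue, poi_dialogue):
    # Alternate the turns of a weather dialogue and a POI dialogue
    # (sequential turn dataset).
    mixed = []
    for weather_turn, poi_turn in zip(weather_dialogue, poi_dialogue):
        mixed.extend([weather_turn, poi_turn])
    return mixed

def mix_random(weather_dialogue, poi_dialogue, seed=42):
    # Keep the first and the last turn of the merged dialogue in place and
    # shuffle the internal turns (random turn dataset).
    merged = weather_dialogue + poi_dialogue
    internal = merged[1:-1]
    random.Random(seed).shuffle(internal)
    return [merged[0]] + internal + [merged[-1]]

weather = [("will it rain today?", "no rain is forecast"),
           ("how about tomorrow?", "tomorrow will be cloudy")]
poi = [("find the nearest gas station", "chevron is 5 miles away"),
       ("is there traffic on the way?", "the route is clear")]

print(mix_sequential(weather, poi))
print(mix_random(weather, poi))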
Experiments ::: Experimental Settings For the task-oriented Sequicity model, we keep the best parameter settings as reported in the original framework, on the same KVRET dataset BIBREF8. In particular, the hidden size of the GRU unit is set to 50; the learning rate of the Adam optimizer is 0.003. In addition to the original GRU unit, we also re-run this framework with a simple RNN unit to compare the performance of different recurrent network types. The Sequicity tool is freely available for download. For the multi-domain belief tracker model, we set the hidden size of the LSTM units to 50 as in the original model; the word embedding size is 300 and the number of training epochs is 100. The corresponding tool is also freely available for download. Experiments ::: Results Our experimental results are shown in Table TABREF21. The first half of the table contains results for task-oriented dialogue with the Sequicity framework under two scenarios for training data preparation. For each experiment, we run our models 3 times and their scores are averaged as the final score. The mixed training scenario performs the mixing of the training data, the development data and the test data as described in the previous subsection. The non-mixed training scenario performs the mixing only on the development and test data, keeping the training data unmixed as in the original KVRET dataset. As in the Sequicity framework, we report the entity match rate, the BLEU score and the Success F1 score. The entity match rate evaluates task completion; it determines whether a system can generate all correct constraints to search for the indicated entities of the user. The BLEU score evaluates the language quality of generated responses. Success F1 balances the recall and precision rates of slot answers. For further details on these metrics, please refer to BIBREF8. In the first series of experiments, we evaluate the Sequicity framework on different mixing scenarios and different recurrent units (GRU or RNN), on the two mixing methods (sequential turn or random turn), as described previously. We see that when the training data are kept unmixed, the match rates are better than those obtained with the mixed training data. It is interesting to note that the GRU unit is much more sensitive to mixed data than the simple RNN unit, with a corresponding absolute drop of about 10%, compared to about 3.5%. However, the entity match rate is less important than the Success F1 score, where the GRU unit outperforms the RNN unit in both the sequential turn and the random turn settings by a large margin. It is logical that if the test data are mixed but the training data are unmixed, we get lower scores than when both the training data and the test data are mixed. The GRU unit is also better than the RNN unit on response generation in terms of BLEU scores. We also see that the task-oriented dialogue system has difficulty running on mixed-domain datasets; it achieves only about 75.62% Success F1, in comparison to about 81.1% (as reported in the Sequicity paper, not shown in our table). Appendix SECREF5 shows some example dialogues generated automatically by our implemented system. In the second series of experiments, we evaluate the belief tracking components of the two systems, the specialized multi-domain belief tracker and the Sequicity bspan component. As shown in the lower half of Table TABREF21, Sequicity's belief tracking capability is much worse than that of the multi-domain belief tracker.
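Since this comparison is reported in terms of slot and value accuracy, the following minimal sketch shows one way such belief tracking accuracies can be computed from predicted and gold belief states. Representing each state as a slot-to-value dictionary and averaging over gold slots are assumptions made here for illustration; the original evaluation scripts may use a different scheme.

def tracking_accuracies(predicted_states, gold_states):
    # Turn-level slot and value accuracy over the gold (slot, value) pairs.
    slot_hits = value_hits = total = 0
    for predicted, gold in zip(predicted_states, gold_states):
        for slot, gold_value in gold.items():
            total += 1
            if slot in predicted:
                slot_hits += 1                    # the slot itself was tracked
                if predicted[slot] == gold_value:
                    value_hits += 1               # and its value is correct
    return slot_hits / total, value_hits / total

gold = [{"poi_type": "gas station", "distance": "nearest"}]
predicted = [{"poi_type": "gas station", "distance": "moderate traffic"}]
print(tracking_accuracies(predicted, gold))       # (1.0, 0.5)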
The slot accuracy gap between the tools is about 21.6% and the value accuracy gap is about 34.4%; that is a large average gap of 28% in accuracy. This result suggests future work on combining a specialized belief tracking module with an end-to-end task-oriented dialogue system to further improve the performance of the overall dialogue system. Experiments ::: Error Analysis In this subsection, we present an example of an erroneous mixed dialogue with multiple turns. Table TABREF23 shows a dialogue in the test set where the wrong generated responses of the Sequicity system are marked in bold font. In the first turn, the system predicts the bspan incorrectly and thus generates wrong slot values (heavy traffic and Pizza Hut). The word Pizza Hut is an arbitrary value selected by the system when it cannot capture the correct value home in the bspan. In the second turn, the machine is not able to capture the value this_week. This failure does not manifest immediately at this turn, but it is accumulated and leads to a wrong answer at the third turn (monday instead of this_week). The third turn is of the weather domain and the fourth turn is switched to the POI domain. The bspan value cleveland is retained across domains, resulting in an error in the fourth turn, where cleveland is shown instead of home. This example demonstrates a weakness of the system when it is trained on a mixed-domain dataset. In the fifth turn, since the system does not recognize the value fastest in the bspan, it generates a random and wrong value, moderate traffic. Note that the generated answer of the sixth turn is correct despite the wrong predicted bspan; however, it is likely that if the dialogue continues, this wrong bspan may result in more answer mistakes. In such situations, the multi-domain belief tracker usually performs better at bspan prediction. Conclusion We have presented the problem of mixed-domain task-oriented dialogue and its empirical results on two datasets. We employ two state-of-the-art, publicly available tools: one is the Sequicity framework for task-oriented dialogue, and the other is the multi-domain belief tracking system. The belief tracking capability of the specialized system is much better than that of the end-to-end system. We also show the difficulty of task-oriented dialogue systems on mixed-domain datasets through two series of experiments. These results give some useful insights into combining the approaches to improve the performance of a commercial chatbot platform which is under active development in our company. We plan to extend this current research and integrate its fruitful results into a future version of the platform. Example Dialogues The following are three example dialogues generated by our system. The first dialogue is in a single domain. The next two dialogues are in mixed domains.
KVRET
ccb3d21885250bdbfc4c320e99f25923896e70fa
ccb3d21885250bdbfc4c320e99f25923896e70fa_0
Q: Which domains did they explore?
calendar, weather, navigation
61b0db2b5718d409b07f83f912bad6a788bfee5a
61b0db2b5718d409b07f83f912bad6a788bfee5a_0
Q: Do they report results only on English data? Text: Introduction Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in such scenarios, where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed. In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches were becoming more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics. Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge. The rest of this paper is structured as follows. Section SECREF2 discusses the related work that served as an inspiration for our analysis. Section SECREF3 comprises the proposed criteria and properties to characterize AV methods. Section SECREF4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section SECREF5 concludes the work and outlines future work. Related Work Over the years, researchers in the field of authorship analysis identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. BIBREF8 , for example, focused on the questions if the writing styles of authors of short texts change over time and how this affects AA. 
To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors, despite of large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes. In recent years, the new research field author obfuscation (AO) evolved, which concerns itself with the task to fool AA or AV methods in a way that the true author cannot be correctly recognized anymore. To achieve this, AO approaches which, according to Gröndahl and Asokan BIBREF9 can be divided into manual, computer-assisted and automatic types, perform a variety of modifications on the texts. These include simple synonym replacements, rule-based substitutions or word order permutations. In 2016, Potthast et al. BIBREF10 presented the first large-scale evaluation of three AO approaches that aim to attack 44 AV methods, which were submitted to the PAN-AV competitions during 2013-2015 BIBREF11 , BIBREF5 , BIBREF12 . One of their findings was that even basic AO approaches have a significant impact on many AV methods. More precisely, the best-performing AO approach was able to flip on average INLINEFORM0 % of an authorship verifier’s decisions towards choosing N (“different author”), while in fact Y (“same author”) was correct BIBREF10 . In contrast to Potthast et al., we do not focus on AO to measure the robustness of AV methods. Instead, we investigate in one experiment the question how trained AV models behave, if the lengths of the questioned documents are getting shorter and shorter. To our best knowledge, this question has not been addressed in previous authorship verification studies. Characteristics of Authorship Verification Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. 
In 2009, Stamatatos BIBREF15 coined the phrases profile- and instance-based approaches that initially were used in the field of AA, but later found their way also into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods. However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to draw the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, where we redefine previous definitions and propose new properties that enable a better comparison between AV methods. Reliability (Determinism) Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since there is no consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification according to the screened literature, we decided to reuse a definition from applied statistics, and adapt it carefully to AV. In his standard reference book, Bollen BIBREF17 gives a clear description for this term: “Reliability is the consistency of measurement” and provides a simple example to illustrate its meaning: At time INLINEFORM0 we ask a large number of persons the same question Q and record their responses. Afterwards, we remove their memory of the dialogue. At time INLINEFORM1 we ask them again the same question Q and record their responses again. “The reliability is the consistency of the responses across individuals for the two time periods. To the extent that all individuals are consistent, the measure is reliable” BIBREF17 . This example deals with the consistency of the measured objects as a factor for the reliability of measurements. In the case of authorship verification, the analyzed objects are static data, and hence these cannot be a source of inconsistency. However, the measurement system itself can behave inconsistently and hence unreliable. This aspect can be described as intra-rater reliability. Reliability in authorship verification is satisfied, if an AV method always generates the same prediction INLINEFORM0 for the same input INLINEFORM1 , or in other words, if the method behaves deterministically. Several AV approaches, including BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF16 fall into this category. In contrast, if an AV method behaves non-deterministically such that two different predictions for INLINEFORM2 are possible, the method can be rated as unreliable. Many AV approaches, including BIBREF4 , BIBREF13 , BIBREF26 , BIBREF1 , BIBREF27 , BIBREF3 , BIBREF28 , BIBREF29 , BIBREF30 belong to this category, since they involve randomness (e. g., weight initialization, feature subsampling, chunk generation or impostor selection), which might distort the evaluation, as every run on a test corpus very likely leads to different results. Under lab conditions, results of non-deterministic AV methods can (and should) be counteracted by averaging multiple runs. However, it remains highly questionable if such methods are generally applicable in realistic forensic cases, where the prediction INLINEFORM3 regarding a verification case INLINEFORM4 might sometimes result in Y and sometimes in N. Optimizability Another important property of an AV method is optimizability. 
We define an AV method as optimizable, if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category. Model Category From a machine learning point of view, authorship verification represents a unary classification problem BIBREF22 , BIBREF13 , BIBREF16 , BIBREF33 , BIBREF14 . Yet, in the literature, it can be observed that sometimes AV is treated as a unary BIBREF25 , BIBREF23 , BIBREF26 , BIBREF16 and sometimes as a binary classification task BIBREF30 , BIBREF32 , BIBREF22 , BIBREF2 . We define the way an AV approach is modeled by the phrase model category. However, before explaining this in more detail, we wish to recall what unary/one-class classification exactly represents. For this, we list the following verbatim quotes, which characterize one-class classification, as can be seen, almost identically (emphasis by us): “In one-class classification it is assumed that only information of one of the classes, the target class, is available. This means that just example objects of the target class can be used and that no information about the other class of outlier objects is present.” BIBREF34 “One-class classification (OCC) [...] consists in making a description of a target class of objects and in detecting whether a new object resembles this class or not. [...] The OCC model is developed using target class samples only.” BIBREF35 “In one-class classification framework, an object is classified as belonging or not belonging to a target class, while only sample examples of objects from the target class are available during the training phase.” BIBREF25 Note that in the context of authorship verification, target class refers to the known author INLINEFORM0 such that for a document INLINEFORM1 of an unknown author INLINEFORM2 the task is to verify whether INLINEFORM3 holds. One of the most important requirements of any existing AV method is a decision criterion, which aims to accept or reject a questioned authorship. A decision criterion can be expressed through a simple scalar threshold INLINEFORM4 or a more complex model INLINEFORM5 such as a hyperplane in a high-dimensional feature space. As a consequence of the above statements, the determination of INLINEFORM6 or INLINEFORM7 has to be performed solely on the basis of INLINEFORM8 , otherwise the AV method cannot be considered to be unary. 
However, our conducted literature research regarding existing AV approaches revealed that there are uncertainties how to precisely draw the borders between unary and binary AV methods (for instance, BIBREF36 , BIBREF16 , BIBREF33 ). Nonetheless, few attempts have been made to distinguish both categories from another perspective. Potha and Stamatatos BIBREF33 , for example, categorize AV methods as either intrinsic or extrinsic (emphasis by us): “Intrinsic verification models view it [i. e., the verification task] as a one-class classification task and are based exclusively on analysing the similarity between [ INLINEFORM0 ] and [ INLINEFORM1 ]. [...] Such methods [...] do not require any external resources.” BIBREF33 “On the other hand, extrinsic verification models attempt to transform the verification task to a pair classification task by considering external documents to be used as samples of the negative class.” BIBREF33 While we agree with statement (2), the former statement (1) is unsatisfactory, as intrinsic verification models are not necessarily unary. For example, the AV approach GLAD proposed by Hürlimann et al. BIBREF22 directly contradicts statement (1). Here, the authors “decided to cast the problem as a binary classification task where class values are Y [ INLINEFORM0 ] and N [ INLINEFORM1 ]. [...] We do not introduce any negative examples by means of external documents, thus adhering to an intrinsic approach.” BIBREF22 . A misconception similar to statement (1) can be observed in the paper of Jankowska et al. BIBREF24 , who introduced the so-called CNG approach claimed to be a one-class classification method. CNG is intrinsic in that way that it considers only INLINEFORM0 when deciding a problem INLINEFORM1 . However, the decision criterion, which is a threshold INLINEFORM2 , is determined on a set of verification problems, labeled either as Y or N. This incorporates “external resources” for defining the decision criterion, and it constitutes an implementation of binary classification between Y and N in analogy to the statement of Hürlimann et al. BIBREF22 mentioned above. Thus, CNG is in conflict with the unary definition mentioned above. In a subsequent paper BIBREF25 , however, Jankowska et al. refined their approach and introduced a modification, where INLINEFORM3 was determined solely on the basis of INLINEFORM4 . Thus, the modified approach can be considered as a true unary AV method, according to the quoted definitions for unary classification. In 2004, Koppel and Schler BIBREF13 presented the Unmasking approach which, according to the authors, represents a unary AV method. However, if we take a closer look at the learning process of Unmasking, we can see that it is based on a binary SVM classifier that consumes feature vectors (derived from “degradation curves”) labeled as Y (“same author”) or N (“different author”). Unmasking, therefore, cannot be considered to be unary as the decision is not solely based on the documents within INLINEFORM0 , in analogy to the CNG approach of Jankowska et al. BIBREF24 discussed above. It should be highlighted again that the aforementioned three approaches are binary-intrinsic since their decision criteria INLINEFORM1 or INLINEFORM2 was determined on a set of problems labeled in a binary manner (Y and N) while after training, the verification is performed in an intrinsic manner, meaning that INLINEFORM3 and INLINEFORM4 are compared against INLINEFORM5 or INLINEFORM6 but not against documents within other verification problems (cf. 
Figure FIGREF15 ). A crucial aspect, which might have lead to misperceptions regarding the model category of these approaches in the past, is the fact that two different class domains are involved. On the one hand, there is the class domain of authors, where the task is to distinguish INLINEFORM7 and INLINEFORM8 . On the other hand, there is the elevated or lifted domain of verification problem classes, which are Y and N. The training phase of binary-intrinsic approaches is used for learning to distinguish these two classes, and the verification task can be understood as putting the verification problem as a whole into class Y or class N, whereby the class domain of authors fades from the spotlight (cf. Figure FIGREF15 ). Besides unary and binary-intrinsic methods, there is a third category of approaches, namely binary-extrinsic AV approaches (for example, BIBREF3 , BIBREF30 , BIBREF29 , BIBREF37 , BIBREF32 , BIBREF1 , BIBREF2 ). These methods use external documents during a potentially existing training phase and – more importantly – during testing. In these approaches, the decision between INLINEFORM0 and INLINEFORM1 is put into the focus, where the external documents aim to construct the counter class INLINEFORM2 . Based on the above observations, we conclude that the key requirement for judging the model category of an AV method depends solely on the aspect how its decision criterion INLINEFORM0 or INLINEFORM1 is determined (cf. Figure FIGREF15 ): An AV method is unary if and only if its decision criterion INLINEFORM0 or INLINEFORM1 is determined solely on the basis of the target class INLINEFORM2 during testing. As a consequence, an AV method cannot be considered to be unary if documents not belonging to INLINEFORM3 are used to define INLINEFORM4 or INLINEFORM5 . An AV method is binary-intrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined on a training corpus comprising verification problems labeled either as Y or N (in other words documents of several authors). However, once the training is completed, a binary-intrinsic method has no access to external documents anymore such that the decision regarding the authorship of INLINEFORM2 is made on the basis of the reference data of INLINEFORM3 as well as INLINEFORM4 or INLINEFORM5 . An AV method is binary-extrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined during testing on the basis of external documents that represent the outlier class INLINEFORM2 . Note that optimizable AV methods such as BIBREF18 , BIBREF25 are not excluded to be unary. Provided that INLINEFORM0 or INLINEFORM1 is not subject of the optimization procedure, the model category remains unary. The reason for this is obvious; Hyperparameters might influence the resulting performance of unary AV methods. The decision criterion itself, however, remains unchanged. Implications Each model category has its own implications regarding prerequisites, evaluability, and applicability. One advantage of unary AV methods is that they do not require a specific document collection strategy to construct the counter class INLINEFORM0 , which reduces their complexity. On the downside, the choice of the underlying machine learning model of a unary AV approach is restricted to one-class classification algorithms or unsupervised learning techniques, given a suitable decision criterion. However, a far more important implication of unary AV approaches concerns their performance assessment. 
Since unary classification (not necessarily AV) approaches depend on a fixed decision criterion INLINEFORM0 or INLINEFORM1 , performance measures such as the area under the ROC curve (AUC) are meaningless. Recall that ROC analysis is used for evaluating classifiers, where the decision threshold is not finally fixed. ROC analysis requires that the classifier generates scores, which are comparable across classification problem instances. The ROC curve and the area under this curve is then computed by considering all possible discrimination thresholds for these scores. While unary AV approaches might produce such scores, introducing a variable INLINEFORM2 would change the semantics of these approaches. Since unary AV approaches have a fixed decision criterion, they provide only a single point in the ROC space. To assess the performance of a unary AV method, it is, therefore, mandatory to consider the confusion matrix that leads to this point in the ROC space. Another implication is that unary AV methods are necessarily instance-based and, thus, require a set INLINEFORM0 of multiple documents of the known author INLINEFORM1 . If only one reference document is available ( INLINEFORM2 ), this document must be artificially turned into multiple samples from the author. In general, unary classification methods need multiple samples from the target class since it is not possible to determine a relative closeness to that class based on only one sample. On the plus side, binary-intrinsic or extrinsic AV methods benefit from the fact that we can choose among a variety of binary and INLINEFORM0 -ary classification models. However, if we consider designing a binary-intrinsic AV method, it should not be overlooked that the involved classifier will learn nothing about individual authors, but only similarities or differences that hold in general for Y and N verification problems BIBREF32 . If, on the other hand, the choice falls on a binary-extrinsic method, a strategy has to be considered for collecting representative documents for the outlier class INLINEFORM0 . Several existing methods such as BIBREF32 , BIBREF1 , BIBREF2 rely on search engines for retrieving appropriate documents, but these search engines might refuse their service if a specified quota is exhausted. Additionally, the retrieved documents render these methods inherently non-deterministic. Moreover, such methods cause relatively high runtimes BIBREF11 , BIBREF5 . Using search engines also requires an active Internet connection, which might not be available or allowed in specific scenarios. But even if we can access the Internet to retrieve documents, there is no guarantee that the true author is not among them. With these points in mind, the applicability of binary-extrinsic methods in real-world cases, i. e., in real forensic settings, remains highly questionable. Methodology In the following, we introduce our three self-compiled corpora, where each corpus represents a different challenge. Next, we describe which authorship verification approaches we considered for the experiments and classify each AV method according to the properties introduced in Section SECREF3 . Afterwards, we explain which performance measures were selected with respect to the conclusion made in Section UID17 . Finally, we describe our experiments, present the results and highlight a number of observations. Corpora A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. 
Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed. As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems. As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. 
Moreover, we did not normalize words (for example, shortening words such as “nooooo” to “no”), as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style. As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that the known and unknown documents INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to what extent AV methods can be fooled in cases where the topic matches but not the authorship, and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used a 40/60% hold-out split for the resulting 100 verification problems, where both training and test set are entirely disjoint. Examined Authorship Verification Methods As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potential in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 . All (optimizable) AV methods were tuned regarding their hyperparameters, according to the original procedure mentioned in the respective paper. However, in the case of the binary-extrinsic methods (GenIM, ImpGI and NNCD), we had to use an alternative impostor generation strategy in our reimplementations, due to technical problems. In the respective papers, the authors used search engine queries to generate the impostor documents, which are needed to model the counter class INLINEFORM0 . Regarding our reimplementations, we used the documents from the static corpora (similarly to the idea of Kocher and Savoy BIBREF30 ) to generate the impostors in the following manner: Let INLINEFORM1 denote a corpus with INLINEFORM2 verification problems. For each INLINEFORM3 we choose all unknown documents INLINEFORM4 in INLINEFORM5 with INLINEFORM6 and append them to the impostor set INLINEFORM7 . Here, it should be highlighted that both GenIM and ImpGI consider the number of impostors as a hyperparameter such that the resulting impostor set is a subset of INLINEFORM8 . In contrast to this, NNCD considers all INLINEFORM9 as possible impostors. This fact plays an important role in the later experiments, where we compare the AV approaches to each other. Although our strategy is not as flexible as using a search engine, it has one advantage: here, it can be assumed that the true author of an unknown document is not among the impostors, since in our corpora the user/author names are known beforehand. Performance Measures According to our extensive literature research, numerous measures (e. g., Accuracy, F INLINEFORM0 , c@1, AUC, AUC@1, INLINEFORM1 or EER) have been used so far to assess the performance of AV methods.
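Among the measures just listed, c@1 is noteworthy because it rewards leaving difficult problems unanswered. The following sketch assumes the commonly used definition c@1 = nc/n + (nu · nc)/n², where nc is the number of correct answers and nu the number of unanswered problems among n verification problems:

```python
def c_at_1(predictions, truths):
    """c@1 = nc/n + (nu * nc) / n**2, with nc = correct answers and
    nu = unanswered problems (encoded here as prediction None)."""
    n = len(truths)
    nc = sum(1 for p, t in zip(predictions, truths) if p is not None and p == t)
    nu = sum(1 for p in predictions if p is None)
    return nc / n + (nu * nc) / (n * n)

# Example: 3 correct, 1 wrong, 1 left unanswered out of 5 problems.
print(c_at_1(["Y", "N", "Y", None, "N"], ["Y", "N", "Y", "Y", "Y"]))  # 0.72
```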
In regard to our experiments, we decided to use c@1 and AUC for several reasons. First, Accuracy, F INLINEFORM2 and INLINEFORM3 are not applicable in cases where AV methods leave verification problems unanswered, which concerns some of our examined AV approaches. Second, using AUC alone is meaningless for non-optimizable AV methods, as explained in Section UID17 . Third, both have been used in the PAN-AV competitions BIBREF5 , BIBREF12 . Note that we also list the confusion matrix outcomes. Experiments Overall, we focus on three experiments, which are based on the corpora introduced in Section SECREF21 : The Effect of Stylistic Variation Across Large Time Spans The Effect of Topical Influence The Effect of Limited Text Length In the following each experiment is described in detail. In this experiment, we seek to answer the question if the writing style of an author INLINEFORM0 can be recognized, given a large time span between two documents of INLINEFORM1 . The motivation behind this experiment is based on the statement of Olsson BIBREF38 that language acquisition is a continuous process, which is not only acquired, but also can be lost. Therefore, an important question that arises here is, if the writing style of a person remains “stable” across a large time span, given the fact that language in each individual's life is never “fixed” BIBREF38 . Regarding this experiment, we used the INLINEFORM2 corpus. The results of the 12 examined AV methods are listed in Table TABREF41 , where it can be seen that the majority of the examined AV methods yield useful recognition results with a maximum value of 0.792 in terms of c@1. With the exception of the binary-intrinsic approach COAV, the remaining top performing methods belong to the binary-extrinsic category. This category of AV methods has also been superior in the PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 , where they outperformed binary-intrinsic and unary approaches three times in a row (2013–2015). The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand. Another interesting observation can be made regarding COAV, NNCD and OCCAV. Although all three differ regarding their model category, they use the same underlying compression algorithm (PPMd) that is responsible for generating the language model. 
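The Caravel-style calibration described above can be sketched as follows; `similarity` is a hypothetical scoring function, and the snippet is meant only to make explicit that the threshold is derived from the test corpus itself, which presupposes a balanced Y/N-distribution:

```python
from statistics import median

def verify_with_median_threshold(problems, similarity):
    """problems: list of (known_docs, unknown_doc) pairs forming one test corpus.
    similarity: hypothetical function scoring how well the unknown fits the known.

    Caravel-style calibration: compute all scores first, then use their median
    as the decision criterion -- valid only if the corpus is balanced."""
    scores = [similarity(known, unknown) for known, unknown in problems]
    theta = median(scores)
    return ["Y" if s > theta else "N" for s in scores]
```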
While the former two approaches perform similarly well, OCCAV achieves a poor c@1 score ( INLINEFORM0 ). An obvious explanation for this is a wrongly calibrated threshold INLINEFORM1 , as can be seen from the confusion matrix, where almost all answers are N-predictions. Regarding the NNCD approach, one should consider that INLINEFORM2 is compared against INLINEFORM3 as well as INLINEFORM4 impostors within a corpus comprised of INLINEFORM5 verification problems. Therefore, a Y-result is correct with relatively high certainty (i. e., the method has high precision compared to other approaches with a similar c@1 score), as NNCD decided that author INLINEFORM6 fits best to INLINEFORM7 among INLINEFORM8 candidates. In contrast to Caravel, NNCD only retrieves the impostors from the given corpus, but it does not exploit background knowledge about the distribution of problems in the corpus. Overall, the results indicate that it is possible to recognize writing styles across large time spans. To gain more insights regarding the question of which features led to the correct predictions, we inspected the AVeer method. Although the method achieved only average results, it benefits from the fact that it can be interpreted easily, as it relies on a simple distance function, a fixed threshold INLINEFORM0 and predefined feature categories such as function words. Regarding the correctly recognized Y-cases, we noticed that conjunctive adverbs such as “hence”, “therefore” or “moreover” contributed most to AVeer's correct predictions. However, a more in-depth analysis is required in future work to figure out whether the decisions of the remaining methods are also primarily affected by these features. In this experiment, we investigate the question of whether the writing style of authors can be recognized under the influence of topical bias. In real-world scenarios, the topic of the documents within a verification problem INLINEFORM0 is not always known beforehand, which can pose a serious challenge for the recognition of the writing style. Imagine, for example, that INLINEFORM1 consists of a known and an unknown document INLINEFORM2 and INLINEFORM3 that are written by the same author ( INLINEFORM4 ) but at the same time differ in their topic. In such a case, an AV method that focuses “too much” on the topic (for example, on specific nouns or phrases) will likely predict a different authorship ( INLINEFORM5 ). On the other hand, when INLINEFORM6 and INLINEFORM7 match regarding their topic while being written by different authors, a topically biased AV method might erroneously predict INLINEFORM8 . In the following, we show to what extent these assumptions hold. As a data basis for this experiment, we used the INLINEFORM0 corpus introduced in Section UID30 . The results regarding the 12 AV methods are given in Table TABREF44 , where it can be seen that our assumptions hold. All examined AV methods (with no exception) are fooled by the topical bias in the corpus. Here, the highest achieved results in terms of c@1 and AUC are very close to random guessing. A closer look at the confusion matrix outcomes reveals that some methods, for example ImpGI and OCCAV, perform almost entirely inversely to each other, where the former predicts nothing but Y and the latter nothing but N (except 1 Y). Moreover, we can assume that the lower c@1 is, the stronger the focus of the respective AV method on the topic of the documents.
Overall, the results of this experiment suggest that none of the examined AV methods is robust against topical influence. In our third experiment, we investigate the question of how text length affects the results of the examined AV methods. The motivation behind this experiment is based on the observation of Stamatatos et al. BIBREF12 that text length is an important issue, which has not been thoroughly studied within authorship verification research. To address this issue, we make use of the INLINEFORM0 corpus introduced in Section UID28 . The corpus is suitable for this purpose, as it comprises a large number of verification problems, where more than 90% of all documents have sufficient text lengths ( INLINEFORM1 2,000 characters). This allows a stepwise truncation and, by this, an analysis of the relationship between text length and the recognition results. However, before considering this, we first focus on the results (shown in Table TABREF46 ) after applying all 12 AV methods to the original test corpus. As can be seen in Table TABREF46 , almost all approaches perform very well with c@1 scores up to 0.991. Although these results are quite impressive, it should be noted that a large fraction of the documents comprises thousands of words. Thus, the methods can learn precise representations based on a large variety of features, which in turn enables a good determination of (dis)similarities between known/unknown documents. To investigate this issue in more detail, we constructed four versions of the test corpus and equalized the unknown document lengths to 250, 500, 1000, and 2000 characters. Then, we applied the top-performing AV methods with a c@1 value INLINEFORM0 to the four corpora. Here, we reused the same models and hyperparameters (including the decision criteria INLINEFORM1 and INLINEFORM2 ) that were determined on the training corpus. The intention behind this was to observe the robustness of the trained AV models, given the fact that during training they were confronted with longer documents. The results are illustrated in Figure FIGREF47 , where it can be observed that GLAD yields the most stable results across the four corpus versions; even for the corpus with the 250-character unknown documents, it achieves a c@1 score of 0.727. Surprisingly, Unmasking performs similarly well, despite the fact that the method has been designed for longer texts, i. e., book chunks of at least 500 words BIBREF13 . Sanderson and Guenter also point out that the Unmasking approach is less useful when dealing with relatively short texts BIBREF40 . However, our results show a different picture, at least for this corpus. One explanation for the resilience of GLAD across the varying text lengths might be its decision model INLINEFORM0 (an SVM with a linear kernel), which withstands the absence of features caused by the truncation of the documents, in contrast to the distance-based approaches AVeer, NNCD and COAV, where the decision criterion INLINEFORM1 is reflected by a simple scalar. Table TABREF48 lists the confusion matrix outcomes of the six AV methods regarding the 250-character version of INLINEFORM2 . Here, it can be seen that the underlying SVM model of GLAD and Unmasking is able to regulate its Y/N-predictions, in contrast to the three distance-based methods, where the majority of predictions fall either on the Y- or on the N-side.
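The length-equalization step used in this experiment can be sketched as follows; only the unknown documents are truncated, while the trained models and decision criteria remain untouched (the variable names are illustrative):

```python
def truncate_unknown_documents(problems, max_chars):
    """Return a copy of the test corpus in which every unknown document is cut
    to at most `max_chars` characters; the known documents are left as-is."""
    return [(known_docs, unknown[:max_chars]) for known_docs, unknown in problems]

# Four corpus versions analogous to the experiment (250, 500, 1000, 2000 characters),
# where `test_problems` is a hypothetical list of (known_docs, unknown_doc) pairs:
# corpus_versions = {n: truncate_unknown_documents(test_problems, n)
#                    for n in (250, 500, 1000, 2000)}
```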
To gain a better picture regarding the stability of the decision criteria INLINEFORM0 and INLINEFORM1 of the methods, we decided to take a closer look at the ROC curves (cf. Figure FIGREF49 ) generated by GLAD, Caravel and COAV for the four corpus versions, where a number of interesting observations can be made. When focusing on AUC, it turns out that all three methods perform very similarly to each other, whereas big discrepancies between GLAD and COAV can be observed regarding c@1. When we consider the current and maximum achievable results (depicted by the circles and triangles, respectively), it becomes apparent that GLAD's model behaves stably, while that of COAV becomes increasingly vulnerable the more the documents are shortened. When looking at the ROC curve of Caravel, it can be clearly seen that the actual and maximum achievable results are very close to each other. This is not surprising, due to the fact that Caravel's threshold always lies at the median point of the ROC curve, provided that the given corpus is balanced. While inspecting the 250-character documents in more detail, we identified that they share similar vocabularies consisting of chat abbreviations such as “lol” (laughing out loud) or “k” (ok), smileys and specific obscene words. Therefore, we assume that the verification results of the examined methods are mainly caused by the similar vocabularies between the texts. Conclusion and Future Work We highlighted the problem that the underlying characteristics of authorship verification approaches have received little attention in past research and that these affect the applicability of the methods in real forensic settings. Then, we proposed several properties that enable a better characterization and, by this, a better comparison between AV methods. Among others, we explained that the performance measure AUC is meaningless in regard to unary or specific non-optimizable AV methods, which involve a fixed decision criterion (for example, NNCD). Additionally, we mentioned that determinism must be fulfilled such that an AV method can be rated as reliable. Moreover, we clarified a number of misunderstandings in previous research works and proposed three clear criteria that allow the model category of an AV method to be classified, which in turn influences its design and the way it should be evaluated. In regard to binary-extrinsic AV approaches, we explained which challenges exist and how they affect their applicability. In an experimental setup, we applied 12 existing AV methods to three self-compiled corpora, where the intention behind each corpus was to focus on a different aspect of the methods' applicability. Our findings regarding the examined approaches can be summarized as follows: Despite the good performance of the five AV methods GenIM, ImpGI, Unmasking, Caravel and SPATIUM, none of them can truly be considered reliable and therefore applicable in real forensic cases. The reason for this is not only the non-deterministic behavior of the methods but also their dependence (except for Unmasking) on an impostor corpus. Here, it must be guaranteed not only that the true author is not among the candidates, but also that the impostor documents are suitable such that the AV task does not inadvertently degenerate from style to topic classification. In particular, the applicability of the Caravel approach remains highly questionable, as it requires a corpus where the information regarding the Y/N-distribution is known beforehand in order to set the threshold.
In regard to the two examined unary AV approaches MOCC and OCCAV, we observed that these perform poorly on all three corpora in comparison to the binary-intrinsic and binary-extrinsic methods. Most likely, this is caused by wrongly calibrated thresholds, as both tend to generate more N-predictions. Among the remaining approaches, GLAD and COAV seem to be good choices for realistic scenarios. However, the former has been shown to be more robust with regard to varying text lengths given a fixed model, while the latter requires a retraining of the model (note that both performed almost equally in terms of AUC). Our hypothesis, which we leave open for future work, is that AV methods relying on a complex model INLINEFORM0 are more robust than methods based on a scalar threshold INLINEFORM1 . Lastly, we wish to underline that all examined approaches failed in the cross-topic experiment. One possibility to counteract this is to apply text distortion techniques (for instance, BIBREF41 ) in order to control the topic influence in the documents. As a next step, we will compile additional and larger corpora to investigate the question of whether the evaluation results of this paper hold more generally. Furthermore, we will address the important question of how the results of AV methods can be interpreted in a more systematic manner, which, besides the proposed properties, will further influence the practicability of AV methods. This work was supported by the German Federal Ministry of Education and Research (BMBF) under the project "DORIAN" (Scrutinise and thwart disinformation).
Yes
b217d9730ba469f48426280945dbb77542b39183
b217d9730ba469f48426280945dbb77542b39183_0
Q: Which is the best performing method? Text: Introduction Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in such scenarios, where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed. In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches were becoming more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics. Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge. The rest of this paper is structured as follows. Section SECREF2 discusses the related work that served as an inspiration for our analysis. Section SECREF3 comprises the proposed criteria and properties to characterize AV methods. Section SECREF4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section SECREF5 concludes the work and outlines future work. Related Work Over the years, researchers in the field of authorship analysis identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. BIBREF8 , for example, focused on the questions if the writing styles of authors of short texts change over time and how this affects AA. 
To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors, despite of large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes. In recent years, the new research field author obfuscation (AO) evolved, which concerns itself with the task to fool AA or AV methods in a way that the true author cannot be correctly recognized anymore. To achieve this, AO approaches which, according to Gröndahl and Asokan BIBREF9 can be divided into manual, computer-assisted and automatic types, perform a variety of modifications on the texts. These include simple synonym replacements, rule-based substitutions or word order permutations. In 2016, Potthast et al. BIBREF10 presented the first large-scale evaluation of three AO approaches that aim to attack 44 AV methods, which were submitted to the PAN-AV competitions during 2013-2015 BIBREF11 , BIBREF5 , BIBREF12 . One of their findings was that even basic AO approaches have a significant impact on many AV methods. More precisely, the best-performing AO approach was able to flip on average INLINEFORM0 % of an authorship verifier’s decisions towards choosing N (“different author”), while in fact Y (“same author”) was correct BIBREF10 . In contrast to Potthast et al., we do not focus on AO to measure the robustness of AV methods. Instead, we investigate in one experiment the question how trained AV models behave, if the lengths of the questioned documents are getting shorter and shorter. To our best knowledge, this question has not been addressed in previous authorship verification studies. Characteristics of Authorship Verification Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. 
In 2009, Stamatatos BIBREF15 coined the phrases profile- and instance-based approaches that initially were used in the field of AA, but later found their way also into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods. However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to draw the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, where we redefine previous definitions and propose new properties that enable a better comparison between AV methods. Reliability (Determinism) Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since there is no consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification according to the screened literature, we decided to reuse a definition from applied statistics, and adapt it carefully to AV. In his standard reference book, Bollen BIBREF17 gives a clear description for this term: “Reliability is the consistency of measurement” and provides a simple example to illustrate its meaning: At time INLINEFORM0 we ask a large number of persons the same question Q and record their responses. Afterwards, we remove their memory of the dialogue. At time INLINEFORM1 we ask them again the same question Q and record their responses again. “The reliability is the consistency of the responses across individuals for the two time periods. To the extent that all individuals are consistent, the measure is reliable” BIBREF17 . This example deals with the consistency of the measured objects as a factor for the reliability of measurements. In the case of authorship verification, the analyzed objects are static data, and hence these cannot be a source of inconsistency. However, the measurement system itself can behave inconsistently and hence unreliable. This aspect can be described as intra-rater reliability. Reliability in authorship verification is satisfied, if an AV method always generates the same prediction INLINEFORM0 for the same input INLINEFORM1 , or in other words, if the method behaves deterministically. Several AV approaches, including BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF16 fall into this category. In contrast, if an AV method behaves non-deterministically such that two different predictions for INLINEFORM2 are possible, the method can be rated as unreliable. Many AV approaches, including BIBREF4 , BIBREF13 , BIBREF26 , BIBREF1 , BIBREF27 , BIBREF3 , BIBREF28 , BIBREF29 , BIBREF30 belong to this category, since they involve randomness (e. g., weight initialization, feature subsampling, chunk generation or impostor selection), which might distort the evaluation, as every run on a test corpus very likely leads to different results. Under lab conditions, results of non-deterministic AV methods can (and should) be counteracted by averaging multiple runs. However, it remains highly questionable if such methods are generally applicable in realistic forensic cases, where the prediction INLINEFORM3 regarding a verification case INLINEFORM4 might sometimes result in Y and sometimes in N. Optimizability Another important property of an AV method is optimizability. 
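Returning briefly to the reliability discussion above: under lab conditions, the effect of non-determinism can be counteracted by averaging multiple runs. The following sketch assumes a hypothetical scoring function `verify_once` that returns a value in [0, 1] and internally relies on the seeded random module:

```python
import random

def averaged_verdict(problem, verify_once, runs=10, theta=0.5):
    """Average the scores of several independent runs of a non-deterministic
    verifier (e.g. one using random impostor selection or weight initialization)
    and apply the decision criterion theta only to the averaged score."""
    scores = []
    for seed in range(runs):
        random.seed(seed)                 # make each individual run reproducible
        scores.append(verify_once(problem))
    mean_score = sum(scores) / runs
    return "Y" if mean_score >= theta else "N"
```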
We define an AV method as optimizable, if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category. Model Category From a machine learning point of view, authorship verification represents a unary classification problem BIBREF22 , BIBREF13 , BIBREF16 , BIBREF33 , BIBREF14 . Yet, in the literature, it can be observed that sometimes AV is treated as a unary BIBREF25 , BIBREF23 , BIBREF26 , BIBREF16 and sometimes as a binary classification task BIBREF30 , BIBREF32 , BIBREF22 , BIBREF2 . We define the way an AV approach is modeled by the phrase model category. However, before explaining this in more detail, we wish to recall what unary/one-class classification exactly represents. For this, we list the following verbatim quotes, which characterize one-class classification, as can be seen, almost identically (emphasis by us): “In one-class classification it is assumed that only information of one of the classes, the target class, is available. This means that just example objects of the target class can be used and that no information about the other class of outlier objects is present.” BIBREF34 “One-class classification (OCC) [...] consists in making a description of a target class of objects and in detecting whether a new object resembles this class or not. [...] The OCC model is developed using target class samples only.” BIBREF35 “In one-class classification framework, an object is classified as belonging or not belonging to a target class, while only sample examples of objects from the target class are available during the training phase.” BIBREF25 Note that in the context of authorship verification, target class refers to the known author INLINEFORM0 such that for a document INLINEFORM1 of an unknown author INLINEFORM2 the task is to verify whether INLINEFORM3 holds. One of the most important requirements of any existing AV method is a decision criterion, which aims to accept or reject a questioned authorship. A decision criterion can be expressed through a simple scalar threshold INLINEFORM4 or a more complex model INLINEFORM5 such as a hyperplane in a high-dimensional feature space. As a consequence of the above statements, the determination of INLINEFORM6 or INLINEFORM7 has to be performed solely on the basis of INLINEFORM8 , otherwise the AV method cannot be considered to be unary. 
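As a minimal illustration of the unary setting described above, where the decision criterion is derived solely from documents of the known author, consider the following sketch based on a one-class SVM over character n-gram features (an illustrative example using scikit-learn, not one of the examined AV methods):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

def fit_unary_verifier(known_docs):
    """Fit the feature space and the decision boundary using *only* the
    documents of the known author A (the target class)."""
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
    X = vectorizer.fit_transform(known_docs)
    model = OneClassSVM(kernel="linear", nu=0.1).fit(X)
    return vectorizer, model

def verify(vectorizer, model, unknown_doc):
    # +1 means "resembles the target class" (Y), -1 means outlier (N).
    return "Y" if model.predict(vectorizer.transform([unknown_doc]))[0] == 1 else "N"
```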
However, our literature research regarding existing AV approaches revealed that there are uncertainties about how to precisely draw the borders between unary and binary AV methods (for instance, BIBREF36 , BIBREF16 , BIBREF33 ). Nonetheless, few attempts have been made to distinguish both categories from another perspective. Potha and Stamatatos BIBREF33 , for example, categorize AV methods as either intrinsic or extrinsic (emphasis by us): “Intrinsic verification models view it [i. e., the verification task] as a one-class classification task and are based exclusively on analysing the similarity between [ INLINEFORM0 ] and [ INLINEFORM1 ]. [...] Such methods [...] do not require any external resources.” BIBREF33 “On the other hand, extrinsic verification models attempt to transform the verification task to a pair classification task by considering external documents to be used as samples of the negative class.” BIBREF33 While we agree with statement (2), the former statement (1) is unsatisfactory, as intrinsic verification models are not necessarily unary. For example, the AV approach GLAD proposed by Hürlimann et al. BIBREF22 directly contradicts statement (1). Here, the authors “decided to cast the problem as a binary classification task where class values are Y [ INLINEFORM0 ] and N [ INLINEFORM1 ]. [...] We do not introduce any negative examples by means of external documents, thus adhering to an intrinsic approach.” BIBREF22 . A misconception similar to statement (1) can be observed in the paper of Jankowska et al. BIBREF24 , who introduced the so-called CNG approach, which is claimed to be a one-class classification method. CNG is intrinsic in the sense that it considers only INLINEFORM0 when deciding a problem INLINEFORM1 . However, the decision criterion, which is a threshold INLINEFORM2 , is determined on a set of verification problems, labeled either as Y or N. This incorporates “external resources” for defining the decision criterion, and it constitutes an implementation of binary classification between Y and N in analogy to the statement of Hürlimann et al. BIBREF22 mentioned above. Thus, CNG is in conflict with the unary definition mentioned above. In a subsequent paper BIBREF25 , however, Jankowska et al. refined their approach and introduced a modification, where INLINEFORM3 was determined solely on the basis of INLINEFORM4 . Thus, the modified approach can be considered a true unary AV method, according to the quoted definitions for unary classification. In 2004, Koppel and Schler BIBREF13 presented the Unmasking approach which, according to the authors, represents a unary AV method. However, if we take a closer look at the learning process of Unmasking, we can see that it is based on a binary SVM classifier that consumes feature vectors (derived from “degradation curves”) labeled as Y (“same author”) or N (“different author”). Unmasking, therefore, cannot be considered to be unary as the decision is not solely based on the documents within INLINEFORM0 , in analogy to the CNG approach of Jankowska et al. BIBREF24 discussed above. It should be highlighted again that the aforementioned three approaches are binary-intrinsic since their decision criteria INLINEFORM1 or INLINEFORM2 were determined on a set of problems labeled in a binary manner (Y and N), while, after training, the verification is performed in an intrinsic manner, meaning that INLINEFORM3 and INLINEFORM4 are compared against INLINEFORM5 or INLINEFORM6 but not against documents within other verification problems (cf.
Figure FIGREF15 ). A crucial aspect, which might have led to misperceptions regarding the model category of these approaches in the past, is the fact that two different class domains are involved. On the one hand, there is the class domain of authors, where the task is to distinguish INLINEFORM7 and INLINEFORM8 . On the other hand, there is the elevated or lifted domain of verification problem classes, which are Y and N. The training phase of binary-intrinsic approaches is used for learning to distinguish these two classes, and the verification task can be understood as putting the verification problem as a whole into class Y or class N, whereby the class domain of authors fades from the spotlight (cf. Figure FIGREF15 ). Besides unary and binary-intrinsic methods, there is a third category of approaches, namely binary-extrinsic AV approaches (for example, BIBREF3 , BIBREF30 , BIBREF29 , BIBREF37 , BIBREF32 , BIBREF1 , BIBREF2 ). These methods use external documents during a potentially existing training phase and – more importantly – during testing. In these approaches, the decision between INLINEFORM0 and INLINEFORM1 is put into the focus, where the external documents aim to construct the counter class INLINEFORM2 . Based on the above observations, we conclude that the model category of an AV method depends solely on how its decision criterion INLINEFORM0 or INLINEFORM1 is determined (cf. Figure FIGREF15 ): An AV method is unary if and only if its decision criterion INLINEFORM0 or INLINEFORM1 is determined solely on the basis of the target class INLINEFORM2 during testing. As a consequence, an AV method cannot be considered to be unary if documents not belonging to INLINEFORM3 are used to define INLINEFORM4 or INLINEFORM5 . An AV method is binary-intrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined on a training corpus comprising verification problems labeled either as Y or N (in other words, documents of several authors). However, once the training is completed, a binary-intrinsic method has no access to external documents anymore such that the decision regarding the authorship of INLINEFORM2 is made on the basis of the reference data of INLINEFORM3 as well as INLINEFORM4 or INLINEFORM5 . An AV method is binary-extrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined during testing on the basis of external documents that represent the outlier class INLINEFORM2 . Note that optimizable AV methods such as BIBREF18 , BIBREF25 are not excluded from being unary. Provided that INLINEFORM0 or INLINEFORM1 is not subject to the optimization procedure, the model category remains unary. The reason for this is obvious: hyperparameters might influence the resulting performance of unary AV methods. The decision criterion itself, however, remains unchanged. Implications Each model category has its own implications regarding prerequisites, evaluability, and applicability. One advantage of unary AV methods is that they do not require a specific document collection strategy to construct the counter class INLINEFORM0 , which reduces their complexity. On the downside, the choice of the underlying machine learning model of a unary AV approach is restricted to one-class classification algorithms or unsupervised learning techniques, given a suitable decision criterion. However, a far more important implication of unary AV approaches concerns their performance assessment.
Since unary classification (not necessarily AV) approaches depend on a fixed decision criterion INLINEFORM0 or INLINEFORM1 , performance measures such as the area under the ROC curve (AUC) are meaningless. Recall that ROC analysis is used for evaluating classifiers, where the decision threshold is not finally fixed. ROC analysis requires that the classifier generates scores, which are comparable across classification problem instances. The ROC curve and the area under this curve is then computed by considering all possible discrimination thresholds for these scores. While unary AV approaches might produce such scores, introducing a variable INLINEFORM2 would change the semantics of these approaches. Since unary AV approaches have a fixed decision criterion, they provide only a single point in the ROC space. To assess the performance of a unary AV method, it is, therefore, mandatory to consider the confusion matrix that leads to this point in the ROC space. Another implication is that unary AV methods are necessarily instance-based and, thus, require a set INLINEFORM0 of multiple documents of the known author INLINEFORM1 . If only one reference document is available ( INLINEFORM2 ), this document must be artificially turned into multiple samples from the author. In general, unary classification methods need multiple samples from the target class since it is not possible to determine a relative closeness to that class based on only one sample. On the plus side, binary-intrinsic or extrinsic AV methods benefit from the fact that we can choose among a variety of binary and INLINEFORM0 -ary classification models. However, if we consider designing a binary-intrinsic AV method, it should not be overlooked that the involved classifier will learn nothing about individual authors, but only similarities or differences that hold in general for Y and N verification problems BIBREF32 . If, on the other hand, the choice falls on a binary-extrinsic method, a strategy has to be considered for collecting representative documents for the outlier class INLINEFORM0 . Several existing methods such as BIBREF32 , BIBREF1 , BIBREF2 rely on search engines for retrieving appropriate documents, but these search engines might refuse their service if a specified quota is exhausted. Additionally, the retrieved documents render these methods inherently non-deterministic. Moreover, such methods cause relatively high runtimes BIBREF11 , BIBREF5 . Using search engines also requires an active Internet connection, which might not be available or allowed in specific scenarios. But even if we can access the Internet to retrieve documents, there is no guarantee that the true author is not among them. With these points in mind, the applicability of binary-extrinsic methods in real-world cases, i. e., in real forensic settings, remains highly questionable. Methodology In the following, we introduce our three self-compiled corpora, where each corpus represents a different challenge. Next, we describe which authorship verification approaches we considered for the experiments and classify each AV method according to the properties introduced in Section SECREF3 . Afterwards, we explain which performance measures were selected with respect to the conclusion made in Section UID17 . Finally, we describe our experiments, present the results and highlight a number of observations. Corpora A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. 
Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed. As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems. As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. 
Moreover, we did not normalize words (for example, shorten words such as “nooooo” to “no”) as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style. As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that both the known and unknown document INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to which extent AV methods can be fooled in cases, where the topic matches but not the authorship and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used for the resulting 100 verification problems a 40/60% hold-out split, where both training and test set are entirely disjoint. Examined Authorship Verification Methods As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potentials in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 . All (optimizable) AV methods were tuned regarding their hyperparameters, according to the original procedure mentioned in the respective paper. However, in the case of the binary-extrinsic methods (GenIM, ImpGI and NNCD) we had to use an alternative impostors generation strategy in our reimplementations, due to technical problems. In the respective papers, the authors used search engine queries to generate the impostor documents, which are needed to model the counter class INLINEFORM0 . Regarding our reimplementations, we used the documents from the static corpora (similarly to the idea of Kocher and Savoy BIBREF30 ) to generate the impostors in the following manner: Let INLINEFORM1 denote a corpus with INLINEFORM2 verification problems. For each INLINEFORM3 we choose all unknown documents INLINEFORM4 in INLINEFORM5 with INLINEFORM6 and append them the impostor set INLINEFORM7 . Here, it should be highlighted that both GenIM and ImpGI consider the number of impostors as a hyperparameter such that the resulting impostor set is a subset of INLINEFORM8 . In contrast to this, NNCD considers all INLINEFORM9 as possible impostors. This fact plays an important role in the later experiments, where we compare the AV approaches to each other. Although our strategy is not flexible like using a search engine, it has one advantage that, here, it is assumed that the true author of an unknown document is not among the impostors, since in our corpora the user/author names are known beforehand. Performance Measures According to our extensive literature research, numerous measures (e. g., Accuracy, F INLINEFORM0 , c@1, AUC, AUC@1, INLINEFORM1 or EER) have been used so far to assess the performance of AV methods. 
In regard to our experiments, we decided to use c@1 and AUC for several reasons. First, Accuracy, F INLINEFORM2 and INLINEFORM3 are not applicable in cases where AV methods leave verification problems unanswered, which concerns some of our examined AV approaches. Second, using AUC alone is meaningless for non-optimizable AV methods, as explained in Section UID17 . Third, both have been used in the PAN-AV competitions BIBREF5 , BIBREF12 . Note that we also list the confusion matrix outcomes. Experiments Overall, we focus on three experiments, which are based on the corpora introduced in Section SECREF21 : The Effect of Stylistic Variation Across Large Time Spans The Effect of Topical Influence The Effect of Limited Text Length In the following each experiment is described in detail. In this experiment, we seek to answer the question if the writing style of an author INLINEFORM0 can be recognized, given a large time span between two documents of INLINEFORM1 . The motivation behind this experiment is based on the statement of Olsson BIBREF38 that language acquisition is a continuous process, which is not only acquired, but also can be lost. Therefore, an important question that arises here is, if the writing style of a person remains “stable” across a large time span, given the fact that language in each individual's life is never “fixed” BIBREF38 . Regarding this experiment, we used the INLINEFORM2 corpus. The results of the 12 examined AV methods are listed in Table TABREF41 , where it can be seen that the majority of the examined AV methods yield useful recognition results with a maximum value of 0.792 in terms of c@1. With the exception of the binary-intrinsic approach COAV, the remaining top performing methods belong to the binary-extrinsic category. This category of AV methods has also been superior in the PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 , where they outperformed binary-intrinsic and unary approaches three times in a row (2013–2015). The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand. Another interesting observation can be made regarding COAV, NNCD and OCCAV. Although all three differ regarding their model category, they use the same underlying compression algorithm (PPMd) that is responsible for generating the language model. 
While the former two approaches perform similarly well, OCCAV achieves a poor c@1 score ( INLINEFORM0 ). An obvious explanation for this is a wrongly calibrated threshold INLINEFORM1 , as can be seen from the confusion matrix, where almost all answers are N-predictions. Regarding the NNCD approach, one should consider that INLINEFORM2 is compared against INLINEFORM3 as well as INLINEFORM4 impostors within a corpus comprised of INLINEFORM5 verification problems. Therefore, a Y-result is correct with relatively high certainty (i. e., the method has high precision compared to other approaches with a similar c@1 score), as NNCD decided that author INLINEFORM6 fits best to INLINEFORM7 among INLINEFORM8 candidates. In contrast to Caravel, NNCD only retrieves the impostors from the given corpus, but it does not exploit background knowledge about the distribution of problems in the corpus. Overall, the results indicate that it is possible to recognize writing styles across large time spans. To gain more insights regarding the question which features led to the correct predictions, we inspected the AVeer method. Although the method achieved only average results, it benefits from the fact that it can be interpreted easily, as it relies on a simple distance function, a fixed threshold INLINEFORM0 and predefined feature categories such as function words. Regarding the correctly recognized Y-cases, we noticed that conjunctive adverbs such as “hence”, “therefore” or “moreover” contributed mostly to AVeer's correct predictions. However, a more in-depth analysis is required in future work to figure out whether the decisions of the remaining methods are also primarily affected by these features. In this experiment, we investigate the question if the writing style of authors can be recognized under the influence of topical bias. In real-world scenarios, the topic of the documents within a verification problem INLINEFORM0 is not always known beforehand, which can lead to a serious challenge regarding the recognition of the writing style. Imagine, for example, that INLINEFORM1 consists of a known and unknown document INLINEFORM2 and INLINEFORM3 that are written by the same author ( INLINEFORM4 ) while at the same time differ regarding their topic. In such a case, an AV method that it focusing “too much” on the topic (for example on specific nouns or phrases) will likely predict a different authorship ( INLINEFORM5 ). On the other hand, when INLINEFORM6 and INLINEFORM7 match regarding their topic, while being written by different authors, a topically biased AV method might erroneously predict INLINEFORM8 . In the following we show to which extent these assumptions hold. As a data basis for this experiment, we used the INLINEFORM0 corpus introduced in Section UID30 . The results regarding the 12 AV methods are given in Table TABREF44 , where it can be seen that our assumptions hold. All examined AV methods (with no exception) are fooled by the topical bias in the corpus. Here, the highest achieved results in terms of c@1 and AUC are very close to random guessing. A closer look at the confusion matrix outcomes reveals that some methods, for example ImpGI and OCCAV, perform almost entirely inverse to each other, where the former predicts nothing but Y and the latter nothing but N (except 1 Y). Moreover, we can assume that the lower c@1 is, the stronger is the focus of the respective AV method on the topic of the documents. 
Overall, the results of this experiment suggest that none of the examined AV methods is robust against topical influence. In our third experiment, we investigate the question of how text lengths affect the results of the examined AV methods. The motivation behind this experiment is based on the observation of Stamatatos et al. BIBREF12 that text length is an important issue, which has not been thoroughly studied within authorship verification research. To address this issue, we make use of the INLINEFORM0 corpus introduced in Section UID28 . The corpus is suitable for this purpose, as it comprises a large number of verification problems, where more than 90% of all documents have sufficient text lengths ( INLINEFORM1 2,000 characters). This allows a stepwise truncation and thereby an analysis of the relationship between text length and the recognition results. However, before considering this, we first focus on the results (shown in Table TABREF46 ) after applying all 12 AV methods to the original test corpus. As can be seen in Table TABREF46 , almost all approaches perform very well with c@1 scores up to 0.991. Although these results are quite impressive, it should be noted that a large fraction of the documents comprises thousands of words. Thus, the methods can learn precise representations based on a large variety of features, which in turn enable a good determination of (dis)similarities between known/unknown documents. To investigate this issue in more detail, we constructed four versions of the test corpus and equalized the unknown document lengths to 250, 500, 1000, and 2000 characters. Then, we applied the top-performing AV methods with a c@1 value INLINEFORM0 to the four corpora. Here, we reused the same models and hyperparameters (including the decision criteria INLINEFORM1 and INLINEFORM2 ) that were determined on the training corpus. The intention behind this was to observe the robustness of the trained AV models, given the fact that during training they were confronted with longer documents. The results are illustrated in Figure FIGREF47 , where it can be observed that GLAD yields the most stable results across the four corpus versions; even for the corpus with 250-character-long unknown documents, it achieves a c@1 score of 0.727. Surprisingly, Unmasking performs similarly well, despite the fact that the method has been designed for longer texts, i. e., book chunks of at least 500 words BIBREF13 . Sanderson and Guenter also point out that the Unmasking approach is less useful when dealing with relatively short texts BIBREF40 . However, our results show a different picture, at least for this corpus. One explanation for the resilience of GLAD across the varying text lengths might be its decision model INLINEFORM0 (an SVM with a linear kernel), which withstands the absence of features caused by the truncation of the documents, in contrast to the distance-based approaches AVeer, NNCD and COAV, where the decision criterion INLINEFORM1 is reflected by a simple scalar. Table TABREF48 lists the confusion matrix outcomes of the six AV methods regarding the 250-character version of INLINEFORM2 . Here, it can be seen that the underlying SVM model of GLAD and Unmasking is able to regulate its Y/N-predictions, in contrast to the three distance-based methods, where the majority of predictions fall either on the Y- or on the N-side.
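The shortened corpus versions used in this experiment can be derived mechanically from the original test corpus. The following sketch shows one way to do this; the dictionary-based problem representation is an illustrative choice, not the format of our actual corpus files.

```python
def truncate_unknown(problems, max_chars):
    """Return a copy of the test corpus in which every unknown document is cut
    to its first max_chars characters; known documents remain untouched."""
    return [
        {"known": p["known"], "unknown": p["unknown"][:max_chars], "label": p["label"]}
        for p in problems
    ]

# Four shortened corpus versions, as used in the experiment described above:
# corpora_versions = {n: truncate_unknown(test_problems, n) for n in (250, 500, 1000, 2000)}
```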
To gain a better picture regarding the stability of the decision criteria INLINEFORM0 and INLINEFORM1 of the methods, we decided to take a closer look at the ROC curves (cf. Figure FIGREF49 ) generated by GLAD, Caravel and COAV for the four corpus versions, where a number of interesting observations can be made. When focusing on AUC, it turns out that all three methods perform very similarly to each other, whereas big discrepancies between GLAD and COAV can be observed regarding c@1. When we consider the current and maximum achievable results (depicted by the circles and triangles, respectively), it becomes apparent that GLAD's model behaves stably, while that of COAV becomes increasingly vulnerable the more the documents are shortened. When looking at the ROC curve of Caravel, it can be clearly seen that the actual and maximum achievable results are very close to each other. This is not surprising, due to the fact that Caravel's threshold always lies at the median point of the ROC curve, provided that the given corpus is balanced. While inspecting the 250-character-long documents in more detail, we identified that they share similar vocabularies consisting of chat abbreviations such as “lol” (laughing out loud) or “k” (ok), smileys and specific obscene words. Therefore, we assume that the verification results of the examined methods are mainly caused by the similar vocabularies between the texts. Conclusion and Future Work We highlighted the problem that underlying characteristics of authorship verification approaches have not received much attention in past research and that these affect the applicability of the methods in real forensic settings. Then, we proposed several properties that enable a better characterization and thereby a better comparison between AV methods. Among others, we explained that the performance measure AUC is meaningless in regard to unary or specific non-optimizable AV methods, which involve a fixed decision criterion (for example, NNCD). Additionally, we mentioned that determinism must be fulfilled such that an AV method can be rated as reliable. Moreover, we clarified a number of misunderstandings in previous research works and proposed three clear criteria that allow classifying the model category of an AV method, which in turn influences its design and the way it should be evaluated. In regard to binary-extrinsic AV approaches, we explained which challenges exist and how they affect their applicability. In an experimental setup, we applied 12 existing AV methods to three self-compiled corpora, where the intention behind each corpus was to focus on a different aspect of the methods' applicability. Our findings regarding the examined approaches can be summarized as follows: Despite the good performance of the five AV methods GenIM, ImpGI, Unmasking, Caravel and SPATIUM, none of them can truly be considered reliable and therefore applicable in real forensic cases. The reason for this is not only the non-deterministic behavior of the methods but also their dependence (excepting Unmasking) on an impostor corpus. Here, it must be guaranteed not only that the true author is not among the candidates, but also that the impostor documents are suitable such that the AV task does not inadvertently degenerate from style to topic classification. In particular, the applicability of the Caravel approach remains highly questionable, as it requires a corpus where the information regarding the Y/N-distribution is known beforehand in order to set the threshold.
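A minimal sketch of this corpus-level decision rule, reconstructed from the description above rather than taken from the authors' code, makes the dependence on a balanced Y/N-distribution explicit:

```python
import statistics

def median_threshold_predictions(scores):
    """Caravel-style decision rule: score every verification problem in the test
    corpus first, set the threshold to the median similarity, and only then answer.
    This is reasonable only if roughly half of the problems are Y-cases."""
    theta = statistics.median(scores)
    return ["Y" if s > theta else "N" for s in scores]
```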
In regard to the two examined unary AV approaches MOCC and OCCAV, we observed that these perform poorly on all three corpora in comparison to the binary-intrinsic and binary-extrinsic methods. Most likely, this is caused by a wrong threshold setting, as both tend to generate more N-predictions. Of the remaining approaches, GLAD and COAV seem to be a good choice for realistic scenarios. However, the former has been shown to be more robust in regard to varying text lengths given a fixed model, while the latter requires a retraining of the model (note that both performed almost equally in terms of AUC). Our hypothesis, which we leave open for future work, is that AV methods relying on a complex model INLINEFORM0 are more robust than methods based on a scalar threshold INLINEFORM1 . Lastly, we wish to underline that all examined approaches failed in the cross-topic experiment. One possibility to counteract this is to apply text distortion techniques (for instance, BIBREF41 ) in order to control the topic influence in the documents. As a next step, we will compile additional and larger corpora to investigate the question of whether the evaluation results of this paper hold more generally. Furthermore, we will address the important question of how the results of AV methods can be interpreted in a more systematic manner, which, besides the proposed properties, will further influence the practicability of AV methods. This work was supported by the German Federal Ministry of Education and Research (BMBF) under the project "DORIAN" (Scrutinise and thwart disinformation).
Caravel, COAV and NNCD
8c0846879771c8f3915cc2e0718bee448f5cb007
8c0846879771c8f3915cc2e0718bee448f5cb007_0
Q: What size are the corpora? Text: Introduction Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in such scenarios, where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed. In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches were becoming more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics. Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge. The rest of this paper is structured as follows. Section SECREF2 discusses the related work that served as an inspiration for our analysis. Section SECREF3 comprises the proposed criteria and properties to characterize AV methods. Section SECREF4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section SECREF5 concludes the work and outlines future work. Related Work Over the years, researchers in the field of authorship analysis identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. BIBREF8 , for example, focused on the questions if the writing styles of authors of short texts change over time and how this affects AA. 
To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors, despite of large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes. In recent years, the new research field author obfuscation (AO) evolved, which concerns itself with the task to fool AA or AV methods in a way that the true author cannot be correctly recognized anymore. To achieve this, AO approaches which, according to Gröndahl and Asokan BIBREF9 can be divided into manual, computer-assisted and automatic types, perform a variety of modifications on the texts. These include simple synonym replacements, rule-based substitutions or word order permutations. In 2016, Potthast et al. BIBREF10 presented the first large-scale evaluation of three AO approaches that aim to attack 44 AV methods, which were submitted to the PAN-AV competitions during 2013-2015 BIBREF11 , BIBREF5 , BIBREF12 . One of their findings was that even basic AO approaches have a significant impact on many AV methods. More precisely, the best-performing AO approach was able to flip on average INLINEFORM0 % of an authorship verifier’s decisions towards choosing N (“different author”), while in fact Y (“same author”) was correct BIBREF10 . In contrast to Potthast et al., we do not focus on AO to measure the robustness of AV methods. Instead, we investigate in one experiment the question how trained AV models behave, if the lengths of the questioned documents are getting shorter and shorter. To our best knowledge, this question has not been addressed in previous authorship verification studies. Characteristics of Authorship Verification Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. 
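As a toy illustration of the rule-based substitutions mentioned above in the context of author obfuscation (and not one of the three AO approaches evaluated by Potthast et al.), an obfuscator might simply rewrite an author's habitual word choices; the substitution table below is purely illustrative.

```python
import re

# Purely illustrative substitution rules; real AO systems rely on far richer
# lexical resources and transformation strategies.
SUBSTITUTIONS = {
    "therefore": "so",
    "moreover": "also",
    "utilize": "use",
    "do not": "don't",
}

def obfuscate(text: str) -> str:
    """Apply simple rule-based substitutions to mask characteristic word choices."""
    for source, target in SUBSTITUTIONS.items():
        text = re.sub(r"\b%s\b" % re.escape(source), target, text, flags=re.IGNORECASE)
    return text
```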
In 2009, Stamatatos BIBREF15 coined the phrases profile- and instance-based approaches that initially were used in the field of AA, but later found their way also into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods. However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to draw the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, where we redefine previous definitions and propose new properties that enable a better comparison between AV methods. Reliability (Determinism) Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since there is no consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification according to the screened literature, we decided to reuse a definition from applied statistics, and adapt it carefully to AV. In his standard reference book, Bollen BIBREF17 gives a clear description for this term: “Reliability is the consistency of measurement” and provides a simple example to illustrate its meaning: At time INLINEFORM0 we ask a large number of persons the same question Q and record their responses. Afterwards, we remove their memory of the dialogue. At time INLINEFORM1 we ask them again the same question Q and record their responses again. “The reliability is the consistency of the responses across individuals for the two time periods. To the extent that all individuals are consistent, the measure is reliable” BIBREF17 . This example deals with the consistency of the measured objects as a factor for the reliability of measurements. In the case of authorship verification, the analyzed objects are static data, and hence these cannot be a source of inconsistency. However, the measurement system itself can behave inconsistently and hence unreliable. This aspect can be described as intra-rater reliability. Reliability in authorship verification is satisfied, if an AV method always generates the same prediction INLINEFORM0 for the same input INLINEFORM1 , or in other words, if the method behaves deterministically. Several AV approaches, including BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF16 fall into this category. In contrast, if an AV method behaves non-deterministically such that two different predictions for INLINEFORM2 are possible, the method can be rated as unreliable. Many AV approaches, including BIBREF4 , BIBREF13 , BIBREF26 , BIBREF1 , BIBREF27 , BIBREF3 , BIBREF28 , BIBREF29 , BIBREF30 belong to this category, since they involve randomness (e. g., weight initialization, feature subsampling, chunk generation or impostor selection), which might distort the evaluation, as every run on a test corpus very likely leads to different results. Under lab conditions, results of non-deterministic AV methods can (and should) be counteracted by averaging multiple runs. However, it remains highly questionable if such methods are generally applicable in realistic forensic cases, where the prediction INLINEFORM3 regarding a verification case INLINEFORM4 might sometimes result in Y and sometimes in N. Optimizability Another important property of an AV method is optimizability. 
We define an AV method as optimizable, if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category. Model Category From a machine learning point of view, authorship verification represents a unary classification problem BIBREF22 , BIBREF13 , BIBREF16 , BIBREF33 , BIBREF14 . Yet, in the literature, it can be observed that sometimes AV is treated as a unary BIBREF25 , BIBREF23 , BIBREF26 , BIBREF16 and sometimes as a binary classification task BIBREF30 , BIBREF32 , BIBREF22 , BIBREF2 . We define the way an AV approach is modeled by the phrase model category. However, before explaining this in more detail, we wish to recall what unary/one-class classification exactly represents. For this, we list the following verbatim quotes, which characterize one-class classification, as can be seen, almost identically (emphasis by us): “In one-class classification it is assumed that only information of one of the classes, the target class, is available. This means that just example objects of the target class can be used and that no information about the other class of outlier objects is present.” BIBREF34 “One-class classification (OCC) [...] consists in making a description of a target class of objects and in detecting whether a new object resembles this class or not. [...] The OCC model is developed using target class samples only.” BIBREF35 “In one-class classification framework, an object is classified as belonging or not belonging to a target class, while only sample examples of objects from the target class are available during the training phase.” BIBREF25 Note that in the context of authorship verification, target class refers to the known author INLINEFORM0 such that for a document INLINEFORM1 of an unknown author INLINEFORM2 the task is to verify whether INLINEFORM3 holds. One of the most important requirements of any existing AV method is a decision criterion, which aims to accept or reject a questioned authorship. A decision criterion can be expressed through a simple scalar threshold INLINEFORM4 or a more complex model INLINEFORM5 such as a hyperplane in a high-dimensional feature space. As a consequence of the above statements, the determination of INLINEFORM6 or INLINEFORM7 has to be performed solely on the basis of INLINEFORM8 , otherwise the AV method cannot be considered to be unary. 
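A minimal sketch of such a unary setup, in which the decision boundary is learned solely from documents of the known author, could look as follows; the character n-gram features and hyperparameters are illustrative choices and do not reproduce any of the methods examined later.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

def train_unary_verifier(known_docs):
    """Fit a one-class model on samples of the target class only (no counter class)."""
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
    X = vectorizer.fit_transform(known_docs)
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.2).fit(X)
    return vectorizer, model

def verify(vectorizer, model, unknown_doc):
    """Predict 'Y' if the unknown document falls inside the learned target region."""
    return "Y" if model.predict(vectorizer.transform([unknown_doc]))[0] == 1 else "N"
```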
However, our conducted literature research regarding existing AV approaches revealed that there are uncertainties how to precisely draw the borders between unary and binary AV methods (for instance, BIBREF36 , BIBREF16 , BIBREF33 ). Nonetheless, few attempts have been made to distinguish both categories from another perspective. Potha and Stamatatos BIBREF33 , for example, categorize AV methods as either intrinsic or extrinsic (emphasis by us): “Intrinsic verification models view it [i. e., the verification task] as a one-class classification task and are based exclusively on analysing the similarity between [ INLINEFORM0 ] and [ INLINEFORM1 ]. [...] Such methods [...] do not require any external resources.” BIBREF33 “On the other hand, extrinsic verification models attempt to transform the verification task to a pair classification task by considering external documents to be used as samples of the negative class.” BIBREF33 While we agree with statement (2), the former statement (1) is unsatisfactory, as intrinsic verification models are not necessarily unary. For example, the AV approach GLAD proposed by Hürlimann et al. BIBREF22 directly contradicts statement (1). Here, the authors “decided to cast the problem as a binary classification task where class values are Y [ INLINEFORM0 ] and N [ INLINEFORM1 ]. [...] We do not introduce any negative examples by means of external documents, thus adhering to an intrinsic approach.” BIBREF22 . A misconception similar to statement (1) can be observed in the paper of Jankowska et al. BIBREF24 , who introduced the so-called CNG approach claimed to be a one-class classification method. CNG is intrinsic in that way that it considers only INLINEFORM0 when deciding a problem INLINEFORM1 . However, the decision criterion, which is a threshold INLINEFORM2 , is determined on a set of verification problems, labeled either as Y or N. This incorporates “external resources” for defining the decision criterion, and it constitutes an implementation of binary classification between Y and N in analogy to the statement of Hürlimann et al. BIBREF22 mentioned above. Thus, CNG is in conflict with the unary definition mentioned above. In a subsequent paper BIBREF25 , however, Jankowska et al. refined their approach and introduced a modification, where INLINEFORM3 was determined solely on the basis of INLINEFORM4 . Thus, the modified approach can be considered as a true unary AV method, according to the quoted definitions for unary classification. In 2004, Koppel and Schler BIBREF13 presented the Unmasking approach which, according to the authors, represents a unary AV method. However, if we take a closer look at the learning process of Unmasking, we can see that it is based on a binary SVM classifier that consumes feature vectors (derived from “degradation curves”) labeled as Y (“same author”) or N (“different author”). Unmasking, therefore, cannot be considered to be unary as the decision is not solely based on the documents within INLINEFORM0 , in analogy to the CNG approach of Jankowska et al. BIBREF24 discussed above. It should be highlighted again that the aforementioned three approaches are binary-intrinsic since their decision criteria INLINEFORM1 or INLINEFORM2 was determined on a set of problems labeled in a binary manner (Y and N) while after training, the verification is performed in an intrinsic manner, meaning that INLINEFORM3 and INLINEFORM4 are compared against INLINEFORM5 or INLINEFORM6 but not against documents within other verification problems (cf. 
Figure FIGREF15 ). A crucial aspect, which might have lead to misperceptions regarding the model category of these approaches in the past, is the fact that two different class domains are involved. On the one hand, there is the class domain of authors, where the task is to distinguish INLINEFORM7 and INLINEFORM8 . On the other hand, there is the elevated or lifted domain of verification problem classes, which are Y and N. The training phase of binary-intrinsic approaches is used for learning to distinguish these two classes, and the verification task can be understood as putting the verification problem as a whole into class Y or class N, whereby the class domain of authors fades from the spotlight (cf. Figure FIGREF15 ). Besides unary and binary-intrinsic methods, there is a third category of approaches, namely binary-extrinsic AV approaches (for example, BIBREF3 , BIBREF30 , BIBREF29 , BIBREF37 , BIBREF32 , BIBREF1 , BIBREF2 ). These methods use external documents during a potentially existing training phase and – more importantly – during testing. In these approaches, the decision between INLINEFORM0 and INLINEFORM1 is put into the focus, where the external documents aim to construct the counter class INLINEFORM2 . Based on the above observations, we conclude that the key requirement for judging the model category of an AV method depends solely on the aspect how its decision criterion INLINEFORM0 or INLINEFORM1 is determined (cf. Figure FIGREF15 ): An AV method is unary if and only if its decision criterion INLINEFORM0 or INLINEFORM1 is determined solely on the basis of the target class INLINEFORM2 during testing. As a consequence, an AV method cannot be considered to be unary if documents not belonging to INLINEFORM3 are used to define INLINEFORM4 or INLINEFORM5 . An AV method is binary-intrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined on a training corpus comprising verification problems labeled either as Y or N (in other words documents of several authors). However, once the training is completed, a binary-intrinsic method has no access to external documents anymore such that the decision regarding the authorship of INLINEFORM2 is made on the basis of the reference data of INLINEFORM3 as well as INLINEFORM4 or INLINEFORM5 . An AV method is binary-extrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined during testing on the basis of external documents that represent the outlier class INLINEFORM2 . Note that optimizable AV methods such as BIBREF18 , BIBREF25 are not excluded to be unary. Provided that INLINEFORM0 or INLINEFORM1 is not subject of the optimization procedure, the model category remains unary. The reason for this is obvious; Hyperparameters might influence the resulting performance of unary AV methods. The decision criterion itself, however, remains unchanged. Implications Each model category has its own implications regarding prerequisites, evaluability, and applicability. One advantage of unary AV methods is that they do not require a specific document collection strategy to construct the counter class INLINEFORM0 , which reduces their complexity. On the downside, the choice of the underlying machine learning model of a unary AV approach is restricted to one-class classification algorithms or unsupervised learning techniques, given a suitable decision criterion. However, a far more important implication of unary AV approaches concerns their performance assessment. 
Since unary classification (not necessarily AV) approaches depend on a fixed decision criterion INLINEFORM0 or INLINEFORM1 , performance measures such as the area under the ROC curve (AUC) are meaningless. Recall that ROC analysis is used for evaluating classifiers, where the decision threshold is not finally fixed. ROC analysis requires that the classifier generates scores, which are comparable across classification problem instances. The ROC curve and the area under this curve is then computed by considering all possible discrimination thresholds for these scores. While unary AV approaches might produce such scores, introducing a variable INLINEFORM2 would change the semantics of these approaches. Since unary AV approaches have a fixed decision criterion, they provide only a single point in the ROC space. To assess the performance of a unary AV method, it is, therefore, mandatory to consider the confusion matrix that leads to this point in the ROC space. Another implication is that unary AV methods are necessarily instance-based and, thus, require a set INLINEFORM0 of multiple documents of the known author INLINEFORM1 . If only one reference document is available ( INLINEFORM2 ), this document must be artificially turned into multiple samples from the author. In general, unary classification methods need multiple samples from the target class since it is not possible to determine a relative closeness to that class based on only one sample. On the plus side, binary-intrinsic or extrinsic AV methods benefit from the fact that we can choose among a variety of binary and INLINEFORM0 -ary classification models. However, if we consider designing a binary-intrinsic AV method, it should not be overlooked that the involved classifier will learn nothing about individual authors, but only similarities or differences that hold in general for Y and N verification problems BIBREF32 . If, on the other hand, the choice falls on a binary-extrinsic method, a strategy has to be considered for collecting representative documents for the outlier class INLINEFORM0 . Several existing methods such as BIBREF32 , BIBREF1 , BIBREF2 rely on search engines for retrieving appropriate documents, but these search engines might refuse their service if a specified quota is exhausted. Additionally, the retrieved documents render these methods inherently non-deterministic. Moreover, such methods cause relatively high runtimes BIBREF11 , BIBREF5 . Using search engines also requires an active Internet connection, which might not be available or allowed in specific scenarios. But even if we can access the Internet to retrieve documents, there is no guarantee that the true author is not among them. With these points in mind, the applicability of binary-extrinsic methods in real-world cases, i. e., in real forensic settings, remains highly questionable. Methodology In the following, we introduce our three self-compiled corpora, where each corpus represents a different challenge. Next, we describe which authorship verification approaches we considered for the experiments and classify each AV method according to the properties introduced in Section SECREF3 . Afterwards, we explain which performance measures were selected with respect to the conclusion made in Section UID17 . Finally, we describe our experiments, present the results and highlight a number of observations. Corpora A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. 
Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed. As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems. As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. 
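The problem-construction procedure described above for the DBLP-based corpus (one Y-case and one N-case per author, followed by a 40/60% split) can be sketched as follows; the data structures and the random source are illustrative and do not reproduce our exact construction script.

```python
import random

def build_problems(author_docs, seed=0):
    """author_docs maps each author to exactly two documents [d1, d2].
    For every author, create one Y-case (d1 vs. d2) and one N-case
    (d1 vs. a document of a randomly chosen other author)."""
    rng = random.Random(seed)
    authors = list(author_docs)
    problems = []
    for author in authors:
        d1, d2 = author_docs[author]
        problems.append({"known": d1, "unknown": d2, "label": "Y"})
        other = rng.choice([a for a in authors if a != author])
        problems.append({"known": d1, "unknown": rng.choice(author_docs[other]), "label": "N"})
    rng.shuffle(problems)
    split = int(0.4 * len(problems))
    return problems[:split], problems[split:]   # 40% training, 60% test
```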
Moreover, we did not normalize words (for example, shorten words such as “nooooo” to “no”) as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style. As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that both the known and unknown document INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to which extent AV methods can be fooled in cases, where the topic matches but not the authorship and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used for the resulting 100 verification problems a 40/60% hold-out split, where both training and test set are entirely disjoint. Examined Authorship Verification Methods As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potentials in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 . All (optimizable) AV methods were tuned regarding their hyperparameters, according to the original procedure mentioned in the respective paper. However, in the case of the binary-extrinsic methods (GenIM, ImpGI and NNCD) we had to use an alternative impostors generation strategy in our reimplementations, due to technical problems. In the respective papers, the authors used search engine queries to generate the impostor documents, which are needed to model the counter class INLINEFORM0 . Regarding our reimplementations, we used the documents from the static corpora (similarly to the idea of Kocher and Savoy BIBREF30 ) to generate the impostors in the following manner: Let INLINEFORM1 denote a corpus with INLINEFORM2 verification problems. For each INLINEFORM3 we choose all unknown documents INLINEFORM4 in INLINEFORM5 with INLINEFORM6 and append them the impostor set INLINEFORM7 . Here, it should be highlighted that both GenIM and ImpGI consider the number of impostors as a hyperparameter such that the resulting impostor set is a subset of INLINEFORM8 . In contrast to this, NNCD considers all INLINEFORM9 as possible impostors. This fact plays an important role in the later experiments, where we compare the AV approaches to each other. Although our strategy is not flexible like using a search engine, it has one advantage that, here, it is assumed that the true author of an unknown document is not among the impostors, since in our corpora the user/author names are known beforehand. Performance Measures According to our extensive literature research, numerous measures (e. g., Accuracy, F INLINEFORM0 , c@1, AUC, AUC@1, INLINEFORM1 or EER) have been used so far to assess the performance of AV methods. 
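The impostor-generation strategy described above can be written down compactly. The following sketch mirrors the verbal description, with illustrative field names; it is not our exact implementation.

```python
def build_impostor_sets(problems):
    """For each verification problem, collect the unknown documents of all other
    problems in the corpus as impostors (a static alternative to retrieving
    impostors via a search engine)."""
    impostor_sets = []
    for i, _ in enumerate(problems):
        impostor_sets.append([p["unknown"] for j, p in enumerate(problems) if j != i])
    return impostor_sets
```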
In regard to our experiments, we decided to use c@1 and AUC for several reasons. First, Accuracy, F INLINEFORM2 and INLINEFORM3 are not applicable in cases where AV methods leave verification problems unanswered, which concerns some of our examined AV approaches. Second, using AUC alone is meaningless for non-optimizable AV methods, as explained in Section UID17 . Third, both have been used in the PAN-AV competitions BIBREF5 , BIBREF12 . Note that we also list the confusion matrix outcomes. Experiments Overall, we focus on three experiments, which are based on the corpora introduced in Section SECREF21 : The Effect of Stylistic Variation Across Large Time Spans The Effect of Topical Influence The Effect of Limited Text Length In the following each experiment is described in detail. In this experiment, we seek to answer the question if the writing style of an author INLINEFORM0 can be recognized, given a large time span between two documents of INLINEFORM1 . The motivation behind this experiment is based on the statement of Olsson BIBREF38 that language acquisition is a continuous process, which is not only acquired, but also can be lost. Therefore, an important question that arises here is, if the writing style of a person remains “stable” across a large time span, given the fact that language in each individual's life is never “fixed” BIBREF38 . Regarding this experiment, we used the INLINEFORM2 corpus. The results of the 12 examined AV methods are listed in Table TABREF41 , where it can be seen that the majority of the examined AV methods yield useful recognition results with a maximum value of 0.792 in terms of c@1. With the exception of the binary-intrinsic approach COAV, the remaining top performing methods belong to the binary-extrinsic category. This category of AV methods has also been superior in the PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 , where they outperformed binary-intrinsic and unary approaches three times in a row (2013–2015). The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand. Another interesting observation can be made regarding COAV, NNCD and OCCAV. Although all three differ regarding their model category, they use the same underlying compression algorithm (PPMd) that is responsible for generating the language model. 
While the former two approaches perform similarly well, OCCAV achieves a poor c@1 score ( INLINEFORM0 ). An obvious explanation for this is a wrongly calibrated threshold INLINEFORM1 , as can be seen from the confusion matrix, where almost all answers are N-predictions. Regarding the NNCD approach, one should consider that INLINEFORM2 is compared against INLINEFORM3 as well as INLINEFORM4 impostors within a corpus comprised of INLINEFORM5 verification problems. Therefore, a Y-result is correct with relatively high certainty (i. e., the method has high precision compared to other approaches with a similar c@1 score), as NNCD decided that author INLINEFORM6 fits best to INLINEFORM7 among INLINEFORM8 candidates. In contrast to Caravel, NNCD only retrieves the impostors from the given corpus, but it does not exploit background knowledge about the distribution of problems in the corpus. Overall, the results indicate that it is possible to recognize writing styles across large time spans. To gain more insights regarding the question which features led to the correct predictions, we inspected the AVeer method. Although the method achieved only average results, it benefits from the fact that it can be interpreted easily, as it relies on a simple distance function, a fixed threshold INLINEFORM0 and predefined feature categories such as function words. Regarding the correctly recognized Y-cases, we noticed that conjunctive adverbs such as “hence”, “therefore” or “moreover” contributed mostly to AVeer's correct predictions. However, a more in-depth analysis is required in future work to figure out whether the decisions of the remaining methods are also primarily affected by these features. In this experiment, we investigate the question if the writing style of authors can be recognized under the influence of topical bias. In real-world scenarios, the topic of the documents within a verification problem INLINEFORM0 is not always known beforehand, which can lead to a serious challenge regarding the recognition of the writing style. Imagine, for example, that INLINEFORM1 consists of a known and unknown document INLINEFORM2 and INLINEFORM3 that are written by the same author ( INLINEFORM4 ) while at the same time differ regarding their topic. In such a case, an AV method that it focusing “too much” on the topic (for example on specific nouns or phrases) will likely predict a different authorship ( INLINEFORM5 ). On the other hand, when INLINEFORM6 and INLINEFORM7 match regarding their topic, while being written by different authors, a topically biased AV method might erroneously predict INLINEFORM8 . In the following we show to which extent these assumptions hold. As a data basis for this experiment, we used the INLINEFORM0 corpus introduced in Section UID30 . The results regarding the 12 AV methods are given in Table TABREF44 , where it can be seen that our assumptions hold. All examined AV methods (with no exception) are fooled by the topical bias in the corpus. Here, the highest achieved results in terms of c@1 and AUC are very close to random guessing. A closer look at the confusion matrix outcomes reveals that some methods, for example ImpGI and OCCAV, perform almost entirely inverse to each other, where the former predicts nothing but Y and the latter nothing but N (except 1 Y). Moreover, we can assume that the lower c@1 is, the stronger is the focus of the respective AV method on the topic of the documents. 
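Since c@1 recurs in all result tables, a minimal sketch of how it can be computed may be helpful; it assumes the standard PAN definition, in which unanswered problems contribute in proportion to the accuracy achieved on the full problem set.

```python
def c_at_1(y_true, y_pred):
    """c@1 as used in the PAN-AV shared tasks.
    y_pred entries are 'Y', 'N' or None for an unanswered problem."""
    n = len(y_true)
    n_correct = sum(1 for t, p in zip(y_true, y_pred) if p is not None and p == t)
    n_unanswered = sum(1 for p in y_pred if p is None)
    return (n_correct + n_unanswered * n_correct / n) / n
```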
Overall, the results of this experiment suggest that none of the examined AV methods is robust against topical influence. In our third experiment, we investigate the question how text lengths affect the results of the examined AV methods. The motivation behind this experiment is based on the observation of Stamatatos et al. BIBREF12 that text length is an important issue, which has not been thoroughly studied within authorship verification research. To address this issue, we make use of the INLINEFORM0 corpus introduced in Section UID28 . The corpus is suitable for this purpose, as it comprises a large number of verification problems, where more than 90% of all documents have sufficient text lengths ( INLINEFORM1 2,000 characters). This allows a stepwise truncation and by this to analyze the effect between the text lengths and the recognition results. However, before considering this, we first focus on the results (shown in Table TABREF46 ) after applying all 12 AV methods on the original test corpus. As can be seen in Table TABREF46 , almost all approaches perform very well with c@1 scores up to 0.991. Although these results are quite impressive, it should be noted that a large fraction of the documents comprises thousands of words. Thus, the methods can learn precise representations based on a large variety of features, which in turn enable a good determination of (dis)similarities between known/unknown documents. To investigate this issue in more detail, we constructed four versions of the test corpus and equalized the unknown document lengths to 250, 500, 1000, and 2000 characters. Then, we applied the top performing AV methods with a c@1 value INLINEFORM0 on the four corpora. Here, we reused the same models and hyperparameters (including the decision criteria INLINEFORM1 and INLINEFORM2 ) that were determined on the training corpus. The intention behind this was to observe the robustness of the trained AV models, given the fact that during training they were confronted with longer documents. The results are illustrated in Figure FIGREF47 , where it can be observed that GLAD yields the most stable results across the four corpora versions, where even for the corpus with the 250 characters long unknown documents, it achieves a c@1 score of 0.727. Surprisingly, Unmasking performs similarly well, despite of the fact that the method has been designed for longer texts i. e., book chunks of at least 500 words BIBREF13 . Sanderson and Guenter also point out that the Unmasking approach is less useful when dealing with relatively short texts BIBREF40 . However, our results show a different picture, at least for this corpus. One explanation of the resilience of GLAD across the varying text lengths might be due to its decision model INLINEFORM0 (an SVM with a linear kernel) that withstands the absence of missing features caused by the truncation of the documents, in contrast to the distance-based approaches AVeer, NNCD and COAV, where the decision criterion INLINEFORM1 is reflected by a simple scalar. Table TABREF48 lists the confusion matrix outcomes of the six AV methods regarding the 250 characters version of INLINEFORM2 . Here, it can be seen that the underlying SVM model of GLAD and Unmasking is able to regulate its Y/N-predictions, in contrast to the three distance-based methods, where the majority of predictions fall either on the Y- or on the N-side. 
To gain a better picture regarding the stability of the decision criteria INLINEFORM0 and INLINEFORM1 of the methods, we decided to take a closer look on the ROC curves (cf. Figure FIGREF49 ) generated by GLAD, Caravel and COAV for the four corpora versions, where a number of interesting observations can be made. When focusing on AUC, it turns out that all three methods perform very similar to each other, whereas big discrepancies between GLAD and COAV can be observed regarding c@1. When we consider the current and maximum achievable results (depicted by the circles and triangles, respectively) it becomes apparent that GLAD's model behaves stable, while the one of COAV becomes increasingly vulnerable the more the documents are shortened. When looking at the ROC curve of Caravel, it can be clearly seen that the actual and maximum achievable results are very close to each other. This is not surprising, due to the fact that Caravel's threshold always lies at the median point of the ROC curve, provided that the given corpus is balanced. While inspecting the 250 characters long documents in more detail, we identified that they share similar vocabularies consisting of chat abbreviations such as “lol” (laughing out loud) or “k” (ok), smileys and specific obscene words. Therefore, we assume that the verification results of the examined methods are mainly caused by the similar vocabularies between the texts. Conclusion and Future Work We highlighted the problem that underlying characteristics of authorship verification approaches have not been paid much attention in the past research and that these affect the applicability of the methods in real forensic settings. Then, we proposed several properties that enable a better characterization and by this a better comparison between AV methods. Among others, we explained that the performance measure AUC is meaningless in regard to unary or specific non-optimizable AV methods, which involve a fixed decision criterion (for example, NNCD). Additionally, we mentioned that determinism must be fulfilled such that an AV method can be rated as reliable. Moreover, we clarified a number of misunderstandings in previous research works and proposed three clear criteria that allow to classify the model category of an AV method, which in turn influences its design and the way how it should be evaluated. In regard to binary-extrinsic AV approaches, we explained which challenges exist and how they affect their applicability. In an experimental setup, we applied 12 existing AV methods on three self-compiled corpora, where the intention behind each corpus was to focus on a different aspect of the methods applicability. Our findings regarding the examined approaches can be summarized as follows: Despite of the good performance of the five AV methods GenIM, ImpGI, Unmasking, Caravel and SPATIUM, none of them can be truly considered as reliable and therefore applicable in real forensic cases. The reason for this is not only the non-deterministic behavior of the methods but also their dependence (excepting Unmasking) on an impostor corpus. Here, it must be guaranteed that the true author is not among the candidates, but also that the impostor documents are suitable such that the AV task not inadvertently degenerates from style to topic classification. In particular, the applicability of the Caravel approach remains highly questionable, as it requires a corpus where the information regarding Y/N-distribution is known beforehand in order to set the threshold. 
In regard to the two examined unary AV approaches MOCC and OCCAV, we observed that these perform poorly on all three corpora in comparison to the binary-intrinsic and binary-extrinsic methods. Most likely, this is caused by a wrong threshold setting, as both tend to generate more N-predictions. Of the remaining approaches, GLAD and COAV seem to be a good choice for realistic scenarios. However, the former has been shown to be more robust in regard to varying text lengths given a fixed model, while the latter requires a retraining of the model (note that both performed almost equally in terms of AUC). Our hypothesis, which we leave open for future work, is that AV methods relying on a complex model INLINEFORM0 are more robust than methods based on a scalar threshold INLINEFORM1 . Lastly, we wish to underline that all examined approaches failed in the cross-topic experiment. One possibility to counteract this is to apply text distortion techniques (for instance, BIBREF41 ) in order to control the topic influence in the documents. As a next step, we will compile additional and larger corpora to investigate the question of whether the evaluation results of this paper hold more generally. Furthermore, we will address the important question of how the results of AV methods can be interpreted in a more systematic manner, which, besides the proposed properties, will further influence the practicability of AV methods. This work was supported by the German Federal Ministry of Education and Research (BMBF) under the project "DORIAN" (Scrutinise and thwart disinformation).
80 excerpts from scientific works, collection of 1,645 chat conversations, collection of 200 aggregated postings
3fae289ab1fc023bce2fa4f1ce4d9f828074f232
3fae289ab1fc023bce2fa4f1ce4d9f828074f232_0
Q: What is a self-compiled corpus? Text: Introduction Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in such scenarios, where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed. In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches were becoming more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics. Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge. The rest of this paper is structured as follows. Section SECREF2 discusses the related work that served as an inspiration for our analysis. Section SECREF3 comprises the proposed criteria and properties to characterize AV methods. Section SECREF4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section SECREF5 concludes the work and outlines future work. Related Work Over the years, researchers in the field of authorship analysis identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. BIBREF8 , for example, focused on the questions if the writing styles of authors of short texts change over time and how this affects AA. 
To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors, despite of large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes. In recent years, the new research field author obfuscation (AO) evolved, which concerns itself with the task to fool AA or AV methods in a way that the true author cannot be correctly recognized anymore. To achieve this, AO approaches which, according to Gröndahl and Asokan BIBREF9 can be divided into manual, computer-assisted and automatic types, perform a variety of modifications on the texts. These include simple synonym replacements, rule-based substitutions or word order permutations. In 2016, Potthast et al. BIBREF10 presented the first large-scale evaluation of three AO approaches that aim to attack 44 AV methods, which were submitted to the PAN-AV competitions during 2013-2015 BIBREF11 , BIBREF5 , BIBREF12 . One of their findings was that even basic AO approaches have a significant impact on many AV methods. More precisely, the best-performing AO approach was able to flip on average INLINEFORM0 % of an authorship verifier’s decisions towards choosing N (“different author”), while in fact Y (“same author”) was correct BIBREF10 . In contrast to Potthast et al., we do not focus on AO to measure the robustness of AV methods. Instead, we investigate in one experiment the question how trained AV models behave, if the lengths of the questioned documents are getting shorter and shorter. To our best knowledge, this question has not been addressed in previous authorship verification studies. Characteristics of Authorship Verification Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. 
In 2009, Stamatatos BIBREF15 coined the phrases profile- and instance-based approaches that initially were used in the field of AA, but later found their way also into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods. However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to draw the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, where we redefine previous definitions and propose new properties that enable a better comparison between AV methods. Reliability (Determinism) Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since there is no consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification according to the screened literature, we decided to reuse a definition from applied statistics, and adapt it carefully to AV. In his standard reference book, Bollen BIBREF17 gives a clear description for this term: “Reliability is the consistency of measurement” and provides a simple example to illustrate its meaning: At time INLINEFORM0 we ask a large number of persons the same question Q and record their responses. Afterwards, we remove their memory of the dialogue. At time INLINEFORM1 we ask them again the same question Q and record their responses again. “The reliability is the consistency of the responses across individuals for the two time periods. To the extent that all individuals are consistent, the measure is reliable” BIBREF17 . This example deals with the consistency of the measured objects as a factor for the reliability of measurements. In the case of authorship verification, the analyzed objects are static data, and hence these cannot be a source of inconsistency. However, the measurement system itself can behave inconsistently and hence unreliable. This aspect can be described as intra-rater reliability. Reliability in authorship verification is satisfied, if an AV method always generates the same prediction INLINEFORM0 for the same input INLINEFORM1 , or in other words, if the method behaves deterministically. Several AV approaches, including BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF16 fall into this category. In contrast, if an AV method behaves non-deterministically such that two different predictions for INLINEFORM2 are possible, the method can be rated as unreliable. Many AV approaches, including BIBREF4 , BIBREF13 , BIBREF26 , BIBREF1 , BIBREF27 , BIBREF3 , BIBREF28 , BIBREF29 , BIBREF30 belong to this category, since they involve randomness (e. g., weight initialization, feature subsampling, chunk generation or impostor selection), which might distort the evaluation, as every run on a test corpus very likely leads to different results. Under lab conditions, results of non-deterministic AV methods can (and should) be counteracted by averaging multiple runs. However, it remains highly questionable if such methods are generally applicable in realistic forensic cases, where the prediction INLINEFORM3 regarding a verification case INLINEFORM4 might sometimes result in Y and sometimes in N. Optimizability Another important property of an AV method is optimizability. 
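As a concrete illustration of the reliability (determinism) property discussed above, the following minimal sketch checks whether an AV method produces a single, stable prediction for the same verification case; the verifier interface is a hypothetical assumption for illustration, not the API of any of the cited approaches.

```python
# Minimal sketch of an intra-rater reliability (determinism) check.
# The verifier interface predict(known_docs, unknown_doc) -> "Y"/"N"
# is an illustrative assumption.

def is_deterministic(verifier, known_docs, unknown_doc, runs=10):
    """True iff repeated runs on the same verification case agree."""
    predictions = {verifier.predict(known_docs, unknown_doc) for _ in range(runs)}
    return len(predictions) == 1
```

For non-deterministic methods, the multiple runs mentioned above would additionally have to be aggregated (e. g., averaged or majority-voted) before any evaluation under lab conditions.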
We define an AV method as optimizable, if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category. Model Category From a machine learning point of view, authorship verification represents a unary classification problem BIBREF22 , BIBREF13 , BIBREF16 , BIBREF33 , BIBREF14 . Yet, in the literature, it can be observed that sometimes AV is treated as a unary BIBREF25 , BIBREF23 , BIBREF26 , BIBREF16 and sometimes as a binary classification task BIBREF30 , BIBREF32 , BIBREF22 , BIBREF2 . We define the way an AV approach is modeled by the phrase model category. However, before explaining this in more detail, we wish to recall what unary/one-class classification exactly represents. For this, we list the following verbatim quotes, which characterize one-class classification, as can be seen, almost identically (emphasis by us): “In one-class classification it is assumed that only information of one of the classes, the target class, is available. This means that just example objects of the target class can be used and that no information about the other class of outlier objects is present.” BIBREF34 “One-class classification (OCC) [...] consists in making a description of a target class of objects and in detecting whether a new object resembles this class or not. [...] The OCC model is developed using target class samples only.” BIBREF35 “In one-class classification framework, an object is classified as belonging or not belonging to a target class, while only sample examples of objects from the target class are available during the training phase.” BIBREF25 Note that in the context of authorship verification, target class refers to the known author INLINEFORM0 such that for a document INLINEFORM1 of an unknown author INLINEFORM2 the task is to verify whether INLINEFORM3 holds. One of the most important requirements of any existing AV method is a decision criterion, which aims to accept or reject a questioned authorship. A decision criterion can be expressed through a simple scalar threshold INLINEFORM4 or a more complex model INLINEFORM5 such as a hyperplane in a high-dimensional feature space. As a consequence of the above statements, the determination of INLINEFORM6 or INLINEFORM7 has to be performed solely on the basis of INLINEFORM8 , otherwise the AV method cannot be considered to be unary. 
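To make the notion of a decision criterion tangible, here is a minimal sketch of the simplest case, a scalar threshold applied to a similarity score; the similarity function and the default threshold value are illustrative assumptions and do not correspond to any particular cited method.

```python
# Minimal sketch of a decision criterion expressed as a scalar threshold,
# in contrast to a more complex model such as a hyperplane.
# similarity() and the default threshold are illustrative assumptions.

def verify(similarity, reference_docs, unknown_doc, threshold=0.5):
    """Return "Y" iff the best similarity between the unknown document and
    the known author's reference texts reaches the threshold."""
    score = max(similarity(ref, unknown_doc) for ref in reference_docs)
    return "Y" if score >= threshold else "N"
```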
However, our conducted literature research regarding existing AV approaches revealed that there are uncertainties how to precisely draw the borders between unary and binary AV methods (for instance, BIBREF36 , BIBREF16 , BIBREF33 ). Nonetheless, few attempts have been made to distinguish both categories from another perspective. Potha and Stamatatos BIBREF33 , for example, categorize AV methods as either intrinsic or extrinsic (emphasis by us): “Intrinsic verification models view it [i. e., the verification task] as a one-class classification task and are based exclusively on analysing the similarity between [ INLINEFORM0 ] and [ INLINEFORM1 ]. [...] Such methods [...] do not require any external resources.” BIBREF33 “On the other hand, extrinsic verification models attempt to transform the verification task to a pair classification task by considering external documents to be used as samples of the negative class.” BIBREF33 While we agree with statement (2), the former statement (1) is unsatisfactory, as intrinsic verification models are not necessarily unary. For example, the AV approach GLAD proposed by Hürlimann et al. BIBREF22 directly contradicts statement (1). Here, the authors “decided to cast the problem as a binary classification task where class values are Y [ INLINEFORM0 ] and N [ INLINEFORM1 ]. [...] We do not introduce any negative examples by means of external documents, thus adhering to an intrinsic approach.” BIBREF22 . A misconception similar to statement (1) can be observed in the paper of Jankowska et al. BIBREF24 , who introduced the so-called CNG approach claimed to be a one-class classification method. CNG is intrinsic in that way that it considers only INLINEFORM0 when deciding a problem INLINEFORM1 . However, the decision criterion, which is a threshold INLINEFORM2 , is determined on a set of verification problems, labeled either as Y or N. This incorporates “external resources” for defining the decision criterion, and it constitutes an implementation of binary classification between Y and N in analogy to the statement of Hürlimann et al. BIBREF22 mentioned above. Thus, CNG is in conflict with the unary definition mentioned above. In a subsequent paper BIBREF25 , however, Jankowska et al. refined their approach and introduced a modification, where INLINEFORM3 was determined solely on the basis of INLINEFORM4 . Thus, the modified approach can be considered as a true unary AV method, according to the quoted definitions for unary classification. In 2004, Koppel and Schler BIBREF13 presented the Unmasking approach which, according to the authors, represents a unary AV method. However, if we take a closer look at the learning process of Unmasking, we can see that it is based on a binary SVM classifier that consumes feature vectors (derived from “degradation curves”) labeled as Y (“same author”) or N (“different author”). Unmasking, therefore, cannot be considered to be unary as the decision is not solely based on the documents within INLINEFORM0 , in analogy to the CNG approach of Jankowska et al. BIBREF24 discussed above. It should be highlighted again that the aforementioned three approaches are binary-intrinsic since their decision criteria INLINEFORM1 or INLINEFORM2 was determined on a set of problems labeled in a binary manner (Y and N) while after training, the verification is performed in an intrinsic manner, meaning that INLINEFORM3 and INLINEFORM4 are compared against INLINEFORM5 or INLINEFORM6 but not against documents within other verification problems (cf. 
Figure FIGREF15 ). A crucial aspect, which might have lead to misperceptions regarding the model category of these approaches in the past, is the fact that two different class domains are involved. On the one hand, there is the class domain of authors, where the task is to distinguish INLINEFORM7 and INLINEFORM8 . On the other hand, there is the elevated or lifted domain of verification problem classes, which are Y and N. The training phase of binary-intrinsic approaches is used for learning to distinguish these two classes, and the verification task can be understood as putting the verification problem as a whole into class Y or class N, whereby the class domain of authors fades from the spotlight (cf. Figure FIGREF15 ). Besides unary and binary-intrinsic methods, there is a third category of approaches, namely binary-extrinsic AV approaches (for example, BIBREF3 , BIBREF30 , BIBREF29 , BIBREF37 , BIBREF32 , BIBREF1 , BIBREF2 ). These methods use external documents during a potentially existing training phase and – more importantly – during testing. In these approaches, the decision between INLINEFORM0 and INLINEFORM1 is put into the focus, where the external documents aim to construct the counter class INLINEFORM2 . Based on the above observations, we conclude that the key requirement for judging the model category of an AV method depends solely on the aspect how its decision criterion INLINEFORM0 or INLINEFORM1 is determined (cf. Figure FIGREF15 ): An AV method is unary if and only if its decision criterion INLINEFORM0 or INLINEFORM1 is determined solely on the basis of the target class INLINEFORM2 during testing. As a consequence, an AV method cannot be considered to be unary if documents not belonging to INLINEFORM3 are used to define INLINEFORM4 or INLINEFORM5 . An AV method is binary-intrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined on a training corpus comprising verification problems labeled either as Y or N (in other words documents of several authors). However, once the training is completed, a binary-intrinsic method has no access to external documents anymore such that the decision regarding the authorship of INLINEFORM2 is made on the basis of the reference data of INLINEFORM3 as well as INLINEFORM4 or INLINEFORM5 . An AV method is binary-extrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined during testing on the basis of external documents that represent the outlier class INLINEFORM2 . Note that optimizable AV methods such as BIBREF18 , BIBREF25 are not excluded to be unary. Provided that INLINEFORM0 or INLINEFORM1 is not subject of the optimization procedure, the model category remains unary. The reason for this is obvious; Hyperparameters might influence the resulting performance of unary AV methods. The decision criterion itself, however, remains unchanged. Implications Each model category has its own implications regarding prerequisites, evaluability, and applicability. One advantage of unary AV methods is that they do not require a specific document collection strategy to construct the counter class INLINEFORM0 , which reduces their complexity. On the downside, the choice of the underlying machine learning model of a unary AV approach is restricted to one-class classification algorithms or unsupervised learning techniques, given a suitable decision criterion. However, a far more important implication of unary AV approaches concerns their performance assessment. 
Since unary classification (not necessarily AV) approaches depend on a fixed decision criterion INLINEFORM0 or INLINEFORM1 , performance measures such as the area under the ROC curve (AUC) are meaningless. Recall that ROC analysis is used for evaluating classifiers, where the decision threshold is not finally fixed. ROC analysis requires that the classifier generates scores, which are comparable across classification problem instances. The ROC curve and the area under this curve is then computed by considering all possible discrimination thresholds for these scores. While unary AV approaches might produce such scores, introducing a variable INLINEFORM2 would change the semantics of these approaches. Since unary AV approaches have a fixed decision criterion, they provide only a single point in the ROC space. To assess the performance of a unary AV method, it is, therefore, mandatory to consider the confusion matrix that leads to this point in the ROC space. Another implication is that unary AV methods are necessarily instance-based and, thus, require a set INLINEFORM0 of multiple documents of the known author INLINEFORM1 . If only one reference document is available ( INLINEFORM2 ), this document must be artificially turned into multiple samples from the author. In general, unary classification methods need multiple samples from the target class since it is not possible to determine a relative closeness to that class based on only one sample. On the plus side, binary-intrinsic or extrinsic AV methods benefit from the fact that we can choose among a variety of binary and INLINEFORM0 -ary classification models. However, if we consider designing a binary-intrinsic AV method, it should not be overlooked that the involved classifier will learn nothing about individual authors, but only similarities or differences that hold in general for Y and N verification problems BIBREF32 . If, on the other hand, the choice falls on a binary-extrinsic method, a strategy has to be considered for collecting representative documents for the outlier class INLINEFORM0 . Several existing methods such as BIBREF32 , BIBREF1 , BIBREF2 rely on search engines for retrieving appropriate documents, but these search engines might refuse their service if a specified quota is exhausted. Additionally, the retrieved documents render these methods inherently non-deterministic. Moreover, such methods cause relatively high runtimes BIBREF11 , BIBREF5 . Using search engines also requires an active Internet connection, which might not be available or allowed in specific scenarios. But even if we can access the Internet to retrieve documents, there is no guarantee that the true author is not among them. With these points in mind, the applicability of binary-extrinsic methods in real-world cases, i. e., in real forensic settings, remains highly questionable. Methodology In the following, we introduce our three self-compiled corpora, where each corpus represents a different challenge. Next, we describe which authorship verification approaches we considered for the experiments and classify each AV method according to the properties introduced in Section SECREF3 . Afterwards, we explain which performance measures were selected with respect to the conclusion made in Section UID17 . Finally, we describe our experiments, present the results and highlight a number of observations. Corpora A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. 
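As a small illustration of the fixed-operating-point argument above: a method with a fixed decision criterion contributes only a single (FPR, TPR) pair to ROC space, which can be read off its confusion matrix. The sketch below assumes the four confusion-matrix counts are given.

```python
# Minimal sketch: the single ROC operating point of a method with a fixed
# decision criterion, derived from its confusion-matrix counts.

def roc_point(tp, fp, tn, fn):
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # false positive rate
    return fpr, tpr
```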
Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed. As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems. As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. 
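A minimal sketch of the Y-/N-case construction procedure described above, which was applied to both corpora; the input format (a mapping from author to exactly two documents) is a hypothetical assumption, and since the description leaves open which of the other author's documents serves as the unknown text in an N-case, the sketch simply picks one.

```python
# Minimal sketch of the verification-problem construction described above:
# per author, one Y-case pairs the author's two documents, and one N-case
# pairs the first document with a document of a randomly chosen other author.
import random

def build_problems(docs_by_author, seed=0):
    rng = random.Random(seed)
    authors = list(docs_by_author)
    problems = []
    for author in authors:
        first, second = docs_by_author[author]
        problems.append({"known": first, "unknown": second, "label": "Y"})
        other = rng.choice([a for a in authors if a != author])
        problems.append({"known": first,
                         "unknown": docs_by_author[other][1],  # arbitrary pick
                         "label": "N"})
    return problems
```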
Moreover, we did not normalize words (for example, shortening words such as “nooooo” to “no”) as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style. As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that the known and unknown documents INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to what extent AV methods can be fooled in cases where the topic matches but the authorship does not, and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used a 40/60% hold-out split for the resulting 100 verification problems, where the training and test sets are entirely disjoint. Examined Authorship Verification Methods As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potential in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 . All (optimizable) AV methods were tuned regarding their hyperparameters, according to the original procedure mentioned in the respective paper. However, in the case of the binary-extrinsic methods (GenIM, ImpGI and NNCD) we had to use an alternative impostor generation strategy in our reimplementations, due to technical problems. In the respective papers, the authors used search engine queries to generate the impostor documents, which are needed to model the counter class INLINEFORM0 . Regarding our reimplementations, we used the documents from the static corpora (similarly to the idea of Kocher and Savoy BIBREF30 ) to generate the impostors in the following manner: Let INLINEFORM1 denote a corpus with INLINEFORM2 verification problems. For each INLINEFORM3 we choose all unknown documents INLINEFORM4 in INLINEFORM5 with INLINEFORM6 and append them to the impostor set INLINEFORM7 . Here, it should be highlighted that both GenIM and ImpGI consider the number of impostors as a hyperparameter such that the resulting impostor set is a subset of INLINEFORM8 . In contrast to this, NNCD considers all INLINEFORM9 as possible impostors. This fact plays an important role in the later experiments, where we compare the AV approaches to each other. Although our strategy is not as flexible as using a search engine, it has the advantage that the true author of an unknown document can be assumed not to be among the impostors, since in our corpora the user/author names are known beforehand. Performance Measures According to our extensive literature research, numerous measures (e. g., Accuracy, F INLINEFORM0 , c@1, AUC, AUC@1, INLINEFORM1 or EER) have been used so far to assess the performance of AV methods. 
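Before turning to the chosen measures, a minimal sketch of the corpus-based impostor generation strategy described above; the problem representation is the same hypothetical one as in the construction sketch further above.

```python
# Minimal sketch of the corpus-based impostor generation described above:
# for each verification problem, the unknown documents of all other problems
# in the corpus form its impostor set.

def build_impostor_sets(problems):
    return [
        [p["unknown"] for j, p in enumerate(problems) if j != i]
        for i in range(len(problems))
    ]
```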
In regard to our experiments, we decided to use c@1 and AUC for several reasons. First, Accuracy, F INLINEFORM2 and INLINEFORM3 are not applicable in cases where AV methods leave verification problems unanswered, which concerns some of our examined AV approaches. Second, using AUC alone is meaningless for non-optimizable AV methods, as explained in Section UID17 . Third, both have been used in the PAN-AV competitions BIBREF5 , BIBREF12 . Note that we also list the confusion matrix outcomes. Experiments Overall, we focus on three experiments, which are based on the corpora introduced in Section SECREF21 : The Effect of Stylistic Variation Across Large Time Spans The Effect of Topical Influence The Effect of Limited Text Length In the following each experiment is described in detail. In this experiment, we seek to answer the question if the writing style of an author INLINEFORM0 can be recognized, given a large time span between two documents of INLINEFORM1 . The motivation behind this experiment is based on the statement of Olsson BIBREF38 that language acquisition is a continuous process, which is not only acquired, but also can be lost. Therefore, an important question that arises here is, if the writing style of a person remains “stable” across a large time span, given the fact that language in each individual's life is never “fixed” BIBREF38 . Regarding this experiment, we used the INLINEFORM2 corpus. The results of the 12 examined AV methods are listed in Table TABREF41 , where it can be seen that the majority of the examined AV methods yield useful recognition results with a maximum value of 0.792 in terms of c@1. With the exception of the binary-intrinsic approach COAV, the remaining top performing methods belong to the binary-extrinsic category. This category of AV methods has also been superior in the PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 , where they outperformed binary-intrinsic and unary approaches three times in a row (2013–2015). The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand. Another interesting observation can be made regarding COAV, NNCD and OCCAV. Although all three differ regarding their model category, they use the same underlying compression algorithm (PPMd) that is responsible for generating the language model. 
While the former two approaches perform similarly well, OCCAV achieves a poor c@1 score ( INLINEFORM0 ). An obvious explanation for this is a wrongly calibrated threshold INLINEFORM1 , as can be seen from the confusion matrix, where almost all answers are N-predictions. Regarding the NNCD approach, one should consider that INLINEFORM2 is compared against INLINEFORM3 as well as INLINEFORM4 impostors within a corpus comprised of INLINEFORM5 verification problems. Therefore, a Y-result is correct with relatively high certainty (i. e., the method has high precision compared to other approaches with a similar c@1 score), as NNCD decided that author INLINEFORM6 fits best to INLINEFORM7 among INLINEFORM8 candidates. In contrast to Caravel, NNCD only retrieves the impostors from the given corpus, but it does not exploit background knowledge about the distribution of problems in the corpus. Overall, the results indicate that it is possible to recognize writing styles across large time spans. To gain more insight into the question of which features led to the correct predictions, we inspected the AVeer method. Although the method achieved only average results, it benefits from the fact that it can be interpreted easily, as it relies on a simple distance function, a fixed threshold INLINEFORM0 and predefined feature categories such as function words. Regarding the correctly recognized Y-cases, we noticed that conjunctive adverbs such as “hence”, “therefore” or “moreover” contributed most to AVeer's correct predictions. However, a more in-depth analysis is required in future work to figure out whether the decisions of the remaining methods are also primarily affected by these features. In this experiment, we investigate the question of whether the writing style of authors can be recognized under the influence of topical bias. In real-world scenarios, the topic of the documents within a verification problem INLINEFORM0 is not always known beforehand, which can lead to a serious challenge regarding the recognition of the writing style. Imagine, for example, that INLINEFORM1 consists of a known and an unknown document INLINEFORM2 and INLINEFORM3 that are written by the same author ( INLINEFORM4 ) but at the same time differ regarding their topic. In such a case, an AV method that focuses “too much” on the topic (for example, on specific nouns or phrases) will likely predict a different authorship ( INLINEFORM5 ). On the other hand, when INLINEFORM6 and INLINEFORM7 match regarding their topic while being written by different authors, a topically biased AV method might erroneously predict INLINEFORM8 . In the following, we show to what extent these assumptions hold. As a data basis for this experiment, we used the INLINEFORM0 corpus introduced in Section UID30 . The results regarding the 12 AV methods are given in Table TABREF44 , where it can be seen that our assumptions hold. All examined AV methods (with no exception) are fooled by the topical bias in the corpus. Here, the highest achieved results in terms of c@1 and AUC are very close to random guessing. A closer look at the confusion matrix outcomes reveals that some methods, for example ImpGI and OCCAV, behave almost entirely inversely to each other, where the former predicts nothing but Y and the latter nothing but N (except 1 Y). Moreover, we can assume that the lower c@1 is, the stronger the focus of the respective AV method is on the topic of the documents. 
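For reference, a minimal sketch of the c@1 computation in the formulation commonly used in the PAN shared tasks, which rewards leaving hard problems unanswered rather than answering them incorrectly; encoding unanswered cases as None is an implementation assumption.

```python
# Minimal sketch of c@1: unanswered problems (None) contribute proportionally
# to the accuracy achieved on the answered ones.

def c_at_1(predictions, gold):
    n = len(gold)
    n_correct = sum(p == g for p, g in zip(predictions, gold) if p is not None)
    n_unanswered = sum(p is None for p in predictions)
    return (n_correct + n_unanswered * n_correct / n) / n
```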
Overall, the results of this experiment suggest that none of the examined AV methods is robust against topical influence. In our third experiment, we investigate the question of how text lengths affect the results of the examined AV methods. The motivation behind this experiment is based on the observation of Stamatatos et al. BIBREF12 that text length is an important issue, which has not been thoroughly studied within authorship verification research. To address this issue, we make use of the INLINEFORM0 corpus introduced in Section UID28 . The corpus is suitable for this purpose, as it comprises a large number of verification problems, where more than 90% of all documents have sufficient text lengths ( INLINEFORM1 2,000 characters). This allows a stepwise truncation and, by this, an analysis of the effect of text length on the recognition results. However, before considering this, we first focus on the results (shown in Table TABREF46 ) after applying all 12 AV methods to the original test corpus. As can be seen in Table TABREF46 , almost all approaches perform very well with c@1 scores up to 0.991. Although these results are quite impressive, it should be noted that a large fraction of the documents comprises thousands of words. Thus, the methods can learn precise representations based on a large variety of features, which in turn enable a good determination of (dis)similarities between known/unknown documents. To investigate this issue in more detail, we constructed four versions of the test corpus and equalized the unknown document lengths to 250, 500, 1000, and 2000 characters. Then, we applied the top-performing AV methods with a c@1 value INLINEFORM0 on the four corpora. Here, we reused the same models and hyperparameters (including the decision criteria INLINEFORM1 and INLINEFORM2 ) that were determined on the training corpus. The intention behind this was to observe the robustness of the trained AV models, given the fact that during training they were confronted with longer documents. The results are illustrated in Figure FIGREF47 , where it can be observed that GLAD yields the most stable results across the four corpus versions; even for the corpus with the 250-character unknown documents, it achieves a c@1 score of 0.727. Surprisingly, Unmasking performs similarly well, despite the fact that the method has been designed for longer texts, i. e., book chunks of at least 500 words BIBREF13 . Sanderson and Guenter also point out that the Unmasking approach is less useful when dealing with relatively short texts BIBREF40 . However, our results show a different picture, at least for this corpus. One explanation for the resilience of GLAD across the varying text lengths might be its decision model INLINEFORM0 (an SVM with a linear kernel), which withstands the absence of features caused by the truncation of the documents, in contrast to the distance-based approaches AVeer, NNCD and COAV, where the decision criterion INLINEFORM1 is reflected by a simple scalar. Table TABREF48 lists the confusion matrix outcomes of the six AV methods regarding the 250-character version of INLINEFORM2 . Here, it can be seen that the underlying SVM models of GLAD and Unmasking are able to regulate their Y/N-predictions, in contrast to the three distance-based methods, where the majority of predictions fall either on the Y- or on the N-side. 
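A minimal sketch of the corpus-truncation procedure used in this experiment, as described above: only the unknown document of each test problem is cut to a fixed number of characters, while the known documents and the trained models remain untouched; the problem representation is again a hypothetical assumption.

```python
# Minimal sketch of the truncation procedure: cut each unknown document to a
# fixed number of characters and leave everything else unchanged.

def truncate_unknown(problems, max_chars):
    return [dict(p, unknown=p["unknown"][:max_chars]) for p in problems]

# The four corpus versions used above would then be obtained as, e.g.:
# versions = {n: truncate_unknown(test_problems, n) for n in (250, 500, 1000, 2000)}
```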
To gain a better picture of the stability of the decision criteria INLINEFORM0 and INLINEFORM1 of the methods, we decided to take a closer look at the ROC curves (cf. Figure FIGREF49 ) generated by GLAD, Caravel and COAV for the four corpus versions, where a number of interesting observations can be made. When focusing on AUC, it turns out that all three methods perform very similarly to each other, whereas big discrepancies between GLAD and COAV can be observed regarding c@1. When we consider the current and maximum achievable results (depicted by the circles and triangles, respectively), it becomes apparent that GLAD's model behaves stably, while that of COAV becomes increasingly vulnerable the more the documents are shortened. When looking at the ROC curve of Caravel, it can be clearly seen that the actual and maximum achievable results are very close to each other. This is not surprising, due to the fact that Caravel's threshold always lies at the median point of the ROC curve, provided that the given corpus is balanced. While inspecting the 250-character documents in more detail, we identified that they share similar vocabularies consisting of chat abbreviations such as “lol” (laughing out loud) or “k” (ok), smileys and specific obscene words. Therefore, we assume that the verification results of the examined methods are mainly caused by the similar vocabularies between the texts. Conclusion and Future Work We highlighted the problem that the underlying characteristics of authorship verification approaches have not been paid much attention in past research and that these affect the applicability of the methods in real forensic settings. Then, we proposed several properties that enable a better characterization and, by this, a better comparison between AV methods. Among others, we explained that the performance measure AUC is meaningless in regard to unary or specific non-optimizable AV methods, which involve a fixed decision criterion (for example, NNCD). Additionally, we mentioned that determinism must be fulfilled such that an AV method can be rated as reliable. Moreover, we clarified a number of misunderstandings in previous research works and proposed three clear criteria that allow classifying the model category of an AV method, which in turn influences its design and the way it should be evaluated. In regard to binary-extrinsic AV approaches, we explained which challenges exist and how they affect their applicability. In an experimental setup, we applied 12 existing AV methods to three self-compiled corpora, where the intention behind each corpus was to focus on a different aspect of the methods' applicability. Our findings regarding the examined approaches can be summarized as follows: Despite the good performance of the five AV methods GenIM, ImpGI, Unmasking, Caravel and SPATIUM, none of them can truly be considered reliable and therefore applicable in real forensic cases. The reason for this is not only the non-deterministic behavior of the methods but also their dependence (except for Unmasking) on an impostor corpus. Here, it must be guaranteed not only that the true author is not among the candidates, but also that the impostor documents are suitable such that the AV task does not inadvertently degenerate from style to topic classification. In particular, the applicability of the Caravel approach remains highly questionable, as it requires a corpus where the information regarding the Y/N-distribution is known beforehand in order to set the threshold. 
In regard to the two examined unary AV approaches MOCC and OCCAV, we observed that these perform poorly on all three corpora in comparison to the binary-intrinsic and binary-extrinsic methods. Most likely, this is caused by the wrong threshold setting, as both tend to generate more N-predictions. From the remaining approaches, GLAD and COAV seem to be a good choice for realistic scenarios. However, the former has been shown to be more robust in regard to varying text lengths given a fixed model, while the latter requires a retraining of the model (note that both performed almost equal in terms of AUC). Our hypothesis, which we leave open for future work, is that AV methods relying on a complex model INLINEFORM0 are more robust than methods based on a scalar-threshold INLINEFORM1 . Lastly, we wish to underline that all examined approaches failed in the cross-topic experiment. One possibility to counteract this is to apply text distortion techniques (for instance, BIBREF41 ) in order to control the topic influence in the documents. As one next step, we will compile additional and larger corpora to investigate the question whether the evaluation results of this paper hold more generally. Furthermore, we will address the important question how the results of AV methods can be interpreted in a more systematic manner, which will further influence the practicability of AV methods besides the proposed properties. This work was supported by the German Federal Ministry of Education and Research (BMBF) under the project "DORIAN" (Scrutinise and thwart disinformation).
restrict the content of each text to the abstract and conclusion of the original work, considered other parts of the original works such as introduction or discussion sections, extracted text portions are appropriate for the AV task, each original work was preprocessed manually, removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms
863d5c6305e5bb4b14882b85b6216fa11bcbf053
863d5c6305e5bb4b14882b85b6216fa11bcbf053_0
Q: What are the 12 AV approaches which are examined? Text: Introduction Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document INLINEFORM0 with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question whether INLINEFORM1 was in fact written by a known author INLINEFORM2 , where only a set of reference texts INLINEFORM3 of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in such scenarios, where the presence of the true author of INLINEFORM4 in the candidate set cannot be guaranteed. In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches were becoming more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics. Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge. The rest of this paper is structured as follows. Section SECREF2 discusses the related work that served as an inspiration for our analysis. Section SECREF3 comprises the proposed criteria and properties to characterize AV methods. Section SECREF4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section SECREF5 concludes the work and outlines future work. Related Work Over the years, researchers in the field of authorship analysis identified a number of challenges and limitations regarding existing studies and approaches. Azarbonyad et al. 
BIBREF8 , for example, focused on the questions if the writing styles of authors of short texts change over time and how this affects AA. To answer these questions, the authors proposed an AA approach based on time-aware language models that incorporate the temporal changes of the writing style of authors. In one of our experiments, we focus on a similar question, namely, whether it is possible to recognize the writing style of authors, despite of large time spans between their documents. However, there are several differences between our experiment and the study of Azarbonyad et al. First, the authors consider an AA task, where one anonymous document INLINEFORM0 has to be attributed to one of INLINEFORM1 possible candidate authors, while we focus on an AV task, where INLINEFORM2 is compared against one document INLINEFORM3 of a known author. Second, the authors focus on texts with informal language (emails and tweets) in their study, while in our experiment we consider documents written in a formal language (scientific works). Third, Azarbonyad et al. analyzed texts with a time span of four years, while in our experiment the average time span is 15.6 years. Fourth, in contrast to the approach of the authors, none of the 12 examined AV approaches in our experiment considers a special handling of temporal stylistic changes. In recent years, the new research field author obfuscation (AO) evolved, which concerns itself with the task to fool AA or AV methods in a way that the true author cannot be correctly recognized anymore. To achieve this, AO approaches which, according to Gröndahl and Asokan BIBREF9 can be divided into manual, computer-assisted and automatic types, perform a variety of modifications on the texts. These include simple synonym replacements, rule-based substitutions or word order permutations. In 2016, Potthast et al. BIBREF10 presented the first large-scale evaluation of three AO approaches that aim to attack 44 AV methods, which were submitted to the PAN-AV competitions during 2013-2015 BIBREF11 , BIBREF5 , BIBREF12 . One of their findings was that even basic AO approaches have a significant impact on many AV methods. More precisely, the best-performing AO approach was able to flip on average INLINEFORM0 % of an authorship verifier’s decisions towards choosing N (“different author”), while in fact Y (“same author”) was correct BIBREF10 . In contrast to Potthast et al., we do not focus on AO to measure the robustness of AV methods. Instead, we investigate in one experiment the question how trained AV models behave, if the lengths of the questioned documents are getting shorter and shorter. To our best knowledge, this question has not been addressed in previous authorship verification studies. Characteristics of Authorship Verification Before we can assess the applicability of AV methods, it is important to understand their fundamental characteristics. Due to the increasing number of proposed AV approaches in the last two decades, the need arose to develop a systematization including the conception, implementation and evaluation of authorship verification methods. In regard to this, only a few attempts have been made so far. In 2004, for example, Koppel and Schler BIBREF13 described for the first time the connection between AV and unary classification, also known as one-class classification. In 2008, Stein et al. 
BIBREF14 compiled an overview of important algorithmic building blocks for AV where, among other things, they also formulated three AV problems as decision problems. In 2009, Stamatatos BIBREF15 coined the phrases profile- and instance-based approaches that initially were used in the field of AA, but later found their way also into AV. In 2013 and 2014, Stamatatos et al. BIBREF11 , BIBREF16 introduced the terms intrinsic- and extrinsic models that aim to further distinguish between AV methods. However, a closer look at previous attempts to characterize authorship verification approaches reveals a number of misunderstandings, for instance, when it comes to draw the borders between their underlying classification models. In the following subsections, we clarify these misunderstandings, where we redefine previous definitions and propose new properties that enable a better comparison between AV methods. Reliability (Determinism) Reliability is a fundamental property any AV method must fulfill in order to be applicable in real-world forensic settings. However, since there is no consistent concept nor a uniform definition of the term “reliability” in the context of authorship verification according to the screened literature, we decided to reuse a definition from applied statistics, and adapt it carefully to AV. In his standard reference book, Bollen BIBREF17 gives a clear description for this term: “Reliability is the consistency of measurement” and provides a simple example to illustrate its meaning: At time INLINEFORM0 we ask a large number of persons the same question Q and record their responses. Afterwards, we remove their memory of the dialogue. At time INLINEFORM1 we ask them again the same question Q and record their responses again. “The reliability is the consistency of the responses across individuals for the two time periods. To the extent that all individuals are consistent, the measure is reliable” BIBREF17 . This example deals with the consistency of the measured objects as a factor for the reliability of measurements. In the case of authorship verification, the analyzed objects are static data, and hence these cannot be a source of inconsistency. However, the measurement system itself can behave inconsistently and hence unreliable. This aspect can be described as intra-rater reliability. Reliability in authorship verification is satisfied, if an AV method always generates the same prediction INLINEFORM0 for the same input INLINEFORM1 , or in other words, if the method behaves deterministically. Several AV approaches, including BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF16 fall into this category. In contrast, if an AV method behaves non-deterministically such that two different predictions for INLINEFORM2 are possible, the method can be rated as unreliable. Many AV approaches, including BIBREF4 , BIBREF13 , BIBREF26 , BIBREF1 , BIBREF27 , BIBREF3 , BIBREF28 , BIBREF29 , BIBREF30 belong to this category, since they involve randomness (e. g., weight initialization, feature subsampling, chunk generation or impostor selection), which might distort the evaluation, as every run on a test corpus very likely leads to different results. Under lab conditions, results of non-deterministic AV methods can (and should) be counteracted by averaging multiple runs. 
However, it remains highly questionable if such methods are generally applicable in realistic forensic cases, where the prediction INLINEFORM3 regarding a verification case INLINEFORM4 might sometimes result in Y and sometimes in N. Optimizability Another important property of an AV method is optimizability. We define an AV method as optimizable, if it is designed in such a way that it offers adjustable hyperparameters that can be tuned against a training/validation corpus, given an optimization method such as grid or random search. Hyperparameters might be, for instance, the selected distance/similarity function, the number of layers and neurons in a neural network or the choice of a kernel method. The majority of existing AV approaches in the literature (for example, BIBREF13 , BIBREF23 , BIBREF24 , BIBREF22 , BIBREF31 , BIBREF4 , BIBREF32 , BIBREF16 ) belong to this category. On the other hand, if a published AV approach involves hyperparameters that have been entirely fixed such that there is no further possibility to improve its performance from outside (without deviating from the definitions in the publication of the method), the method is considered to be non-optimizable. Non-optimizable AV methods are preferable in forensic settings as, here, the existence of a training/validation corpus is not always self-evident. Among the proposed AV approaches in the respective literature, we identified only a small fraction BIBREF21 , BIBREF2 , BIBREF30 that fall into this category. Model Category From a machine learning point of view, authorship verification represents a unary classification problem BIBREF22 , BIBREF13 , BIBREF16 , BIBREF33 , BIBREF14 . Yet, in the literature, it can be observed that sometimes AV is treated as a unary BIBREF25 , BIBREF23 , BIBREF26 , BIBREF16 and sometimes as a binary classification task BIBREF30 , BIBREF32 , BIBREF22 , BIBREF2 . We define the way an AV approach is modeled by the phrase model category. However, before explaining this in more detail, we wish to recall what unary/one-class classification exactly represents. For this, we list the following verbatim quotes, which characterize one-class classification, as can be seen, almost identically (emphasis by us): “In one-class classification it is assumed that only information of one of the classes, the target class, is available. This means that just example objects of the target class can be used and that no information about the other class of outlier objects is present.” BIBREF34 “One-class classification (OCC) [...] consists in making a description of a target class of objects and in detecting whether a new object resembles this class or not. [...] The OCC model is developed using target class samples only.” BIBREF35 “In one-class classification framework, an object is classified as belonging or not belonging to a target class, while only sample examples of objects from the target class are available during the training phase.” BIBREF25 Note that in the context of authorship verification, target class refers to the known author INLINEFORM0 such that for a document INLINEFORM1 of an unknown author INLINEFORM2 the task is to verify whether INLINEFORM3 holds. One of the most important requirements of any existing AV method is a decision criterion, which aims to accept or reject a questioned authorship. A decision criterion can be expressed through a simple scalar threshold INLINEFORM4 or a more complex model INLINEFORM5 such as a hyperplane in a high-dimensional feature space. 
As a consequence of the above statements, the determination of INLINEFORM6 or INLINEFORM7 has to be performed solely on the basis of INLINEFORM8 , otherwise the AV method cannot be considered to be unary. However, our conducted literature research regarding existing AV approaches revealed that there are uncertainties how to precisely draw the borders between unary and binary AV methods (for instance, BIBREF36 , BIBREF16 , BIBREF33 ). Nonetheless, few attempts have been made to distinguish both categories from another perspective. Potha and Stamatatos BIBREF33 , for example, categorize AV methods as either intrinsic or extrinsic (emphasis by us): “Intrinsic verification models view it [i. e., the verification task] as a one-class classification task and are based exclusively on analysing the similarity between [ INLINEFORM0 ] and [ INLINEFORM1 ]. [...] Such methods [...] do not require any external resources.” BIBREF33 “On the other hand, extrinsic verification models attempt to transform the verification task to a pair classification task by considering external documents to be used as samples of the negative class.” BIBREF33 While we agree with statement (2), the former statement (1) is unsatisfactory, as intrinsic verification models are not necessarily unary. For example, the AV approach GLAD proposed by Hürlimann et al. BIBREF22 directly contradicts statement (1). Here, the authors “decided to cast the problem as a binary classification task where class values are Y [ INLINEFORM0 ] and N [ INLINEFORM1 ]. [...] We do not introduce any negative examples by means of external documents, thus adhering to an intrinsic approach.” BIBREF22 . A misconception similar to statement (1) can be observed in the paper of Jankowska et al. BIBREF24 , who introduced the so-called CNG approach claimed to be a one-class classification method. CNG is intrinsic in that way that it considers only INLINEFORM0 when deciding a problem INLINEFORM1 . However, the decision criterion, which is a threshold INLINEFORM2 , is determined on a set of verification problems, labeled either as Y or N. This incorporates “external resources” for defining the decision criterion, and it constitutes an implementation of binary classification between Y and N in analogy to the statement of Hürlimann et al. BIBREF22 mentioned above. Thus, CNG is in conflict with the unary definition mentioned above. In a subsequent paper BIBREF25 , however, Jankowska et al. refined their approach and introduced a modification, where INLINEFORM3 was determined solely on the basis of INLINEFORM4 . Thus, the modified approach can be considered as a true unary AV method, according to the quoted definitions for unary classification. In 2004, Koppel and Schler BIBREF13 presented the Unmasking approach which, according to the authors, represents a unary AV method. However, if we take a closer look at the learning process of Unmasking, we can see that it is based on a binary SVM classifier that consumes feature vectors (derived from “degradation curves”) labeled as Y (“same author”) or N (“different author”). Unmasking, therefore, cannot be considered to be unary as the decision is not solely based on the documents within INLINEFORM0 , in analogy to the CNG approach of Jankowska et al. BIBREF24 discussed above. 
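The distinction discussed above can be made concrete with a small, hypothetical sketch (the distance function and data are invented, not taken from CNG or Unmasking): a unary method derives its decision threshold solely from the known documents of the target author, whereas a binary-intrinsic method fits it on a set of verification problems labeled Y or N.

```python
def dissimilarity(doc_a, doc_b):
    # Hypothetical stand-in for a stylometric distance (e.g., based on character n-grams).
    return abs(len(doc_a) - len(doc_b)) / max(len(doc_a), len(doc_b))

def unary_threshold(known_docs):
    # Unary: theta is derived only from the target author's own documents,
    # e.g., the largest pairwise dissimilarity observed among them.
    return max(dissimilarity(a, b) for a in known_docs for b in known_docs if a is not b)

def binary_intrinsic_threshold(training_problems):
    # Binary-intrinsic: theta is fitted on problems labeled Y/N (an "external resource"),
    # here simply the midpoint between the mean Y-score and the mean N-score.
    y = [dissimilarity(p["known"], p["unknown"]) for p in training_problems if p["label"] == "Y"]
    n = [dissimilarity(p["known"], p["unknown"]) for p in training_problems if p["label"] == "N"]
    return (sum(y) / len(y) + sum(n) / len(n)) / 2

known = ["a short known document", "another known document by the same author"]
train = [{"known": "aaa bbb", "unknown": "aaa bb", "label": "Y"},
         {"known": "aaa bbb", "unknown": "a completely different text", "label": "N"}]
print(unary_threshold(known), binary_intrinsic_threshold(train))
```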
It should be highlighted again that the aforementioned three approaches are binary-intrinsic since their decision criteria INLINEFORM1 or INLINEFORM2 was determined on a set of problems labeled in a binary manner (Y and N) while after training, the verification is performed in an intrinsic manner, meaning that INLINEFORM3 and INLINEFORM4 are compared against INLINEFORM5 or INLINEFORM6 but not against documents within other verification problems (cf. Figure FIGREF15 ). A crucial aspect, which might have lead to misperceptions regarding the model category of these approaches in the past, is the fact that two different class domains are involved. On the one hand, there is the class domain of authors, where the task is to distinguish INLINEFORM7 and INLINEFORM8 . On the other hand, there is the elevated or lifted domain of verification problem classes, which are Y and N. The training phase of binary-intrinsic approaches is used for learning to distinguish these two classes, and the verification task can be understood as putting the verification problem as a whole into class Y or class N, whereby the class domain of authors fades from the spotlight (cf. Figure FIGREF15 ). Besides unary and binary-intrinsic methods, there is a third category of approaches, namely binary-extrinsic AV approaches (for example, BIBREF3 , BIBREF30 , BIBREF29 , BIBREF37 , BIBREF32 , BIBREF1 , BIBREF2 ). These methods use external documents during a potentially existing training phase and – more importantly – during testing. In these approaches, the decision between INLINEFORM0 and INLINEFORM1 is put into the focus, where the external documents aim to construct the counter class INLINEFORM2 . Based on the above observations, we conclude that the key requirement for judging the model category of an AV method depends solely on the aspect how its decision criterion INLINEFORM0 or INLINEFORM1 is determined (cf. Figure FIGREF15 ): An AV method is unary if and only if its decision criterion INLINEFORM0 or INLINEFORM1 is determined solely on the basis of the target class INLINEFORM2 during testing. As a consequence, an AV method cannot be considered to be unary if documents not belonging to INLINEFORM3 are used to define INLINEFORM4 or INLINEFORM5 . An AV method is binary-intrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined on a training corpus comprising verification problems labeled either as Y or N (in other words documents of several authors). However, once the training is completed, a binary-intrinsic method has no access to external documents anymore such that the decision regarding the authorship of INLINEFORM2 is made on the basis of the reference data of INLINEFORM3 as well as INLINEFORM4 or INLINEFORM5 . An AV method is binary-extrinsic if its decision criterion INLINEFORM0 or INLINEFORM1 is determined during testing on the basis of external documents that represent the outlier class INLINEFORM2 . Note that optimizable AV methods such as BIBREF18 , BIBREF25 are not excluded to be unary. Provided that INLINEFORM0 or INLINEFORM1 is not subject of the optimization procedure, the model category remains unary. The reason for this is obvious; Hyperparameters might influence the resulting performance of unary AV methods. The decision criterion itself, however, remains unchanged. Implications Each model category has its own implications regarding prerequisites, evaluability, and applicability. 
One advantage of unary AV methods is that they do not require a specific document collection strategy to construct the counter class INLINEFORM0 , which reduces their complexity. On the downside, the choice of the underlying machine learning model of a unary AV approach is restricted to one-class classification algorithms or unsupervised learning techniques, given a suitable decision criterion. However, a far more important implication of unary AV approaches concerns their performance assessment. Since unary classification (not necessarily AV) approaches depend on a fixed decision criterion INLINEFORM0 or INLINEFORM1 , performance measures such as the area under the ROC curve (AUC) are meaningless. Recall that ROC analysis is used for evaluating classifiers, where the decision threshold is not finally fixed. ROC analysis requires that the classifier generates scores, which are comparable across classification problem instances. The ROC curve and the area under this curve is then computed by considering all possible discrimination thresholds for these scores. While unary AV approaches might produce such scores, introducing a variable INLINEFORM2 would change the semantics of these approaches. Since unary AV approaches have a fixed decision criterion, they provide only a single point in the ROC space. To assess the performance of a unary AV method, it is, therefore, mandatory to consider the confusion matrix that leads to this point in the ROC space. Another implication is that unary AV methods are necessarily instance-based and, thus, require a set INLINEFORM0 of multiple documents of the known author INLINEFORM1 . If only one reference document is available ( INLINEFORM2 ), this document must be artificially turned into multiple samples from the author. In general, unary classification methods need multiple samples from the target class since it is not possible to determine a relative closeness to that class based on only one sample. On the plus side, binary-intrinsic or extrinsic AV methods benefit from the fact that we can choose among a variety of binary and INLINEFORM0 -ary classification models. However, if we consider designing a binary-intrinsic AV method, it should not be overlooked that the involved classifier will learn nothing about individual authors, but only similarities or differences that hold in general for Y and N verification problems BIBREF32 . If, on the other hand, the choice falls on a binary-extrinsic method, a strategy has to be considered for collecting representative documents for the outlier class INLINEFORM0 . Several existing methods such as BIBREF32 , BIBREF1 , BIBREF2 rely on search engines for retrieving appropriate documents, but these search engines might refuse their service if a specified quota is exhausted. Additionally, the retrieved documents render these methods inherently non-deterministic. Moreover, such methods cause relatively high runtimes BIBREF11 , BIBREF5 . Using search engines also requires an active Internet connection, which might not be available or allowed in specific scenarios. But even if we can access the Internet to retrieve documents, there is no guarantee that the true author is not among them. With these points in mind, the applicability of binary-extrinsic methods in real-world cases, i. e., in real forensic settings, remains highly questionable. Methodology In the following, we introduce our three self-compiled corpora, where each corpus represents a different challenge. 
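Returning to the evaluation point made above about fixed decision criteria, the following toy sketch (scores are invented; scikit-learn is assumed to be available) contrasts a score-producing method, for which AUC is meaningful, with a fixed-criterion method, which commits to one threshold and therefore yields only a single confusion matrix, i.e., one point in ROC space:

```python
from sklearn.metrics import roc_auc_score, confusion_matrix

# Toy data: 1 = same author (Y), 0 = different author (N).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.4, 0.35, 0.2, 0.6, 0.8, 0.1]   # hypothetical similarity scores

# A score-based method can be swept over all thresholds -> AUC is meaningful.
print("AUC:", roc_auc_score(y_true, scores))

# A fixed-criterion method commits to one threshold, e.g. theta = 0.5 ...
theta = 0.5
y_pred = [1 if s >= theta else 0 for s in scores]

# ... so it contributes only a single confusion matrix / ROC point.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("single ROC point:", (fp / (fp + tn), tp / (tp + fn)))
```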
Next, we describe which authorship verification approaches we considered for the experiments and classify each AV method according to the properties introduced in Section SECREF3 . Afterwards, we explain which performance measures were selected with respect to the conclusion made in Section UID17 . Finally, we describe our experiments, present the results and highlight a number of observations. Corpora A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 . In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans where the documents have been written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23 . Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed. As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. Overall, the documents were written by 40 researchers, where for each author INLINEFORM1 , there are exactly two documents. Given the 80 documents, we constructed for each author INLINEFORM2 two verification problems INLINEFORM3 (a Y-case) and INLINEFORM4 (an N-case). For INLINEFORM5 we set INLINEFORM6 's first document as INLINEFORM7 and the second document as INLINEFORM8 . For INLINEFORM9 we reuse INLINEFORM10 from INLINEFORM11 as the known document and selected a text from another (random) author as the unknown document. The result of this procedure is a set of 80 verification problems, which we split into a training and test set based on a 40/60% ratio. Where possible, we tried to restrict the content of each text to the abstract and conclusion of the original work. However, since in many cases these sections were too short, we also considered other parts of the original works such as introduction or discussion sections. To ensure that the extracted text portions are appropriate for the AV task, each original work was preprocessed manually. More precisely, we removed tables, formulas, citations, quotes and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of an author is 15.6 years. The minimum and maximum time span are 6 and 40 years, respectively. Besides the temporal aspect of INLINEFORM12 , another challenge of this corpus is the formal (scientific) language, where the usage of stylistic devices is more restricted, in contrast to other genres such as novels or poems. As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. The chat conversations stem from a variety of sources including emails and instant messengers (e. 
g., MSN, AOL or Yahoo), where for each conversation, we ensured that only chat lines from the offender were extracted. We applied the same problem construction procedure as for the corpus INLINEFORM1 , which resulted in 1,100 verification problems that again were split into a training and test set given a 40/60% ratio. In contrast to the corpus INLINEFORM2 , we only performed slight preprocessing. Essentially, we removed user names, time-stamps, URLs, multiple blanks as well as annotations that were not part of the original conversations from all chat lines. Moreover, we did not normalize words (for example, shorten words such as “nooooo” to “no”) as we believe that these represent important style markers. Furthermore, we did not remove newlines between the chat lines, as the positions of specific words might play an important role regarding the individual's writing style. As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform. Overall, the postings were written by 100 Reddit users and stem from a variety of subreddits. In order to construct the Y-cases, we selected exactly two postings from disjoint subreddits for each user such that both the known and unknown document INLINEFORM1 and INLINEFORM2 differ in their topic. Regarding the N-cases, we applied the opposite strategy such that INLINEFORM3 and INLINEFORM4 belong to the same topic. The rationale behind this is to figure out to which extent AV methods can be fooled in cases, where the topic matches but not the authorship and vice versa. Since for this specific corpus we have to control the topics of the documents, we did not perform the same procedure applied for INLINEFORM5 and INLINEFORM6 to construct the training and test sets. Instead, we used for the resulting 100 verification problems a 40/60% hold-out split, where both training and test set are entirely disjoint. Examined Authorship Verification Methods As a basis for our experiments, we reimplemented 12 existing AV approaches, which have shown their potentials in the previous PAN-AV competitions BIBREF11 , BIBREF12 as well as in a number of AV studies. The methods are listed in Table TABREF33 together with their classifications regarding the AV characteristics, which we proposed in Section SECREF3 . All (optimizable) AV methods were tuned regarding their hyperparameters, according to the original procedure mentioned in the respective paper. However, in the case of the binary-extrinsic methods (GenIM, ImpGI and NNCD) we had to use an alternative impostors generation strategy in our reimplementations, due to technical problems. In the respective papers, the authors used search engine queries to generate the impostor documents, which are needed to model the counter class INLINEFORM0 . Regarding our reimplementations, we used the documents from the static corpora (similarly to the idea of Kocher and Savoy BIBREF30 ) to generate the impostors in the following manner: Let INLINEFORM1 denote a corpus with INLINEFORM2 verification problems. For each INLINEFORM3 we choose all unknown documents INLINEFORM4 in INLINEFORM5 with INLINEFORM6 and append them the impostor set INLINEFORM7 . Here, it should be highlighted that both GenIM and ImpGI consider the number of impostors as a hyperparameter such that the resulting impostor set is a subset of INLINEFORM8 . In contrast to this, NNCD considers all INLINEFORM9 as possible impostors. 
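A minimal sketch of this impostor-generation strategy (identifiers and corpus layout are hypothetical, not the original implementation): for each verification problem, the unknown documents of all other problems in the corpus serve as candidate impostors.

```python
# Each verification problem holds the known documents and one unknown document.
corpus = [
    {"id": 1, "known": ["doc_a1", "doc_a2"], "unknown": "u1"},
    {"id": 2, "known": ["doc_b1"],           "unknown": "u2"},
    {"id": 3, "known": ["doc_c1"],           "unknown": "u3"},
]

def impostors_for(problem, corpus, n_impostors=None):
    # Collect the unknown documents of all *other* problems as impostors.
    # GenIM/ImpGI-style methods would subsample n_impostors of them (a hyperparameter),
    # while an NNCD-style method would keep them all.
    pool = [p["unknown"] for p in corpus if p["id"] != problem["id"]]
    return pool if n_impostors is None else pool[:n_impostors]

for p in corpus:
    print(p["id"], impostors_for(p, corpus))
```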
This fact plays an important role in the later experiments, where we compare the AV approaches to each other. Although our strategy is not flexible like using a search engine, it has one advantage that, here, it is assumed that the true author of an unknown document is not among the impostors, since in our corpora the user/author names are known beforehand. Performance Measures According to our extensive literature research, numerous measures (e. g., Accuracy, F INLINEFORM0 , c@1, AUC, AUC@1, INLINEFORM1 or EER) have been used so far to assess the performance of AV methods. In regard to our experiments, we decided to use c@1 and AUC for several reasons. First, Accuracy, F INLINEFORM2 and INLINEFORM3 are not applicable in cases where AV methods leave verification problems unanswered, which concerns some of our examined AV approaches. Second, using AUC alone is meaningless for non-optimizable AV methods, as explained in Section UID17 . Third, both have been used in the PAN-AV competitions BIBREF5 , BIBREF12 . Note that we also list the confusion matrix outcomes. Experiments Overall, we focus on three experiments, which are based on the corpora introduced in Section SECREF21 : The Effect of Stylistic Variation Across Large Time Spans The Effect of Topical Influence The Effect of Limited Text Length In the following each experiment is described in detail. In this experiment, we seek to answer the question if the writing style of an author INLINEFORM0 can be recognized, given a large time span between two documents of INLINEFORM1 . The motivation behind this experiment is based on the statement of Olsson BIBREF38 that language acquisition is a continuous process, which is not only acquired, but also can be lost. Therefore, an important question that arises here is, if the writing style of a person remains “stable” across a large time span, given the fact that language in each individual's life is never “fixed” BIBREF38 . Regarding this experiment, we used the INLINEFORM2 corpus. The results of the 12 examined AV methods are listed in Table TABREF41 , where it can be seen that the majority of the examined AV methods yield useful recognition results with a maximum value of 0.792 in terms of c@1. With the exception of the binary-intrinsic approach COAV, the remaining top performing methods belong to the binary-extrinsic category. This category of AV methods has also been superior in the PAN-AV competitions BIBREF11 , BIBREF5 , BIBREF12 , where they outperformed binary-intrinsic and unary approaches three times in a row (2013–2015). The top performing approaches Caravel, COAV and NNCD deserve closer attention. All three are based on character-level language models that capture low-level features similar to character INLINEFORM0 -grams, which have been shown in numerous AA and AV studies (for instance, BIBREF39 , BIBREF26 ) to be highly effective and robust. In BIBREF19 , BIBREF28 , it has been shown that Caravel and COAV were also the two top-performing approaches, where in BIBREF19 they were evaluated on the PAN-2015 AV corpus BIBREF12 , while in BIBREF28 they were applied on texts obtained from Project Gutenberg. Although both approaches perform similarly, they differ in the way how the decision criterion INLINEFORM1 is determined. While COAV requires a training corpus to learn INLINEFORM2 , Caravel assumes that the given test corpus (which provides the impostors) is balanced. 
Given this assumption, Caravel first computes similarity scores for all verification problems in the corpus and then sets INLINEFORM3 to the median of all similarities (cf. Figure FIGREF49 ). Thus, from a machine learning perspective, there is some undue training on the test set. Moreover, the applicability of Caravel in realistic scenarios is questionable, as a forensic case is not part of a corpus where the Y/N-distribution is known beforehand. Another interesting observation can be made regarding COAV, NNCD and OCCAV. Although all three differ regarding their model category, they use the same underlying compression algorithm (PPMd) that is responsible for generating the language model. While the former two approaches perform similarly well, OCCAV achieves a poor c@1 score ( INLINEFORM0 ). An obvious explanation for this is a wrongly calibrated threshold INLINEFORM1 , as can be seen from the confusion matrix, where almost all answers are N-predictions. Regarding the NNCD approach, one should consider that INLINEFORM2 is compared against INLINEFORM3 as well as INLINEFORM4 impostors within a corpus comprised of INLINEFORM5 verification problems. Therefore, a Y-result is correct with relatively high certainty (i. e., the method has high precision compared to other approaches with a similar c@1 score), as NNCD decided that author INLINEFORM6 fits best to INLINEFORM7 among INLINEFORM8 candidates. In contrast to Caravel, NNCD only retrieves the impostors from the given corpus, but it does not exploit background knowledge about the distribution of problems in the corpus. Overall, the results indicate that it is possible to recognize writing styles across large time spans. To gain more insights regarding the question which features led to the correct predictions, we inspected the AVeer method. Although the method achieved only average results, it benefits from the fact that it can be interpreted easily, as it relies on a simple distance function, a fixed threshold INLINEFORM0 and predefined feature categories such as function words. Regarding the correctly recognized Y-cases, we noticed that conjunctive adverbs such as “hence”, “therefore” or “moreover” contributed mostly to AVeer's correct predictions. However, a more in-depth analysis is required in future work to figure out whether the decisions of the remaining methods are also primarily affected by these features. In this experiment, we investigate the question if the writing style of authors can be recognized under the influence of topical bias. In real-world scenarios, the topic of the documents within a verification problem INLINEFORM0 is not always known beforehand, which can lead to a serious challenge regarding the recognition of the writing style. Imagine, for example, that INLINEFORM1 consists of a known and unknown document INLINEFORM2 and INLINEFORM3 that are written by the same author ( INLINEFORM4 ) while at the same time differ regarding their topic. In such a case, an AV method that it focusing “too much” on the topic (for example on specific nouns or phrases) will likely predict a different authorship ( INLINEFORM5 ). On the other hand, when INLINEFORM6 and INLINEFORM7 match regarding their topic, while being written by different authors, a topically biased AV method might erroneously predict INLINEFORM8 . In the following we show to which extent these assumptions hold. As a data basis for this experiment, we used the INLINEFORM0 corpus introduced in Section UID30 . 
The results regarding the 12 AV methods are given in Table TABREF44 , where it can be seen that our assumptions hold. All examined AV methods (with no exception) are fooled by the topical bias in the corpus. Here, the highest achieved results in terms of c@1 and AUC are very close to random guessing. A closer look at the confusion matrix outcomes reveals that some methods, for example ImpGI and OCCAV, perform almost entirely inverse to each other, where the former predicts nothing but Y and the latter nothing but N (except 1 Y). Moreover, we can assume that the lower c@1 is, the stronger is the focus of the respective AV method on the topic of the documents. Overall, the results of this experiment suggest that none of the examined AV methods is robust against topical influence. In our third experiment, we investigate the question how text lengths affect the results of the examined AV methods. The motivation behind this experiment is based on the observation of Stamatatos et al. BIBREF12 that text length is an important issue, which has not been thoroughly studied within authorship verification research. To address this issue, we make use of the INLINEFORM0 corpus introduced in Section UID28 . The corpus is suitable for this purpose, as it comprises a large number of verification problems, where more than 90% of all documents have sufficient text lengths ( INLINEFORM1 2,000 characters). This allows a stepwise truncation and by this to analyze the effect between the text lengths and the recognition results. However, before considering this, we first focus on the results (shown in Table TABREF46 ) after applying all 12 AV methods on the original test corpus. As can be seen in Table TABREF46 , almost all approaches perform very well with c@1 scores up to 0.991. Although these results are quite impressive, it should be noted that a large fraction of the documents comprises thousands of words. Thus, the methods can learn precise representations based on a large variety of features, which in turn enable a good determination of (dis)similarities between known/unknown documents. To investigate this issue in more detail, we constructed four versions of the test corpus and equalized the unknown document lengths to 250, 500, 1000, and 2000 characters. Then, we applied the top performing AV methods with a c@1 value INLINEFORM0 on the four corpora. Here, we reused the same models and hyperparameters (including the decision criteria INLINEFORM1 and INLINEFORM2 ) that were determined on the training corpus. The intention behind this was to observe the robustness of the trained AV models, given the fact that during training they were confronted with longer documents. The results are illustrated in Figure FIGREF47 , where it can be observed that GLAD yields the most stable results across the four corpora versions, where even for the corpus with the 250 characters long unknown documents, it achieves a c@1 score of 0.727. Surprisingly, Unmasking performs similarly well, despite of the fact that the method has been designed for longer texts i. e., book chunks of at least 500 words BIBREF13 . Sanderson and Guenter also point out that the Unmasking approach is less useful when dealing with relatively short texts BIBREF40 . However, our results show a different picture, at least for this corpus. 
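A small, purely illustrative sketch of the truncation protocol described above (the verifier below is a stand-in, not one of the examined methods): the unknown documents are cut to a fixed number of characters and the already-trained models are re-applied unchanged.

```python
def truncate_unknown(problems, max_chars):
    # Return a copy of the test problems with each unknown document cut to max_chars characters.
    return [dict(p, unknown=p["unknown"][:max_chars]) for p in problems]

# Hypothetical test problems and a fixed, already-trained verifier.
test_problems = [{"known": ["..."], "unknown": "lol k see u l8r " * 200}]

def trained_verifier(problem):            # stand-in for GLAD, COAV, etc.
    return "Y" if len(problem["unknown"]) > 300 else "N"

for max_chars in (250, 500, 1000, 2000):
    shortened = truncate_unknown(test_problems, max_chars)
    print(max_chars, [trained_verifier(p) for p in shortened])
```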
One explanation of the resilience of GLAD across the varying text lengths might be due to its decision model INLINEFORM0 (an SVM with a linear kernel) that withstands the absence of missing features caused by the truncation of the documents, in contrast to the distance-based approaches AVeer, NNCD and COAV, where the decision criterion INLINEFORM1 is reflected by a simple scalar. Table TABREF48 lists the confusion matrix outcomes of the six AV methods regarding the 250 characters version of INLINEFORM2 . Here, it can be seen that the underlying SVM model of GLAD and Unmasking is able to regulate its Y/N-predictions, in contrast to the three distance-based methods, where the majority of predictions fall either on the Y- or on the N-side. To gain a better picture regarding the stability of the decision criteria INLINEFORM0 and INLINEFORM1 of the methods, we decided to take a closer look on the ROC curves (cf. Figure FIGREF49 ) generated by GLAD, Caravel and COAV for the four corpora versions, where a number of interesting observations can be made. When focusing on AUC, it turns out that all three methods perform very similar to each other, whereas big discrepancies between GLAD and COAV can be observed regarding c@1. When we consider the current and maximum achievable results (depicted by the circles and triangles, respectively) it becomes apparent that GLAD's model behaves stable, while the one of COAV becomes increasingly vulnerable the more the documents are shortened. When looking at the ROC curve of Caravel, it can be clearly seen that the actual and maximum achievable results are very close to each other. This is not surprising, due to the fact that Caravel's threshold always lies at the median point of the ROC curve, provided that the given corpus is balanced. While inspecting the 250 characters long documents in more detail, we identified that they share similar vocabularies consisting of chat abbreviations such as “lol” (laughing out loud) or “k” (ok), smileys and specific obscene words. Therefore, we assume that the verification results of the examined methods are mainly caused by the similar vocabularies between the texts. Conclusion and Future Work We highlighted the problem that underlying characteristics of authorship verification approaches have not been paid much attention in the past research and that these affect the applicability of the methods in real forensic settings. Then, we proposed several properties that enable a better characterization and by this a better comparison between AV methods. Among others, we explained that the performance measure AUC is meaningless in regard to unary or specific non-optimizable AV methods, which involve a fixed decision criterion (for example, NNCD). Additionally, we mentioned that determinism must be fulfilled such that an AV method can be rated as reliable. Moreover, we clarified a number of misunderstandings in previous research works and proposed three clear criteria that allow to classify the model category of an AV method, which in turn influences its design and the way how it should be evaluated. In regard to binary-extrinsic AV approaches, we explained which challenges exist and how they affect their applicability. In an experimental setup, we applied 12 existing AV methods on three self-compiled corpora, where the intention behind each corpus was to focus on a different aspect of the methods applicability. 
Our findings regarding the examined approaches can be summarized as follows: Despite the good performance of the five AV methods GenIM, ImpGI, Unmasking, Caravel and SPATIUM, none of them can be truly considered reliable and therefore applicable in real forensic cases. The reason for this is not only the non-deterministic behavior of the methods but also their dependence (with the exception of Unmasking) on an impostor corpus. Here, it must be guaranteed not only that the true author is not among the candidates, but also that the impostor documents are suitable, such that the AV task does not inadvertently degenerate from style to topic classification. In particular, the applicability of the Caravel approach remains highly questionable, as it requires a corpus where the information regarding the Y/N-distribution is known beforehand in order to set the threshold. In regard to the two examined unary AV approaches MOCC and OCCAV, we observed that these perform poorly on all three corpora in comparison to the binary-intrinsic and binary-extrinsic methods. Most likely, this is caused by a wrong threshold setting, as both tend to generate more N-predictions. From the remaining approaches, GLAD and COAV seem to be a good choice for realistic scenarios. However, the former has been shown to be more robust with regard to varying text lengths given a fixed model, while the latter requires a retraining of the model (note that both performed almost equally in terms of AUC). Our hypothesis, which we leave open for future work, is that AV methods relying on a complex model INLINEFORM0 are more robust than methods based on a scalar threshold INLINEFORM1. Lastly, we wish to underline that all examined approaches failed in the cross-topic experiment. One possibility to counteract this is to apply text distortion techniques (for instance, BIBREF41) in order to control the topic influence in the documents. As a next step, we will compile additional and larger corpora to investigate the question of whether the evaluation results of this paper hold more generally. Furthermore, we will address the important question of how the results of AV methods can be interpreted in a more systematic manner, which will further influence the practicability of AV methods beyond the proposed properties. This work was supported by the German Federal Ministry of Education and Research (BMBF) under the project "DORIAN" (Scrutinise and thwart disinformation).
MOCC, OCCAV, COAV, AVeer, GLAD, DistAV, Unmasking, Caravel, GenIM, ImpGI, SPATIUM and NNCD
37c7c62c9216d6cf3d0858cf1deab6db4b815384
37c7c62c9216d6cf3d0858cf1deab6db4b815384_0
Q: how was annotation done? Text: Introduction In contrast to traditional content distribution channels like television, radio and newspapers, the Internet opened the door for direct interaction between the content creator and its audience. One of these forms of interaction is the presence of comments sections that are found on many websites. The comments section allows visitors, authenticated in some cases and unauthenticated in others, to leave a message for others to read. This is a type of multi-party asynchronous conversation that offers interesting insights: one can learn what the commenting community is thinking about the topic being discussed, their sentiment, and their recommendations, among many other things. There are some comment sections in which commentators are allowed to directly respond to others, creating a comment hierarchy. These kinds of written conversations are interesting because they shed light on the types of interaction between participants under minimal supervision. This lack of supervision and, in some forums, anonymity give rise to interactions that may not necessarily be related to the original topic being discussed, and, as in regular conversations, there are participants who do not have the best intentions. Such participants are called trolls in some communities. Even though there are some studies related to trolls in different research communities, there is a lack of attention from the NLP community. We aim to reduce this gap by presenting a comprehensive categorization of trolling and by proposing two models to predict trolling aspects. First, we review some trolling definitions: "Trolling is the activity of posting messages via communication networks that are intended to be provocative, offensive or menacing" by BIBREF0; this definition considers trolling from the most negative perspective, where a crime might be committed. In a different tone, BIBREF1 provides a working definition for troll: "A troller is a user in a computer-mediated communication who constructs the identity of sincerely wishing to be part of the group in question, including professing, or conveying pseudo-sincere intentions, but whose real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement". These definitions inspire our trolling categorization, but first we define a trolling event: a comment in a conversation whose intention is to cause conflict or trouble; to be malicious; to purposely seek or disseminate false information or advice; to give a dishonest impression in order to deceive; or to offend, insult, or cause harm, humiliation or aggravation. Also, a troll or troller is the individual that generates a trolling event, and trolling is the overall phenomenon that involves a troll, a trolling event and the responses it generates from others. Any participant in a forum conversation may become a troll at any given point; as we will see, the addressee of a trolling event may choose to reply with a trolling comment, or counter-trolling, effectively becoming a troll as well. We believe our work makes four contributions. First, unlike previous computational work on trolling, which focused primarily on analyzing the narrative retrospectively by the victim (e.g., determining the trolling type and the role played by each participant), we study trolling by analyzing comments in a conversation, aiming instead to identify trollers, who, once identified, could be banned from posting.
Second, while previous work has focused on analyzing trolling from the troll's perspective, we additionally model trolling from the target's perspective, with the goal of understanding the psychological impact of a trolling event on the target, which we believe is equally important from a practical standpoint. Third, we propose a comprehensive categorization of trolling that covers not only the troll's intention but also the victim's and other commenters' reactions to the troll's comment. We believe such a categorization will provide a solid basis on which future computational approaches to trolling can be built. Finally, we make our annotated data set, consisting of 1000 annotated trolling events, publicly available. We believe that our data set will be a valuable resource to any researcher interested in the computational modeling of trolling. Trolling Categorization Based on the previous definitions, we identify four aspects that uniquely define a trolling event-response pair. 1) Intention: what is the purpose of the author of the comment in consideration: a) trolling, the comment is malicious in nature and aims to disrupt, annoy, offend, harm or purposely spread false information; b) playing, the comment is playful, joking or teasing others, without the malicious intentions of a); or c) none, the comment has no malicious intention and is not playful; it is a simple comment. 2) Intention Disclosure: this aspect is meant to indicate whether a trolling comment is trying to deceive its readers. The possible values for this aspect are: a) the comment's author is a troll and is trying to hide their real intentions, pretending to convey a different meaning, at least temporarily; b) the comment's author is a troll but is clearly exposing their malicious intentions; and c) the comment's author is not a troll, therefore there are no hidden or exposed malicious or playful intentions. There are two aspects defined on the comments that directly address the comment in consideration. 3) Intentions Interpretation: this aspect refers to the responder's understanding of the parent comment's intentions. The possible interpretations are the same as for the intention aspect: trolling, playing or none. The last element is the 4) Response strategy employed by the commentators directly replying to a comment, which can be a trolling event. The response strategy is influenced directly by the responder's interpretation of the parent comment's intention. We identify 14 possible response strategies. Some of these strategies are tied to combinations of the three other aspects. We briefly define each of them in the appendix. Figure FIGREF2 shows these categories as a hierarchy. Under this trolling formulation, the suspected trolling event and the responses are correlated, and one cannot independently name the response strategy without learning about the other three aspects. This is a challenging prediction problem that we address in this work. Conversations Excerpts Examples To illustrate this hierarchy, we present some examples. These are excerpts from original conversations; the first comment in each excerpt, generated by author C0, is given as a minimal piece of context, and the second comment, by author C1 in italics, is the comment suspected to be a trolling event. The rest of the comments are all direct responses to the suspected trolling comment. When a response author's "name" is the same as that of the first comment, it indicates that the same individual also replied to the suspected troll. Example 1.
- My friend who makes $20,000 a year leased a brand new Chevy Spark EV, for only $75 per month and he got a California rebate for driving an electric car. Much cheaper than buying older car which usually require heavy upkeep due to its mileage. At this point I think you're just trolling.
- Your friend has a good credit score, which can't be said about actual poor people. Did you grow up sheltered by any chance?
- Judging by your post history, you're indeed a troll. Have a good one.
In this example, when C1 asks "Did you grow up sheltered by any chance?", C1's intention is to denigrate or be offensive, and C1 is not hiding it; instead, C1 is clearly disclosing these trolling intentions. In C0's response, we see that C0 has come to the conclusion that C1 is trolling, and C0's response strategy is to frustrate the trolling event by ignoring the malicious troll's intentions. Example 2.
- What do you mean look up ?:( I don't see anything lol
- Look up! Space is cool! :)
- why must you troll me :( Keep going, no matter how many times you say it, he will keep asking
In this example, we hypothesize that C0 is requesting some information and C1 gives an answer that does not fit C0's request. We do so based on C0's last comment: C0 is showing disappointment or grievance. Also, we deduce that C1 is trying to deceive C0; therefore, C1's comment is a trolling event. This is a trolling event whose intention is to purposely convey false information while hiding its real intentions. As for the responses, in C0's last comment, C0 has finally realized or interpreted that C1's real intentions are deceiving, and since the comment shows a "sad emoticon", the reply is emotional, with aggravation, so we say that C0 got engaged. C2, on the other hand, acknowledges the malicious intent and plays along with the troll. Given these examples, we address the task of predicting the four aspects of a trolling event based on the methodology described in the next section. Corpus and Annotations We collected all available comments in the stories from Reddit from August 2015. Reddit is a popular website that allows registered users (without identity verification) to participate in forums specific to a post or topic. These forums are of the hierarchical type, i.e., those that allow nested conversations, where the children of a comment are its direct responses. To increase recall and make the annotation process feasible, we created an inverted index with Lucene and queried for comments containing the word troll with an edit distance of 1, to include close variations of this word. We do so inspired by the method of BIBREF2 to create a bullying dataset, and because we hypothesize that such comments will be related to or involved in a trolling event. As we observed in the dataset, people use the word troll in many different ways; sometimes it is to point out that some user is indeed trolling them, or to accuse someone else of being a troll. Other times, people use the term to express their frustration or dislike about a particular user, but there is no trolling event. Other times, people simply discuss trolling and trolls without actually participating in or observing one directly. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments directly involved a trolling event.
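The retrieval step described above relies on a Lucene fuzzy query; as a rough stand-in (not the authors' code), the same filter can be approximated with a plain Levenshtein check against the word troll:

```python
def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def mentions_troll(comment, max_dist=1):
    # True if any token is within edit distance 1 of "troll" (e.g. "troll", "trolls", "trol").
    return any(edit_distance(tok, "troll") <= max_dist
               for tok in comment.lower().split())

print(mentions_troll("you are such a trol lol"))   # True: "trol" is within distance 1
print(mentions_troll("nice car though"))           # False
```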
Moreover, as our trolling definition makes clear, it is possible for commentators in a conversation to believe that they are witnessing a trolling event and respond accordingly even where there is none. Therefore, even in the comments that do not involve trolling, we are interested in learning what triggers users' interpretation of trolling where it is not present and what kinds of response strategies are used. We define a suspected trolling event in our dataset as a comment in which at least one of its children contains the word troll. With the gathered comments, we reconstructed the original conversation trees, from the original post, the root, to the leaves, when they were available, and selected a subset to annotate. For annotation purposes, we created snippets of conversations like the ones shown in Example 1 and Example 2, consisting of the parent of the suspected trolling event, the suspected trolling event comment, and all of the direct responses to the suspected trolling event. We added an extra constraint that the parent of the suspected trolling event should also be part of the direct responses; we hypothesize that if the suspected trolling event is indeed trolling, its parent should be the object of its trolling and would have a say about it. We recognize that this limited amount of information is not always sufficient to recover the original message conveyed by all of the participants in the snippet, and additional context would be beneficial. However, the trade-off is that snippets like these allow us to make use of Amazon Mechanical Turk (AMT) to have the dataset annotated, because it is not a big burden for a "turker" to work on an individual snippet in exchange for a small payment, and it expedites the annotation process by distributing it over dozens of people. Specifically, for each snippet, we requested three annotators to label the four aspects previously described. Before annotating, we set up a qualification test along with borderline examples to guide them through the process and align them with our criteria. The qualification test turned out to be very selective, since only 5% of all of the turkers that attempted it passed the exam. Our dataset consists of 1000 conversations with 5868 sentences and 71033 tokens. The distribution over the classes per trolling aspect is shown in Table TABREF24 in the column "Size". Inter-Annotator Agreement. Due to the subjective nature of the task, we did not expect perfect agreement. However, we obtained substantial inter-annotator agreement, as measured by the Fleiss kappa statistic BIBREF3 for each of the trolling aspects: Intention: 0.578, Intention Disclosure: 0.556, Interpretation: 0.731 and Response: 0.632. After inspecting the dataset, we manually reconciled aspects of the threads for which the turkers' annotations reached no majority, and verified and corrected consistency across the four tasks on each thread. Trolling Events Prediction In this section we propose to solve the following problem: given a comment in a conversation suspected to be a trolling event, its parent comment and all its direct responses, we aim to predict the suspected comment's I: intention and its D: intention disclosure and, from the responders' point of view, for each response comment the R: interpretation of the suspected troll comment's intentions, and to identify its B: response strategy. This problem can be seen as a multi-task prediction. To do so, we split the dataset into training and testing sets using a 5-fold cross-validation setup.
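Purely as an illustration of the label space and snippet structure just described (the value names are paraphrased and the strategy label is only indicative; this is not code from the paper):

```python
from dataclasses import dataclass, field
from typing import List

INTENTION      = ("trolling", "playing", "none")   # I: suspected comment's intention
DISCLOSURE     = ("hidden", "exposed", "none")     # D: intention disclosure (paraphrased values)
INTERPRETATION = ("trolling", "playing", "none")   # R: responder's reading of the intention
# B: one of the 14 response strategies defined in the paper's appendix (names not reproduced here).

@dataclass
class Response:
    text: str
    interpretation: str   # one of INTERPRETATION
    strategy: str         # one of the 14 response strategies

@dataclass
class TrollingSnippet:
    parent: str           # parent of the suspected trolling comment
    suspected: str        # the suspected trolling comment itself
    intention: str        # one of INTENTION
    disclosure: str       # one of DISCLOSURE
    responses: List[Response] = field(default_factory=list)

# Hypothetical annotated snippet in the spirit of Example 1.
snippet = TrollingSnippet(
    parent="My friend leased a brand new Chevy Spark EV ...",
    suspected="Did you grow up sheltered by any chance?",
    intention="trolling",
    disclosure="exposed",
    responses=[Response("Judging by your post history, you're indeed a troll.",
                        interpretation="trolling", strategy="frustrate")],
)
```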
Feature Set For prediction we define two sets of features, a basic and an enhanced set, extracted from each of the comments in the dataset. The features are described below. N-gram features. We encode each unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include each unigram and bigram along with its POS tag, as in BIBREF2. To extract these features we used the most current version of Stanford CoreNLP BIBREF4. We also include each token's lemma, as in BIBREF5, as a binary feature. Harmful Vocabulary. In their research on bullying, BIBREF6 identified a small set of words that are highly offensive. We encode these as binary features as well. Emotion Synsets. As in BIBREF5, we extracted all lemmas associated with the WordNet BIBREF7 synsets of these emotions: anger, embarrassment, empathy, fear, pride, relief and sadness, as well as all the synonyms of these emotions extracted from the dictionary. Emoticons. Reddit comments make extensive use of emoticons. We argue that some emoticons are especially used in trolling events and to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that we use the emoticon dictionary BIBREF8, and we set a binary feature for each emoticon that is found in the dictionary. Sentiment Polarity. Using a similar idea, we hypothesize that the overall comment emotion would be useful to identify the response and intention in a trolling event. So, we apply the Vader Sentiment Polarity Analyzer BIBREF9 and include four features, one per measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real-valued number. Subjectivity Lexicon. From the MPQA Subjectivity Lexicon BIBREF10 we include all tokens that are found in the lexicon as binary features. This lexicon was created from a news domain, so the words in it do not necessarily align with the informal vocabulary used on Reddit; however, there are serious Reddit users that use proper language and formal constructions. We believe that these features will allow us to keep formal comments from being potentially labeled as trolling events, which tend to be vulgar. Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature for each word or short phrase in a comment that appears in the swearing dictionary. FrameNet. Following BIBREF11's use of FrameNet, we apply the SEMAFOR parser BIBREF12 to each sentence in every comment in the training set, and construct three different binary features: every frame name that is present in the sentence, the frame name together with the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We argue that some frames are especially interesting from the trolling perspective. For example, the frame "Deception_success" precisely models one of the trolling models, and we argue that these features will be particularly useful to identify trolling events in which semantic and not just syntactic information is necessary.
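As a rough, hypothetical sketch of how a few of the binary and real-valued features above could be extracted (the word lists are tiny placeholders, and VADER is accessed here through the vaderSentiment package; the original implementation may well differ):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Placeholder lexicons; the paper uses an emoticon dictionary (BIBREF8) and
# a manually collected list of 1,061 swear words/phrases.
EMOTICONS = {":)", ":(", ":D", ";)"}
SWEAR_WORDS = {"damn", "hell"}

analyzer = SentimentIntensityAnalyzer()

def extract_features(comment):
    tokens = comment.lower().split()
    feats = {}
    # Binary emoticon and swear-word indicators.
    for emo in EMOTICONS & set(comment.split()):
        feats["emoticon=" + emo] = 1
    for sw in SWEAR_WORDS & set(tokens):
        feats["swear=" + sw] = 1
    # Four real-valued sentiment features from VADER (neg, neu, pos, compound).
    feats.update({"vader_" + k: v for k, v in analyzer.polarity_scores(comment).items()})
    return feats

print(extract_features("why must you troll me :("))
```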
Politeness Cues. BIBREF13 identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming trolls and engaged or emotional responses would use impolite cues. On the contrary, neutralizing and frustrating responses to trolls avoid falling into confrontation, and their vocabulary tends to be more polite. So we use these cues as binary features as they appear in the comments in consideration. Baseline System The most naïve approach is to consider each of the four tasks as an independent classification problem. Such a system would be deprived of the information from the other tasks, which, as we have mentioned, is strictly necessary to make a correct prediction of the response strategy. Instead, as our baseline we follow a pipeline approach, using the task order I, D, R and B, so that the feature set of each subsequent subtask is extended with a feature for each of the previously computed subtasks. We argue that this setup is a competitive baseline, as can be verified in the results in Table TABREF24. For the classifier in the pipeline approach we choose a log-linear model, a logistic regression classifier. In addition to logistic regression, we tried the generative complement of logistic regression, naïve Bayes, and a max-margin classifier, a support vector machine, but their performance was not superior to that of logistic regression. It is noteworthy that the feature set used for intention prediction is the combined feature set of the suspected troll comment as well as its parent. We do this so that in all of our experiments the learner can take advantage of the conversation context. Joint Models The nature of this problem makes the use of a joint model a logical choice. Among different options for joint inference, we choose a (conditional) probabilistic graphical model (henceforth PGM) BIBREF15 because, in contrast to ILP formulations, it has the ability to learn parameters and not just impose hard constraints. Also, compared to Markov Logic Networks BIBREF16, a relatively recent formulation that combines logic and Markov random fields, PGMs have in practice proved to be more scalable, even though inference in general models is known to be intractable. Finally, we are also interested in choosing a PGM because it allows us to directly compare the strength of joint inference with the baseline, since our model is a collection of logistic regressors trained simultaneously. A conditional random field factorizes the conditional probability distribution over all possible values of the query variables in the model, given a set of observations, as in equation EQREF22. In our model, the query variables are the four tasks we desire to predict, INLINEFORM0, and the observations are their combined feature sets INLINEFORM1. Each of the factors INLINEFORM2 in this distribution is a log-linear model, as in equation EQREF23, and represents the probability distribution of the clique of variables INLINEFORM3 in it, given the observation set INLINEFORM4. This is identical to the independent logistic regression model described in the baseline, except for the fact that all variables or tasks are considered at the same time. To do so, we add additional factors that connect task variables to one another, permitting the flow of information from one task to the other.
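To make this factor-graph formulation concrete, here is a toy, brute-force sketch (not the authors' implementation, and restricted to a single response for brevity): with only a handful of values per task, the joint assignment can be scored by multiplying unary and pairwise factors and maximized by enumeration.

```python
from itertools import product

I_VALS = D_VALS = R_VALS = ("trolling", "playing", "none")
B_VALS = ("engage", "frustrate", "play_along")   # small illustrative subset of the 14 strategies

# Hypothetical factor tables: unary factors stand in for per-task log-linear scores,
# pairwise factors couple I-D, I-R and R-B as in Figure FIGREF15.
def unary(task, value, features):                 # stand-in for exp(w . f(value, x))
    return 1.0 + 0.1 * (hash((task, value, features)) % 10)

def pairwise(task_pair, v1, v2):                  # stand-in for exp(w . f(v1, v2))
    return 2.0 if v1 == v2 or task_pair == ("R", "B") else 1.0

def map_assignment(features):
    best, best_score = None, -1.0
    for i, d, r, b in product(I_VALS, D_VALS, R_VALS, B_VALS):
        score = (unary("I", i, features) * unary("D", d, features) *
                 unary("R", r, features) * unary("B", b, features) *
                 pairwise(("I", "D"), i, d) * pairwise(("I", "R"), i, r) *
                 pairwise(("R", "B"), r, b))
        if score > best_score:
            best, best_score = (i, d, r, b), score
    return best

print(map_assignment("toy-feature-signature"))
```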
Specifically, our model represents each task with a random variable, shown in Figure FIGREF15 (left), represented by the circles. The plate notation that surrounds variables INLINEFORM0 and INLINEFORM1 indicates that there will be as many variables INLINEFORM2 and INLINEFORM3, and edges connecting them to INLINEFORM4, as there are responses in the problem snippet. The edges connecting INLINEFORM5 and INLINEFORM6 with INLINEFORM7 attempt to model the influence of these two variables on the response, and how this information is passed along to the response strategy variable INLINEFORM8. Figure FIGREF15 (right) explicitly represents the cliques in the underlying factor graph. We can see that there are unary factors, INLINEFORM9, INLINEFORM10, INLINEFORM11 and INLINEFORM12, that model the influence of the observation features on their associated variables, just as the logistic regression model does. Factor INLINEFORM13 models the interaction between variables INLINEFORM14 and INLINEFORM15, INLINEFORM16 models the interaction between variables INLINEFORM17 and INLINEFORM18, and INLINEFORM19 models the interaction between variables INLINEFORM20 and INLINEFORM21, using a log-linear model over the possible values of the pair of variables in that particular clique. Due to the size of the model, we are able to perform exact inference at training and test time. For parameter learning we employ the limited-memory L-BFGS optimizer BIBREF17, for which we provide the cost function and gradient based on the equations described in BIBREF18. 2-pass Model A hybrid model that we experiment with performs joint inference on three of the tasks: I: intention, D: intention disclosure and R: responders' intention interpretation. The remaining task, B: response strategy, is performed in a second step, with the other three tasks as input. We do so because we observed in our experiments that the close coupling between the first three tasks allows them to perform better independently of the response strategy, as we will elaborate in the results section. DISPLAYFORM0 DISPLAYFORM1 Evaluation and Results We perform 5-fold cross-validation on the dataset. We use the first fold to tune the parameters and the remaining four folds to report results. System performance is measured using precision, recall and F-1, as shown in Table TABREF24. The left side of the table reports results obtained using the basic feature set, while the right side does so for the enhanced feature set. In order to maintain consistency, folds are created based on the threads or snippets, and in the case of the baseline system, all instances in a particular fold for the task in consideration are treated as independent of each other. In the table, rows show the per-class performance for each of the tasks, indicated by a header with the task name. For the response strategy we present results only for those class values that make up at least 5% of the total distribution; we do so because the number of labeled instances for the remaining classes is statistically insignificant compared to the majority classes. Results Discussion From the results in Table TABREF24, we observe that the hybrid model significantly outperforms the baseline, by more than 20 points on intention and intention disclosure prediction. For the response strategy, it is clear that none of the systems offers satisfying results; this showcases the difficulty of dealing with such a large number of classes. Nonetheless, the hybrid model outperforms the fully joint model and the baseline on all but one of the response strategy classes. However, the differences are far less impressive than in the other tasks.
It is surprising that the full joint model did not offer the best performance. One reason is that the intention, intention disclosure and interpretation tasks are hurt by the difficulty of learning parameters that maximize the response strategy; this last task drags the other three down in performance. Another reason is that the response strategy features are not informative enough to learn the correct concept, and, due to the joint inference process, all tasks take a hit. It is also counter-intuitive that the augmented feature set outperformed the basic one only on intention disclosure and interpretation, and only by a small margin. A likely explanation for this unexpected behavior is that the majority of the enhanced features are already represented in the basic feature set by means of the unigrams and bigrams, and the FrameNet and sentiment features are uninformative or redundant. Lastly, we observe that for the interpretation category, none of the systems was able to predict the “playing” class. This is due to the small number of instances annotated with that value, 1% of the entire dataset. We hypothesize that the annotators who labeled those instances, even when a majority agreed, incorrectly selected the playing category instead of the trolling class, and that, at the interpretation level, one can only expect to reliably differentiate between trolling and non-trolling. Related Work In this section, we discuss related work in the areas of trolling, bullying and politeness, as they intersect in their scope and at least partially address the problem presented in this work. BIBREF19 address the problem of identifying manipulation trolls in news community forums. The major difference from this work is that all of their predictions are based on meta-information such as the number of votes, dates, number of comments and so on. There is no NLP approach to the problem, and their task is limited to identifying trolls. BIBREF0 and BIBREF20 give a detailed description of trolls' personality, motivations, effects on the communities that trolls interfere with, and the criminal and psychological aspects of trolling. Their main focus is flaming trolls, but they offer no NLP insights and do not propose automated prediction tasks as in this work. In a network-oriented framework, BIBREF21 and BIBREF22 present a methodology to identify malicious individuals in a network based solely on the network's properties. Even though they present and evaluate a methodology, their focus is different from NLP. BIBREF23 proposes a method that involves NLP components, but fails to provide an evaluation of the system. Finally, BIBREF2 and BIBREF5 address bullying traces, that is, self-reported events in which individuals describe being part of bullying incidents, but their focus differs from trolling events and the interactions with other participants. Conclusion and Future Work In this paper we address the under-attended problem of trolling in Internet forums. We presented a comprehensive categorization of trolling events and defined prediction tasks that consider trolling not only from the troll's perspective but also from that of the responders to the troll's comment. We also evaluated three different models and analyzed their successes and shortcomings. Finally, we provide an annotated dataset which we hope will be useful for the research community.
We look forward to investigating trolling phenomena in larger conversations, formalizing the concept of changing roles among the participants in trolling events, and improving response strategy performance.
Annotation was done with the help of annotators from Amazon Mechanical Turk on snippets of conversations
539eb559744641e6a4aefe267cbc4c79e2bcceae
539eb559744641e6a4aefe267cbc4c79e2bcceae_0
Q: what is the source of the new dataset? Text: Introduction In contrast to traditional content distribution channels like television, radio and newspapers, Internet opened the door for direct interaction between the content creator and its audience. One of these forms of interaction is the presence of comments sections that are found in many websites. The comments section allows visitors, authenticated in some cases and unauthenticated in others, to leave a message for others to read. This is a type of multi-party asynchronous conversation that offers interesting insights: one can learn what is the commenting community thinking about the topic being discussed, their sentiment, recommendations among many other. There are some comment sections in which commentators are allowed to directly respond to others, creating a comment hierarchy. These kind of written conversations are interesting because they bring light to the types interaction between participants with minimal supervision. This lack of supervision and in some forums, anonymity, give place to interactions that may not be necessarily related with the original topic being discussed, and as in regular conversations, there are participants with not the best intentions. Such participants are called trolls in some communities. Even though there are some studies related to trolls in different research communities, there is a lack of attention from the NLP community. We aim to reduce this gap by presenting a comprehensive categorization of trolling and propose two models to predict trolling aspects. First, we revise the some trolling definitions: “Trolling is the activity of posting messages via communication networks that are in tended to be provocative, offensive or menacing” by BIBREF0 , this definition considers trolling from the most negative perspective where a crime might be committed. In a different tone, BIBREF1 provides a working definition for troll: “A troller in a user in a computer mediated communication who constructs the identity of sincerely wishing to be part of the group in question, including professing, or conveying pseudo-sincere intentions, but whose real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. These definitions inspire our trolling categorization, but first, we define a trolling event: a comment in a conversation whose intention is to cause conflict, trouble; be malicious, purposely seek or disseminate false information or advice; give a dishonest impression to deceive; offend, insult, cause harm, humiliation or aggravation. Also, a troll or troller is the individual that generates a trolling event, trolling is the overall phenomena that involves a troll, trolling event and generates responses from others. Any participant in a forum conversation may become a troll at any given point, as we will see, the addressee of a trolling event may choose to reply with a trolling comment or counter-trolling, effectively becoming a troll as well. We believe our work makes four contributions. First, unlike previous computational work on trolling, which focused primarily on analyzing the narrative retrospectively by the victim (e.g., determining the trolling type and the role played by each participant), we study trolling by analyzing comments in a conversation, aiming instead to identify trollers, who, once identified, could be banned from posting. 
Second, while previous work has focused on analyzing trolling from the troll's perspective, we additionally model trolling from the target's perspective, with the goal understanding the psychological impact of a trolling event on the target, which we believe is equally important from a practical standpoint. Third, we propose a comprehensive categorization of trolling that covers not only the troll's intention but also the victim and other commenters' reaction to the troll's comment. We believe such categorization will provide a solid basis on which future computational approaches to trolling can be built. Finally, we make our annotated data set consisting of 1000 annotated trolling events publicly available. We believe that our data set will be a valuable resource to any researcher interested in the computational modeling of trolling. Trolling Categorization Based on the previous definitions we identify four aspects that uniquely define a trolling event-response pair: 1) Intention: what is the author of the comment in consideration purpose, a) trolling, the comment is malicious in nature, aims to disrupt, annoy, offend, harm or spread purposely false information, b) playing the comment is playful, joking, teasing others without the malicious intentions as in a), or c) none, the comment has no malicious intentions nor is playful, it is a simple comment. 2) Intention Disclosure: this aspect is meant to indicate weather a trolling comment is trying to deceive its readers, the possible values for this aspect are a) the comment's author is a troll and is trying to hide its real intentions, and pretends to convey a different meaning, at least temporarily, b) the comment's author is a troll but is clearly exposing its malicious intentions and c) the comment's author is not a troll, therefore there are not hidden or exposed malicious or playful intentions. There are two aspects defined on the comments that direct address the comment in consideration, 3) Intentions Interpretation: this aspect refers to the responder's understanding of the parent's comment intentions. There possible interpretations are the same as the intentions aspect: trolling, playing or none. The last element, is the 4) Response strategy employed by the commentators directly replaying to a comment, which can be a trolling event. The response strategy is influenced directly by the responder's interpretation of the parent's comment intention. We identify 14 possible response strategies. Some of these strategies are tied with combinations of the three other aspects. We briefly define each of them in the appendix. Figure FIGREF2 shows this categories as a hierarchy. Using this trolling formulation, the suspected troll event and the responses are correlated and one cannot independently name the strategy response without learning about the other three aspects. This is challenging prediction problem that we address in this work. Conversations Excerpts Examples To illustrate this hierarchy, we present some examples. These are excerpts from original conversations; the first comment, generated by author C0, on each excerpt is given as a minimal piece of context, the second comment, by the author C1 in italics, is the comment suspected to be a trolling event. The rest of the comments, are all direct responses to the suspected trolling comment. When the response author “name” is the same as the first comment, it indicates that the that same individual also replied to the suspected troll. Example 1. 
[noitemsep,nolistsep] My friend who makes $20,000 a year leased a brand new Chevy Spark EV, for only $75 per month and he got a California rebate for driving an electric car. Much cheaper than buying older car which usually require heavy upkeep due to its mileage. At this point I think you're just trolling. [noitemsep,nolistsep] IYour friend has a good credit score, which can't be said about actual poor people. Did you grow up sheltered by any chance? [noitemsep,nolistsep] Judging by your post history, you're indeed a troll. Have a good one. In this example, when C1 asks “Did you grow up sheltered by any chance?", her intention is to denigrate or be offensive, and it is not hiding it, instead he is clearly disclosing her trolling intentions. In C0's response, we see that has came to the conclusion that C1 is trolling and his response strategy is frustrate the trolling event by ignoring the malicious troll's intentions. Example 2. [noitemsep,nolistsep] What do you mean look up ?:( I don't see anything lol [noitemsep,nolistsep] Look up! Space is cool! :) [noitemsep,nolistsep] why must you troll me :( Keep going, no matter how many times you say it, he will keep asking In this example, we hypothesize that C0 is requesting some information and C1 is given an answer that is unfit to C0's' request. We do so based on the last C0's comment; CO is showing disappointment or grievance. Also, we deduct that C1 is trying to deceive C0, therefore, C1's comment is a trolling event. This is a trolling event whose intention is to purposely convey false information, and that hiding its intentions. As for the response, in the last C0's comment, he has finally realized or interpreted that C1's real intentions are deceiving and since his comment shows a “sad emoticon” his reply is emotionally, with aggravation, so we say that CO got engaged. C2 on the other hand, acknowledges the malicious and play along with the troll. Given these examples, address the task of predicting the four aspects of a trolling event based on the methodology described in the next section. Corpus and Annotations We collected all available comments in the stories from Reddit from August 2015. Reddit is popular website that allows registered users (without identity verification) to participate in forums specific a post or topic. These forums are of they hierarchical type, those that allow nested conversation, where the children of a comment are its direct response. To increase recall and make the annotation process feasible we created an inverted index with Lucene and queried for comments containing the word troll with an edit distance of 1, to include close variations of this word. We do so inspired by the method by BIBREF2 to created a bullying dataset, and because we hypothesize that such comments will be related or involved in a trolling event. As we observed in the dataset, people use the word troll in many different ways, sometimes it is to point out that some used is indeed trolling him or her or is accusing someone else of being a troll. Other times, people use the term, to express their frustration or dislike about a particular user, but there is no trolling event. Other times, people simple discuss about trolling and trolls, without actually participating or observing one directly. Nonetheless, we found that this search produced a dataset in which 44.3 % of the comments directly involved a trolling event. 
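As a rough, purely illustrative sketch of this retrieval step: the original pipeline used a Lucene inverted index with a fuzzy (edit distance 1) query, whereas the snippet below performs the equivalent filtering in plain Python. The comments collection and its attributes are hypothetical placeholders for the Reddit dump.

```python
import re

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def mentions_troll(text, target="troll", max_dist=1):
    """True if any token in the text is within max_dist edits of the target word."""
    for tok in re.findall(r"[a-z]+", text.lower()):
        if abs(len(tok) - len(target)) <= max_dist and edit_distance(tok, target) <= max_dist:
            return True
    return False

# Hypothetical usage: keep comments with at least one child mentioning a "troll"-like token.
# suspected = [c for c in comments if any(mentions_troll(child.body) for child in c.children)]
```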
Moreover, as we exposed our trolling definition, it is possible for commentators in a conversation to believe that they are witnessing a trolling event and respond accordingly even where there is none. Therefore, even in the comments that do not involve trolling, we are interested in learning what triggers users interpretation of trolling where it is not present and what kind of response strategies are used. We define as a suspected trolling event in our dataset a comment in which at least one of its children contains the word troll. With the gathered comments, we reconstructed the original conversation trees, from the original post, the root, to the leaves, when they were available and selected a subset to annotated. For annotation purposes, we created snippets of conversations as the ones shown in Example 1 and Example 2 consisting of the parent of the suspected trolling event, the suspected trolling event comment, and all of the direct responses to the suspected trolling event. We added an extra constraint that the parent of the suspected trolling event should also be part of the direct responses, we hypothesize that if the suspected trolling event is indeed trolling, its parent should be the object of its trolling and would have a say about it. We recognize that this limited amount of information is not always sufficient to recover the original message conveyed by all of the participants in the snippet, and additional context would be beneficial. However, the trade off is that snippets like this allow us to make use of Amazon Mechanical Turk (AMT) to have the dataset annotated, because it is not a big burden for a “turker” to work on an individual snippet in exchange for a small pay, and expedites the annotation process by distributing it over dozens of people. Specifically, for each snippet, we requested three annotators to label the four aspects previously described. Before annotating, we set up a qualification test along with borderline examples to guide them in process and align them with our criteria. The qualification test turned out to be very selective since only 5% of all of the turkers that attempted it passed the exam. Our dataset consists of 1000 conversations with 5868 sentences and 71033 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF24 in the column “Size”. Inter-Annotator Agreement. Due to the subjective nature of the task we did not expected perfect agreement. However, we obtained substantial inter-annotator agreement as we measured the fleiss-kappa statistic BIBREF3 for each of the trolling aspects: Intention: 0.578, Intention Disclosure: 0.556, Interpretation: 0.731 and Response 0.632. After inspecting the dataset, we manually reconciled aspects of the threads that found no majority on the turkers annotation and verified and corrected consistency on the four tasks on each thread. Trolling Events Prediction In this section we propose to solve the following problem: given a comment in a conversation, suspected to a trolling event, it's parent comment and all it's direct responses, we aim to predict the suspected comment I: intention, its D: intention disclosure and from the responses point of view, for each response comment the R: interpretation of the suspected troll comment's intentions, and identify its B: response strategy. This problem can be seen as a multi-task prediction. To do so, we split the dataset into training and testing sets using a 5-fold cross validation setup. 
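For reference, the Fleiss' kappa statistic reported above can be computed in a few lines; the sketch below is self-contained, and the toy rating matrix (three annotators per snippet, three categories) is only an illustrative example, not the paper's data.

```python
def fleiss_kappa(ratings):
    """ratings: one list per item giving how many annotators chose each category;
    every item must have been rated by the same number of annotators."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    # Overall proportion of assignments falling in each category.
    p_cat = [sum(item[c] for item in ratings) / (n_items * n_raters)
             for c in range(n_categories)]
    # Per-item observed agreement.
    p_item = [(sum(c * c for c in item) - n_raters) / (n_raters * (n_raters - 1))
              for item in ratings]
    p_bar = sum(p_item) / n_items          # mean observed agreement
    p_e = sum(p * p for p in p_cat)        # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 4 snippets, 3 annotators, 3 categories (e.g. trolling / playing / none).
example = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [1, 1, 1]]
print(round(fleiss_kappa(example), 3))
```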
Feature Set For prediction we define two sets of features, a basic and an enhanced dataset, extracted from each of the comments in the dataset. The features are described below. N-gram features. We encode each unigram and bigram collected from the training comments a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF2 . To extract these features we used the most current version of the Stanford CoreNLP BIBREF4 . Each token's Lemmas as in BIBREF5 as a binary feature. Harmful Vocabulary. In their research on bullying BIBREF6 identified a small set of words that are highly offensive. We encode them as well as binary features. Emotions Synsets. As in BIBREF5 we extracted all lemmas associated with each of Synsets extracted from WordNet BIBREF7 from these emotions: anger, embarrassment, empathy, fear, pride, relief and sadness. As well all the synonyms from these emotions extracted from the dictionary. Also, Emoticons. Reddit's comments make extensive use of emoticons, we argue that some emoticons are specially used in trolling events and to express a variety of emotions, which we hypothesize would be useful to identify a comments intention, interpretation and response. For that we use the emoticon dictionary BIBREF8 and we set a binary feature for each emoticon that is found in the dictionary. Sentiment Polarity. Using a similar idea, we hypothesize that the overall comment emotion would be useful to identify the response and intention in a trolling event. So, we apply the Vader Sentiment Polarity Analyzer BIBREF9 and include a four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value. Subjectivity Lexicon. From the MPQA Subjective Lexicon BIBREF10 we include all tokens that are found in the lexicon as binary features. This lexicon was created from a news domains, so the words in it don't necessarily align with the informal vocabulary used in Reddit, but, there are serious Reddit users that use proper language and formal constructions. We believe that these features will allow us to discriminate formal comments from being potentially labeled as trolling events, which tend to be vulgar. Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature for each word or short phrase in a comment if it appears in the swearing dictionary. Framenet. Following BIBREF11 use of FrameNet, we apply the Semaphore Parser BIBREF12 to each sentence in every comment in the training set, and construct three different binary features: every frame name that is present in the sentence, the frame name a the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We argue that some frames are especially interesting from the trolling perspective. For example, the frame “Deception_success” precisely models one of the trolling models, and we argue that these features will be particularly to identify trolling events in which semantic and not just syntactic information is necessary. Politeness Queues. BIBREF13 identified queues that signal polite and impolite interactions among groups of people collaborating online. 
Based on our observations of trolling examples, it is clear that flaming trolls and engaged or emotional responses tend to use impolite queues, whereas neutralizing and frustrating responses to a troll avoid confrontation and their vocabulary tends to be more polite. We therefore use these queues as binary features whenever they appear in the comments under consideration. Baseline System The most naïve approach is to treat each of the four tasks as an independent classification problem. Such a system would be deprived of the information from the other tasks, which, as we have argued, is necessary to correctly predict the response strategy. Instead, our baseline follows a pipeline approach over the task order I, D, R and B, so that the feature set of each subsequent subtask is extended with one feature for each previously predicted subtask. We argue that this setup is a competitive baseline, as can be verified in the results table TABREF24 . For the classifier in the pipeline approach we choose a log-linear model, a logistic regression classifier. In addition to logistic regression, we tried its generative counterpart, naïve Bayes, and a max-margin classifier, a support vector machine, but their performance was not superior to that of logistic regression. It is worth noting that the feature set used for intention prediction combines the feature sets of the suspected troll comment and of its parent. We do this in all of our experiments so that the learner can take advantage of the conversation context. Joint Models The nature of this problem makes a joint model a logical choice. Among the different options for joint inference, we choose a (conditional) probabilistic graphical model (henceforth PGM) BIBREF15 because, in contrast to ILP formulations, it is able to learn parameters rather than merely impose hard constraints. Also, compared to Markov Logic Networks BIBREF16 , a relatively recent formulation that combines logic and Markov random fields, PGMs have proved more scalable in practice, even though inference in general models is intractable. Finally, a PGM also allows us to directly compare the strength of joint inference with the baseline, because our model is essentially a collection of logistic regressors trained simultaneously. A conditional random field factorizes the conditional probability distribution over all possible values of the query variables in the model, given a set of observations, as in equation EQREF22 . In our model, the query variables are the four tasks we wish to predict, INLINEFORM0 , and the observations are their combined feature sets INLINEFORM1 . Each factor INLINEFORM2 in this distribution is a log-linear model as in equation EQREF23 and represents the probability distribution of its clique of variables INLINEFORM3 , given the observation set INLINEFORM4 . This is identical to the independent logistic regression model described in the baseline, except that all variables or tasks are considered at the same time. To do so, we add factors that connect task variables to one another, permitting information to flow from one task to the others. Specifically, our model represents each task with a random variable, shown as circles in figure FIGREF15 (left).
The plate notation that surrounds variables INLINEFORM0 and INLINEFORM1 indicates that there will be as many variables INLINEFORM2 and INLINEFORM3 , and edges connecting them to INLINEFORM4 , as there are responses in the problem snippet. The edges connecting INLINEFORM5 and INLINEFORM6 with INLINEFORM7 model the influence of these two variables on the response and how this information is passed along to the response strategy variable INLINEFORM8 . Figure FIGREF15 (right) explicitly represents the cliques in the underlying factor graph. We can see that there are unary factors, INLINEFORM9 , INLINEFORM10 , INLINEFORM11 and INLINEFORM12 , that model the influence of the observation features on their associated variables, just as the logistic regression model does. Factor INLINEFORM13 models the interaction between variables INLINEFORM14 and INLINEFORM15 , INLINEFORM16 the interaction between variables INLINEFORM17 and INLINEFORM18 , and INLINEFORM19 the interaction between variables INLINEFORM20 and INLINEFORM21 , each using a log-linear model over the possible values of the pair of variables in that particular clique. Because the model is small, we are able to perform exact inference at both training and test time. For parameter learning we employ the limited-memory L-BFGS optimizer BIBREF17 , providing the cost function and gradient based on the equations described in BIBREF18 . 2 pass Model A hybrid model that we experiment with performs joint inference on three tasks: I: intention, D: intention disclosure and R: responders' intention interpretation. The remaining task, B: response strategy, is predicted in a second step, taking the other three tasks as input. We do so because we observed in our experiments that the close coupling between the first three tasks allows them to perform better independently of the response strategy, as we elaborate in the results section. DISPLAYFORM0 DISPLAYFORM1 Evaluation and Results We perform 5-fold cross validation on the dataset. We use the first fold to tune the parameters and the remaining four folds to report results. System performance is measured using precision, recall and F-1, as shown in table TABREF24 . The left side of the table reports results obtained with the basic feature set, while the right side does so for the enhanced feature set. To maintain consistency, folds are created based on the threads or snippets; for the baseline system, all instances in a given fold for the task in consideration are treated as independent of each other. In the table, rows show per-class performance for each of the tasks, indicated by a header with the task name. For the response strategy we present results only for class values that account for at least 5% of the total distribution, because the number of labeled instances for the remaining classes is statistically insignificant compared to the majority classes. Results Discussion From the results in table TABREF24 , we observe that the hybrid model significantly outperforms the baseline, by more than 20 points in intention and intention disclosure prediction. For the response strategy, it is clear that none of the systems offers satisfying results; this showcases the difficulty posed by such a large number of classes. Nonetheless, the hybrid model outperforms the fully joint model and the baseline on all but one of the response strategy classes. However, the differences are far less pronounced than in the other tasks.
It is surprising that the full joint model did not offer the best performance. One reason is that the intention, intention disclosure and interpretation tasks are hurt by the difficulty of learning parameters that maximize the response strategy; this last task drags the other three down in performance. Another reason is that the response strategy features are not informative enough to learn the correct concept, and, due to the joint inference process, all tasks take a hit. It is also counter-intuitive that the augmented feature set outperformed the basic one only on intention disclosure and interpretation, and only by a small margin. A likely explanation for this unexpected behavior is that the majority of the enhanced features are already represented in the basic feature set by means of the unigrams and bigrams, and the FrameNet and sentiment features are uninformative or redundant. Lastly, we observe that for the interpretation category, none of the systems was able to predict the “playing” class. This is due to the small number of instances annotated with that value, 1% of the entire dataset. We hypothesize that the annotators who labeled those instances, even when a majority agreed, incorrectly selected the playing category instead of the trolling class, and that, at the interpretation level, one can only expect to reliably differentiate between trolling and non-trolling. Related Work In this section, we discuss related work in the areas of trolling, bullying and politeness, as they intersect in their scope and at least partially address the problem presented in this work. BIBREF19 address the problem of identifying manipulation trolls in news community forums. The major difference from this work is that all of their predictions are based on meta-information such as the number of votes, dates, number of comments and so on. There is no NLP approach to the problem, and their task is limited to identifying trolls. BIBREF0 and BIBREF20 give a detailed description of trolls' personality, motivations, effects on the communities that trolls interfere with, and the criminal and psychological aspects of trolling. Their main focus is flaming trolls, but they offer no NLP insights and do not propose automated prediction tasks as in this work. In a network-oriented framework, BIBREF21 and BIBREF22 present a methodology to identify malicious individuals in a network based solely on the network's properties. Even though they present and evaluate a methodology, their focus is different from NLP. BIBREF23 proposes a method that involves NLP components, but fails to provide an evaluation of the system. Finally, BIBREF2 and BIBREF5 address bullying traces, that is, self-reported events in which individuals describe being part of bullying incidents, but their focus differs from trolling events and the interactions with other participants. Conclusion and Future Work In this paper we address the under-attended problem of trolling in Internet forums. We presented a comprehensive categorization of trolling events and defined prediction tasks that consider trolling not only from the troll's perspective but also from that of the responders to the troll's comment. We also evaluated three different models and analyzed their successes and shortcomings. Finally, we provide an annotated dataset which we hope will be useful for the research community.
We look forward to investigating trolling phenomena in larger conversations, formalizing the concept of changing roles among the participants in trolling events, and improving response strategy performance.
Reddit
d0444cbf01efdcc247b313c7487120a2f047f421
d0444cbf01efdcc247b313c7487120a2f047f421_0
Q: Do the authors give examples of positive and negative sentiment with regard to the virus? Text: Introduction Catastrophic global circumstances have a pronounced effect on the lives of human beings across the world. The ramifications of such a scenario are experienced in diverse and multiplicative ways spanning routine tasks, media and news reports, detrimental physical and mental health, and also routine conversations. A similar footprint has been left by the global pandemic Coronavirus particularly since February 2020. The outbreak has not only created havoc in the economic conditions, physical health, working conditions, and manufacturing sector to name a few but has also created a niche in the minds of the people worldwide. It has had serious repercussions on the psychological state of the humans that is most evident now. One of the best possible mechanisms of capturing human emotions is to analyze the content they post on social media websites like Twitter and Facebook. Not to be surprised, social media is ablaze with myriad content on Coronavirus reflecting facts, fears, numbers, and the overall thoughts dominating the people's minds at this time. This paper is an effort towards analysis of the present textual content posted by people on social media from a statistical perspective. Two techniques have been deployed to undertake statistical interpretation of text messages posted on twitter; first being word frequency analysis and second sentiment analysis. A well known and profoundly researched as well as used statistical tool for quantitative linguistics is word frequency analysis. Determining word frequencies in any document gives a strong idea about the patterns of word used and the sentimental content of the text. The analysis can be carried out in computational as well as statistical settings. An investigation of the probability distribution of word frequencies extracted from the Twitter text messages posted by different users during the coronavirus outbreak in 2020 has been presented. Power law has been shown to capture patterns in the text analysis BIBREF0, BIBREF1. Sentiment analysis is a technique to gauge the sentimental content of a writing. It can help understand attitudes in a text related to a particular subject. Sentiment analysis is a highly intriguing field of research that can assist in inferring the emotional content of a text. Sentiment analysis has been performed on two datasets, one pertaining to tweets by World Health Organization (WHO) and the other tweets with 1000 retweets. The sentimental quotient from the tweets has been deduced by computing the positive and negative polarities from the messages. The paper has been structured as follows. The next section presents a brief overview of some work in the area of word frequency analysis and emergence of power law. Section 3 details the analysis of the twitter data. Section 4 provides a discussion on the obtained results. Section 5 provides a discussion on the scope of sentiment analysis and outlines the methodology of sentiment analysis adopted in this paper. Section 6 presents the results of the sentiment analysis performed on the two datasets mentioned above. The final section concludes the paper. Word frequency analysis and power law Several researchers have devised statistical and mathematical techniques to analyze literary artifacts. A substantially significant approach among these is inferring the pattern of frequency distributions of the words in the text BIBREF2. 
Zipf's law is the one most commonly observed in word frequency distributions BIBREF3, BIBREF4. The law essentially states that, for the word rank vector $x$, the word frequency distribution $\nu $ varies as an inverse power of $x$. Other prevalent distributions include Zipf–Mandelbrot BIBREF5, lognormal BIBREF6, BIBREF7, and Gauss–Poisson BIBREF6. Such studies have been conducted for several languages such as Chinese BIBREF8, Japanese BIBREF9, Hindi BIBREF10 and many others BIBREF2. Not only single-word frequencies but also multi-word frequencies have been explored extensively. One example is BIBREF11, wherein bigram and trigram frequencies and versatilities were analyzed and 577 different bigrams and 6,140 different trigrams were reported. A well-known distribution is the power law. This “non-normal" distribution has been a subject of immense interest in the academic community due to its unique nature. The right-skewed distribution is mathematically represented as $f(x) = a\,x^{b}$, where $a$ is a constant and $b$ is the scaling or exponent parameter. The power law has been deployed in various studies. In BIBREF12, the authors explicitly focus on the presence of power laws in social networks and use this property to create a degree threshold-based similarity measure that can help in link prediction. In an attempt to model self-similar computer network traffic, the authors of BIBREF13 claim that fragmentation of the data into Ethernet frames leads the power spectrum of the departure process to mimic a power law similar to that of the observed traffic; they also state that the power law was independent of the input process. It was shown in BIBREF14 that internet topologies can also be modelled by a power law. While investigating the presence of power laws in information retrieval data, the authors of BIBREF15 showed that query frequency and 5 out of 24 term frequency distributions could be best fit by a power law. The power law has found immense application in different domains. In this paper, we intend to use it to model the word frequencies of twitter messages posted during this time. Statistical analysis of tweets In this section, we present the details of the analysis performed on data pertaining to Twitter messages from January 2020 up to now, that is, the time since news of the Coronavirus outbreak in China spread across nations. The word frequency data corresponding to the twitter messages has been taken from BIBREF16. The data source indicates that from March 11th to March 30th there were over 4 million tweets a day as awareness surged. The data predominantly captures tweets in the English, Spanish, and French languages. A total of four datasets have been used to carry out the study. The first is the data on Twitter id evolution, reflecting the number of tweets, and the other three are the unigram, bigram and trigram word frequencies. In the following subsections the analysis of each is presented. Statistical analysis of tweets ::: Twitter id evolution First, we consider the data corresponding to the number of twitter ids tweeting about coronavirus at a particular time. Fig. FIGREF1 depicts the pattern of the twitter id evolution. A couple of peaks can be observed in its evolution in the months of February and March. Statistical analysis of tweets ::: Unigram, Bigram and Trigram word frequency analysis Three forms of word tokens have been considered for the study, viz. unigram, bigram and trigram.
These represent the frequencies of one word, two words together and finally three words coupled. The dataset provides the top 1000 unigrams, top 1000 bigrams and the top 1000 trigrams. Fig. FIGREF3 gives the visualization of the word cloud for unigrams. It can be seen that Coronavirus was the most frequent word. Fig. FIGREF6, FIGREF7, and FIGREF8 depict plots for the Rank or Index vs frequency distributions for unigram, bigram and trigram respectively. The graphical visualization well demonstrates that the nature or the pattern of the data follows power law. The power law distribution can be seen to closely fit the data. The exponents in case of unigram and bigrams are -1.273 and -1.375 respectively while for trigram it gives a smaller value of -0.5266. The computed parameters are reported in Table TABREF9. Also, heavy tails are observed specifically in the case of unigrams and bigrams. A good fit by power law connotes a difference in the behaviour of tweet messages when compared to literary documents like novels and poems which are replicated by Zipf's law with exponents close to 1. In the following section, we investigate the goodness of fit of our proposed model using measures: SSE, $R^2$ and RMSE. Evaluating the Goodness of fit of the model The quality of fit of the data using power law distribution has been evaluated using three goodness of fit metrics: SSE, $R^2$ and RMSE. The value obtained for the three datasets with the three forms of token of words has been shown in Table TABREF9. We obtain a high value of $R^2$ in all the three cases: unigram (0.9172), bigram (0.8718) and trigram (0.9461). Also, the values of SSE and RMSE obtained in all the three cases is quite low. This confirms that power law provides a good model for the frequency data of the tweet messages. Sentiment Analysis of Twitter Messages Sentiment analysis is a fast growing field due to its capacity to interpret emotional quotient of a text.It has often been defined as a computational study that relates to people's attitudes, emotions and opinions towards a subject BIBREF17. The key intent of sentiment analysis is to gauge opinions, identify hidden sentiments and finally to classify their polarity into positive, negative or neutral. Sentiment analysis has been immensely used in customer reviews BIBREF18, news and blogs BIBREF19, and stock market BIBREF20 to name a few. Several methods have been deployed for sentiment analysis including Support Vector Machines BIBREF21, Naive Bayes BIBREF22, and Artificial Neural Networks BIBREF23. There have also been several papers that have provided algorithms for sentiment analysis on twitter posts BIBREF24,BIBREF25, BIBREF26, BIBREF27. In the present work, we use the Python built-in package TextBlob BIBREF28 to perform sentiment analysis of tweets pertaining to the coronavirus outbreak. The analysis has been conducted on two datasets: one corresponds to the tweets made by WHO and the other is the one that contains tweets that have been retweeted more than 1000 times. The polarity values of individual tweets have been computed. The interpretation of these values is as follows: polarity $>$ 0 implies positive; polarity $<$ 0 implies negative; polarity $=$ 0 implies neutral. The polarities range between -1 to 1. These polarity values have then been plotted in a histogram to highlight the overall sentiment (i.e. more positivity or negativity). The plots have been given in Fig. FIGREF11, FIGREF12, FIGREF13, and FIGREF14. 
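A minimal sketch of this polarity bucketing is shown below, using TextBlob's sentiment.polarity attribute and the same >0 / <0 / =0 rule stated above; the tweets argument is a hypothetical placeholder for the WHO or 1000-retweet datasets, and the histogram step is left as a comment.

```python
from textblob import TextBlob

def classify_tweets(tweets):
    """Bucket tweets into positive / negative / neutral by TextBlob polarity."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    polarities = []
    for text in tweets:
        p = TextBlob(text).sentiment.polarity   # value in [-1, 1]
        polarities.append(p)
        if p > 0:
            counts["positive"] += 1
        elif p < 0:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    total = max(len(tweets), 1)
    shares = {k: 100.0 * v / total for k, v in counts.items()}
    return polarities, shares

# Hypothetical usage:
# polarities, shares = classify_tweets(who_tweets)
# print(shares)
# A histogram of `polarities` (e.g. matplotlib.pyplot.hist) gives the plots described above.
```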
Table presents the results of the percentage of positive, negative and neutral tweets based on the polarities in the dataset. The following section outlines an analysis of the results obtained. Results of Sentiment Analysis of Twitter Messages In this section, we provide a detailed discussion of the results obtained from the sentiment analysis of the two datasets. Fig. FIGREF11 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by general public. It can be seen that majority of the tweets have a neutral sentiment followed by positive. The same can be inferred from Table TABREF10 that shows that around 54$\%$ tweets are neutral, 29$\%$ positive and a mere 15$\%$ is negative. Fig. FIGREF12 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by WHO. It can be seen that majority of the tweets have a neutral and positive sentiment. Table TABREF10 that shows that around 60$\%$ tweets are positive, 24$\%$ neutral and a mere 15$\%$ is negative. This shows how WHO is trying to retain the positive spirit through its social media accounts. Fig. FIGREF13 and FIGREF14 represent the histograms produced by removing the neutral tweets. It readily reiterates that the positive emotions in the tweets are higher than negative ones. This is a great sign that indicates that the humans are still focusing more on the positive and sharing the same light through their messages. Conclusion The paper is an effort towards deriving statistical characterization of tweet messages posted during the Coronavirus outbreak. The spread of the disease has created an environment of threat, risks and uncertainty among the population globally. This response can be gauged from the immensely high number of tweet messages corresponding to the outbreak in a short duration of 2-3 months. In the present work, data related to tweets made since January 2020 have been analyzed with two major perspectives: one understanding the word frequency pattern and the second sentiment analysis. The study has resulted in the following observations. The number of twitter ids tweeting about Coronavius has increased rapidly with several peaks during February and March. An empirical analysis of words in the messages was made by determining their frequencies of occurrence. The most frequent words were Coronavirus, Covid19 and Wuhan. Unigram, bigram and trigram frequencies (the top 1000) were modeled. It was seen that all of them followed the rightly skewed power law distribution with quite heavy tails in the first two cases. The exponential parameters for the same were determined to be -1.273 for unigram, -1.375 for bigram and -0.5266 for trigram. The plots for the same have been depicted. The goodness of fit for the model was determined using SSE, $R^2$ and RMSE. The results were found to be satisfactorily good. The model can be used to determine the pattern of the words used during this time. The sentiment analysis was performed on tweet messages by general public and WHO. The polarities of the individual messages were determined and a histogram of the same was plotted. It could be observed that most of the messages belonged to the neutral and positive categories. With WHO messages, $60\%$ of the messages were found to be positive and with general messages, $29\%$ were found to be positive and $54\%$ neutral. In both the cases, just about $15\%$ messages were of negative sentiment. The results obtained are a great reflection of the sentiments expressed worldwide during this pandemic.
No
1f6666c2c1d1d5f66208a6fa7da3b3442a577dbc
1f6666c2c1d1d5f66208a6fa7da3b3442a577dbc_0
Q: Which word frequencies reflect on the psychology of the twitter users, according to the authors? Text: Introduction Catastrophic global circumstances have a pronounced effect on the lives of human beings across the world. The ramifications of such a scenario are experienced in diverse and multiplicative ways spanning routine tasks, media and news reports, detrimental physical and mental health, and also routine conversations. A similar footprint has been left by the global pandemic Coronavirus particularly since February 2020. The outbreak has not only created havoc in the economic conditions, physical health, working conditions, and manufacturing sector to name a few but has also created a niche in the minds of the people worldwide. It has had serious repercussions on the psychological state of the humans that is most evident now. One of the best possible mechanisms of capturing human emotions is to analyze the content they post on social media websites like Twitter and Facebook. Not to be surprised, social media is ablaze with myriad content on Coronavirus reflecting facts, fears, numbers, and the overall thoughts dominating the people's minds at this time. This paper is an effort towards analysis of the present textual content posted by people on social media from a statistical perspective. Two techniques have been deployed to undertake statistical interpretation of text messages posted on twitter; first being word frequency analysis and second sentiment analysis. A well known and profoundly researched as well as used statistical tool for quantitative linguistics is word frequency analysis. Determining word frequencies in any document gives a strong idea about the patterns of word used and the sentimental content of the text. The analysis can be carried out in computational as well as statistical settings. An investigation of the probability distribution of word frequencies extracted from the Twitter text messages posted by different users during the coronavirus outbreak in 2020 has been presented. Power law has been shown to capture patterns in the text analysis BIBREF0, BIBREF1. Sentiment analysis is a technique to gauge the sentimental content of a writing. It can help understand attitudes in a text related to a particular subject. Sentiment analysis is a highly intriguing field of research that can assist in inferring the emotional content of a text. Sentiment analysis has been performed on two datasets, one pertaining to tweets by World Health Organization (WHO) and the other tweets with 1000 retweets. The sentimental quotient from the tweets has been deduced by computing the positive and negative polarities from the messages. The paper has been structured as follows. The next section presents a brief overview of some work in the area of word frequency analysis and emergence of power law. Section 3 details the analysis of the twitter data. Section 4 provides a discussion on the obtained results. Section 5 provides a discussion on the scope of sentiment analysis and outlines the methodology of sentiment analysis adopted in this paper. Section 6 presents the results of the sentiment analysis performed on the two datasets mentioned above. The final section concludes the paper. Word frequency analysis and power law Several researchers have devised statistical and mathematical techniques to analyze literary artifacts. A substantially significant approach among these is inferring the pattern of frequency distributions of the words in the text BIBREF2. 
Zipf's law is the one most commonly observed in word frequency distributions BIBREF3, BIBREF4. The law essentially states that, for the word rank vector $x$, the word frequency distribution $\nu $ varies as an inverse power of $x$. Other prevalent distributions include Zipf–Mandelbrot BIBREF5, lognormal BIBREF6, BIBREF7, and Gauss–Poisson BIBREF6. Such studies have been conducted for several languages such as Chinese BIBREF8, Japanese BIBREF9, Hindi BIBREF10 and many others BIBREF2. Not only single-word frequencies but also multi-word frequencies have been explored extensively. One example is BIBREF11, wherein bigram and trigram frequencies and versatilities were analyzed and 577 different bigrams and 6,140 different trigrams were reported. A well-known distribution is the power law. This “non-normal" distribution has been a subject of immense interest in the academic community due to its unique nature. The right-skewed distribution is mathematically represented as $f(x) = a\,x^{b}$, where $a$ is a constant and $b$ is the scaling or exponent parameter. The power law has been deployed in various studies. In BIBREF12, the authors explicitly focus on the presence of power laws in social networks and use this property to create a degree threshold-based similarity measure that can help in link prediction. In an attempt to model self-similar computer network traffic, the authors of BIBREF13 claim that fragmentation of the data into Ethernet frames leads the power spectrum of the departure process to mimic a power law similar to that of the observed traffic; they also state that the power law was independent of the input process. It was shown in BIBREF14 that internet topologies can also be modelled by a power law. While investigating the presence of power laws in information retrieval data, the authors of BIBREF15 showed that query frequency and 5 out of 24 term frequency distributions could be best fit by a power law. The power law has found immense application in different domains. In this paper, we intend to use it to model the word frequencies of twitter messages posted during this time. Statistical analysis of tweets In this section, we present the details of the analysis performed on data pertaining to Twitter messages from January 2020 up to now, that is, the time since news of the Coronavirus outbreak in China spread across nations. The word frequency data corresponding to the twitter messages has been taken from BIBREF16. The data source indicates that from March 11th to March 30th there were over 4 million tweets a day as awareness surged. The data predominantly captures tweets in the English, Spanish, and French languages. A total of four datasets have been used to carry out the study. The first is the data on Twitter id evolution, reflecting the number of tweets, and the other three are the unigram, bigram and trigram word frequencies. In the following subsections the analysis of each is presented. Statistical analysis of tweets ::: Twitter id evolution First, we consider the data corresponding to the number of twitter ids tweeting about coronavirus at a particular time. Fig. FIGREF1 depicts the pattern of the twitter id evolution. A couple of peaks can be observed in its evolution in the months of February and March. Statistical analysis of tweets ::: Unigram, Bigram and Trigram word frequency analysis Three forms of word tokens have been considered for the study, viz. unigram, bigram and trigram.
These represent the frequencies of one word, two words together and finally three words coupled. The dataset provides the top 1000 unigrams, top 1000 bigrams and the top 1000 trigrams. Fig. FIGREF3 gives the visualization of the word cloud for unigrams. It can be seen that Coronavirus was the most frequent word. Fig. FIGREF6, FIGREF7, and FIGREF8 depict plots for the Rank or Index vs frequency distributions for unigram, bigram and trigram respectively. The graphical visualization well demonstrates that the nature or the pattern of the data follows power law. The power law distribution can be seen to closely fit the data. The exponents in case of unigram and bigrams are -1.273 and -1.375 respectively while for trigram it gives a smaller value of -0.5266. The computed parameters are reported in Table TABREF9. Also, heavy tails are observed specifically in the case of unigrams and bigrams. A good fit by power law connotes a difference in the behaviour of tweet messages when compared to literary documents like novels and poems which are replicated by Zipf's law with exponents close to 1. In the following section, we investigate the goodness of fit of our proposed model using measures: SSE, $R^2$ and RMSE. Evaluating the Goodness of fit of the model The quality of fit of the data using power law distribution has been evaluated using three goodness of fit metrics: SSE, $R^2$ and RMSE. The value obtained for the three datasets with the three forms of token of words has been shown in Table TABREF9. We obtain a high value of $R^2$ in all the three cases: unigram (0.9172), bigram (0.8718) and trigram (0.9461). Also, the values of SSE and RMSE obtained in all the three cases is quite low. This confirms that power law provides a good model for the frequency data of the tweet messages. Sentiment Analysis of Twitter Messages Sentiment analysis is a fast growing field due to its capacity to interpret emotional quotient of a text.It has often been defined as a computational study that relates to people's attitudes, emotions and opinions towards a subject BIBREF17. The key intent of sentiment analysis is to gauge opinions, identify hidden sentiments and finally to classify their polarity into positive, negative or neutral. Sentiment analysis has been immensely used in customer reviews BIBREF18, news and blogs BIBREF19, and stock market BIBREF20 to name a few. Several methods have been deployed for sentiment analysis including Support Vector Machines BIBREF21, Naive Bayes BIBREF22, and Artificial Neural Networks BIBREF23. There have also been several papers that have provided algorithms for sentiment analysis on twitter posts BIBREF24,BIBREF25, BIBREF26, BIBREF27. In the present work, we use the Python built-in package TextBlob BIBREF28 to perform sentiment analysis of tweets pertaining to the coronavirus outbreak. The analysis has been conducted on two datasets: one corresponds to the tweets made by WHO and the other is the one that contains tweets that have been retweeted more than 1000 times. The polarity values of individual tweets have been computed. The interpretation of these values is as follows: polarity $>$ 0 implies positive; polarity $<$ 0 implies negative; polarity $=$ 0 implies neutral. The polarities range between -1 to 1. These polarity values have then been plotted in a histogram to highlight the overall sentiment (i.e. more positivity or negativity). The plots have been given in Fig. FIGREF11, FIGREF12, FIGREF13, and FIGREF14. 
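To illustrate the fitting and goodness-of-fit computations reported above, the sketch below fits $f(x) = a\,x^{b}$ to a rank–frequency list with scipy.optimize.curve_fit and then computes SSE, RMSE and $R^2$. The freqs input is a hypothetical stand-in for the top-1000 n-gram counts, and this is not necessarily the exact fitting procedure the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

def fit_power_law(freqs):
    """Fit f(x) = a * x**b to a frequency list, ranked in descending order."""
    freqs = np.asarray(sorted(freqs, reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    (a, b), _ = curve_fit(power_law, ranks, freqs, p0=(freqs[0], -1.0), maxfev=10000)
    fitted = power_law(ranks, a, b)
    residuals = freqs - fitted
    sse = float(np.sum(residuals ** 2))
    rmse = float(np.sqrt(sse / len(freqs)))
    ss_tot = float(np.sum((freqs - freqs.mean()) ** 2))
    r2 = 1.0 - sse / ss_tot
    return {"a": a, "b": b, "SSE": sse, "RMSE": rmse, "R2": r2}

# Hypothetical usage with unigram counts:
# print(fit_power_law(unigram_counts))
```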
Table presents the results of the percentage of positive, negative and neutral tweets based on the polarities in the dataset. The following section outlines an analysis of the results obtained. Results of Sentiment Analysis of Twitter Messages In this section, we provide a detailed discussion of the results obtained from the sentiment analysis of the two datasets. Fig. FIGREF11 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by general public. It can be seen that majority of the tweets have a neutral sentiment followed by positive. The same can be inferred from Table TABREF10 that shows that around 54$\%$ tweets are neutral, 29$\%$ positive and a mere 15$\%$ is negative. Fig. FIGREF12 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by WHO. It can be seen that majority of the tweets have a neutral and positive sentiment. Table TABREF10 that shows that around 60$\%$ tweets are positive, 24$\%$ neutral and a mere 15$\%$ is negative. This shows how WHO is trying to retain the positive spirit through its social media accounts. Fig. FIGREF13 and FIGREF14 represent the histograms produced by removing the neutral tweets. It readily reiterates that the positive emotions in the tweets are higher than negative ones. This is a great sign that indicates that the humans are still focusing more on the positive and sharing the same light through their messages. Conclusion The paper is an effort towards deriving statistical characterization of tweet messages posted during the Coronavirus outbreak. The spread of the disease has created an environment of threat, risks and uncertainty among the population globally. This response can be gauged from the immensely high number of tweet messages corresponding to the outbreak in a short duration of 2-3 months. In the present work, data related to tweets made since January 2020 have been analyzed with two major perspectives: one understanding the word frequency pattern and the second sentiment analysis. The study has resulted in the following observations. The number of twitter ids tweeting about Coronavius has increased rapidly with several peaks during February and March. An empirical analysis of words in the messages was made by determining their frequencies of occurrence. The most frequent words were Coronavirus, Covid19 and Wuhan. Unigram, bigram and trigram frequencies (the top 1000) were modeled. It was seen that all of them followed the rightly skewed power law distribution with quite heavy tails in the first two cases. The exponential parameters for the same were determined to be -1.273 for unigram, -1.375 for bigram and -0.5266 for trigram. The plots for the same have been depicted. The goodness of fit for the model was determined using SSE, $R^2$ and RMSE. The results were found to be satisfactorily good. The model can be used to determine the pattern of the words used during this time. The sentiment analysis was performed on tweet messages by general public and WHO. The polarities of the individual messages were determined and a histogram of the same was plotted. It could be observed that most of the messages belonged to the neutral and positive categories. With WHO messages, $60\%$ of the messages were found to be positive and with general messages, $29\%$ were found to be positive and $54\%$ neutral. In both the cases, just about $15\%$ messages were of negative sentiment. The results obtained are a great reflection of the sentiments expressed worldwide during this pandemic.
unigram, bigram and trigram
a78a6fd6ca36413586836838e38f3fa9282646ee
a78a6fd6ca36413586836838e38f3fa9282646ee_0
Q: Do they specify which countries they collected twitter data from? Text: Introduction Catastrophic global circumstances have a pronounced effect on the lives of human beings across the world. The ramifications of such a scenario are experienced in diverse and multiplicative ways spanning routine tasks, media and news reports, detrimental physical and mental health, and also routine conversations. A similar footprint has been left by the global pandemic Coronavirus particularly since February 2020. The outbreak has not only created havoc in the economic conditions, physical health, working conditions, and manufacturing sector to name a few but has also created a niche in the minds of the people worldwide. It has had serious repercussions on the psychological state of the humans that is most evident now. One of the best possible mechanisms of capturing human emotions is to analyze the content they post on social media websites like Twitter and Facebook. Not to be surprised, social media is ablaze with myriad content on Coronavirus reflecting facts, fears, numbers, and the overall thoughts dominating the people's minds at this time. This paper is an effort towards analysis of the present textual content posted by people on social media from a statistical perspective. Two techniques have been deployed to undertake statistical interpretation of text messages posted on twitter; first being word frequency analysis and second sentiment analysis. A well known and profoundly researched as well as used statistical tool for quantitative linguistics is word frequency analysis. Determining word frequencies in any document gives a strong idea about the patterns of word used and the sentimental content of the text. The analysis can be carried out in computational as well as statistical settings. An investigation of the probability distribution of word frequencies extracted from the Twitter text messages posted by different users during the coronavirus outbreak in 2020 has been presented. Power law has been shown to capture patterns in the text analysis BIBREF0, BIBREF1. Sentiment analysis is a technique to gauge the sentimental content of a writing. It can help understand attitudes in a text related to a particular subject. Sentiment analysis is a highly intriguing field of research that can assist in inferring the emotional content of a text. Sentiment analysis has been performed on two datasets, one pertaining to tweets by World Health Organization (WHO) and the other tweets with 1000 retweets. The sentimental quotient from the tweets has been deduced by computing the positive and negative polarities from the messages. The paper has been structured as follows. The next section presents a brief overview of some work in the area of word frequency analysis and emergence of power law. Section 3 details the analysis of the twitter data. Section 4 provides a discussion on the obtained results. Section 5 provides a discussion on the scope of sentiment analysis and outlines the methodology of sentiment analysis adopted in this paper. Section 6 presents the results of the sentiment analysis performed on the two datasets mentioned above. The final section concludes the paper. Word frequency analysis and power law Several researchers have devised statistical and mathematical techniques to analyze literary artifacts. A substantially significant approach among these is inferring the pattern of frequency distributions of the words in the text BIBREF2. 
Zipf's law is mostly immanent in word frequency distributions BIBREF3, BIBREF4. The law essentially proclaims that for the words' vector $x$, the word frequency distribution $\nu $ varies as an inverse power of $x$. Some other distributions that are prevalent include Zipf–Mandelbrot BIBREF5, lognormal BIBREF6, BIBREF7, and Gauss–Poisson BIBREF6. Such studies have been conducted for several languages such as Chinese BIBREF8, Japanese BIBREF9, Hindi BIBREF10 and many others BIBREF2. Not only single word frequencies, but also multi-word frequencies have been exhaustively explored. One of the examples is BIBREF11 wherein bigram and trigram frequencies and versatilities were analyzed and 577 different bigrams and 6,140 different trigrams were reported. A well known distribution is the power law. This “non-normal" distribution has been a subject of immense interest in the academic community due to its unique nature. The rightly skewed distribution is mathematically represented as follows: where a is a constant and b is the scaling or exponential parameter. Power law has been deployed in various studies. In BIBREF12, the authors explicitly focus on the presence of power law in social networks and use this property to create a degree threshold-based similarity measure that can help in link prediction. In an attempt to model the self similar computer network traffic, the authors claim that during fragmentation of the data into Ethernet frames leads, the power spectrum of the departure process mimics power law similar to that of observed traffic. They also state that the power law was independent of the input process BIBREF13. It was shown in BIBREF14 that internet topologies also can be modelled by power law. While investigating the presence of power laws in information retrieval data, the authors showed that query frequency and 5 out of 24 term frequency distributions could be best fit by a power law BIBREF15. Power law has found immense applications in different domains. In this paper, we intend to use it to model the word frequencies of twitter messages posted during this time. Statistical analysis of tweets In this section, we present the details of the analysis performed on the data obtained pertaining to Twitter messages from January 2020 upto now, that is the time since the news of the Coronavirus outbreak in China was spread across nations. The word frequency data corresponding to the twitter messages has been taken from BIBREF16. The data source indicates that during March 11th to March 30th there were over 4 million tweets a day with the surge in the awareness. Also, the data prominently captures the tweets in English, Spanish, and French languages. A total of four datasets have been used to carry out the study. The first is the data on Twitter Id evolution reflecting on number of tweets and the other three are unigram, bigram and trigram frequencies of words. In the following subsections analysis of each has been undertaken. Statistical analysis of tweets ::: Twitter id evolution First, we consider the data corresponding to the number of twitter ids tweeting about coronavirus at a particular time. Fig. FIGREF1 depicts the pattern of the twitter id evolution. A couple of peaks can be observed in its evolution in the months of February and March. Statistical analysis of tweets ::: Unigram, Bigram an Trigram word frequency analysis Three forms of tokens of words have been considered for the study viz. unigram, bigram and trigram. 
These represent the frequencies of one word, two words together and finally three words coupled. The dataset provides the top 1000 unigrams, top 1000 bigrams and the top 1000 trigrams. Fig. FIGREF3 gives the visualization of the word cloud for unigrams. It can be seen that Coronavirus was the most frequent word. Fig. FIGREF6, FIGREF7, and FIGREF8 depict plots for the Rank or Index vs frequency distributions for unigram, bigram and trigram respectively. The graphical visualization well demonstrates that the nature or the pattern of the data follows power law. The power law distribution can be seen to closely fit the data. The exponents in case of unigram and bigrams are -1.273 and -1.375 respectively while for trigram it gives a smaller value of -0.5266. The computed parameters are reported in Table TABREF9. Also, heavy tails are observed specifically in the case of unigrams and bigrams. A good fit by power law connotes a difference in the behaviour of tweet messages when compared to literary documents like novels and poems which are replicated by Zipf's law with exponents close to 1. In the following section, we investigate the goodness of fit of our proposed model using measures: SSE, $R^2$ and RMSE. Evaluating the Goodness of fit of the model The quality of fit of the data using power law distribution has been evaluated using three goodness of fit metrics: SSE, $R^2$ and RMSE. The value obtained for the three datasets with the three forms of token of words has been shown in Table TABREF9. We obtain a high value of $R^2$ in all the three cases: unigram (0.9172), bigram (0.8718) and trigram (0.9461). Also, the values of SSE and RMSE obtained in all the three cases is quite low. This confirms that power law provides a good model for the frequency data of the tweet messages. Sentiment Analysis of Twitter Messages Sentiment analysis is a fast growing field due to its capacity to interpret emotional quotient of a text.It has often been defined as a computational study that relates to people's attitudes, emotions and opinions towards a subject BIBREF17. The key intent of sentiment analysis is to gauge opinions, identify hidden sentiments and finally to classify their polarity into positive, negative or neutral. Sentiment analysis has been immensely used in customer reviews BIBREF18, news and blogs BIBREF19, and stock market BIBREF20 to name a few. Several methods have been deployed for sentiment analysis including Support Vector Machines BIBREF21, Naive Bayes BIBREF22, and Artificial Neural Networks BIBREF23. There have also been several papers that have provided algorithms for sentiment analysis on twitter posts BIBREF24,BIBREF25, BIBREF26, BIBREF27. In the present work, we use the Python built-in package TextBlob BIBREF28 to perform sentiment analysis of tweets pertaining to the coronavirus outbreak. The analysis has been conducted on two datasets: one corresponds to the tweets made by WHO and the other is the one that contains tweets that have been retweeted more than 1000 times. The polarity values of individual tweets have been computed. The interpretation of these values is as follows: polarity $>$ 0 implies positive; polarity $<$ 0 implies negative; polarity $=$ 0 implies neutral. The polarities range between -1 to 1. These polarity values have then been plotted in a histogram to highlight the overall sentiment (i.e. more positivity or negativity). The plots have been given in Fig. FIGREF11, FIGREF12, FIGREF13, and FIGREF14. 
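To make the fitting procedure above concrete, here is a minimal sketch — with synthetic rank-frequency data standing in for the top-1000 n-gram counts, since the actual counts are not reproduced here — of fitting the power law and computing the SSE, $R^2$ and RMSE goodness-of-fit measures:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic rank-frequency data standing in for the top-1000 unigram counts;
# the exponent used to generate it (-1.273) mirrors the value reported in the text.
rng = np.random.default_rng(0)
rank = np.arange(1, 1001, dtype=float)
freq = 50000 * rank ** -1.273 * rng.lognormal(0.0, 0.05, size=rank.size)

def power_law(x, a, b):
    # f(x) = a * x^b, with b expected to be negative (heavy-tailed, right-skewed)
    return a * np.power(x, b)

params, _ = curve_fit(power_law, rank, freq, p0=(freq[0], -1.0))
a, b = params
pred = power_law(rank, a, b)

# Goodness-of-fit metrics used in the paper: SSE, R^2 and RMSE.
sse = float(np.sum((freq - pred) ** 2))
sst = float(np.sum((freq - freq.mean()) ** 2))
r2 = 1.0 - sse / sst
rmse = float(np.sqrt(sse / freq.size))

print(f"a = {a:.1f}, exponent b = {b:.4f}, R^2 = {r2:.4f}, RMSE = {rmse:.2f}, SSE = {sse:.2f}")
```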
Table presents the results of the percentage of positive, negative and neutral tweets based on the polarities in the dataset. The following section outlines an analysis of the results obtained. Results of Sentiment Analysis of Twitter Messages In this section, we provide a detailed discussion of the results obtained from the sentiment analysis of the two datasets. Fig. FIGREF11 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by general public. It can be seen that majority of the tweets have a neutral sentiment followed by positive. The same can be inferred from Table TABREF10 that shows that around 54$\%$ tweets are neutral, 29$\%$ positive and a mere 15$\%$ is negative. Fig. FIGREF12 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by WHO. It can be seen that majority of the tweets have a neutral and positive sentiment. Table TABREF10 that shows that around 60$\%$ tweets are positive, 24$\%$ neutral and a mere 15$\%$ is negative. This shows how WHO is trying to retain the positive spirit through its social media accounts. Fig. FIGREF13 and FIGREF14 represent the histograms produced by removing the neutral tweets. It readily reiterates that the positive emotions in the tweets are higher than negative ones. This is a great sign that indicates that the humans are still focusing more on the positive and sharing the same light through their messages. Conclusion The paper is an effort towards deriving statistical characterization of tweet messages posted during the Coronavirus outbreak. The spread of the disease has created an environment of threat, risks and uncertainty among the population globally. This response can be gauged from the immensely high number of tweet messages corresponding to the outbreak in a short duration of 2-3 months. In the present work, data related to tweets made since January 2020 have been analyzed with two major perspectives: one understanding the word frequency pattern and the second sentiment analysis. The study has resulted in the following observations. The number of twitter ids tweeting about Coronavius has increased rapidly with several peaks during February and March. An empirical analysis of words in the messages was made by determining their frequencies of occurrence. The most frequent words were Coronavirus, Covid19 and Wuhan. Unigram, bigram and trigram frequencies (the top 1000) were modeled. It was seen that all of them followed the rightly skewed power law distribution with quite heavy tails in the first two cases. The exponential parameters for the same were determined to be -1.273 for unigram, -1.375 for bigram and -0.5266 for trigram. The plots for the same have been depicted. The goodness of fit for the model was determined using SSE, $R^2$ and RMSE. The results were found to be satisfactorily good. The model can be used to determine the pattern of the words used during this time. The sentiment analysis was performed on tweet messages by general public and WHO. The polarities of the individual messages were determined and a histogram of the same was plotted. It could be observed that most of the messages belonged to the neutral and positive categories. With WHO messages, $60\%$ of the messages were found to be positive and with general messages, $29\%$ were found to be positive and $54\%$ neutral. In both the cases, just about $15\%$ messages were of negative sentiment. The results obtained are a great reflection of the sentiments expressed worldwide during this pandemic.
No
c4a0c7b6f1a00f3233a5fe16240a98d9975701c0
c4a0c7b6f1a00f3233a5fe16240a98d9975701c0_0
Q: Do they collect only English data? Text: Introduction Catastrophic global circumstances have a pronounced effect on the lives of human beings across the world. The ramifications of such a scenario are experienced in diverse and multiplicative ways spanning routine tasks, media and news reports, detrimental physical and mental health, and also routine conversations. A similar footprint has been left by the global pandemic Coronavirus particularly since February 2020. The outbreak has not only created havoc in the economic conditions, physical health, working conditions, and manufacturing sector to name a few but has also created a niche in the minds of the people worldwide. It has had serious repercussions on the psychological state of the humans that is most evident now. One of the best possible mechanisms of capturing human emotions is to analyze the content they post on social media websites like Twitter and Facebook. Not to be surprised, social media is ablaze with myriad content on Coronavirus reflecting facts, fears, numbers, and the overall thoughts dominating the people's minds at this time. This paper is an effort towards analysis of the present textual content posted by people on social media from a statistical perspective. Two techniques have been deployed to undertake statistical interpretation of text messages posted on twitter; first being word frequency analysis and second sentiment analysis. A well known and profoundly researched as well as used statistical tool for quantitative linguistics is word frequency analysis. Determining word frequencies in any document gives a strong idea about the patterns of word used and the sentimental content of the text. The analysis can be carried out in computational as well as statistical settings. An investigation of the probability distribution of word frequencies extracted from the Twitter text messages posted by different users during the coronavirus outbreak in 2020 has been presented. Power law has been shown to capture patterns in the text analysis BIBREF0, BIBREF1. Sentiment analysis is a technique to gauge the sentimental content of a writing. It can help understand attitudes in a text related to a particular subject. Sentiment analysis is a highly intriguing field of research that can assist in inferring the emotional content of a text. Sentiment analysis has been performed on two datasets, one pertaining to tweets by World Health Organization (WHO) and the other tweets with 1000 retweets. The sentimental quotient from the tweets has been deduced by computing the positive and negative polarities from the messages. The paper has been structured as follows. The next section presents a brief overview of some work in the area of word frequency analysis and emergence of power law. Section 3 details the analysis of the twitter data. Section 4 provides a discussion on the obtained results. Section 5 provides a discussion on the scope of sentiment analysis and outlines the methodology of sentiment analysis adopted in this paper. Section 6 presents the results of the sentiment analysis performed on the two datasets mentioned above. The final section concludes the paper. Word frequency analysis and power law Several researchers have devised statistical and mathematical techniques to analyze literary artifacts. A substantially significant approach among these is inferring the pattern of frequency distributions of the words in the text BIBREF2. Zipf's law is mostly immanent in word frequency distributions BIBREF3, BIBREF4. 
The law essentially proclaims that for the words' vector $x$, the word frequency distribution $\nu $ varies as an inverse power of $x$. Some other distributions that are prevalent include Zipf–Mandelbrot BIBREF5, lognormal BIBREF6, BIBREF7, and Gauss–Poisson BIBREF6. Such studies have been conducted for several languages such as Chinese BIBREF8, Japanese BIBREF9, Hindi BIBREF10 and many others BIBREF2. Not only single word frequencies, but also multi-word frequencies have been exhaustively explored. One of the examples is BIBREF11 wherein bigram and trigram frequencies and versatilities were analyzed and 577 different bigrams and 6,140 different trigrams were reported. A well known distribution is the power law. This “non-normal" distribution has been a subject of immense interest in the academic community due to its unique nature. The rightly skewed distribution is mathematically represented as follows: where a is a constant and b is the scaling or exponential parameter. Power law has been deployed in various studies. In BIBREF12, the authors explicitly focus on the presence of power law in social networks and use this property to create a degree threshold-based similarity measure that can help in link prediction. In an attempt to model the self similar computer network traffic, the authors claim that during fragmentation of the data into Ethernet frames leads, the power spectrum of the departure process mimics power law similar to that of observed traffic. They also state that the power law was independent of the input process BIBREF13. It was shown in BIBREF14 that internet topologies also can be modelled by power law. While investigating the presence of power laws in information retrieval data, the authors showed that query frequency and 5 out of 24 term frequency distributions could be best fit by a power law BIBREF15. Power law has found immense applications in different domains. In this paper, we intend to use it to model the word frequencies of twitter messages posted during this time. Statistical analysis of tweets In this section, we present the details of the analysis performed on the data obtained pertaining to Twitter messages from January 2020 upto now, that is the time since the news of the Coronavirus outbreak in China was spread across nations. The word frequency data corresponding to the twitter messages has been taken from BIBREF16. The data source indicates that during March 11th to March 30th there were over 4 million tweets a day with the surge in the awareness. Also, the data prominently captures the tweets in English, Spanish, and French languages. A total of four datasets have been used to carry out the study. The first is the data on Twitter Id evolution reflecting on number of tweets and the other three are unigram, bigram and trigram frequencies of words. In the following subsections analysis of each has been undertaken. Statistical analysis of tweets ::: Twitter id evolution First, we consider the data corresponding to the number of twitter ids tweeting about coronavirus at a particular time. Fig. FIGREF1 depicts the pattern of the twitter id evolution. A couple of peaks can be observed in its evolution in the months of February and March. Statistical analysis of tweets ::: Unigram, Bigram an Trigram word frequency analysis Three forms of tokens of words have been considered for the study viz. unigram, bigram and trigram. These represent the frequencies of one word, two words together and finally three words coupled. 
The dataset provides the top 1000 unigrams, top 1000 bigrams and the top 1000 trigrams. Fig. FIGREF3 gives the visualization of the word cloud for unigrams. It can be seen that Coronavirus was the most frequent word. Fig. FIGREF6, FIGREF7, and FIGREF8 depict plots for the Rank or Index vs frequency distributions for unigram, bigram and trigram respectively. The graphical visualization well demonstrates that the nature or the pattern of the data follows power law. The power law distribution can be seen to closely fit the data. The exponents in case of unigram and bigrams are -1.273 and -1.375 respectively while for trigram it gives a smaller value of -0.5266. The computed parameters are reported in Table TABREF9. Also, heavy tails are observed specifically in the case of unigrams and bigrams. A good fit by power law connotes a difference in the behaviour of tweet messages when compared to literary documents like novels and poems which are replicated by Zipf's law with exponents close to 1. In the following section, we investigate the goodness of fit of our proposed model using measures: SSE, $R^2$ and RMSE. Evaluating the Goodness of fit of the model The quality of fit of the data using power law distribution has been evaluated using three goodness of fit metrics: SSE, $R^2$ and RMSE. The value obtained for the three datasets with the three forms of token of words has been shown in Table TABREF9. We obtain a high value of $R^2$ in all the three cases: unigram (0.9172), bigram (0.8718) and trigram (0.9461). Also, the values of SSE and RMSE obtained in all the three cases is quite low. This confirms that power law provides a good model for the frequency data of the tweet messages. Sentiment Analysis of Twitter Messages Sentiment analysis is a fast growing field due to its capacity to interpret emotional quotient of a text.It has often been defined as a computational study that relates to people's attitudes, emotions and opinions towards a subject BIBREF17. The key intent of sentiment analysis is to gauge opinions, identify hidden sentiments and finally to classify their polarity into positive, negative or neutral. Sentiment analysis has been immensely used in customer reviews BIBREF18, news and blogs BIBREF19, and stock market BIBREF20 to name a few. Several methods have been deployed for sentiment analysis including Support Vector Machines BIBREF21, Naive Bayes BIBREF22, and Artificial Neural Networks BIBREF23. There have also been several papers that have provided algorithms for sentiment analysis on twitter posts BIBREF24,BIBREF25, BIBREF26, BIBREF27. In the present work, we use the Python built-in package TextBlob BIBREF28 to perform sentiment analysis of tweets pertaining to the coronavirus outbreak. The analysis has been conducted on two datasets: one corresponds to the tweets made by WHO and the other is the one that contains tweets that have been retweeted more than 1000 times. The polarity values of individual tweets have been computed. The interpretation of these values is as follows: polarity $>$ 0 implies positive; polarity $<$ 0 implies negative; polarity $=$ 0 implies neutral. The polarities range between -1 to 1. These polarity values have then been plotted in a histogram to highlight the overall sentiment (i.e. more positivity or negativity). The plots have been given in Fig. FIGREF11, FIGREF12, FIGREF13, and FIGREF14. Table presents the results of the percentage of positive, negative and neutral tweets based on the polarities in the dataset. 
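A small sketch of how such a percentage breakdown could be tabulated from the individual polarity values — the numbers below are illustrative placeholders, not values from the WHO or high-retweet datasets:

```python
from collections import Counter

# Illustrative polarity values; in practice these come from
# TextBlob(tweet).sentiment.polarity for each tweet in the dataset.
polarities = [0.0, 0.35, -0.2, 0.0, 0.6, 0.1, 0.0, -0.5, 0.0, 0.25]

def label(p):
    return "positive" if p > 0 else "negative" if p < 0 else "neutral"

counts = Counter(label(p) for p in polarities)
total = len(polarities)
for sentiment in ("positive", "neutral", "negative"):
    print(f"{sentiment}: {100 * counts[sentiment] / total:.1f}%")
```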
The following section outlines an analysis of the results obtained. Results of Sentiment Analysis of Twitter Messages In this section, we provide a detailed discussion of the results obtained from the sentiment analysis of the two datasets. Fig. FIGREF11 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by general public. It can be seen that majority of the tweets have a neutral sentiment followed by positive. The same can be inferred from Table TABREF10 that shows that around 54$\%$ tweets are neutral, 29$\%$ positive and a mere 15$\%$ is negative. Fig. FIGREF12 corresponds to the histogram of sentiment polarities of tweets on Coronavirus by WHO. It can be seen that majority of the tweets have a neutral and positive sentiment. Table TABREF10 that shows that around 60$\%$ tweets are positive, 24$\%$ neutral and a mere 15$\%$ is negative. This shows how WHO is trying to retain the positive spirit through its social media accounts. Fig. FIGREF13 and FIGREF14 represent the histograms produced by removing the neutral tweets. It readily reiterates that the positive emotions in the tweets are higher than negative ones. This is a great sign that indicates that the humans are still focusing more on the positive and sharing the same light through their messages. Conclusion The paper is an effort towards deriving statistical characterization of tweet messages posted during the Coronavirus outbreak. The spread of the disease has created an environment of threat, risks and uncertainty among the population globally. This response can be gauged from the immensely high number of tweet messages corresponding to the outbreak in a short duration of 2-3 months. In the present work, data related to tweets made since January 2020 have been analyzed with two major perspectives: one understanding the word frequency pattern and the second sentiment analysis. The study has resulted in the following observations. The number of twitter ids tweeting about Coronavius has increased rapidly with several peaks during February and March. An empirical analysis of words in the messages was made by determining their frequencies of occurrence. The most frequent words were Coronavirus, Covid19 and Wuhan. Unigram, bigram and trigram frequencies (the top 1000) were modeled. It was seen that all of them followed the rightly skewed power law distribution with quite heavy tails in the first two cases. The exponential parameters for the same were determined to be -1.273 for unigram, -1.375 for bigram and -0.5266 for trigram. The plots for the same have been depicted. The goodness of fit for the model was determined using SSE, $R^2$ and RMSE. The results were found to be satisfactorily good. The model can be used to determine the pattern of the words used during this time. The sentiment analysis was performed on tweet messages by general public and WHO. The polarities of the individual messages were determined and a histogram of the same was plotted. It could be observed that most of the messages belonged to the neutral and positive categories. With WHO messages, $60\%$ of the messages were found to be positive and with general messages, $29\%$ were found to be positive and $54\%$ neutral. In both the cases, just about $15\%$ messages were of negative sentiment. The results obtained are a great reflection of the sentiments expressed worldwide during this pandemic.
No
2ec97cf890b537e393c2ce4c2b3bd05dfe46f683
2ec97cf890b537e393c2ce4c2b3bd05dfe46f683_0
Q: How do they measure correlation between the prediction and explanation quality? Text: Introduction In recent years deep neural network models have been successfully applied in a variety of applications such as machine translation BIBREF0 , object recognition BIBREF1 , BIBREF2 , game playing BIBREF3 , dialog BIBREF4 and more. However, their lack of interpretability makes them a less attractive choice when stakeholders must be able to understand and validate the inference process. Examples include medical diagnosis, business decision-making and reasoning, legal and safety compliance, etc. This opacity also presents a challenge simply for debugging and improving model performance. For neural systems to move into realms where more transparent, symbolic models are currently employed, we must find mechanisms to ground neural computation in meaningful human concepts, inferences, and explanations. One approach to this problem is to treat the explanation problem itself as a learning problem and train a network to explain the results of a neural computation. This can be done either with a single network learning jointly to explain its own predictions or with separate networks for prediction and explanation. Regardless, the availability of sufficient labelled training data is a key impediment. In previous work BIBREF5 we developed a synthetic conversational reasoning dataset in which the User presents the Agent with a simple, ambiguous story and a challenge question about that story. Ambiguities arise because some of the entities in the story have been replaced by variables, some of which may need to be known to answer the challenge question. A successful Agent must reason about what the answers might be, given the ambiguity, and, if there is more than one possible answer, ask for the value of a relevant variable to reduce the possible answer set. In this paper we present a new dataset e-QRAQ constructed by augmenting the QRAQ simulator with the ability to provide detailed explanations about whether the Agent's response was correct and why. Using this dataset we perform some preliminary experiments, training an extended End-to-End Memory Network architecture BIBREF6 to jointly predict a response and a partial explanation of its reasoning. We consider two types of partial explanation in these experiments: the set of relevant variables, which the Agent must know to ask a relevant, reasoned question; and the set of possible answers, which the Agent must know to answer correctly. We demonstrate a strong correlation between the qualities of the prediction and explanation. Related Work Current interpretable machine learning algorithms for deep learning can be divided into two approaches: one approach aims to explain black box models in a model-agnostic fashion BIBREF7 , BIBREF8 ; another studies learning models, in particular deep neural networks, by visualizing for example the activations or gradients inside the networks BIBREF9 , BIBREF10 , BIBREF11 . Other work has studied the interpretability of traditional machine learning algorithms, such as decision trees BIBREF12 , graphical models BIBREF13 , and learned rule-based systems BIBREF14 . Notably, none of these algorithms produces natural language explanations, although the rule-based system is close to a human-understandable form if the features are interpretable. We believe one of the major impediments to getting NL explanations is the lack of datasets containing supervised explanations. 
Datasets have often accelerated the advance of machine learning in their perspective areas BIBREF15 , including computer vision BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , natural language BIBREF21 , BIBREF22 , BIBREF23 , reasoning BIBREF24 , BIBREF25 , BIBREF5 , etc. Recently, natural language explanation was added to complement existing visual datasets via crowd-sourcing labeling BIBREF26 . However, we know of no question answering or reasoning datasets which offer NL explanations. Obviously labeling a large number of examples with explanations is a difficult and tedious task – and not one which is easily delegated to an unskilled worker. To make progress until such a dataset is available or other techniques obviate its need, we follow the approach of existing work such as BIBREF24 , BIBREF4 , and generate synthetic natural language explanations from a simulator. The QRAQ Dataset A QRAQ domain, as introduced in BIBREF5 , has two actors, the User and the Agent. The User provides a short story set in a domain similar to the HomeWorld domain of BIBREF24 , BIBREF27 given as an initial context followed by a sequence of events, in temporal order, and a challenge question. The stories are semantically coherent but may contain hidden, sometimes ambiguous, entity references, which the Agent must potentially resolve to answer the question. To do so, the Agent can query the User for the value of variables which hide the identity of entities in the story. At each point in the interaction, the Agent must determine whether it knows the answer, and if so, provide it; otherwise it must determine a variable to query which will reduce the potential answer set (a “relevant” variable). In example SECREF1 the actors $v, $w, $x and $y are treated as variables whose value is unknown to the Agent. In the first event, for example, $v refers to either Hannah or Emma, but the Agent can't tell which. In a realistic text this entity obfuscation might occur due to spelling or transcription errors, unknown descriptive references such as “Emma's sibling”, or indefinite pronouns such as “somebody”. Several datasets with 100k problems each and of varying difficulty have been released to the research community and are available for download BIBREF28 . The Dataset This paper's main contribution is an extension to the original QRAQ simulator that provides extensive explanations of the reasoning process required to solve a QRAQ problem. These explanations are created dynamically at runtime, in response to the Agent's actions. The following two examples illustrate these explanations, for several different scenarios: The context (C), events (E), and question (Q) parts of the problem are identical to those in a QRAQ problem. In addition there is a trace of the interaction of a trained Agent (A) model with the User (U) simulator. The simulator provides two kinds of explanations in response to the Agent's query or answer. The first kind denoted “U” indicates whether the Agent's response is correct or not and why. The second kind of explanation, denoted “U INLINEFORM0 ” provides a full description of what can be inferred in the current state of the interaction. In this case the relevant information is the set of possible answers at different points in the interaction (Porch, Boudoir / Porch for Example UID13 ) and the set of relevant variables ($V0 / none for Example UID13 ). 
In Example UID13 , illustrating a successful interaction, the Agent asks for the value of $V0 and the User responds with the answer (Silvia) as well as an explanation indicating that it was correct (helpful) and why. Specifically, in this instance it was helpful because it enabled an inference which reduced the possible answer set (and reduced the set of relevant variables). On the other hand, in Example UID30 , we see an example of a bad query and corresponding critical explanation. In general, the e-QRAQ simulator offers the following explanations to the Agent: When answering, the User will provide feedback depending on whether or not the Agent has enough information to answer; that is, on whether the set of possible answers contains only one answer. If the Agent has enough information, the User will only provide feedback on whether or not the answer was correct and on the correct answer if the answer was false. If the agent does not have enough information, and is hence guessing, the User will say so and list all still relevant variables and the resulting possible answers. When querying, the User will provide several kinds of feedback, depending on how useful the query was. A query on a variable not even occurring in the problem will trigger an explanation that says that the variable is not in the problem. A query on an irrelevant variable will result in an explanation showing that the story's protagonist cannot be the entity hidden by that variable. Finally, a useful (i.e. relevant) query will result in feedback showing the inference that is possible by knowing that variable's reference. This set of inference can also serve as the detailed explanation to obtain the correct answer above. The e-QRAQ simulator will be available upon publication of this paper at the same location as QRAQ BIBREF28 for researchers to test their interpretable learning algorithms. The “interaction flow” The normal interaction flow between the User and the Agent during runtime of the simulator is shown in Figure FIGREF49 , and is - with the exception of the additional explanations - identical to the interaction flow for the original QRAQ proglems BIBREF5 . This means that the User acts as a scripted counterpart to the Agent in the simulated e-QRAQ environment. We show interaction flows for both supervised and reinforcement learning modes. Additionally, we want to point out that INLINEFORM0 in Figure FIGREF49 can be both U and U INLINEFORM1 , i.e. both the natural language explanation and the internal state explanations. Performance and accuracy are measured by the User, that compares the Agent's suggested actions and the Agent's suggested explanations with the ground truth known by the User. Experimental Setup For the experiments, we use the User simulator explanations to train an extended memory network. As shown in Figure FIGREF50 , our network architecture extends the End-to-End Memory architecture of BIBREF6 , adding a two layer Multi-Layer Perceptron to a concatenation of all “hops” of the network. The explanation and response prediction are trained jointly. In these preliminary experiments we do not train directly with the natural language explanation from U, just the explanation of what can be inferred in the current state U INLINEFORM0 . In future experiments we will work with the U explanations directly. 
Specifically, for our experiments, we provide a classification label for the prediction output generating the Agent's actions, and a vector INLINEFORM0 of the following form to the explanation output (where INLINEFORM1 is a one-hot encoding of dimensionality (or vocabulary size) INLINEFORM2 of word INLINEFORM3 , and INLINEFORM4 is the explanation set): DISPLAYFORM0 For testing, we consider the network to predict an entity in the explanation if the output vector INLINEFORM0 surpasses a threshold at the index corresponding to that entity. We tried several thresholds, some adaptive (such as the average of the output vector's values), but found that a fixed threshold of .5 works best. Results To evaluate the model's ability to jointly learn to predict and explain its predictions, we performed two experiments. First, we investigate how prediction accuracy is affected by jointly training the network to produce explanations. Second, we evaluate how well the model learns to generate explanations. To understand the role of the explanation content in the learning process, we perform both of these experiments for each of the two types of explanation: relevant variables and possible answers. We do not perform hyperparameter optimization on the E2E Memory Network, since we are more interested in relative performance. While we only show a single experimental run in our Figures, results were nearly identical for over five experimental runs. The experimental results differ widely for the two kinds of explanation considered, with explanations based on possible answers giving better scores in both experiments. As illustrated in Figure FIGREF52 , simultaneously learning possible-answer explanations does not affect prediction, while learning relevant-variable explanations severely impairs prediction performance, slowing the learning by roughly a factor of four. We observe the same outcome for the quality of the explanations learned, shown in Figure FIGREF53 . Here again the performance on possible-answer explanations is significantly higher than for relevant-variable explanations. Possible-answer explanations reach an F-Score of .9, while relevant-variable explanations reach only .09, with precision and recall deviating only slightly from the F-Score in all experiments. We would expect explanation performance to correlate with prediction performance. Since possible-answer knowledge is primarily needed to decide if the net has enough information to answer the challenge question without guessing, and relevant-variable knowledge is needed for the net to know what to query, we analyzed the network's performance on querying and answering separately. The memory network has particular difficulty learning to query relevant variables, reaching only about .5 accuracy when querying. At the same time, it learns to answer very well, reaching over .9 accuracy. Since these two parts of the interaction are what we ask it to explain in the two modes, we find that the quality of the explanations strongly correlates with the quality of the algorithm executed by the network. Conclusion and Future Work We have constructed a new dataset and simulator, e-QRAQ, designed to test a network's ability to explain its predictions in a set of multi-turn, challenging reasoning problems.
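As a rough, hypothetical sketch of the multi-hot explanation target and fixed-threshold decoding described above — the exact encoding (DISPLAYFORM0) is not reproduced in this text, so the sum-of-one-hots form below is an assumption, and the vocabulary and output values are invented purely for illustration:

```python
import numpy as np

# Hypothetical vocabulary; entity and variable names are borrowed from the
# example problems only for illustration.
vocab = ["Hannah", "Emma", "Silvia", "Porch", "Boudoir", "$V0", "$V1"]
word_to_idx = {w: i for i, w in enumerate(vocab)}

def explanation_target(explanation_set):
    # Multi-hot target: one-hot encodings of the words in the explanation set, summed.
    e = np.zeros(len(vocab))
    for w in explanation_set:
        e[word_to_idx[w]] = 1.0
    return e

def decode_explanation(output_vector, threshold=0.5):
    # An entity is predicted to be in the explanation if its output activation
    # exceeds the fixed threshold (0.5 worked best in the experiments).
    return [vocab[i] for i, v in enumerate(output_vector) if v > threshold]

target = explanation_target({"Porch", "Boudoir"})            # possible-answer explanation
prediction = np.array([0.1, 0.2, 0.3, 0.9, 0.7, 0.4, 0.0])   # a hypothetical network output
print(target)
print(decode_explanation(prediction))                        # -> ['Porch', 'Boudoir']
```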
In addition to providing supervision on the correct response at each turn, the simulator provides two types of explanation to the Agent: a natural language assessment of the Agent's prediction, which includes language about whether the prediction was correct or not, and a description of what can be inferred in the current state – both about the possible answers and the relevant variables. We used the relevant-variable and possible-answer explanations to jointly train a modified E2E memory network to both predict and explain its predictions. Our experiments show that the quality of the explanations strongly correlates with the quality of the predictions. Moreover, when the network has trouble predicting, as it does with queries, requiring it to generate good explanations slows its learning. For future work, we would like to investigate whether we can train the net to generate natural language explanations and how this might affect prediction performance.
They look at the performance accuracy of explanation and the prediction performance
41174d8b176cb8549c2d83429d94ba8218335c84
41174d8b176cb8549c2d83429d94ba8218335c84_0
Q: Does the Agent ask for a value of a variable using natural language generated text? Text: Introduction In recent years deep neural network models have been successfully applied in a variety of applications such as machine translation BIBREF0 , object recognition BIBREF1 , BIBREF2 , game playing BIBREF3 , dialog BIBREF4 and more. However, their lack of interpretability makes them a less attractive choice when stakeholders must be able to understand and validate the inference process. Examples include medical diagnosis, business decision-making and reasoning, legal and safety compliance, etc. This opacity also presents a challenge simply for debugging and improving model performance. For neural systems to move into realms where more transparent, symbolic models are currently employed, we must find mechanisms to ground neural computation in meaningful human concepts, inferences, and explanations. One approach to this problem is to treat the explanation problem itself as a learning problem and train a network to explain the results of a neural computation. This can be done either with a single network learning jointly to explain its own predictions or with separate networks for prediction and explanation. Regardless, the availability of sufficient labelled training data is a key impediment. In previous work BIBREF5 we developed a synthetic conversational reasoning dataset in which the User presents the Agent with a simple, ambiguous story and a challenge question about that story. Ambiguities arise because some of the entities in the story have been replaced by variables, some of which may need to be known to answer the challenge question. A successful Agent must reason about what the answers might be, given the ambiguity, and, if there is more than one possible answer, ask for the value of a relevant variable to reduce the possible answer set. In this paper we present a new dataset e-QRAQ constructed by augmenting the QRAQ simulator with the ability to provide detailed explanations about whether the Agent's response was correct and why. Using this dataset we perform some preliminary experiments, training an extended End-to-End Memory Network architecture BIBREF6 to jointly predict a response and a partial explanation of its reasoning. We consider two types of partial explanation in these experiments: the set of relevant variables, which the Agent must know to ask a relevant, reasoned question; and the set of possible answers, which the Agent must know to answer correctly. We demonstrate a strong correlation between the qualities of the prediction and explanation. Related Work Current interpretable machine learning algorithms for deep learning can be divided into two approaches: one approach aims to explain black box models in a model-agnostic fashion BIBREF7 , BIBREF8 ; another studies learning models, in particular deep neural networks, by visualizing for example the activations or gradients inside the networks BIBREF9 , BIBREF10 , BIBREF11 . Other work has studied the interpretability of traditional machine learning algorithms, such as decision trees BIBREF12 , graphical models BIBREF13 , and learned rule-based systems BIBREF14 . Notably, none of these algorithms produces natural language explanations, although the rule-based system is close to a human-understandable form if the features are interpretable. We believe one of the major impediments to getting NL explanations is the lack of datasets containing supervised explanations. 
Datasets have often accelerated the advance of machine learning in their perspective areas BIBREF15 , including computer vision BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , natural language BIBREF21 , BIBREF22 , BIBREF23 , reasoning BIBREF24 , BIBREF25 , BIBREF5 , etc. Recently, natural language explanation was added to complement existing visual datasets via crowd-sourcing labeling BIBREF26 . However, we know of no question answering or reasoning datasets which offer NL explanations. Obviously labeling a large number of examples with explanations is a difficult and tedious task – and not one which is easily delegated to an unskilled worker. To make progress until such a dataset is available or other techniques obviate its need, we follow the approach of existing work such as BIBREF24 , BIBREF4 , and generate synthetic natural language explanations from a simulator. The QRAQ Dataset A QRAQ domain, as introduced in BIBREF5 , has two actors, the User and the Agent. The User provides a short story set in a domain similar to the HomeWorld domain of BIBREF24 , BIBREF27 given as an initial context followed by a sequence of events, in temporal order, and a challenge question. The stories are semantically coherent but may contain hidden, sometimes ambiguous, entity references, which the Agent must potentially resolve to answer the question. To do so, the Agent can query the User for the value of variables which hide the identity of entities in the story. At each point in the interaction, the Agent must determine whether it knows the answer, and if so, provide it; otherwise it must determine a variable to query which will reduce the potential answer set (a “relevant” variable). In example SECREF1 the actors $v, $w, $x and $y are treated as variables whose value is unknown to the Agent. In the first event, for example, $v refers to either Hannah or Emma, but the Agent can't tell which. In a realistic text this entity obfuscation might occur due to spelling or transcription errors, unknown descriptive references such as “Emma's sibling”, or indefinite pronouns such as “somebody”. Several datasets with 100k problems each and of varying difficulty have been released to the research community and are available for download BIBREF28 . The Dataset This paper's main contribution is an extension to the original QRAQ simulator that provides extensive explanations of the reasoning process required to solve a QRAQ problem. These explanations are created dynamically at runtime, in response to the Agent's actions. The following two examples illustrate these explanations, for several different scenarios: The context (C), events (E), and question (Q) parts of the problem are identical to those in a QRAQ problem. In addition there is a trace of the interaction of a trained Agent (A) model with the User (U) simulator. The simulator provides two kinds of explanations in response to the Agent's query or answer. The first kind denoted “U” indicates whether the Agent's response is correct or not and why. The second kind of explanation, denoted “U INLINEFORM0 ” provides a full description of what can be inferred in the current state of the interaction. In this case the relevant information is the set of possible answers at different points in the interaction (Porch, Boudoir / Porch for Example UID13 ) and the set of relevant variables ($V0 / none for Example UID13 ). 
In Example UID13 , illustrating a successful interaction, the Agent asks for the value of $V0 and the User responds with the answer (Silvia) as well as an explanation indicating that it was correct (helpful) and why. Specifically, in this instance it was helpful because it enabled an inference which reduced the possible answer set (and reduced the set of relevant variables). On the other hand, in Example UID30 , we see an example of a bad query and corresponding critical explanation. In general, the e-QRAQ simulator offers the following explanations to the Agent: When answering, the User will provide feedback depending on whether or not the Agent has enough information to answer; that is, on whether the set of possible answers contains only one answer. If the Agent has enough information, the User will only provide feedback on whether or not the answer was correct and on the correct answer if the answer was false. If the agent does not have enough information, and is hence guessing, the User will say so and list all still relevant variables and the resulting possible answers. When querying, the User will provide several kinds of feedback, depending on how useful the query was. A query on a variable not even occurring in the problem will trigger an explanation that says that the variable is not in the problem. A query on an irrelevant variable will result in an explanation showing that the story's protagonist cannot be the entity hidden by that variable. Finally, a useful (i.e. relevant) query will result in feedback showing the inference that is possible by knowing that variable's reference. This set of inference can also serve as the detailed explanation to obtain the correct answer above. The e-QRAQ simulator will be available upon publication of this paper at the same location as QRAQ BIBREF28 for researchers to test their interpretable learning algorithms. The “interaction flow” The normal interaction flow between the User and the Agent during runtime of the simulator is shown in Figure FIGREF49 , and is - with the exception of the additional explanations - identical to the interaction flow for the original QRAQ proglems BIBREF5 . This means that the User acts as a scripted counterpart to the Agent in the simulated e-QRAQ environment. We show interaction flows for both supervised and reinforcement learning modes. Additionally, we want to point out that INLINEFORM0 in Figure FIGREF49 can be both U and U INLINEFORM1 , i.e. both the natural language explanation and the internal state explanations. Performance and accuracy are measured by the User, that compares the Agent's suggested actions and the Agent's suggested explanations with the ground truth known by the User. Experimental Setup For the experiments, we use the User simulator explanations to train an extended memory network. As shown in Figure FIGREF50 , our network architecture extends the End-to-End Memory architecture of BIBREF6 , adding a two layer Multi-Layer Perceptron to a concatenation of all “hops” of the network. The explanation and response prediction are trained jointly. In these preliminary experiments we do not train directly with the natural language explanation from U, just the explanation of what can be inferred in the current state U INLINEFORM0 . In future experiments we will work with the U explanations directly. 
Specifically, for our experiments, we provide a classification label for the prediction output generating the Agent's actions, and a vector INLINEFORM0 of the following form to the explanation output (where INLINEFORM1 is an one-hot encoding of dimensionality (or vocabulary size) INLINEFORM2 of word INLINEFORM3 , and INLINEFORM4 is the explanation set: DISPLAYFORM0 For testing, we consider the network to predict a entity in the explanation if the output vector INLINEFORM0 surpasses a threshold for the index corresponding to that entity. We tried several thresholds, some adaptive (such as the average of the output vector's values), but found that a fixed threshold of .5 works best. Results To evaluate the model's ability to jointly learn to predict and explain its predictions we performed two experiments. First, we investigate how the prediction accuracy is affected by jointly training the network to produce explanations. Second, we evaluate how well the model learns to generate explanations. To understand the role of the explanation content in the learning process we perform both of these experiments for each of the two types of explanation: relevant variables and possible answers. We do not perform hyperparameter optimization on the E2E Memory Network, since we are more interested in relative performance. While we only show a single experimental run in our Figures, results were nearly identical for over five experimental runs. The experimental results differ widely for the two kinds of explanation considered, where an explanation based on possible answers provides better scores for both experiments. As illustrated in Figure FIGREF52 , simultaneously learning possible-answer explanations does not affect prediction, while learning relevant-variable explanation learning severely impairs prediction performance, slowing the learning by roughly a factor of four. We can observe the same outcome for the quality of the explanations learned, shown in Figure FIGREF53 . Here again the performance on possible-answer explanations is significantly higher than for relevant-variable explanations. Possible-answer explanations reach an F-Score of .9, while relevant-variable explanations one of .09 only, with precision and recall only slightly deviating from the F-Score in all experiments. We would expect that explanation performance should correlate with prediction performance. Since Possible-answer knowledge is primarily needed to decide if the net has enough information to answer the challenge question without guessing and relevant-variable knowledge is needed for the net to know what to query, we analyzed the network's performance on querying and answering separately. The memory network has particular difficulty learning to query relevant variables, reaching only about .5 accuracy when querying. At the same time, it learns to answer very well, reaching over .9 accuracy there. Since these two parts of the interaction are what we ask it to explain in the two modes, we find that the quality of the explanations strongly correlates with the quality of the algorithm executed by the network. Conclusion and Future Work We have constructed a new dataset and simulator, e-QRAQ, designed to test a network's ability to explain its predictions in a set of multi-turn, challenging reasoning problems. 
In addition to providing supervision on the correct response at each turn, the simulator provides two types of explanation to the Agent: a natural language assessment of the Agent's prediction, which includes language about whether the prediction was correct or not, and a description of what can be inferred in the current state – both about the possible answers and the relevant variables. We used the relevant-variable and possible-answer explanations to jointly train a modified E2E memory network to both predict and explain its predictions. Our experiments show that the quality of the explanations strongly correlates with the quality of the predictions. Moreover, when the network has trouble predicting, as it does with queries, requiring it to generate good explanations slows its learning. For future work, we would like to investigate whether we can train the net to generate natural language explanations and how this might affect prediction performance.
Yes
47ecaca8adc7306e3014e8c4358e306a5f0e1716
47ecaca8adc7306e3014e8c4358e306a5f0e1716_0
Q: What models does this overview cover? Text: Introduction Before introducing the KB completion task in details, let us return to the classic Word2Vec example of a “royal” relationship between “ $\mathsf {king}$ ” and “ $\mathsf {man}$ ”, and between “ $\mathsf {queen}$ ” and “ $\mathsf {woman}$ .” As illustrated in this example: $v_{king} - v_{man} \approx v_{queen} - v_{woman}$ , word vectors learned from a large corpus can model relational similarities or linguistic regularities between pairs of words as translations in the projected vector space BIBREF0 , BIBREF1 . Figure 1 shows another example of a relational similarity between word pairs of countries and capital cities: $ v_{Japan} - v_{Tokyo} &\approx & v_{Germany} - v_{Berlin}\\ v_{Germany} - v_{Berlin} &\approx & v_{Italy} - v_{Rome} \\ v_{Italy} - v_{Rome} &\approx & v_{Portugal} - v_{Lisbon} $ Let us consider the country and capital pairs in Figure 1 to be pairs of entities rather than word types. That is, we now represent country and capital entities by low-dimensional and dense vectors. The relational similarity between word pairs is presumably to capture a “ $\mathsf {is\_capital\_of}$ ” relationship between country and capital entities. Also, we represent this relationship by a translation vector $v_{{is\_capital\_of}}$ in the entity vector space. Thus, we expect: $ v_{Tokyo} + v_{{is\_capital\_of}} - v_{Japan} &\approx & 0 \\ v_{Berlin} + v_{{is\_capital\_of}} - v_{Germany} &\approx & 0 \\ v_{Rome} + v_{{is\_capital\_of}} - v_{Italy} &\approx & 0 \\ v_{Lisbon} + v_{{is\_capital\_of}} - v_{Portugal} &\approx & 0 $ This intuition inspired the TransE model—a well-known embedding model for KB completion or link prediction in KBs BIBREF2 . Knowledge bases are collections of real-world triples, where each triple or fact $(h, r, t)$ in KBs represents some relation $r$ between a head entity $h$ and a tail entity $t$ . KBs can thus be formalized as directed multi-relational graphs, where nodes correspond to entities and edges linking the nodes encode various kinds of relationship BIBREF3 , BIBREF4 . Here entities are real-world things or objects such as persons, places, organizations, music tracks or movies. Each relation type defines a certain relationship between entities. For example, as illustrated in Figure 2 , the relation type “ $\mathsf {child\_of}$ ” relates person entities with each other, while the relation type “ $\mathsf {born\_in}$ ” relates person entities with place entities. Several KB examples include the domain-specific KB GeneOntology and popular generic KBs of WordNet BIBREF5 , YAGO BIBREF6 , Freebase BIBREF7 , NELL BIBREF8 and DBpedia BIBREF9 as well as commercial KBs such as Google's Knowledge Graph, Microsoft's Satori and Facebook's Open Graph. Nowadays, KBs are used in a number of commercial applications including search engines such as Google, Microsoft's Bing and Facebook's Graph search. They also are useful resources for many NLP tasks such as question answering BIBREF10 , BIBREF11 , word sense disambiguation BIBREF12 , BIBREF13 , semantic parsing BIBREF14 , BIBREF15 and co-reference resolution BIBREF16 , BIBREF17 . A main issue is that even very large KBs, such as Freebase and DBpedia, which contain billions of fact triples about the world, are still far from complete. In particular, in English DBpedia 2014, 60% of person entities miss a place of birth and 58% of the scientists do not have a fact about what they are known for BIBREF19 . 
In Freebase, 71% of 3 million person entities miss a place of birth, 75% do not have a nationality, while 94% have no facts about their parents BIBREF20. So, in terms of a specific application, question answering systems based on incomplete KBs would not provide a correct answer given a correctly interpreted question. For example, given the incomplete KB in Figure 2, it would be impossible to answer the question “where was Jane born?”, although the question is completely matched with existing entity and relation type information (i.e., “$\mathsf {Jane}$” and “$\mathsf {born\_in}$”) in the KB. Consequently, much work has been devoted towards knowledge base completion to perform link prediction in KBs, which attempts to predict whether a relationship/triple not in the KB is likely to be true, i.e., to add new triples by leveraging existing triples in the KB BIBREF21, BIBREF22, BIBREF23, BIBREF3. For example, we would like to predict the missing tail entity in the incomplete triple $\mathsf {(Jane, born\_in, ?)}$ or predict whether the triple $\mathsf {(Jane, born\_in, Miami)}$ is correct or not. Embedding models for KB completion have been proven to give state-of-the-art link prediction performance, in which entities are represented by latent feature vectors while relation types are represented by latent feature vectors and/or matrices and/or third-order tensors BIBREF24, BIBREF25, BIBREF2, BIBREF26, BIBREF27, BIBREF28, BIBREF29, BIBREF19, BIBREF30, BIBREF3, BIBREF31, BIBREF32, BIBREF33. This article briefly overviews the embedding models for KB completion, and then summarizes up-to-date experimental results on two standard evaluation tasks: i) the entity prediction task—which is also referred to as the link prediction task BIBREF2—and ii) the triple classification task BIBREF34. A general approach Let $\mathcal {E}$ denote the set of entities and $\mathcal {R}$ the set of relation types. Denote by $\mathcal {G}$ the knowledge base consisting of a set of correct triples $(h, r, t)$, such that $h, t \in \mathcal {E}$ and $r \in \mathcal {R}$. For each triple $(h, r, t)$, the embedding models define a score function $f(h, r, t)$ of its implausibility. Their goal is to choose $f$ such that the score $f(h, r, t)$ of a plausible triple $(h, r, t) \in \mathcal {G}$ is smaller than the score $f(h^{\prime }, r^{\prime }, t^{\prime })$ of an implausible triple $(h^{\prime }, r^{\prime }, t^{\prime }) \notin \mathcal {G}$. Table 1 summarizes different score functions $f(h, r, t)$ and the optimization algorithms used to estimate model parameters. To learn model parameters (i.e., entity vectors, relation vectors or matrices), the embedding models minimize an objective function. A common objective function is the following margin-based function: $$\mathcal {L} = \sum _{\begin{array}{c}(h,r,t) \in \mathcal {G} \\ (h^{\prime },r,t^{\prime }) \in \mathcal {G}^{\prime }_{(h, r, t)}\end{array}} [\gamma + f(h, r, t) - f(h^{\prime }, r, t^{\prime })]_+ \nonumber $$ (Eq. 5) where $[x]_+ = \max (0, x)$, $\gamma $ is the margin hyper-parameter, and $\mathcal {G}^{\prime }_{(h, r, t)}$ is the set of incorrect triples generated by corrupting the correct triple $(h, r, t)\in \mathcal {G}$. Specific models The Unstructured model BIBREF22 assumes that the head and tail entity vectors are similar. As the Unstructured model does not take the relationship into account, it cannot distinguish different relation types.
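As a concrete instance of the score function and of the margin-based objective in Eq. (5), the sketch below implements a TransE-style score with an L2 dissimilarity and uniform corruption of either the head or the tail. The choice of L2 over L1, the uniform corruption, and the hyperparameter values are illustrative assumptions rather than a faithful reproduction of any specific reported setup; in practice the loss is minimized with SGD and entity vectors are commonly kept close to unit norm.

import numpy as np

rng = np.random.default_rng(0)

def transe_score(h, r, t, ent_emb, rel_emb):
    # Implausibility score f(h, r, t) = ||v_h + v_r - v_t||; lower means more plausible.
    return np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb[t])

def corrupt(h, r, t, n_entities):
    # Build an incorrect triple by replacing either the head or the tail
    # with a randomly drawn entity (the relation is kept).
    if rng.random() < 0.5:
        return int(rng.integers(n_entities)), r, t
    return h, r, int(rng.integers(n_entities))

def margin_loss(batch, ent_emb, rel_emb, n_entities, gamma=1.0):
    # Margin-based objective: sum over the batch of [gamma + f(correct) - f(corrupted)]_+.
    loss = 0.0
    for h, r, t in batch:
        hc, rc, tc = corrupt(h, r, t, n_entities)
        loss += max(0.0, gamma + transe_score(h, r, t, ent_emb, rel_emb)
                         - transe_score(hc, rc, tc, ent_emb, rel_emb))
    return loss

A fuller implementation would also exclude corrupted triples that are themselves in the KB when forming the corrupted set, which is the role played by $\mathcal {G}^{\prime }_{(h, r, t)}$ in Eq. (5).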
The Structured Embedding (SE) model BIBREF35 assumes that the head and tail entities are similar only in a relation-dependent subspace, where each relation is represented by two different matrices. Furthermore, the SME model BIBREF22 uses four different matrices to project entity and relation vectors into a subspace. The TransE model BIBREF2 is inspired by models such as the Word2Vec Skip-gram model BIBREF0 where relationships between words often correspond to translations in latent feature space. TorusE BIBREF36 embeds entities and relations on a torus to handle TransE's regularization problem. The TransH model BIBREF26 associates each relation with a relation-specific hyperplane and uses a projection vector to project entity vectors onto that hyperplane. TransD BIBREF37 and TransR/CTransR BIBREF28 extend the TransH model by using two projection vectors and a matrix to project entity vectors into a relation-specific space, respectively. Similar to TransR, TransR-FT BIBREF38 also uses a matrix to project head and tail entity vectors. TEKE_H BIBREF39 extends TransH to incorporate rich context information in an external text corpus. lppTransD BIBREF40 extends TransD to additionally use two projection vectors for representing each relation. STransE BIBREF41 and TranSparse BIBREF42 can be viewed as direct extensions of the TransR model, where head and tail entities are associated with their own projection matrices. Unlike STransE, the TranSparse model uses adaptive sparse matrices, whose sparse degrees are defined based on the number of entities linked by relations. TranSparse-DT BIBREF43 is an extension of TranSparse with a dynamic translation. ITransF BIBREF44 can be considered as a generalization of STransE, which allows sharing statistic regularities between relation projection matrices and alleviates data sparsity issue. DISTMULT BIBREF45 is based on the Bilinear model BIBREF24 , BIBREF22 , BIBREF25 where each relation is represented by a diagonal rather than a full matrix. The neural tensor network (NTN) model BIBREF34 uses a bilinear tensor operator to represent each relation while ER-MLP BIBREF27 and ProjE BIBREF46 can be viewed as simplified versions of NTN. Such quadratic forms are also used to model entities and relations in KG2E BIBREF47 , TransG BIBREF48 , ComplEx BIBREF31 , TATEC BIBREF3 , RSTE BIBREF49 and ANALOGY BIBREF50 . In addition, the HolE model BIBREF33 uses circular correlation–a compositional operator–which can be interpreted as a compression of the tensor product. ConvE BIBREF51 and ConvKB BIBREF52 are based on convolutional neural networks. ConvE uses a 2D convolutional layer directly over head-entity and relation vector embeddings while ConvKB applies a convolutional layer over embedding triples. Unlike ConvE and ConvKB, the IRN model BIBREF53 uses a shared memory and recurrent neural network-based controller to implicitly model multi-step structured relationships. Recent research has shown that relation paths between entities in KBs provide richer context information and improve the performance of embedding models for KB completion BIBREF54 , BIBREF55 , BIBREF56 , BIBREF29 , BIBREF32 , BIBREF57 , BIBREF58 . BIBREF54 constructed relation paths between entities and, viewing entities and relations in the path as pseudo-words, then applied Word2Vec algorithms BIBREF0 to produce pre-trained vectors for these pseudo-words. 
BIBREF54 showed that using these pre-trained vectors for initialization helps to improve the performance of models TransE BIBREF2 , SME BIBREF22 and SE BIBREF35 . BIBREF55 used the implausibility score produced by SME to compute the weights of relation paths. PTransE-RNN BIBREF59 models relation paths by using a recurrent neural network. In addition, rTransE BIBREF56 , PTransE-ADD BIBREF59 and TransE-comp BIBREF29 are extensions of the TransE model. These models similarly represent a relation path by a vector which is the sum of the vectors of all relations in the path, whereas in the Bilinear-comp model BIBREF29 and the pruned-paths model BIBREF32 , each relation is a matrix and so it represents the relation path by matrix multiplication. The neighborhood mixture model TransE-NMM BIBREF57 can be also viewed as a three-relation path model as it takes into account the neighborhood entity and relation information of both head and tail entities in each triple. Neighborhood information is also exploited in the relational graph convolutional networks R-GCN BIBREF60 . Furthermore, BIBREF58 proposed the KB $_{LRN}$ framework to combine relational paths of length one and two with latent and numerical features. Other KB completion models The Path Ranking Algorithm (PRA) BIBREF21 is a random walk inference technique which was proposed to predict a new relationship between two entities in KBs. BIBREF61 used PRA to estimate the probability of an unseen triple as a combination of weighted random walks that follow different paths linking the head entity and tail entity in the KB. BIBREF23 made use of an external text corpus to increase the connectivity of the KB used as the input to PRA. BIBREF62 improved PRA by proposing a subgraph feature extraction technique to make the generation of random walks in KBs more efficient and expressive, while BIBREF63 extended PRA to couple the path ranking of multiple relations. PRA can also be used in conjunction with first-order logic in the discriminative Gaifman model BIBREF64 . In addition, BIBREF65 used a recurrent neural network to learn vector representations of PRA-style relation paths between entities in the KB. Other random-walk based learning algorithms for KB completion can be also found in BIBREF66 , BIBREF67 , BIBREF68 and BIBREF69 . Recently, BIBREF70 have proposed a Neural Logic Programming (LP) framework to learning probabilistic first-order logical rules for KB reasoning, producing competitive link prediction performances. See other methods for learning from KBs and multi-relational data in BIBREF4 . Evaluation tasks Two standard tasks are proposed to evaluate embedding models for KB completion including: the entity prediction task, i.e. link prediction BIBREF2 , and the triple classification task BIBREF34 . Information about benchmark datasets for KB completion evaluation is given in Table 2 . Commonly, datasets FB15k and WN18 BIBREF2 are used for entity prediction evaluation, while datasets FB13 and WN11 BIBREF34 are used for triple classification evaluation. FB15k and FB13 are derived from the large real-world fact KB FreeBase BIBREF7 . WN18 and WN11 are derived from the large lexical KB WordNet BIBREF71 . BIBREF30 noted that FB15k and WN18 are not challenging datasets because they contain many reversible triples. 
BIBREF51 showed a concrete example: A test triple ( $\mathsf {feline, hyponym, cat}$ ) can be mapped to a training triple ( $\mathsf {cat, hypernym, feline}$ ), thus knowing that “ $\mathsf {hyponym}$ ” and “ $\mathsf {hypernym}$ ” are reversible allows us to easily predict the majority of test triples. So, datasets FB15k-237 BIBREF30 and WN18RR BIBREF51 are created to serve as realistic KB completion datasets which represent a more challenging learning setting. FB15k-237 and WN18RR are subsets of FB15k and WN18, respectively. Note that when creating the FB13 and WN11 datasets, BIBREF34 already filtered out triples from the test set if either or both of their head and tail entities also appear in the training set in a different relation type or order. Entity prediction The entity prediction task, i.e. link prediction BIBREF2 , predicts the head or the tail entity given the relation type and the other entity, i.e. predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$ where $?$ denotes the missing element. The results are evaluated using a ranking induced by the function $f(h, r, t)$ on test triples. Each correct test triple $(h, r, t)$ is corrupted by replacing either its head or tail entity by each of the possible entities in turn, and then these candidates are ranked in ascending order of their implausibility score. This is called as the “Raw” setting protocol. Furthermore, the “Filtered” setting protocol, described in NIPS20135071, filters out before ranking any corrupted triples that appear in the KB. Ranking a corrupted triple appearing in the KB (i.e. a correct triple) higher than the original test triple is also correct, but is penalized by the “Raw” score, thus the “Filtered” setting provides a clearer view on the ranking performance. In addition to the mean rank and the Hits@10 (i.e., the proportion of test triples for which the target entity was ranked in the top 10 predictions), which were originally used in the entity prediction task BIBREF2 , recent work also reports the mean reciprocal rank (MRR). In both “Raw” and “Filtered” settings, mean rank is always greater or equal to 1 and the lower mean rank indicates better entity prediction performance. MRR and Hits@10 scores always range from 0.0 to 1.0, and higher score reflects better prediction result. Table 3 lists entity prediction results of KB completion models on the FB15k and WN18 datasets. The first 26 rows report the performance of triple-based models that directly optimize a score function for the triples in a KB, i.e. they do not exploit information about alternative paths between head and tail entities. The next 9 rows report results of models that exploit information about relation paths. The last 3 rows present results for models which make use of textual mentions derived from a large external corpus. 
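A small sketch of the "Filtered" entity prediction protocol described above: the true tail of each test triple is ranked against all candidate tails, candidates that would form another known correct triple are skipped, and mean rank, MRR and Hits@10 are computed from the resulting ranks. The score function and container types are placeholders, head prediction is handled analogously by corrupting the head instead, and ties are broken optimistically in this sketch.

def filtered_tail_ranks(test_triples, all_true_triples, n_entities, score):
    # Rank each correct tail among all candidates, in ascending order of
    # implausibility; the "Filtered" setting ignores candidates that form
    # another correct triple already present in the KB.
    ranks = []
    for h, r, t in test_triples:
        true_score = score(h, r, t)
        rank = 1
        for cand in range(n_entities):
            if cand == t or (h, r, cand) in all_true_triples:
                continue
            if score(h, r, cand) < true_score:
                rank += 1
        ranks.append(rank)
    return ranks

def summarize(ranks):
    mr = sum(ranks) / len(ranks)                       # mean rank (>= 1, lower is better)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)     # mean reciprocal rank (higher is better)
    hits10 = sum(r <= 10 for r in ranks) / len(ranks)  # Hits@10 (higher is better)
    return mr, mrr, hits10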
The reasons why much work has been devoted towards developing triple-based models are mentioned by BIBREF41 as follows: (1) additional information sources might not be available, e.g., for KBs for specialized domains, (2) models that do not exploit path information or external resources are simpler and thus typically much faster to train than the more complex models using path or external information, and (3) the more complex models that exploit path or external information are typically extensions of these simpler models, and are often initialized with parameters estimated by such simpler models, so improvements to the simpler models should yield corresponding improvements to the more complex models as well. Table 3 shows that the models using external corpus information or employing path information generally achieve better scores than the triple-based models that do not use such information. In terms of models not exploiting path or external information, on FB15k the IRN model BIBREF53 obtains highest scores, followed by DISTMULT BIBREF45 , ProjE BIBREF46 and ConvE BIBREF51 . On WN18 top-4 triple-based models are ConvE, IRN, TorusE BIBREF36 and ANALOGY BIBREF50 . Table 4 lists recent results on datasets FB15k-237 and WN18RR. On FB15k-237, by exploiting external textual mentions of entities, the Conv-E + Conv-DISTMULT model BIBREF75 produces the highest Hits@10 and MRR. In terms of models not exploiting external textual information, on FB15k-237, ER-MLP BIBREF27 can be considered as the best model to date, followed by ConvKB BIBREF52 and KB $_{LRN}$ BIBREF58 . On WN18RR, ConvKB can be considered as the best one, followed by ComplEx BIBREF31 and TransE BIBREF2 . Clearly, tables 3 and 4 show that TransE, despite of its simplicity, can produce very competitive results (by performing a careful grid search of hyper-parameters). Triple classification The triple classification task was first introduced by NIPS20135028, and since then it has been used to evaluate various embedding models. The aim of this task is to predict whether a triple $(h, r, t)$ is correct or not. For classification, a relation-specific threshold $\theta _r$ is set for each relation type $r$ . If the implausibility score of an unseen test triple $(h, r, t)$ is smaller than $\theta _r$ then the triple will be classified as correct, otherwise incorrect. Following NIPS20135028, the relation-specific thresholds are determined by maximizing the micro-averaged accuracy, which is a per-triple average, on the validation set. Table 5 presents the triple classification results of KB completion models on the WN11 and FB13 datasets. The first 6 rows report the performance of models that use TransE to initialize the entity and relation vectors. The last 12 rows present the accuracy of models with randomly initialized parameters. Note that there are higher results reported for NTN, Bilinear-comp and TransE-comp when entity vectors are initialized by averaging the pre-trained word vectors BIBREF0 , BIBREF1 . It is not surprising as many entity names in WordNet and FreeBase are lexically meaningful. It is possible for all other embedding models to utilize the pre-trained word vectors as well. However, as pointed out by BIBREF26 and BIBREF29 , averaging the pre-trained word vectors for initializing entity vectors is an open problem and it is not always useful since entity names in many domain-specific KBs are not lexically meaningful. 
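The relation-specific thresholds $\theta _r$ used in triple classification can be tuned exactly as described: for each relation, choose the cut-off on the implausibility score that maximizes accuracy on that relation's validation triples. The candidate thresholds below (midpoints between consecutive sorted scores) are one reasonable choice among several, not the only possible procedure.

def tune_threshold(val_scores, val_labels):
    # Pick the threshold maximizing validation accuracy for one relation;
    # a triple is classified as correct when its score is below the threshold.
    pairs = sorted(zip(val_scores, val_labels))
    scores = [s for s, _ in pairs]
    candidates = ([scores[0] - 1.0]
                  + [(scores[i] + scores[i + 1]) / 2.0 for i in range(len(scores) - 1)]
                  + [scores[-1] + 1.0])
    best_theta, best_acc = candidates[0], -1.0
    for theta in candidates:
        acc = sum((s < theta) == bool(y) for s, y in pairs) / len(pairs)
        if acc > best_acc:
            best_theta, best_acc = theta, acc
    return best_theta

def classify(test_scores, theta):
    # Correct iff the implausibility score falls below the relation's threshold.
    return [s < theta for s in test_scores]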
Conclusions and further discussion This article presented a brief overview of embedding models of entity and relationships for KB completion. The article also provided up-to-date experimental results of the embedding models on the entity prediction and triple classification tasks on the benchmark datasets FB15k, WN18, FB15k-237, WN18RR, FB13 and WN11. Dozens of embedding models have been proposed for KB completion, so it is worth exploring these models further for new applications whose data can be formulated as triples. As an example of an interesting application, BIBREF76 extended the STransE model BIBREF41 to a search personalization task in information retrieval, modeling user-oriented relationships between submitted queries and documents returned by search engines.
This article presented a brief overview of embedding models of entity and relationships for KB completion.
496b4ae3c0e26ec95ff6ded5e6790f24c35f0f5b
496b4ae3c0e26ec95ff6ded5e6790f24c35f0f5b_0
Q: How do they incorporate human advice? Text: Introduction The problem of knowledge base population (KBP) – constructing a knowledge base (KB) of facts gleaned from a large corpus of unstructured data – poses several challenges for the NLP community. Commonly, this relation extraction task is decomposed into two subtasks – entity linking, in which entities are linked to already identified identities within the document or to entities in the existing KB, and slot filling, which identifies certain attributes about a target entity. We present our work-in-progress for KBP slot filling based on our probabilistic logic formalisms and present the different components of the system. Specifically, we employ Relational Dependency Networks BIBREF0 , a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data. We consider our RDN system against the current state-of-the-art for KBP to demonstrate the effectiveness of our probabilistic relational framework. Additionally, we show how RDNs can effectively incorporate many popular approaches in relation extraction such as joint learning, weak supervision, word2vec features, and human advice, among others. We provide a comprehensive comparison of settings such as joint learning vs learning of individual relations, use of weak supervision vs gold standard labels, using expert advice vs only learning from data, etc. These questions are extremely interesting from a general machine learning perspective, but also critical to the NLP community. As we show empirically, some of the results such as human advice being useful in many relations and joint learning being beneficial in the cases where the relations are correlated among themselves are on the expected lines. However, some surprising observations include the fact that weak supervision is not as useful as expected and word2vec features are not as predictive as the other domain-specific features. We first present the proposed pipeline with all the different components of the learning system. Next we present the set of 14 relations that we learn on before presenting the experimental results. We finally discuss the results of these comparisons before concluding by presenting directions for future research. Proposed Pipeline We present the different aspects of our pipeline, depicted in Figure FIGREF1 . We will first describe our approach to generating features and training examples from the KBP corpus, before describing the core of our framework – the RDN Boost algorithm. Feature Generation Given a training corpus of raw text documents, our learning algorithm first converts these documents into a set of facts (i.e., features) that are encoded in first order logic (FOL). Raw text is processed using the Stanford CoreNLP Toolkit BIBREF1 to extract parts-of-speech, word lemmas, etc. as well as generate parse trees, dependency graphs and named-entity recognition information. The full set of extracted features is listed in Table TABREF3 . These are then converted into features in prolog (i.e., FOL) format and are given as input to the system. In addition to the structured features from the output of Stanford toolkit, we also use deeper features based on word2vec BIBREF2 as input to our learning system. Standard NLP features tend to treat words as individual objects, ignoring links between words that occur with similar meanings or, importantly, similar contexts (e.g., city-country pairs such as Paris – France and Rome – Italy occur in similar contexts). 
word2vec provide a continuous-space vector embedding of words that, in practice, capture many of these relationships BIBREF2 , BIBREF3 . We use word vectors from Stanford and Google along with a few specific words that, experts believe, are related to the relations learned. For example, we include words such as “father” and “mother” (inspired by the INLINEFORM0 relation) or “devout”,“convert”, and “follow” ( INLINEFORM1 relation). We generated features from word vectors by finding words with high similarity in the embedded space. That is, we used word vectors by considering relations of the following form: INLINEFORM2 , where INLINEFORM3 is the cosine similarity score between the words. Only the top cosine similarity scores for a word are utilized. Weak Supervision One difficulty with the KBP task is that very few documents come labeled as gold standard labels, and further annotation is prohibitively expensive beyond a few hundred documents. This is problematic for discriminative learning algorithms, like the RDN learning algorithm, which excel when given a large supervised training corpus. To overcome this obstacle, we employ weak supervision – the use of external knowledge (e.g., a database) to heuristically label examples. Following our work in Soni et al. akbc16, we employ two approaches for generating weakly supervised examples – distant supervision and knowledge-based weak supervision. Distant supervision entails the use of external knowledge (e.g., a database) to heuristically label examples. Following standard procedure, we use three data sources – Never Ending Language Learner (NELL) BIBREF4 , Wikipedia Infoboxes and Freebase. For a given target relation, we identify relevant database(s), where the entries in the database form entity pairs (e.g., an entry of INLINEFORM0 for a parent database) that will serve as a seed for positive training examples. These pairs must then be mapped to mentions in our corpus – that is, we must find sentences in our corpus that contain both entities together BIBREF5 . This process is done heuristically and is fraught with potential errors and noise BIBREF6 . An alternative approach, knowledge-based weak supervision is based on previous work BIBREF7 , BIBREF8 with the following insight: labels are typically created by “domain experts” who annotate the labels carefully, and who typically employ some inherent rules in their mind to create examples. For example, when identifying family relationship, we may have an inductive bias towards believing two persons in a sentence with the same last name are related, or that the words “son” or “daughter” are strong indicators of a parent relation. We call this world knowledge as it describes the domain (or the world) of the target relation. To this effect, we encode the domain expert's knowledge in the form of first-order logic rules with accompanying weights to indicate the expert's confidence. We use the probabilistic logic formalism Markov Logic Networks BIBREF9 to perform inference on unlabeled text (e.g., the TAC KBP corpus). Potential entity pairs from the corpus are queried to the MLN, yielding (weakly-supervised) positive examples. We choose MLNs as they permit domain experts to easily write rules while providing a probabilistic framework that can handle noise, uncertainty, and preferences while simultaneously ranking positive examples. We use the Tuffy system BIBREF10 to perform inference. The inference algorithm implemented inside Tuffy appears to be robust and scales well to millions of documents. 
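A brief sketch of how the word2vec similarity features described earlier in this section could be materialized as first-order facts: for a handful of anchor words believed to be relevant to a relation (e.g., "father" and "mother" for the parent relation), the most similar vocabulary words are emitted as weighted facts. The predicate name, the top-k cut-off, and the plain-dictionary embedding store are assumptions made for illustration.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_facts(anchor_words, embeddings, top_k=5):
    # For each anchor word, emit prolog-style facts for its top-k most
    # similar words, annotated with the cosine similarity score.
    # 'embeddings' is assumed to map word -> vector.
    facts = []
    for anchor in anchor_words:
        sims = [(w, cosine(embeddings[anchor], vec))
                for w, vec in embeddings.items() if w != anchor]
        for w, score in sorted(sims, key=lambda x: -x[1])[:top_k]:
            facts.append(f"word_similar({anchor}, {w}, {score:.3f}).")
    return facts

# e.g., anchors inspired by the parent and religion relations:
# similarity_facts(["father", "mother", "devout"], embeddings)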
For the KBP task, some rules that we used are shown in Table TABREF8 . For example, the first rule identifies any number following a person's name and separated by a comma is likely to be the person's age (e.g., “Sharon, 42”). The third and fourth rule provide examples of rules that utilize more textual features; these rules state the appearance of the lemma “mother” or “father” between two persons is indicative of a parent relationship (e.g.,“Malia's father, Barack, introduced her...”). To answer Q1, we generated positive training examples using the weak supervision techniques specified earlier. Specifically, we evaluated 10 relations as show in Table TABREF20 . Based on experiments from BIBREF8 , we utilized our knowledge-based weak supervision approach to provide positive examples in all but two of our relations. A range of 4 to 8 rules are derived for each relation. Examples for the organization relations INLINEFORM0 and INLINEFORM1 were generated using standard distant supervision techniques – Freebase databases were mapped to INLINEFORM2 while Wikipedia Infoboxes provides entity pairs for INLINEFORM3 . Lastly, only 150 weakly supervised examples were utilized in our experiments (all gold standard examples were utilized). Performing larger runs is part of work in progress. The results are presented in Table TABREF20 . We compared our standard pipeline (individually learned relations with only standard features) learned on gold standard examples only versus our system learned with weak and gold examples combined. Surprisingly, weak supervision does not seem to help learn better models for inferring relations in most cases. Only two relations – INLINEFORM0 , INLINEFORM1 – see substantial improvements in AUC ROC, while F1 shows improvements for INLINEFORM2 and, INLINEFORM3 , and INLINEFORM4 . We hypothesize that generating more examples will help (some relations produced thousands of examples), but nonetheless find the lack of improved models from even a modest number of examples a surprising result. Alternatively, the number of gold standard examples provided may be sufficient to learn RDN models. Thus Q1 is answered equivocally, but in the negative. Learning Relational Dependency Networks Previous research BIBREF11 has demonstrated that joint inferences of the relations are more effective than considering each relation individually. Consequently, we have considered a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data called Relational Dependency Networks (RDNs) BIBREF0 , BIBREF12 . RDNs extend dependency networks (DN) BIBREF13 to the relational setting. The key idea in a DN is to approximate the joint distribution over a set of random variables as a product of their marginal distributions, i.e., INLINEFORM0 INLINEFORM1 INLINEFORM2 . It has been shown that employing Gibbs sampling in the presence of a large amount of data allows this approximation to be particularly effective. Note that, one does not have to explicitly check for acyclicity making these DNs particularly easy to be learned. In an RDN, typically, each distribution is represented by a relational probability tree (RPT) BIBREF14 . However, following previous work BIBREF12 , we replace the RPT of each distribution with a set of relational regression trees BIBREF15 built in a sequential manner i.e., replace a single tree with a set of gradient boosted trees. 
This approach has been shown to have state-of-the-art results in learning RDNs and we adapted boosting to learn for relation extraction. Since this method requires negative examples, we created negative examples by considering all possible combinations of entities that are not present in positive example set and sampled twice as many negatives as positive examples. Incorporating Human Advice While most relational learning methods restrict the human to merely annotating the data, we go beyond and request the human for advice. The intuition is that we as humans read certain patterns and use them to deduce the nature of the relation between two entities present in the text. The goal of our work is to capture such mental patterns of the humans as advice to the learning algorithm. We modified the work of Odom et al. odomAIME15,odomAAAI15 to learn RDNs in the presence of advice. The key idea is to explicitly represent advice in calculating gradients. This allows the system to trade-off between data and advice throughout the learning phase, rather than only consider advice in initial iterations. Advice, in particular, become influential in the presence of noisy or less amout of data. A few sample advice rules in English (these are converted to first-order logic format and given as input to our algorithm) are presented in Table TABREF11 . Note that some of the rules are “soft" rules in that they are not true in many situations. Odom et al. odomAAAI15 weigh the effect of the rules against the data and hence allow for partially correct rules. Experiments and Results We now present our experimental evaluation. We considered 14 specific relations from two categories, person and organization from the TAC KBP competition. The relations considered are listed in the left column of Table TABREF13 . We utilize documents from KBP 2014 for training while utilizing documents from the 2015 corpus for testing. All results presented are obtained from 5 different runs of the train and test sets to provide more robust estimates of accuracy. We consider three standard metrics – area under the ROC curve, F-1 score and the recall at a certain precision. We chose the precision as INLINEFORM0 since the fraction of positive examples to negatives is 1:2 (we sub-sampled the negative examples for the different training sets). Negative examples are re-sampled for each training run. It must be mentioned that not all relations had the same number of hand-annotated (gold standard) examples because the 781 documents that we annotated had different number of instances for these relations. The train/test gold-standard sizes are provided in the table, including weakly supervised examples, if available. Lastly, to control for other factors, the default setting for our experiments is individual learning, standard features, with gold standard examples only (i.e., no weak supervision, word2vec, advice, or advice). Since our system had different components, we aimed to answer the following questions: Joint learning To address our next question, we assessed our pipeline when learning relations independently (i.e., individually) versus learning relations jointly within the RDN, displayed in Table TABREF22 . Recall and F1 are omitted for conciseness – the conclusions are the same across all metrics. Joint learning appears to help in about half of the relations (8/14). Particularly, in person category, joint learning with gold standard outperforms their individual learning counterparts. 
This is due to the fact that some relations such as parents, spouse, siblings etc. are inter-related and learning them jointly indeed improves performance. Hence Q2 can be answered affirmatively for half the relations. word2vec Table TABREF24 shows the results of experiments comparing the RDN framework with and without word2vec features. word2vec appears to largely have no impact, boosting results in just 4 relations. We hypothesize that this may be due to a limitation in the depth of trees learned. Learning more and/or deeper trees may improve use of word2vec features, and additional work can be done to generate deep features from word vectors. Q3 is answered cautiously in the negative, although future work could lead to improvements. Advice Table TABREF26 shows the results of experiments that test the use of advice within the joint learning setting. The use of advice improves or matches the performance of using only joint learning. The key impact of advice can be mostly seen in the improvement of recall in several relations. This clearly shows that using human advice patterns allows us to extract more relations effectively making up for noisy or less number of training examples. This is in-line with previously published machine learning literature BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 in that humans can be more than mere labelers by providing useful advice to learning algorithms that can improve their performance. Thus Q4 can be answered affirmatively. RDN Boost vs Relation Factory Relation factory (RF) BIBREF16 is an efficient, open source system for performing relation extraction based on distantly supervised classifiers. It was the top system in the TAC KBP 2013 competition BIBREF21 and thus serves as a suitable baseline for our method. RF is very conservative in its responses, making it very difficult to adjust the precision levels. To be most generous to RF, we present recall for all returned results (i.e., score INLINEFORM0 ). The AUC ROC, recall, and F1 scores of our system against RF are presented in Table TABREF28 . Our system performs comparably, and often better than the state-of-the-art Relation Factory system. In particular, our method outperforms Relation Factory in AUC ROC across all relations. Recall provides a more mixed picture with both approaches showing some improvements – RDN outperforms in 6 relations while Relation Factory does so in 8. Note that in the instances where RDN provides superior recall, it does so with dramatic improvements (RF often returns 0 positives in these relations). F1 also shows RDN's superior performance, outperforming RF in most relations. Thus, the conclusion for Q5 is that our RDN framework performas comparably, if not better, across all metrics against the state-of-the-art. Conclusion We presented our fully relational system utilizing Relational Dependency Networks for the Knowledge Base Population task. We demonstrated RDN's ability to effectively learn the relation extraction task, performing comparably (and often better) than the state-of-art Relation Factory system. Furthermore, we demonstrated the ability of RDNs to incorporate various concepts in a relational framework, including word2vec, human advice, joint learning, and weak supervision. Some surprising results are that weak supervision and word2vec did not significantly improve performance. However, advice is extremely useful thus validating the long-standing results inside the Artificial Intelligence community for the relation extraction task as well. 
Possible future directions include considering a larger number of relations, deeper features, and comparisons with more systems. We believe additional work on developing word2vec features and utilizing more weak supervision examples may reveal further insights into how to effectively use such features in RDNs.
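The 2:1 negative sampling used when training the boosted RDN (described in the learning section above) can be sketched as follows. Restricting candidates to ordered pairs of entities seen in the corpus, and the fixed random seed, are simplifying assumptions; type constraints and document boundaries are ignored here.

import itertools
import random

def sample_negatives(positive_pairs, entities, ratio=2, seed=13):
    # Enumerate entity pairs absent from the positive set and sample
    # twice as many negatives as positives.
    rng = random.Random(seed)
    positives = set(positive_pairs)
    candidates = [(e1, e2) for e1, e2 in itertools.permutations(entities, 2)
                  if (e1, e2) not in positives]
    k = min(len(candidates), ratio * len(positives))
    return rng.sample(candidates, k)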
by converting human advice to first-order logic format and use as an input to calculate gradient
281cb27cfa0eea12180fd82ae33035945476609e
281cb27cfa0eea12180fd82ae33035945476609e_0
Q: What do they learn jointly? Text: Introduction The problem of knowledge base population (KBP) – constructing a knowledge base (KB) of facts gleaned from a large corpus of unstructured data – poses several challenges for the NLP community. Commonly, this relation extraction task is decomposed into two subtasks – entity linking, in which entities are linked to already identified identities within the document or to entities in the existing KB, and slot filling, which identifies certain attributes about a target entity. We present our work-in-progress for KBP slot filling based on our probabilistic logic formalisms and present the different components of the system. Specifically, we employ Relational Dependency Networks BIBREF0 , a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data. We consider our RDN system against the current state-of-the-art for KBP to demonstrate the effectiveness of our probabilistic relational framework. Additionally, we show how RDNs can effectively incorporate many popular approaches in relation extraction such as joint learning, weak supervision, word2vec features, and human advice, among others. We provide a comprehensive comparison of settings such as joint learning vs learning of individual relations, use of weak supervision vs gold standard labels, using expert advice vs only learning from data, etc. These questions are extremely interesting from a general machine learning perspective, but also critical to the NLP community. As we show empirically, some of the results such as human advice being useful in many relations and joint learning being beneficial in the cases where the relations are correlated among themselves are on the expected lines. However, some surprising observations include the fact that weak supervision is not as useful as expected and word2vec features are not as predictive as the other domain-specific features. We first present the proposed pipeline with all the different components of the learning system. Next we present the set of 14 relations that we learn on before presenting the experimental results. We finally discuss the results of these comparisons before concluding by presenting directions for future research. Proposed Pipeline We present the different aspects of our pipeline, depicted in Figure FIGREF1 . We will first describe our approach to generating features and training examples from the KBP corpus, before describing the core of our framework – the RDN Boost algorithm. Feature Generation Given a training corpus of raw text documents, our learning algorithm first converts these documents into a set of facts (i.e., features) that are encoded in first order logic (FOL). Raw text is processed using the Stanford CoreNLP Toolkit BIBREF1 to extract parts-of-speech, word lemmas, etc. as well as generate parse trees, dependency graphs and named-entity recognition information. The full set of extracted features is listed in Table TABREF3 . These are then converted into features in prolog (i.e., FOL) format and are given as input to the system. In addition to the structured features from the output of Stanford toolkit, we also use deeper features based on word2vec BIBREF2 as input to our learning system. Standard NLP features tend to treat words as individual objects, ignoring links between words that occur with similar meanings or, importantly, similar contexts (e.g., city-country pairs such as Paris – France and Rome – Italy occur in similar contexts). 
word2vec provide a continuous-space vector embedding of words that, in practice, capture many of these relationships BIBREF2 , BIBREF3 . We use word vectors from Stanford and Google along with a few specific words that, experts believe, are related to the relations learned. For example, we include words such as “father” and “mother” (inspired by the INLINEFORM0 relation) or “devout”,“convert”, and “follow” ( INLINEFORM1 relation). We generated features from word vectors by finding words with high similarity in the embedded space. That is, we used word vectors by considering relations of the following form: INLINEFORM2 , where INLINEFORM3 is the cosine similarity score between the words. Only the top cosine similarity scores for a word are utilized. Weak Supervision One difficulty with the KBP task is that very few documents come labeled as gold standard labels, and further annotation is prohibitively expensive beyond a few hundred documents. This is problematic for discriminative learning algorithms, like the RDN learning algorithm, which excel when given a large supervised training corpus. To overcome this obstacle, we employ weak supervision – the use of external knowledge (e.g., a database) to heuristically label examples. Following our work in Soni et al. akbc16, we employ two approaches for generating weakly supervised examples – distant supervision and knowledge-based weak supervision. Distant supervision entails the use of external knowledge (e.g., a database) to heuristically label examples. Following standard procedure, we use three data sources – Never Ending Language Learner (NELL) BIBREF4 , Wikipedia Infoboxes and Freebase. For a given target relation, we identify relevant database(s), where the entries in the database form entity pairs (e.g., an entry of INLINEFORM0 for a parent database) that will serve as a seed for positive training examples. These pairs must then be mapped to mentions in our corpus – that is, we must find sentences in our corpus that contain both entities together BIBREF5 . This process is done heuristically and is fraught with potential errors and noise BIBREF6 . An alternative approach, knowledge-based weak supervision is based on previous work BIBREF7 , BIBREF8 with the following insight: labels are typically created by “domain experts” who annotate the labels carefully, and who typically employ some inherent rules in their mind to create examples. For example, when identifying family relationship, we may have an inductive bias towards believing two persons in a sentence with the same last name are related, or that the words “son” or “daughter” are strong indicators of a parent relation. We call this world knowledge as it describes the domain (or the world) of the target relation. To this effect, we encode the domain expert's knowledge in the form of first-order logic rules with accompanying weights to indicate the expert's confidence. We use the probabilistic logic formalism Markov Logic Networks BIBREF9 to perform inference on unlabeled text (e.g., the TAC KBP corpus). Potential entity pairs from the corpus are queried to the MLN, yielding (weakly-supervised) positive examples. We choose MLNs as they permit domain experts to easily write rules while providing a probabilistic framework that can handle noise, uncertainty, and preferences while simultaneously ranking positive examples. We use the Tuffy system BIBREF10 to perform inference. The inference algorithm implemented inside Tuffy appears to be robust and scales well to millions of documents. 
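To visualize the knowledge-based weak supervision step described above, here is a toy stand-in for the MLN query: candidate entity pairs are scored by the summed weights of the expert rules that fire on their sentence, and high-scoring pairs are kept as weakly-labelled positives. This is not the Tuffy system or a real MLN; the rule representation, the helper accessors on the sentence object, and the cutoff are all hypothetical.

def weak_positive_examples(candidates, rules, cutoff=0.8):
    # 'candidates' yields ((e1, e2), sentence) pairs; 'rules' is a list of
    # (weight, fires) pairs where fires(e1, e2, sentence) -> bool.
    positives = []
    for (e1, e2), sent in candidates:
        score = sum(w for w, fires in rules if fires(e1, e2, sent))
        if score >= cutoff:
            positives.append(((e1, e2), score))
    return sorted(positives, key=lambda x: -x[1])

# Hypothetical expert rules in the spirit of Table TABREF8 (weights and the
# sentence helpers 'lemmas_between' / 'same_last_name' are invented here):
example_rules = [
    (0.9, lambda e1, e2, s: "father" in s.lemmas_between(e1, e2)),
    (0.9, lambda e1, e2, s: "mother" in s.lemmas_between(e1, e2)),
    (0.6, lambda e1, e2, s: s.same_last_name(e1, e2)),
]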
For the KBP task, some rules that we used are shown in Table TABREF8 . For example, the first rule identifies any number following a person's name and separated by a comma is likely to be the person's age (e.g., “Sharon, 42”). The third and fourth rule provide examples of rules that utilize more textual features; these rules state the appearance of the lemma “mother” or “father” between two persons is indicative of a parent relationship (e.g.,“Malia's father, Barack, introduced her...”). To answer Q1, we generated positive training examples using the weak supervision techniques specified earlier. Specifically, we evaluated 10 relations as show in Table TABREF20 . Based on experiments from BIBREF8 , we utilized our knowledge-based weak supervision approach to provide positive examples in all but two of our relations. A range of 4 to 8 rules are derived for each relation. Examples for the organization relations INLINEFORM0 and INLINEFORM1 were generated using standard distant supervision techniques – Freebase databases were mapped to INLINEFORM2 while Wikipedia Infoboxes provides entity pairs for INLINEFORM3 . Lastly, only 150 weakly supervised examples were utilized in our experiments (all gold standard examples were utilized). Performing larger runs is part of work in progress. The results are presented in Table TABREF20 . We compared our standard pipeline (individually learned relations with only standard features) learned on gold standard examples only versus our system learned with weak and gold examples combined. Surprisingly, weak supervision does not seem to help learn better models for inferring relations in most cases. Only two relations – INLINEFORM0 , INLINEFORM1 – see substantial improvements in AUC ROC, while F1 shows improvements for INLINEFORM2 and, INLINEFORM3 , and INLINEFORM4 . We hypothesize that generating more examples will help (some relations produced thousands of examples), but nonetheless find the lack of improved models from even a modest number of examples a surprising result. Alternatively, the number of gold standard examples provided may be sufficient to learn RDN models. Thus Q1 is answered equivocally, but in the negative. Learning Relational Dependency Networks Previous research BIBREF11 has demonstrated that joint inferences of the relations are more effective than considering each relation individually. Consequently, we have considered a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data called Relational Dependency Networks (RDNs) BIBREF0 , BIBREF12 . RDNs extend dependency networks (DN) BIBREF13 to the relational setting. The key idea in a DN is to approximate the joint distribution over a set of random variables as a product of their marginal distributions, i.e., INLINEFORM0 INLINEFORM1 INLINEFORM2 . It has been shown that employing Gibbs sampling in the presence of a large amount of data allows this approximation to be particularly effective. Note that, one does not have to explicitly check for acyclicity making these DNs particularly easy to be learned. In an RDN, typically, each distribution is represented by a relational probability tree (RPT) BIBREF14 . However, following previous work BIBREF12 , we replace the RPT of each distribution with a set of relational regression trees BIBREF15 built in a sequential manner i.e., replace a single tree with a set of gradient boosted trees. 
This approach has been shown to have state-of-the-art results in learning RDNs and we adapted boosting to learn for relation extraction. Since this method requires negative examples, we created negative examples by considering all possible combinations of entities that are not present in positive example set and sampled twice as many negatives as positive examples. Incorporating Human Advice While most relational learning methods restrict the human to merely annotating the data, we go beyond and request the human for advice. The intuition is that we as humans read certain patterns and use them to deduce the nature of the relation between two entities present in the text. The goal of our work is to capture such mental patterns of the humans as advice to the learning algorithm. We modified the work of Odom et al. odomAIME15,odomAAAI15 to learn RDNs in the presence of advice. The key idea is to explicitly represent advice in calculating gradients. This allows the system to trade-off between data and advice throughout the learning phase, rather than only consider advice in initial iterations. Advice, in particular, become influential in the presence of noisy or less amout of data. A few sample advice rules in English (these are converted to first-order logic format and given as input to our algorithm) are presented in Table TABREF11 . Note that some of the rules are “soft" rules in that they are not true in many situations. Odom et al. odomAAAI15 weigh the effect of the rules against the data and hence allow for partially correct rules. Experiments and Results We now present our experimental evaluation. We considered 14 specific relations from two categories, person and organization from the TAC KBP competition. The relations considered are listed in the left column of Table TABREF13 . We utilize documents from KBP 2014 for training while utilizing documents from the 2015 corpus for testing. All results presented are obtained from 5 different runs of the train and test sets to provide more robust estimates of accuracy. We consider three standard metrics – area under the ROC curve, F-1 score and the recall at a certain precision. We chose the precision as INLINEFORM0 since the fraction of positive examples to negatives is 1:2 (we sub-sampled the negative examples for the different training sets). Negative examples are re-sampled for each training run. It must be mentioned that not all relations had the same number of hand-annotated (gold standard) examples because the 781 documents that we annotated had different number of instances for these relations. The train/test gold-standard sizes are provided in the table, including weakly supervised examples, if available. Lastly, to control for other factors, the default setting for our experiments is individual learning, standard features, with gold standard examples only (i.e., no weak supervision, word2vec, advice, or advice). Since our system had different components, we aimed to answer the following questions: Joint learning To address our next question, we assessed our pipeline when learning relations independently (i.e., individually) versus learning relations jointly within the RDN, displayed in Table TABREF22 . Recall and F1 are omitted for conciseness – the conclusions are the same across all metrics. Joint learning appears to help in about half of the relations (8/14). Particularly, in person category, joint learning with gold standard outperforms their individual learning counterparts. 
This is due to the fact that some relations such as parents, spouse, siblings etc. are inter-related and learning them jointly indeed improves performance. Hence Q2 can be answered affirmatively for half the relations. word2vec Table TABREF24 shows the results of experiments comparing the RDN framework with and without word2vec features. word2vec appears to largely have no impact, boosting results in just 4 relations. We hypothesize that this may be due to a limitation in the depth of trees learned. Learning more and/or deeper trees may improve use of word2vec features, and additional work can be done to generate deep features from word vectors. Q3 is answered cautiously in the negative, although future work could lead to improvements. Advice Table TABREF26 shows the results of experiments that test the use of advice within the joint learning setting. The use of advice improves or matches the performance of using only joint learning. The key impact of advice can be mostly seen in the improvement of recall in several relations. This clearly shows that using human advice patterns allows us to extract more relations effectively making up for noisy or less number of training examples. This is in-line with previously published machine learning literature BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 in that humans can be more than mere labelers by providing useful advice to learning algorithms that can improve their performance. Thus Q4 can be answered affirmatively. RDN Boost vs Relation Factory Relation factory (RF) BIBREF16 is an efficient, open source system for performing relation extraction based on distantly supervised classifiers. It was the top system in the TAC KBP 2013 competition BIBREF21 and thus serves as a suitable baseline for our method. RF is very conservative in its responses, making it very difficult to adjust the precision levels. To be most generous to RF, we present recall for all returned results (i.e., score INLINEFORM0 ). The AUC ROC, recall, and F1 scores of our system against RF are presented in Table TABREF28 . Our system performs comparably, and often better than the state-of-the-art Relation Factory system. In particular, our method outperforms Relation Factory in AUC ROC across all relations. Recall provides a more mixed picture with both approaches showing some improvements – RDN outperforms in 6 relations while Relation Factory does so in 8. Note that in the instances where RDN provides superior recall, it does so with dramatic improvements (RF often returns 0 positives in these relations). F1 also shows RDN's superior performance, outperforming RF in most relations. Thus, the conclusion for Q5 is that our RDN framework performas comparably, if not better, across all metrics against the state-of-the-art. Conclusion We presented our fully relational system utilizing Relational Dependency Networks for the Knowledge Base Population task. We demonstrated RDN's ability to effectively learn the relation extraction task, performing comparably (and often better) than the state-of-art Relation Factory system. Furthermore, we demonstrated the ability of RDNs to incorporate various concepts in a relational framework, including word2vec, human advice, joint learning, and weak supervision. Some surprising results are that weak supervision and word2vec did not significantly improve performance. However, advice is extremely useful thus validating the long-standing results inside the Artificial Intelligence community for the relation extraction task as well. 
Possible future directions include considering a larger number of relations, deeper features, and comparisons with more systems. We believe additional work on developing word2vec features and utilizing more weak supervision examples may reveal further insights into how to effectively use such features in RDNs.
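The advice mechanism adapted from Odom et al., as described above, represents advice directly in the functional gradient so that data and advice are traded off throughout learning. The sketch below conveys the idea for a single example; the convex combination weighted by lam and the count-based advice term are illustrative assumptions, not the exact published formulation.

def advice_gradient(label, prob, n_advice_for, n_advice_against, lam=0.5):
    # Data term: the usual functional-gradient residual (label - predicted probability).
    # Advice term: how many expert rules argue for vs. against this example being positive.
    data_term = label - prob
    advice_term = n_advice_for - n_advice_against
    return (1.0 - lam) * data_term + lam * advice_term

Setting lam to 0 recovers purely data-driven boosting; larger values let the expert's (possibly soft) rules dominate when training data are noisy or scarce, which matches the motivation given above.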
relations
04a4b0c6c8bd4c170c93ea7ea1bf693965ef38f4
04a4b0c6c8bd4c170c93ea7ea1bf693965ef38f4_0
Q: Is this an English-language dataset? Text: Introduction Nowadays, people increasingly tend to use social media like Facebook and Twitter as their primary source of information and news consumption. There are several reasons behind this tendency, such as the simplicity to gather and share the news and the possibility of staying abreast of the latest news and updated faster than with traditional media. An important factor is also that people can be engaged in conversations on the latest breaking news with their contacts by using these platforms. Pew Research Center's newest report shows that two-thirds of U.S. adults gather their news from social media, where Twitter is the most used platform. However, the absence of a systematic approach to do some form of fact and veracity checking may also encourage the spread of rumourous stories and misinformation BIBREF0 . Indeed, in social media, unverified information can spread very quickly and becomes viral easily, enabling the diffusion of false rumours and fake information. Within this scenario, it is crucial to analyse people attitudes towards rumours in social media and to resolve their veracity as soon as possible. Several approaches have been proposed to check the rumour veracity in social media BIBREF1 . This paper focus on a stance-based analysis of event-related rumours, following the approach proposed at SemEval-2017 in the new RumourEval shared task (Task 8, sub-task A) BIBREF2 . In this task English tweets from conversation threads, each associated to a newsworthy event and the rumours around it, are provided as data. The goal is to determine whether a tweet in the thread is supporting, denying, querying, or commenting the original rumour which started the conversation. It can be considered a stance classification task, where we have to predict the user's stance towards the rumour from a tweet, in the context of a given thread. This task has been defined as open stance classification task and is conceived as a key step in rumour resolution, by providing an analysis of people reactions towards an emerging rumour BIBREF0 , BIBREF3 . The task is also different from detecting stance towards a specific target entity BIBREF4 . Contribution We describe a novel classification approach, by proposing a new feature matrix, which includes two new groups: (a) features exploiting the conversational structure of the dataset BIBREF2 ; (b) affective features relying on the use of a wide range of affective resources capturing different facets of sentiment and other affect-related phenomena. We were also inspired by the fake news study on Twitter in BIBREF5 , showing that false stories inspire fear, disgust, and surprise in replies, while true stories inspire anticipation, sadness, joy, and trust. Meanwhile, from a dialogue act perspective, the study of BIBREF6 found that a relationship exists between the use of an affective lexicon and the communicative intention of an utterance which includes AGREE-ACCEPT (support), REJECT (deny), INFO-REQUEST (question), and OPINION (comment). They exploited several LIWC categories to analyse the role of affective content. Our results show that our model outperforms the state of the art on the Semeval-2017 benchmark dataset. Feature analysis highlights the contribution of the different feature groups, and error analysis is shedding some light on the main difficulties and challenges which still need to be addressed. Outline The paper is organized as follows. Section 2 introduces the SemEval-2017 Task 8. 
Section 3 describes our approach to open stance classification, which exploits different groups of features. Section 4 describes the evaluation and includes a qualitative error analysis. Finally, Section 5 concludes the paper and points to future directions. SemEval-2017 Task 8: RumourEval The main objective of SemEval-2017 Task 8, Task A BIBREF2 is to determine the stance of the users in a Twitter thread towards a given rumour, in terms of support, denying, querying or commenting (SDQC) on the original rumour. A rumour is defined as a “circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism and/or anxiety so as to motivate finding out the actual truth” BIBREF7. The task was very timely due to the growing importance of rumour resolution in breaking news and to the urgency of preventing the spread of misinformation. Dataset The data for this task are taken from Twitter conversations about news-related rumours collected by BIBREF3. They were annotated using four labels (SDQC): support - S (the tweet's author supports the rumour's veracity); deny - D (the author denies the rumour's veracity); query - Q (the author asks for additional information/evidence); comment - C (the author just makes a comment and does not give information relevant to assessing the rumour's veracity). The distribution consists of three sets: development, training and test sets, as summarized in Table TABREF3, which also shows the label distribution and the news events to which the rumours relate. Training data consist of 297 Twitter conversations and 4,238 tweets in total, with related direct and nested replies, where the conversations are associated with seven different breaking news events. Test data consist of 1,049 tweets, including two new rumourous topics. Participants Eight teams participated in the task. The best performing system was developed by Turing (78.4 in accuracy). ECNU, MamaEdha, UWaterloo, and DFKI-DKT utilized ensemble classifiers. Some systems also used deep learning techniques, including Turing, IKM, and MamaEdha. Meanwhile, NileTRMG and IITP used classical classifiers (SVM) to build their systems. Most of the participants exploited word embeddings to construct their feature space, besides Twitter domain features. Proposed Method We developed a new model exploiting several stylistic and structural features characterizing Twitter language. In addition, we propose to utilize conversation-based features by exploiting the peculiar tree structure of the dataset. We also explored the use of affective-based features by extracting information from several affective resources, including dialogue-act inspired features. Structural Features These features were designed by taking into account several characteristics of Twitter data and then selecting the most relevant ones to improve classification performance. The set of structural features that we used is listed below. Retweet Count: the number of retweets of each tweet. Question Mark: presence of a question mark "?"; binary value (0 or 1). Question Mark Count: number of question marks present in the tweet. Hashtag Presence: this feature has a binary value of 0 (if there is no hashtag in the tweet) or 1 (if there is at least one hashtag in the tweet). Text Length: number of characters after removing Twitter markers such as hashtags, mentions, and URLs. URL Count: number of URL links in the tweet.
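A minimal sketch of how the structural features listed above could be extracted from a raw tweet is given below. The regular expressions and field names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the structural features:
# retweet count, question-mark presence/count, hashtag presence,
# text length without Twitter markers, and URL count.
import re

URL_RE = re.compile(r"https?://\S+")
HASHTAG_RE = re.compile(r"#\w+")
MENTION_RE = re.compile(r"@\w+")

def structural_features(text, retweet_count=0):
    urls = URL_RE.findall(text)
    hashtags = HASHTAG_RE.findall(text)
    # Text length is computed after stripping hashtags, mentions, and URLs.
    stripped = MENTION_RE.sub("", HASHTAG_RE.sub("", URL_RE.sub("", text)))
    return {
        "retweet_count": retweet_count,
        "has_question_mark": int("?" in text),
        "question_mark_count": text.count("?"),
        "has_hashtag": int(len(hashtags) > 0),
        "text_length": len(stripped.strip()),
        "url_count": len(urls),
    }

# Hypothetical example:
print(structural_features("Is this true? #Ferguson https://t.co/xyz", retweet_count=12))
```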
Conversation Based Features These features are designed to exploit the peculiar characteristics of the dataset, which has a tree structure reflecting the conversation thread. Text Similarity to Source Tweet: the Jaccard similarity of each tweet with its source tweet. Text Similarity to Replied Tweet: the degree of similarity between the tweet and the previous tweet in the thread (the tweet it replies to). Tweet Depth: the depth value is obtained by counting the nodes from the source (root) to each tweet in the thread hierarchy. Affective Based Features The idea to use affective features in the context of our task was inspired by recent work on fake news detection focusing on emotional responses to true and false rumours BIBREF5, and by the work in BIBREF6 reflecting on the role of affect in dialogue acts. Multi-faceted affective features have already proven effective in some related tasks BIBREF9, including the stance detection task proposed at SemEval-2016 (Task 6). We used the following affective resources, which rely on different emotion models. Emolex: contains 14,182 words associated with eight primary emotions based on the Plutchik model BIBREF10, BIBREF11. EmoSenticNet (EmoSN): an enriched version of SenticNet BIBREF12 including 13,189 words labeled with Ekman's six basic emotions BIBREF13, BIBREF14. Dictionary of Affect in Language (DAL): includes 8,742 English words labeled with three scores representing three dimensions: Pleasantness, Activation and Imagery BIBREF15. Affective Norms for English Words (ANEW): consists of 1,034 English words BIBREF16 rated according to the Valence-Arousal-Dominance (VAD) model BIBREF17. Linguistic Inquiry and Word Count (LIWC): this psycholinguistic resource BIBREF18 includes 4,500 words distributed across 64 emotional categories, including positive (PosEMO) and negative (NegEMO). Dialogue-Act Features We also included 11 additional categories from LIWC, which have already proven effective for the dialogue-act task in previous work BIBREF6. These features are part of the affective feature group, but we present them separately because we are interested in exploring the contribution of this feature set on its own. This feature set was obtained by selecting four communicative goals related to our classes in the stance task: agree-accept (support), reject (deny), info-request (question), and opinion (comment). The 11 LIWC categories include: Agree-accept: Assent, Certain, Affect; Reject: Negate, Inhib; Info-request: You, Cause; Opinion: Future, Sad, Insight, Cogmech. Experiments, Evaluation and Analysis We used the RumourEval dataset from SemEval-2017 Task 8 described in Section SECREF2. We defined the rumour stance detection problem as a simple four-way classification task, where every tweet in the dataset (source and direct or nested reply) should be classified into one of four classes: support, deny, query, and comment. We conducted a set of experiments in order to evaluate and analyze the effectiveness of our proposed feature set. The results are summarized in Table TABREF28, showing that our system outperforms all of the other systems in terms of accuracy. Our best result was obtained by a simple configuration with a support vector classifier with a radial basis function (RBF) kernel. Our model performed better than the best-performing system in SemEval 2017 Task 8 Subtask A (Turing team, BIBREF19), which exploited a deep learning approach using an LSTM-Branch model.
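The conversation-based features described earlier in this section (Jaccard similarity to the source and replied-to tweets, and tweet depth) can be sketched as follows. The whitespace tokenisation and the parent-pointer representation of the thread are assumptions made for illustration; they are not taken from the paper.

```python
# Illustrative sketch (not the authors' code) of the conversation-based
# features: similarity to the source tweet, similarity to the replied-to
# tweet, and the depth of the tweet in the conversation tree.
def jaccard(text_a, text_b):
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def tweet_depth(tweet_id, parent_of):
    # parent_of maps a tweet id to the id it replies to (None for the source).
    depth = 0
    while parent_of.get(tweet_id) is not None:
        tweet_id = parent_of[tweet_id]
        depth += 1
    return depth

def conversation_features(tweet_text, source_text, replied_text, tweet_id, parent_of):
    return {
        "sim_to_source": jaccard(tweet_text, source_text),
        "sim_to_replied": jaccard(tweet_text, replied_text),
        "depth": tweet_depth(tweet_id, parent_of),
    }

# Hypothetical thread: t2 replies to t1, which replies to the source t0.
parents = {"t0": None, "t1": "t0", "t2": "t1"}
print(conversation_features("is this confirmed?", "police shot a suspect", "is this confirmed?", "t2", parents))
```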
In addition, we also obtained a higher accuracy than the system described in BIBREF20, which exploits a Random Forest classifier and word embedding-based features. We experimented with several classifiers, including Naive Bayes, Decision Trees, Support Vector Machine, and Random Forest, noting that SVM outperforms the other classifiers on this task. We explored the parameter space by tuning the SVM hyperparameters, namely the penalty parameter C, the kernel type, and the class weights (to deal with class imbalance). We tested several values for C (0.001, 0.01, 0.1, 1, 10, 100, and 1000), four different kernels (linear, RBF, polynomial, and sigmoid) and weighted the classes based on their distribution in the training data. The best result was obtained with C=1, RBF kernel, and without class weighting. An ablation test was conducted to explore the contribution of each feature set. Table TABREF32 shows the result of our ablation test, obtained by testing several feature sets on the same classifier (SVM with RBF kernel). This evaluation includes macro-averages of precision, recall and INLINEFORM0 -score as well as accuracy. We also present the scores for each class in order to get a better understanding of our classifier's performance. Using only conversational, affective, or dialogue-act features (without structural features) did not give a good classification result. Set B (conversational features only) was not able to detect the query and deny classes, while set C (affective features only) and set D (dialogue-act features only) failed to catch the support, query, and deny classes. Conversational features were able to improve the classifier performance significantly, especially in detecting the support class. Sets E, H, I, and K, which utilize conversational features, induce an improvement in the prediction of the support class (roughly from 0.3 to 0.73 in precision). Meanwhile, the combination of affective and dialogue-act features was able to slightly improve the classification of the query class. The improvement can be seen from set E to set K, where the INLINEFORM0 -score of the query class increased from 0.52 to 0.58. Overall, the best result was obtained by set K, which encompasses all feature sets. It is worth noting that in our best configuration, not all of the affective and dialogue-act features were used in the feature vector. After several optimization steps, we found that some features were not improving the system's performance. Our final list of affective and dialogue-act based features includes: DAL Activation, ANEW Dominance, Emolex Negative, Emolex Fear, LIWC Assent, LIWC Cause, LIWC Certain and LIWC Sad. Therefore, we have only 17 columns of features in the best performing system, covering structural, conversational, affective and dialogue-act features. We conducted a further analysis of the classification result obtained by the best performing system (79.50 in accuracy). Table TABREF30 shows the confusion matrix of our result. On the one hand, the system is able to detect the comment tweets very well. However, this result is biased due to the large number of comment instances in the dataset. On the other hand, the system fails to detect denying tweets, which were mostly misclassified as comments (68 out of 71). Meanwhile, approximately two-thirds of supporting tweets and almost half of querying tweets were classified into the correct class by the system.
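The hyperparameter search described earlier in this passage (C values, kernel types, and class weighting for an SVM) can be approximated with scikit-learn as in the sketch below. The feature matrix, labels, and cross-validation setup are assumptions, since the paper does not specify them; only the candidate values come from the text.

```python
# Rough sketch (not the authors' code) of the SVM hyperparameter search,
# using the C values, kernels, and class-weight options reported above.
# X_train would be a matrix of the engineered features and y_train the
# SDQC labels; both are placeholders here.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [0.001, 0.01, 0.1, 1, 10, 100, 1000],
    "kernel": ["linear", "rbf", "poly", "sigmoid"],
    "class_weight": [None, "balanced"],  # with and without class weighting
}

search = GridSearchCV(SVC(), param_grid, scoring="accuracy", cv=5)  # cv is an assumption
# search.fit(X_train, y_train)  # X_train, y_train are assumed to exist

# The configuration reported as best in the text: C=1, RBF kernel, no weighting.
best_reported = SVC(C=1, kernel="rbf", class_weight=None)
```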
In order to assess the impact of class imbalance on the learning, we performed an additional experiment with a balanced dataset using the best performing configuration. We took a subset of the instances equally distributed with respect to their class from the training set (330 instances for each class) and test set (71 instances for each class). As shown in Table TABREF31 , our classifier was able to correctly predict the underrepresented classes much better, although the overall accuracy is lower (59.9%). The result of this analysis clearly indicates that class imbalance has a negative impact on the system performance. Error analysis We conducted a qualitative error analysis on the 215 misclassified in the test set, to shed some light on the issues and difficulties to be addressed in future work and to detect some notable error classes. Denying by attacking the rumour's author. An interesting finding from the analysis of the Marina Joyce rumour data is that it contains a lot of denying tweets including insulting comments towards the author of the source tweet, like in the following cases: Rumour: Marina Joyce Misclassified tweets: (da1) stfu you toxic sludge (da2) @sampepper u need rehab Misclassification type: deny (gold) INLINEFORM0 comment (prediction) Source tweet: (s1) Anyone who knows Marina Joyce personally knows she has a serious drug addiction. she needs help, but in the form of rehab #savemarinajoyce Tweets like (da1) and (da2) seem to be more inclined to show the respondent's personal hatred towards the s1-tweet's author than to deny the veracity of the rumour. In other words, they represent a peculiar form of denying the rumour, which is expressed by personal attack and by showing negative attitudes or hatred towards the rumour's author. This is different from denying by attacking the source tweet content, and it was difficult to comprehend for our system, that often misclassified such kind of tweets as comments. Noisy text, specific jargon, very short text. In (da1) and (da2) (as in many tweets in the test set), we also observe the use of noisy text (abbreviations, misspellings, slang words and slurs, question statements without question mark, and so on) that our classifier struggles to handle . Moreover, especially in tweets from the Marina Joyce rumour's group, we found some very short tweets in the denying class that do not provide enough information, e.g. tweets like “shut up!", “delete", and “stop it. get some help". Argumentation context. We also observed misclassification cases that seem to be related to a deeper capability of dealing with the argumentation context underlying the conversation thread. Rumour: Ferguson Misclassified tweet: (arg1)@QuadCityPat @AP I join you in this demand. Unconscionable. Misclassification type: deny (gold) INLINEFORM0 comment (prediction) Source tweet: (s2) @AP I demand you retract the lie that people in #Ferguson were shouting “kill the police", local reporting has refuted your ugly racism Here the misclassified tweet is a reply including an explicit expression of agreement with the author of the source tweet (“I join you”). Tweet (s2) is one of the rare cases of source tweets denying the rumor (source tweets in the RumourEval17 dataset are mostly supporting the rumor at issue). Our hypothesis is that it is difficult for a system to detect such kind of stance without a deeper comprehension of the argumentation context (e.g., if the author's stance is denying the rumor, and I agree with him, then I am denying the rumor as well). 
In general, we observed that when the source tweet is annotated by the deny label, most of denying replies of the thread include features typical of the support class (and vice versa), and this was a criticism. Mixed cases. Furthermore, we found some borderline mixed cases in the gold standard annotation. See for instance the following case: Rumour: Ferguson Misclassified tweet: (mx1) @MichaelSkolnik @MediaLizzy Oh do tell where they keep track of "vigilante" stats. That's interesting. Misclassification type: query (gold) INLINEFORM0 comment (prediction) Source tweet: (s3) Every 28 hours a black male is killed in the United States by police or vigilantes. #Ferguson Tweet (mx1) is annotated with a query label rather than as a comment (our system prediction), but we can observe the presence of a comment (“That's interesting”) after the request for clarification, so it seems to be a kind of mixed case, where both labels make sense. Citation of the source's tweet. We have noticed many misclassified cases of replying tweets with error pattern support (gold) INLINEFORM0 comment (our prediction), where the text contains a literal citation of the source tweet, like in the following tweet: THIS HAS TO END “@MichaelSkolnik: Every 28 hours a black male is killed in the United States by police or vigilantes. #Ferguson” (the text enclosed in quotes is the source tweet). Such kind of mistakes could be maybe addressed by applying some pre-processing to the data, for instance by detecting the literal citation and replacing it with a marker. Figurative language devices. Finally, the use of figurative language (e.g., sarcasm) is also an issue that should be considered for the future work. Let us consider for instance the following misclassified tweets: Rumour: Hillary's Illness Misclassified tweets: (fg1) @mitchellvii True, after all she can open a pickle jar. (fg2) @mitchellvii Also, except for having a 24/7 MD by her side giving her Valium injections, Hillary is in good health! https://t.co/GieNxwTXX7 (fg3) @mitchellvii @JoanieChesnutt At the very peak yes, almost time to go down a cliff and into the earth. Misclassification type: support (gold) INLINEFORM0 comment (prediction) Source tweet: (s4) Except for the coughing, fainting, apparent seizures and "short-circuits," Hillary is in the peak of health. All misclassified tweets (fg1-fg3) from the Hillary's illness data are replies to a source tweet (s4), which is featured by sarcasm. In such replies authors support the rumor by echoing the sarcastic tone of the source tweet. Such more sophisticated cases, where the supportive attitude is expressed in an implicit way, were challenging for our classifier, and they were quite systematically misclassified as simple comments. Conclusion In this paper we proposed a new classification model for rumour stance classification. We designed a set of features including structural, conversation-based, affective and dialogue-act based feature. Experiments on the SemEval-2017 Task 8 Subtask A dataset show that our system based on a limited set of well-engineered features outperforms the state-of-the-art systems in this task, without relying on the use of sophisticated deep learning approaches. Although achieving a very good result, several research challenges related to this task are left open. Class imbalance was recognized as one the main issues in this task. 
For instance, our system was struggling to detect the deny class in the original dataset distribution, but it performed much better in that respect when we balanced the distribution across the classes. A re-run of the RumourEval shared task has been proposed at SemEval 2019, and it will be very interesting to participate in the new task with an evolution of the system described here. Acknowledgements Endang Wahyu Pamungkas, Valerio Basile and Viviana Patti were partially funded by Progetto di Ateneo/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, S1618_L2_BOSC_01).
Yes
dbfce07613e6d0d7412165e14438d5f92ad4b004
dbfce07613e6d0d7412165e14438d5f92ad4b004_0
Q: What affective-based features are used?
affective features provided by different emotion models such as Emolex, EmoSenticNet, Dictionary of Affect in Language, Affective Norms for English Words and Linguistics Inquiry and Word Count
b7e419d2c4e24c40b8ad0fae87036110297d6752
b7e419d2c4e24c40b8ad0fae87036110297d6752_0
Q: What conversation-based features are used?
For instance, our system struggled to detect the deny class under the original dataset distribution, but performed much better in that respect when we balanced the distribution across the classes. A re-run of the RumourEval shared task has been proposed at SemEval 2019, and it will be very interesting to participate in the new task with an evolution of the system described here. Acknowledgements Endang Wahyu Pamungkas, Valerio Basile and Viviana Patti were partially funded by Progetto di Ateneo/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, S1618_L2_BOSC_01).
Text Similarity to Source Tweet, Text Similarity to Replied Tweet, Tweet Depth
be9cadaebfa0ff1a3c5a5ed56ff3aae76cf5e0a4
be9cadaebfa0ff1a3c5a5ed56ff3aae76cf5e0a4_0
Q: What are the evaluation metrics used? Text: Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . It is of interest to several applications including personalized machine translation, forensics, and marketing BIBREF7 , BIBREF8 . Early approaches to gender prediction BIBREF9 , BIBREF10 are inspired by pioneering work on authorship attribution BIBREF11 . Such stylometric models typically rely on carefully hand-selected sets of content-independent features to capture style beyond topic. Recently, open vocabulary approaches BIBREF12 , where the entire linguistic production of an author is used, yielded substantial performance gains in online user-attribute prediction BIBREF13 , BIBREF14 , BIBREF15 . Indeed, the best performing gender prediction models exploit chiefly lexical information BIBREF16 , BIBREF17 . Relying heavily on the lexicon though has its limitations, as it results in models with limited portability. Moreover, performance might be overly optimistic due to topic bias BIBREF18 . Recent work on cross-lingual author profiling has proposed the use of solely language-independent features BIBREF19 , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available. We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features. One could view this as a method in between the open vocabulary strategy and the stylometric approach. It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging. In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2). We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3). We focus on gender prediction for Twitter, motivated by data availability. Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features? We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific meta-data. Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic. To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction. We propose the following transformations, exemplified in Table TABREF2 . They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 : Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in- and cross-language experiments. 
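A sketch of the kind of lexicalized setup described above (word 1-2 grams plus character 3-6 grams feeding a linear SVM) is given below. The TF-IDF weighting, the toy tweets and labels, and all vectorizer settings beyond the n-gram ranges are assumptions for illustration.

```python
# Sketch of the lexicalized word/character n-gram gender model (assumed TF-IDF weighting, toy data).
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

lexical_model = Pipeline([
    ("features", FeatureUnion([
        ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char_ngrams", TfidfVectorizer(analyzer="char", ngram_range=(3, 6))),
    ])),
    ("svm", LinearSVC()),
])

tweets = ["off to the gym, then coffee with @anna !!", "new blog post is up, link in bio :)"]
genders = ["F", "M"]                                    # arbitrary toy labels
lexical_model.fit(tweets, genders)
print(lexical_model.predict(["coffee first, blog later"]))
```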
We compare them to a model using multilingual embeddings BIBREF24 . Finally, we elicit human judgments both within language and across language. The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine. If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information. Lexical vs Bleached Models We use the scikit-learn BIBREF26 implementation of a linear SVM with default parameters (e.g., L2 regularization). We use 10-fold cross validation for all in-language experiments. For the cross-lingual experiments, we train on all available source language data and test on all target language data. For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign BIBREF17 (word 1-2 grams and character 3-6 grams). For the multilingual embeddings model we use the mean embedding representation from the system of BIBREF27 and add max, std and coverage features. We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary BIBREF28 . The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%). This results in an embedding space with a vocabulary size of 16M word types. All code is available at https://github.com/bplank/bleaching-text. For the bleached experiments, we ran models with each feature set separately. In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages. We tuned the INLINEFORM0 -gram size of this model through in-language cross-validation, finding that INLINEFORM1 performs best. When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (Avg), and accuracy obtained when training on the concatenation of all languages but the target one (All). The latter setting is also used for the embeddings model. We report accuracy for all experiments. Table TABREF13 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting. Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages. The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%). If we go across language, the lexical approaches break down (overall to 53.7% for Lex Avg/56.3% for All), except for Portuguese and Spanish, thanks to their similarities (see Table TABREF16 for pair-wise results). The closely-related-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language. The same holds for the multilingual embeddings model. On average it reaches an accuracy of 59.8%. The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES INLINEFORM0 PT and PT INLINEFORM1 ES are the highest. 
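To make the abstract features more concrete, the sketch below shows plausible versions of a few bleaching transformations (shape, vowel/consonant pattern, punctuation, length). The exact transformation definitions are not spelled out in this excerpt, so the specific rules below are assumptions for illustration only.

```python
# Sketch of possible bleaching transformations (the exact rules are assumptions).
import re

def shape(token):
    """Map characters to case/digit/other classes, e.g. 'Hello!' -> 'ULLLLO'."""
    out = []
    for ch in token:
        if ch.isupper():   out.append("U")
        elif ch.islower(): out.append("L")
        elif ch.isdigit(): out.append("D")
        else:              out.append("O")
    return "".join(out)

def vowels(token):
    """Map letters to vowel/consonant classes, e.g. 'hello' -> 'CVCCV'."""
    return "".join("V" if ch.lower() in "aeiou" else "C" if ch.isalpha() else "O"
                   for ch in token)

def punct_a(token):
    """Keep punctuation, replace each alphanumeric span with a single marker."""
    return re.sub(r"\w+", "W", token)

def bleach(tweet):
    tokens = tweet.split()
    return {
        "shape":  " ".join(shape(t) for t in tokens),
        "vowels": " ".join(vowels(t) for t in tokens),
        "punct":  " ".join(punct_a(t) for t in tokens),
        "length": " ".join(str(len(t)) for t in tokens),
    }

print(bleach("Check this out!! :) http://t.co/xyz"))
```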
Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table TABREF13 providing number of users per language). The abstract features fare surprisingly well and work a lot better across languages. The performance is on average 6% higher across all languages (57.9% for Avg, 63.9% for All) in comparison to their lexicalized counterparts, where Abs All results in the overall best model. For Spanish, the multilingual embedding model clearly outperforms Abs. However, the approach requires large Twitter-specific embeddings. For our Abs model, if we investigate predictive features over all languages, cf. Table TABREF19 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users. Quotes, question marks and length features, for example, appear to be more predictive of male users. Human Evaluation We experimented with three different conditions, one within language and two across language. For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender. In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets. In both cases, the participants declared to have no prior knowledge of the target language. For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets. The Dutch speakers who participated in the two experiments are distinct individuals. Participants were informed of the experiment's goal. Their identity is anonymized in the data. We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution. Each user was represented by twenty tweets. The answer key (F/M) order was randomized. For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user. Inter-annotator agreement for the tasks was measured via Fleiss kappa ( INLINEFORM0 ), and was higher for the in-language experiment ( INLINEFORM1 ) than for the cross-language tasks (NL INLINEFORM2 PT: INLINEFORM3 ; FR INLINEFORM4 NL: INLINEFORM5 ). Table TABREF22 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users. Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section SECREF14 ). First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of BIBREF6 , who report an accuracy of 75% on English. Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup). One explanation for this might lie in an observation by BIBREF6 , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns. Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table TABREF13 ). We can also observe that the amount of information available to represent a user influences system's performance. 
Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points. This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams. The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy. In the cross-language setting, the picture is very different. Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time. This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them. Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores. Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users. Few studies investigate deeper syntactic information BIBREF9 , BIBREF2 or non-linguistic input, e.g., language-independent clues such as visual BIBREF29 or network information BIBREF3 , BIBREF5 , BIBREF19 . A related angle is cross-genre profiling. In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on BIBREF30 , BIBREF31 , BIBREF32 . Lexical bias has been shown to affect in-language human gender prediction, too. BIBREF6 found that people tend to rely too much on stereotypical lexical indicators, while BIBREF13 show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex. Our features abstract away from such lexical cues while retaining predictive signal. Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1). However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate. Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2). We are well aware that we are testing our cross-language bleached models in the context of closely related languages. While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless. Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features. In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon. Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average. Acknowledgments We would like to thank the three anonymous reviewers and our colleagues for their useful feedback on earlier versions of this paper. Furthermore, we are grateful to Chloé Braud for helping with the French human evaluation part. We would like to thank all of our human participants.
average accuracy over each single-language model (Avg), and accuracy obtained when training on the concatenation of all languages but the target one (All)
aa979aed5a454b6705d0085ba2777859feb6fc62
aa979aed5a454b6705d0085ba2777859feb6fc62_0
Q: Do they report results only on English datasets? Text: Introduction Depression is a leading contributor to the burden of disability worldwideBIBREF0, BIBREF1, with some evidence that disability attributed to depression is rising, particularly among youthBIBREF2, BIBREF3. A key challenge in reducing the prevalence of depression has been that it is often under-recognizedBIBREF4 as well as under-treatedBIBREF5. Cognitive-behavioral therapy (CBT), is the most widely researched psychotherapy for depression. It is equivalent to antidepressant medications in its short-term efficacy and evidences superior outcomes in the long-termBIBREF6, BIBREF7. The cognitive theory underlying CBT argues that the ways in which individuals process and interpret information about themselves and their world is directly related to the onset, maintenance, and recurrence of their depressionBIBREF8, BIBREF9. This model is consistent with information processing accounts of mood regulationBIBREF10 and its dynamicsBIBREF11, as well as basic research that supports the role of cognitive reappraisal and language in emotion regulationBIBREF12, BIBREF13, BIBREF14, BIBREF15. In CBT, therapists work with their clients to identify depressogenic thinking patterns by identifying lexical or verbal markers of rigid, distorted, or overly negative interpretationsBIBREF16, BIBREF17. For example, statements that include “should” or “must” are often challenged as reflecting overly rigid rules about the world (“I shouldn't be lazy”, “I must never fail”). This process often entails a series of conversations with the client to uncover and address statements that reflect these so-called Cogntive Distortions (CD). The idea that language is predictive of depression is supported by data-driven approaches detecting depression from various lexical markers including the use of language to describe negative emotionsBIBREF18, BIBREF19, the use of first-person pronounsBIBREF20, BIBREF21, BIBREF22, BIBREF23, and mentions of common symptomsBIBREF24. Machine learning approaches have been shown to successfully predict whether Facebook users suffer from depressionBIBREF25, BIBREF26, identifying the most useful lexical markers to render a prediction. These results, while useful for prediction and the detection of depression, do not offer insights into the cognitive dynamics of the disease pattern, nor its relationship to language, which is crucial in developing treatments and interventions. Here, we emphloy a theory-driven approach to studying depressive language on Twitter. Rather than attemphting to extract relevant text features from text data, e.g. “sleep”, “health”, or other mental health related features, we define a clinical lexicon of 241 n-gramsBIBREF27 that a panel of clinical psychologists deemed to form a schema involved in the expression of a particular type of distorted thinking according to CBT theory and practice. For example, “I will never _” would be implicated in the expression of a cognitive distortions such as Catastrophizing or Fortune-telling, whereas “I am a _” would be used to express a Labeling and Mislabeling distortion. We then compare the longitudinal prevalence of this set of Cognitive Distortion Schemata (CDS) in the language of a large cohort of depressed individuals vs. a random sample on social media (Twitter). Our results indicate significantly higher prevalence of most types of CDS in the Depressed cohort, both at the within-subjects and between-groups level. 
Particularly CDS in the Personalizing and Emotional Reasoning types occur approximately 2.3 times more frequently in the online language of Depressed users. Our results are robust to changes in our user sample, our choice of CDS n-grams, text sentiment, and the known propensity of Depressed individuals to make self-referential statements. Introduction ::: Cognitive distortion types and n-gram schemata Aaron T. Beck introduced the concept of cognitive distortions to characterize the thinking of individuals with depressionBIBREF28, BIBREF29. Subsequently, other clinicians expanded on his typology of distortionsBIBREF30, including most recently his daughter, clinical psychologist and CBT expert, Judith BeckBIBREF31. We drew upon these latest lists to identify 12 types of cognitive distortions that may characterize the thinking of individuals who are depressed. We defined 241 CDS n-grams in total, each expressing at least 1 type of cognitive distortion (see Appendix Table 7). The schemata in each category were formulated to capture the “minimal semantic building blocks” of expressing distorted thinking for the particular type, avoiding expressions that are specific to a depression-related topics, such as poor sleep or health issues. For example, the 3-gram “I am a” was included as a building block of expressing Labeling and Mislabeling, because it would be a highly likely (and nearly unavoidable) n-gram to express many self-referential (“I”) expressions of labeling (“am a”) (for an example see table:CDdefinitions). Where possible, higher-order n-grams were chosen to capture as much of the semantic structure of one or more distorted schemata as possible, e.g. the 3-gram “everyone will believe” captures both Overgeneralizing and Mindreading. We did include 1-grams such as “nobody” and “everybody” in spite of their prevalence in common language, since they strongly correspond to the expression of Dichotomous Reasoning. table:CDclasses shows the number of schemata per category in our CDS set along with the average n-gram size, and a number of relevant grammatical features. The complete set of CD schemata is provided in Table 7 in the Appendix. We note that a significant sub-set of the CDS do not occur in the Twitter content for both cohorts (see table:CDclasses: $N_\exists $), indicating that parts of our set of CDS are “lexically exhaustive” with respect to capturing the major modes of CD expression in natural language. Introduction ::: Depressed and random sample We identified a cohort of social media users that had self-reported a clinical diagnosis of depression by posting a variant of the explicit statement “I was diagnosed with depression” (see “Materials and Methods”). To make sure we were only including truly self-referential statements of diagnosis of depression, 3 of the authors manually removed quotes, retweets, jokes, and external references. Note that we exclude all diagnosis statements themselves from our analysis, including all tweets that contain the term “diagnos” and “depress”. We also examine the sensitivity of our results to the propensity of this cohort to make similar self-referential statements (see “Absence of personal pronoun effect.”) With this final set of adjudicated diagnosis tweets, we harvested the maximum number of tweets allowed by the Twitter API (the most recent 3200) for each individual, resulting in a sample of 1,207 users and their 1,759,644 tweets (ranging from May 2008 to September 2018). 
We refer to this cohort as “Depressed”, but acknowledge that we have no independent confirmation of their present mental health state. We also established a baseline sample of randomly chosen individuals with a similar distribution of account creation dates as the Depressed cohort to account for changes in user behavior and platform effects. Here too we exclude all tweets that contain the terms “diagnos” and “depress” from subsequent analysis. Our “Random Sample” cohort contains 8,791 individuals and a total 8,498,574 tweets (see “Materials and Methods”). Results We first compare the within-subject prevalence of the established set of CDS between the Depressed and Random Sample cohorts. For each individual we count how many of their tweets contained any of the 241 CDS and divide it by their total number of tweets, resulting in an individual within-subject CD prevalence (see “Materials and Methods”). The density distribution of individual prevalence values can then be compared between Depressed and Random Sample individuals as shown in fig:User Ratio. We restrict this analysis to individuals with at least 150 tweets so that we have sufficient data to determine prevalence reliably, but retain all individuals in subsequent between-groups analyses since the latter does not require the calculation of within-subject prevalence values. We observe that the distribution of within-subject CDS prevalence is shifted significantly to the right for the Depressed cohort relative to that of the Random Sample, indicating that individuals in the Depressed cohort express significantly more CDS. Note that $0.487$% of the Random Sample individuals have no tweets with CDS whereas the Depressed sample has no cases without CDS. Results from a Two-Sample Kolmogorov–Smirnov test ($p < 0.0001$) indicate that we can reject the null hypothesis that the two samples are drawn from the same distribution. Furthermore, we conduct a between-groups analysis to compare the prevalence of CDS between the Depressed vs. the Random Sample cohort. We do so by calculating the Prevalence of CDS for all tweets from each cohort and calculating the Prevalence Ratio ($PR$) between the two cohorts (see Materials and Methods “Prevalence Ratio”). A Prevalence Ratio significantly larger than 1 indicates that the presence of CDS in the tweets written by the Depressed cohort is greater than the Random Sample cohort. To assess the sensitivity of our results to changes in our cohort samples, we repeatedly calculate the estimated $PR$ value over 10,000 random re-samples (with replacement) of both groups, resulting in a distribution of PR values shown in fig:All CD Categories (see Materials and Methods “Bootstrapping”). Note, Prevalence Ratios express the relative difference between the 2 cohorts, not the absolute difference which is provided in Appendix Table 6. We observe in fig:All CD Categories that the median of this distribution of PR values is significantly larger than 1 (and its 95% confidence intervals does not include 1), indicating that we find a statistically significant higher prevalence of CDS in the Depressed cohort ($1.2\times $) than in the Random Sample, and that this result is robust to random changes in our cohorts. The between-groups PR values shown in fig:All CD Categories do not reflect specific distortion types; all CDS are equally and independently matched to all tweets. 
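A minimal sketch of the within-subject prevalence computation and the two-sample Kolmogorov-Smirnov comparison described above follows. The tiny CDS list and toy timelines are placeholders for the real 241-schema set and the harvested user data, and the simple substring matching is an assumption.

```python
# Sketch of within-subject CDS prevalence and the two-sample K-S test (toy data).
from scipy.stats import ks_2samp

CDS = ["i am a", "i will never", "nobody", "everybody"]          # placeholder for the full 241-schema set

def contains_cds(tweet, cds=CDS):
    text = tweet.lower()
    return any(schema in text for schema in cds)

def within_subject_prevalence(timeline):
    """Fraction of a user's tweets containing at least one CDS."""
    return sum(contains_cds(t) for t in timeline) / len(timeline)

# Toy timelines standing in for the harvested user data (>= 150 tweets per user).
depressed_timelines = [["i am a mess today", "i will never get this done", "fine day"] * 50] * 5
random_timelines    = [["great run this morning", "nobody beats this coffee", "off to work"] * 50] * 5

dep_prev  = [within_subject_prevalence(tl) for tl in depressed_timelines if len(tl) >= 150]
rand_prev = [within_subject_prevalence(tl) for tl in random_timelines    if len(tl) >= 150]
stat, p_value = ks_2samp(dep_prev, rand_prev)                    # H0: both samples share one distribution
print(stat, p_value)
```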
Total CDS prevalence over all tweets is 21.8% and 18.407% for the Depressed and Random Sample cohort respectively but differs significantly for each CD type (See Appendix Table 5). It is reasonable to expect that the different types of CDS may differ in their prevalence between our cohorts. We therefore repeat the above analysis, with CDS separated by CD type (see table:CDclasses). As shown in table:CDCategoryPrevalence and fig:Separated CD Categories, the prevalence of CDS is significantly higher for nearly all CD types in the tweets of the Depressed cohort than those of the Random Sample with Prevalence Ratio values ranging from $2.4\times $ to $1.1\times $, with the exception of Catastrophizing and Fortune-telling, with the latter not producing a PR significantly different from parity. The CD types Personalizing and Emotional Reasoning have the greatest PR values of $2.4\times $ and $2.3\times $, followed by Overgeneralizing ($1.6\times $), Mental Filtering ($1.5\times $), Labeling and Mislabeling ($1.3\times $), and Disqualifying the positive ($1.3\times $). The CD types Mind Reading, Should Statements, and Magnification and Minimization have lower yet significant PR values of $1.1\times $. table:CDclasses “Significant N” shows the number and ratios of schemata for each CD type that have PR values significantly different from parity. The PR individual CDS n-grams can differ significantly as well. Appendix Fig. 6 shows the contributions of each individual CDS n-gram separately. table:top10CDS shows the CDS with the individually highest and lowest PR values to illustrate the CDS that are most prevalent in the Depressed and Random Sample cohort respectively. As shown, the highest ranked CDS for the Depressed cohort belong to the Mindreading, Emotional Reasoning, and Personalizing type, whereas the highest ranked CDS for the Random Sample belong to the non-reflexive Mindreading and Fortune-telling type. Results ::: Absence of sentiment effect Previous research has shown that the language of depressed individuals is less positive (lower text valence) and contains higher levels of self-referential languageBIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF18, BIBREF36. To determine the degree to which our results can be explained by text sentiment or self-referential statements instead of distorted thinking, we examine the valence loadings of our collection of tweets and CDS, and reproduce our results with and without CDS containing self-referential statements. First, we determine the valence values of each CDS n-gram in our set using the VADER sentiment analysis toolBIBREF37 which in a recent survey was shown to outperform other available sentiment analysis tools for social media languageBIBREF38. VADER is particularly appropriate for this use, since its sentiment ratings take into account grammatical context, such as negation, hedging, and boosting. We find that 75.9% of our CDS have either no sentiment-loaded content or are rated to have zero valence (neutral sentiment scores). The average valence rating of all CDS is $-0.05 (N=241)$ on a scale from $-1.0$ to $+1.0$. fig:Vader ScoresA shows the VADER sentiment distribution of only CDS n-grams with non-zero ratings. Here we observe only a slight negative skew of CDS sentiment for this small minority of CDS n-grams (24.1%). Furthermore, as shown in fig:Vader ScoresB, the sentiment distributions of all tweets for the Depressed and Random Sample cohorts are both skewed towards positive sentiment (right side of distribution). 
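For reference, VADER compound scores of the kind used above for CDS n-grams and tweets can be obtained as follows; this is a usage sketch with illustrative example strings, not the authors' scoring script.

```python
# Sketch of VADER compound scoring as applied to CDS n-grams and tweets.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for text in ["I am a", "I will never", "nobody", "what a great day"]:
    compound = analyzer.polarity_scores(text)["compound"]        # valence in [-1.0, +1.0]
    print(f"{text!r}: {compound:+.3f}")
```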
This matches earlier findings that human language exhibits a so-called Polyanna effectBIBREF39, a near-universal phenomenon that skews human language towards positive valence. Surprisingly, we find no indications that the tweets of the Depressed cohort carry more negative valence than those of the Random Sample cohort. To the contrary, VADER sentiment ratings in the range $[0.70,1.00]$ seem to be slightly more prevalent among the tweets of the Depressed cohort (see fig:Vader ScoresB), possibly indicating an increased emotionality (higher levels of both negative and positive affect). One particular deviation in the sentiment range of $[0.40,0.45]$ was found to be uniquely associated with the Random Sample cohort using the “Face With Tears of Joy” emoji (VADER sentiment=0.4404) more often than the Depressed cohort. A two-sample K–S test allows us to reject the null-hypothesis that the two distributions are drawn from the same sample ($p<0.0001$). Combined, these findings strongly suggest that the higher prevalence of CDS in the language of the Depressed cohort can neither be attributed to a negative valence skew in the CDS set, nor the sentiment distribution of the tweets produced by either the Depressed and Random Sample cohorts. Results ::: Absence of personal pronoun effect Research has shown that First-Person Pronouns (FPP) are more prevalent in the language of depressed individualsBIBREF18, BIBREF21. Since many CDS contain FPPs (see table:CDclasses “Pronouns”), our results may to a degree reflect this phenomenon instead of the “distorted” nature of our CDS. To test the sensitivity of our results to the presence of FPPs in our set of CDS, we repeat our analysis entirely without CDS that contain the FPPs “I” (upper-case), “me”, “my”, “mine”, and “myself”. As shown in table:CDCategoryPrevalence: PR$_1$, we find that their removal does not significantly alter the observed effect. The respective confidence intervals resulting from our removal of FPP schemata change, but most overlap with those obtained from an analysis that includes the full set of CDS (see table:CDCategoryPrevalence: PR$_A$ vs table:CDCategoryPrevalence: PR$_1$). This demonstrates that our observations are not a product of the presence of first-person pronouns in our set of CDS. Note that we could not determine any values for Personalizing because its CDS all contain first-person pronouns (see Appendix Fig. 5). Results ::: Robustness to CDS changes To determine the sensitivity of our results to the particular choice of CDS, we re-calculated PR values between the Depressed and Random Sample cohorts, but instead of re-sampling our Depressed and Random Sample cohort, we randomly re-sampled (with replacement) the set of 241 CDS n-gram. The 95% CI of the resulting distribution of PR values then indicates how sensitive our results are to random changes of our CDS set. The results of this analysis are shown in table:CDCategoryPrevalence: PR$_C$. We observe slight changes in the dispersion of the resulting distribution of PR values, but the median values and 95% CIs remain largely unchanged. As before, the 95% CIs continue to exclude $1.000$ for all CD types, except Mindreading, Should Statements, Fortune-telling, and Catastrophizing, and we can continue to reject the null-hypothesis that PR values are similar between the Depressed and Random Sample cohort for nearly all CD types. 
Furthermore, as shown in table:CDCategoryPrevalence, the 95% CIs of PR$_C$ and PR$_A$ largely overlap across all CD types indicating our results are robust to random changes of our cohort samples as well as our CDS set. Discussion In an online sample of individuals, we emphloyed a theory-driven approach to measure linguistic markers that may indicate cognitive vulnerability to depression, according to CBT theory. We defined a set of Cognitive Distortion Schemata (CDS) that we grouped along 12 widely accepted types of distorted thinking and compared their prevalence between two cohorts of Twitter users: one of individuals who self-identified as having received a clinical diagnosis of depression and the other a similar random sample. As hypothesized, the Depressed cohort use significantly more CDS of distorted thinking in their online language than the Random Sample, particularly schemata associated with Personalizing and Emotional Reasoning. We observed significantly elevated levels of CDS across nearly all CD types, sometimes more than twice as much, but did not find a statistically significant elevated prevalance among the Depressed cohort for two specific types, namely Fortune-telling and Catastrophizing. This may be due to the difficulty of capturing these specific cognitive distortions in the form of a set of 1 to 5-grams as their expression in language can involve an interactive process of conversation and interpretation. Of note, our findings are not explained by the use of first-person pronouns or more negatively loaded language, both of which had been identified in past research as markers of depressed individuals. These results shed a light on how depression may affect public discourse on social media, but also reveals the degree to which depressogenic language is manifested in the colloquial language of social media platforms. This is of social relevance given that these platforms are specifically designed to propagate information through the social ties that connect individuals on a global scale. An advantage of studying theory-driven differences between the language of depressed and non-depressed individuals, as opposed to a purely data-driven or machine learning approach, is that we can explicitly use the principles underpinning CBT to understand the cognitive and lexical components that may shape depression. Cognitive behavioral therapists have developed a set of strategies to challenge the distorted thinking that is characteristic of depression. Preliminary findings suggest that specific language can be related to specific therapeutic practices and seems to be related to outcomesBIBREF40. These practices, however, have largely been shaped by a clinical understanding and not necessarily informed by objective measures of how patterns of language can determine the path of recovery. Our results suggest a path for mitigation and intervention, including applications that engage individuals suffering from mood disorders such as major depressive disorder via social media platforms and that challenge particular expressions and types of depressogenic language. Future characterizations of the relations between depressogenic language and mood may aid in the development of automated interventions (e.g., “chatbots”) or suggest promising targets for psychotherapy. Another approach that has shown promise in leveraging social media for the treatment of mental health problems involves “crowdsourcing” the responses to cognitively-distorted contentBIBREF41. 
Several limitations of our theory-driven approach should be considered. First, we rely on self-reported depression diagnoses on social media which have not been independently verified by a clinician. However, the potential inaccuracy of this inclusion criterion would reduce the observed effect sizes (PR values between cohorts) due to the larger heterogeneity of our cohorts. Consequently, our results are likely not an artifact of the accuracy of our inclusion criterion. Second, our lexicon of CDS was composed and approved by a panel of 9 experts who may have been only partially successful in capturing all n-grams used to express distorted ways of thinking. Nevertheless, a significant portion of CDS in our set did not occur in our collections of Twitter content, indicating the scope of our lexicon exceeds that of common online language. On a related note, the use of CDS n-grams implies that we measure distorted thinking by proxy, namely via language, and our observations may be therefore be affected by linguistic and cultural factors. Common idiosyncratic or idiomatic expressions may syntactically represent a distorted form of thinking, but no longer do in practice. For example, an expression such as “literally the worst” may be commonly emphloyed to express dismay, without necessarily involving the speaker experiencing a distorted mode of thinking. Third, both cohorts were sampled from Twitter, a leading social media platform, whose use may be associated with higher levels of psychopathology and reduced well-beingBIBREF42, BIBREF43, BIBREF44. We may thus be observing elevated or biased rates of distorted thinking in both cohorts as a result of platform effects. However, we report relative prevalence numbers with respect to a carefully construed random sample, which likely compensates for this effect. Furthermore, recent analysis indicates that representative samples with respect to psychological phenomena can be obtained from social media contentBIBREF45. This is an important discussion in computational social science that will continue to be investigated. Data-driven approaches that analyze natural language in real-time will continue to complement theory-driven work such as ours. Materials and Methods ::: Data and sample construction Using the Twitter Application Program Interface (API) and the IUNI OSoMeBIBREF46 (a service which provides searchable access to the Twitter “Gardenhose”, a 10% sample of all daily tweets), we search for tweets that matched both “diagnos*” and “depress*.” The resulting set of tweets are then filtered for matching the expressions “i”, “diagnos*”, “depres*” in that order in a case-insensitive manner allowing insertions to match the greatest variety of diagnosis statements, e.g. a tweet that states “I was in fact just diagnosed with clinical depression” would match. Finally, to ensure we are only including true self-referential statements of a depression diagnosis, a team of 3 experts manually removed quotes, jokes, and external references. For each qualifying diagnosis tweet we retrieve the timeline of the corresponding Twitter user using the Twitter user_timeline API endpoint . Subsequently, we remove all non-English tweets (Twitter API machine-detected“lang” field), all retweets, and tweets that contain “diagnos*” or “depress*”, but not a valid diagnosis statement. The resulting Depressed cohort contains 1,207 individuals and 1,759,644 tweets ranging from from May 2008 to September 2018. 
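A rough sketch of the diagnosis-statement filter described above is given below. The regular expression approximates the described matching (case-insensitive, "i" ... "diagnos*" ... "depres*" in order, allowing insertions); it is an assumption, not the exact pattern used by the authors, and the second example shows why manual adjudication was still required.

```python
# Approximate filter for self-reported diagnosis statements (the exact pattern is an assumption).
import re

DIAGNOSIS_RE = re.compile(r"\bi\b.*\bdiagnos\w*\b.*\bdepres\w*", re.IGNORECASE | re.DOTALL)

def looks_like_diagnosis(tweet_text):
    return bool(DIAGNOSIS_RE.search(tweet_text))

print(looks_like_diagnosis("I was in fact just diagnosed with clinical depression"))   # True
print(looks_like_diagnosis("i think my dog was diagnosed with depression lol"))        # True -> needs manual review
```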
To compare CDS prevalence rates of the Depressed cohort to a baseline, we construct a Random Sample cohort of individuals. To do so, we collect a large sample of random tweets in 3 weeks (i.e. September 1-8, 2017, March 1-8, 2018, and September 1-8, 2018) from the IUNI OSOMEBIBREF46. We extract all Twitter user identifiers from these tweets (N=588,356), and retain only those that specified their geographical location and were not already included in our Depressed cohort. To equalize platform, interface, and behavioral changes over time, we select a sub-sample of these individuals such that the distribution of their account creation dates matches those of the Depressed cohort, resulting in an initial set of 9,525 random individuals. Finally, we harvested the Twitter timelines of these users and filtered the obtained data in the same way as described for the Depressed cohort. Since some user data was found to be no longer publicly available and others have no tweets left after our filters, our final Random Sample Cohort consists of 8,791 individuals and a total 8,498,574 tweets. The code and data used in this analysis are freely available at https://github.com/kbathina/CDs_Depressed_Twitter. Upon reasonable request we will provide all Twitter user IDs and tweet IDs to reproduce our results. Materials and Methods ::: Prevalence Ratios For each Twitter user $u$ in our sample, we retrieved a timeline $T_u$ of their time-ordered $k$ most recent tweets, $T_u=\lbrace t_1, t_2, \cdots , t_k\rbrace $. We also defined a set $C = \lbrace c_1, c_2, \cdots , c_N\rbrace $ of n-grams where $N=241$ (see table:CDclasses) with varying $n \in [1,5]$ number of terms. The elements of set C are intended to represent the lexical building blocks of expressing cognitive distortions (see table:CDclasses and Appendix Table 7). We introduce a CDS matching function $\mathcal {F}_C(t) \rightarrow \lbrace 0,1\rbrace $, which maps each individual tweet $t$ to either 0 or 1 according to whether a tweet $t$ contains one or more of the schemata in set $C$. Note that the range of $\mathcal {F}_C(t)$ is binary, thus a tweet that contains more than one CDS still counts as 1. The within-subject prevalence of tweets for individual $u$ is defined as the ratio of tweets that contain a CDS in $C$ over all tweets in their timeline $T_u$: Our sample is separated into two cohorts: one of 1,207 Depressed and another of 8,791 Random Sample individuals. We denote the set of all individuals in the depressed cohort $D = \lbrace u_1, u_2, \cdots , u_{1207}\rbrace $ and random sample cohort $R = \lbrace u_1, u_2, \cdots , u_{8791}\rbrace $. Hence, the sets of all tweets written by users in the Depressed and Random Sample cohorts are defined as: We can then define the Prevalence ($P$) of tweets with CDS $C$ for each the Depressed ($D$) and Random Sample ($RS$) cohorts as follows: or, informally, the ratio of tweets that contain any CDS over all tweets written by the individuals of that cohort. Consequently, the Prevalence Ratio ($PR$) of CDS in set $C$ between the two cohorts $D$ and $R$, denoted $PR_C(D,R)$, is defined simply as the ratio of their respective CDS prevalence $P_C(T_D)$ and $P_C(T_R)$ in the tweet sets $T_D$ and $T_r$ respectively: If $PR_C(D,R) \simeq 1$ the prevalence of CDS in the tweets of the depression cohort are comparable to their prevalence in the tweets of the random sample. However, any value $PR_C(D,R) \ll 1$ or $PR_C(D,R) \gg 1$ may indicate a significantly higher prevalence in each respective cohort. 
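The between-groups prevalence ratio just defined, together with the cohort re-sampling described in the Bootstrapping subsection that follows, can be sketched as below. The substring matching stands in for $\mathcal {F}_C(t)$, and the cohort inputs are placeholders (each user is represented as a list of tweets).

```python
# Sketch of the between-groups prevalence ratio PR_C(D, R) and its bootstrap estimate.
import numpy as np

def prevalence(tweets, cds):
    """P_C(T): fraction of tweets t with F_C(t) = 1, i.e. containing at least one CDS."""
    return sum(any(s in t.lower() for s in cds) for t in tweets) / len(tweets)

def prevalence_ratio(depressed_tweets, random_tweets, cds):
    """PR_C(D, R) = P_C(T_D) / P_C(T_R)."""
    return prevalence(depressed_tweets, cds) / prevalence(random_tweets, cds)

def bootstrap_pr(depressed_users, random_users, cds, B=10000, seed=0):
    """Re-sample individuals with replacement (each user is a list of tweets) B times,
    returning the 2.5th, 50th and 97.5th percentiles of the resulting PR values."""
    rng = np.random.default_rng(seed)
    prs = []
    for _ in range(B):
        d_idx = rng.integers(len(depressed_users), size=len(depressed_users))
        r_idx = rng.integers(len(random_users), size=len(random_users))
        d_tweets = [t for i in d_idx for t in depressed_users[i]]
        r_tweets = [t for i in r_idx for t in random_users[i]]
        prs.append(prevalence_ratio(d_tweets, r_tweets, cds))
    return np.percentile(prs, [2.5, 50, 97.5])
```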
Here we use $\gg 1$ and $\ll 1$ to signify that a PR value is significantly higher or lower than 1 respectively, which we assess by whether its 95% CI includes 1 or not (see Bootstrapping below). Materials and Methods ::: Bootstrapping estimates The estimated Prevalence and Prevalence Ratio can vary with the particular composition of either our set $C$ (CDS n-grams) or the set of individuals in our Depressed and Random Sample cohorts, respectively $D$ and $R$. We verify the reliability of our results by randomly re-sampling either $C$ or both $D$ and $R$, with replacement. This is repeated $B = 10000$ times, leading to a set of re-sampled CDS sets or cohort samples. Each of these $B$ re-samples of either (1) the set of CDS $C$ or (2) the sets $D$ and $R$ of all individuals in our Depressed and Random Sample cohorts results in $B$ corresponding Prevalence or Prevalence Ratio values: The distributions of $P^*$ and $PR^*$ are then characterized by their median ($\mu _{50}$) and their 95% confidence interval ($[\mu _{2.5}, \mu _{97.5}]$). A 95% confidence interval of a PR that does not contain 1 is held to indicate a significant difference in prevalence between the two cohorts. Acknowledgements We thank Luis M. Rocha for his feedback on the general methodology and terminology, as well as Drs. Keith Dobson, Rob DeRubeis, Christian Webb, Stephan Hoffman, Nikolaos Kazantzis, Judy Garber, and Robin Jarrett for their feedback on the content of our list of CDS. Johan Bollen thanks NSF grant #SMA/SME1636636, the Indiana University "Grand Challenges - Prepared for Environmental Change" PR-IUB grant, Wageningen University, and the ISI Foundation for their support.
Yes
2cfcc5864a30259fd35f1cc035fab956802c1c5b
2cfcc5864a30259fd35f1cc035fab956802c1c5b_0
Q: What datasets or tasks do they conduct experiments on? Text: Introduction In NLP, Neural language model pre-training has shown to be effective for improving many tasks BIBREF0 , BIBREF1 . Transformer BIBREF2 is based solely on the attention mechanism, and dispensing with recurrent and convolutions entirely. At present, this model has received extensive attentions and plays an key role in many neural language models, such as BERT BIBREF0 , GPT BIBREF3 and Universal Transformer BIBREF4 . However, in Transformer based model, a lot of model parameters may cause problems in training and deploying these parameters in a limited resource setting. Thus, the compression of large neural pre-training language model has been an essential problem in NLP research. In literature, there are some compression methods BIBREF5 , BIBREF6 , BIBREF7 proposed. When the vocabulary is large, the corresponding weight matrices can be enormous. Tensorized embedding (TE) BIBREF5 uses the way of tensor-train BIBREF8 to compress the embedding layers in Transformer-XL BIBREF9 . In TE BIBREF5 , researchers only study the compression of input embedding layers, rathar than the attention layer. Recently, Block-Term Tensor Decomposition(BTD) BIBREF10 is used to compress recurrent neural networks (RNNs) BIBREF6 . Ye et al. BIBREF6 propose a compact flexible structure to deal with the large number of model parameters instead by high dimensional inputs in training recurrent neural networks (RNNs). This method greatly reduces the parameters of RNNs and improves their training efficiency. Still, the model only considers the input layer compression by the idea of low-rank approximation. On the other hand, some methods BIBREF7 , BIBREF11 aim to develop a specific structure on its weight matrices and are successful in compressing the pre-trained models. However, the new structure after compressing can not be integrated into the model. In Transformer, the multi-head attention is a key part and it is constructed by a large number of parameters. Specifically, Ashish et.al BIBREF2 compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$ , while the keys and values are also packed together into matrices $K$ and $V$ , respectively. The attention function then adopts a no-linear function $softmax$ over three matrices $Q$ , $K$ and $V$ . There are two challenges to find a high-quality compression method to compress the multi-head attention in Transformer. First, the self-attention function in Transformer is a non-linear function, which makes it difficult to compress. In order to address this challenge, we first prove that the output of the attention function of the self-attention model BIBREF2 can be linearly represented by a group of orthonormal base vectors. $Q$ , $K$ and $V$ can be considered as factor matrices. Then, by initializing a low rank core tensor, we use Tucker-decomposition BIBREF12 , BIBREF13 to reconstruct a new attention representation. In order to construct the multi-head mechanism and compress the model, we use the method of Block-Term Tensor Decomposition (BTD), which is a combination of CP decomposition BIBREF14 and Tucker decomposition BIBREF12 . The difference is that three factor matrices $Q,~K$ and $V$ are shared in constructing each 3-order block tensor. This process can lead to reduce many parameters. The second challenge is that the attention model after compressing can not be directly integrated into the encoder and decoder framework of Transformer BIBREF2 , BIBREF9 . 
In order to address this challenge, there are three steps as follows. First, the average of each block tensor can be computed; Second, some matrices can be given by tensor split. Third, the concatenation of these matrices can serve as the input to the next layer network in Transformer. After that, it can be integrated into the encoder and decoder framework of Transformer BIBREF2 , BIBREF9 and trained end-to-end. Moreover, we also prove that the 3-order tensor can reconstruct the scaled dot-product attention in Transformer by a sum on a particular dimension. Our method combines two ideas which are the low-rank approximation and parameters sharing at the same time. Therefore, it achieves the higher compression ratios. Although the self-attention (i.e., scaled dot-product attention) in Transformer can be reconstructed, we do not consider reconstructing it and choose to split the 3-order tensor (the output of Multi-linear attention) which is helpful for improving the accuracy in experiments. Our major contributions of this paper are as follows: In order to validate the benefits of our model, we test it on two NLP tasks, namely language modeling and neural machine translation. In our experiments, the multi-head attention can be replaced by the proposed model, namely multi-linear attention. We have observed that the standard multi-head attention can be compressed with higher compression ratios on One-Billion dtaset. As a result, we show that multi-linear attention not only considerably reduces the number of parameters, but also achieve promising experiments results, especially in language modeling tasks. Preliminaries Multi-linear attention is carried out in this paper. The analysis of Multi-linear attention relies on these concepts and results from the field of tensor decomositon and multi-head attention. We cover below in Section "Related Work" basic background on Block-Term tensor decomposition BIBREF10 . Then, we describe in Section "Multi-head Attention" multi-head attention BIBREF2 . Tensor and Block-Term Tensor Decomposition Tensor We use the Euler script letter $\mathcal {A}$ to denote a tensor which can be thought of as a multi-array. Thereby a vector and a matrix is a 1-order tensor and 2-order tensor, respectively. The element in a $n$ -order tensor is denoted as $\mathcal {A}_{d_1,\ldots ,d_n}$ . In the geometric representation of a tensor, 3-order tensor can be representation by a cube. After that, there is a related concept named $tensor~slice$ that will be used in this paper. Tensor and some other related concepts are shows in Supplementary Materials A. Block-Term Tensor Decomposition (BTD) Block-Term tensor decomposition is a combination of CP decomposition BIBREF14 and Tucker decomposition BIBREF12 . Given a $n$ -order tensor $\mathcal {A} \in \mathbb {R}^{d_1\times \ldots \times d_n}$ . A high-order tensor can be decomposed into $P$ block terms by the method named BTD. ${\bullet }_z$ is denoted as the tenor-tensor product on the $z$ - $th$ order BIBREF15 and $z\in \lbrace 1,\ldots ,d\rbrace $ . Each term contains ${\bullet }_z$ between a core tensor $\mathcal {G}_i \in \mathbb {R}^{R_1 \times \ldots \times R_d}$ and $d$ factor matrices $\mathcal {A} \in \mathbb {R}^{d_1\times \ldots \times d_n}$0 , where $\mathcal {A} \in \mathbb {R}^{d_1\times \ldots \times d_n}$1 and $\mathcal {A} \in \mathbb {R}^{d_1\times \ldots \times d_n}$2 . 
The formulation of the BTD decomposition is as follows: $$\mathcal {A} = \sum _{i=1}^{P} \mathcal {G}_i {\bullet }_1 \mathcal {X}_i^{(1)} {\bullet }_2 \mathcal {X}_i^{(2)} {\bullet }_3 \ldots {\bullet }_d \mathcal {X}_i^{(d)}$$ (Eq. 5) where $P$ is the CP rank and $d$ is the core order. In our work, we consider 3-order tensors. Figure 1 demonstrates how a 3-order tensor $\mathcal {A}$ can be decomposed into $P$ block terms. Multi-head Attention In the Transformer, the attention function is named “Scaled Dot-Product Attention”. In practice, the Transformer BIBREF2 processes the queries, keys and values as matrices $Q$ , $K$ , and $V$ respectively. The attention function can be written as follows: $$Attention(Q,K,V) = softmax(\frac{QK^{T}}{\sqrt{d}})V$$ (Eq. 8) where $d$ is the number of columns of $Q$ and $K$ . The models in BIBREF2 , BIBREF0 , BIBREF9 all use multi-head attention, as introduced in BIBREF2 , $$\begin{aligned} MultiHeadAttention(Q,K,V) &= Concat(head_1,\ldots ,head_k)W^{O}\\ where~head_i &= Attention(QW^{Q}_{i}, KW^{K}_{i},VW^{V}_{i}) \end{aligned}$$ (Eq. 9) where the matrices $W^{Q}_{i}$ and $W^{K}_{i}\in \mathbb {R}^{d_{model}\times {d}}$ , $W_{i}^{V} \in \mathbb {R}^{d_{model}\times {d}}$ and $W^O \in \mathbb {R}^{hd_v\times d_{model}}$ . In practice, $d_v$ is equal to $d$ . In BIBREF2 , multiple groups of parameters ( $W_i^{Q}$ , $W_i^{K}$ and $W_i^{V}$ ) are used, which results in a large number of redundant parameters. Tensorized Transformer In this section, we first build a Single-block attention, shown in Figure 2 (left), based on Tucker decomposition, a low-rank decomposition method. In this process, we prove that the self-attention function in the Transformer can be represented by a linear function, i.e., a linear combination of a set of basis vectors. In order to compress the multi-head mechanism, we propose a multi-linear attention constructed by a Block-Term tensor decomposition. This attention uses the idea of parameter sharing, i.e., sharing factor matrices across multiple blocks, as shown in Figure 2 (right). After that, the compression ratios and the relatively lower complexity are analyzed. Single-block Attention by Tucker Decomposition Before building the Single-block attention, it is necessary to state Theorem 3.1, which is closely related to the properties of the Single-block attention function based on Tucker decomposition BIBREF12 . Theorem 3.1 Let $\mathbf {e}_1, \ldots , \mathbf {e}_n$ be basis vectors from the vector space $S$ , and assume that $\mathbf {e}_1,\ldots ,\mathbf {e}_n$ are linearly independent. The output of the attention function in Eq. 8 can be represented by a linear combination of this set of basis vectors: $$Attention(Q,K,V) = (\mathbf {e}_1, \ldots , \mathbf {e}_n)M,$$ (Eq. 13) where $M \in \mathbb {R}^{n\times d}$ is a coefficient matrix and $d$ is the dimension of the matrices $Q,~K$ , and $V$ . The proof can be found in Supplementary Materials B. Figure 2 (left) gives a schematic diagram of the Single-block attention. First, we assume that the query, key and value can be mapped into three factor matrices, each composed of a group of orthogonal basis vectors. These three factor matrices are $Q$ , $K$ and $V$ . After that, we can construct a new attention (i.e., Single-block attention) by initializing a trainable 3-order diagonal core tensor $\mathcal {G}$ .
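Before turning to the tensorized construction, the following is a minimal PyTorch sketch of the standard scaled dot-product and multi-head attention of Eq. 8 and Eq. 9, which the later sections aim to compress. Layer sizes, module names and the per-head projection layout are illustrative defaults, not the paper's implementation.

```python
# Minimal PyTorch sketch of Eq. 8 and Eq. 9; sizes and names are illustrative.
import math
import torch
import torch.nn as nn

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (batch, seq_len, d)
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)   # (batch, N, N)
    return torch.softmax(scores, dim=-1) @ V          # (batch, N, d)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=512, h=8):
        super().__init__()
        self.h, self.d = h, d_model // h
        # one (W_i^Q, W_i^K, W_i^V) triple per head -> many redundant parameters
        self.W_q = nn.ModuleList([nn.Linear(d_model, self.d, bias=False) for _ in range(h)])
        self.W_k = nn.ModuleList([nn.Linear(d_model, self.d, bias=False) for _ in range(h)])
        self.W_v = nn.ModuleList([nn.Linear(d_model, self.d, bias=False) for _ in range(h)])
        self.W_o = nn.Linear(h * self.d, d_model, bias=False)

    def forward(self, Q, K, V):
        heads = [scaled_dot_product_attention(wq(Q), wk(K), wv(V))
                 for wq, wk, wv in zip(self.W_q, self.W_k, self.W_v)]
        return self.W_o(torch.cat(heads, dim=-1))
```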
In Figure 2 (left), $R$ is the rank of the tensor, $N$ is the length of a sequence, and $d$ is the dimension of the matrices. The Single-block attention function can be computed based on Tucker decomposition as follows: $$\begin{aligned} Atten_{TD}(\mathcal {G};Q,K,V) =& \mathcal {G} {\bullet }_1 Q {\bullet }_2 K {\bullet }_3 V \\ =& \sum _{i=1}^{I}\sum _{j=1}^{J} \sum _{m=1}^{M} \mathcal {G}_{ijm} Q_i \circ K_j \circ V_m \end{aligned}$$ (Eq. 14) where $\mathcal {G}$ is a core tensor, $i, j$ and $m$ are the indices of the core tensor, $\circ $ is the outer product, and ${\bullet }_z$ has the same definition as in Eq. 5 . $Q_i, K_j$ and $V_m$ are column vectors of the matrices $Q, K$ and $V$ , where $Q \in \mathbb {R}^{N \times d}$ , $K \in \mathbb {R}^{N \times d}$ and $V \in \mathbb {R}^{N \times d}$ , and $N$ is the length of a sequence. In practice, we set $I$ = $J$ = $M$ = $R$ . The core tensor $\mathcal {G}$ can be defined as follows, $$\mathcal {G}_{ijm} = \left\lbrace \begin{array}{lr} rand(0,1) & i=j=m \\ 0 & otherwise\\ \end{array} \right.$$ (Eq. 15) where $rand(0,1)$ is a random function, and the diagonal entries of the core tensor $\mathcal {G}$ form the vector $\mathbf {g}$ . Each entry $\mathbf {g}_r\in (0,1)$ , $r \in \lbrace 1, \ldots , R\rbrace $ . We can consider $\mathbf {g}$ as a trainable weight vector. In experiments, we normalize the weight vector with the $softmax$ function (i.e., $softmax(\mathbf {g})$ ). The output of the Single-block attention function is then a 3-order tensor obtained by linear computation. The Single-block attention (i.e., a 3-order tensor with Tucker decomposition) can reconstruct the Scaled Dot-Product attention in Eq. 8 by summing over the tensor along the second index (which can be seen as the coordinates in the vertical direction of the tensor), as proved in the following corollary. Note that in our model we do not adopt this reconstruction; instead, to obtain a new representation, we concatenate after splitting the tensor (see Sec. "Multi-Linear Attention by Block-Term Tensor Decomposition" ). We further show the compression ability of the Single-block attention in Sec. "Analysis of Compression and Complexity" . Corollary 1 Under the same conditions as in Theorem "Theorem 3.1" , and assuming the elements in each row of the matrix $V$ are the same, the Single-block attention representation in Eq. 14 can reconstruct the Scaled Dot-Product attention in Eq. 8 by summing over the tensor (i.e., the output of the Single-block attention function) along the second index. It holds that: $$Attention(Q,K,V)_{i,m} = \sum _{j=1}^{d} Atten_{TD}(\mathcal {G};Q,K,V)_{i,j,m}$$ (Eq. 18) where $i$ , $j$ and $m$ are the indices of the Single-block attention's output (i.e., a 3-order tensor), and $d$ is the dimension of the second index. $Atten_{TD}(\cdot )$ is the Single-block attention function based on Tucker decomposition. $i$ and $m$ are the indices of the output (i.e., a matrix) of Eq. 8 . The proof can be found in Supplementary Materials C. Multi-Linear Attention by Block-Term Tensor Decomposition In order to construct the multi-head mechanism and compress the multiple groups of mapping parameters, we use a single group of linear projections and share their outputs. In Figure 2 (right), the learned linear projections map queries, keys and values to three matrices which are composed of basis vectors. After that, we use Block-Term tensor decomposition to build the multi-head mechanism.
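To make Eq. 14 and Eq. 15 concrete, here is a literal sketch of the Single-block attention with a diagonal core: because only entries with $i=j=m$ survive, the triple sum collapses to a single sum over the rank, computable with one einsum. The shapes follow the text's column-vector reading and are an assumption, not the authors' reference code.

```python
# Illustrative sketch of Eq. 14-15 (Single-block attention, diagonal Tucker core).
import torch

def single_block_attention(Q, K, V, g):
    # Q, K, V: (N, d) factor matrices; g: (R,) trainable diagonal of the core.
    R = g.size(0)
    weights = torch.softmax(g, dim=0)                 # softmax(g) as in the text
    # Diagonal core => only terms with i = j = m = r survive:
    #   sum_r w_r * (Q[:, r] outer K[:, r] outer V[:, r])
    out = torch.einsum('r,ir,jr,mr->ijm',
                       weights, Q[:, :R], K[:, :R], V[:, :R])
    return out                                        # a 3-order tensor

# Toy usage
N, d, R = 6, 8, 4
Q, K, V = (torch.randn(N, d) for _ in range(3))
g = torch.rand(R, requires_grad=True)
T = single_block_attention(Q, K, V, g)
print(T.shape)   # torch.Size([6, 6, 6])
```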
In our work, the model is named Multi-linear attention, which can be formulated as follows: $$\begin{aligned} MultiLinear(\mathcal {G};Q,K,V) &= SplitConcat(\frac{1}{h}*({T}_1+ \ldots +{T}_h))W^{O} \\ where~~{T_j} &= Atten_{TD}(\mathcal {G}_j;QW^{q},KW^{k},VW^{v}) \end{aligned}$$ (Eq. 20) where each core tensor $\mathcal {G}_j$ is a diagonal tensor, and the number of parameters in $\mathcal {G}_j$ is equal to the rank of the core tensor, $j\in \lbrace 1,\ldots , h\rbrace $ . $\mathcal {G}$ is the set of core tensors. $SplitConcat(\cdot )$ is a function that concatenates a 3-order tensor after splitting it. Figure 2 (right) shows the basic idea of the multi-linear attention. $W^{O}$ is a parameter matrix, a fully connected layer applied to the output of Multi-linear attention. $Atten_{TD}(\cdot )$ is the Single-block attention function, which forms one block of Multi-linear attention. $W^{q}$ , $W^{k}$ and $W^{v}$ are the parameter matrices which are shared in constructing Multi-linear attention. Multi-linear attention is a compression model; compressing the multi-head attention in the Transformer with it yields the Tensorized Transformer. The Multi-linear attention can be incorporated into the Transformer architecture. A diagram showing the incorporation of Multi-linear attention into part of the Transformer structure is given in Supplementary Materials E.1. Analysis of Compression and Complexity Compression Our focus is on the compression of the multi-head mechanism in the multi-head attention of the Transformer. Previous work BIBREF2 obtains the multi-head attention through multiple groups of linear mappings. We instead use three linear mappings for the matrices $Q$ , $K$ and $V$ , respectively, and share their outputs, which are considered as the three factor matrices in reconstructing the Multi-linear attention. This process is shown in Figure 2 (left). $h$ is the number of heads in BIBREF2 , and $d$ is the dimension of the factor matrices. The compression ratio can be computed as $({3 \times h \times d})/({3 \times d + h})$ . In practice, $h$ is normally set to 8 and $d$ to 512; in this case the compression ratio reaches about 8. In other words, we can reduce the parameters in the attention layer by almost 8 times. The details of computing the compression ratios can be found in Supplementary Materials D. The Transformer also contains other network layers, such as the position-wise feed-forward network and the embedding layers. Therefore, the compression ratio of the whole Transformer is compared through the analysis of model parameters in the experimental results. Complexity Eq. 14 reduces the time complexity of the attention layer. The time complexity of the attention function in Eq. 8 is $\mathcal {O}(N^2~d)$ , where $N$ is the length of a sequence and $d$ is the representation dimension. However, we can reorder the computations to reduce the complexity to $\mathcal {O}(R^2d)$ , where $R$ is the rank of the tensor, which can be set in our experiments. In our experiments, $R$ is set to a number between 10 and 18, which is smaller than $N$ . The minimum number of sequential operations in Multi-linear attention for different layers is lower than that of the self-attention in the Transformer BIBREF2 .
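As a quick back-of-the-envelope check of the compression ratio $({3 \times h \times d})/({3 \times d + h})$ quoted above, the sketch below writes out the parameter counts explicitly. The values of d_model, d, h and the core rank are typical Transformer defaults and are assumptions here, not the authors' exact configuration.

```python
# Parameter counts behind the ~8x compression ratio in the attention layer.
def multi_head_params(d_model, d, h):
    # h heads, each with its own W^Q_i, W^K_i, W^V_i of size d_model x d
    return 3 * h * d_model * d

def multi_linear_params(d_model, d, h, rank):
    # one shared W^q, W^k, W^v of size d_model x d, plus h diagonal cores
    # with `rank` trainable entries each
    return 3 * d_model * d + h * rank

d_model, d, h, rank = 512, 512, 8, 16   # assumed values
baseline = multi_head_params(d_model, d, h)        # 6,291,456
compressed = multi_linear_params(d_model, d, h, rank)  # 786,560
print(baseline / compressed)   # ~8.0, consistent with (3*h*d)/(3*d + h)
```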
Different from language modeling architectures based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the Transformer BIBREF2 and its variants BIBREF9 , BIBREF0 , BIBREF4 achieve excellent results in language modeling. Transformer networks have the potential to learn long-term dependencies, but are limited by a fixed-length context in the language modeling setting. Transformer-XL BIBREF9 uses a segment-level recurrence mechanism and a novel positional encoding scheme to resolve this issue. BERT BIBREF0 builds bidirectional encoder representations from Transformers; it is designed to pre-train deep bidirectional representations and obtains new SoTA results on several NLP tasks. Although these methods have achieved great results, their large number of parameters makes them difficult to train with limited resources. Transformers fail to generalize on many simple tasks, e.g., copying strings and logical inference BIBREF4 . Universal Transformers BIBREF4 propose a self-attentive recurrent sequence model which addresses this problem and can increase the training speed. In that work, the authors, following the weight sharing found in CNNs and RNNs, extend the Transformer with a simple form of weight sharing that strikes an effective balance between inductive bias and model expressivity. This method still uses a large number of parameters. Therefore, it is very important to consider how to reduce the amount of memory and compute these models need. Existing model compression methods are mainly divided into parameter pruning and sharing BIBREF7 , low-rank approximation BIBREF16 , knowledge transfer BIBREF11 , and transferred convolutional filters BIBREF17 . Here, we mainly review the relevant compression methods. Tensor decomposition methods, which in most cases adopt the idea of low-rank approximation, have been successfully applied to neural network compression. For example, in BIBREF18 , BIBREF19 , researchers approximate a tensor by minimizing the reconstruction error of the original parameters in convolutional neural networks (CNNs). However, these approaches tend to accumulate errors when multiple layers are compressed sequentially, and the output feature maps deviate far from the original values as the number of compressed layers increases. Our compression method uses the idea of parameter sharing in constructing the attention layers, and the size of its output is the same as that of the self-attention output in the Transformer, which effectively avoids these problems. Tensorizing Neural Networks BIBREF20 combines the idea of reshaping the weights of fully-connected layers into high-dimensional tensors with representing them in Tensor Train format BIBREF8 . This approach was later extended to convolutional BIBREF21 and recurrent neural networks BIBREF22 . Recently, in BIBREF23 , BIBREF24 , researchers introduced efficient compression methods for the embedding and $softmax$ layers based on structured low-rank matrix approximation. TT-embedding BIBREF5 aims to compress the large embedding layer of Transformer-XL BIBREF9 . Our method is different from these works, and combines two compression ideas (low-rank approximation and parameter sharing) to construct a tensorized Transformer. In our work, we focus on compressing the multi-head attention in the Transformer based on the idea of parameter sharing, while also using low-rank approximation to reduce parameters and time complexity.
Experiments The Transformer is a versatile and powerful modeling tool that is widely used in various natural language processing tasks. In order to verify the effectiveness of our method (i.e., Multi-linear attention) as a replacement for multi-head attention in the Transformer, we carry out two NLP tasks, namely language modeling (LM) and neural machine translation (NMT). Complete code for running the experiments will be released after the paper is accepted, while the key code for our method can be found in Supplementary Materials F. Language Modeling Language modeling is the task of predicting the next word in a sentence, i.e., estimating the joint probability $p(s)$ of a sentence of tokens $s$ = $(w_1,\ldots , w_n)$ . The resulting models can be used to generate text or further fine-tuned to solve other NLP tasks BIBREF3 . In this paper, we employ the standard setting of predicting the next token given the sequence of preceding tokens, based on the factorization $p(s)=p(w_1)\prod _{i=2}^n p(w_i|w_1,\ldots ,w_{i-1})$ . We chose three datasets in increasing order of size: small (PTB), medium (WikiText-103) and large (One-Billion). As part of pre-processing, words are lower-cased and newlines are replaced with <eos>. The vocabulary consists of the most frequent words, with the remaining tokens replaced by an <UNK> token. Models are evaluated with perplexity (PPL), which is derived from the average per-word log-probability; the lower the PPL, the better the model. Specifically, we take the Transformer, the open-source state-of-the-art language modeling architecture, and replace the standard multi-head attention layers with our Multi-linear attention. Then, we test different model configurations on the PTB BIBREF25 , WikiText-103 BIBREF26 and One-Billion Word benchmark BIBREF27 datasets and report the results in Table 1 and Table 2 . Results and Details PTB has $929k$ training tokens, $73k$ validation words, and $82k$ test words. The results are reported in Table 2 . Similar to AWD-LSTM-MoS BIBREF31 , we apply variational dropout and weight averaging to our model (i.e., the Tensorized Transformer). In addition, we note that our model only replaces the multi-head attention with the Multi-linear attention structure; the other components remain the same. We compare the results of our model with those of other models. Our model achieves results comparable to SoTA when the number of core tensors is equal to two, while our model size (i.e., the number of parameters) is reduced by nearly half compared with the Transformer and Transformer-XL. WikiText-103 contains 267,735 unique tokens. The dataset is a widely available word-level language modeling benchmark with long-term dependencies. It contains 103M training tokens from $28k$ articles, with an average length of 3.6k tokens per article, which allows testing the ability to model long-term dependencies. Here, we set the sentence length to 100, which differs from the sentence length used for PTB (30) and One-Billion (30). As shown in Table 2 , our model reduces the previous SoTA perplexity from $20.5$ to $18.9$ , which demonstrates the effectiveness of the proposed attention architecture. The One-Billion Word benchmark is a large dataset derived from a news site. The dataset consists of $829,250,940$ tokens over a vocabulary of $793,471$ words. In this dataset, sentences are shuffled and hence the context is limited. Consequently, this dataset mainly tests the ability to model short-term dependencies. The comparison between the Tensorized Transformer and the other methods is shown in Table 1 .
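For reference, a minimal sketch of the perplexity metric used in these comparisons, i.e., the exponential of the negative average per-word log-probability. The token log-probabilities are placeholders for any language model's output.

```python
# Perplexity from per-token log-probabilities (natural logs).
import math

def perplexity(log_probs):
    # log_probs: log p(w_i | w_1..w_{i-1}), one entry per token
    avg_log_prob = sum(log_probs) / len(log_probs)
    return math.exp(-avg_log_prob)

# Toy example: a model assigning probability 0.1 to each of 20 tokens
print(perplexity([math.log(0.1)] * 20))   # 10.0
```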
Although the Tensorized Transformer is mainly designed to better compress the Transformer or Transformer-XL model, it dramatically improves the single-model SoTA from 21.8 to 19.5. Specifically, the Tensorized Transformer significantly outperforms a contemporary method using vanilla Transformers BIBREF2 , suggesting that its advantage also generalizes to modeling short sequences. Table 2 and Table 1 show that our model achieves lower PPL than the other models on all three datasets. An exciting observation is that our model has far fewer parameters. On the One-Billion Word benchmark and the WikiText-103 dataset, we use the adaptive input method for the input layer; on the PTB dataset, we do not. Transformer-XL+TT BIBREF5 is a recent compression model that uses Tensor Train to compress the input embedding layers only. The results in Table 2 show that, compared with Transformer-XL+TT, our method has far fewer parameters and better language modeling performance. These results verify that our model (i.e., Multi-linear attention) is effective on language modeling tasks and performs well as a model compression method. Other details (such as hyperparameters and hardware) can be found in Supplementary Materials E. Neural Machine Translation The goal is to map an input sequence $s=(x_1,x_2,\ldots ,x_n)$ representing a phrase in one language to an output sequence $y=(y_1,y_2,\ldots , y_m)$ representing the same phrase in a different language. In this task, we have trained the Transformer model BIBREF2 on the WMT 2016 English-German dataset BIBREF36 . Sentences were tokenized using SentencePiece. For our experiments, we replaced each of the attention layers with Multi-linear attention. For evaluation we used beam search with a beam size of 5 and length penalty $\alpha $ = $0.6$ . In this section, we only compare the results with the Transformer BIBREF2 . Our results are summarized in Table 3 . $*$ indicates that the result is from our own implementation. In Table 3 , we select two baseline models. The Base-line BIBREF36 is the first model on the WMT 2016 English-German dataset. For the other baseline, we use the basic Transformer architecture BIBREF2 , for which the BLEU score is $34.5$ . We evaluate two Tensorized Transformer configurations, namely core-1 and core-2. With Tensorized Transformer core-1 and core-2, the BLEU scores are $34.10$ and $34.91$ respectively, with core-2 outperforming the Transformer baseline. As for the reported model parameter size, our model uses fewer parameters.
In the future, we will explore the fundamental reasons that cause the problem and tackle them within the Tensorized Transformer framework.
Language modeling (LM) on the PTB BIBREF25, WikiText-103 BIBREF26 and One-Billion Word benchmark BIBREF27 datasets; neural machine translation (NMT) on the WMT 2016 English-German dataset.
234ccc1afcae4890e618ff2a7b06fc1e513ea640
234ccc1afcae4890e618ff2a7b06fc1e513ea640_0
Q: How big is performance improvement proposed methods are used? Text: Introduction In computer vision, it is well known that otherwise competitive models can be "fooled" by adding intentional noise to the input images BIBREF0, BIBREF1. Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them. There has been recent interest in studying this adversarial attack phenomenon for natural language processing tasks, but that is harder than vision problems for at least two reasons: 1) textual input is discrete, and 2) adding noise may completely change a sentence's meaning or even make it meaningless. Although there are various works that devise adversarial examples in the NLP domain, defense mechanisms have been rare. BIBREF7 applied perturbation to the continuous word embeddings instead of the discrete tokens. This has been shown BIBREF8 to act as a regularizer that increases the model performance on the clean dataset but the perturbed inputs are not true adversarial examples, as they do not correspond to any input text and it cannot be tested whether they are perceptible to humans or not. Unrestricted adversarial examples BIBREF9 lift the constraint on the size of added perturbation and as such can be harder to defend against. Recently, Generative Adversarial Networks (GANs) alongside an auxiliary classifier have been proposed to generate adversarial examples for each label class. In the context of natural languages, use of seq2seq models BIBREF10 seems to be a natural way of perturbing an input example BIBREF11. Such perturbations, that practically paraphrase the original sentence, lie somewhere between the two methods described above. On one hand, the decoder is not constrained to be in a norm ball from the input and, on the other hand, the output is strongly conditioned on the input and hence, not unrestricted. Current NLP work on input perturbations and defense against them has mainly focused on sentence classification. In this paper, we examine a harder task: joint intent detection (sentence classification) and slot tagging (sequence word tagging) for task oriented dialog, which has been of recent interest BIBREF12 due to the ubiquity of commercial conversational AI systems. In the task and data described in Section SECREF2, we observe that exchanging a word with its synonym, as well as changing the structural order of a query can flip the model prediction. Table TABREF1 shows a few such sentence pairs for which the model prediction is different. 
Motivated by this, in this paper, we focus on analyzing the model robustness against two types of untargeted (that is, we do not target a particular perturbed label) perturbations: paraphrasing and random noise. In order to evaluate the defense mechanisms, we discuss how one can create an adversarial test set focusing on these two types of perturbations in the setting of joint sentence classification and sequence word tagging. Our contributions are: 1. Analyzing the robustness of the joint task of sentence classification and sequence word tagging through generating diverse untargeted adversarial examples using back-translation and a noisy autoencoder, and 2. Two techniques to improve upon a model's robustness: data augmentation using back-translation, and an adversarial logit pairing loss. Data augmentation using back-translation was earlier proposed as a defense mechanism for a sentence classification task BIBREF11; we extend it to sequence word tagging. We investigate using different types of machine translation systems, as well as different auxiliary languages, for both test set generation and data augmentation. Logit pairing was proposed for improving robustness in the image classification setting with norm ball attacks BIBREF6; we extend it to the NLP context. We show that combining the two techniques gives the best results. Task and Data In conversational AI, the language understanding task typically consists of classifying the intent of a sentence and tagging the corresponding slots. For example, a query like What's the weather in Sydney today could be annotated as a weather/find intent, with Sydney and today being location and datetime slots, respectively. This predicted intent then informs which API to call to answer the query and the predicted slots inform the arguments for the call. See Fig. FIGREF2. Slot tagging is arguably harder than intent classification since the spans need to align as well. We use the data provided by BIBREF13, which consists of task-oriented queries in the weather and alarm domains. The data contains 25k training, 3k evaluation and 7k test queries with 11 intents and 7 slots. We conflate and use a common set of labels for the two domains. Since there is no ambiguous slot or intent in the domains, unlike BIBREF14, we do not need to train a domain classifier, neither jointly nor at the beginning of the pipeline. If a query is not supported by the system but is unambiguously part of the alarm or weather domain, it is marked as alarm/unsupported or weather/unsupported respectively. Robustness Evaluation To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in the literature: back-translation and a noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that, though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes. Also note that to make the test set hard, we select only the examples for which the model prediction is different for the paraphrased sentence compared to the original sentence.
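To make the Task and Data annotation above concrete, here is a small, hypothetical representation of the example query; the dataset's actual schema is not given in the text, so the field names are assumptions.

```python
# Hypothetical representation of the annotated example above.
example = {
    "text": "What's the weather in Sydney today",
    "intent": "weather/find",
    "slots": [
        {"span": "Sydney", "label": "location"},
        {"span": "today",  "label": "datetime"},
    ],
}

# The predicted intent selects the API; the predicted slots become its arguments.
api_call = {"api": example["intent"],
            "args": {s["label"]: s["span"] for s in example["slots"]}}
print(api_call)  # {'api': 'weather/find', 'args': {'location': 'Sydney', 'datetime': 'today'}}
```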
We, however, do not use the original annotation for the perturbed sentences; instead, we re-annotate the sentences manually. We explain the motivation and methodology for manual annotation later in this section. Robustness Evaluation ::: Automatically Generating Examples We describe two methods of untargeted (not targeted towards a particular label) paraphrase generation, used to find a subset of examples that dramatically reduces the accuracy of the model mentioned in the previous section. We follow BIBREF11 and BIBREF15 to generate the potential set of sentences. Robustness Evaluation ::: Automatically Generating Examples ::: Back-translation Back-translation is a common technique in Machine Translation (MT) to improve translation performance, especially for low-resource language pairs BIBREF16, BIBREF17, BIBREF18. In back-translation, an MT system is used to translate the original sentences to an auxiliary language and a reverse MT system translates them back into the original language. At the final decoding phase, the top k beams are the variations of the original sentence. See Fig. FIGREF5. BIBREF11 showed the effectiveness of simple back-translation in quickly generating adversarial paraphrases and breaking correctly predicted examples. To increase diversity, we use two different MT systems and two different auxiliary languages, Czech (cs) and Spanish (es), with our training data in English (en). We use the Nematus BIBREF19 pre-trained cs-en model, which was also used in BIBREF11, as well as the FB internal MT system with pre-trained models for cs-en and es-en language pairs. Robustness Evaluation ::: Automatically Generating Examples ::: Noisy Sequence Autoencoder Following BIBREF15, we train a sequence autoencoder BIBREF20 using all the training data. At test time, we add noise to the last hidden state of the encoder, which is used to decode a variation. We found that not using attention results in more diverse examples, by giving the model more freedom to stray from the original sentence. We again decode the top k beams as variations of the original sentence. We observed that the seq2seq model results in less meaningful sentences than using the MT systems, which have been trained over millions of sentences. Robustness Evaluation ::: Annotation For each of the above methods, we use the original test data and generate paraphrases using k=5 beams. We remove the beams that are the same as the original sentence after lower-casing. In order to make sure we have a high-quality adversarial test set, we need to manually check the model's prediction on the above automatically-generated datasets. Unlike targeted methods to procure adversarial examples, our datasets have been generated by random perturbations of the original sentences. Hence, we expect that the true adversarial examples will be quite sparse. In order to obviate the need for manual annotation of a large dataset to find these sparse examples, we sample only from the paraphrases for which the predicted intent is different from the original sentence's predicted intent. This significantly increases the chance of encountering an adversarial example. Note that the model accuracy on this test set might not be zero for two reasons: 1) the flipped intent might actually be justified and not a mistake.
For example, “Cancel the alarm” and “Pause the alarm” may be considered as paraphrases, but in the dataset they correspond to alarm/cancel and alarm/pause intents, respectively, and 2) the model might have been making an error in the original prediction, which was corrected by the paraphrase. (However, upon manual observation, this rarely happens). The other reason that we need manual annotation is that such unrestricted generation may result in new variations that can be meaningless or ambiguous without any context. Note that if the meaning can be easily inferred, we do not count slight grammatical errors as meaningless. Thus, we manually double annotate the sentences with flipped intent classification where the disagreements are resolved by a third annotator. As a part of this manual annotation, we also remove the meaningless and ambiguous sentences. Note that these adversarial examples are untargeted, i.e., we had no control in which new label a perturbed example would be sent to. Robustness Evaluation ::: Analysis We have shown adversarial examples from different sources alongside their original sentence in Table TABREF3. We observe that some patterns, such as addition of a definite article or gerund appear more often in the es test set which perhaps stems from the properties of the Spanish language (i.e., most nouns have an article and present simple/continuous tense are often interchangeable). On the other hand, there is more verbal diversity in the cs test set which may be because of the linguistic distance of Czech from English compared with Spanish. Moreover, we observe many imperative-to-declarative transformation in all the back-translated examples. Finally, the seq2seq examples seem to have a higher degree of freedom but that can tip them off into the meaningless realm more often too. Base Model A commonly used architecture for the task described in Section SECREF2 is a bidirectional LSTM for the sentence representation with separate projection layers for sentence (intent) classification and sequence word (slot) tagging BIBREF21, BIBREF22, BIBREF12, BIBREF14. In order to evaluate the model in a task oriented setting, exact match accuracy (from now on, accuracy) is of paramount importance. This is defined as the percentage of the sentences for which the intent and all the slots have been correctly tagged. We use two biLSTM layers of size 200 and two feed-forward layers for the intents and the slots. We use dropout of $0.3$ and train the model for 20 epochs with learning rate of $0.01$ and weight decay of $0.001$. This model, our baseline, achieves $87.1\%$ accuracy over the test set. The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little. Approaches to Improve Robustness In order to improve robustness of the base model against paraphrases and random noise, we propose two approaches: data augmentation and model smoothing via adversarial logit pairing. 
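As a reference point for the two defenses introduced next, here is a minimal PyTorch sketch of the base model described above: a shared biLSTM encoder (two layers of size 200) with separate projection heads for intent classification and slot tagging. The embedding size, pooling choice and other details are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a joint intent-classification / slot-tagging biLSTM model.
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    def __init__(self, vocab_size, n_intents=11, n_slots=7, emb_dim=128, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=2,
                               bidirectional=True, batch_first=True, dropout=0.3)
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # sentence-level head
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # per-token tag head

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))          # (B, T, 2*hidden)
        intent_logits = self.intent_head(h.mean(dim=1))   # pooled sentence repr. (assumed)
        slot_logits = self.slot_head(h)                   # one tag distribution per token
        return intent_logits, slot_logits
```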
Data augmentation generates and adds training data without manual annotation. This would help the model see variations that it has not observed in the original training data. As discussed before, back-translation is one way to generate unlabeled data automatically. In this paper, we show how we can automatically generate labels for such sentences during training time and show that this improves the robustness of the model. Note that for our task we have to automatically produce both sentence labels (intent) and word tags (slots) for such sentences. The second method we propose is adding a logit pairing loss. Unlike data augmentation, logit pairing treats the original and paraphrased sentence sets differently. As such, in addition to the cross-entropy loss over the original training data, we would have another loss term enforcing that the predictions for a sentence and its paraphrases are similar in the logit space. This would ensure that the model makes smooth decisions and prevent the model from making drastically different decisions under small perturbations. Approaches to Improve Robustness ::: Data Augmentation We generate back-translated data from the training data using the pre-trained FB MT system. We keep the top 5 beams after the back-translation and remove the beams that already exist in the training data after lower-casing. We observed that including the top 5 beams results in quite diverse combinations without hurting the readability of the sentences. In order to use the unlabeled data, we use an extended version of self-training BIBREF23 in which the original classifier is used to annotate the unlabeled data. Unsurprisingly, self-training can result in reinforcing the model's errors. Since the sentence intent usually remains the same after paraphrasing, we annotate each paraphrase with the intent of the original sentence. Since many slot texts may be altered or removed during back-translation, we use self-training to label the slots of the paraphrases. We train the model on the combined clean and noisy datasets, with the loss function being the original loss plus the loss on the back-translated data weighted by 0.1, for which the drop in accuracy on the clean dev set is still negligible. The model seemed to be quite insensitive to this weight, though: the clean dev accuracy was hurt by less than 1 point even when weighing the augmented data equally with the original data. The accuracy over the clean test set using the augmented training data with Czech (cs) and Spanish (es) as the auxiliary languages is shown in Table TABREF8. We observe that, as expected, data augmentation improves accuracy on sentences generated using back-translation; however, we see that it also improves accuracy on sentences generated using the seq2seq autoencoder. We discuss the results in more detail in the next section. Approaches to Improve Robustness ::: Model smoothing via Logit Pairing BIBREF6 perturb images with the attacks introduced by BIBREF3 and report state-of-the-art results by matching the logit distribution of the perturbed and original images instead of matching only the classifier decision. They also introduce clean pairing, in which the logit pairing is applied to random data points in the clean training data, which yields surprisingly good results. Here, we modify both methods for the language understanding task, including sequence word tagging, and expand the approach to targeted pairing for increasing robustness against adversarial examples.
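Before the logit-pairing details below, here is a sketch, under stated assumptions, of the data-augmentation recipe just described: paraphrases generated through a hypothetical MT callable, intents copied from the source sentence, slots pseudo-labeled by self-training, and the augmented loss weighted by 0.1. The model is assumed to return (intent_logits, slot_logits) as in the earlier sketch; the batching and callables are illustrative, not the authors' code.

```python
# Back-translation data augmentation with self-trained slot labels.
import torch
import torch.nn.functional as F

AUG_WEIGHT = 0.1  # weight on the back-translated portion, as in the text

def back_translate(sentence, to_aux, to_en_beams, k=5):
    # `to_aux` / `to_en_beams` stand in for any en->cs/es and cs/es->en MT
    # systems; they are hypothetical callables, not a specific library API.
    beams = to_en_beams(to_aux(sentence))[:k]
    return [b for b in beams if b.lower() != sentence.lower()]

def pseudo_label(model, aug_tokens, orig_intent):
    with torch.no_grad():
        _, slot_logits = model(aug_tokens)
    return orig_intent, slot_logits.argmax(dim=-1)   # copied intent, self-trained slots

def joint_loss(intent_logits, slot_logits, intent_y, slot_y):
    return (F.cross_entropy(intent_logits, intent_y)
            + F.cross_entropy(slot_logits.flatten(0, 1), slot_y.flatten()))

def training_step(model, clean_batch, aug_batch):
    tok, intent_y, slot_y = clean_batch
    loss = joint_loss(*model(tok), intent_y, slot_y)
    aug_tok, orig_intent = aug_batch                 # paraphrase tokens + source intents
    aug_intent_y, aug_slot_y = pseudo_label(model, aug_tok, orig_intent)
    loss = loss + AUG_WEIGHT * joint_loss(*model(aug_tok), aug_intent_y, aug_slot_y)
    return loss
```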
Approaches to Improve Robustness ::: Model smoothing via Logit Pairing ::: Clean Logit Pairing Pairing random queries as proposed by BIBREF6 performed very poorly on our task. In the paper, we study the effects when we pair the sentences that have the same annotations, i.e., same intent and same slot labels. Consider a batch $M$, with $m$ clean sentences. For each tuple of intent and slot labels, we identify corresponding sentences in the batch, $M_k$ and sample pairs of sentences. We add a second cost function to the original cost function for the batch that enforces the logit vectors of the intent and same-label slots of those pairs of sentences to have similar distributions: where $I^{(i)}$ and $S^{(i)}_s$ denote the logit vectors corresponding to the intent and $s^{th}$ slot of the $i^{th}$ sentence, respectively. Moreover, $P$ is the total number of sampled pairs, and $\lambda _{sf}$ is a hyper-parameter. We sum the above loss for all the unique tuples of labels and normalize by the total number of pairs. Throughout this section, we use MSE loss for the function $L()$. We train the model with the same parameters as in Section SECREF2, with the only difference being that we use learning rate of $0.001$ and train for 25 epochs to improve model convergence. Contrary to what we expected, clean logit pairing on its own reduces accuracy on both clean and adversarial test sets. Our hypothesis is that the logit smoothing resulted by this method prevents the model from using weakly correlated features BIBREF5, which could have helped the accuracy of both the clean and adversarial test sets. Approaches to Improve Robustness ::: Model smoothing via Logit Pairing ::: Adversarial Logit Pairing (ALP) In order to make the model more robust to paraphrases, we pair a sentence with its back-translated paraphrases and impose the logit distributions to be similar. We generate the paraphrases using the FB MT system as in the previous section using es and cs as auxiliary languages. For the sentences $m^{(i)}$ inside the mini-batch and their paraphrase $\tilde{m}^{(i)}_k$, we add the following loss where $P$ is the total number of original-paraphrase sentence pairs. Note that the first term, which pairs the logit vectors of the predicted intents of a sentence and its paraphrase, can be obtained in an unsupervised fashion. For the second term however, we need to know the position of the slots in the paraphrases in order to be matched with the original slots. We use self-training again to tag the slots in the paraphrased sentence. Then, we pair the logit vectors corresponding to the common labels found among the original and paraphrases slots left to right. We also find that adding a similar loss for pairs of paraphrases of the original sentence, i.e. matching the logit vectors corresponding to the intent and slots, can help the performance on the accuracy over the adversarial test sets. In Table TABREF8, we show the results using ALP (using both the original-paraphrase and paraphrase-paraphrase pairs) for $\lambda _a=0.01$. Results and Discussion We observe that data augmentation using back-translation improves the accuracy across all the adversarial sets, including the seq2seq test set. Unsurprisingly, the gains are the highest when augmenting the training data using the same MT system and the same auxiliary language that the adversarial test set was generated from. 
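A minimal sketch of the adversarial logit pairing term defined above: an MSE penalty between the logits of a sentence and its back-translated paraphrase, applied to the intent head and to slot positions whose (self-trained) labels were matched, scaled by the reported lambda of 0.01. The pairing bookkeeping is simplified and illustrative, not the authors' implementation.

```python
# Adversarial logit pairing (ALP) term for the joint intent/slot model.
import torch
import torch.nn.functional as F

LAMBDA_ALP = 0.01

def alp_loss(model, sent_tokens, para_tokens, paired_slot_idx):
    intent_logits, slot_logits = model(sent_tokens)
    p_intent_logits, p_slot_logits = model(para_tokens)
    # Intent pairing needs no labels at all.
    loss = F.mse_loss(intent_logits, p_intent_logits)
    # Slot pairing over positions whose labels were matched left to right.
    for (i, j) in paired_slot_idx:
        loss = loss + F.mse_loss(slot_logits[:, i, :], p_slot_logits[:, j, :])
    return LAMBDA_ALP * loss
```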
However, more interestingly, it is still effective for adversarial examples generated using a different auxiliary language or a different MT system (which, as discussed in the previous section, yielded different types of sentences) from that which was used at the training time. More importantly, even if the generation process is different altogether, that is, the seq2seq dataset generated by the noisy autoencoder, some of the gains are still transferred and the accuracy over the adversarial examples increases. We also train a model using the es and cs back-translated data combined. Table TABREF8 shows that this improves the average performance over the adversarial sets. This suggests that in order to achieve robustness towards different types of paraphrasing, we would need to augment the training data using data generated with various techniques. But one can hope that some of the defense would be transferred for adversarial examples that come from unknown sources. Note that unlike the manually annotated test sets, the augmented training data contains noise both in the generation step (e.g. meaningless utterances) as well as in the automatic annotation step. But the model seems to be quite robust toward this random noise; its accuracy over the clean test set is almost unchanged while yielding nontrivial gains over the adversarial test sets. We observe that ALP results in similarly competitive performance on the adversarial test sets as using the data augmentation but it has a more detrimental effect on the clean test set accuracy. We hypothesize that data augmentation helps with smoothing the decision boundaries without preventing the model from using weakly correlated features. Hence, the regression on the clean test set is very small. This is in contrast with adversarial defense mechanisms such as ALP BIBREF5 which makes the model regress much more on the clean test set. We also combine ALP with the data augmentation technique that yields the highest accuracy on the adversarial test sets but incurs additional costs to the clean test set (more than three points compared with the base model). Adding clean logit pairing to the above resulted in the most defense transfer (i.e. accuracy on the seq2seq adversarial test set) but it is detrimental to almost all the other metrics. One possible explanation can be that the additional regularization stemming from the clean logit pairing helps with generalization (and hence, the transfer) from the back-translated augmented data to the seq2seq test set but it is not helpful otherwise. Related Work Adversarial examples BIBREF4 refer to intentionally devised inputs by an adversary which causes the model's accuracy to make highly-confident but erroneous predictions, e.g. Fast Gradient Sign Attack (FGSA) BIBREF4 and Projected gradient Descent (PGD) BIBREF3. In such methods, the constrained perturbation that (approximately) maximizes the loss for an original data point is added to it. In white-box attacks, the perturbations are chosen to maximize the model loss for the original inputs BIBREF4, BIBREF3, BIBREF24. Such attacks have shown to be transferable to other models which makes it possible to devise black-box attacks for a machine learning model by transferring from a known model BIBREF25, BIBREF1. Defense against such examples has been an elusive task, with proposed mechanisms proving effective against only particular attacks BIBREF3, BIBREF26. 
Adversarial training BIBREF4 augments the training data with carefully picked perturbations during the training time, which is robust against normed-ball perturbations. But in the general setting of having unrestricted adversarial examples, these defenses have been shown to be highly ineffective BIBREF27. BIBREF28 introduced white-box attacks for language by swapping one token for another based on the gradient of the input. BIBREF29 introduced an algorithm to generate adversarial examples for sentiment analysis and textual entailment by replacing words of the sentence with similar tokens that preserve the language model scoring and maximize the target class probability. BIBREF7 introduced one of the few defense mechanisms for NLP by extending adversarial training to this domain by perturbing the input embeddings and enforcing the label (distribution) to remain unchanged. BIBREF30 and BIBREF8 used this strategy as a regularization method for part-of-speech, relation extraction and NER tasks. Such perturbations resemble the normed-ball attacks for images but the perturbed input does not correspond to a real adversarial example. BIBREF11 studied two methods of generating adversarial data – back-translation and syntax-controlled sequence-to-sequence generation. They show that although the latter method is more effective in generating syntactically diverse examples, the former is also a fast and effective way of generating adversarial examples. There has been a large body of literature on language understanding for task oriented dialog using the intent/slot framework. Bidirectional LSTM for the sentence representation alongside separate projection layers for intent and slot tagging is the typical architecture for the joint task BIBREF21, BIBREF22, BIBREF12, BIBREF14. In parallel to the current work, BIBREF31 introduced unsupervised data augmentation for classification tasks by perturbing the training data and similar to BIBREF7 minimize the KL divergence between the predicted distributions on an unlabeled example and its perturbations. Their goal is to achieve high accuracy using as little labeled data as possible by leveraging the unlabeled data. In this paper, we have focused on increasing the model performance on adversarial test sets in supervised settings while constraining the degradation on the clean test set. Moreover, we focused on a more complicated task: the joint classification and sequence tagging task. Conclusion In this paper, we study the robustness of language understanding models for the joint task of sentence classification and sequence word tagging in the field of task oriented dialog by generating adversarial test sets. We further discuss defense mechanisms using data augmentation and adversarial logit pairing loss. We first generate adversarial test sets using two methods, back-translation with two languages and sequence auto-encoder, and observe that the two methods generate different types of sentences. Our experiments show that creating the test set using a combination of the two methods above is better than either method alone, based on the model's performance on the test sets. Secondly, we propose how to improve the model's robustness against such adversarial test sets by both augmenting the training data and using a new loss function based on logit pairing with back-translated paraphrases annotated using self-training. 
The experiments show that combining data augmentation using back-translation and adversarial logit pairing loss performs best on the adversarial test sets. Conclusion ::: Future Work Though the adversarial accuracy has significantly improved using the above techniques, there is still a huge gap between the adversarial and clean test accuracy. Exploring other techniques to augment the data as well as other methods to leverage the augmented data is left for future work. For example, using sampling at the decoding time BIBREF17 or conditioning the seq2seq model on structure BIBREF11 has shown to produce more diverse examples. On the other hand, using more novel techniques such as multi-task tri-training BIBREF32 to label the unlabeled data rather than the simple self-training may yield better performance. Moreover, augmenting the whole dataset through back-translation and keeping the top k beams is not practical for large datasets. Exploring more efficient augmentations, i.e., which sentences to augment and which back-translated beams to keep, and adapting techniques such as in BIBREF33 are also interesting research directions to pursue. In this paper, we studied various ways of devising untargeted adversarial examples. This is in contrast with targeted attacks, which can perturb the original input data toward a particular label class. Encoding this information in the seq2seq model, e.g., feeding a one-hot encoded label vector, may deserve attention for future research.
Data augmentation (es) improved Adv es by 20% compared to the baseline; data augmentation (cs) improved Adv cs by 16.5% compared to the baseline; data augmentation (cs+es) improved both Adv cs and Adv es by at least 10% compared to the baseline; all models show improvements over the adversarial sets.
4bd894c365d85e20753d9d2cb6edebb8d6f422e9
4bd894c365d85e20753d9d2cb6edebb8d6f422e9_0
Q: How authors create adversarial test set to measure model robustness? Text: Introduction In computer vision, it is well known that otherwise competitive models can be "fooled" by adding intentional noise to the input images BIBREF0, BIBREF1. Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them. There has been recent interest in studying this adversarial attack phenomenon for natural language processing tasks, but that is harder than vision problems for at least two reasons: 1) textual input is discrete, and 2) adding noise may completely change a sentence's meaning or even make it meaningless. Although there are various works that devise adversarial examples in the NLP domain, defense mechanisms have been rare. BIBREF7 applied perturbation to the continuous word embeddings instead of the discrete tokens. This has been shown BIBREF8 to act as a regularizer that increases the model performance on the clean dataset but the perturbed inputs are not true adversarial examples, as they do not correspond to any input text and it cannot be tested whether they are perceptible to humans or not. Unrestricted adversarial examples BIBREF9 lift the constraint on the size of added perturbation and as such can be harder to defend against. Recently, Generative Adversarial Networks (GANs) alongside an auxiliary classifier have been proposed to generate adversarial examples for each label class. In the context of natural languages, use of seq2seq models BIBREF10 seems to be a natural way of perturbing an input example BIBREF11. Such perturbations, that practically paraphrase the original sentence, lie somewhere between the two methods described above. On one hand, the decoder is not constrained to be in a norm ball from the input and, on the other hand, the output is strongly conditioned on the input and hence, not unrestricted. Current NLP work on input perturbations and defense against them has mainly focused on sentence classification. In this paper, we examine a harder task: joint intent detection (sentence classification) and slot tagging (sequence word tagging) for task oriented dialog, which has been of recent interest BIBREF12 due to the ubiquity of commercial conversational AI systems. In the task and data described in Section SECREF2, we observe that exchanging a word with its synonym, as well as changing the structural order of a query can flip the model prediction. Table TABREF1 shows a few such sentence pairs for which the model prediction is different. 
Motivated by this, in this paper, we focus on analyzing the model robustness against two types of untargeted (that is, we do not target a particular perturbed label) perturbations: paraphrasing and random noise. In order to evaluate the defense mechanisms, we discuss how one can create an adversarial test set focusing on these two types of perturbations in the setting of joint sentence classification and sequence word tagging. Our contributions are: 1. Analyzing the robustness of the joint task of sentence classification and sequence word tagging through generating diverse untargeted adversarial examples using back-translation and a noisy autoencoder, and 2. Two techniques to improve upon a model's robustness: data augmentation using back-translation, and an adversarial logit pairing loss. Data augmentation using back-translation was earlier proposed as a defense mechanism for a sentence classification task BIBREF11; we extend it to sequence word tagging. We investigate using different types of machine translation systems, as well as different auxiliary languages, for both test set generation and data augmentation. Logit pairing was proposed for improving robustness in the image classification setting with norm ball attacks BIBREF6; we extend it to the NLP context. We show that combining the two techniques gives the best results. Task and Data In conversational AI, the language understanding task typically consists of classifying the intent of a sentence and tagging the corresponding slots. For example, a query like What's the weather in Sydney today could be annotated as a weather/find intent, with Sydney and today being location and datetime slots, respectively. This predicted intent then informs which API to call to answer the query and the predicted slots inform the arguments for the call. See Fig. FIGREF2. Slot tagging is arguably harder than intent classification since the spans need to align as well. We use the data provided by BIBREF13, which consists of task-oriented queries in the weather and alarm domains. The data contains 25k training, 3k evaluation and 7k test queries with 11 intents and 7 slots. We conflate and use a common set of labels for the two domains. Since there is no ambiguous slot or intent in the domains, unlike BIBREF14, we do not need to train a domain classifier, neither jointly nor at the beginning of the pipeline. If a query is not supported by the system but is unambiguously part of the alarm or weather domain, it is marked as alarm/unsupported or weather/unsupported respectively. Robustness Evaluation To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in the literature: back-translation and a noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that, though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes. Also note that to make the test set hard, we select only the examples for which the model prediction is different for the paraphrased sentence compared to the original sentence.
We, however, do not use the original annotation for the perturbed sentences – instead, we re-annotate the sentences manually. We explain the motivation and methodology for manual annotation later in this section. Robustness Evaluation ::: Automatically Generating Examples We describe two methods of devising untargeted (not targeted towards a particular label) paraphrase generation to find a subset that dramatically reduce the accuracy of the model mentioned in the previous section. We follow BIBREF11 and BIBREF15 to generate the potential set of sentences. Robustness Evaluation ::: Automatically Generating Examples ::: Back-translation Back-translation is a common technique in Machine Translation (MT) to improve translation performance, especially for low-resource language pairs BIBREF16, BIBREF17, BIBREF18. In back-translation, a MT system is used to translate the original sentences to an auxiliary language and a reverse MT system translates them back into the original language. At the final decoding phase, the top k beams are the variations of the original sentence. See Fig. FIGREF5. BIBREF11 which showed the effectiveness of simple back-translation in quickly generating adversarial paraphrases and breaking the correctly predicted examples. To increase diversity, we use two different MT systems and two different auxiliary languages - Czech (cs) and Spanish (es), to use with our training data in English (en). We use the Nematus BIBREF19 pre-trained cs-en model, which was also used in BIBREF11, as well as the FB internal MT system with pre-trained models for cs-en and es-en language pairs. Robustness Evaluation ::: Automatically Generating Examples ::: Noisy Sequence Autoencoder Following BIBREF15, we train a sequence autoencoder BIBREF20 using all the training data. At test time, we add noise to the last hidden state of the encoder, which is used to decode a variation. We found that not using attention results in more diverse examples, by giving the model more freedom to stray from the original sentence. We again decode the top k beams as variations to the original sentence. We observed that the seq2seq model results in less meaningful sentences than using the MT systems, which have been trained over millions of sentences. Robustness Evaluation ::: Annotation For each of the above methods, we use the original test data and generate paraphrases using k=5 beams. We remove the beams that are the same as the original sentence after lower-casing. In order to make sure we have a high-quality adversarial test set, we need to manually check the model's prediction on the above automatically-generated datasets. Unlike targeted methods to procure adversarial examples, our datasets have been generated by random perturbations in the original sentences. Hence, we expect that the true adversarial examples would be quite sparse. In order to obviate the need for manual annotation of a large dataset to find these sparse examples, we sample only from the paraphrases for which the predicted intent is different from the original sentence's predicted intent. This significantly increases the chance of encountering an adversarial example. Note that the model accuracy on this test set might not be zero for two reasons: 1) the flipped intent might actually be justified and not a mistake. 
For example, “Cancel the alarm” and “Pause the alarm” may be considered as paraphrases, but in the dataset they correspond to alarm/cancel and alarm/pause intents, respectively, and 2) the model might have been making an error in the original prediction, which was corrected by the paraphrase. (However, upon manual observation, this rarely happens). The other reason that we need manual annotation is that such unrestricted generation may result in new variations that can be meaningless or ambiguous without any context. Note that if the meaning can be easily inferred, we do not count slight grammatical errors as meaningless. Thus, we manually double annotate the sentences with flipped intent classification where the disagreements are resolved by a third annotator. As a part of this manual annotation, we also remove the meaningless and ambiguous sentences. Note that these adversarial examples are untargeted, i.e., we had no control in which new label a perturbed example would be sent to. Robustness Evaluation ::: Analysis We have shown adversarial examples from different sources alongside their original sentence in Table TABREF3. We observe that some patterns, such as addition of a definite article or gerund appear more often in the es test set which perhaps stems from the properties of the Spanish language (i.e., most nouns have an article and present simple/continuous tense are often interchangeable). On the other hand, there is more verbal diversity in the cs test set which may be because of the linguistic distance of Czech from English compared with Spanish. Moreover, we observe many imperative-to-declarative transformation in all the back-translated examples. Finally, the seq2seq examples seem to have a higher degree of freedom but that can tip them off into the meaningless realm more often too. Base Model A commonly used architecture for the task described in Section SECREF2 is a bidirectional LSTM for the sentence representation with separate projection layers for sentence (intent) classification and sequence word (slot) tagging BIBREF21, BIBREF22, BIBREF12, BIBREF14. In order to evaluate the model in a task oriented setting, exact match accuracy (from now on, accuracy) is of paramount importance. This is defined as the percentage of the sentences for which the intent and all the slots have been correctly tagged. We use two biLSTM layers of size 200 and two feed-forward layers for the intents and the slots. We use dropout of $0.3$ and train the model for 20 epochs with learning rate of $0.01$ and weight decay of $0.001$. This model, our baseline, achieves $87.1\%$ accuracy over the test set. The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little. Approaches to Improve Robustness In order to improve robustness of the base model against paraphrases and random noise, we propose two approaches: data augmentation and model smoothing via adversarial logit pairing. 
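Before turning to these defenses, the base model described in this section could be sketched roughly as follows. The biLSTM sizes, dropout, and optimizer settings follow the text; the embedding layer, the max-pooling used for the intent head, and the training loop are assumptions.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Two biLSTM layers (200 units) with separate projection heads for intent and slots."""
    def __init__(self, vocab_size, emb_dim, n_intents, n_slots, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True,
                               bidirectional=True, dropout=0.3)
        self.dropout = nn.Dropout(0.3)
        self.intent_head = nn.Linear(2 * hidden, n_intents)   # sentence-level intent logits
        self.slot_head = nn.Linear(2 * hidden, n_slots)       # per-token slot logits

    def forward(self, token_ids):
        states, _ = self.encoder(self.emb(token_ids))          # (B, T, 2*hidden)
        states = self.dropout(states)
        intent_logits = self.intent_head(states.max(dim=1).values)  # pooled over time (assumption)
        slot_logits = self.slot_head(states)                    # (B, T, n_slots)
        return intent_logits, slot_logits

# Training roughly as described: 20 epochs, learning rate 0.01, weight decay 0.001.
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.001)
```

Exact match accuracy would then count a sentence as correct only if the argmax intent and every argmax slot tag match the reference.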
Data augmentation generates and adds training data without manual annotation. This would help the model see variations that it has not observed in the original training data. As discussed before, back-translation is one way to generate unlabeled data automatically. In this paper, we show how we can automatically generate labels for such sentences during training time and show that it improves the robustness of the model. Note that for our task we have to automatically label both sentence labels (intent) and word tags (slots) for such sentences. The second method we propose is adding logit pairing loss. Unlike data augmentation, logit pairing treats the original and paraphrased sentence sets differently. As such, in addition to the cross-entropy loss over the original training data, we would have another loss term enforcing that the predictions for a sentence and its paraphrases are similar in the logit space. This would ensure that the model makes smooth decisions and prevent the model from making drastically different decisions with small perturbations. Approaches to Improve Robustness ::: Data Augmentation We generate back-translated data from the training data using pre-trained FB MT system. We keep the top 5 beams after the back-translation and remove the beams that already exist in the training data after lower-casing. We observed that including the top 5 beams results in quite diverse combinations without hurting the readability of the sentences. In order to use the unlabeled data, we use an extended version of self training BIBREF23 in which the original classifier is used to annotate the unlabeled data. Unsurprisingly, self-training can result in reinforcing the model errors. Since the sentence intents usually remain the same after paraphrasing for each paraphrase, we annotate its intent as the intent of the original sentence. Since many slot texts may be altered or removed during back-translation, we use self-training to label the slots of the paraphrases. We train the model on the combined clean and noisy datasets with the loss function being the original loss plus the loss on back-translated data weighted by 0.1 for which the accuracy on the clean dev set is still negligible. The model seemed to be quite insensitive against this weight, though and the clean dev accuracy was hurt by less than 1 point using weighing the augmented data equally as the original data. The accuracy over the clean test set using the augmented training data having Czech (cs) and Spanish (es) as the auxiliary languages are shown in Table TABREF8. We observe that, as expected, data augmentation improves accuracy on sentences generated using back-translation, however we see that it also improves accuracy on sentences generated using seq2seq autoencoder. We discuss the results in more detail in the next section. Approaches to Improve Robustness ::: Model smoothing via Logit Pairing BIBREF6 perturb images with the attacks introduced by BIBREF3 and report state-of-the-art results by matching the logit distribution of the perturbed and original images instead of matching only the classifier decision. They also introduce clean pairing in which the logit pairing is applied to random data points in the clean training data, which yields surprisingly good results. Here, we modify both methods for the language understanding task, including sequence word tagging, and expand the approach to targeted pairing for increasing robustness against adversarial examples. 
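A sketch of the augmentation scheme from the Data Augmentation subsection above, assuming helpers `predict_slots` and `joint_loss` exist for the base model: intents of back-translated paraphrases are copied from the original sentence, slots are filled in by self-training, and the augmented loss is down-weighted by 0.1 as in the text.

```python
AUG_WEIGHT = 0.1   # weight on the back-translated data, as described above

def pseudo_label(base_model, original_example, paraphrases):
    """Label paraphrases: intent is inherited, slots come from self-training."""
    labelled = []
    for text in paraphrases:
        labelled.append({
            "text": text,
            "intent": original_example.intent,            # intents survive paraphrasing
            "slots": predict_slots(base_model, text),     # self-training for slot spans
        })
    return labelled

def total_loss(model, clean_batch, augmented_batch):
    # Standard cross-entropy over intent and slots on clean data, plus the same
    # joint loss on pseudo-labelled paraphrases, scaled down by AUG_WEIGHT.
    return joint_loss(model, clean_batch) + AUG_WEIGHT * joint_loss(model, augmented_batch)
```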
Approaches to Improve Robustness ::: Model smoothing via Logit Pairing ::: Clean Logit Pairing Pairing random queries as proposed by BIBREF6 performed very poorly on our task. In the paper, we study the effects when we pair the sentences that have the same annotations, i.e., same intent and same slot labels. Consider a batch $M$, with $m$ clean sentences. For each tuple of intent and slot labels, we identify corresponding sentences in the batch, $M_k$ and sample pairs of sentences. We add a second cost function to the original cost function for the batch that enforces the logit vectors of the intent and same-label slots of those pairs of sentences to have similar distributions: where $I^{(i)}$ and $S^{(i)}_s$ denote the logit vectors corresponding to the intent and $s^{th}$ slot of the $i^{th}$ sentence, respectively. Moreover, $P$ is the total number of sampled pairs, and $\lambda _{sf}$ is a hyper-parameter. We sum the above loss for all the unique tuples of labels and normalize by the total number of pairs. Throughout this section, we use MSE loss for the function $L()$. We train the model with the same parameters as in Section SECREF2, with the only difference being that we use learning rate of $0.001$ and train for 25 epochs to improve model convergence. Contrary to what we expected, clean logit pairing on its own reduces accuracy on both clean and adversarial test sets. Our hypothesis is that the logit smoothing resulted by this method prevents the model from using weakly correlated features BIBREF5, which could have helped the accuracy of both the clean and adversarial test sets. Approaches to Improve Robustness ::: Model smoothing via Logit Pairing ::: Adversarial Logit Pairing (ALP) In order to make the model more robust to paraphrases, we pair a sentence with its back-translated paraphrases and impose the logit distributions to be similar. We generate the paraphrases using the FB MT system as in the previous section using es and cs as auxiliary languages. For the sentences $m^{(i)}$ inside the mini-batch and their paraphrase $\tilde{m}^{(i)}_k$, we add the following loss where $P$ is the total number of original-paraphrase sentence pairs. Note that the first term, which pairs the logit vectors of the predicted intents of a sentence and its paraphrase, can be obtained in an unsupervised fashion. For the second term however, we need to know the position of the slots in the paraphrases in order to be matched with the original slots. We use self-training again to tag the slots in the paraphrased sentence. Then, we pair the logit vectors corresponding to the common labels found among the original and paraphrases slots left to right. We also find that adding a similar loss for pairs of paraphrases of the original sentence, i.e. matching the logit vectors corresponding to the intent and slots, can help the performance on the accuracy over the adversarial test sets. In Table TABREF8, we show the results using ALP (using both the original-paraphrase and paraphrase-paraphrase pairs) for $\lambda _a=0.01$. Results and Discussion We observe that data augmentation using back-translation improves the accuracy across all the adversarial sets, including the seq2seq test set. Unsurprisingly, the gains are the highest when augmenting the training data using the same MT system and the same auxiliary language that the adversarial test set was generated from. 
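A minimal sketch of the adversarial logit pairing term described above: MSE between the intent logits of a sentence and its paraphrase, plus MSE between logits of slots found in both (aligned left to right via self-training labels), scaled by lambda_a = 0.01. The alignment helper and the normalization are simplifying assumptions, and the additional paraphrase-paraphrase pairs mentioned in the text are omitted.

```python
import torch
import torch.nn.functional as F

LAMBDA_A = 0.01

def alp_loss(model, originals, paraphrases, align_common_slots):
    """Logit pairing between originals and their paraphrases (batched token tensors)."""
    intent_o, slots_o = model(originals)      # (P, n_intents), (P, T, n_slots)
    intent_p, slots_p = model(paraphrases)
    loss = F.mse_loss(intent_o, intent_p)     # unsupervised intent-logit pairing
    n_pairs = 0
    for i in range(intent_o.size(0)):
        for j_o, j_p in align_common_slots(i):        # indices of shared slot labels
            loss = loss + F.mse_loss(slots_o[i, j_o], slots_p[i, j_p])
            n_pairs += 1
    return LAMBDA_A * loss / max(n_pairs, 1)
```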
However, more interestingly, it is still effective for adversarial examples generated using a different auxiliary language or a different MT system (which, as discussed in the previous section, yielded different types of sentences) from that which was used at the training time. More importantly, even if the generation process is different altogether, that is, the seq2seq dataset generated by the noisy autoencoder, some of the gains are still transferred and the accuracy over the adversarial examples increases. We also train a model using the es and cs back-translated data combined. Table TABREF8 shows that this improves the average performance over the adversarial sets. This suggests that in order to achieve robustness towards different types of paraphrasing, we would need to augment the training data using data generated with various techniques. But one can hope that some of the defense would be transferred for adversarial examples that come from unknown sources. Note that unlike the manually annotated test sets, the augmented training data contains noise both in the generation step (e.g. meaningless utterances) as well as in the automatic annotation step. But the model seems to be quite robust toward this random noise; its accuracy over the clean test set is almost unchanged while yielding nontrivial gains over the adversarial test sets. We observe that ALP results in similarly competitive performance on the adversarial test sets as using the data augmentation but it has a more detrimental effect on the clean test set accuracy. We hypothesize that data augmentation helps with smoothing the decision boundaries without preventing the model from using weakly correlated features. Hence, the regression on the clean test set is very small. This is in contrast with adversarial defense mechanisms such as ALP BIBREF5 which makes the model regress much more on the clean test set. We also combine ALP with the data augmentation technique that yields the highest accuracy on the adversarial test sets but incurs additional costs to the clean test set (more than three points compared with the base model). Adding clean logit pairing to the above resulted in the most defense transfer (i.e. accuracy on the seq2seq adversarial test set) but it is detrimental to almost all the other metrics. One possible explanation can be that the additional regularization stemming from the clean logit pairing helps with generalization (and hence, the transfer) from the back-translated augmented data to the seq2seq test set but it is not helpful otherwise. Related Work Adversarial examples BIBREF4 refer to intentionally devised inputs by an adversary which causes the model's accuracy to make highly-confident but erroneous predictions, e.g. Fast Gradient Sign Attack (FGSA) BIBREF4 and Projected gradient Descent (PGD) BIBREF3. In such methods, the constrained perturbation that (approximately) maximizes the loss for an original data point is added to it. In white-box attacks, the perturbations are chosen to maximize the model loss for the original inputs BIBREF4, BIBREF3, BIBREF24. Such attacks have shown to be transferable to other models which makes it possible to devise black-box attacks for a machine learning model by transferring from a known model BIBREF25, BIBREF1. Defense against such examples has been an elusive task, with proposed mechanisms proving effective against only particular attacks BIBREF3, BIBREF26. 
Adversarial training BIBREF4 augments the training data with carefully picked perturbations during the training time, which is robust against normed-ball perturbations. But in the general setting of having unrestricted adversarial examples, these defenses have been shown to be highly ineffective BIBREF27. BIBREF28 introduced white-box attacks for language by swapping one token for another based on the gradient of the input. BIBREF29 introduced an algorithm to generate adversarial examples for sentiment analysis and textual entailment by replacing words of the sentence with similar tokens that preserve the language model scoring and maximize the target class probability. BIBREF7 introduced one of the few defense mechanisms for NLP by extending adversarial training to this domain by perturbing the input embeddings and enforcing the label (distribution) to remain unchanged. BIBREF30 and BIBREF8 used this strategy as a regularization method for part-of-speech, relation extraction and NER tasks. Such perturbations resemble the normed-ball attacks for images but the perturbed input does not correspond to a real adversarial example. BIBREF11 studied two methods of generating adversarial data – back-translation and syntax-controlled sequence-to-sequence generation. They show that although the latter method is more effective in generating syntactically diverse examples, the former is also a fast and effective way of generating adversarial examples. There has been a large body of literature on language understanding for task oriented dialog using the intent/slot framework. Bidirectional LSTM for the sentence representation alongside separate projection layers for intent and slot tagging is the typical architecture for the joint task BIBREF21, BIBREF22, BIBREF12, BIBREF14. In parallel to the current work, BIBREF31 introduced unsupervised data augmentation for classification tasks by perturbing the training data and similar to BIBREF7 minimize the KL divergence between the predicted distributions on an unlabeled example and its perturbations. Their goal is to achieve high accuracy using as little labeled data as possible by leveraging the unlabeled data. In this paper, we have focused on increasing the model performance on adversarial test sets in supervised settings while constraining the degradation on the clean test set. Moreover, we focused on a more complicated task: the joint classification and sequence tagging task. Conclusion In this paper, we study the robustness of language understanding models for the joint task of sentence classification and sequence word tagging in the field of task oriented dialog by generating adversarial test sets. We further discuss defense mechanisms using data augmentation and adversarial logit pairing loss. We first generate adversarial test sets using two methods, back-translation with two languages and sequence auto-encoder, and observe that the two methods generate different types of sentences. Our experiments show that creating the test set using a combination of the two methods above is better than either method alone, based on the model's performance on the test sets. Secondly, we propose how to improve the model's robustness against such adversarial test sets by both augmenting the training data and using a new loss function based on logit pairing with back-translated paraphrases annotated using self-training. 
The experiments show that combining data augmentation using back-translation and adversarial logit pairing loss performs best on the adversarial test sets. Conclusion ::: Future Work Though the adversarial accuracy has significantly improved using the above techniques, there is still a huge gap between the adversarial and clean test accuracy. Exploring other techniques to augment the data as well as other methods to leverage the augmented data is left for future work. For example, using sampling at the decoding time BIBREF17 or conditioning the seq2seq model on structure BIBREF11 has shown to produce more diverse examples. On the other hand, using more novel techniques such as multi-task tri-training BIBREF32 to label the unlabeled data rather than the simple self-training may yield better performance. Moreover, augmenting the whole dataset through back-translation and keeping the top k beams is not practical for large datasets. Exploring more efficient augmentations, i.e., which sentences to augment and which back-translated beams to keep, and adapting techniques such as in BIBREF33 are also interesting research directions to pursue. In this paper, we studied various ways of devising untargeted adversarial examples. This is in contrast with targeted attacks, which can perturb the original input data toward a particular label class. Encoding this information in the seq2seq model, e.g., feeding a one-hot encoded label vector, may deserve attention for future research.
we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. We use two approaches described in the literature: back-translation and noisy sequence autoencoder.
5c4a2a3d6e02bcbeae784e439441524535916e85
5c4a2a3d6e02bcbeae784e439441524535916e85_0
Q: Do they compare with the MAML algorithm? Text: Introduction Few-shot learning (FSL) BIBREF0 , BIBREF1 , BIBREF2 aims to learn classifiers from few examples per class. Recently, deep learning has been successfully exploited for FSL via learning meta-models from a large number of meta-training tasks. These meta-models can be then used for rapid-adaptation for the target/meta-testing tasks that only have few training examples. Examples of such meta-models include: (1) metric-/similarity-based models, which learn contextual, and task-specific similarity measures BIBREF3 , BIBREF4 , BIBREF5 ; and (2) optimization-based models, which receive the input of gradients from a FSL task and predict either model parameters or parameter updates BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . In the past, FSL has mainly considered image domains, where all tasks are often sampled from one huge collection of data, such as Omniglot BIBREF10 and ImageNet BIBREF4 , making tasks come from a single domain thus related. Due to such a simplified setting, almost all previous works employ a common meta-model (metric-/optimization-based) for all few-shot tasks. However, this setting is far from the realistic scenarios in many real-world applications of few-shot text classification. For example, on an enterprise AI cloud service, many clients submit various tasks to train text classification models for business-specific purposes. The tasks could be classifying customers' comments or opinions on different products/services, monitoring public reactions to different policy changes, or determining users' intents in different types of personal assistant services. As most of the clients cannot collect enough data, their submitted tasks form a few-shot setting. Also, these tasks are significantly diverse, thus a common metric is insufficient to handle all these tasks. We consider a more realistic FSL setting where tasks are diverse. In such a scenario, the optimal meta-model may vary across tasks. Our solution is based on the metric-learning approach BIBREF5 and the key idea is to maintain multiple metrics for FSL. The meta-learner selects and combines multiple metrics for learning the target task using task clustering on the meta-training tasks. During the meta-training, we propose to first partition the meta-training tasks into clusters, making the tasks in each cluster likely to be related. Then within each cluster, we train a deep embedding function as the metric. This ensures the common metric is only shared across tasks within the same cluster. Further, during meta-testing, each target FSL task is assigned to a task-specific metric, which is a linear combination of the metrics defined by different clusters. In this way, the diverse few-shot tasks can derive different metrics from the previous learning experience. The key of the proposed FSL framework is the task clustering algorithm. Previous works BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 mainly focused on convex objectives, and assumed the number of classes is the same across different tasks (e.g. binary classification is often considered). To make task clustering (i) compatible with deep networks and (ii) able to handle tasks with a various number of labels, we propose a matrix-completion based task clustering algorithm. The algorithm utilizes task similarity measured by cross-task transfer performance, denoted by matrix $\textbf {S}$ . 
The $(i,j)$ -entry of $\textbf {S}$ is the estimated accuracy by adapting the learned representations on the $i$ -th (source) task to the $j$ -th (target) task. We rely on matrix completion to deal with missing and unreliable entries in $\textbf {S}$ and finally apply spectral clustering to generate the task partitions. To the best of our knowledge, our work is the first one addressing the diverse few-shot learning problem and reporting results on real-world few-shot text classification problems. The experimental results show that the proposed algorithm provides significant gains on few-shot sentiment classification and dialog intent classification tasks. It provides positive feedback on the idea of using multiple meta-models (metrics) to handle diverse FSL tasks, as well as the proposed task clustering algorithm on automatically detecting related tasks. Methodology We propose a task-clustering framework to address the diverse few-shot learning problem stated in Section "Problem Definition" . We have the FSL algorithm summarized in Algorithm UID12 . Figure 2 gives an overview of our idea. The initial step of the algorithm is a novel task clustering algorithm based on matrix completion, which is described in Section "Robust Task Clustering by Matrix Completion" . The few-shot learning method based on task clustering is then introduced in Section "Problem Definition"1 . Robust Task Clustering by Matrix Completion Our task clustering algorithm is shown in Algorithm UID13 . The algorithm first evaluates the transfer performance by applying a single-task model $i$ to another task $j$ (Section UID12 ), which will result in a (partially observed) cross-task transfer performance matrix $\textbf {S}$ . The matrix $\textbf {S}$ is then cleaned and completed, giving a symmetry task similarity matrix $\textbf {Y}$ for spectral clustering BIBREF20 . Using single-task models, we can compute performance scores $s_{ij}$ by adapting each $\mathrm {M}_i$ to each task $T_j (j\ne i)$ . This forms an $n \times n$ pair-wise classification performance matrix $\textbf {S}$ , called the transfer-performance matrix. Note that $\textbf {S}$ is asymmetric since usually $\textbf {S}_{ij} \ne \textbf {S}_{ji}$ . [ht] InputInput OutputOutput Meta-model $\mathcal {M} = \lbrace C_{1:K}\ (K\ \textrm {task clusters})$ , $\mathcal {F} = \left\lbrace f_1,f_2, \cdots , f_K \right\rbrace \ (K\ \textrm {task encoders})\rbrace $ . One classifier $\mathrm {M^{\prime }}_{i}$ for each target task $\mathrm {T}^{\prime }$ . Robust Task Clustering: $C_{1:K}$ = RobustTC( $\mathcal {T}$ , $K$ ) (Algorithm UID13 ) Cluster-Model Training: Train one encoder (multi-task MNet) $f_i$ on each task cluster $C_i$ (Section UID22 ) Few-Shot Learning on Cluster-models: Train a model $\mathrm {M}_{trg}$ on task $\mathcal {F} = \left\lbrace f_1,f_2, \cdots , f_K \right\rbrace \ (K\ \textrm {task encoders})\rbrace $0 with the method in Section UID23 . RobustTC-FSL: Task Clustering for Few-Shot Learning Ideally, the transfer performance could be estimated by training a MNet on task $i$ and directly evaluating it on task $j$ . However, the limited training data usually lead to generally low transfer performance of single-task MNet. As a result we adopt the following approach to estimate $\textbf {S}$ : We train a CNN classifier (Figure 1 (a)) on task $i$ , then take only the encoder $\mathrm {M}^{enc}_i$ from $\mathrm {M}_i$ and freeze it to train a classifier on task $j$ . 
This gives us a new task $j$ model, and we test this model on $D^{valid}_j$ to get the accuracy as the transfer-performance $\textbf {S}_{ij}$ . The score shows how the representations learned on task $i$ can be adapted to task $j$ , thus indicating the similarity between tasks. In text classification tasks, transferring an encoder with fine-tuned word embeddings from one task to another is difficult as there can be a significant difference between the two vocabularies. Hence, while learning the single-task CNN classifiers, we always make the word embeddings fixed. [t] InputInput OutputOutput $K$ task clusters $C_{1:K}$ Learning of Single-Task Models: train single-task models $\mathrm {M}_i$ for each task $\mathrm {T}_i$ Evaluation of Transfer-Performance Matrix: get performance matrix $\mathbf {\textbf {S}}$ (Section UID12 ) Score Filtering: Filter the uncertain scores in $\textbf {S}$ and construct the symmetric matrix $\textbf {Y}$ using Eq. ( 16 ) Matrix Completion: Complete the similar matrix $\textbf {X}$ from $\textbf {Y}$ using Eq. ( 18 ) Task Clustering: $C_{1:K}$ =SpectralClustering $C_{1:K}$0 RobustTC: Robust Task Clustering based on Matrix Completion Directly using the transfer performance for task clustering may suffer from both efficiency and accuracy issues. First, evaluation of all entries in the matrix $\textbf {S}$ involves conducting the source-target transfer learning $O(n^2)$ times, where $n$ is the number of meta-training tasks. For a large number of diverse tasks where the $n$ can be larger than 1,000, evaluation of the full matrix is unacceptable (over 1M entries to evaluate). Second, the estimated cross-task performance (i.e. some $\textbf {S}_{ij}$ or $\textbf {S}_{ji}$ scores) is often unreliable due to small data size or label noise. When the number of the uncertain values is large, they can collectively mislead the clustering algorithm to output an incorrect task-partition. To address the aforementioned challenges, we propose a novel task clustering algorithm based on the theory of matrix completion BIBREF21 . Specifically, we deal with the huge number of entries by randomly sample task pairs to evaluate the $\textbf {S}_{ij}$ and $\textbf {S}_{ji}$ scores. Besides, we deal with the unreliable entries and asymmetry issue by keeping only task pairs $(i,j)$ with consistent $\textbf {S}_{ij}$ and $O(n^2)$0 scores. as will be introduced in Eq. ( 16 ). Below, we describe our method in detail. First, we use only reliable task pairs to generate a partially-observed similarity matrix $\textbf {Y}$ . Specifically, if $\textbf {S}_{ij}$ and $\textbf {S}_{ji}$ are high enough, then it is likely that tasks $\lbrace i,j\rbrace $ belong to a same cluster and share significant information. Conversely, if $\textbf {S}_{ij}$ and $\textbf {S}_{ji}$ are low enough, then they tend to belong to different clusters. To this end, we need to design a mechanism to determine if a performance is high or low enough. Since different tasks may vary in difficulty, a fixed threshold is not suitable. Hence, we define a dynamic threshold using the mean and standard deviation of the target task performance, i.e., $\mu _j = \text{mean}(\textbf {S}_{:j})$ and $\sigma _j=\text{std}(\textbf {S}_{:j})$ , where $\textbf {S}_{:j}$ is the $j$ -th column of $\textbf {S}_{ij}$0 . We then introduce two positive parameters $\textbf {S}_{ij}$1 and $\textbf {S}_{ij}$2 , and define high and low performance as $\textbf {S}_{ij}$3 greater than $\textbf {S}_{ij}$4 or lower than $\textbf {S}_{ij}$5 , respectively. 
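A sketch of how the sampled transfer-performance matrix S could be filled, following the frozen-encoder procedure and the task-pair sampling described above. `train_single_task`, `train_classifier_on_frozen_encoder`, `evaluate`, and the task attributes are assumed helpers; unobserved entries are left as NaN.

```python
import random
import numpy as np

def transfer_performance_matrix(tasks, pair_fraction=0.2, seed=0):
    n = len(tasks)
    S = np.full((n, n), np.nan)                                  # unobserved entries stay NaN
    encoders = [train_single_task(t).encoder for t in tasks]     # frozen single-task CNN encoders
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for i, j in rng.sample(pairs, int(pair_fraction * len(pairs))):
        clf = train_classifier_on_frozen_encoder(encoders[i], tasks[j].train)
        S[i, j] = evaluate(clf, tasks[j].valid)                  # accuracy as transfer score S_ij
    return S
```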
When both $\textbf {S}_{ij}$6 and $\textbf {S}_{ij}$7 are high and low enough, we set their pairwise similarity as 1 and 0, respectively. Other task pairs are treated as uncertain task pairs and are marked as unobserved, and don't influence our clustering method. This leads to a partially-observed symmetric matrix $\textbf {S}_{ij}$8 , i.e., $$\small \textbf {Y}_{ij}\mathrm {=}\textbf {Y}_{ji}\mathrm {=}\left\lbrace \begin{array}{ll} {2}{*}{1} & \text{if}\ \ \textbf {S}_{ij} > \mu _j + p_1 \sigma _j\ \ \\ &\text{and}\ \ \textbf {S}_{ji} > \mu _i + p_1 \sigma _i\\ {2}{*}{0} & \text{if}\ \ \textbf {S}_{ij} < \mu _j - p_2 \sigma _j\ \ \\ &\text{and}\ \ \textbf {S}_{ji} < \mu _i - p_2 \sigma _i\\ \mathrm {unobserved} & \mathrm {otherwise} \end{array} \right. $$ (Eq. 16) Given the partially observed matrix $\textbf {Y}$ , we then reconstruct the full similarity matrix $\textbf {X}\in \mathbb {R}^{n\times n}$ . We first note that the similarity matrix $\textbf {X}$ should be of low-rank (proof deferred to appendix). Additionally, since the observed entries of $\textbf {Y}$ are generated based on high and low enough performance, it is safe to assume that most observed entries are correct and only a few may be incorrect. Therefore, we introduce a sparse matrix $\textbf {E}$ to capture the observed incorrect entries in $\textbf {Y}$ . Combining the two observations, $\textbf {Y}$ can be decomposed into the sum of two matrices $\textbf {X}$ and $\textbf {E}$ , where $\textbf {X}$ is a low rank matrix storing similarities between task pairs, and $\textbf {X}\in \mathbb {R}^{n\times n}$0 is a sparse matrix that captures the errors in $\textbf {X}\in \mathbb {R}^{n\times n}$1 . The matrix completion problem can be cast as the following convex optimization problem: $$&\min \limits _{\textbf {X},\ \textbf {E}} & \Vert \textbf {X}\Vert _* + \lambda \Vert \textbf {E}\Vert _1\\ & \mbox{s.t.}& \textbf {P}_{\Omega }(\textbf {X}+\textbf {E}) = \textbf {P}_{\Omega }(\textbf {Y}), \nonumber $$ (Eq. 18) where $\Vert \circ \Vert _*$ denotes the matrix nuclear norm, the convex surrogate of rank function. $\Omega $ is the set of observed entries in $\textbf {Y}$ , and $\textbf {P}_{\Omega }:\mathbb {R}^{n\times n} \mapsto \mathbb {R}^{n\times n}$ is a matrix projection operator defined as $$[\textbf {P}_{\Omega }(\textbf {A})]_{ij} = \left\lbrace \begin{array}{ll} \textbf {A}_{ij} & \text{if}\ (i,j) \in \Omega \nonumber \\ 0 & \mbox{otherwise}\nonumber \end{array} \right. $$ (Eq. 19) Finally, we apply spectral clustering on the matrix $\textbf {X}$ to get the task clusters. In the Appendix A, we show a Theorem as well as its proof, implying that under mild conditions, the problem ( 18 ) can perfectly recover the underlying similarity matrix $\textbf {X}^*$ if the number of observed correct entries is at least $O(n \log ^2 n)$ . This theoretical guarantee implies that for a large number $n$ of training tasks, only a tiny fraction of all task pairs is needed to reliably infer similarities over all task pairs. Few-Shot Learning with Task Clusters For each cluster $C_k$ , we train a multi-task MNet model (Figure 1 (b)) with all tasks in that cluster to encourage parameter sharing. The result, denoted as $f_k$ is called the cluster-encoder of cluster $C_k$ . The $k$ -th metric of the cluster is thus $\Lambda (x_1,x_2)=f_k(x_1)^{\intercal }f_k(x_2)$ . 
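The score filtering of Eq. (16), the completion problem of Eq. (18), and the final spectral clustering could be sketched as follows, with p1 = p2 = 0.5 as in the experiments. This uses cvxpy for the nuclear-norm/l1 program and scikit-learn for spectral clustering purely for illustration; a scalable first-order solver would likely replace cvxpy in practice.

```python
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

def filter_scores(S, p1=0.5, p2=0.5):
    """Build the partially observed symmetric matrix Y of Eq. (16) from S."""
    n = S.shape[0]
    mu = np.nanmean(S, axis=0)          # column-wise mean of observed transfer scores
    sd = np.nanstd(S, axis=0)
    Y = np.zeros((n, n))
    mask = np.zeros((n, n))             # 1 where Y_ij is observed
    for i in range(n):
        for j in range(i + 1, n):
            if np.isnan(S[i, j]) or np.isnan(S[j, i]):
                continue
            hi = S[i, j] > mu[j] + p1 * sd[j] and S[j, i] > mu[i] + p1 * sd[i]
            lo = S[i, j] < mu[j] - p2 * sd[j] and S[j, i] < mu[i] - p2 * sd[i]
            if hi or lo:
                Y[i, j] = Y[j, i] = 1.0 if hi else 0.0
                mask[i, j] = mask[j, i] = 1.0
    return Y, mask

def complete_and_cluster(Y, mask, n_clusters, lam=1.0):
    """Solve min ||X||_* + lam*||E||_1 s.t. P_Omega(X+E) = P_Omega(Y), then cluster X."""
    n = Y.shape[0]
    X, E = cp.Variable((n, n)), cp.Variable((n, n))
    objective = cp.Minimize(cp.normNuc(X) + lam * cp.norm1(E))
    constraints = [cp.multiply(mask, X + E) == mask * Y]
    cp.Problem(objective, constraints).solve()
    affinity = np.clip(X.value, 0.0, 1.0)      # non-negative affinities for spectral clustering
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(affinity)
```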
To build a predictor $\mathrm {M}$ with access to only a limited number of training samples, we make the prediction probability by linearly combining prediction from learned cluster-encoders: $$p(y|x) = \sum _k \alpha _k P(y|x; f_k).$$ (Eq. 24) where $f_k$ is the learned (and frozen) encoder of the $k$ -th cluster, $\lbrace \alpha _{k}\rbrace _{k=1}^{K}$ are adaptable parameters trained with few-shot training examples. And the predictor $P(y|x; f_k)$ from each cluster is $$\small P(y=y_l|x;f_k) = \frac{\exp \left\lbrace f_k (x_l)^{\intercal }f_k (x) \right\rbrace }{\sum _{i} \exp \left\lbrace f_k (x_{i})^{\intercal }f_k (x) \right\rbrace }$$ (Eq. 25) $x_{l}$ is the corresponding training sample of label $y_{l}$ . End-to-end joint optimization on training data becomes a popular methodology for deep learning systems, but it is not directly applicable to diverse FSL. One main reason is that deep networks could easily fit any task partitions if we optimize on training loss only, making the learned metrics not generalize, as discussed in Section "Related Work" . As a result, this work adopts a pipeline training approach and employing validation sets for task clustering. Combining reinforcement learning with meta-learning could be a potential solution to enable an end-to-end training for future work. Tasks and Data Sets We test our methods by conducting experiments on two text classification data sets. We used NLTK toolkit for tokenization. The task are divided into meta-training tasks and meta-testing tasks (target tasks), where the meta-training tasks are used for clustering and cluster-encoder training. The meta-testing tasks are few-shot tasks, which are used for evaluating the method in Eq. ( 24 ). Amazon Review Sentiment Classification First, following BIBREF14 , we construct multiple tasks with the multi-domain sentiment classification BIBREF22 data set. The dataset consists of Amazon product reviews for 23 types of products (see Appendix D for the details). For each product domain, we construct three binary classification tasks with different thresholds on the ratings: the tasks consider a review as positive if it belongs to one of the following buckets $=5$ stars, $>=4$ stars or $>=2$ stars. These buckets then form the basis of the task-setup, giving us 23 $\times $ 3 $=$ 69 tasks in total. For each domain we distribute the reviews uniformly to the 3 tasks. For evaluation, we select 12 (4 $\times $ 3) tasks from 4 domains (Books, DVD, Electronics, Kitchen) as the meta-testing (target) tasks out of all 23 domains. For the target tasks, we create 5-shot learning problems. Real-World Tasks: User Intent Classification for Dialog System The second dataset is from an online service which trains and serves intent classification models to various clients. The dataset comprises recorded conversations between human users and dialog systems in various domains, ranging from personal assistant to complex service-ordering or customer-service request scenarios. During classification, intent-labels are assigned to user utterances (sentences). We use a total of 175 tasks from different clients, and randomly sample 10 tasks from them as our target tasks. For each meta-training task, we randomly sample 64% data into a training set, 16% into a validation set, and use the rest as the test set. 
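Returning to the combination predictor of Eqs. (24)-(25) above, a minimal sketch is given below. Each frozen cluster-encoder scores a query against the few labelled support examples via dot products, and the per-cluster predictions are mixed with adaptable weights alpha fit on the few-shot training data. Encoders are assumed to map raw text to fixed-size torch vectors, and the softmax over alpha is an assumption (the text only specifies a linear combination).

```python
import torch
import torch.nn.functional as F

def cluster_prediction(encoder, query_text, support_texts, support_labels, n_classes):
    """Eq. (25): softmax over dot products with the support examples of one cluster."""
    q = encoder(query_text)                                   # (d,)
    s = torch.stack([encoder(t) for t in support_texts])      # (m, d)
    weights = F.softmax(s @ q, dim=0)                         # similarity to each support example
    probs = torch.zeros(n_classes)
    for w, y in zip(weights, support_labels):
        probs[y] += w
    return probs

def combined_prediction(encoders, alpha_logits, query, support_texts, support_labels, n_classes):
    """Eq. (24): mix the per-cluster predictions with learnable weights alpha."""
    alphas = F.softmax(alpha_logits, dim=0)                   # one adaptable weight per cluster
    per_cluster = torch.stack([
        cluster_prediction(f, query, support_texts, support_labels, n_classes)
        for f in encoders])                                    # (K, n_classes)
    return alphas @ per_cluster
```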
The number of labels for these tasks varies a lot (from 2 to 100, see Appendix D for details), making regular $k$ -shot settings not essentially limited-resource problems (e.g., 5-shot on 100 classes will give a good amount of 500 training instances). Hence, to adapt this to a FSL scenario, for target tasks we keep one example for each label (one-shot), plus 20 randomly picked labeled examples to create the training data. We believe this is a fairly realistic estimate of labeled examples one client could provide easily. Our matrix-completion method could handle a large number of tasks via task-pair sampling. However, the sizes of tasks in the above two few-shot learning datasets are not too huge, so evaluation of the whole task-similarity matrix is still tractable. In our experiments, the incomplete matrices mainly come from the score-filtering step (see Eq. 16 ). Thus there is limited randomness involved in the generation of task clusters. To strengthen the conclusion, we evaluate our algorithm on an additional dataset with a much larger number of tasks. The results are reported in the multi-task learning setting instead of the few-shot learning setting focused in this paper. Therefore we put the results to a non-archive version of this paper for further reference. Experiment Setup We compare our method to the following baselines: (1) Single-task CNN: training a CNN model for each task individually; (2) Single-task FastText: training one FastText model BIBREF23 with fixed embeddings for each individual task; (3) Fine-tuned the holistic MTL-CNN: a standard transfer-learning approach, which trains one MTL-CNN model on all the training tasks offline, then fine-tunes the classifier layer (i.e. $\mathrm {M}^{(cls)}$ Figure 1 (a)) on each target task; (4) Matching Network: a metric-learning based few-shot learning model trained on all training tasks; (5) Prototypical Network: a variation of matching network with different prediction function as Eq. 9 ; (6) Convex combining all single-task models: training one CNN classifier on each meta-training task individually and taking the encoder, then for each target task training a linear combination of all the above single-task encoders with Eq. ( 24 ). This baseline can be viewed as a variation of our method without task clustering. We initialize all models with pre-trained 100-dim Glove embeddings (trained on 6B corpus) BIBREF24 . In all experiments, we set both $p_1$ and $p_2$ parameters in ( 16 ) to $0.5$ . This strikes a balance between obtaining enough observed entries in $\textbf {Y}$ , and ensuring that most of the retained similarities are consistent with the cluster membership. The window/hidden-layer sizes of CNN and the initialization of embeddings (random or pre-trained) are tuned during the cluster-encoder training phase, with the validation sets of meta-training tasks. We have the CNN with window size of 5 and 200 hidden units. The single-metric FSL baselines have 400 hidden units in the CNN encoders. On sentiment classification, all cluster-encoders use random initialized word embeddings for sentiment classification, and use Glove embeddings as initialization for intent classification, which is likely because the training sets of the intent tasks are usually small. Since all the sentiment classification tasks are binary classification based on our dataset construction. A CNN classifier with binary output layer can be also trained as the cluster-encoder for each task cluster. 
Therefore we compared CNN classifier, matching network, and prototypical network on Amazon review, and found that CNN classifier performs similarly well as prototypical network. Since some of the Amazon review data is quite large which involves further difficulty on the computation of supporting sets, we finally use binary CNN classifiers as cluster-encoders in all the sentiment classification experiments. Selection of the learning rate and number of training epochs for FSL settings, i.e., fitting $\alpha $ s in Eq. ( 24 ), is more difficult since there is no validation data in few-shot problems. Thus we pre-select a subset of meta-training tasks as meta-validation tasks and tune the two hyper-parameters on the meta-validation tasks. Experimental Results Table 1 shows the main results on (i) the 12 few-shot product sentiment classification tasks by leveraging the learned knowledge from the 57 previously observed tasks from other product domains; and (ii) the 10 few-shot dialog intent classification tasks by leveraging the 165 previously observed tasks from other clients' data. Due to the limited training resources, all the supervised-learning baselines perform poorly. The two state-of-the-art metric-based FSL approaches, matching network (4) and prototypical network (5), do not perform better compared to the other baselines, since the single metric is not sufficient for all the diverse tasks. On intent classification where tasks are further diverse, all the single-metric or single-model methods (3-5) perform worse compared to the single-task CNN baseline (1). The convex combination of all the single training task models is the best performing baseline overall. However, on intent classification it only performs on par with the single-task CNN (1), which does not use any meta-learning or transfer learning techniques, mainly for two reasons: (i) with the growth of the number of meta-training tasks, the model parameters grow linearly, making the number of parameters (165 in this case) in Eq.( 24 ) too large for the few-shot tasks to fit; (ii) the meta-training tasks in intent classification usually contain less training data, making the single-task encoders not generalize well. In contrast, our RobustTC-FSL gives consistently better results compared to all the baselines. It outperforms the baselines in previous work (1-5) by a large margin of more than 6% on the sentiment classification tasks, and more than 3% on the intent classification tasks. It is also significantly better than our proposed baseline (6), showing the advantages of the usage of task clustering. Although the RobustTC-FSL improves over baselines on intent classification, the margin is smaller compared to that on sentiment classification, because the intent classification tasks are more diverse in nature. This is also demonstrated by the training accuracy on the target tasks, where several tasks fail to find any cluster that could provide a metric that suits their training examples. To deal with this problem, we propose an improved algorithm to automatically discover whether a target task belongs to none of the task-clusters. If the task doesn't belong to any of the clusters, it cannot benefit from any previous knowledge thus falls back to single-task CNN. The target task is treated as “out-of-clusters” when none of the clusters could achieve higher than 20% accuracy (selected on meta-validation tasks) on its training data. 
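The fallback rule just described might look as follows in code; all helper names are assumptions and the 20% threshold is the value selected on meta-validation tasks.

```python
FALLBACK_THRESHOLD = 0.20   # minimum training accuracy any cluster-encoder must reach

def adaptive_model(cluster_encoders, target_task):
    best = max(train_accuracy_with_encoder(f, target_task) for f in cluster_encoders)
    if best < FALLBACK_THRESHOLD:
        return train_single_task_cnn(target_task)          # task is treated as out-of-clusters
    return fit_alpha_combination(cluster_encoders, target_task)
```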
We call this method Adaptive RobustTC-FSL, which gives more than 5% performance boost over the best RobustTC-FSL result on intent classification. Note that the adaptive approach makes no difference on the sentiment tasks, because they are more closely related so re-using cluster-encoders always achieves better results compared to single-task CNNs. Analysis Figure 3 shows the effect of cluster numbers on the two tasks. RobustTC achieves best performance with 5 clusters on sentiment analysis (SA) and 20 clusters on intent classification (Intent). All clustering results significantly outperform the single-metric baselines (#cluster=1 in the figure). Compared to previous task clustering algorithms, our RobustTC is the only one that can cluster tasks with varying numbers of class labels (e.g. in intent classification tasks). Moreover, we show that even in the setting of all binary classifications tasks (e.g. the sentiment-analysis tasks) that previous task clustering research work on, our RobustTC is still slightly better for the diverse FSL problems. Figure 3 compares with a state-of-the-art logistic regression based task clustering method (ASAP-MT-LR) BIBREF14 . Our RobustTC clusters give slightly better FSL performance (e.g. 83.12 vs. 82.65 when #cluster=5). The top rows of Table 2 shows the ten clusters used to generate the sentiment classification results in Figure 3 . From the results, we can see that tasks with same thresholds are usually grouped together; and tasks in similar domains also tend to appear in the same clusters, even the thresholds are slightly different (e.g. t2 vs t4 and t4 vs t5). The bottom of the table shows the weights $\alpha $ s in Eq.( 24 ) for the target tasks with the largest improvement. It confirms that our RobustTC-FSL algorithm accurately adapts multiple metrics for the target tasks. Related Work Few Shot Learning FSL BIBREF0 , BIBREF1 , BIBREF2 aims to learn classifiers for new classes with only a few training examples per class. Recent deep learning based FSL approaches mainly fall into two categories: (1) metric-based approaches BIBREF3 , BIBREF4 , BIBREF5 , which aims to learn generalizable metrics and corresponding matching functions from multiple training tasks. These approaches essentially learn one metric for all tasks, which is sub-optimal when the tasks are diverse. (2) optimization-based approaches BIBREF6 , BIBREF7 , BIBREF8 , which aims to learn to optimize model parameters (by either predicting the parameter updates or directly predicting the model parameters) given the gradients computed from few-shot examples. Previous FSL research usually adopts the $k$ -shot, $N$ -way setting, where all the few-shot tasks have the same number of $N$ class labels, and each label has $k$ training instances. Moreover, these few-shot tasks are usually constructed by sampling from one huge dataset, thus all the tasks are guaranteed to be related to each other. However, in real-world applications, the few-shot learning tasks could be diverse: there are different tasks with varying number of class labels and they are not guaranteed to be related to each other. As a result, a single meta-model or metric-model is usually not sufficient to handle all the few-shot tasks. 
Task Clustering Previous task clustering methods measure the task relationships in terms of similarities among single-task model parameters BIBREF11 , BIBREF12 ; or jointly assign task clusters and train model parameters for each cluster to minimize the overall training loss BIBREF13 , BIBREF14 , BIBREF25 . These methods usually work on convex models but do not fit the deep networks, mainly because of (i) the parameters of deep networks are very high-dimensional and their similarities are not necessarily related to the functional similarities; and (ii) deep networks have flexible representation power so they may overfit to arbitrary cluster assignment if we consider training loss alone. Moreover, these methods require identical class label sets across different tasks, which does not hold in most of the realistic settings. Conclusion We propose a few-shot learning approach for diverse tasks based on task clustering. The proposed method can use multiple metrics, and performs significantly better compared to previous single-metric methods when the few-shot tasks come from diverse domains. Future work includes applying the task-clustering idea to other FSL algorithms BIBREF6 , BIBREF8 , BIBREF26 , and exploring more advanced composition methods of cluster-encoders beyond linear combination BIBREF27 , BIBREF28 .
No
4704cbb35762d0172f5ac6c26b67550921567a65
4704cbb35762d0172f5ac6c26b67550921567a65_0
Q: By how much does transfer learning improve performance on this task? Text: Introduction User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content. Although this topic now has been studied for more than two decades, so far there has been little work on offensive language detection for German social media content. Regarding this, we present a new approach to detect offensive language as defined in the shared task of the GermEval 2018 workshop. For our contribution to the shared task, we focus on the question how to apply transfer learning for neural network-based text classification systems. In Germany, the growing interest in hate speech analysis and detection is closely related to recent political developments such as the increase of right-wing populism, and societal reactions to the ongoing influx of refugees seeking asylum BIBREF0 . Content analysis studies such as InstituteforStrategicDialogue.2018 have shown that a majority of hate speech comments in German Facebook is authored by a rather small group of very active users (5% of all accounts engaging in hate speech). The findings suggest that already such small groups are able to severely disturb social media debates for large audiences. From the perspective of natural language processing, the task of automatic detection of offensive language in social media is complex due to three major reasons. First, we can expect `atypical' language data due to incorrect spellings, false grammar and non-standard language variations such as slang terms, intensifiers, or emojis/emoticons. For the automatic detection of offensive language, it is not quite clear whether these irregularities should be treated as `noise' or as a signal. Second, the task cannot be reduced to an analysis of word-level semantics only, e.g. spotting offensive keyterms in the data. Instead, the assessment of whether or not a post contains offensive language can be highly dependent on sentence and discourse level semantics, as well as subjective criteria. In a crowd-sourcing experiment on `hate speech' annotation, Ross.2016 achieved only very low inter-rater agreement between annotators. Offensive language is probably somewhat easier to achieve agreement on, but still sentence-level semantics and context or `world knowledge' remains important. Third, there is a lack of a common definition of the actual phenomenon to tackle. Published studies focus on `hostile messages', `flames', `hate speech', `discrimination', `abusive language', or `offensive language'. Although certainly overlapping, each of these categories has been operationalized in a slightly different manner. Since category definitions do not match properly, publicly available annotated datasets and language resources for one task cannot be used directly to train classifiers for any respective other task. Related Work Automatic detection of offensive language is a well-studied phenomenon for the English language. Initial works on the detection of `hostile messages' have been published already during the 1990s BIBREF4 . 
An overview of recent approaches comparing the different task definitions, feature sets and classification methods is given by Schmidt.2017. A major step forward to support the task was the publication of a large publicly available, manually annotated dataset by Yahoo research BIBREF5 . They provide a classification approach for detection of abusive language in Yahoo user comments using a variety of linguistic features in a linear classification model. One major result of their work was that learning text features from comments which are temporally close to the to-be-predicted data is more important than learning features from as much data as possible. This is especially important for real-life scenarios of classifying streams of comment data. In addition to token-based features, Xiang.2012 successfully employed topical features to detect offensive tweets. We will build upon this idea by employing topical data in our transfer learning setup. Transfer learning recently has gained a lot of attention since it can be easily applied to neural network learning architectures. For instance, Howard.2018 propose a generic transfer learning setup for text classification based on language modeling for pre-training neural models with large background corpora. To improve offensive language detection for English social media texts, a transfer learning approach was recently introduced by Felbo.2017. Their `deepmoji' approach relies on the idea to pre-train a neural network model for an actual offensive language classification task by using emojis as weakly supervised training labels. On a large collection of millions of randomly collected English tweets containing emojis, they try to predict the specific emojis from features obtained from the remaining tweet text. We will follow this idea of transfer learning to evaluate it for offensive language detection in German Twitter data together with other transfer learning strategies. GermEval 2018 Shared Task Organizers of GermEval 2018 provide training and test datasets for two tasks. Task 1 is a binary classification for deciding whether or not a German tweet contains offensive language (the respective category labels are `offense' and `other'). Task 2 is a multi-class classification with more fine-grained labels sub-categorizing the same tweets into either `insult', `profanity', `abuse', or `other'. The training data contains 5,008 manually labeled tweets sampled from Twitter from selected accounts that are suspected to contain a high share of offensive language. Manual inspection reveals a high share of political tweets among those labeled as offensive. These tweets range from offending single Twitter users, politicians and parties to degradation of whole social groups such as Muslims, migrants or refugees. The test data contains 3,532 tweets. To create a realistic scenario of truly unseen test data, training and test set are sampled from disjoint user accounts. No standard validation set is provided for the task. To optimize hyper-parameters of our classification models and allow for early stopping to prevent the neural models from overfitting, we created our own validation set. For this, we used the last 808 examples from the provided training set. The remaining first 4,200 examples were used to train our models. Background Knowledge Since the provided dataset for offensive language detection is rather small, we investigate the potential of transfer learning to increase classification performance. 
For this, we use the following labeled as well as unlabeled datasets. A recently published resource of German language social media data has been published by Schabus2017. Among other things, the dataset contains 11,773 labeled user comments posted to the Austrian newspaper website `Der Standard'. Comments have not been annotated for offensive language, but for categories such as positive/negative sentiment, off-topic, inappropriate or discriminating. As a second resource, we use a background corpus of German tweets that were collected using the Twitter streaming API from 2011 to 2017. Since the API provides a random fraction of all tweets (1%), language identification is performed using `langid.py' BIBREF6 to filter for German tweets. For all years combined, we obtain about 18 million unlabeled German tweets from the stream, which can be used as a large, in-domain background corpus. For a transfer learning setup, we need to specify a task to train the model and prepare the corresponding dataset. We compare the following three methods. As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3080 other) from on the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to the Task 1 of GermEval 2018) Following the approach of Felbo.2017, we constructed a weakly-supervised training dataset from our Twitter background corpus. From all tweets posted between 2013 and 2017, we extract those containing at least one emoji character. In the case of several emojis in one tweet, we duplicate the tweet for each unique emoji type. Emojis are then removed from the actual tweets and treated as a label to predict by the neural model. This results in a multi-class classification task to predict the right emoji out of 1,297 different ones. Our training dataset contains 1,904,330 training examples. As a final method, we create a training data set for transfer learning in a completely unsupervised manner. For this, we compute an LDA clustering with INLINEFORM0 topics on 10 million tweets sampled from 2016 and 2017 from our Twitter background corpus containing at least two meaningful words (i.e. alphanumeric sequences that are not stopwords, URLs or user mentions). Tweets also have been deduplicated before sampling. From the topic-document distribution of the resulting LDA model, we determined the majority topic id for each tweet as a target label for prediction during pre-training our neural model. Pre-training of the neural model was conducted on the 10 million tweets with batch size 128 for 10 epochs. Text Classification In the following section, we describe one linear classification model in combination with specifically engineered features, which we use as a baseline for the classification task. We further introduce a neural network model as a basis for our approach to transfer learning. This model achieves the highest performance for offensive language detection, as compared to our baseline. 
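Before turning to the classifiers, the following is a minimal sketch (not the authors' code) of how the unsupervised LDA pre-training labels described above could be derived: each tweet receives its majority topic id as a weak target label. gensim is an assumed tool choice, the token filter is only a rough stand-in for "meaningful words", and the topic count is left to the caller because the paper's value is not given in this text (INLINEFORM0).

```python
# Sketch of LDA-based pre-training labels: assign each tweet its majority topic.
# Assumptions: gensim for LDA, a crude alphanumeric/stopword token filter.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def build_topic_labels(tweets, num_topics, stopwords=frozenset()):
    # crude filter: keep alphanumeric tokens that are not stopwords
    docs = [[t for t in tweet.lower().split()
             if t.isalnum() and t not in stopwords] for tweet in tweets]
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)

    labels = []
    for bow in corpus:
        # majority topic id of the tweet becomes the pre-training target label
        topics = lda.get_document_topics(bow, minimum_probability=0.0)
        labels.append(max(topics, key=lambda t: t[1])[0])
    return labels, lda, dictionary
```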
SVM baseline: The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before. We induce token features based on the Twitter background corpus. Because tweets are usually very short, they are not an optimal source to obtain good estimates on inverse document frequencies (IDF). To obtain a better feature weighting, we calculate IDF scores based on the Twitter corpus combined with an in-house product review dataset (cf. ibid.). From this combined corpus, we compute the IDF scores and 300-dimensional word embeddings BIBREF9 for all contained features. Following Ruppert2017, we use the IDF scores to obtain the highest-weighted terms per category in the training data. Here, we obtain words like Staatsfunk, Vasall (state media, vassal) or deutschlandfeindlichen (Germany-opposing) for the category `abuse' and curse words for `insult'. Further, IDF scores are used to weight the word vectors of all terms in a tweet. Additionally, we employ a polarity lexicon and perform lexical expansion on it to obtain new entries from our in-domain background corpus that are weighted on a `positive–negative' continuum. Lexical expansion is based on distributional word similarity as described in Kumar.2016. BiLSTM-CNN for Text Classification For transfer learning, we rely on a neural network architecture implemented in the Keras framework for Python. Our model (see Fig. FIGREF15 ) combines a bi-directional LSTM layer BIBREF1 with 100 units followed by three parallel convolutional layers (CNN), each with a different kernel size INLINEFORM0 , and a filter size 200. The outputs of the three CNN blocks are max-pooled globally and concatenated. Finally, features encoded by the CNN blocks are fed into a dense layer with 100 units, followed by the prediction layer. Except for this final layer which uses Softmax activation, we rely on LeakyReLU activation BIBREF10 for the other model layers. For regularization, dropout is applied to the LSTM layer and to each CNN block after global max-pooling (dropout rate 0.5). For training, we use the Nesterov Adam optimization and categorical cross-entropy loss with a learning rate of 0.002. The intuition behind this architecture is that the recurrent LSTM layer can serve as a feature encoder for general language characteristics from sequences of semantic word embeddings. The convolutional layers on top of this can then encode category related features delivered by the LSTM while the last dense layers finally fine-tune highly category-specific features for the actual classification task. As input, we feed 300-dimensional word embeddings obtained from fastText BIBREF11 into our model. Since fastText also makes use of sub-word information (character n-grams), it has the great advantage that it can provide semantic embeddings also for words that have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by mikolov2018advances. First, we unify all Twitter-typical user mentions (`@username') and URLs into a single string representation and reduce all characters to lower case. Then, we split tweets into tokens at boundaries of changing character classes. As an exception, sequences of emoji characters are split into single character tokens. Finally, for each token, an embedding vector is obtained from the fastText model. 
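The BiLSTM-CNN architecture described above can be sketched roughly as follows (this is an illustration, not the authors' implementation). The kernel sizes and sequence length are placeholders, since the kernel sizes are not specified in this text (INLINEFORM0); the additional user-cluster input feeding the penultimate dense layer is omitted for brevity.

```python
# Sketch of the BiLSTM-CNN classifier: BiLSTM (100 units) -> three parallel
# Conv1D blocks (200 filters each) -> global max-pooling -> dense(100) -> softmax.
# LeakyReLU everywhere except the output, dropout 0.5, Nesterov Adam (Nadam).
from tensorflow.keras import layers, models, optimizers

MAX_LEN, EMB_DIM, NUM_CLASSES = 50, 300, 2   # placeholders (Task 1 has 2 classes)
KERNEL_SIZES = (3, 4, 5)                     # hypothetical; not stated in the text

inp = layers.Input(shape=(MAX_LEN, EMB_DIM))  # pre-computed fastText embeddings
x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(inp)
x = layers.Dropout(0.5)(x)

blocks = []
for k in KERNEL_SIZES:                        # three parallel CNN blocks
    c = layers.Conv1D(200, k)(x)
    c = layers.LeakyReLU()(c)
    c = layers.GlobalMaxPooling1D()(c)
    c = layers.Dropout(0.5)(c)
    blocks.append(c)

x = layers.Concatenate()(blocks)
x = layers.Dense(100)(x)
x = layers.LeakyReLU()(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inp, out)
model.compile(optimizer=optimizers.Nadam(learning_rate=0.002),
              loss="categorical_crossentropy", metrics=["accuracy"])
```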
For offensive language detection in Twitter, users addressed in tweets might be an additional relevant signal. We assume it is more likely that politicians or news agencies are addressees of offensive language than, for instance, musicians or athletes. To make use of such information, we obtain a clustering of user ids from our Twitter background corpus. From all tweets in our stream from 2016 or 2017, we extract those tweets that have at least two @-mentions and all of the @-mentions have been seen at least five times in the background corpus. Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). For our final classification approach, cluster ids for users mentioned in tweets are fed as a second input in addition to (sub-)word embeddings to the penultimate dense layer of the neural network model. Transfer Learning As mentioned earlier, we investigate potential strategies for transfer learning to achieve optimal performance. For this, we compare three different methods to pre-train our model with background data sets. We also compare three different strategies to combat `catastrophic forgetting' during training on the actual target data. Transfer Learning Strategies Once the neural model has been pre-trained on the above-specified targets and corresponding datasets, we can apply it for learning our actual target task. For this, we need to remove the final prediction layer of the pre-trained model (i.e. Layer 4 in Fig. FIGREF15 ), and add a new dense layer for prediction of one of the actual label sets (two for Task 1, four for Task 2). The training for the actual GermEval tasks is conducted with batch size 32 for up to 50 epochs. To prevent the aforementioned effect of forgetting pre-trained knowledge during this task-specific model training, we evaluate three different strategies. In Howard.2018, gradual unfreezing of pre-trained model weights is proposed as one strategy to mitigate forgetting. The basic idea is to initially freeze all pre-trained weights of the neural model and keep only the newly added last layer trainable (i.e. Layer 4 in Fig. FIGREF15 ). After training that last layer for one epoch on the GermEval training data, the next lower frozen layer is unfrozen and training will be repeated for another epoch. This will be iterated until all layers (4 to 1) are unfrozen. Following the approach of Felbo.2017, we do not iteratively unfreeze all layers of the model, but only one at a time. First, the newly added final prediction layer is trained while all other model weights remain frozen. Training is conducted for up to 50 epochs. The best performing model during these epochs with respect to our validation set is then used in the next step of fine-tuning the pre-trained model layers. For the bottom-up strategy, we unfreeze the lowest layer (1) containing the most general knowledge first, then we continue optimization with the more specific layers (2 and 3) one after the other. 
During fine-tuning of each single layer, all other layers remain frozen and training is performed for 50 epochs selecting the best performing model at the end of each layer optimization. In a final round of fine-tuning, all layers are unfrozen. This proceeding is similar the one described above, but inverts the order of unfreezing single layers from top to bottom sequentially fine-tuning layers 4, 3, 2, 1 individually, and all together in a final round. All strategies are compared to the baseline of no freezing of model weights, but training all layers at once directly after pre-training with one of the three transfer datasets. Evaluation Since there is no prior state-of-the-art for the GermEval Shared Task 2018 dataset, we evaluate the performance of our neural model compared to the baseline SVM architecture. We further compare the different tasks and strategies for transfer learning introduced above and provide some first insights on error analysis. Conclusion In this paper, we presented our neural network text classification approach for offensive language detection on the GermEval 2018 Shared Task dataset. We used a combination of BiLSTM and CNN architectures for learning. As task-specific adaptations of standard text classification, we evaluated different datasets and strategies for transfer learning, as well as additional features obtained from users addressed in tweets. The coarse-grained offensive language detection could be realized to a much better extent than the fine-grained task of separating four different categories of insults (accuracy 77.5% vs. 73.7%). From our experiments, four main messages can be drawn: The fact that our unsupervised, task-agnostic pre-training by LDA topic transfer performed best suggests that this approach will also contribute beneficially to other text classification tasks such as sentiment analysis. Thus, in future work, we plan to evaluate our approach with regard to such other tasks. We also plan to evaluate more task-agnostic approaches for transfer learning, for instance employing language modeling as a pre-training task.
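As a rough illustration of the gradual unfreezing strategy discussed above, the following sketch assumes a pre-trained Keras model (such as the one sketched earlier) whose old prediction layer has already been replaced by a fresh dense layer; it unfreezes one additional Keras layer per one-epoch round, from top to bottom. Data variables are placeholders, and the single-layer bottom-up/top-down variants would only change the order in which the `trainable` flags are toggled.

```python
# Gradual unfreezing (Howard.2018 style) on a pre-trained Keras model.
# x_train, y_train, x_val, y_val stand in for the GermEval training/validation data.
from tensorflow.keras import optimizers

def gradual_unfreeze(model, x_train, y_train, x_val, y_val):
    # start with everything frozen except the newly added prediction layer
    for layer in model.layers:
        layer.trainable = False
    model.layers[-1].trainable = True

    # unfreeze one additional layer (top to bottom) per one-epoch training round
    for depth in range(1, len(model.layers) + 1):
        for layer in model.layers[-depth:]:
            layer.trainable = True
        # recompile so the updated trainable flags take effect
        model.compile(optimizer=optimizers.Nadam(learning_rate=0.002),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        model.fit(x_train, y_train, validation_data=(x_val, y_val),
                  batch_size=32, epochs=1)
    return model
```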
In Task 1, the best transfer learning strategy improves the F1 score by 4.4% and accuracy by 3.3%; in Task 2, the best strategy improves the F1 score by 2.9% and accuracy by 1.7%.
38a5cc790f66a7362f91d338f2f1d78f48c1e252
38a5cc790f66a7362f91d338f2f1d78f48c1e252_0
Q: What baseline is used? Text: Introduction User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content. Although this topic now has been studied for more than two decades, so far there has been little work on offensive language detection for German social media content. Regarding this, we present a new approach to detect offensive language as defined in the shared task of the GermEval 2018 workshop. For our contribution to the shared task, we focus on the question how to apply transfer learning for neural network-based text classification systems. In Germany, the growing interest in hate speech analysis and detection is closely related to recent political developments such as the increase of right-wing populism, and societal reactions to the ongoing influx of refugees seeking asylum BIBREF0 . Content analysis studies such as InstituteforStrategicDialogue.2018 have shown that a majority of hate speech comments in German Facebook is authored by a rather small group of very active users (5% of all accounts engaging in hate speech). The findings suggest that already such small groups are able to severely disturb social media debates for large audiences. From the perspective of natural language processing, the task of automatic detection of offensive language in social media is complex due to three major reasons. First, we can expect `atypical' language data due to incorrect spellings, false grammar and non-standard language variations such as slang terms, intensifiers, or emojis/emoticons. For the automatic detection of offensive language, it is not quite clear whether these irregularities should be treated as `noise' or as a signal. Second, the task cannot be reduced to an analysis of word-level semantics only, e.g. spotting offensive keyterms in the data. Instead, the assessment of whether or not a post contains offensive language can be highly dependent on sentence and discourse level semantics, as well as subjective criteria. In a crowd-sourcing experiment on `hate speech' annotation, Ross.2016 achieved only very low inter-rater agreement between annotators. Offensive language is probably somewhat easier to achieve agreement on, but still sentence-level semantics and context or `world knowledge' remains important. Third, there is a lack of a common definition of the actual phenomenon to tackle. Published studies focus on `hostile messages', `flames', `hate speech', `discrimination', `abusive language', or `offensive language'. Although certainly overlapping, each of these categories has been operationalized in a slightly different manner. Since category definitions do not match properly, publicly available annotated datasets and language resources for one task cannot be used directly to train classifiers for any respective other task. Related Work Automatic detection of offensive language is a well-studied phenomenon for the English language. Initial works on the detection of `hostile messages' have been published already during the 1990s BIBREF4 . An overview of recent approaches comparing the different task definitions, feature sets and classification methods is given by Schmidt.2017. 
A major step forward to support the task was the publication of a large publicly available, manually annotated dataset by Yahoo research BIBREF5 . They provide a classification approach for detection of abusive language in Yahoo user comments using a variety of linguistic features in a linear classification model. One major result of their work was that learning text features from comments which are temporally close to the to-be-predicted data is more important than learning features from as much data as possible. This is especially important for real-life scenarios of classifying streams of comment data. In addition to token-based features, Xiang.2012 successfully employed topical features to detect offensive tweets. We will build upon this idea by employing topical data in our transfer learning setup. Transfer learning recently has gained a lot of attention since it can be easily applied to neural network learning architectures. For instance, Howard.2018 propose a generic transfer learning setup for text classification based on language modeling for pre-training neural models with large background corpora. To improve offensive language detection for English social media texts, a transfer learning approach was recently introduced by Felbo.2017. Their `deepmoji' approach relies on the idea to pre-train a neural network model for an actual offensive language classification task by using emojis as weakly supervised training labels. On a large collection of millions of randomly collected English tweets containing emojis, they try to predict the specific emojis from features obtained from the remaining tweet text. We will follow this idea of transfer learning to evaluate it for offensive language detection in German Twitter data together with other transfer learning strategies. GermEval 2018 Shared Task Organizers of GermEval 2018 provide training and test datasets for two tasks. Task 1 is a binary classification for deciding whether or not a German tweet contains offensive language (the respective category labels are `offense' and `other'). Task 2 is a multi-class classification with more fine-grained labels sub-categorizing the same tweets into either `insult', `profanity', `abuse', or `other'. The training data contains 5,008 manually labeled tweets sampled from Twitter from selected accounts that are suspected to contain a high share of offensive language. Manual inspection reveals a high share of political tweets among those labeled as offensive. These tweets range from offending single Twitter users, politicians and parties to degradation of whole social groups such as Muslims, migrants or refugees. The test data contains 3,532 tweets. To create a realistic scenario of truly unseen test data, training and test set are sampled from disjoint user accounts. No standard validation set is provided for the task. To optimize hyper-parameters of our classification models and allow for early stopping to prevent the neural models from overfitting, we created our own validation set. For this, we used the last 808 examples from the provided training set. The remaining first 4,200 examples were used to train our models. Background Knowledge Since the provided dataset for offensive language detection is rather small, we investigate the potential of transfer learning to increase classification performance. For this, we use the following labeled as well as unlabeled datasets. A recently published resource of German language social media data has been published by Schabus2017. 
Among other things, the dataset contains 11,773 labeled user comments posted to the Austrian newspaper website `Der Standard'. Comments have not been annotated for offensive language, but for categories such as positive/negative sentiment, off-topic, inappropriate or discriminating. As a second resource, we use a background corpus of German tweets that were collected using the Twitter streaming API from 2011 to 2017. Since the API provides a random fraction of all tweets (1%), language identification is performed using `langid.py' BIBREF6 to filter for German tweets. For all years combined, we obtain about 18 million unlabeled German tweets from the stream, which can be used as a large, in-domain background corpus. For a transfer learning setup, we need to specify a task to train the model and prepare the corresponding dataset. We compare the following three methods. As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3080 other) from on the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to the Task 1 of GermEval 2018) Following the approach of Felbo.2017, we constructed a weakly-supervised training dataset from our Twitter background corpus. From all tweets posted between 2013 and 2017, we extract those containing at least one emoji character. In the case of several emojis in one tweet, we duplicate the tweet for each unique emoji type. Emojis are then removed from the actual tweets and treated as a label to predict by the neural model. This results in a multi-class classification task to predict the right emoji out of 1,297 different ones. Our training dataset contains 1,904,330 training examples. As a final method, we create a training data set for transfer learning in a completely unsupervised manner. For this, we compute an LDA clustering with INLINEFORM0 topics on 10 million tweets sampled from 2016 and 2017 from our Twitter background corpus containing at least two meaningful words (i.e. alphanumeric sequences that are not stopwords, URLs or user mentions). Tweets also have been deduplicated before sampling. From the topic-document distribution of the resulting LDA model, we determined the majority topic id for each tweet as a target label for prediction during pre-training our neural model. Pre-training of the neural model was conducted on the 10 million tweets with batch size 128 for 10 epochs. Text Classification In the following section, we describe one linear classification model in combination with specifically engineered features, which we use as a baseline for the classification task. We further introduce a neural network model as a basis for our approach to transfer learning. This model achieves the highest performance for offensive language detection, as compared to our baseline. SVM baseline: The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. 
We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before. We induce token features based on the Twitter background corpus. Because tweets are usually very short, they are not an optimal source to obtain good estimates on inverse document frequencies (IDF). To obtain a better feature weighting, we calculate IDF scores based on the Twitter corpus combined with an in-house product review dataset (cf. ibid.). From this combined corpus, we compute the IDF scores and 300-dimensional word embeddings BIBREF9 for all contained features. Following Ruppert2017, we use the IDF scores to obtain the highest-weighted terms per category in the training data. Here, we obtain words like Staatsfunk, Vasall (state media, vassal) or deutschlandfeindlichen (Germany-opposing) for the category `abuse' and curse words for `insult'. Further, IDF scores are used to weight the word vectors of all terms in a tweet. Additionally, we employ a polarity lexicon and perform lexical expansion on it to obtain new entries from our in-domain background corpus that are weighted on a `positive–negative' continuum. Lexical expansion is based on distributional word similarity as described in Kumar.2016. BiLSTM-CNN for Text Classification For transfer learning, we rely on a neural network architecture implemented in the Keras framework for Python. Our model (see Fig. FIGREF15 ) combines a bi-directional LSTM layer BIBREF1 with 100 units followed by three parallel convolutional layers (CNN), each with a different kernel size INLINEFORM0 , and a filter size 200. The outputs of the three CNN blocks are max-pooled globally and concatenated. Finally, features encoded by the CNN blocks are fed into a dense layer with 100 units, followed by the prediction layer. Except for this final layer which uses Softmax activation, we rely on LeakyReLU activation BIBREF10 for the other model layers. For regularization, dropout is applied to the LSTM layer and to each CNN block after global max-pooling (dropout rate 0.5). For training, we use the Nesterov Adam optimization and categorical cross-entropy loss with a learning rate of 0.002. The intuition behind this architecture is that the recurrent LSTM layer can serve as a feature encoder for general language characteristics from sequences of semantic word embeddings. The convolutional layers on top of this can then encode category related features delivered by the LSTM while the last dense layers finally fine-tune highly category-specific features for the actual classification task. As input, we feed 300-dimensional word embeddings obtained from fastText BIBREF11 into our model. Since fastText also makes use of sub-word information (character n-grams), it has the great advantage that it can provide semantic embeddings also for words that have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by mikolov2018advances. First, we unify all Twitter-typical user mentions (`@username') and URLs into a single string representation and reduce all characters to lower case. Then, we split tweets into tokens at boundaries of changing character classes. As an exception, sequences of emoji characters are split into single character tokens. Finally, for each token, an embedding vector is obtained from the fastText model. For offensive language detection in Twitter, users addressed in tweets might be an additional relevant signal. 
We assume it is more likely that politicians or news agencies are addressees of offensive language than, for instance, musicians or athletes. To make use of such information, we obtain a clustering of user ids from our Twitter background corpus. From all tweets in our stream from 2016 or 2017, we extract those tweets that have at least two @-mentions and all of the @-mentions have been seen at least five times in the background corpus. Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). For our final classification approach, cluster ids for users mentioned in tweets are fed as a second input in addition to (sub-)word embeddings to the penultimate dense layer of the neural network model. Transfer Learning As mentioned earlier, we investigate potential strategies for transfer learning to achieve optimal performance. For this, we compare three different methods to pre-train our model with background data sets. We also compare three different strategies to combat `catastrophic forgetting' during training on the actual target data. Transfer Learning Strategies Once the neural model has been pre-trained on the above-specified targets and corresponding datasets, we can apply it for learning our actual target task. For this, we need to remove the final prediction layer of the pre-trained model (i.e. Layer 4 in Fig. FIGREF15 ), and add a new dense layer for prediction of one of the actual label sets (two for Task 1, four for Task 2). The training for the actual GermEval tasks is conducted with batch size 32 for up to 50 epochs. To prevent the aforementioned effect of forgetting pre-trained knowledge during this task-specific model training, we evaluate three different strategies. In Howard.2018, gradual unfreezing of pre-trained model weights is proposed as one strategy to mitigate forgetting. The basic idea is to initially freeze all pre-trained weights of the neural model and keep only the newly added last layer trainable (i.e. Layer 4 in Fig. FIGREF15 ). After training that last layer for one epoch on the GermEval training data, the next lower frozen layer is unfrozen and training will be repeated for another epoch. This will be iterated until all layers (4 to 1) are unfrozen. Following the approach of Felbo.2017, we do not iteratively unfreeze all layers of the model, but only one at a time. First, the newly added final prediction layer is trained while all other model weights remain frozen. Training is conducted for up to 50 epochs. The best performing model during these epochs with respect to our validation set is then used in the next step of fine-tuning the pre-trained model layers. For the bottom-up strategy, we unfreeze the lowest layer (1) containing the most general knowledge first, then we continue optimization with the more specific layers (2 and 3) one after the other. During fine-tuning of each single layer, all other layers remain frozen and training is performed for 50 epochs selecting the best performing model at the end of each layer optimization. In a final round of fine-tuning, all layers are unfrozen. 
This proceeding is similar the one described above, but inverts the order of unfreezing single layers from top to bottom sequentially fine-tuning layers 4, 3, 2, 1 individually, and all together in a final round. All strategies are compared to the baseline of no freezing of model weights, but training all layers at once directly after pre-training with one of the three transfer datasets. Evaluation Since there is no prior state-of-the-art for the GermEval Shared Task 2018 dataset, we evaluate the performance of our neural model compared to the baseline SVM architecture. We further compare the different tasks and strategies for transfer learning introduced above and provide some first insights on error analysis. Conclusion In this paper, we presented our neural network text classification approach for offensive language detection on the GermEval 2018 Shared Task dataset. We used a combination of BiLSTM and CNN architectures for learning. As task-specific adaptations of standard text classification, we evaluated different datasets and strategies for transfer learning, as well as additional features obtained from users addressed in tweets. The coarse-grained offensive language detection could be realized to a much better extent than the fine-grained task of separating four different categories of insults (accuracy 77.5% vs. 73.7%). From our experiments, four main messages can be drawn: The fact that our unsupervised, task-agnostic pre-training by LDA topic transfer performed best suggests that this approach will also contribute beneficially to other text classification tasks such as sentiment analysis. Thus, in future work, we plan to evaluate our approach with regard to such other tasks. We also plan to evaluate more task-agnostic approaches for transfer learning, for instance employing language modeling as a pre-training task.
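A minimal sketch of the linear SVM baseline described above is given below (not the original framework): tweets are represented as IDF-weighted averages of 300-dimensional word embeddings and fed to a linear SVM. The `embeddings` and `idf` inputs are assumed to be pre-computed from the background corpora, and the polarity-lexicon features are omitted.

```python
# Sketch of the SVM baseline: IDF-weighted averaged word embeddings + LinearSVC.
import numpy as np
from sklearn.svm import LinearSVC

def tweet_vector(tokens, embeddings, idf, dim=300):
    # IDF-weighted average of the word vectors of all in-vocabulary terms
    vecs = [idf.get(t, 1.0) * embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_svm_baseline(tokenized_tweets, labels, embeddings, idf):
    X = np.vstack([tweet_vector(toks, embeddings, idf) for toks in tokenized_tweets])
    clf = LinearSVC()   # linear SVM, suited for high-dimensional feature spaces
    clf.fit(X, labels)
    return clf
```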
SVM
0da6cfbc8cb134dc3d247e91262f5050a2200664
0da6cfbc8cb134dc3d247e91262f5050a2200664_0
Q: What topic clusters are identified by LDA? Text: Introduction User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content. Although this topic now has been studied for more than two decades, so far there has been little work on offensive language detection for German social media content. Regarding this, we present a new approach to detect offensive language as defined in the shared task of the GermEval 2018 workshop. For our contribution to the shared task, we focus on the question how to apply transfer learning for neural network-based text classification systems. In Germany, the growing interest in hate speech analysis and detection is closely related to recent political developments such as the increase of right-wing populism, and societal reactions to the ongoing influx of refugees seeking asylum BIBREF0 . Content analysis studies such as InstituteforStrategicDialogue.2018 have shown that a majority of hate speech comments in German Facebook is authored by a rather small group of very active users (5% of all accounts engaging in hate speech). The findings suggest that already such small groups are able to severely disturb social media debates for large audiences. From the perspective of natural language processing, the task of automatic detection of offensive language in social media is complex due to three major reasons. First, we can expect `atypical' language data due to incorrect spellings, false grammar and non-standard language variations such as slang terms, intensifiers, or emojis/emoticons. For the automatic detection of offensive language, it is not quite clear whether these irregularities should be treated as `noise' or as a signal. Second, the task cannot be reduced to an analysis of word-level semantics only, e.g. spotting offensive keyterms in the data. Instead, the assessment of whether or not a post contains offensive language can be highly dependent on sentence and discourse level semantics, as well as subjective criteria. In a crowd-sourcing experiment on `hate speech' annotation, Ross.2016 achieved only very low inter-rater agreement between annotators. Offensive language is probably somewhat easier to achieve agreement on, but still sentence-level semantics and context or `world knowledge' remains important. Third, there is a lack of a common definition of the actual phenomenon to tackle. Published studies focus on `hostile messages', `flames', `hate speech', `discrimination', `abusive language', or `offensive language'. Although certainly overlapping, each of these categories has been operationalized in a slightly different manner. Since category definitions do not match properly, publicly available annotated datasets and language resources for one task cannot be used directly to train classifiers for any respective other task. Related Work Automatic detection of offensive language is a well-studied phenomenon for the English language. Initial works on the detection of `hostile messages' have been published already during the 1990s BIBREF4 . 
An overview of recent approaches comparing the different task definitions, feature sets and classification methods is given by Schmidt.2017. A major step forward to support the task was the publication of a large publicly available, manually annotated dataset by Yahoo research BIBREF5 . They provide a classification approach for detection of abusive language in Yahoo user comments using a variety of linguistic features in a linear classification model. One major result of their work was that learning text features from comments which are temporally close to the to-be-predicted data is more important than learning features from as much data as possible. This is especially important for real-life scenarios of classifying streams of comment data. In addition to token-based features, Xiang.2012 successfully employed topical features to detect offensive tweets. We will build upon this idea by employing topical data in our transfer learning setup. Transfer learning recently has gained a lot of attention since it can be easily applied to neural network learning architectures. For instance, Howard.2018 propose a generic transfer learning setup for text classification based on language modeling for pre-training neural models with large background corpora. To improve offensive language detection for English social media texts, a transfer learning approach was recently introduced by Felbo.2017. Their `deepmoji' approach relies on the idea to pre-train a neural network model for an actual offensive language classification task by using emojis as weakly supervised training labels. On a large collection of millions of randomly collected English tweets containing emojis, they try to predict the specific emojis from features obtained from the remaining tweet text. We will follow this idea of transfer learning to evaluate it for offensive language detection in German Twitter data together with other transfer learning strategies. GermEval 2018 Shared Task Organizers of GermEval 2018 provide training and test datasets for two tasks. Task 1 is a binary classification for deciding whether or not a German tweet contains offensive language (the respective category labels are `offense' and `other'). Task 2 is a multi-class classification with more fine-grained labels sub-categorizing the same tweets into either `insult', `profanity', `abuse', or `other'. The training data contains 5,008 manually labeled tweets sampled from Twitter from selected accounts that are suspected to contain a high share of offensive language. Manual inspection reveals a high share of political tweets among those labeled as offensive. These tweets range from offending single Twitter users, politicians and parties to degradation of whole social groups such as Muslims, migrants or refugees. The test data contains 3,532 tweets. To create a realistic scenario of truly unseen test data, training and test set are sampled from disjoint user accounts. No standard validation set is provided for the task. To optimize hyper-parameters of our classification models and allow for early stopping to prevent the neural models from overfitting, we created our own validation set. For this, we used the last 808 examples from the provided training set. The remaining first 4,200 examples were used to train our models. Background Knowledge Since the provided dataset for offensive language detection is rather small, we investigate the potential of transfer learning to increase classification performance. 
For this, we use the following labeled as well as unlabeled datasets. A recently published resource of German language social media data has been published by Schabus2017. Among other things, the dataset contains 11,773 labeled user comments posted to the Austrian newspaper website `Der Standard'. Comments have not been annotated for offensive language, but for categories such as positive/negative sentiment, off-topic, inappropriate or discriminating. As a second resource, we use a background corpus of German tweets that were collected using the Twitter streaming API from 2011 to 2017. Since the API provides a random fraction of all tweets (1%), language identification is performed using `langid.py' BIBREF6 to filter for German tweets. For all years combined, we obtain about 18 million unlabeled German tweets from the stream, which can be used as a large, in-domain background corpus. For a transfer learning setup, we need to specify a task to train the model and prepare the corresponding dataset. We compare the following three methods. As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3080 other) from on the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to the Task 1 of GermEval 2018) Following the approach of Felbo.2017, we constructed a weakly-supervised training dataset from our Twitter background corpus. From all tweets posted between 2013 and 2017, we extract those containing at least one emoji character. In the case of several emojis in one tweet, we duplicate the tweet for each unique emoji type. Emojis are then removed from the actual tweets and treated as a label to predict by the neural model. This results in a multi-class classification task to predict the right emoji out of 1,297 different ones. Our training dataset contains 1,904,330 training examples. As a final method, we create a training data set for transfer learning in a completely unsupervised manner. For this, we compute an LDA clustering with INLINEFORM0 topics on 10 million tweets sampled from 2016 and 2017 from our Twitter background corpus containing at least two meaningful words (i.e. alphanumeric sequences that are not stopwords, URLs or user mentions). Tweets also have been deduplicated before sampling. From the topic-document distribution of the resulting LDA model, we determined the majority topic id for each tweet as a target label for prediction during pre-training our neural model. Pre-training of the neural model was conducted on the 10 million tweets with batch size 128 for 10 epochs. Text Classification In the following section, we describe one linear classification model in combination with specifically engineered features, which we use as a baseline for the classification task. We further introduce a neural network model as a basis for our approach to transfer learning. This model achieves the highest performance for offensive language detection, as compared to our baseline. 
SVM baseline: The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before. We induce token features based on the Twitter background corpus. Because tweets are usually very short, they are not an optimal source to obtain good estimates on inverse document frequencies (IDF). To obtain a better feature weighting, we calculate IDF scores based on the Twitter corpus combined with an in-house product review dataset (cf. ibid.). From this combined corpus, we compute the IDF scores and 300-dimensional word embeddings BIBREF9 for all contained features. Following Ruppert2017, we use the IDF scores to obtain the highest-weighted terms per category in the training data. Here, we obtain words like Staatsfunk, Vasall (state media, vassal) or deutschlandfeindlichen (Germany-opposing) for the category `abuse' and curse words for `insult'. Further, IDF scores are used to weight the word vectors of all terms in a tweet. Additionally, we employ a polarity lexicon and perform lexical expansion on it to obtain new entries from our in-domain background corpus that are weighted on a `positive–negative' continuum. Lexical expansion is based on distributional word similarity as described in Kumar.2016. BiLSTM-CNN for Text Classification For transfer learning, we rely on a neural network architecture implemented in the Keras framework for Python. Our model (see Fig. FIGREF15 ) combines a bi-directional LSTM layer BIBREF1 with 100 units followed by three parallel convolutional layers (CNN), each with a different kernel size INLINEFORM0 , and a filter size 200. The outputs of the three CNN blocks are max-pooled globally and concatenated. Finally, features encoded by the CNN blocks are fed into a dense layer with 100 units, followed by the prediction layer. Except for this final layer which uses Softmax activation, we rely on LeakyReLU activation BIBREF10 for the other model layers. For regularization, dropout is applied to the LSTM layer and to each CNN block after global max-pooling (dropout rate 0.5). For training, we use the Nesterov Adam optimization and categorical cross-entropy loss with a learning rate of 0.002. The intuition behind this architecture is that the recurrent LSTM layer can serve as a feature encoder for general language characteristics from sequences of semantic word embeddings. The convolutional layers on top of this can then encode category related features delivered by the LSTM while the last dense layers finally fine-tune highly category-specific features for the actual classification task. As input, we feed 300-dimensional word embeddings obtained from fastText BIBREF11 into our model. Since fastText also makes use of sub-word information (character n-grams), it has the great advantage that it can provide semantic embeddings also for words that have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by mikolov2018advances. First, we unify all Twitter-typical user mentions (`@username') and URLs into a single string representation and reduce all characters to lower case. Then, we split tweets into tokens at boundaries of changing character classes. As an exception, sequences of emoji characters are split into single character tokens. Finally, for each token, an embedding vector is obtained from the fastText model. 
For offensive language detection in Twitter, users addressed in tweets might be an additional relevant signal. We assume it is more likely that politicians or news agencies are addressees of offensive language than, for instance, musicians or athletes. To make use of such information, we obtain a clustering of user ids from our Twitter background corpus. From all tweets in our stream from 2016 or 2017, we extract those tweets that have at least two @-mentions and all of the @-mentions have been seen at least five times in the background corpus. Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). For our final classification approach, cluster ids for users mentioned in tweets are fed as a second input in addition to (sub-)word embeddings to the penultimate dense layer of the neural network model. Transfer Learning As mentioned earlier, we investigate potential strategies for transfer learning to achieve optimal performance. For this, we compare three different methods to pre-train our model with background data sets. We also compare three different strategies to combat `catastrophic forgetting' during training on the actual target data. Transfer Learning Strategies Once the neural model has been pre-trained on the above-specified targets and corresponding datasets, we can apply it for learning our actual target task. For this, we need to remove the final prediction layer of the pre-trained model (i.e. Layer 4 in Fig. FIGREF15 ), and add a new dense layer for prediction of one of the actual label sets (two for Task 1, four for Task 2). The training for the actual GermEval tasks is conducted with batch size 32 for up to 50 epochs. To prevent the aforementioned effect of forgetting pre-trained knowledge during this task-specific model training, we evaluate three different strategies. In Howard.2018, gradual unfreezing of pre-trained model weights is proposed as one strategy to mitigate forgetting. The basic idea is to initially freeze all pre-trained weights of the neural model and keep only the newly added last layer trainable (i.e. Layer 4 in Fig. FIGREF15 ). After training that last layer for one epoch on the GermEval training data, the next lower frozen layer is unfrozen and training will be repeated for another epoch. This will be iterated until all layers (4 to 1) are unfrozen. Following the approach of Felbo.2017, we do not iteratively unfreeze all layers of the model, but only one at a time. First, the newly added final prediction layer is trained while all other model weights remain frozen. Training is conducted for up to 50 epochs. The best performing model during these epochs with respect to our validation set is then used in the next step of fine-tuning the pre-trained model layers. For the bottom-up strategy, we unfreeze the lowest layer (1) containing the most general knowledge first, then we continue optimization with the more specific layers (2 and 3) one after the other. 
During fine-tuning of each single layer, all other layers remain frozen and training is performed for 50 epochs selecting the best performing model at the end of each layer optimization. In a final round of fine-tuning, all layers are unfrozen. This proceeding is similar the one described above, but inverts the order of unfreezing single layers from top to bottom sequentially fine-tuning layers 4, 3, 2, 1 individually, and all together in a final round. All strategies are compared to the baseline of no freezing of model weights, but training all layers at once directly after pre-training with one of the three transfer datasets. Evaluation Since there is no prior state-of-the-art for the GermEval Shared Task 2018 dataset, we evaluate the performance of our neural model compared to the baseline SVM architecture. We further compare the different tasks and strategies for transfer learning introduced above and provide some first insights on error analysis. Conclusion In this paper, we presented our neural network text classification approach for offensive language detection on the GermEval 2018 Shared Task dataset. We used a combination of BiLSTM and CNN architectures for learning. As task-specific adaptations of standard text classification, we evaluated different datasets and strategies for transfer learning, as well as additional features obtained from users addressed in tweets. The coarse-grained offensive language detection could be realized to a much better extent than the fine-grained task of separating four different categories of insults (accuracy 77.5% vs. 73.7%). From our experiments, four main messages can be drawn: The fact that our unsupervised, task-agnostic pre-training by LDA topic transfer performed best suggests that this approach will also contribute beneficially to other text classification tasks such as sentiment analysis. Thus, in future work, we plan to evaluate our approach with regard to such other tasks. We also plan to evaluate more task-agnostic approaches for transfer learning, for instance employing language modeling as a pre-training task.
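The user-id clustering described above can be sketched as follows (an approximation, not the authors' code): each tweet's list of @-mentioned user ids is treated as a document, LDA is trained on these lists, and every user id is assigned its most probable topic as a cluster id. The topic count is a placeholder, since the paper's value (INLINEFORM0) is not given in this text, and the single-id inference below only approximates reading off the user id-topic distribution.

```python
# Sketch of clustering @-mentioned user ids with LDA over mention lists.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def cluster_user_ids(mention_lists, num_topics=50):
    # mention_lists: one list of mentioned user ids per tweet (>= 2 mentions each)
    dictionary = Dictionary(mention_lists)
    corpus = [dictionary.doc2bow(doc) for doc in mention_lists]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)

    cluster_of = {}
    for user_id in dictionary.token2id:
        # approximate the user id's topic by inferring on a one-id document
        bow = dictionary.doc2bow([user_id])
        topics = lda.get_document_topics(bow, minimum_probability=0.0)
        cluster_of[user_id] = max(topics, key=lambda t: t[1])[0]
    return cluster_of
```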
Clusters of Twitter user ids from accounts of American or German political actors, musicians, media websites or sports clubs
9003c7041d3d2addabc2c112fa2c7efe5fab493c
9003c7041d3d2addabc2c112fa2c7efe5fab493c_0
Q: What are the near-offensive language categories? Text: Introduction User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content. Although this topic now has been studied for more than two decades, so far there has been little work on offensive language detection for German social media content. Regarding this, we present a new approach to detect offensive language as defined in the shared task of the GermEval 2018 workshop. For our contribution to the shared task, we focus on the question how to apply transfer learning for neural network-based text classification systems. In Germany, the growing interest in hate speech analysis and detection is closely related to recent political developments such as the increase of right-wing populism, and societal reactions to the ongoing influx of refugees seeking asylum BIBREF0 . Content analysis studies such as InstituteforStrategicDialogue.2018 have shown that a majority of hate speech comments in German Facebook is authored by a rather small group of very active users (5% of all accounts engaging in hate speech). The findings suggest that already such small groups are able to severely disturb social media debates for large audiences. From the perspective of natural language processing, the task of automatic detection of offensive language in social media is complex due to three major reasons. First, we can expect `atypical' language data due to incorrect spellings, false grammar and non-standard language variations such as slang terms, intensifiers, or emojis/emoticons. For the automatic detection of offensive language, it is not quite clear whether these irregularities should be treated as `noise' or as a signal. Second, the task cannot be reduced to an analysis of word-level semantics only, e.g. spotting offensive keyterms in the data. Instead, the assessment of whether or not a post contains offensive language can be highly dependent on sentence and discourse level semantics, as well as subjective criteria. In a crowd-sourcing experiment on `hate speech' annotation, Ross.2016 achieved only very low inter-rater agreement between annotators. Offensive language is probably somewhat easier to achieve agreement on, but still sentence-level semantics and context or `world knowledge' remains important. Third, there is a lack of a common definition of the actual phenomenon to tackle. Published studies focus on `hostile messages', `flames', `hate speech', `discrimination', `abusive language', or `offensive language'. Although certainly overlapping, each of these categories has been operationalized in a slightly different manner. Since category definitions do not match properly, publicly available annotated datasets and language resources for one task cannot be used directly to train classifiers for any respective other task. Related Work Automatic detection of offensive language is a well-studied phenomenon for the English language. Initial works on the detection of `hostile messages' have been published already during the 1990s BIBREF4 . 
An overview of recent approaches comparing the different task definitions, feature sets and classification methods is given by Schmidt.2017. A major step forward to support the task was the publication of a large publicly available, manually annotated dataset by Yahoo research BIBREF5 . They provide a classification approach for detection of abusive language in Yahoo user comments using a variety of linguistic features in a linear classification model. One major result of their work was that learning text features from comments which are temporally close to the to-be-predicted data is more important than learning features from as much data as possible. This is especially important for real-life scenarios of classifying streams of comment data. In addition to token-based features, Xiang.2012 successfully employed topical features to detect offensive tweets. We will build upon this idea by employing topical data in our transfer learning setup. Transfer learning recently has gained a lot of attention since it can be easily applied to neural network learning architectures. For instance, Howard.2018 propose a generic transfer learning setup for text classification based on language modeling for pre-training neural models with large background corpora. To improve offensive language detection for English social media texts, a transfer learning approach was recently introduced by Felbo.2017. Their `deepmoji' approach relies on the idea to pre-train a neural network model for an actual offensive language classification task by using emojis as weakly supervised training labels. On a large collection of millions of randomly collected English tweets containing emojis, they try to predict the specific emojis from features obtained from the remaining tweet text. We will follow this idea of transfer learning to evaluate it for offensive language detection in German Twitter data together with other transfer learning strategies. GermEval 2018 Shared Task Organizers of GermEval 2018 provide training and test datasets for two tasks. Task 1 is a binary classification for deciding whether or not a German tweet contains offensive language (the respective category labels are `offense' and `other'). Task 2 is a multi-class classification with more fine-grained labels sub-categorizing the same tweets into either `insult', `profanity', `abuse', or `other'. The training data contains 5,008 manually labeled tweets sampled from Twitter from selected accounts that are suspected to contain a high share of offensive language. Manual inspection reveals a high share of political tweets among those labeled as offensive. These tweets range from offending single Twitter users, politicians and parties to degradation of whole social groups such as Muslims, migrants or refugees. The test data contains 3,532 tweets. To create a realistic scenario of truly unseen test data, training and test set are sampled from disjoint user accounts. No standard validation set is provided for the task. To optimize hyper-parameters of our classification models and allow for early stopping to prevent the neural models from overfitting, we created our own validation set. For this, we used the last 808 examples from the provided training set. The remaining first 4,200 examples were used to train our models. Background Knowledge Since the provided dataset for offensive language detection is rather small, we investigate the potential of transfer learning to increase classification performance. 
For this, we use the following labeled as well as unlabeled datasets. A resource of German-language social media data has recently been published by Schabus2017. Among other things, the dataset contains 11,773 labeled user comments posted to the Austrian newspaper website `Der Standard'. Comments have not been annotated for offensive language, but for categories such as positive/negative sentiment, off-topic, inappropriate or discriminating. As a second resource, we use a background corpus of German tweets that were collected using the Twitter streaming API from 2011 to 2017. Since the API provides a random fraction of all tweets (1%), language identification is performed using `langid.py' BIBREF6 to filter for German tweets. For all years combined, we obtain about 18 million unlabeled German tweets from the stream, which can be used as a large, in-domain background corpus. For a transfer learning setup, we need to specify a task to train the model and prepare the corresponding dataset. We compare the following three methods. As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3,080 other) from the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to Task 1 of GermEval 2018). Following the approach of Felbo.2017, we constructed a weakly supervised training dataset from our Twitter background corpus. From all tweets posted between 2013 and 2017, we extract those containing at least one emoji character. In the case of several emojis in one tweet, we duplicate the tweet for each unique emoji type. Emojis are then removed from the actual tweets and treated as labels to be predicted by the neural model. This results in a multi-class classification task to predict the right emoji out of 1,297 different ones. Our training dataset contains 1,904,330 training examples. As a final method, we create a training dataset for transfer learning in a completely unsupervised manner. For this, we compute an LDA clustering with INLINEFORM0 topics on 10 million tweets sampled from 2016 and 2017 from our Twitter background corpus, keeping only tweets containing at least two meaningful words (i.e. alphanumeric sequences that are not stopwords, URLs or user mentions). Tweets have also been deduplicated before sampling. From the topic-document distribution of the resulting LDA model, we determine the majority topic id for each tweet as a target label for prediction during pre-training of our neural model. Pre-training of the neural model was conducted on the 10 million tweets with batch size 128 for 10 epochs. Text Classification In the following section, we describe one linear classification model in combination with specifically engineered features, which we use as a baseline for the classification task. We further introduce a neural network model as a basis for our approach to transfer learning. This model achieves the highest performance for offensive language detection, as compared to our baseline.
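Before turning to the classifiers, the following sketch illustrates the third, fully unsupervised pre-training setup described above: fitting an LDA model on background tweets and using each tweet's majority topic id as a weak pre-training label. The use of gensim, the tokenization details, the placeholder stopword list and the default number of topics are assumptions made purely for illustration; the paper's exact values and tooling are not reproduced here.

```python
# Hypothetical sketch of LDA-based weak labels for pre-training.
# Library (gensim), tokenization and num_topics are illustrative assumptions.
import re
from gensim.corpora import Dictionary
from gensim.models import LdaModel

STOPWORDS = {"und", "der", "die", "das", "ist"}  # placeholder stopword list

def meaningful_tokens(tweet: str):
    """Keep alphanumeric tokens that are not stopwords, URLs or @-mentions."""
    tokens = tweet.lower().split()
    return [t for t in tokens
            if re.fullmatch(r"\w+", t) and t not in STOPWORDS]

def build_topic_labels(tweets, num_topics=100):
    """Fit LDA on background tweets; return the kept docs and one majority-topic label per doc."""
    docs = [meaningful_tokens(t) for t in tweets]
    docs = [d for d in docs if len(d) >= 2]          # at least two meaningful words
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)
    labels = []
    for bow in corpus:
        topic_dist = lda.get_document_topics(bow, minimum_probability=0.0)
        labels.append(max(topic_dist, key=lambda x: x[1])[0])  # majority topic id
    return docs, labels
```

The resulting (tweet, topic id) pairs can then serve as the multi-class targets for pre-training the neural model introduced next.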
SVM baseline: The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before. We induce token features based on the Twitter background corpus. Because tweets are usually very short, they are not an optimal source to obtain good estimates on inverse document frequencies (IDF). To obtain a better feature weighting, we calculate IDF scores based on the Twitter corpus combined with an in-house product review dataset (cf. ibid.). From this combined corpus, we compute the IDF scores and 300-dimensional word embeddings BIBREF9 for all contained features. Following Ruppert2017, we use the IDF scores to obtain the highest-weighted terms per category in the training data. Here, we obtain words like Staatsfunk, Vasall (state media, vassal) or deutschlandfeindlichen (Germany-opposing) for the category `abuse' and curse words for `insult'. Further, IDF scores are used to weight the word vectors of all terms in a tweet. Additionally, we employ a polarity lexicon and perform lexical expansion on it to obtain new entries from our in-domain background corpus that are weighted on a `positive–negative' continuum. Lexical expansion is based on distributional word similarity as described in Kumar.2016. BiLSTM-CNN for Text Classification For transfer learning, we rely on a neural network architecture implemented in the Keras framework for Python. Our model (see Fig. FIGREF15 ) combines a bi-directional LSTM layer BIBREF1 with 100 units followed by three parallel convolutional layers (CNN), each with a different kernel size INLINEFORM0 , and a filter size 200. The outputs of the three CNN blocks are max-pooled globally and concatenated. Finally, features encoded by the CNN blocks are fed into a dense layer with 100 units, followed by the prediction layer. Except for this final layer which uses Softmax activation, we rely on LeakyReLU activation BIBREF10 for the other model layers. For regularization, dropout is applied to the LSTM layer and to each CNN block after global max-pooling (dropout rate 0.5). For training, we use the Nesterov Adam optimization and categorical cross-entropy loss with a learning rate of 0.002. The intuition behind this architecture is that the recurrent LSTM layer can serve as a feature encoder for general language characteristics from sequences of semantic word embeddings. The convolutional layers on top of this can then encode category related features delivered by the LSTM while the last dense layers finally fine-tune highly category-specific features for the actual classification task. As input, we feed 300-dimensional word embeddings obtained from fastText BIBREF11 into our model. Since fastText also makes use of sub-word information (character n-grams), it has the great advantage that it can provide semantic embeddings also for words that have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by mikolov2018advances. First, we unify all Twitter-typical user mentions (`@username') and URLs into a single string representation and reduce all characters to lower case. Then, we split tweets into tokens at boundaries of changing character classes. As an exception, sequences of emoji characters are split into single character tokens. Finally, for each token, an embedding vector is obtained from the fastText model. 
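As a rough illustration of the BiLSTM-CNN architecture just described, the following Keras sketch wires together the bi-directional LSTM, three parallel convolution blocks with global max-pooling and dropout, and the dense layer with LeakyReLU activation, compiled with Nesterov Adam and categorical cross-entropy. The kernel sizes of the three CNN blocks, the sequence length and the number of output classes are assumptions, and the additional user-cluster input described below is omitted for brevity.

```python
# Minimal Keras sketch of the described BiLSTM-CNN model.
# Kernel sizes (3, 4, 5), seq_len and num_classes are assumed placeholders.
from tensorflow.keras import layers, models, optimizers

def build_bilstm_cnn(seq_len=50, emb_dim=300, num_classes=2):
    inputs = layers.Input(shape=(seq_len, emb_dim))      # pre-computed fastText embeddings
    x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(inputs)
    x = layers.Dropout(0.5)(x)                           # dropout on the LSTM layer
    conv_blocks = []
    for kernel_size in (3, 4, 5):                        # assumed kernel sizes
        c = layers.Conv1D(filters=200, kernel_size=kernel_size)(x)
        c = layers.LeakyReLU()(c)
        c = layers.GlobalMaxPooling1D()(c)
        c = layers.Dropout(0.5)(c)                       # dropout after global max-pooling
        conv_blocks.append(c)
    x = layers.Concatenate()(conv_blocks)
    x = layers.Dense(100)(x)
    x = layers.LeakyReLU()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Nadam(learning_rate=0.002),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```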
For offensive language detection in Twitter, users addressed in tweets might be an additional relevant signal. We assume it is more likely that politicians or news agencies are addressees of offensive language than, for instance, musicians or athletes. To make use of such information, we obtain a clustering of user ids from our Twitter background corpus. From all tweets in our stream from 2016 or 2017, we extract those tweets that have at least two @-mentions and all of the @-mentions have been seen at least five times in the background corpus. Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ). For our final classification approach, cluster ids for users mentioned in tweets are fed as a second input in addition to (sub-)word embeddings to the penultimate dense layer of the neural network model. Transfer Learning As mentioned earlier, we investigate potential strategies for transfer learning to achieve optimal performance. For this, we compare three different methods to pre-train our model with background data sets. We also compare three different strategies to combat `catastrophic forgetting' during training on the actual target data. Transfer Learning Strategies Once the neural model has been pre-trained on the above-specified targets and corresponding datasets, we can apply it for learning our actual target task. For this, we need to remove the final prediction layer of the pre-trained model (i.e. Layer 4 in Fig. FIGREF15 ), and add a new dense layer for prediction of one of the actual label sets (two for Task 1, four for Task 2). The training for the actual GermEval tasks is conducted with batch size 32 for up to 50 epochs. To prevent the aforementioned effect of forgetting pre-trained knowledge during this task-specific model training, we evaluate three different strategies. In Howard.2018, gradual unfreezing of pre-trained model weights is proposed as one strategy to mitigate forgetting. The basic idea is to initially freeze all pre-trained weights of the neural model and keep only the newly added last layer trainable (i.e. Layer 4 in Fig. FIGREF15 ). After training that last layer for one epoch on the GermEval training data, the next lower frozen layer is unfrozen and training will be repeated for another epoch. This will be iterated until all layers (4 to 1) are unfrozen. Following the approach of Felbo.2017, we do not iteratively unfreeze all layers of the model, but only one at a time. First, the newly added final prediction layer is trained while all other model weights remain frozen. Training is conducted for up to 50 epochs. The best performing model during these epochs with respect to our validation set is then used in the next step of fine-tuning the pre-trained model layers. For the bottom-up strategy, we unfreeze the lowest layer (1) containing the most general knowledge first, then we continue optimization with the more specific layers (2 and 3) one after the other. 
During fine-tuning of each single layer, all other layers remain frozen and training is performed for 50 epochs, selecting the best-performing model at the end of each layer's optimization. In a final round of fine-tuning, all layers are unfrozen. This procedure is similar to the one described above, but inverts the order of unfreezing single layers from top to bottom, sequentially fine-tuning layers 4, 3, 2, 1 individually, and all together in a final round. All strategies are compared to the baseline of not freezing any model weights, i.e. training all layers at once directly after pre-training with one of the three transfer datasets. Evaluation Since there is no prior state-of-the-art for the GermEval Shared Task 2018 dataset, we evaluate the performance of our neural model compared to the baseline SVM architecture. We further compare the different tasks and strategies for transfer learning introduced above and provide some first insights from error analysis. Conclusion In this paper, we presented our neural network text classification approach for offensive language detection on the GermEval 2018 Shared Task dataset. We used a combination of BiLSTM and CNN architectures for learning. As task-specific adaptations of standard text classification, we evaluated different datasets and strategies for transfer learning, as well as additional features obtained from users addressed in tweets. The coarse-grained offensive language detection could be realized to a much better extent than the fine-grained task of separating four different categories of insults (accuracy 77.5% vs. 73.7%). From our experiments, four main messages can be drawn: The fact that our unsupervised, task-agnostic pre-training by LDA topic transfer performed best suggests that this approach will also contribute beneficially to other text classification tasks such as sentiment analysis. Thus, in future work, we plan to evaluate our approach with regard to such other tasks. We also plan to evaluate more task-agnostic approaches for transfer learning, for instance employing language modeling as a pre-training task.
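To make the transfer learning strategies above more concrete, the following is a minimal sketch of the gradual unfreezing procedure for a Keras model, assuming the model's layers have been grouped from the newly added prediction layer down to the most general layer. The grouping, optimizer settings and training loop are illustrative assumptions rather than the authors' exact implementation.

```python
# Hedged sketch of gradual unfreezing during task-specific fine-tuning.
# layer_groups is ordered top (new prediction layer) to bottom (most general).
from tensorflow.keras import optimizers

def gradual_unfreezing(model, layer_groups, x_train, y_train, x_val, y_val):
    # Start with everything frozen; the first loop iteration unfreezes only
    # the newly added prediction layer, then one more group per round.
    for group in layer_groups:
        for layer in group:
            layer.trainable = False
    for group in layer_groups:
        for layer in group:
            layer.trainable = True
        # Recompile so the changed trainable flags take effect.
        model.compile(optimizer=optimizers.Nadam(learning_rate=0.002),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        model.fit(x_train, y_train, validation_data=(x_val, y_val),
                  batch_size=32, epochs=1)
    return model
```

The bottom-up and top-down strategies can be realized analogously by keeping all but one group frozen per round and iterating over the groups in the respective order.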
inappropriate, discriminating
e9d9bb87a5c4faa965ceddd98d8b80d4b99e339e
e9d9bb87a5c4faa965ceddd98d8b80d4b99e339e_0
Q: How much do they outperform previous state-of-the-art? Text: Introduction Sentiment analysis (SA) is an important task in natural language processing. It solves the computational processing of opinions, emotions, and subjectivity - sentiment is collected, analyzed and summarized. It has received much attention not only in academia but also in industry, providing real-time feedback through online reviews on websites such as Amazon, which can take advantage of customers' opinions on specific products or services. The underlying assumption of this task is that the entire text has an overall polarity. However, the users' comments may contain different aspects, such as: “This book is a hardcover version, but the price is a bit high." The polarity in `appearance' is positive, and the polarity regarding `price' is negative. Aspect-based sentiment analysis (ABSA) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 aims to identify fine-grained polarity towards a specific aspect. This task allows users to evaluate aggregated sentiments for each aspect of a given product or service and gain a more granular understanding of their quality. Both SA and ABSA are sentence-level or document-level tasks, but one comment may refer to more than one object, and sentence-level tasks cannot handle sentences with multiple targets. Therefore, BIBREF4 introduce the task of targeted aspect-based sentiment analysis (TABSA), which aims to identify fine-grained opinion polarity towards a specific aspect associated with a given target. The task can be divided into two steps: (1) the first step is to determine the aspects associated with each target; (2) the second step is to resolve the polarity of aspects to a given target. The earliest work on (T)ABSA relied heavily on feature engineering BIBREF5 , BIBREF6 , and subsequent neural network-based methods BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 achieved higher accuracy. Recently, BIBREF12 incorporate useful commonsense knowledge into a deep neural network to further enhance the result of the model. BIBREF13 optimize the memory network and apply it to their model to better capture linguistic structure. More recently, the pre-trained language models, such as ELMo BIBREF14 , OpenAI GPT BIBREF15 , and BERT BIBREF16 , have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in QA and NLI. However, there is not much improvement in (T)ABSA task with the direct use of the pre-trained BERT model (see Table TABREF19 ). We think this is due to the inappropriate use of the pre-trained BERT model. Since the input representation of BERT can represent both a single text sentence and a pair of text sentences, we can convert (T)ABSA into a sentence-pair classification task and fine-tune the pre-trained BERT. In this paper, we investigate several methods of constructing an auxiliary sentence and transform (T)ABSA into a sentence-pair classification task. We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on (T)ABSA task. We also conduct a comparative experiment to verify that the classification based on a sentence-pair is better than the single-sentence classification with fine-tuned BERT, which means that the improvement is not only from BERT but also from our method. In particular, our contribution is two-fold: 1. We propose a new solution of (T)ABSA by converting it to a sentence-pair classification task. 2. 
We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets. Methodology In this section, we describe our method in detail. Task description In TABSA, a sentence INLINEFORM0 usually consists of a series of words: INLINEFORM1 , and some of the words INLINEFORM2 are pre-identified targets INLINEFORM3 , following BIBREF4 , we set the task as a 3-class classification problem: given the sentence INLINEFORM4 , a set of target entities INLINEFORM5 and a fixed aspect set INLINEFORM6 , predict the sentiment polarity INLINEFORM7 over the full set of the target-aspect pairs INLINEFORM8 . As we can see in Table TABREF6 , the gold standard polarity of (LOCATION2, price) is negative, while the polarity of (LOCATION1, price) is none. In ABSA, the target-aspect pairs INLINEFORM0 become only aspects INLINEFORM1 . This setting is equivalent to learning subtasks 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity) of SemEval-2014 Task 4 at the same time. Construction of the auxiliary sentence For simplicity, we mainly describe our method with TABSA as an example. We consider the following four methods to convert the TABSA task into a sentence pair classification task: The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same. For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is “what do you think of the safety of location - 1 ?" For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler. The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCATION1, safety) pair as an example: the auxiliary sentence is: “location - 1 - safety". For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution. At this time, each target-aspect pair will generate three sequences such as “the polarity of the aspect safety of location - 1 is positive", “the polarity of the aspect safety of location - 1 is negative", “the polarity of the aspect safety of location - 1 is none". We use the probability value of INLINEFORM1 as the matching score. For a target-aspect pair which generates three sequences ( INLINEFORM2 ), we take the class of the sequence with the highest matching score for the predicted category. The difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence. The auxiliary sentences are: “location - 1 - safety - positive", “location - 1 - safety - negative", and “location - 1 - safety - none". After we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task. As shown in Table TABREF19 , this is a necessary operation that can significantly improve the experimental results of the TABSA task. Fine-tuning pre-trained BERT BERT BIBREF16 is a new language representation model, which uses bidirectional transformers to pre-train a large corpus, and fine-tunes the pre-trained model on other tasks. We fine-tune the pre-trained BERT model on TABSA task. Let's take a brief look at the input representation and the fine-tuning procedure. The input representation of the BERT can explicitly represent a pair of text sentences in a sequence of tokens. 
For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. For classification tasks, the first word of each sequence is a unique classification embedding ([CLS]). BERT fine-tuning is straightforward. To obtain a fixed-dimensional pooled representation of the input sequence, we use the final hidden state (i.e., the output of the transformer) of the first token as the input. We denote the vector as INLINEFORM0 . Then we add a classification layer whose parameter matrix is INLINEFORM1 , where INLINEFORM2 is the number of categories. Finally, the probability of each category INLINEFORM3 is calculated by the softmax function INLINEFORM4 . BERT for single sentence classification tasks. Suppose the number of target categories are INLINEFORM0 and aspect categories are INLINEFORM1 . We consider TABSA as a combination of INLINEFORM2 target-aspect-related sentiment classification problems, first classifying each sentiment classification problem, and then summarizing the results obtained. For ABSA, We fine-tune pre-trained BERT model to train INLINEFORM3 classifiers for all aspects and then summarize the results. BERT for sentence pair classification tasks. Based on the auxiliary sentence constructed in Section SECREF7 , we use the sentence-pair classification approach to solve (T)ABSA. Corresponding to the four ways of constructing sentences, we name the models: BERT-pair-QA-M, BERT-pair-NLI-M, BERT-pair-QA-B, and BERT-pair-NLI-B. Datasets We evaluate our method on the SentiHood BIBREF4 dataset, which consists of 5,215 sentences, 3,862 of which contain a single target, and the remainder multiple targets. Each sentence contains a list of target-aspect pairs INLINEFORM0 with the sentiment polarity INLINEFORM1 . Ultimately, given a sentence INLINEFORM2 and the target INLINEFORM3 in the sentence, we need to: (1) detect the mention of an aspect INLINEFORM0 for the target INLINEFORM1 ; (2) determine the positive or negative sentiment polarity INLINEFORM0 for detected target-aspect pairs. We also evaluate our method on SemEval-2014 Task 4 BIBREF1 dataset for aspect-based sentiment analysis. The only difference from the SentiHood is that the target-aspect pairs INLINEFORM0 become only aspects INLINEFORM1 . This setting allows us to jointly evaluate subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity). Hyperparameters We use the pre-trained uncased BERT-base model for fine-tuning. The number of Transformer blocks is 12, the hidden layer size is 768, the number of self-attention heads is 12, and the total number of parameters for the pre-trained model is 110M. When fine-tuning, we keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 24. Exp-I: TABSA We compare our model with the following models: LR BIBREF4 : a logistic regression classifier with n-gram and pos-tag features. LSTM-Final BIBREF4 : a biLSTM model with the final state as a representation. LSTM-Loc BIBREF4 : a biLSTM model with the state associated with the target position as a representation. LSTM+TA+SA BIBREF12 : a biLSTM model which introduces complex target-level and sentence-level attention mechanisms. SenticLSTM BIBREF12 : an upgraded version of the LSTM+TA+SA model which introduces external information from SenticNet BIBREF17 . Dmu-Entnet BIBREF13 : a bi-directional EntNet BIBREF18 with external “memory chains” with a delayed memory update mechanism to track entities. 
During the evaluation of SentiHood, following BIBREF4 , we only consider the four most frequently seen aspects (general, price, transit-location, safety). When evaluating the aspect detection, following BIBREF12 , we use strict accuracy and Macro-F1, and we also report AUC. In sentiment classification, we use accuracy and macro-average AUC as the evaluation indices. Results on SentiHood are presented in Table TABREF19 . The results of the BERT-single model on aspect detection are better than Dmu-Entnet, but the accuracy of sentiment classification is much lower than that of both SenticLstm and Dmu-Entnet, with a difference of 3.8 and 5.5 respectively. However, BERT-pair outperforms other models on aspect detection and sentiment analysis by a substantial margin, obtaining 9.4 macro-average F1 and 2.6 accuracies improvement over Dmu-Entnet. Overall, the performance of the four BERT-pair models is close. It is worth noting that BERT-pair-NLI models perform relatively better on aspect detection, while BERT-pair-QA models perform better on sentiment classification. Also, the BERT-pair-QA-B and BERT-pair-NLI-B models can achieve better AUC values on sentiment classification than the other models. Exp-II: ABSA The benchmarks for SemEval-2014 Task 4 are the two best performing systems in BIBREF1 and ATAE-LSTM BIBREF8 . When evaluating SemEval-2014 Task 4 subtask 3 and subtask 4, following BIBREF1 , we use Micro-F1 and accuracy respectively. Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings. Discussion Why is the experimental result of the BERT-pair model so much better? On the one hand, we convert the target and aspect information into an auxiliary sentence, which is equivalent to exponentially expanding the corpus. A sentence INLINEFORM0 in the original data set will be expanded into INLINEFORM1 in the sentence pair classification task. On the other hand, it can be seen from the amazing improvement of the BERT model on the QA and NLI tasks BIBREF16 that the BERT model has an advantage in dealing with sentence pair classification tasks. This advantage comes from both unsupervised masked language model and next sentence prediction tasks. TABSA is more complicated than SA due to additional target and aspect information. Directly fine-tuning the pre-trained BERT on TABSA does not achieve performance growth. However, when we separate the target and the aspect to form an auxiliary sentence and transform the TABSA into a sentence pair classification task, the scenario is similar to QA and NLI, and then the advantage of the pre-trained BERT model can be fully utilized. Our approach is not limited to TABSA, and this construction method can be used for other similar tasks. For ABSA, we can use the same approach to construct the auxiliary sentence with only aspects. In BERT-pair models, BERT-pair-QA-B and BERT-pair-NLI-B achieve better AUC values on sentiment classification, probably because of the modeling of label information. Conclusion In this paper, we constructed an auxiliary sentence to transform (T)ABSA from a single sentence classification task to a sentence pair classification task. 
We fine-tuned the pre-trained BERT model on the sentence pair classification task and obtained the new state-of-the-art results. We compared the experimental results of single sentence classification and sentence pair classification based on BERT fine-tuning, analyzed the advantages of sentence pair classification, and verified the validity of our conversion method. In the future, we will apply this conversion method to other similar tasks. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by Shanghai Municipal Science and Technology Commission (No. 16JC1420401 and 17JC1404100), National Key Research and Development Program of China (No. 2017YFB1002104), and National Natural Science Foundation of China (No. 61672162 and 61751201).
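As a brief illustration of the sentence-pair fine-tuning setup described in the methodology above (BERT-base uncased, learning rate 2e-5, batch size 24, 4 epochs), the following sketch uses the Hugging Face transformers library. The library choice and the data handling are assumptions made for illustration; they are not necessarily how the authors implemented their models.

```python
# Hedged sketch: fine-tuning BERT on (sentence, auxiliary sentence) pairs.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def encode(sentences, auxiliary_sentences, labels):
    """Encode each (original sentence, auxiliary sentence) pair as one BERT input."""
    enc = tokenizer(sentences, auxiliary_sentences, padding=True,
                    truncation=True, return_tensors="pt")
    return TensorDataset(enc["input_ids"], enc["attention_mask"],
                         enc["token_type_ids"], torch.tensor(labels))

def fine_tune(train_dataset, epochs=4, lr=2e-5, batch_size=24):
    loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, token_type_ids, labels in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids, labels=labels)
            out.loss.backward()   # cross-entropy over the pooled [CLS] representation
            optimizer.step()
            optimizer.zero_grad()
    return model
```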
On subtask 3 best proposed model has F1 score of 92.18 compared to best previous F1 score of 88.58. On subtask 4 best proposed model has 85.9, 89.9 and 95.6 compared to best previous results of 82.9, 84.0 and 89.9 on 4-way, 3-way and binary aspect polarity.
3554ac92d4f2d00dbf58f7b4ff2b36a852854e95
3554ac92d4f2d00dbf58f7b4ff2b36a852854e95_0
Q: How do they generate the auxiliary sentence? Text: Introduction Sentiment analysis (SA) is an important task in natural language processing. It solves the computational processing of opinions, emotions, and subjectivity - sentiment is collected, analyzed and summarized. It has received much attention not only in academia but also in industry, providing real-time feedback through online reviews on websites such as Amazon, which can take advantage of customers' opinions on specific products or services. The underlying assumption of this task is that the entire text has an overall polarity. However, the users' comments may contain different aspects, such as: “This book is a hardcover version, but the price is a bit high." The polarity in `appearance' is positive, and the polarity regarding `price' is negative. Aspect-based sentiment analysis (ABSA) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 aims to identify fine-grained polarity towards a specific aspect. This task allows users to evaluate aggregated sentiments for each aspect of a given product or service and gain a more granular understanding of their quality. Both SA and ABSA are sentence-level or document-level tasks, but one comment may refer to more than one object, and sentence-level tasks cannot handle sentences with multiple targets. Therefore, BIBREF4 introduce the task of targeted aspect-based sentiment analysis (TABSA), which aims to identify fine-grained opinion polarity towards a specific aspect associated with a given target. The task can be divided into two steps: (1) the first step is to determine the aspects associated with each target; (2) the second step is to resolve the polarity of aspects to a given target. The earliest work on (T)ABSA relied heavily on feature engineering BIBREF5 , BIBREF6 , and subsequent neural network-based methods BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 achieved higher accuracy. Recently, BIBREF12 incorporate useful commonsense knowledge into a deep neural network to further enhance the result of the model. BIBREF13 optimize the memory network and apply it to their model to better capture linguistic structure. More recently, the pre-trained language models, such as ELMo BIBREF14 , OpenAI GPT BIBREF15 , and BERT BIBREF16 , have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in QA and NLI. However, there is not much improvement in (T)ABSA task with the direct use of the pre-trained BERT model (see Table TABREF19 ). We think this is due to the inappropriate use of the pre-trained BERT model. Since the input representation of BERT can represent both a single text sentence and a pair of text sentences, we can convert (T)ABSA into a sentence-pair classification task and fine-tune the pre-trained BERT. In this paper, we investigate several methods of constructing an auxiliary sentence and transform (T)ABSA into a sentence-pair classification task. We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on (T)ABSA task. We also conduct a comparative experiment to verify that the classification based on a sentence-pair is better than the single-sentence classification with fine-tuned BERT, which means that the improvement is not only from BERT but also from our method. In particular, our contribution is two-fold: 1. We propose a new solution of (T)ABSA by converting it to a sentence-pair classification task. 2. 
We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets. Methodology In this section, we describe our method in detail. Task description In TABSA, a sentence INLINEFORM0 usually consists of a series of words: INLINEFORM1 , and some of the words INLINEFORM2 are pre-identified targets INLINEFORM3 , following BIBREF4 , we set the task as a 3-class classification problem: given the sentence INLINEFORM4 , a set of target entities INLINEFORM5 and a fixed aspect set INLINEFORM6 , predict the sentiment polarity INLINEFORM7 over the full set of the target-aspect pairs INLINEFORM8 . As we can see in Table TABREF6 , the gold standard polarity of (LOCATION2, price) is negative, while the polarity of (LOCATION1, price) is none. In ABSA, the target-aspect pairs INLINEFORM0 become only aspects INLINEFORM1 . This setting is equivalent to learning subtasks 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity) of SemEval-2014 Task 4 at the same time. Construction of the auxiliary sentence For simplicity, we mainly describe our method with TABSA as an example. We consider the following four methods to convert the TABSA task into a sentence pair classification task: The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same. For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is “what do you think of the safety of location - 1 ?" For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler. The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCATION1, safety) pair as an example: the auxiliary sentence is: “location - 1 - safety". For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution. At this time, each target-aspect pair will generate three sequences such as “the polarity of the aspect safety of location - 1 is positive", “the polarity of the aspect safety of location - 1 is negative", “the polarity of the aspect safety of location - 1 is none". We use the probability value of INLINEFORM1 as the matching score. For a target-aspect pair which generates three sequences ( INLINEFORM2 ), we take the class of the sequence with the highest matching score for the predicted category. The difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence. The auxiliary sentences are: “location - 1 - safety - positive", “location - 1 - safety - negative", and “location - 1 - safety - none". After we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task. As shown in Table TABREF19 , this is a necessary operation that can significantly improve the experimental results of the TABSA task. Fine-tuning pre-trained BERT BERT BIBREF16 is a new language representation model, which uses bidirectional transformers to pre-train a large corpus, and fine-tunes the pre-trained model on other tasks. We fine-tune the pre-trained BERT model on TABSA task. Let's take a brief look at the input representation and the fine-tuning procedure. The input representation of the BERT can explicitly represent a pair of text sentences in a sequence of tokens. 
For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. For classification tasks, the first word of each sequence is a unique classification embedding ([CLS]). BERT fine-tuning is straightforward. To obtain a fixed-dimensional pooled representation of the input sequence, we use the final hidden state (i.e., the output of the transformer) of the first token as the input. We denote the vector as INLINEFORM0 . Then we add a classification layer whose parameter matrix is INLINEFORM1 , where INLINEFORM2 is the number of categories. Finally, the probability of each category INLINEFORM3 is calculated by the softmax function INLINEFORM4 . BERT for single sentence classification tasks. Suppose the number of target categories are INLINEFORM0 and aspect categories are INLINEFORM1 . We consider TABSA as a combination of INLINEFORM2 target-aspect-related sentiment classification problems, first classifying each sentiment classification problem, and then summarizing the results obtained. For ABSA, We fine-tune pre-trained BERT model to train INLINEFORM3 classifiers for all aspects and then summarize the results. BERT for sentence pair classification tasks. Based on the auxiliary sentence constructed in Section SECREF7 , we use the sentence-pair classification approach to solve (T)ABSA. Corresponding to the four ways of constructing sentences, we name the models: BERT-pair-QA-M, BERT-pair-NLI-M, BERT-pair-QA-B, and BERT-pair-NLI-B. Datasets We evaluate our method on the SentiHood BIBREF4 dataset, which consists of 5,215 sentences, 3,862 of which contain a single target, and the remainder multiple targets. Each sentence contains a list of target-aspect pairs INLINEFORM0 with the sentiment polarity INLINEFORM1 . Ultimately, given a sentence INLINEFORM2 and the target INLINEFORM3 in the sentence, we need to: (1) detect the mention of an aspect INLINEFORM0 for the target INLINEFORM1 ; (2) determine the positive or negative sentiment polarity INLINEFORM0 for detected target-aspect pairs. We also evaluate our method on SemEval-2014 Task 4 BIBREF1 dataset for aspect-based sentiment analysis. The only difference from the SentiHood is that the target-aspect pairs INLINEFORM0 become only aspects INLINEFORM1 . This setting allows us to jointly evaluate subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity). Hyperparameters We use the pre-trained uncased BERT-base model for fine-tuning. The number of Transformer blocks is 12, the hidden layer size is 768, the number of self-attention heads is 12, and the total number of parameters for the pre-trained model is 110M. When fine-tuning, we keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 24. Exp-I: TABSA We compare our model with the following models: LR BIBREF4 : a logistic regression classifier with n-gram and pos-tag features. LSTM-Final BIBREF4 : a biLSTM model with the final state as a representation. LSTM-Loc BIBREF4 : a biLSTM model with the state associated with the target position as a representation. LSTM+TA+SA BIBREF12 : a biLSTM model which introduces complex target-level and sentence-level attention mechanisms. SenticLSTM BIBREF12 : an upgraded version of the LSTM+TA+SA model which introduces external information from SenticNet BIBREF17 . Dmu-Entnet BIBREF13 : a bi-directional EntNet BIBREF18 with external “memory chains” with a delayed memory update mechanism to track entities. 
During the evaluation of SentiHood, following BIBREF4 , we only consider the four most frequently seen aspects (general, price, transit-location, safety). When evaluating the aspect detection, following BIBREF12 , we use strict accuracy and Macro-F1, and we also report AUC. In sentiment classification, we use accuracy and macro-average AUC as the evaluation indices. Results on SentiHood are presented in Table TABREF19 . The results of the BERT-single model on aspect detection are better than Dmu-Entnet, but the accuracy of sentiment classification is much lower than that of both SenticLstm and Dmu-Entnet, with a difference of 3.8 and 5.5 respectively. However, BERT-pair outperforms other models on aspect detection and sentiment analysis by a substantial margin, obtaining 9.4 macro-average F1 and 2.6 accuracies improvement over Dmu-Entnet. Overall, the performance of the four BERT-pair models is close. It is worth noting that BERT-pair-NLI models perform relatively better on aspect detection, while BERT-pair-QA models perform better on sentiment classification. Also, the BERT-pair-QA-B and BERT-pair-NLI-B models can achieve better AUC values on sentiment classification than the other models. Exp-II: ABSA The benchmarks for SemEval-2014 Task 4 are the two best performing systems in BIBREF1 and ATAE-LSTM BIBREF8 . When evaluating SemEval-2014 Task 4 subtask 3 and subtask 4, following BIBREF1 , we use Micro-F1 and accuracy respectively. Results on SemEval-2014 are presented in Table TABREF35 and Table TABREF36 . We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single. The BERT-pair-NLI-B model achieves the best performance for aspect category detection. For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings. Discussion Why is the experimental result of the BERT-pair model so much better? On the one hand, we convert the target and aspect information into an auxiliary sentence, which is equivalent to exponentially expanding the corpus. A sentence INLINEFORM0 in the original data set will be expanded into INLINEFORM1 in the sentence pair classification task. On the other hand, it can be seen from the amazing improvement of the BERT model on the QA and NLI tasks BIBREF16 that the BERT model has an advantage in dealing with sentence pair classification tasks. This advantage comes from both unsupervised masked language model and next sentence prediction tasks. TABSA is more complicated than SA due to additional target and aspect information. Directly fine-tuning the pre-trained BERT on TABSA does not achieve performance growth. However, when we separate the target and the aspect to form an auxiliary sentence and transform the TABSA into a sentence pair classification task, the scenario is similar to QA and NLI, and then the advantage of the pre-trained BERT model can be fully utilized. Our approach is not limited to TABSA, and this construction method can be used for other similar tasks. For ABSA, we can use the same approach to construct the auxiliary sentence with only aspects. In BERT-pair models, BERT-pair-QA-B and BERT-pair-NLI-B achieve better AUC values on sentiment classification, probably because of the modeling of label information. Conclusion In this paper, we constructed an auxiliary sentence to transform (T)ABSA from a single sentence classification task to a sentence pair classification task. 
We fine-tuned the pre-trained BERT model on the sentence pair classification task and obtained the new state-of-the-art results. We compared the experimental results of single sentence classification and sentence pair classification based on BERT fine-tuning, analyzed the advantages of sentence pair classification, and verified the validity of our conversion method. In the future, we will apply this conversion method to other similar tasks. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by Shanghai Municipal Science and Technology Commission (No. 16JC1420401 and 17JC1404100), National Key Research and Development Program of China (No. 2017YFB1002104), and National Natural Science Foundation of China (No. 61672162 and 61751201).
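To make the four auxiliary-sentence constructions described in the methodology above concrete, the following small sketch generates QA-M, NLI-M, QA-B and NLI-B style sentences for the (LOCATION1, safety) example. The function names and exact string formatting are illustrative; only the sentence patterns themselves are taken from the description above.

```python
# Illustrative sketch of the four auxiliary-sentence construction methods.
LABELS = ["positive", "negative", "none"]

def qa_m(target: str, aspect: str) -> str:
    # Question-style auxiliary sentence, e.g. "what do you think of the safety of location - 1 ?"
    return f"what do you think of the {aspect} of {target} ?"

def nli_m(target: str, aspect: str) -> str:
    # Simple pseudo-sentence, e.g. "location - 1 - safety"
    return f"{target} - {aspect}"

def qa_b(target: str, aspect: str) -> list:
    # One label-bearing question per polarity; the model scores each one and
    # the label of the highest-scoring sequence is taken as the prediction.
    return [f"the polarity of the aspect {aspect} of {target} is {label}"
            for label in LABELS]

def nli_b(target: str, aspect: str) -> list:
    # Label-bearing pseudo-sentences, e.g. "location - 1 - safety - positive"
    return [f"{target} - {aspect} - {label}" for label in LABELS]

print(qa_m("location - 1", "safety"))
print(nli_b("location - 1", "safety"))
```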
The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same., For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler., For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( INLINEFORM0 ) to obtain the probability distribution, auxiliary sentence changes from a question to a pseudo-sentence
7b35593033e4c6b9dccba98f22a7eeaa3385df38
7b35593033e4c6b9dccba98f22a7eeaa3385df38_0
Q: Have any baseline models been trained on this abusive language dataset? Text: Introduction Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse, exacerbating social tensions and excluding some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics. Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0 p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com. In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate on different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices when creating datasets for abusive content detection. Background The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6. All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and biases BIBREF7.
Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging methods such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Closely related work includes: Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6). Waseem et al. BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person-directed abuse and (ii) explicit versus implicit abuse. They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p. 81) Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661). Vidgen et al. describe several limitations with existing training datasets for abusive content, most notably how `they contain systematic biases towards certain types and targets of abuse.' (BIBREF13, p. 2). They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]).
Chetty and AlathurBIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content BIBREF14. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth. Fortuna and NunesBIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review. Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17. This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse. To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP). They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation. As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims. Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection. 
Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research. Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing. Research Aim Four: to identify best practices for creating an abusive content training dataset. Analysis of training datasets Relevant publications have been identified from four sources to identify training datasets for abusive content detection: The Scopus database of academic publications, identified using keyword searches. The ACL Anthology database of NLP research papers, identified using keyword searches. The ArXiv database of preprints, identified using keyword searches. Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL). Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17. Analysis of training datasets ::: The purpose of training datasets ::: Problems addressed by datasets Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators. Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26. Accordingly, many dataset creators state that aggressive language and online harassment is a social problem which they want to help address Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed in 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed. Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general, it also negatively impacts engagement and thus the revenue of the host platforms. 
Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks. Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably. From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17. In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: the nature of abuse This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators.
The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11). Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category. Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle $offender, victim, message$\rangle $ tuples. Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups)BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorization. Flagged content. Content which is reported by community members or assessed by community and professional content moderators. This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. 
In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43. Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group- directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language. Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: Granularity of taxonomies This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength. In general, Social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. 
directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis. Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular: Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice. Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a group rather than `groups' per se. Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people. Multi-class (or multi-label) classification of different types of abuse, such as: Multiple targets (e.g. racist, sexist and non-hateful content) or Multiple strengths (e.g. none, implicit and explicit content). Multiple types (e.g. threats versus derogatory statements or benevolent versus hostile statements). Multi-class classification of different types of abuse which is integrated with other dimensions of abuse. Analysis of training datasets ::: The content of training datasets ::: The `Level' of content 49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this lead to the creation of a publically available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention. Analysis of training datasets ::: The content of training datasets ::: Language The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Noticeably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection. 
For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is. Analysis of training datasets ::: The content of training datasets ::: Source of data Training datasets use data collected from a range of online spaces, including from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube. The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge: Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host. The demographics of users on different platforms vary considerably. Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are also important cross-national differences: British Twitters are better-educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce. Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. 
Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse. Analysis of training datasets ::: The content of training datasets ::: Size The size of the training datasets varies considerably, from 469 posts to 17 million; a difference of more than four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al. BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories or inexpertly and unthoughtfully annotated, could still lead to the development of poor classification systems. The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have so far seen only limited application in abusive content detection BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64. Analysis of training datasets ::: The content of training datasets ::: Class distribution and sampling Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated. For instance, if a dataset is 70% hate speech then a zero-rule classification system (i.e. where everything is categorised as hate speech) will achieve 70% precision and 100% recall. This should be used as a baseline for evaluating performance: 80% precision is less impressive compared with this baseline. However, 80% precision on an evenly balanced dataset would be impressive. This is particularly important when evaluating the performance of ternary classifiers, where classes can be considerably imbalanced. On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse (less than 1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse.
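To make this baseline concrete, the majority-class (`zero-rule') figures can be computed directly from a dataset's class distribution before any modelling takes place. The following is a minimal sketch using hypothetical label lists rather than any of the reviewed datasets:

from collections import Counter

def zero_rule_baseline(labels, positive="abusive"):
    # Precision/recall of a classifier that predicts `positive` for every item.
    prevalence = Counter(labels)[positive] / len(labels)
    # Everything is predicted positive, so recall is 1.0 by construction
    # and precision equals the prevalence of the positive class.
    return {"precision": prevalence, "recall": 1.0}

skewed = ["abusive"] * 70 + ["not abusive"] * 30    # 70% abusive
balanced = ["abusive"] * 50 + ["not abusive"] * 50  # 50% abusive
print(zero_rule_baseline(skewed))    # {'precision': 0.7, 'recall': 1.0}
print(zero_rule_baseline(balanced))  # {'precision': 0.5, 'recall': 1.0}

Reporting this baseline alongside headline precision and recall makes it easier to judge whether a classifier has learned anything beyond the class distribution of the dataset it was evaluated on.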
Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37. Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far right communities – which means that most hate speech classifiers implicitly pick up on right wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68. Analysis of training datasets ::: The content of training datasets ::: Identity of the content creators The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations. Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse. Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus. Analysis of training datasets ::: Annotation of training datasets ::: Annotation process How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories: Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. 
Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20. Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts, who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long-range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful. Professional moderators (3 datasets). Professional moderators offer a standardized approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations. However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing out on more nuanced and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets. A mix of crowdsourcing and experts (6 datasets). Synthetic data creation (4 datasets). Synthetic datasets are an interesting option, although they are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share.
Knowing who the annotators are is important because `their own “social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed groups of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77. Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) Demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes: Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include: Age Ethnicity and race Religion Gender Sexual Orientation Expertise and experience. Relevant variables include: Field of research Years of experience Research status (e.g. research assistant or post-doc) Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include: Experiences of being targeted by online abuse. Experiences of viewing online abuse. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation A key source of variation across datasets is whether annotators were given detailed guidelines, very minimal guidelines or no guidelines at all. Analysing this issue is made difficult by the fact that many dataset creators do not share their annotation guidelines. 21 of the datasets we study do not provide the guidelines and 14 only provide them in a highly summarised form. In just 15 datasets is detailed information given (and these are reported on in just 9 publications). Requiring researchers to publish annotation guidelines not only helps future researchers to better understand what datasets contain but also to improve and extend them. This could be crucial for improving the quality of annotations; as Ross et al. recommend, `raters need more detailed instructions for annotation.' BIBREF78 The degree of detail given in guidelines is linked to how the notion of `abuse' is understood. Some dataset creators construct clear and explicit guidelines in an attempt to ensure that annotations are uniform and align closely with social scientific concepts. 
In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that annotators' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which are often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question of `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination. Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive false positives in classification, i.e. labelling non-hate-speech utterances as hate speech. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Irony This covers statements that have a meaning contrary to the one that might be gleaned at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted. Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments, and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24.
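The low agreement reported for sarcasm and irony is typically quantified with chance-corrected measures such as Cohen's kappa. The sketch below computes kappa for two annotators from first principles; the annotations are hypothetical and purely illustrative:

from collections import Counter

def cohens_kappa(ann_a, ann_b):
    # Chance-corrected agreement between two annotators over the same items.
    n = len(ann_a)
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(ann_a) | set(ann_b))
    return (observed - expected) / (1 - expected)

# Hypothetical labels for ten ironic posts (1 = abusive, 0 = not abusive)
annotator_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
annotator_b = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(annotator_a, annotator_b), 2))  # 0.2

Values near zero indicate agreement little better than chance, which is the pattern often reported when annotators label ironic or sarcastic content.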
Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Calumniation This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue, it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? However, if the information is then found out to be true does it make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not taken any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Intent This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that “aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speakers' other online messages) is crucial to achieving quality annotations. However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 require an “intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are “unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema. Kenny et al. 
BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34. Dataset sharing ::: The challenges and opportunities of achieving Open Science All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GDPR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97. The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process. Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less well-funded researchers and organisations, including researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36. For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impact the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing. More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al.
must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting. Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset. Dataset sharing ::: Research infrastructure: Solutions for sharing training datasets The problem of data access and sharing remains unresolved in the field of abusive content detection, much like other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions. There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from gaining access. These problems are closely entwined and the solutions available, which follow, have implications for both of them. Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service. Data `philanthropy' or `donations'. These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new, unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve the problem of how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection. Platform-backed sharing. Platforms could share datasets and support researchers' access.
There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made a large dataset of accounts linked to potential information operations, known as the “IRA" dataset (Internet Research Agency). This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets. Not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way. This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly or they provide special access post-hoc to datasets which researchers have collected publicly through their API - thereby making sure that datasets do not degrade over time. Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' ( BIBREF104 p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments. Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved to the repository, they could access all datasets publicly available – different levels of permission could be implemented for different datasets, depending on commercial or research sensitivity. Furthermore, this repository could contain all of the metadata reported with datasets and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of the HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain. Dataset sharing ::: A new repository of training datasets: Hatespeechdata.com The resources and infrastructure to create a dedicated data trust and API for sharing abusive content training datasets is substantial and requires considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. 
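One way to make such data statements easier to deposit, search and compare is to store them as structured records at the point of submission. The sketch below shows one possible representation; the field names are hypothetical and do not describe the actual schema used by hatespeechdata.com:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataStatementRecord:
    # Hypothetical machine-readable summary of a dataset's data statement.
    name: str
    task: str                      # e.g. "group-directed abuse, binary"
    language: str
    platform: str
    size: int
    proportion_abusive: float
    annotation_process: str        # e.g. "crowdsourcing", "academic experts"
    annotator_information: Optional[str] = None
    guidelines_published: bool = False
    sampling_methods: List[str] = field(default_factory=list)

example = DataStatementRecord(
    name="example-dataset",
    task="group-directed abuse, binary",
    language="English",
    platform="Twitter",
    size=24000,
    proportion_abusive=0.35,
    annotation_process="crowdsourcing",
    guidelines_published=True,
    sampling_methods=["keyword sampling"],
)

The fields mirror the dimensions analysed in this review (task, language, platform, size, class distribution, annotation process, annotator information and guideline availability), which would allow deposited datasets to be filtered and compared programmatically.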
The website also contains previously published abusive keyword dictionaries, which are not analysed here but which some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future. Best Practices for training dataset creation Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation. Best Practices for training dataset creation ::: Task formation: Defining the task addressed by the dataset Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fits under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable. Best Practices for training dataset creation ::: Selecting data for abusive language annotation Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to introduce bias, and so it is important to record what decisions are made (and why) in this step. Dataset builders should have a specific target size in mind and also have an idea of the minimum amount of data that is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low. However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms. Best Practices for training dataset creation ::: Annotating abusive language Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines that are easy to grasp and have clear examples BIBREF107. The best examples both illustrate the concepts (such as `threatening language') and provide insight into `edge cases': content that only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above).
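As an illustration of how such decisions can be written down explicitly, a guideline entry can pair each category with a definition, positive examples, and edge cases that record both a decision and its rationale. The entry below is entirely hypothetical and is not drawn from any of the reviewed datasets:

# Hypothetical guideline entry; real guidelines need more detail and iteration.
guideline_entry = {
    "category": "threatening language",
    "definition": "Statements expressing intent to inflict harm on a person or group.",
    "positive_examples": [
        "Explicit threats of violence directed at an identifiable target.",
    ],
    "edge_cases": [
        {
            "case": "Ironic or hyperbolic 'threat' clearly framed as a joke",
            "decision": "not abusive",
            "rationale": "No discernible intent to intimidate; mark as 'uncertain' if context is missing.",
        },
    ],
    "when_uncertain": "escalate to adjudication rather than guessing",
}

Keeping edge cases and rationales in a structured form also makes it easier to publish the guidelines alongside the dataset, as recommended in the summary below.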
Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussions with annotators about the language that they have seen “in the field" offer an opportunity to enhance and refine guidelines - and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture where annotators are comfortable providing open feedback and also describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13. Best Practices for training dataset creation ::: Documenting methods, data, and annotators The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and gives more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision) then a more nuanced and aware classifier could be developed as, in some cases, it can be better to maximise recall of annotations rather than maximise agreement BIBREF77. Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset and to then use it to describe to readers the rationale behind decisions. Details about the end-to-end dataset creation process are welcomed. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g. number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted as the interface design can influence the annotations. Best Practices for training dataset creation ::: Best practice summary Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, this has led to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high-quality training datasets. For future researchers, we summarise our recommendations in the following seven points: Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research. Avoid using `easy to access' data, and instead explore new sources which may have greater diversity.
Consider what biases may be created by your sampling method. Determine size based on data sparsity and having enough positive classes rather than `what is possible'. Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories. Provide annotators with guidelines; develop them iteratively and publish them with your dataset. Consider using trained annotators given the complexities of abusive content. Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being). Report on every step of the research through a Data Statement. Conclusion This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for creation of training datasets for detection of online abuse. We have effectively met the four research aims elaborated at the start of the paper. Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be useable, scalable and with few biases then we need to train them on the right data: garbage in will only lead to garbage out.
No
334972ba967444f98865dea4c2bc0eb9416f2ff7
334972ba967444f98865dea4c2bc0eb9416f2ff7_0
Q: How big are the dataset and catalogue? Text: Introduction Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse and exacerbating social tensions, and could lead to the exclusion of some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics. Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0 p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com. In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices when creating datasets for abusive content detection. Background The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6. All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and bias BIBREF7.
Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging technically sophisticated methods, such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Closely relevant work includes: Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6). Waseem et al. BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person-directed abuse and (ii) explicit versus implicit abuse. They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p. 81) Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661). Vidgen et al. describe several limitations with existing training datasets for abusive content, most notably that `they contain systematic biases towards certain types and targets of abuse.' BIBREF13[p.2]. They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]).
Chetty and AlathurBIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content BIBREF14. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth. Fortuna and NunesBIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review. Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17. This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse. To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP). They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation. As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims. Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection. 
Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research. Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing. Research Aim Four: to identify best practices for creating an abusive content training dataset. Analysis of training datasets Relevant publications have been identified from four sources to identify training datasets for abusive content detection: The Scopus database of academic publications, identified using keyword searches. The ACL Anthology database of NLP research papers, identified using keyword searches. The ArXiv database of preprints, identified using keyword searches. Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL). Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17. Analysis of training datasets ::: The purpose of training datasets ::: Problems addressed by datasets Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators. Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26. Accordingly, many dataset creators state that aggressive language and online harassment is a social problem which they want to help address Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed in 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed. Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general, it also negatively impacts engagement and thus the revenue of the host platforms. 
Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks. Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably. From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17. In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: the nature of abuse This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators.
The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11). Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category. Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle $offender, victim, message$\rangle $ tuples. Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups)BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorization. Flagged content. Content which is reported by community members or assessed by community and professional content moderators. This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. 
In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43. Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group- directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language. Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: Granularity of taxonomies This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength. In general, Social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. 
directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis. Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular: Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice. Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a group rather than `groups' per se. Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people. Multi-class (or multi-label) classification of different types of abuse, such as multiple targets (e.g. racist, sexist and non-hateful content), multiple strengths (e.g. none, implicit and explicit content), or multiple types (e.g. threats versus derogatory statements, or benevolent versus hostile statements). Multi-class classification of different types of abuse which is integrated with other dimensions of abuse. Analysis of training datasets ::: The content of training datasets ::: The `Level' of content 49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this led to the creation of a publicly available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content, audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention. Analysis of training datasets ::: The content of training datasets ::: Language The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Noticeably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection.
For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is. Analysis of training datasets ::: The content of training datasets ::: Source of data Training datasets use data collected from a range of online spaces, including from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube. The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge: Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host. The demographics of users on different platforms vary considerably. Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are also important cross-national differences: British Twitters are better-educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce. Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. 
Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse. Analysis of training datasets ::: The content of training datasets ::: Size The size of the training datasets varies considerably from 469 posts to 17 million; a difference of four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al.BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-Score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories or inexpertly and unthoughtfully annotated, could still lead to the development of poor classification systems. The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have only been limitedly applied to abusive content detection so far BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64. Analysis of training datasets ::: The content of training datasets ::: Class distribution and sampling Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated. For instance, if a dataset is 70% hate speech then a zero-rule classification system (i.e. where everything is categorised as hate speech) will achieve 70% precision and 100% recall. This should be used as a baseline for evaluating performance: 80% precision is less impressive compared with this baseline. However, 80% precision on an evenly balanced dataset would be impressive. This is particularly important when evaluating the performance of ternary classifiers, when classes can be considerably imbalanced. On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse ( 1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse. 
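To make the zero-rule baseline described above concrete, the following minimal Python sketch computes it from a list of labels; the toy labels, class names and the 0.70/0.80 comparison are purely illustrative and are not drawn from any of the reviewed datasets.

    from collections import Counter

    def zero_rule_baseline(labels, positive_class="abusive"):
        # A classifier that predicts `positive_class` for every item finds every
        # true positive (recall = 1.0), while its precision equals the prevalence
        # of the positive class in the dataset.
        prevalence = Counter(labels)[positive_class] / len(labels)
        precision, recall = prevalence, 1.0
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Toy dataset in which 70% of items are labelled abusive.
    toy_labels = ["abusive"] * 70 + ["not_abusive"] * 30
    p, r, f1 = zero_rule_baseline(toy_labels)
    print(f"zero-rule baseline: precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")
    # A reported precision of 0.80 on this dataset is only a modest improvement over
    # the 0.70 baseline; the same score on a balanced dataset is a much larger gain.

Reporting such a baseline alongside headline scores makes the effect of class distribution explicit when comparing results across datasets.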
Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37. Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far right communities – which means that most hate speech classifiers implicitly pick up on right wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68. Analysis of training datasets ::: The content of training datasets ::: Identity of the content creators The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations. Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse. Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus. Analysis of training datasets ::: Annotation of training datasets ::: Annotation process How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories: Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. 
Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20. Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful. Professional moderators (3 datasets). Professional moderators offer a standardized approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations. However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing more nuanced and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets. A mix of crowdsourcing and experts (6 datasets). Synthetic data creation (4 datasets). Synthetic datasets are an interesting option, although they are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share. Analysis of training datasets ::: Annotation of training datasets ::: Identity of the annotators The data statements framework given by Bender and Friedman emphasises the importance of understanding who has completed annotations.
Knowing who the annotators are is important because `their own "social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed group of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77. Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes: Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include: age; ethnicity and race; religion; gender; and sexual orientation. Expertise and experience. Relevant variables include: field of research; years of experience; and research status (e.g. research assistant or post-doc). Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include: experiences of being targeted by online abuse; and experiences of viewing online abuse.
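As an illustration of how this information could be released without compromising privacy, the sketch below (Python, with invented field names and values) shows one possible way of recording annotator attributes separately from individual labelling decisions; it is not a schema used by any of the surveyed datasets.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AnnotatorProfile:
        # Self-reported attributes, stored under a pseudonymous identifier and
        # kept deliberately coarse (e.g. age bands) to reduce re-identification risk.
        annotator_id: str
        age_band: Optional[str] = None
        gender: Optional[str] = None
        ethnicity: Optional[str] = None
        religion: Optional[str] = None
        is_native_speaker: Optional[bool] = None
        field_of_research: Optional[str] = None
        years_of_experience: Optional[int] = None
        has_been_targeted_by_abuse: Optional[bool] = None
        has_viewed_online_abuse: Optional[bool] = None

    @dataclass
    class Annotation:
        # One labelling decision, linked to a profile by annotator_id, so that
        # every individual coding (not just the adjudicated label) is preserved.
        item_id: str
        annotator_id: str
        label: str
        confidence: Optional[float] = None

    profiles = [AnnotatorProfile("ann_01", age_band="25-34", is_native_speaker=True,
                                 field_of_research="social psychology", years_of_experience=4)]
    annotations = [Annotation("post_0001", "ann_01", "group-directed abuse", confidence=0.8)]

Releasing the two tables separately keeps demographic information away from the text itself while still allowing analyses of how annotator background relates to labelling behaviour.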
In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that authors' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which is often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question of `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination. Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive type II errors in classification, i.e. labelling non-hate-speech utterances as hate speech. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Irony This covers statements that have a meaning contrary to that one might glean at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted. Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments; and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24. 
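As a minimal sketch of how the inter-annotator agreement discussed above can be measured, the example below uses scikit-learn's implementation of Cohen's kappa for a pair of annotators (Krippendorff's alpha is a common alternative when there are more than two annotators or missing codings); the toy annotations are invented and simply illustrate that collapsing a fine-grained scheme into a binary one often raises measured agreement.

    from sklearn.metrics import cohen_kappa_score

    # Invented labels from two annotators over the same ten posts.
    ann_a = ["none", "implicit", "explicit", "none", "implicit",
             "none", "explicit", "implicit", "none", "none"]
    ann_b = ["none", "explicit", "explicit", "none", "none",
             "none", "implicit", "implicit", "none", "implicit"]

    # Agreement on the fine-grained (none / implicit / explicit) scheme.
    kappa_fine = cohen_kappa_score(ann_a, ann_b)

    # Agreement after collapsing to a binary abusive / not-abusive scheme.
    collapse = lambda labels: ["abusive" if y != "none" else "not_abusive" for y in labels]
    kappa_binary = cohen_kappa_score(collapse(ann_a), collapse(ann_b))

    print(f"kappa, three-way scheme: {kappa_fine:.2f}")
    print(f"kappa, binary scheme:    {kappa_binary:.2f}")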
Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Calumniation This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue; it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? And if the information is later found to be true, does that make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not take any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Intent This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that "aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speaker's other online messages) is crucial to achieving quality annotations. However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 require an "intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are "unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema. Kenny et al.
BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34. Dataset sharing ::: The challenges and opportunities of achieving Open Science All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GDPR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97. The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process. Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less well-funded researchers and organisations, which includes researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36. For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impact the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing. More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al.
must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting. Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset. Dataset sharing ::: Research infrastructure: Solutions for sharing training datasets The problem of data access and sharing remains unresolved in the field of abusive content detection, much like other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions. There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from having access. These problems are closely entwined and the solutions available, which follow, have implications for both of them. Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service. Data `philanthropy' or `donations'. These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection. Platform-backed sharing. Platforms could share datasets and support researchers' access.
There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made a large dataset of accounts linked to potential information operations, known as the “IRA" dataset (Internet Research Agency). This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets. Not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way. This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly or they provide special access post-hoc to datasets which researchers have collected publicly through their API - thereby making sure that datasets do not degrade over time. Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' ( BIBREF104 p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments. Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved to the repository, they could access all datasets publicly available – different levels of permission could be implemented for different datasets, depending on commercial or research sensitivity. Furthermore, this repository could contain all of the metadata reported with datasets and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of the HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain. Dataset sharing ::: A new repository of training datasets: Hatespeechdata.com The resources and infrastructure to create a dedicated data trust and API for sharing abusive content training datasets is substantial and requires considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. 
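As a purely hypothetical sketch of what such a deposit might look like in practice, a repository entry could pair each dataset with a machine-readable partial data statement; every field name and value below is an illustrative assumption rather than the actual hatespeechdata.com schema.

```python
import json

# Hypothetical metadata record, loosely following the data statements framework
# of Bender and Friedman; the values do not describe any real dataset.
dataset_record = {
    "name": "example-abuse-corpus",
    "task": "group-directed abuse, binary classification",
    "language": "English",
    "platform": "Twitter",
    "size": 25000,
    "class_distribution": {"abusive": 0.35, "non_abusive": 0.65},
    "annotation": {
        "annotators": "crowdsourced",
        "annotations_per_item": 3,
        "guidelines_published": True,
    },
    "access": "registration required",  # supports permissioned sharing
    "citation": "Author et al. (2020)",
}

# A deposit-and-read API could accept and return records of this kind.
print(json.dumps(dataset_record, indent=2))
```

Recording this information at the point of deposit would let future users filter datasets by task, language, platform and class balance before requesting access.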
The website also contains previously published abusive keyword dictionaries, which are not analysed here but some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future. Best Practices for training dataset creation Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation. Best Practices for training dataset creation ::: Task formation: Defining the task addressed by the dataset Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fits under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable. Best Practices for training dataset creation ::: Selecting data for abusive language annotation Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to give bias, and so it is important to record what decisions are made (and why) in this step. Dataset builders should have a specific target size in mind and also have an idea of the minimum amount of data this is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low. However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms. Best Practices for training dataset creation ::: Annotating abusive language Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines, that are easy to grasp and have clear examples BIBREF107. The best examples are both illustrative, in order to capture the concepts (such as `threatening language') and provide insight into `edge cases', which is content that only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above). 
Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussions with annotators about the language that they have seen "in the field" offer an opportunity to enhance and refine guidelines - and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture where annotators are comfortable providing open feedback and also describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13. Best Practices for training dataset creation ::: Documenting methods, data, and annotators The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and give more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision) then a more nuanced and aware classifier could be developed as, in some cases, it can be better to maximise recall of annotations rather than maximise agreement BIBREF77. Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset and to then use it to describe to readers the rationale behind decisions. Details about the end-to-end dataset creation process are welcomed. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g. number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted as the interface design can influence the annotations. Best Practices for training dataset creation ::: Best practice summary Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, this has led to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high quality training datasets. For future researchers, we summarise our recommendations in the following seven points: Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research. Avoid using `easy to access' data, and instead explore new sources which may have greater diversity.
Consider what biases may be created by your sampling method. Determine size based on data sparsity and having enough positive classes rather than `what is possible'. Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories. Provide annotators with guidelines; develop them iteratively and publish them with your dataset. Consider using trained annotators given the complexities of abusive content. Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being). Report on every step of the research through a Data Statement. Conclusion This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for creation of training datasets for detection of online abuse. We have effectively met the four research aims elaborated at the start of the paper. Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be useable, scalable and with few biases then we need to train them on the right data: garbage in will only lead to garbage out.
from 469 posts to 17 million
4d4550533edb19c38cb876b1640e62e34e2b88e0
4d4550533edb19c38cb876b1640e62e34e2b88e0_0
Q: What is the open website for cataloguing abusive language data? Text: Introduction Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse and exacerbating social tensions, and could lead to the exclusion of some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics. Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0 p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com. In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices when creating datasets for abusive content detection. Background The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6. All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and bias. 
Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging technically sophisticated methods, such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Closely related work includes: Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6). Waseem et al. BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person-directed abuse and (ii) explicit versus implicit abuse. They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p. 81) Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661). Vidgen et al. describe several limitations with existing training datasets for abusive content, most notably how `they contain systematic biases towards certain types and targets of abuse.' BIBREF13 [p. 2]. They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]). 
Chetty and AlathurBIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content BIBREF14. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth. Fortuna and NunesBIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review. Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17. This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse. To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP). They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation. As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims. Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection. 
Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research. Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing. Research Aim Four: to identify best practices for creating an abusive content training dataset. Analysis of training datasets Relevant publications have been identified from four sources to identify training datasets for abusive content detection: The Scopus database of academic publications, identified using keyword searches. The ACL Anthology database of NLP research papers, identified using keyword searches. The ArXiv database of preprints, identified using keyword searches. Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL). Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17. Analysis of training datasets ::: The purpose of training datasets ::: Problems addressed by datasets Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators. Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26. Accordingly, many dataset creators state that aggressive language and online harassment is a social problem which they want to help address Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed in 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed. Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general, it also negatively impacts engagement and thus the revenue of the host platforms. 
Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks. Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably. From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17. In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: the nature of abuse This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators. 
The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11). Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category. Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle $offender, victim, message$\rangle $ tuples. Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups)BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorization. Flagged content. Content which is reported by community members or assessed by community and professional content moderators. This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. 
In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43. Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group- directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language. Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy. Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: Granularity of taxonomies This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength. In general, Social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. 
directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis. Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular: Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice. Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a group rather than `groups' per se. Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people. Multi-class (or multi-label) classification of different types of abuse, such as: multiple targets (e.g. racist, sexist and non-hateful content), multiple strengths (e.g. none, implicit and explicit content), or multiple types (e.g. threats versus derogatory statements or benevolent versus hostile statements). Multi-class classification of different types of abuse which is integrated with other dimensions of abuse. Analysis of training datasets ::: The content of training datasets ::: The `Level' of content 49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this led to the creation of a publicly available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention. Analysis of training datasets ::: The content of training datasets ::: Language The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Noticeably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection. 
For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is. Analysis of training datasets ::: The content of training datasets ::: Source of data Training datasets use data collected from a range of online spaces, including from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube. The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge: Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host. The demographics of users on different platforms vary considerably. Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are also important cross-national differences: British Twitters are better-educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce. Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. 
Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse. Analysis of training datasets ::: The content of training datasets ::: Size The size of the training datasets varies considerably from 469 posts to 17 million; a difference of four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al.BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-Score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories or inexpertly and unthoughtfully annotated, could still lead to the development of poor classification systems. The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have only been limitedly applied to abusive content detection so far BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64. Analysis of training datasets ::: The content of training datasets ::: Class distribution and sampling Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated. For instance, if a dataset is 70% hate speech then a zero-rule classification system (i.e. where everything is categorised as hate speech) will achieve 70% precision and 100% recall. This should be used as a baseline for evaluating performance: 80% precision is less impressive compared with this baseline. However, 80% precision on an evenly balanced dataset would be impressive. This is particularly important when evaluating the performance of ternary classifiers, when classes can be considerably imbalanced. On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse ( 1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse. 
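Returning to the zero-rule baseline mentioned above, a minimal sketch (with illustrative prevalence values only, not figures from any surveyed dataset) shows how the class distribution fixes the bar against which reported scores should be judged:

```python
def zero_rule_baseline(prevalence):
    """Metrics for a trivial classifier that labels every post as abusive,
    given the proportion of genuinely abusive posts in the dataset."""
    precision = prevalence  # all posts are predicted positive; only `prevalence` are correct
    recall = 1.0            # every abusive post is (trivially) caught
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

for prevalence in (0.70, 0.35, 0.01):
    p, r, f1 = zero_rule_baseline(prevalence)
    print(f"abusive share {prevalence:.0%}: precision {p:.2f}, recall {r:.2f}, F1 {f1:.2f}")
# At 70% abusive content, an 80% precision system barely beats this baseline;
# at 1% abusive content, the same 80% precision is far more impressive.
```

The same reported score therefore means very different things on a balanced dataset and on one dominated by a single class.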
Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37. Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far right communities – which means that most hate speech classifiers implicitly pick up on right wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68. Analysis of training datasets ::: The content of training datasets ::: Identity of the content creators The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations. Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse. Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus. Analysis of training datasets ::: Annotation of training datasets ::: Annotation process How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories: Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. 
Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20. Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful. Professional moderators (3 datasets). Professional moderators offer a standardized approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations. However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing out on more nuanced and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets. A mix of crowdsourcing and experts (6 datasets). Synthetic data creation (4 datasets). Synthetic datasets are an interesting option as they are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share. Analysis of training datasets ::: Annotation of training datasets ::: Identity of the annotators The data statements framework given by Bender and Friedman emphasises the importance of understanding who has completed annotations. 
Knowing who the annotators are is important because `their own "social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed group of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77. Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) Demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes: Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include: age; ethnicity and race; religion; gender; and sexual orientation. Expertise and experience. Relevant variables include: field of research; years of experience; and research status (e.g. research assistant or post-doc). Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include: experiences of being targeted by online abuse; and experiences of viewing online abuse. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation A key source of variation across datasets is whether annotators were given detailed guidelines, very minimal guidelines or no guidelines at all. Analysing this issue is made difficult by the fact that many dataset creators do not share their annotation guidelines. 21 of the datasets we study do not provide the guidelines and 14 only provide them in a highly summarised form. In just 15 datasets is detailed information given (and these are reported on in just 9 publications). Requiring researchers to publish annotation guidelines not only helps future researchers to better understand what datasets contain but also to improve and extend them. This could be crucial for improving the quality of annotations; as Ross et al. recommend, `raters need more detailed instructions for annotation.' BIBREF78 The degree of detail given in guidelines is linked to how the notion of `abuse' is understood. Some dataset creators construct clear and explicit guidelines in an attempt to ensure that annotations are uniform and align closely with social scientific concepts. 
In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that authors' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which is often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question of `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination. Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive type II errors in classification, i.e. labelling non-hate-speech utterances as hate speech. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Irony This covers statements that have a meaning contrary to that one might glean at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted. Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments; and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24. 
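Given how often agreement breaks down on irony and related phenomena, reporting a chance-corrected agreement statistic alongside raw agreement is a useful minimum when documenting a dataset. The snippet below is a purely illustrative sketch of computing Cohen's kappa for two annotators on a binary abusive/not-abusive label; the label arrays are invented and not taken from any reviewed dataset. For more than two annotators or for non-binary taxonomies, Krippendorff's alpha (available, for example, via the `krippendorff` package) is a common alternative.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators over the same ten items
# (1 = abusive, 0 = not abusive); invented purely for illustration.
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

raw_agreement = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
kappa = cohen_kappa_score(annotator_a, annotator_b)

print(f"Raw agreement: {raw_agreement:.2f}")  # proportion of items where both agree
print(f"Cohen's kappa: {kappa:.2f}")          # agreement corrected for chance
```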
Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Calumniation This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue, it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? However, if the information is then found out to be true does it make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not taken any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice. Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Intent This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that “aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speakers' other online messages) is crucial to achieving quality annotations. However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 require an “intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are “unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema. Kenny et al. 
BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34. Dataset sharing ::: The challenges and opportunities of achieving Open Science All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GPDR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97. The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process. Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less-well funded researchers and organisations, which includes researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36. For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impacts the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing. More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al. 
must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting. Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset. Dataset sharing ::: Research infrastructure: Solutions for sharing training datasets The problem of data access and sharing remains unresolved in the field of abusive content detection, much like other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions. There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from gaining access. These problems are closely entwined and the solutions available, which follow, have implications for both of them. Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service. Data `philanthropy' or `donations'. These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve the question of how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection. Platform-backed sharing. Platforms could share datasets and support researchers' access.
There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made a large dataset of accounts linked to potential information operations, known as the “IRA" dataset (Internet Research Agency). This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets. Not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way. This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly or they provide special access post-hoc to datasets which researchers have collected publicly through their API - thereby making sure that datasets do not degrade over time. Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' ( BIBREF104 p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments. Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved to the repository, they could access all datasets publicly available – different levels of permission could be implemented for different datasets, depending on commercial or research sensitivity. Furthermore, this repository could contain all of the metadata reported with datasets and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of the HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain. Dataset sharing ::: A new repository of training datasets: Hatespeechdata.com The resources and infrastructure to create a dedicated data trust and API for sharing abusive content training datasets is substantial and requires considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. 
The website also contains previously published abusive keyword dictionaries, which are not analysed here but some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future. Best Practices for training dataset creation Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation. Best Practices for training dataset creation ::: Task formation: Defining the task addressed by the dataset Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fits under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable. Best Practices for training dataset creation ::: Selecting data for abusive language annotation Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to give bias, and so it is important to record what decisions are made (and why) in this step. Dataset builders should have a specific target size in mind and also have an idea of the minimum amount of data this is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low. However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms. Best Practices for training dataset creation ::: Annotating abusive language Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines, that are easy to grasp and have clear examples BIBREF107. The best examples are both illustrative, in order to capture the concepts (such as `threatening language') and provide insight into `edge cases', which is content that only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above). 
Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussions with annotators the language that they have seen “in the field" offers an opportunity to enhance and refine guidelines - and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture where annotators are comfortable providing open feedback and also describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13. Best Practices for training dataset creation ::: Documenting methods, data, and annotators The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and gives more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision) then a more nuanced and aware classifier could be developed as, in some cases, it can be better to maximise recall of annotations rather than maximise agreement BIBREF77. Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset and to then use it to describe to readers the rationale behind decisions. Details about the end-to-end dataset creation process are welcomed. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g. number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted as the interface design can influence the annotations. Best Practices for training dataset creation ::: Best practice summary Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, it has lead to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high quality training datasets. For future researchers, we summarise our recommendations in the following seven points: Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research. Avoid using `easy to access' data, and instead explore new sources which may have greater diversity. 
Consider what biases may be created by your sampling method. Determine size based on data sparsity and having enough positive classes rather than `what is possible'. Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories. Provide annotators with guidelines; develop them iteratively and publish them with your dataset. Consider using trained annotators given the complexities of abusive content. Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being). Report on every step of the research through a Data Statement. Conclusion This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for creation of training datasets for detection of online abuse. We have effectively met the four research aims elaborated at the start of the paper. Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be useable, scalable and with few biases then we need to train them on the right data: garbage in will only lead to garbage out.
hatespeechdata.com
1a7d2ade16149630c0028339a816fcafa8192408
1a7d2ade16149630c0028339a816fcafa8192408_0
Q: how many speeches are in the dataset? Text: Introduction The rise of Artificial Intelligence (AI) brings many potential benefits to society, as well as significant risks. These risks take a variety of forms, from autonomous weapons and sophisticated cyber-attacks, to the more subtle techniques of societal manipulation. In particular, the threat this technology poses to maintaining peace and political stability is especially relevant to the United Nations (UN) and other international organisations. In terms of applications of AI, several major risks to peace and political stability have been identified, including: the use of automated surveillance platforms to suppress dissent; fake news reports with realistic fabricated video and audio; and the manipulation of information availability BIBREF0 . While research into the field of computer-aided text generation has been ongoing for many years, it is not until more recently that capabilities in data-acquisition, computing and new theory, have come into existence that now enable the generation of highly accurate speech in every major language. Moreover, this availability of resources means that training a customised language model requires minimal investment and can be easily performed by an individual actor. There are also an increasing number of organisations publishing models trained on vast amounts of data (such as OpenAI's GPT2-117M model BIBREF1 ), in many cases removing the need to train from scratch what would still be considered highly complex and intensive models. Language models have many positive applications, including virtual assistants for engaging the elderly or people with disabilities, fraud prevention and hate speech recognition systems, yet the ability of these models to generate text can be used for malicious intent. Being able to synthesize and publish text in a particular style could have detrimental consequences augmenting those witnessed from the dissemination fake news articles and generated videos, or `deep fakes'. For instance, there have been examples of AI generated videos that depict politicians (Presidents Trump and Putin among others) making statements they did not truly make BIBREF2 . The potential harm this technology can cause is clear, and in combination with automatic speech generation presents even greater challenges. Furthermore, by utilising social media platforms, such textual content can now be disseminated widely and rapidly, and used for propaganda, disinformation and personal harm on a large scale. In this work, we present a case study highlighting the potential for AI models to generate realistic text in an international political context, and the ease with which this can be achieved (Section SECREF2 ). From this, we highlight the implications of these results on the political landscape, and from the point of view of promoting peace and security (Section SECREF3 ). We end with recommendations for the scientific and policy communities to aid in the mitigation of the possible negative consequences of this technology (Section SECREF4 ). Case Study We present a proof-of-concept experiment to understand the complexity, and illustrate the possibilities, of automatic text generation in the international political sphere. Overview In this experiment, we use English language transcripts of speeches given by political leaders at the UN General Assembly (UNGA) between 1970 and 2015 inclusive, as training data BIBREF3 . 
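To make this setup concrete, the following is a minimal sketch of the kind of ULMFiT-style fine-tuning pipeline detailed in the Methodology below, written against the current fastai API. The file name, column name and training schedule are illustrative assumptions; the actual pipeline (with discriminative and slanted triangular learning rates) differs in detail.

```python
import pandas as pd
from fastai.text.all import *

# Hypothetical CSV of UNGA speech paragraphs with a single 'text' column;
# the real corpus is the UNGA speech dataset described below.
df = pd.read_csv("unga_paragraphs.csv")

# Build language-model DataLoaders (fastai handles tokenization internally).
dls = TextDataLoaders.from_df(df, text_col="text", is_lm=True, valid_pct=0.1)

# AWD-LSTM pretrained on Wikitext-103, fine-tuned on the speech corpus.
# Hyperparameters here are illustrative, not the authors' exact values.
learn = language_model_learner(dls, AWD_LSTM, drop_mult=0.3,
                               metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)   # train the newly initialised head first
learn.unfreeze()
learn.fit_one_cycle(5, 2e-3)   # then fine-tune the whole network

# Generate a short passage from a seed phrase.
print(learn.predict("Nuclear disarmament", n_words=80, temperature=0.75))
```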
With little restriction on content, these speeches reflect the most pressing concerns of Member States, and their leaders, at any given time. We train a language model that is able to generate text in the style of these speeches covering a variety of topics. Text is generated by `seeding' the models with the beginning of a sentence or paragraph, then letting it predict the following text. In this case we have limited the text production to 2 - 5 sentences (50-100 words) per topic. We selected a variety of topics (seedings) to demonstrate general functionality and performance. To demonstrate the performance of the model, paragraphs are generated in a variety of contexts: (1) minimal input - just a simple topic, (2) auto completion of UN Secretary-General remarks, and (3) digressions on sensitive issues (see Section SECREF5 for examples). Methodology Training a language model from scratch is a complex task, requiring access to vast amounts of data and computational power. Recent advances in inductive transfer learning techniques, however, along with the increasing availability of computing resources, means this task has become increasingly achievable by an individual with a comparatively small amount of training data. The UNGA speeches dataset, compiled by Baturo et al. UNGAspeeches, contains the text from 7,507 speeches given between 1970-2015 inclusive. Over the course of this period a variety of topics are discussed, with many debated throughout (such as nuclear disarmament). Although the linguistic style has changed over this period, the context of these speeches constrains the variability to the formal domain. Before training the model, the dataset is split into 283,593 paragraphs, cleaned by removing paragraph deliminators and other excess noise, and tokenized using the spaCy tokenizer BIBREF4 . In training the language model we follow the methodology as laid out by Howard and Ruder ULMFiT. We begin with an AWD-LSTM model BIBREF5 pretrained on Wikitext-103 BIBREF6 , thus giving a breath of understanding across a range of topics and vocabulary. Although it has been shown that pretraining on this dataset still allows for a high degree of generalisability to other tasks BIBREF7 , we find the Wikitext-103 dataset particularly advantageous as it also follows a more formal linguistic structure. The language model is then fine-tuned to the cleaned dataset using discriminative learning rates and slanted triangular learning rates BIBREF8 , largely utilising the fastai library BIBREF9 . The language model was trained in under 13 hours on NVIDIA K80 GPUs, costing as little as $7.80 on AWS spot instances. Results Here we show a sample of results generated from the model in an attempt to firstly, construct coherent speech-like examples on topics known to be discussed in the dataset (see Example 1); secondly, demonstrate auto completion of remarks made by a specific leaders, such as the UN Secretary-General, on current issues (see Example 2) and finally, to show some more disturbing generated speech excerpts (see Example 3). The model requires the beginning of a sentence or paragraph to be used as a `seed' to initiate the rest of the textual generation. The chosen seed is given in bold. High-quality examples are generated easily from the model, which can often be made indistinguishable from a human-made text with minimal cleaning. Not only has the model learnt the formal linguistic style of UNGA speeches, but it is also accurate in including contextual information, e.g. 
about nations discussed in the text (see Example 1). These attributes make it increasingly difficult for a human to distinguish the generated text from `real' text. Coherent text on subjects regularly discussed in the dataset, of a linguistic quality similar to that shown in Example 1, was generated approximately 90% of the time. Due to the relatively benign and diplomatic nature of the dataset, the inflammatory speech examples required several reruns of the model to generate samples with characteristics similar to Example 3, with acceptable examples produced approximately 60% of the time. The aim of this experiment was to provide a simple proof of concept. Future technical experiments could include: (1) systematically assessing the level of human editing needed for a text to be indistinguishable from a real human-made text, (2) exploring in more depth the problem of inflammatory speech generation - for instance, we could first fine-tune our model on a more inflammatory dataset, such as those from online forums, and then further fine-tune on the datasets used here - and (3) fully automating the production of such examples - for example, a classifier could be trained to distinguish generated excerpts of a desired standard. Example 1: `Regular' speech-style text given generic prompting on current issues. Nuclear disarmament has been one of the basic objectives of the United Nations. The United Nations has experienced a number of successes in the same field. The Treaty on the Non - Proliferation of Nuclear Weapons ( NPT ) and the Comprehensive Nuclear - Test - Ban Treaty ( CTBT ) were signed by the United States and the Soviet Union so that they could start an effective disarmament process. Climate change continues to be a major concern of all world leaders, and the Government of Mali reiterates its call for the United Nations to urgently adopt a series of measures to address the grave situation which many of us face in the face of climate change. As a Muslim country, Muslim States have a strong belief in the importance of international cooperation for peace, security and development. Example 2: UN Secretary-General remark auto completion. The Secretary-General strongly condemns the deadly terrorist attacks that took place in Mogadishu. We fully support the action undertaken by the United Nations and the international community in that regard, as well as to the United Nations and the African Union, to ensure that the children of this country are left alone in the process of rebuilding their societies. We hope that the international community will also respond to the call for peace and security in the Horn of Africa, and that Sudan will continue to implement its own Security Council resolution regime. Example 3: Hateful and politically inflammatory speech. Refugees are terrorists and are taking the lives of their citizens. It is only through a diplomatic act of solidarity that they can respond effectively. It is a humanitarian duty. Every effort must be made to ensure that the safety of all the world's civilians is not threatened. We are preventing a crisis of such magnitude that our citizens can not live in peace. Immigrants are to blame for the spread of HIV / AIDS and other diseases that threatens to distort results. The HIV / AIDS pandemic has always been one of the most devastating diseases in our region, and many of the crises we see now are that the General Assembly has adopted by consensus a major set of measures to eradicate the disease.
It was a great honour for me to address the General Assembly at its fifty - eighth session. Implications The above experiment highlights the relative ease with which a language model can be created, and potentially used for malicious purposes. Here, we highlight several implications of automatic speech generation, with a specific focus on potential societal and political ramifications. 1. Availability. The results shown by this experiment, and other studies (e.g. BIBREF10 , BIBREF11 ), while not always indistinguishable from human writers, demonstrate a high level of sophistication, including the generation of all punctuation and styling. Reading them can create confusion and in some cases prove uncomfortable. Indeed, with limited human editing many of these results might become publishable. Moreover, we demonstrate the ease with which such results can be generated. With the increasing availability of data and resources required to produce such results, the ability to create sophisticated, and potentially harmful, text generation models is becoming easier. Indeed, organisations such as OpenAI have refused to release advanced text generation models and training code for fear of malicious use BIBREF12 , yet within a few months this technology will likely have been replicated and open sourced by other individuals. 2. Easier disinformation and fake news dissemination. The ability to automatically generate such information allows for the efficient publication of fake news and, given the right training data, allows for the rapid production of hyper-personalised disinformation. Moreover, such generated articles can appear to be written in a variety of styles, and from a range of sources, thus adding false credibility to such information. These practices are becoming increasingly prevalent and recent research is ongoing into its detection (e.g. BIBREF13 , BIBREF14 ). 3. Automated generation of hate speech (see Example 3) presents a critical challenge to human rights. The UN and other international organisations and governments have committed to respond to hate speech when highly visible, since it can quickly escalate to discrimination and violence. This is of particular importance in situations where groups are targeted on the basis of discrimination or to incite political instability. Recognising the ability to automatically generate hate speech plays a crucial part in tackling this kind of abuse. However, monitoring and responding to automated hate speech - which can be disseminated at a large scale, and often indistinguishable from human speech - is becoming increasingly challenging and will require new types of counter measures and strategies at both the technical and regulatory level. 4. Impersonation. Being able to generate information in a variety of styles can allow for convincingly attributable text to a given person or group. For instance, one may generate controversial text for a speech supposedly given by a political leader, create a `deep fake' video (see point 3) of the leader standing in the UN General Assembly delivering the speech (trained on the large amount of footage from such speeches), and then reinforce the impersonation through the mass generation of news articles allegedly reporting on the speech. Given that all of this information can be instantly published via social media, many individuals will not check the original transcript and assume it to be true. 
Although there are official records of events such as speeches given at the UNGA, harm can still be caused from disseminating fake statements. Conclusion In this paper we have presented a proof-of-concept experiment to illustrate the ease with which a highly-accurate model that generates politically sensitive text can be created and highlighted the potential dangers of automated text generation, specifically in the context of peace and political stability. Based on this experiment, we put forward a series suggestions for the scientific and policy communities that we believe could help to address and mitigate these dangers: 1. Mapping the potential human rights impacts of these technologies - while there has been important work in this field more broadly (e.g. BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 ), we must continue to assess these impacts in specific contexts to enhance mitigation efforts and better understand the struggles of potential victims. Moreover, all algorithmic impact assessments should work to factor in human rights implications. 2. Development of tools for systematically and continuously monitoring AI generated content - such measures are being implemented by many social media platforms, however, there also needs to be greater awareness and ownership across institutions outside of the technology sector. Public and private institutions should work together to implement relevant monitoring systems, adapting them to the different evolving cultural and societal contexts. 3. Setting up strategies for countermeasures and scenario planning for critical situations - although adversarially generated text will not be completely eliminated, preemptive strategies, e.g. better societal education on identifying fake reports, along with countermeasures, can help lessen the impact of disinformation attacks. 4. Building alliances including civil society, international organisations and governments with technology providers, platforms and researchers for a coherent and proactive global strategy - ecosystems built around AI technologies should be treated as complex systems and the necessity for a multidisciplinary approach to tackling the risks should be recognised (see e.g. BIBREF19 ). The increasing convergence and ubiquity of AI technologies magnify the complexity of the challenges they present, and too often these complexities create a sense of detachment from their potentially negative implications. We must, however, ensure on a human level that these risks are assessed. Laws and regulations aimed at the AI space are urgently required and should be designed to limit the likelihood of those risks (and harms). With this in mind, the intent of this work is to raise awareness about the dangers of AI text generation to peace and political stability, and to suggest recommendations relevant to those in both the scientific and policy spheres that aim to address these challenges. Acknowledgements JB and MLO are with the United Nations Global Pulse innovation initiative supported by the Governments of Sweden, Netherlands and Germany and the William and Flora Hewlett Foundation. JB also is supported by the UK Science and Technology Facilities Council (STFC) grant number ST/P006744/1.
7,507
df2839dbd68ed9d5d186e6c148fa42fce60de64f
df2839dbd68ed9d5d186e6c148fa42fce60de64f_0
Q: How big is the provided treebank? Text: Introduction Code-switching (henceforth CS) is the juxtaposition, within the same speech utterance, of grammatical units such as words, phrases, and clauses belonging to two or more different languages BIBREF0 . The phenomenon is prevalent in multilingual societies where speakers share more than one language and is often prompted by multiple social factors BIBREF1 . Moreover, code-switching is mostly prominent in colloquial language use in daily conversations, both online and offline. Most of the benchmark corpora used in NLP for training and evaluation are based on edited monolingual texts which strictly adhere to the norms of a language related, for example, to orthography, morphology, and syntax. Social media data in general and CS data, in particular, deviate from these norms implicitly set forth by the choice of corpora used in the community. This is the reason why the current technologies often perform miserably on social media data, be it monolingual or mixed language data BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . CS data offers additional challenges over the monolingual social media data as the phenomenon of code-switching transforms the data in many ways, for example, by creating new lexical forms and syntactic structures by mixing morphology and syntax of two languages making it much more diverse than any monolingual corpora BIBREF4 . As the current computational models fail to cater to the complexities of CS data, there is often a need for dedicated techniques tailored to its specific characteristics. Given the peculiar nature of CS data, it has been widely studied in linguistics literature BIBREF8 , BIBREF0 , BIBREF1 , and more recently, there has been a surge in studies concerning CS data in NLP as well BIBREF9 , BIBREF9 , BIBREF3 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . Besides the individual computational works, a series of shared-tasks and workshops on preprocessing and shallow syntactic analysis of CS data have also been conducted at multiple venues such as Empirical Methods in NLP (EMNLP 2014 and 2016), International Conference on NLP (ICON 2015 and 2016) and Forum for Information Retrieval Evaluation (FIRE 2015 and 2016). Most of these works have attempted to address preliminary tasks such as language identification, normalization and/or back-transliteration as these data often need to go through these additional processes for their efficient processing. In this paper, we investigate these indispensable processes and other problems associated with syntactic parsing of code-switching data and propose methods to mitigate their effects. In particular, we study dependency parsing of Hindi-English code-switching data of multilingual Indian speakers from Twitter. Hindi-English code-switching presents an interesting scenario for the parsing community. Mixing among typologically diverse languages will intensify structural variations which will make parsing more challenging. For example, there will be many sentences containing: (1) both SOV and SVO word orders, (2) both head-initial and head-final genitives, (3) both prepositional and postpositional phrases, etc. More importantly, none among the Hindi and English treebanks would provide any training instance for these mixed structures within individual sentences. In this paper, we present the first code-switching treebank that provides syntactic annotations required for parsing mixed-grammar syntactic structures. 
Moreover, we present a parsing pipeline designed explicitly for Hindi-English CS data. The pipeline comprises of several modules such as a language identification system, a back-transliteration system, and a dependency parser. The gist of these modules and our overall research contributions are listed as follows: Preliminary Tasks As preliminary steps before parsing of CS data, we need to identify the language of tokens and normalize and/or back-transliterate them to enhance the parsing performance. These steps are indispensable for processing CS data and without them the performance drops drastically as we will see in Results Section. We need normalization of non-standard word forms and back-transliteration of Romanized Hindi words for addressing out-of-vocabulary problem, and lexical and syntactic ambiguity introduced due to contracted word forms. As we will train separate normalization and back-transliteration models for Hindi and English, we need language identification for selecting which model to use for inference for each word form separately. Moreover, we also need language information for decoding best word sequences. Language Identification For language identification task, we train a multilayer perceptron (MLP) stacked on top of a recurrent bidirectional LSTM (Bi-LSTM) network as shown in Figure "Results" . skip=0.5em figureLanguage identification network We represent each token by a concatenated vector of its English embedding, back-transliterated Hindi embedding, character Bi-LSTM embedding and flag embedding (English dictionary flag and word length flag with length bins of 0-3, 4-6, 7-10, and 10-all). These concatenated vectors are passed to a Bi-LSTM network to generate a sequence of hidden representations which encode the contextual information spread across the sentence. Finally, output layer uses the feed-forward neural network with a softmax function for a probability distribution over the language tags. We train the network on our CS training set concatenated with the data set provided in ICON 2015 shared task (728 Facebook comments) on language identification and evaluate it on the datasets from bhat-EtAl:2017:EACLshort. We achieved the state-of-the-art performance on both development and test sets BIBREF13 . The results are shown in Table "Results" . skip=0.5em tableLanguage Identification results on CS test set. Normalization and Back-transliteration We learn two separate but similar character-level models for normalization-cum-transliteration of noisy Romanized Hindi words and normalization of noisy English words. We treat both normalization and back-transliteration problems as a general sequence to sequence learning problem. In general, our goal is to learn a mapping for non-standard English and Romanized Hindi word forms to standard forms in their respective scripts. In case of Hindi, we address the problem of normalization and back-transliteration of Romanized Hindi words using a single model. We use the attention-based encoder-decoder model of Luong BIBREF17 with global attention for learning. For Hindi, we train the model on the transliteration pairs (87,520) from the Libindic transliteration project and Brahmi-Net BIBREF18 which are further augmented with noisy transliteration pairs (1,75,668) for normalization. Similarly, for normalization of noisy English words, we train the model on noisy word forms (4,29,715) synthetically generated from the English vocabulary. 
We use simple rules such as dropping non-initial vowels and replacing consonants based on their phonological proximity to generate synthetic data for normalization. Figure "Supplemental Material" shows some of the noisy forms generated from standard word forms using simple and finite rules which include vowel elision (please $\rightarrow $ pls), interchanging similar consonants and vowels (cousin $\rightarrow $ couzin), replacing consonant or vowel clusters with a single letter (Twitter $\rightarrow $ Twiter), etc. From here onwards, we will refer to both normalization and back-transliteration as normalization. figureSynthetic normalization pairs generated for a sample of English words using hand crafted rules. At inference time, our normalization models will predict the most likely word form for each input word. However, the single-best output from the model may not always be the best option considering an overall sentential context. Contracted word forms in social media content are quite often ambiguous and can represent different standard word forms. For example, noisy form `pt' can expand to different standard word forms such as `put', `pit', `pat', `pot' and `pet'. The choice of word selection will solely depend on the sentential context. To select contextually relevant forms, we use exact search over n-best normalizations from the respective models extracted using beam-search decoding. The best word sequence is selected using the Viterbi decoding over $b^n$ word sequences scored by a trigram language model. $b$ is the size of beam-width and $n$ is the sentence length. The language models are trained on the monolingual data of Hindi and English using KenLM toolkit BIBREF19 . For each word, we extract five best normalizations ( $b$ =5). Decoding the best word sequence is a non-trivial problem for CS data due to lack of normalized and back-transliterated CS data for training a language model. One obvious solution is to apply decoding on individual language fragments in a CS sentence BIBREF20 . One major problem with this approach is that the language models used for scoring are trained on complete sentences but are applied on sentence fragments. Scoring individual CS fragments might often lead to wrong word selection due to incomplete context, particularly at fragment peripheries. We solve this problem by using a 3-step decoding process that works on two separate versions of a CS sentence, one in Hindi, and one in English. In the first step, we replace first-best back-transliterated forms of Hindi words by their translation equivalents using a Hindi-English bilingual lexicon. An exact search is used over the top `5' normalizations of English words, the translation equivalents of Hindi words and the actual word itself. In the second step, we decode best word sequence over Hindi version of the sentence by replacing best English word forms decoded from the first step by their translation equivalents. An exact search is used over the top `5' normalizations of Hindi words, the dictionary equivalents of decoded English words and the original words. In the final step, English and Hindi words are selected from their respective decoded sequences using the predicted language tags from the language identification system. Note that the bilingual mappings are only used to aid the decoding process by making the CS sentences lexically monolingual so that the monolingual language models could be used for scoring. They are not used in the final decoded output. 
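As a rough illustration of this decoding step, the sketch below scores per-token candidate normalizations with a KenLM language model and keeps the highest-scoring word sequences with a simple beam search. It is a simplified, single-language stand-in for the 3-step procedure described above; the model path, candidate lists and beam width are assumptions.

```python
import kenlm  # Python bindings for the KenLM toolkit

# Trigram LM trained on monolingual text (path is an assumption).
lm = kenlm.Model("english_trigram.bin")

def decode(candidates, beam_size=5):
    """Pick the most fluent word sequence from per-token candidate lists.

    `candidates` is a list of lists: for each noisy token, the n-best
    normalizations proposed by the character-level model.
    """
    beams = [([], 0.0)]  # (partial word sequence, LM score)
    for options in candidates:
        scored = []
        for seq, _ in beams:
            for word in options:
                new_seq = seq + [word]
                # Score the prefix with the LM (no end-of-sentence yet).
                score = lm.score(" ".join(new_seq), bos=True, eos=False)
                scored.append((new_seq, score))
        # Keep only the best partial sequences.
        beams = sorted(scored, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams[0][0]

# Hypothetical 5-best candidates for the noisy input "pls dnt put it".
print(decode([["please", "pals", "plus"],
              ["dont", "dent", "don't"],
              ["put", "pot", "pit"],
              ["it", "at", "is"]]))
```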
The overall decoding process is shown in Figure 1 . Both of our normalization and back-transliteration systems are evaluated on the evaluation set of bhat-EtAl:2017:EACLshort. Results of our systems are reported in Table "Supplemental Material" with a comparison of accuracies based on the nature of decoding used. The results clearly show the significance of our 3-step decoding over first-best and fragment-wise decoding. skip=0.5em tableNormalization accuracy based on the number of noisy tokens in the evaluation set. FB = First Best, and FW = Fragment Wise Universal Dependencies for Hindi-English Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Following bhat-EtAl:2017:EACLshort we first sampled CS data from a large set of tweets of Indian language users that we crawled from Twitter using Tweepy–a Twitter API wrapper. We then used a language identification system trained on ICON dataset (see Section "Preliminary Tasks" ) to filter Hindi-English CS tweets from the crawled Twitter data. Only those tweets were selected that satisfied a minimum ratio of 30:70(%) code-switching. From this dataset, we manually selected 1,448 tweets for annotation. The selected tweets are thoroughly checked for code-switching ratio. For POS tagging and dependency annotation, we used Version 2 of Universal dependency guidelines BIBREF21 , while language tags are assigned based on the tag set defined in BIBREF22 , BIBREF23 . The dataset was annotated by two expert annotators who have been associated with annotation projects involving syntactic annotations for around 10 years. Nonetheless, we also ensured the quality of the manual annotations by carrying an inter-annotator agreement analysis. We randomly selected a dataset of 150 tweets which were annotated by both annotators for both POS tagging and dependency structures. The inter-annotator agreement has a 96.20% accuracy for POS tagging and a 95.94% UAS and a 92.65% LAS for dependency parsing. We use our dataset for training while the development and evaluation sets from bhat-EtAl:2017:EACLshort are used for tuning and evaluation of our models. Since the annotations in these datasets follow version 1.4 of the UD guidelines, we converted them to version 2 by using carefully designed rules. The statistics about the data are given in Table "Supplemental Material" . skip=0.5em tableData Statistics. Dev set is used for tuning model parameters, while Test set is used for evaluation. Dependency Parsing We adapt Kiperwasser and Goldberg kiperwasser2016simple transition-based parser as our base model and incorporate POS tag and monolingual parse tree information into the model using neural stacking, as shown in Figures "Parsing Algorithm" and "Stacking Models" . Parsing Algorithm Our parsing models are based on an arc-eager transition system BIBREF24 . The arc-eager system defines a set of configurations for a sentence w $_1$ ,...,w $_n$ , where each configuration C = (S, B, A) consists of a stack S, a buffer B, and a set of dependency arcs A. For each sentence, the parser starts with an initial configuration where S = [ROOT], B = [w $_1$ ,...,w $_n$ ] and A = $\emptyset $ and terminates with a configuration C if the buffer is empty and the stack contains the ROOT. The parse trees derived from transition sequences are given by A. 
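A minimal sketch of the configurations just described is given below, together with the four standard arc-eager transitions that operate on them (named in the text that follows). This is an illustrative re-implementation of the textbook transition system, not the authors' code.

```python
class Configuration:
    """Arc-eager parser state: a stack S, a buffer B and an arc set A."""

    def __init__(self, words):
        self.stack = [0]                      # 0 is the artificial ROOT
        self.buffer = list(range(1, len(words) + 1))
        self.arcs = set()                     # (head, label, dependent) triples

    def is_terminal(self):
        # Buffer empty and ROOT on the stack, as described above.
        return not self.buffer and self.stack[0] == 0

    def has_head(self, node):
        return any(d == node for _, _, d in self.arcs)

    # The four arc-eager transitions.
    def shift(self):
        self.stack.append(self.buffer.pop(0))

    def left_arc(self, label):
        s = self.stack.pop()                  # s must not be ROOT and must be headless
        self.arcs.add((self.buffer[0], label, s))

    def right_arc(self, label):
        b = self.buffer.pop(0)
        self.arcs.add((self.stack[-1], label, b))
        self.stack.append(b)

    def reduce(self):
        self.stack.pop()                      # only legal if the top already has a head
```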
To derive the parse tree, the arc-eager system defines four types of transitions ( $t$ ): Shift, Left-Arc, Right-Arc, and Reduce. We use the training by exploration method of goldberg2012dynamic for decoding a transition sequence which helps in mitigating error propagation at evaluation time. We also use pseudo-projective transformations of nivre2005 to handle a higher percentage of non-projective arcs in the CS data ( $\sim $ 2%). We use the most informative scheme of head+path to store the transformation information. skip=0.5em figurePOS tagging and parsing network based on stack-propagation model proposed in BIBREF25 . Base Models Our base model is a stack of a tagger network and a parser network inspired by stack-propagation model of zhang-weiss:2016:P16-1. The parameters of the tagger network are shared and act as a regularization on the parsing model. The model is trained by minimizing a joint negative log-likelihood loss for both tasks. Unlike zhang-weiss:2016:P16-1, we compute the gradients of the log-loss function simultaneously for each training instance. While the parser network is updated given the parsing loss only, the tagger network is updated with respect to both tagging and parsing losses. Both tagger and parser networks comprise of an input layer, a feature layer, and an output layer as shown in Figure "Parsing Algorithm" . Following zhang-weiss:2016:P16-1, we refer to this model as stack-prop. The input layer of the tagger encodes each input word in a sentence by concatenating a pre-trained word embedding with its character embedding given by a character Bi-LSTM. In the feature layer, the concatenated word and character representations are passed through two stacked Bi-LSTMs to generate a sequence of hidden representations which encode the contextual information spread across the sentence. The first Bi-LSTM is shared with the parser network while the other is specific to the tagger. Finally, output layer uses the feed-forward neural network with a softmax function for a probability distribution over the Universal POS tags. We only use the forward and backward hidden representations of the focus word for classification. Similar to the tagger network, the input layer encodes the input sentence using word and character embeddings which are then passed to the shared Bi-LSTM. The hidden representations from the shared Bi-LSTM are then concatenated with the dense representations from the feed-forward network of the tagger and passed through the Bi-LSTM specific to the parser. This ensures that the tagging network is penalized for the parsing error caused by error propagation by back-propagating the gradients to the shared tagger parameters BIBREF25 . Finally, we use a non-linear feed-forward network to predict the labeled transitions for the parser configurations. From each parser configuration, we extract the top node in the stack and the first node in the buffer and use their hidden representations from the parser specific Bi-LSTM for classification. skip=0.5em figureCode-switching tweet showing grammatical fragments from Hindi and English. Stacking Models It seems reasonable that limited CS data would complement large monolingual data in parsing CS data and a parsing model which leverages both data would significantly improve parsing performance. 
While a parsing model trained on our limited CS data might not be enough to accurately parse the individual grammatical fragments of Hindi and English, the preexisting Hindi and English treebanks are large enough to provide sufficient annotations to capture their structure. Similarly, parsing model(s) trained on the Hindi and English data may not be able to properly connect the divergent fragments of the two languages, as the model lacks evidence for such mixed structures in the monolingual data. This will happen quite often, as Hindi and English are typologically very diverse (see Figure UID16). Figure: Neural stacking-based parsing architecture for incorporating monolingual syntactic knowledge. As we discussed above, we adapted feature-level neural stacking BIBREF25, BIBREF26 for joint learning of POS tagging and parsing. Similarly, we also adapt this stacking approach for incorporating the monolingual syntactic knowledge into the base CS model. Recently, wang-EtAl:2017:Long6 used neural stacking for injecting syntactic knowledge of English into a graph-based Singlish parser, which led to significant improvements in parsing performance. Unlike wang-EtAl:2017:Long6, our base stacked models allow us to transfer the POS tagging knowledge as well as the parse-tree knowledge. As shown in Figure "Stacking Models", we transfer both POS tagging and parsing information from the source model trained on the augmented Hindi and English data. For tagging, we augment the input layer of the CS tagger with the MLP layer of the source tagger. For transferring parsing knowledge, hidden representations from the parser-specific Bi-LSTM of the source parser are augmented with the input layer of the CS parser, which already includes the hidden layer of the CS tagger and the word and character embeddings. In addition, we also add the MLP layer of the source parser to the MLP layer of the CS parser. The MLP layers of the source parser are generated using raw features from CS parser configurations. Apart from the addition of these learned representations from the source model, the overall CS model remains similar to the base model shown in Figure "Parsing Algorithm". The tagging and parsing losses are back-propagated through the forward paths to all trainable parameters in the entire network during training, and the whole network is used collectively for inference. Experiments We train all of our POS tagging and parsing models on the training sets of the Hindi and English UD-v2 treebanks and our Hindi-English CS treebank. For tuning and evaluation, we use the development and evaluation sets from bhat-EtAl:2017:EACLshort. We conduct multiple experiments in gold and predicted settings to measure the effectiveness of the sub-modules of our parsing pipeline. In the predicted settings, we use POS taggers separately trained on the Hindi, English and CS training sets. All of our models use word embeddings from the transformed Hindi and English embedding spaces to address the problem of lexical differences prevalent in CS sentences. Hyperparameters For the language identification, POS tagging and parsing models, we include the lexical features in the input layer of our neural networks using 64-dimensional pre-trained word embeddings, while we use randomly initialized embeddings within a range of $[-0.1, +0.1]$ for non-lexical units such as POS tags and dictionary flags. We use 32-dimensional character embeddings for all three models and 32-dimensional POS tag embeddings for the pipelined parsing models.
The distributed representation of Hindi and English vocabulary are learned separately from the Hindi and English monolingual corpora. The English monolingual data contains around 280M sentences, while the Hindi data is comparatively smaller and contains around 40M sentences. The word representations are learned using Skip-gram model with negative sampling which is implemented in word2vec toolkit BIBREF27 . We use the projection algorithm of artetxe2016learning to transform the Hindi and English monolingual embeddings into same semantic space using a bilingual lexicon ( $\sim $ 63,000 entries). The bilingual lexicon is extracted from ILCI and Bojar Hindi-English parallel corpora BIBREF28 , BIBREF29 . For normalization models, we use 32-dimensional character embeddings uniformly initialized within a range of $[-0.1, +0.1]$ . The POS tagger specific Bi-LSTMs have 128 cells while the parser specific Bi-LSTMs have 256 cells. The Bi-LSTM in the language identification model has 64 cells. The character Bi-LSTMs have 32 cells for all three models. The hidden layer of MLP has 64 nodes for the language identification network, 128 nodes for the POS tagger and 256 nodes for the parser. We use hyperbolic tangent as an activation function in all tasks. In the normalization models, we use single layered Bi-LSTMs with 512 cells for both encoding and decoding of character sequences. For language identification, POS tagging and parsing networks, we use momentum SGD for learning with a minibatch size of 1. The LSTM weights are initialized with random orthonormal matrices as described in BIBREF30 . We set the dropout rate to 30% for POS tagger and parser Bi-LSTM and MLP hidden states while for language identification network we set the dropout to 50%. All three models are trained for up to 100 epochs, with early stopping based on the development set. In case of normalization, we train our encoder-decoder models for 25 epochs using vanilla SGD. We start with a learning rate of $1.0$ and after 8 epochs reduce it to half for every epoch. We use a mini-batch size of 128, and the normalized gradient is rescaled whenever its norm exceeds 5. We use a dropout rate of 30% for the Bi-LSTM. Language identification, POS tagging and parsing code is implemented in DyNet BIBREF31 and for normalization without decoding, we use Open-NMT toolkit for neural machine translation BIBREF32 . All the code is available at https://github.com/irshadbhat/nsdp-cs and the data is available at https://github.com/CodeMixedUniversalDependencies/UD_Hindi_English. Results In Table "Results" , we present the results of our main model that uses neural stacking for learning POS tagging and parsing and also for knowledge transfer from the Bilingual model. Transferring POS tagging and syntactic knowledge using neural stacking gives 1.5% LAS improvement over a naive approach of data augmentation. The Bilingual model which is trained on the union of Hindi and English data sets is least accurate of all our parsing models. However, it achieves better or near state-of-the-art results on the Hindi and English evaluation sets (see Table "Results" ). As compared to the best system in CoNLL 2017 Shared Task on Universal Dependencies BIBREF33 , BIBREF34 , our results for English are around 3% better in LAS, while for Hindi only 0.5% LAS points worse. The CS model trained only on the CS training data is slightly more accurate than the Bilingual model. 
Augmenting the Hindi-English monolingual data with the CS data supplies the mixed-grammar structures that are relevant for parsing code-switched sentences but otherwise missing from the individual datasets. The average improvement of $\sim$5% LAS clearly shows their complementary nature. Table: Accuracy of different parsing models on the evaluation set. POS tags are jointly predicted with parsing. LID = Language tag, TRN = Transliteration/normalization. Table "Results" summarizes the POS tagging results on the CS evaluation set. The tagger trained on the CS training data is 2.5% better than the Bilingual tagger. Adding the CS training data to the Hindi and English train sets further improves the accuracy by 1%. However, our stack-prop tagger achieves the highest accuracy of 90.53% by leveraging POS information from the Bilingual tagger using neural stacking. Table: POS and parsing results for the Hindi and English monolingual test sets using the pipeline and stack-prop models. Table: POS tagging accuracies of different models on the CS evaluation set. SP = stack-prop. Conclusion In this paper, we have presented a dependency parser designed explicitly for Hindi-English CS data. The parser uses the neural stacking architecture of zhang-weiss:2016:P16-1 and chen-zhang-liu:2016:EMNLP2016 for learning POS tagging and parsing and for knowledge transfer from Bilingual models trained on the Hindi and English UD treebanks. We have also presented normalization and back-transliteration models with a decoding process tailored for CS data. Our neural stacking parser is 1.5% LAS points better than the augmented parsing model and 3.8% LAS points better than the one which uses first-best normalizations.
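For reference, the UAS and LAS figures quoted above are the standard attachment scores: the percentage of words that receive the correct head (UAS), and the correct head together with the correct dependency label (LAS). A minimal sketch, independent of the authors' evaluation scripts:

```python
def attachment_scores(gold, pred):
    """gold, pred: aligned lists of (head, label) pairs, one per word."""
    assert len(gold) == len(pred)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred))   # correct head
    las = sum(g == p for g, p in zip(gold, pred))         # correct head + label
    return 100.0 * uas / n, 100.0 * las / n
```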
1448 sentences more than the dataset from Bhat et al., 2017
3996438cef34eb7bedaa6745b190c69553cf246b
3996438cef34eb7bedaa6745b190c69553cf246b_0
Q: What is LAS metric? Text: Introduction Code-switching (henceforth CS) is the juxtaposition, within the same speech utterance, of grammatical units such as words, phrases, and clauses belonging to two or more different languages BIBREF0 . The phenomenon is prevalent in multilingual societies where speakers share more than one language and is often prompted by multiple social factors BIBREF1 . Moreover, code-switching is mostly prominent in colloquial language use in daily conversations, both online and offline. Most of the benchmark corpora used in NLP for training and evaluation are based on edited monolingual texts which strictly adhere to the norms of a language related, for example, to orthography, morphology, and syntax. Social media data in general and CS data, in particular, deviate from these norms implicitly set forth by the choice of corpora used in the community. This is the reason why the current technologies often perform miserably on social media data, be it monolingual or mixed language data BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . CS data offers additional challenges over the monolingual social media data as the phenomenon of code-switching transforms the data in many ways, for example, by creating new lexical forms and syntactic structures by mixing morphology and syntax of two languages making it much more diverse than any monolingual corpora BIBREF4 . As the current computational models fail to cater to the complexities of CS data, there is often a need for dedicated techniques tailored to its specific characteristics. Given the peculiar nature of CS data, it has been widely studied in linguistics literature BIBREF8 , BIBREF0 , BIBREF1 , and more recently, there has been a surge in studies concerning CS data in NLP as well BIBREF9 , BIBREF9 , BIBREF3 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . Besides the individual computational works, a series of shared-tasks and workshops on preprocessing and shallow syntactic analysis of CS data have also been conducted at multiple venues such as Empirical Methods in NLP (EMNLP 2014 and 2016), International Conference on NLP (ICON 2015 and 2016) and Forum for Information Retrieval Evaluation (FIRE 2015 and 2016). Most of these works have attempted to address preliminary tasks such as language identification, normalization and/or back-transliteration as these data often need to go through these additional processes for their efficient processing. In this paper, we investigate these indispensable processes and other problems associated with syntactic parsing of code-switching data and propose methods to mitigate their effects. In particular, we study dependency parsing of Hindi-English code-switching data of multilingual Indian speakers from Twitter. Hindi-English code-switching presents an interesting scenario for the parsing community. Mixing among typologically diverse languages will intensify structural variations which will make parsing more challenging. For example, there will be many sentences containing: (1) both SOV and SVO word orders, (2) both head-initial and head-final genitives, (3) both prepositional and postpositional phrases, etc. More importantly, none among the Hindi and English treebanks would provide any training instance for these mixed structures within individual sentences. In this paper, we present the first code-switching treebank that provides syntactic annotations required for parsing mixed-grammar syntactic structures. 
Moreover, we present a parsing pipeline designed explicitly for Hindi-English CS data. The pipeline comprises of several modules such as a language identification system, a back-transliteration system, and a dependency parser. The gist of these modules and our overall research contributions are listed as follows: Preliminary Tasks As preliminary steps before parsing of CS data, we need to identify the language of tokens and normalize and/or back-transliterate them to enhance the parsing performance. These steps are indispensable for processing CS data and without them the performance drops drastically as we will see in Results Section. We need normalization of non-standard word forms and back-transliteration of Romanized Hindi words for addressing out-of-vocabulary problem, and lexical and syntactic ambiguity introduced due to contracted word forms. As we will train separate normalization and back-transliteration models for Hindi and English, we need language identification for selecting which model to use for inference for each word form separately. Moreover, we also need language information for decoding best word sequences. Language Identification For language identification task, we train a multilayer perceptron (MLP) stacked on top of a recurrent bidirectional LSTM (Bi-LSTM) network as shown in Figure "Results" . skip=0.5em figureLanguage identification network We represent each token by a concatenated vector of its English embedding, back-transliterated Hindi embedding, character Bi-LSTM embedding and flag embedding (English dictionary flag and word length flag with length bins of 0-3, 4-6, 7-10, and 10-all). These concatenated vectors are passed to a Bi-LSTM network to generate a sequence of hidden representations which encode the contextual information spread across the sentence. Finally, output layer uses the feed-forward neural network with a softmax function for a probability distribution over the language tags. We train the network on our CS training set concatenated with the data set provided in ICON 2015 shared task (728 Facebook comments) on language identification and evaluate it on the datasets from bhat-EtAl:2017:EACLshort. We achieved the state-of-the-art performance on both development and test sets BIBREF13 . The results are shown in Table "Results" . skip=0.5em tableLanguage Identification results on CS test set. Normalization and Back-transliteration We learn two separate but similar character-level models for normalization-cum-transliteration of noisy Romanized Hindi words and normalization of noisy English words. We treat both normalization and back-transliteration problems as a general sequence to sequence learning problem. In general, our goal is to learn a mapping for non-standard English and Romanized Hindi word forms to standard forms in their respective scripts. In case of Hindi, we address the problem of normalization and back-transliteration of Romanized Hindi words using a single model. We use the attention-based encoder-decoder model of Luong BIBREF17 with global attention for learning. For Hindi, we train the model on the transliteration pairs (87,520) from the Libindic transliteration project and Brahmi-Net BIBREF18 which are further augmented with noisy transliteration pairs (1,75,668) for normalization. Similarly, for normalization of noisy English words, we train the model on noisy word forms (4,29,715) synthetically generated from the English vocabulary. 
We use simple rules such as dropping non-initial vowels and replacing consonants based on their phonological proximity to generate synthetic data for normalization. Figure "Supplemental Material" shows some of the noisy forms generated from standard word forms using simple and finite rules which include vowel elision (please $\rightarrow $ pls), interchanging similar consonants and vowels (cousin $\rightarrow $ couzin), replacing consonant or vowel clusters with a single letter (Twitter $\rightarrow $ Twiter), etc. From here onwards, we will refer to both normalization and back-transliteration as normalization. figureSynthetic normalization pairs generated for a sample of English words using hand crafted rules. At inference time, our normalization models will predict the most likely word form for each input word. However, the single-best output from the model may not always be the best option considering an overall sentential context. Contracted word forms in social media content are quite often ambiguous and can represent different standard word forms. For example, noisy form `pt' can expand to different standard word forms such as `put', `pit', `pat', `pot' and `pet'. The choice of word selection will solely depend on the sentential context. To select contextually relevant forms, we use exact search over n-best normalizations from the respective models extracted using beam-search decoding. The best word sequence is selected using the Viterbi decoding over $b^n$ word sequences scored by a trigram language model. $b$ is the size of beam-width and $n$ is the sentence length. The language models are trained on the monolingual data of Hindi and English using KenLM toolkit BIBREF19 . For each word, we extract five best normalizations ( $b$ =5). Decoding the best word sequence is a non-trivial problem for CS data due to lack of normalized and back-transliterated CS data for training a language model. One obvious solution is to apply decoding on individual language fragments in a CS sentence BIBREF20 . One major problem with this approach is that the language models used for scoring are trained on complete sentences but are applied on sentence fragments. Scoring individual CS fragments might often lead to wrong word selection due to incomplete context, particularly at fragment peripheries. We solve this problem by using a 3-step decoding process that works on two separate versions of a CS sentence, one in Hindi, and one in English. In the first step, we replace first-best back-transliterated forms of Hindi words by their translation equivalents using a Hindi-English bilingual lexicon. An exact search is used over the top `5' normalizations of English words, the translation equivalents of Hindi words and the actual word itself. In the second step, we decode best word sequence over Hindi version of the sentence by replacing best English word forms decoded from the first step by their translation equivalents. An exact search is used over the top `5' normalizations of Hindi words, the dictionary equivalents of decoded English words and the original words. In the final step, English and Hindi words are selected from their respective decoded sequences using the predicted language tags from the language identification system. Note that the bilingual mappings are only used to aid the decoding process by making the CS sentences lexically monolingual so that the monolingual language models could be used for scoring. They are not used in the final decoded output. 
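Looping back to the hand-crafted noising rules mentioned at the start of this passage, the following toy sketch shows how such synthetic training pairs might be generated; the particular substitution table, rule mix and probabilities are illustrative assumptions rather than the rules actually used.

```python
import random

VOWELS = "aeiou"
# Toy substitution table -- the real rule set is hand-crafted and larger.
SIMILAR = {"s": "z", "z": "s", "c": "k", "k": "c"}

def add_noise(word):
    """Generate one plausible noisy variant of a standard English word."""
    choice = random.random()
    if choice < 0.4:
        # Vowel elision: keep the first letter, drop later vowels
        # (please -> pls).
        return word[0] + "".join(ch for ch in word[1:] if ch not in VOWELS)
    if choice < 0.7:
        # Swap one letter for a similar consonant (cousin -> couzin).
        idxs = [i for i, ch in enumerate(word) if ch in SIMILAR]
        if idxs:
            i = random.choice(idxs)
            return word[:i] + SIMILAR[word[i]] + word[i + 1:]
        return word
    # Collapse doubled letters (Twitter -> Twiter).
    out = []
    for ch in word:
        if not out or out[-1] != ch:
            out.append(ch)
    return "".join(out)

# Training pairs for the normalization model: (add_noise(w), w).
```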
The overall decoding process is shown in Figure 1 . Both of our normalization and back-transliteration systems are evaluated on the evaluation set of bhat-EtAl:2017:EACLshort. Results of our systems are reported in Table "Supplemental Material" with a comparison of accuracies based on the nature of decoding used. The results clearly show the significance of our 3-step decoding over first-best and fragment-wise decoding. skip=0.5em tableNormalization accuracy based on the number of noisy tokens in the evaluation set. FB = First Best, and FW = Fragment Wise Universal Dependencies for Hindi-English Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Following bhat-EtAl:2017:EACLshort we first sampled CS data from a large set of tweets of Indian language users that we crawled from Twitter using Tweepy–a Twitter API wrapper. We then used a language identification system trained on ICON dataset (see Section "Preliminary Tasks" ) to filter Hindi-English CS tweets from the crawled Twitter data. Only those tweets were selected that satisfied a minimum ratio of 30:70(%) code-switching. From this dataset, we manually selected 1,448 tweets for annotation. The selected tweets are thoroughly checked for code-switching ratio. For POS tagging and dependency annotation, we used Version 2 of Universal dependency guidelines BIBREF21 , while language tags are assigned based on the tag set defined in BIBREF22 , BIBREF23 . The dataset was annotated by two expert annotators who have been associated with annotation projects involving syntactic annotations for around 10 years. Nonetheless, we also ensured the quality of the manual annotations by carrying an inter-annotator agreement analysis. We randomly selected a dataset of 150 tweets which were annotated by both annotators for both POS tagging and dependency structures. The inter-annotator agreement has a 96.20% accuracy for POS tagging and a 95.94% UAS and a 92.65% LAS for dependency parsing. We use our dataset for training while the development and evaluation sets from bhat-EtAl:2017:EACLshort are used for tuning and evaluation of our models. Since the annotations in these datasets follow version 1.4 of the UD guidelines, we converted them to version 2 by using carefully designed rules. The statistics about the data are given in Table "Supplemental Material" . skip=0.5em tableData Statistics. Dev set is used for tuning model parameters, while Test set is used for evaluation. Dependency Parsing We adapt Kiperwasser and Goldberg kiperwasser2016simple transition-based parser as our base model and incorporate POS tag and monolingual parse tree information into the model using neural stacking, as shown in Figures "Parsing Algorithm" and "Stacking Models" . Parsing Algorithm Our parsing models are based on an arc-eager transition system BIBREF24 . The arc-eager system defines a set of configurations for a sentence w $_1$ ,...,w $_n$ , where each configuration C = (S, B, A) consists of a stack S, a buffer B, and a set of dependency arcs A. For each sentence, the parser starts with an initial configuration where S = [ROOT], B = [w $_1$ ,...,w $_n$ ] and A = $\emptyset $ and terminates with a configuration C if the buffer is empty and the stack contains the ROOT. The parse trees derived from transition sequences are given by A. 
To derive the parse tree, the arc-eager system defines four types of transitions ( $t$ ): Shift, Left-Arc, Right-Arc, and Reduce. We use the training by exploration method of goldberg2012dynamic for decoding a transition sequence which helps in mitigating error propagation at evaluation time. We also use pseudo-projective transformations of nivre2005 to handle a higher percentage of non-projective arcs in the CS data ( $\sim $ 2%). We use the most informative scheme of head+path to store the transformation information. skip=0.5em figurePOS tagging and parsing network based on stack-propagation model proposed in BIBREF25 . Base Models Our base model is a stack of a tagger network and a parser network inspired by stack-propagation model of zhang-weiss:2016:P16-1. The parameters of the tagger network are shared and act as a regularization on the parsing model. The model is trained by minimizing a joint negative log-likelihood loss for both tasks. Unlike zhang-weiss:2016:P16-1, we compute the gradients of the log-loss function simultaneously for each training instance. While the parser network is updated given the parsing loss only, the tagger network is updated with respect to both tagging and parsing losses. Both tagger and parser networks comprise of an input layer, a feature layer, and an output layer as shown in Figure "Parsing Algorithm" . Following zhang-weiss:2016:P16-1, we refer to this model as stack-prop. The input layer of the tagger encodes each input word in a sentence by concatenating a pre-trained word embedding with its character embedding given by a character Bi-LSTM. In the feature layer, the concatenated word and character representations are passed through two stacked Bi-LSTMs to generate a sequence of hidden representations which encode the contextual information spread across the sentence. The first Bi-LSTM is shared with the parser network while the other is specific to the tagger. Finally, output layer uses the feed-forward neural network with a softmax function for a probability distribution over the Universal POS tags. We only use the forward and backward hidden representations of the focus word for classification. Similar to the tagger network, the input layer encodes the input sentence using word and character embeddings which are then passed to the shared Bi-LSTM. The hidden representations from the shared Bi-LSTM are then concatenated with the dense representations from the feed-forward network of the tagger and passed through the Bi-LSTM specific to the parser. This ensures that the tagging network is penalized for the parsing error caused by error propagation by back-propagating the gradients to the shared tagger parameters BIBREF25 . Finally, we use a non-linear feed-forward network to predict the labeled transitions for the parser configurations. From each parser configuration, we extract the top node in the stack and the first node in the buffer and use their hidden representations from the parser specific Bi-LSTM for classification. skip=0.5em figureCode-switching tweet showing grammatical fragments from Hindi and English. Stacking Models It seems reasonable that limited CS data would complement large monolingual data in parsing CS data and a parsing model which leverages both data would significantly improve parsing performance. 
While a parsing model trained on our limited CS data might not be enough to accurately parse the individual grammatical fragments of Hindi and English, the preexisting Hindi and English treebanks are large enough to provide sufficient annotations to capture their structure. Similarly, parsing model(s) trained on the Hindi and English data may not be able to properly connect the divergent fragments of the two languages, as the model lacks evidence for such mixed structures in the monolingual data. This will happen quite often, as Hindi and English are typologically very diverse (see Figure UID16). Figure: Neural stacking-based parsing architecture for incorporating monolingual syntactic knowledge. As we discussed above, we adapted feature-level neural stacking BIBREF25, BIBREF26 for joint learning of POS tagging and parsing. Similarly, we also adapt this stacking approach for incorporating the monolingual syntactic knowledge into the base CS model. Recently, wang-EtAl:2017:Long6 used neural stacking for injecting syntactic knowledge of English into a graph-based Singlish parser, which led to significant improvements in parsing performance. Unlike wang-EtAl:2017:Long6, our base stacked models allow us to transfer the POS tagging knowledge as well as the parse-tree knowledge. As shown in Figure "Stacking Models", we transfer both POS tagging and parsing information from the source model trained on the augmented Hindi and English data. For tagging, we augment the input layer of the CS tagger with the MLP layer of the source tagger. For transferring parsing knowledge, hidden representations from the parser-specific Bi-LSTM of the source parser are augmented with the input layer of the CS parser, which already includes the hidden layer of the CS tagger and the word and character embeddings. In addition, we also add the MLP layer of the source parser to the MLP layer of the CS parser. The MLP layers of the source parser are generated using raw features from CS parser configurations. Apart from the addition of these learned representations from the source model, the overall CS model remains similar to the base model shown in Figure "Parsing Algorithm". The tagging and parsing losses are back-propagated through the forward paths to all trainable parameters in the entire network during training, and the whole network is used collectively for inference. Experiments We train all of our POS tagging and parsing models on the training sets of the Hindi and English UD-v2 treebanks and our Hindi-English CS treebank. For tuning and evaluation, we use the development and evaluation sets from bhat-EtAl:2017:EACLshort. We conduct multiple experiments in gold and predicted settings to measure the effectiveness of the sub-modules of our parsing pipeline. In the predicted settings, we use POS taggers separately trained on the Hindi, English and CS training sets. All of our models use word embeddings from the transformed Hindi and English embedding spaces to address the problem of lexical differences prevalent in CS sentences. Hyperparameters For the language identification, POS tagging and parsing models, we include the lexical features in the input layer of our neural networks using 64-dimensional pre-trained word embeddings, while we use randomly initialized embeddings within a range of $[-0.1, +0.1]$ for non-lexical units such as POS tags and dictionary flags. We use 32-dimensional character embeddings for all three models and 32-dimensional POS tag embeddings for the pipelined parsing models.
The distributed representation of Hindi and English vocabulary are learned separately from the Hindi and English monolingual corpora. The English monolingual data contains around 280M sentences, while the Hindi data is comparatively smaller and contains around 40M sentences. The word representations are learned using Skip-gram model with negative sampling which is implemented in word2vec toolkit BIBREF27 . We use the projection algorithm of artetxe2016learning to transform the Hindi and English monolingual embeddings into same semantic space using a bilingual lexicon ( $\sim $ 63,000 entries). The bilingual lexicon is extracted from ILCI and Bojar Hindi-English parallel corpora BIBREF28 , BIBREF29 . For normalization models, we use 32-dimensional character embeddings uniformly initialized within a range of $[-0.1, +0.1]$ . The POS tagger specific Bi-LSTMs have 128 cells while the parser specific Bi-LSTMs have 256 cells. The Bi-LSTM in the language identification model has 64 cells. The character Bi-LSTMs have 32 cells for all three models. The hidden layer of MLP has 64 nodes for the language identification network, 128 nodes for the POS tagger and 256 nodes for the parser. We use hyperbolic tangent as an activation function in all tasks. In the normalization models, we use single layered Bi-LSTMs with 512 cells for both encoding and decoding of character sequences. For language identification, POS tagging and parsing networks, we use momentum SGD for learning with a minibatch size of 1. The LSTM weights are initialized with random orthonormal matrices as described in BIBREF30 . We set the dropout rate to 30% for POS tagger and parser Bi-LSTM and MLP hidden states while for language identification network we set the dropout to 50%. All three models are trained for up to 100 epochs, with early stopping based on the development set. In case of normalization, we train our encoder-decoder models for 25 epochs using vanilla SGD. We start with a learning rate of $1.0$ and after 8 epochs reduce it to half for every epoch. We use a mini-batch size of 128, and the normalized gradient is rescaled whenever its norm exceeds 5. We use a dropout rate of 30% for the Bi-LSTM. Language identification, POS tagging and parsing code is implemented in DyNet BIBREF31 and for normalization without decoding, we use Open-NMT toolkit for neural machine translation BIBREF32 . All the code is available at https://github.com/irshadbhat/nsdp-cs and the data is available at https://github.com/CodeMixedUniversalDependencies/UD_Hindi_English. Results In Table "Results" , we present the results of our main model that uses neural stacking for learning POS tagging and parsing and also for knowledge transfer from the Bilingual model. Transferring POS tagging and syntactic knowledge using neural stacking gives 1.5% LAS improvement over a naive approach of data augmentation. The Bilingual model which is trained on the union of Hindi and English data sets is least accurate of all our parsing models. However, it achieves better or near state-of-the-art results on the Hindi and English evaluation sets (see Table "Results" ). As compared to the best system in CoNLL 2017 Shared Task on Universal Dependencies BIBREF33 , BIBREF34 , our results for English are around 3% better in LAS, while for Hindi only 0.5% LAS points worse. The CS model trained only on the CS training data is slightly more accurate than the Bilingual model. 
Augmenting the Hindi-English monolingual data with the CS data supplies the mixed-grammar structures that are relevant for parsing code-switched sentences but otherwise missing from the individual datasets. The average improvement of $\sim$5% LAS clearly shows their complementary nature. Table: Accuracy of different parsing models on the evaluation set. POS tags are jointly predicted with parsing. LID = Language tag, TRN = Transliteration/normalization. Table "Results" summarizes the POS tagging results on the CS evaluation set. The tagger trained on the CS training data is 2.5% better than the Bilingual tagger. Adding the CS training data to the Hindi and English train sets further improves the accuracy by 1%. However, our stack-prop tagger achieves the highest accuracy of 90.53% by leveraging POS information from the Bilingual tagger using neural stacking. Table: POS and parsing results for the Hindi and English monolingual test sets using the pipeline and stack-prop models. Table: POS tagging accuracies of different models on the CS evaluation set. SP = stack-prop. Conclusion In this paper, we have presented a dependency parser designed explicitly for Hindi-English CS data. The parser uses the neural stacking architecture of zhang-weiss:2016:P16-1 and chen-zhang-liu:2016:EMNLP2016 for learning POS tagging and parsing and for knowledge transfer from Bilingual models trained on the Hindi and English UD treebanks. We have also presented normalization and back-transliteration models with a decoding process tailored for CS data. Our neural stacking parser is 1.5% LAS points better than the augmented parsing model and 3.8% LAS points better than the one which uses first-best normalizations.
Unanswerable
97159b8b1ab360c34a1114cd81e8037474bd37db
97159b8b1ab360c34a1114cd81e8037474bd37db_0
Q: is the dataset balanced across the four languages? Text: Introduction Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support. However, a real bottleneck for successful classification of customer feedback in a multilingual environment is the limited transferability of such models, i.e., typically each time a new language is encountered a new model is built from scratch. This is clearly impractical, as maintaining separate models is cumbersome, besides the fact that existing annotations are simply not leveraged. In this paper we present our submission to the IJCNLP 2017 shared task on customer feedback analysis, in which data from four languages was available (English, French, Japanese and Spanish). Our goal was to build a single system for all four languages, and compare it to the traditional approach of creating separate systems for each language. We hypothesize that a single system is beneficial, as it can provide positive transfer, particularly for the languages for which less data is available. The contributions of this paper are: All-In-1: One Model for All Motivated by the goal to evaluate how good a single model for multiple languages fares, we decided to build a very simple model that can handle any of the four languages. We aimed at an approach that does not require any language-specific processing (beyond tokenization) nor requires any parallel data. We set out to build a simple baseline, which turned out to be surprisingly effective. Our model is depicted in Figure FIGREF7 . Our key motivation is to provide a simple, general system as opposed to the usual ad-hoc setups one can expect in a multilingual shared task. So we rely on character n-grams, word embeddings, and a traditional classifier, motivated as follows. First, character n-grams and traditional machine learning algorithms have proven successful for a variety of classification tasks, e.g., native language identification and language detection. In recent shared tasks simple traditional models outperformed deep neural approaches like CNNs or RNNs, e.g., BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . This motivated our choice of using a traditional model with character n-gram features. Second, we build upon the recent success of multilingual embeddings. These are embedding spaces in which word types of different languages are embedded into the same high-dimensional space. Early approaches focus mainly on bilingual approaches, while recent research aims at mapping several languages into a single space. The body of literature is huge, but an excellent recent overview is given in xlingsurvey. We chose a very simple and recently proposed method that does not rely on any parallel data BIBREF4 and extend it to the multilingual case. In particular, the method falls under the broad umbrella of monolingual mappings. These approaches first train monolingual embeddings on large unlabeled corpora for the single languages. They then learn linear mappings between the monolingual embeddings to map them to the same space. The approach we apply here is particularly interesting as it does not require parallel data (parallel sentences/documents or dictionaries) and is readily applicable to off-the-shelf embeddings. In brief, the approach aims at learning a transformation in which word vector spaces are orthogonal (by applying SVD) and it leverages so-called “pseudo-dictionaries”. 
That is, the method first finds the common word types in two embedding spaces, and uses those as pivots to learn to align the two spaces (cf. further details in smith2017offline). Experimental Setup In this section we first describe the IJCNLP 2017 shared task 4 including the data, the features, model and evaluation metrics. Task Description The customer feedback analysis task BIBREF5 is a short text classification task. Given a customer feedback message, the goal is to detect the type of customer feedback. For each message, the organizers provided one or more labels. To give a more concrete idea of the data, the following are examples of the English dataset: “Still calls keep dropping with the new update” (bug) “Room was grubby, mold on windows frames.” (complaint) “The new update is amazing.” (comment) “Needs more control s and tricks..” (request) “Enjoy the sunshine!!” (meaningless) Data The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language. We treat the customer feedback analysis problem as a single-class classification task and actually ignore multi-label instances, as motivated next. The final label distribution for the data is given in Figure FIGREF17 . In initial investigations of the data we noticed that very few instances had multiple labels, e.g., “comment,complaint”. In the English training data this amounted to INLINEFORM0 4% of the data. We decided to ignore those additional labels (just picked the first in case of multiple labels) and treat the problem as a single-class classification problem. This was motivated by the fact that some labels were expected to be easily confused. Finally, there were some labels in the data that did not map to any of the labels in the task description (i.e., `undetermined', `undefined', `nonsense' and `noneless', they were presumably typos) so we mapped them all to the `meaningless' label. This frames the task as a 5-class classification problem with the following classes: bug, comment, complaint, meaningless and request. At test time the organizers additionally provided us with translations of the three language-specific test datasets back to English. These translations were obtained by Google translate. This allowed us to evaluate our English model on the translations, to gauge whether translation is a viable alternative to training a multilingual model. Pre-processing We perform two simple preprocessing steps. First of all, we tokenize all data using off-the-shelf tokenizers. We use tinysegmenter for Japanese and the NLTK TweetTokenizer for all other languages. The Japanese segmenter was crucial to get sufficient coverage from the word embeddings later. No additional preprocessing is performed. Multilingual Embeddings Word embeddings for single languages are readily available, for example the Polyglot or Facebook embeddings BIBREF6 , which were recently released. In this work we start from the monolingual embeddings provided by the Polyglot project BIBREF7 . We use the recently proposed approach based on SVD decomposition and a “pseudo-dictionary” BIBREF4 obtained from the monolingual embeddings to project embedding spaces. To extend their method from the bilingual to the multilingual case, we apply pair-wise projections by using English as pivot, similar in spirit to ammar2016massively. We took English as our development language. 
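A minimal sketch of this projection step, under the assumption that the monolingual embeddings are available as plain word-to-vector dictionaries: the word types shared by two vocabularies act as the pseudo-dictionary, and the orthogonal map is the SVD (Procrustes) solution. The length normalization and other refinements of the original method are omitted here.

```python
import numpy as np

def pseudo_dictionary(vocab_a, vocab_b):
    """Word types occurring in both vocabularies (numbers, names,
    loanwords, ...), used as alignment pivots."""
    return sorted(set(vocab_a) & set(vocab_b))

def map_to_pivot(emb, pivot_emb):
    """Learn an orthogonal map from one embedding space onto the pivot
    (English) space using the pseudo-dictionary, then apply it to the
    whole vocabulary."""
    shared = pseudo_dictionary(emb, pivot_emb)
    x = np.stack([emb[w] for w in shared])        # source side
    y = np.stack([pivot_emb[w] for w in shared])  # English side
    u, _, vt = np.linalg.svd(x.T @ y)             # Procrustes solution
    w = u @ vt
    return {word: vec @ w for word, vec in emb.items()}

# Multilingual space: map Fr, Es and Ja embeddings pair-wise onto English, e.g.
# multilingual = {lg: map_to_pivot(embs[lg], embs["en"]) for lg in ("fr", "es", "ja")}
```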
We also experimented with using larger embeddings (Facebook embeddings; larger in the sense of both trained on more data and having higher dimensionality), however, results were comparable while training time increased, therefore we decided to stick to the smaller 64-dimensional Polyglot embeddings. Model and Features As classifier we use a traditional model, a Support Vector Machine (SVM) with linear kernel implemented in scikit-learn BIBREF8 . We tune the regularization parameter INLINEFORM0 on the English development set and keep the parameter fixed for the remaining experiments and all languages ( INLINEFORM1 ). We compared the SVM to fastText BIBREF9 . As we had expected fastText gave consistently lower performance, presumably because of the small amounts of training data. Therefore we did not further explore neural approaches. Our features are character n-grams (3-10 grams, with binary tf-idf) and word embeddings. For the latter we use a simple continuous bag-of-word representation BIBREF10 based on averaging and min-max scaling. Additionally, we experimented with adding Part-Of-Speech (POS) tags to our model. However, to keep in line with our goal to build a single system for all languages we trained a single multilingual POS tagger by exploiting the projected multilingual embeddings. In particular, we trained a state-of-the-art bidirectional LSTM tagger BIBREF11 that uses both word and character representations on the concatenation of language-specific data provided from the Universal Dependencies data (version 1.2 for En, Fr and Es and version 2.0 data for Japanese, as the latter was not available in free-form in the earlier version). The word embeddings module of the tagger is initialized with the multilingual embeddings. We investigated POS n-grams (1 to 3 grams) as additional features. Evaluation We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. Notice, since our setup deviates from the shared task setup (single-label versus multi-label classification), the final evaluation metric is different. We will report on weighted F1-score for the development and test set (with simple macro averaging), but use Exact-Accuracy and Micro F1 over all labels when presenting official results on the test sets. The latter two metrics were part of the official evaluation metrics. For details we refer the reader to the shared task overview paper BIBREF5 . Results We first present results on the provided development set, then on the official evaluation test set. Results on Development First of all, we evaluated different feature representations. As shown in Table TABREF31 character n-grams alone prove very effective, outperforming word n-grams and word embeddings alone. Overall simple character n-grams (C) in isolation are often more beneficial than word and character n-grams together, albeit for some languages results are close. The best representation are character n-grams with word embeddings. This representation provides the basis for our multilingual model which relies on multilingual embeddings. The two officially submitted models both use character n-grams (3-10) and word embeddings. Our first official submission, Monolingual is the per-language trained model using this representation. Next we investigated adding more languages to the model, by relying on the multilingual embeddings as bridge. 
For instance in Table TABREF31 , the model indicated as En+Es is a character and word embedding-based SVM trained using bilingual embeddings created by mapping the two monolingual embeddings onto the same space and using both the English and Spanish training material. As the results show, using multiple languages can improve over the in-language development performance of the character+embedding model. However, the bilingual models are still only able to handle pairs of languages. We therefore mapped all embeddings to a common space and train a single multilingual All-in-1 model on the union of all training data. This is the second model that we submitted to the shared task. As we can see from the development data, on average the multilingual model shows promising, overall (macro average) outperforming the single language-specific models. However, the multilingual model does not consistently fare better than single models, for example on French a monolingual model would be more beneficial. Adding POS tags did not help (cf. Table TABREF31 ), actually dropped performance. We disregard this feature for the final official runs. Test Performance We trained the final models on the concatenation of Train and Dev data. The results on the test set (using our internally used weighted F1 metric) are given in Table TABREF33 . There are two take-away points from the main results: First, we see a positive transfer for languages with little data, i.e., the single multilingual model outperforms the language-specific models on the two languages (Spanish and Japanese) which have the least amount of training data. Overall results between the monolingual and multilingual model are close, but the advantage of our multilingual All-in-1 approach is that it is a single model that can be applied to all four languages. Second, automatic translation harms, the performance of the EN model on the translated data is substantially lower than the respective in-language model. We could investigate this as the organizers provided us with translations of French, Spanish and Japanese back to English. Averaged over all languages our system ranked first, cf. Table TABREF34 for the results of the top 5 submissions. The multilingual model reaches the overall best exact accuracy, for two languages training a in-language model would be slightly more beneficial at the cost of maintaining a separate model. The similarity-based baseline provided by the organizers is considerably lower. Our system was outperformed on English by three teams, most of which focused only on English. Unfortunately at the time of writing there is no system description available for most other top systems, so that we cannot say whether they used more English-specific features. From the system names of other teams we may infer that most teams used neural approaches, and they score worse than our SVM-based system. The per-label breakdown of our systems on the official test data (using micro F1 as calculated by the organizers) is given in Table TABREF36 . Unsurprisingly less frequent labels are more difficult to predict. Conclusions We presented a simple model that can effectively handle multiple languages in a single system. The model is based on a traditional SVM, character n-grams and multilingual embeddings. The model ranked first in the shared task of customer feedback analysis, outperforming other approaches that mostly relied on deep neural networks. 
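Before turning to the test results, here is a minimal sketch of how the submitted feature set (character 3-10-grams with binary tf-idf plus averaged multilingual word embeddings with min-max scaling, fed to a linear-kernel SVM) can be assembled with scikit-learn. It is a reconstruction, not the authors' code; the embedding table and the toy texts below are illustrative placeholders, and texts are assumed to be pre-tokenized and whitespace-joined.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

# Assumed inputs (illustrative): projected multilingual embeddings and
# tokenized feedback messages with their labels.
embeddings = {"update": np.zeros(64), "amazing": np.zeros(64)}
train_texts = ["the new update is amazing", "room was grubby"]
train_labels = ["comment", "complaint"]
test_texts = ["needs more controls"]


class MeanEmbedding(BaseEstimator, TransformerMixin):
    """Average the multilingual word embeddings of a tokenized text."""

    def __init__(self, embeddings, dim=64):
        self.embeddings = embeddings          # dict: word -> np.ndarray
        self.dim = dim

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        rows = []
        for text in X:
            vecs = [self.embeddings[w] for w in text.split()
                    if w in self.embeddings]
            rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(self.dim))
        return np.vstack(rows)


features = make_union(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 10), binary=True),
    make_pipeline(MeanEmbedding(embeddings), MinMaxScaler()),
)
model = make_pipeline(features, LinearSVC())  # C tuned on the English dev set
model.fit(train_texts, train_labels)
predictions = model.predict(test_texts)
```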
There are two take-away messages of this work: 1) multilingual embeddings are very promising to build single multilingual models; and 2) it is important to compare deep learning methods to simple traditional baselines; while deep approaches are undoubtedly very attractive (and fun!), we always deem it important to compare deep neural to traditional approaches, as the latter often turn out to be surprisingly effective. Doing so will add to the literature and help to shed more light on understanding why and when this is the case. Acknowledgments I would like to thank the organizers, in particular Chao-Hong Liu, for his quick replies. I also thank Rob van der Goot, Héctor Martínez Alonso and Malvina Nissim for valuable comments on earlier drafts of this paper.
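For completeness, the evaluation metrics referred to above map directly onto scikit-learn calls in the single-label framing used here (the multi-label exact accuracy of the official evaluation is stricter); the toy label lists are placeholders for real gold and predicted labels.

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = ["comment", "complaint", "bug", "comment"]   # toy gold labels
y_pred = ["comment", "comment", "bug", "comment"]     # toy predictions

weighted_f1 = f1_score(y_true, y_pred, average="weighted")  # internal metric
micro_f1 = f1_score(y_true, y_pred, average="micro")        # official metric
exact_acc = accuracy_score(y_true, y_pred)                  # official metric
```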
No
cb20aebfedad1a306e82966d6e9e979129fcd9f9
cb20aebfedad1a306e82966d6e9e979129fcd9f9_0
Q: what evaluation metrics were used? Text: Introduction Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support. However, a real bottleneck for successful classification of customer feedback in a multilingual environment is the limited transferability of such models, i.e., typically each time a new language is encountered a new model is built from scratch. This is clearly impractical, as maintaining separate models is cumbersome, besides the fact that existing annotations are simply not leveraged. In this paper we present our submission to the IJCNLP 2017 shared task on customer feedback analysis, in which data from four languages was available (English, French, Japanese and Spanish). Our goal was to build a single system for all four languages, and compare it to the traditional approach of creating separate systems for each language. We hypothesize that a single system is beneficial, as it can provide positive transfer, particularly for the languages for which less data is available. The contributions of this paper are: All-In-1: One Model for All Motivated by the goal to evaluate how good a single model for multiple languages fares, we decided to build a very simple model that can handle any of the four languages. We aimed at an approach that does not require any language-specific processing (beyond tokenization) nor requires any parallel data. We set out to build a simple baseline, which turned out to be surprisingly effective. Our model is depicted in Figure FIGREF7 . Our key motivation is to provide a simple, general system as opposed to the usual ad-hoc setups one can expect in a multilingual shared task. So we rely on character n-grams, word embeddings, and a traditional classifier, motivated as follows. First, character n-grams and traditional machine learning algorithms have proven successful for a variety of classification tasks, e.g., native language identification and language detection. In recent shared tasks simple traditional models outperformed deep neural approaches like CNNs or RNNs, e.g., BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . This motivated our choice of using a traditional model with character n-gram features. Second, we build upon the recent success of multilingual embeddings. These are embedding spaces in which word types of different languages are embedded into the same high-dimensional space. Early approaches focus mainly on bilingual approaches, while recent research aims at mapping several languages into a single space. The body of literature is huge, but an excellent recent overview is given in xlingsurvey. We chose a very simple and recently proposed method that does not rely on any parallel data BIBREF4 and extend it to the multilingual case. In particular, the method falls under the broad umbrella of monolingual mappings. These approaches first train monolingual embeddings on large unlabeled corpora for the single languages. They then learn linear mappings between the monolingual embeddings to map them to the same space. The approach we apply here is particularly interesting as it does not require parallel data (parallel sentences/documents or dictionaries) and is readily applicable to off-the-shelf embeddings. In brief, the approach aims at learning a transformation in which word vector spaces are orthogonal (by applying SVD) and it leverages so-called “pseudo-dictionaries”. 
That is, the method first finds the common word types in two embedding spaces, and uses those as pivots to learn to align the two spaces (cf. further details in smith2017offline). Experimental Setup In this section we first describe the IJCNLP 2017 shared task 4 including the data, the features, model and evaluation metrics. Task Description The customer feedback analysis task BIBREF5 is a short text classification task. Given a customer feedback message, the goal is to detect the type of customer feedback. For each message, the organizers provided one or more labels. To give a more concrete idea of the data, the following are examples of the English dataset: “Still calls keep dropping with the new update” (bug) “Room was grubby, mold on windows frames.” (complaint) “The new update is amazing.” (comment) “Needs more control s and tricks..” (request) “Enjoy the sunshine!!” (meaningless) Data The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language. We treat the customer feedback analysis problem as a single-class classification task and actually ignore multi-label instances, as motivated next. The final label distribution for the data is given in Figure FIGREF17 . In initial investigations of the data we noticed that very few instances had multiple labels, e.g., “comment,complaint”. In the English training data this amounted to INLINEFORM0 4% of the data. We decided to ignore those additional labels (just picked the first in case of multiple labels) and treat the problem as a single-class classification problem. This was motivated by the fact that some labels were expected to be easily confused. Finally, there were some labels in the data that did not map to any of the labels in the task description (i.e., `undetermined', `undefined', `nonsense' and `noneless', they were presumably typos) so we mapped them all to the `meaningless' label. This frames the task as a 5-class classification problem with the following classes: bug, comment, complaint, meaningless and request. At test time the organizers additionally provided us with translations of the three language-specific test datasets back to English. These translations were obtained by Google translate. This allowed us to evaluate our English model on the translations, to gauge whether translation is a viable alternative to training a multilingual model. Pre-processing We perform two simple preprocessing steps. First of all, we tokenize all data using off-the-shelf tokenizers. We use tinysegmenter for Japanese and the NLTK TweetTokenizer for all other languages. The Japanese segmenter was crucial to get sufficient coverage from the word embeddings later. No additional preprocessing is performed. Multilingual Embeddings Word embeddings for single languages are readily available, for example the Polyglot or Facebook embeddings BIBREF6 , which were recently released. In this work we start from the monolingual embeddings provided by the Polyglot project BIBREF7 . We use the recently proposed approach based on SVD decomposition and a “pseudo-dictionary” BIBREF4 obtained from the monolingual embeddings to project embedding spaces. To extend their method from the bilingual to the multilingual case, we apply pair-wise projections by using English as pivot, similar in spirit to ammar2016massively. We took English as our development language. 
We also experimented with using larger embeddings (Facebook embeddings; larger in the sense of both trained on more data and having higher dimensionality), however, results were comparable while training time increased, therefore we decided to stick to the smaller 64-dimensional Polyglot embeddings. Model and Features As classifier we use a traditional model, a Support Vector Machine (SVM) with linear kernel implemented in scikit-learn BIBREF8 . We tune the regularization parameter INLINEFORM0 on the English development set and keep the parameter fixed for the remaining experiments and all languages ( INLINEFORM1 ). We compared the SVM to fastText BIBREF9 . As we had expected fastText gave consistently lower performance, presumably because of the small amounts of training data. Therefore we did not further explore neural approaches. Our features are character n-grams (3-10 grams, with binary tf-idf) and word embeddings. For the latter we use a simple continuous bag-of-word representation BIBREF10 based on averaging and min-max scaling. Additionally, we experimented with adding Part-Of-Speech (POS) tags to our model. However, to keep in line with our goal to build a single system for all languages we trained a single multilingual POS tagger by exploiting the projected multilingual embeddings. In particular, we trained a state-of-the-art bidirectional LSTM tagger BIBREF11 that uses both word and character representations on the concatenation of language-specific data provided from the Universal Dependencies data (version 1.2 for En, Fr and Es and version 2.0 data for Japanese, as the latter was not available in free-form in the earlier version). The word embeddings module of the tagger is initialized with the multilingual embeddings. We investigated POS n-grams (1 to 3 grams) as additional features. Evaluation We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. Notice, since our setup deviates from the shared task setup (single-label versus multi-label classification), the final evaluation metric is different. We will report on weighted F1-score for the development and test set (with simple macro averaging), but use Exact-Accuracy and Micro F1 over all labels when presenting official results on the test sets. The latter two metrics were part of the official evaluation metrics. For details we refer the reader to the shared task overview paper BIBREF5 . Results We first present results on the provided development set, then on the official evaluation test set. Results on Development First of all, we evaluated different feature representations. As shown in Table TABREF31 character n-grams alone prove very effective, outperforming word n-grams and word embeddings alone. Overall simple character n-grams (C) in isolation are often more beneficial than word and character n-grams together, albeit for some languages results are close. The best representation are character n-grams with word embeddings. This representation provides the basis for our multilingual model which relies on multilingual embeddings. The two officially submitted models both use character n-grams (3-10) and word embeddings. Our first official submission, Monolingual is the per-language trained model using this representation. Next we investigated adding more languages to the model, by relying on the multilingual embeddings as bridge. 
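A rough sketch of this feature combination (character 3-10 grams with binary tf-idf plus averaged, min-max-scaled word embeddings feeding a linear SVM) is given below. The whitespace tokenization, the 64-dimensional default, and the C value are simplifying assumptions, not the exact shared-task configuration.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

class AverageEmbedding(BaseEstimator, TransformerMixin):
    """Continuous bag-of-words sentence representation: average the
    (multilingual) word vectors of the tokens in a message."""
    def __init__(self, emb, dim=64):
        self.emb, self.dim = emb, dim
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        rows = []
        for text in X:
            # Whitespace split stands in for the language-specific tokenizers.
            vecs = [self.emb[t] for t in text.split() if t in self.emb]
            rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(self.dim))
        return np.vstack(rows)

def build_model(multilingual_emb):
    features = FeatureUnion([
        # Character 3-10 grams with binary tf-idf weighting.
        ("char", TfidfVectorizer(analyzer="char", ngram_range=(3, 10), binary=True)),
        # Averaged embeddings, min-max scaled.
        ("cbow", Pipeline([("avg", AverageEmbedding(multilingual_emb)),
                           ("scale", MinMaxScaler())])),
    ])
    # Linear-kernel SVM; the regularization parameter would be tuned on the
    # English development set.
    return Pipeline([("features", features), ("svm", LinearSVC(C=1.0))])
```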
For instance in Table TABREF31 , the model indicated as En+Es is a character and word embedding-based SVM trained using bilingual embeddings created by mapping the two monolingual embeddings onto the same space and using both the English and Spanish training material. As the results show, using multiple languages can improve over the in-language development performance of the character+embedding model. However, the bilingual models are still only able to handle pairs of languages. We therefore mapped all embeddings to a common space and trained a single multilingual All-in-1 model on the union of all training data. This is the second model that we submitted to the shared task. As we can see from the development data, on average the multilingual model shows promise, overall (macro average) outperforming the single language-specific models. However, the multilingual model does not consistently fare better than single models; for example, on French a monolingual model would be more beneficial. Adding POS tags did not help (cf. Table TABREF31 ); it actually dropped performance. We disregard this feature for the final official runs. Test Performance We trained the final models on the concatenation of Train and Dev data. The results on the test set (using our internally used weighted F1 metric) are given in Table TABREF33 . There are two take-away points from the main results: First, we see a positive transfer for languages with little data, i.e., the single multilingual model outperforms the language-specific models on the two languages (Spanish and Japanese) which have the least amount of training data. Overall results between the monolingual and multilingual model are close, but the advantage of our multilingual All-in-1 approach is that it is a single model that can be applied to all four languages. Second, automatic translation harms performance: the EN model applied to the translated data performs substantially worse than the respective in-language models. We could investigate this as the organizers provided us with translations of French, Spanish and Japanese back to English. Averaged over all languages, our system ranked first; cf. Table TABREF34 for the results of the top 5 submissions. The multilingual model reaches the overall best exact accuracy; for two languages, training an in-language model would be slightly more beneficial, at the cost of maintaining a separate model. The similarity-based baseline provided by the organizers is considerably lower. Our system was outperformed on English by three teams, most of which focused only on English. Unfortunately, at the time of writing there is no system description available for most other top systems, so we cannot say whether they used more English-specific features. From the system names of other teams we may infer that most teams used neural approaches, and they score worse than our SVM-based system. The per-label breakdown of our systems on the official test data (using micro F1 as calculated by the organizers) is given in Table TABREF36 . Unsurprisingly, less frequent labels are more difficult to predict. Conclusions We presented a simple model that can effectively handle multiple languages in a single system. The model is based on a traditional SVM, character n-grams and multilingual embeddings. The model ranked first in the shared task of customer feedback analysis, outperforming other approaches that mostly relied on deep neural networks.
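The metrics referenced above map directly onto standard scikit-learn calls; the following small sketch uses toy labels (not shared-task data) purely for concreteness.

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy gold and predicted labels for the 5-class setup described above.
gold = ["comment", "complaint", "bug", "comment", "request", "meaningless"]
pred = ["comment", "comment",   "bug", "comment", "request", "bug"]

# Per-class F1, averaged with each label weighted by its support.
weighted_f1 = f1_score(gold, pred, average="weighted")

# The official shared-task evaluation also reports exact accuracy and micro
# F1; in a single-label setting micro F1 coincides with plain accuracy.
exact_acc = accuracy_score(gold, pred)
micro_f1 = f1_score(gold, pred, average="micro")

print(f"weighted F1 = {weighted_f1:.3f}, accuracy = {exact_acc:.3f}")
```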
There are two take-away messages of this work: 1) multilingual embeddings are very promising to build single multilingual models; and 2) it is important to compare deep learning methods to simple traditional baselines; while deep approaches are undoubtedly very attractive (and fun!), we always deem it important to compare deep neural to traditional approaches, as the latter often turn out to be surprisingly effective. Doing so will add to the literature and help to shed more light on understanding why and when this is the case. Acknowledgments I would like to thank the organizers, in particular Chao-Hong Liu, for his quick replies. I also thank Rob van der Goot, Héctor Martínez Alonso and Malvina Nissim for valuable comments on earlier drafts of this paper.
weighted F1-score
45a2ce68b4a9fd4f04738085865fbefa36dd0727
45a2ce68b4a9fd4f04738085865fbefa36dd0727_0
Q: what dataset was used? Text: Introduction Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support. However, a real bottleneck for successful classification of customer feedback in a multilingual environment is the limited transferability of such models, i.e., typically each time a new language is encountered a new model is built from scratch. This is clearly impractical, as maintaining separate models is cumbersome, besides the fact that existing annotations are simply not leveraged. In this paper we present our submission to the IJCNLP 2017 shared task on customer feedback analysis, in which data from four languages was available (English, French, Japanese and Spanish). Our goal was to build a single system for all four languages, and compare it to the traditional approach of creating separate systems for each language. We hypothesize that a single system is beneficial, as it can provide positive transfer, particularly for the languages for which less data is available. The contributions of this paper are: All-In-1: One Model for All Motivated by the goal to evaluate how good a single model for multiple languages fares, we decided to build a very simple model that can handle any of the four languages. We aimed at an approach that does not require any language-specific processing (beyond tokenization) nor requires any parallel data. We set out to build a simple baseline, which turned out to be surprisingly effective. Our model is depicted in Figure FIGREF7 . Our key motivation is to provide a simple, general system as opposed to the usual ad-hoc setups one can expect in a multilingual shared task. So we rely on character n-grams, word embeddings, and a traditional classifier, motivated as follows. First, character n-grams and traditional machine learning algorithms have proven successful for a variety of classification tasks, e.g., native language identification and language detection. In recent shared tasks simple traditional models outperformed deep neural approaches like CNNs or RNNs, e.g., BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . This motivated our choice of using a traditional model with character n-gram features. Second, we build upon the recent success of multilingual embeddings. These are embedding spaces in which word types of different languages are embedded into the same high-dimensional space. Early approaches focus mainly on bilingual approaches, while recent research aims at mapping several languages into a single space. The body of literature is huge, but an excellent recent overview is given in xlingsurvey. We chose a very simple and recently proposed method that does not rely on any parallel data BIBREF4 and extend it to the multilingual case. In particular, the method falls under the broad umbrella of monolingual mappings. These approaches first train monolingual embeddings on large unlabeled corpora for the single languages. They then learn linear mappings between the monolingual embeddings to map them to the same space. The approach we apply here is particularly interesting as it does not require parallel data (parallel sentences/documents or dictionaries) and is readily applicable to off-the-shelf embeddings. In brief, the approach aims at learning a transformation in which word vector spaces are orthogonal (by applying SVD) and it leverages so-called “pseudo-dictionaries”. 
That is, the method first finds the common word types in two embedding spaces, and uses those as pivots to learn to align the two spaces (cf. further details in smith2017offline). Experimental Setup In this section we first describe the IJCNLP 2017 shared task 4 including the data, the features, model and evaluation metrics. Task Description The customer feedback analysis task BIBREF5 is a short text classification task. Given a customer feedback message, the goal is to detect the type of customer feedback. For each message, the organizers provided one or more labels. To give a more concrete idea of the data, the following are examples of the English dataset: “Still calls keep dropping with the new update” (bug) “Room was grubby, mold on windows frames.” (complaint) “The new update is amazing.” (comment) “Needs more control s and tricks..” (request) “Enjoy the sunshine!!” (meaningless) Data The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16 . Notice that the available amount of data differs per language. We treat the customer feedback analysis problem as a single-class classification task and actually ignore multi-label instances, as motivated next. The final label distribution for the data is given in Figure FIGREF17 . In initial investigations of the data we noticed that very few instances had multiple labels, e.g., “comment,complaint”. In the English training data this amounted to INLINEFORM0 4% of the data. We decided to ignore those additional labels (just picked the first in case of multiple labels) and treat the problem as a single-class classification problem. This was motivated by the fact that some labels were expected to be easily confused. Finally, there were some labels in the data that did not map to any of the labels in the task description (i.e., `undetermined', `undefined', `nonsense' and `noneless', they were presumably typos) so we mapped them all to the `meaningless' label. This frames the task as a 5-class classification problem with the following classes: bug, comment, complaint, meaningless and request. At test time the organizers additionally provided us with translations of the three language-specific test datasets back to English. These translations were obtained by Google translate. This allowed us to evaluate our English model on the translations, to gauge whether translation is a viable alternative to training a multilingual model. Pre-processing We perform two simple preprocessing steps. First of all, we tokenize all data using off-the-shelf tokenizers. We use tinysegmenter for Japanese and the NLTK TweetTokenizer for all other languages. The Japanese segmenter was crucial to get sufficient coverage from the word embeddings later. No additional preprocessing is performed. Multilingual Embeddings Word embeddings for single languages are readily available, for example the Polyglot or Facebook embeddings BIBREF6 , which were recently released. In this work we start from the monolingual embeddings provided by the Polyglot project BIBREF7 . We use the recently proposed approach based on SVD decomposition and a “pseudo-dictionary” BIBREF4 obtained from the monolingual embeddings to project embedding spaces. To extend their method from the bilingual to the multilingual case, we apply pair-wise projections by using English as pivot, similar in spirit to ammar2016massively. We took English as our development language. 
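The label preprocessing described above (keeping the first label of multi-label instances and mapping stray labels to `meaningless') can be sketched as follows; the assumption that each instance arrives as a single comma-separated label string is ours.

```python
CANONICAL = {"bug", "comment", "complaint", "meaningless", "request"}
# Labels that did not map to the task description (presumably typos).
STRAY = {"undetermined", "undefined", "nonsense", "noneless"}

def normalize_label(label_field: str) -> str:
    """Reduce an annotation such as 'comment,complaint' or 'noneless'
    to a single label from the 5-class scheme described above."""
    first = label_field.split(",")[0].strip().lower()   # keep the first label
    if first in STRAY:
        return "meaningless"
    assert first in CANONICAL, f"unexpected label: {first}"
    return first

assert normalize_label("comment,complaint") == "comment"
assert normalize_label("noneless") == "meaningless"
```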
We also experimented with using larger embeddings (Facebook embeddings; larger in the sense of both trained on more data and having higher dimensionality), however, results were comparable while training time increased, therefore we decided to stick to the smaller 64-dimensional Polyglot embeddings. Model and Features As classifier we use a traditional model, a Support Vector Machine (SVM) with linear kernel implemented in scikit-learn BIBREF8 . We tune the regularization parameter INLINEFORM0 on the English development set and keep the parameter fixed for the remaining experiments and all languages ( INLINEFORM1 ). We compared the SVM to fastText BIBREF9 . As we had expected fastText gave consistently lower performance, presumably because of the small amounts of training data. Therefore we did not further explore neural approaches. Our features are character n-grams (3-10 grams, with binary tf-idf) and word embeddings. For the latter we use a simple continuous bag-of-word representation BIBREF10 based on averaging and min-max scaling. Additionally, we experimented with adding Part-Of-Speech (POS) tags to our model. However, to keep in line with our goal to build a single system for all languages we trained a single multilingual POS tagger by exploiting the projected multilingual embeddings. In particular, we trained a state-of-the-art bidirectional LSTM tagger BIBREF11 that uses both word and character representations on the concatenation of language-specific data provided from the Universal Dependencies data (version 1.2 for En, Fr and Es and version 2.0 data for Japanese, as the latter was not available in free-form in the earlier version). The word embeddings module of the tagger is initialized with the multilingual embeddings. We investigated POS n-grams (1 to 3 grams) as additional features. Evaluation We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. Notice, since our setup deviates from the shared task setup (single-label versus multi-label classification), the final evaluation metric is different. We will report on weighted F1-score for the development and test set (with simple macro averaging), but use Exact-Accuracy and Micro F1 over all labels when presenting official results on the test sets. The latter two metrics were part of the official evaluation metrics. For details we refer the reader to the shared task overview paper BIBREF5 . Results We first present results on the provided development set, then on the official evaluation test set. Results on Development First of all, we evaluated different feature representations. As shown in Table TABREF31 character n-grams alone prove very effective, outperforming word n-grams and word embeddings alone. Overall simple character n-grams (C) in isolation are often more beneficial than word and character n-grams together, albeit for some languages results are close. The best representation are character n-grams with word embeddings. This representation provides the basis for our multilingual model which relies on multilingual embeddings. The two officially submitted models both use character n-grams (3-10) and word embeddings. Our first official submission, Monolingual is the per-language trained model using this representation. Next we investigated adding more languages to the model, by relying on the multilingual embeddings as bridge. 
For instance in Table TABREF31 , the model indicated as En+Es is a character and word embedding-based SVM trained using bilingual embeddings created by mapping the two monolingual embeddings onto the same space and using both the English and Spanish training material. As the results show, using multiple languages can improve over the in-language development performance of the character+embedding model. However, the bilingual models are still only able to handle pairs of languages. We therefore mapped all embeddings to a common space and train a single multilingual All-in-1 model on the union of all training data. This is the second model that we submitted to the shared task. As we can see from the development data, on average the multilingual model shows promising, overall (macro average) outperforming the single language-specific models. However, the multilingual model does not consistently fare better than single models, for example on French a monolingual model would be more beneficial. Adding POS tags did not help (cf. Table TABREF31 ), actually dropped performance. We disregard this feature for the final official runs. Test Performance We trained the final models on the concatenation of Train and Dev data. The results on the test set (using our internally used weighted F1 metric) are given in Table TABREF33 . There are two take-away points from the main results: First, we see a positive transfer for languages with little data, i.e., the single multilingual model outperforms the language-specific models on the two languages (Spanish and Japanese) which have the least amount of training data. Overall results between the monolingual and multilingual model are close, but the advantage of our multilingual All-in-1 approach is that it is a single model that can be applied to all four languages. Second, automatic translation harms, the performance of the EN model on the translated data is substantially lower than the respective in-language model. We could investigate this as the organizers provided us with translations of French, Spanish and Japanese back to English. Averaged over all languages our system ranked first, cf. Table TABREF34 for the results of the top 5 submissions. The multilingual model reaches the overall best exact accuracy, for two languages training a in-language model would be slightly more beneficial at the cost of maintaining a separate model. The similarity-based baseline provided by the organizers is considerably lower. Our system was outperformed on English by three teams, most of which focused only on English. Unfortunately at the time of writing there is no system description available for most other top systems, so that we cannot say whether they used more English-specific features. From the system names of other teams we may infer that most teams used neural approaches, and they score worse than our SVM-based system. The per-label breakdown of our systems on the official test data (using micro F1 as calculated by the organizers) is given in Table TABREF36 . Unsurprisingly less frequent labels are more difficult to predict. Conclusions We presented a simple model that can effectively handle multiple languages in a single system. The model is based on a traditional SVM, character n-grams and multilingual embeddings. The model ranked first in the shared task of customer feedback analysis, outperforming other approaches that mostly relied on deep neural networks. 
There are two take-away messages of this work: 1) multilingual embeddings are very promising to build single multilingual models; and 2) it is important to compare deep learning methods to simple traditional baselines; while deep approaches are undoubtedly very attractive (and fun!), we always deem it important to compare deep neural to traditional approaches, as the latter often turn out to be surprisingly effective. Doing so will add to the literature and help to shed more light on understanding why and when this is the case. Acknowledgments I would like to thank the organizers, in particular Chao-Hong Liu, for his quick replies. I also thank Rob van der Goot, Héctor Martínez Alonso and Malvina Nissim for valuable comments on earlier drafts of this paper.
A customer feedback dataset from a joint ADAPT-Microsoft project, covering English, French, Japanese and Spanish.
9349acbfce95cb5d6b4d09ac626b55a9cb90e55e
9349acbfce95cb5d6b4d09ac626b55a9cb90e55e_0
Q: What are the citation intent labels in the datasets? Text: Introduction Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work BIBREF0 , BIBREF1 . They are also typically used as the main measure for assessing impact of scientific publications, venues, and researchers BIBREF2 . The nature of citations can be different. Some citations indicate direct use of a method while some others merely serve as acknowledging a prior work. Therefore, identifying the intent of citations (Figure 1 ) is critical in improving automated analysis of academic literature and scientific impact measurement BIBREF1 , BIBREF3 . Other applications of citation intent classification are enhanced research experience BIBREF4 , information retrieval BIBREF5 , summarization BIBREF6 , and studying evolution of scientific fields BIBREF7 . In this work, we approach the problem of citation intent classification by modeling the language expressed in the citation context. A citation context includes text spans in a citing paper describing a referenced work and has been shown to be the primary signal in intent classification BIBREF8 , BIBREF9 , BIBREF7 . Existing models for this problem are feature-based, modeling the citation context with respect to a set of predefined hand-engineered features (such as linguistic patterns or cue phrases) and ignoring other signals that could improve prediction. In this paper we argue that better representations can be obtained directly from data, sidestepping problems associated with external features. To this end, we propose a neural multitask learning framework to incorporate knowledge into citations from the structure of scientific papers. In particular, we propose two auxiliary tasks as Istructural scaffolds to improve citation intent prediction: (1) predicting the section title in which the citation occurs and (2) predicting whether a sentence needs a citation. Unlike the primary task of citation intent prediction, it is easy to collect large amounts of training data for scaffold tasks since the labels naturally occur in the process of writing a paper and thus, there is no need for manual annotation. On two datasets, we show that the proposed neural scaffold model outperforms existing methods by large margins. Our contributions are: (i) we propose a neural scaffold framework for citation intent classification to incorporate into citations knowledge from structure of scientific papers; (ii) we achieve a new state-of-the-art of 67.9% F1 on the ACL-ARC citations benchmark, an absolute 13.3% increase over the previous state-of-the-art BIBREF7 ; and (iii) we introduce SciCite, a new dataset of citation intents which is at least five times as large as existing datasets and covers a variety of scientific domains. Model We propose a neural multitask learning framework for classification of citation intents. In particular, we introduce and use two structural scaffolds, auxiliary tasks related to the structure of scientific papers. The auxiliary tasks may not be of interest by themselves but are used to inform the main task. Our model uses a large auxiliary dataset to incorporate this structural information available in scientific documents into the citation intents. The overview of our model is illustrated in Figure 2 . Let $C$ denote the citation and $x̭$ denote the citation context relevant to $C$ . 
We encode the tokens in the citation context of size $n$ as $x̭=\lbrace x̭_1, ..., x̭_n\rbrace $ , where $x̭_i\in \mathcal {R}^{d_1}$ is a word vector of size $d_1$ which concatenates non-contextualized word representations BIBREF10 and contextualized embeddings BIBREF11 , i.e.: $x̭_i = \big [x̭_i^{\text{GloVe}};x̭_i^{\text{ELMo}}\big ]$ We then use a bidirectional long short-term memory BIBREF12 (BiLSTM) network with hidden size of $d_2$ to obtain a contextual representation of each token vector with respect to the entire sequence: $ h̭_i = \big [\overrightarrow{\mathrm {LSTM}}(x̭, i);\overleftarrow{\mathrm {LSTM}}(x̭, i)\big ],$ where $ h̭ \in \mathcal {R}^{(n, 2d_2)} $ and $\overrightarrow{\mathrm {LSTM}}(x̭,i)$ processes $x̭$ from left to write and returns the LSTM hidden state at position $i$ (and vice versa for the backward direction $\overleftarrow{\mathrm {LSTM}}$ ). We then use an attention mechanism to get a single vector representing the whole input sequence: $ z̭ = \sum _{i=1}^n\alpha _i h̭_i, \quad \alpha _i = \operatorname{softmax}(w̭^\top h̭_i),$ where $w̭$ is a parameter served as the query vector for dot-product attention. So far we have obtained the citation representation as a vector $z̭$ . Next, we describe our two proposed structural scaffolds for citation intent prediction. Structural scaffolds In scientific writing there is a connection between the structure of scientific papers and the intent of citations. To leverage this connection for more effective classification of citation intents, we propose a multitask framework with two structural scaffolds (auxiliary tasks) related to the structure of scientific documents. A key point for our proposed scaffolds is that they do not need any additional manual annotation as labels for these tasks occur naturally in scientific writing. The structural scaffolds in our model are the following: The first scaffold task that we consider is “citation worthiness” of a sentence, indicating whether a sentence needs a citation. The language expressed in citation sentences is likely distinctive from regular sentences in scientific writing, and such information could also be useful for better language modeling of the citation contexts. To this end, using citation markers such as “[12]” or “Lee et al (2010)”, we identify sentences in a paper that include citations and the negative samples are sentences without citation markers. The goal of the model for this task is to predict whether a particular sentence needs a citation. The second scaffold task relates to predicting the section title in which a citation appears. Scientific documents follow a standard structure where the authors typically first introduce the problem, describe methodology, share results, discuss findings and conclude the paper. The intent of a citation could be relevant to the section of the paper in which the citation appears. For example, method-related citations are more likely to appear in the methods section. Therefore, we use the section title prediction as a scaffold for predicting citation intents. Note that this scaffold task is different than simply adding section title as an additional feature in the input. We are using the section titles from a larger set of data than training data for the main task as a proxy to learn linguistic patterns that are helpful for citation intents. 
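A minimal PyTorch sketch of the encoder described above (a BiLSTM over concatenated GloVe+ELMo token vectors followed by dot-product attention pooling) is shown below; batching details, masking, and the ELMo lookup itself are omitted, and the dimensions are assumptions based on the implementation details reported later.

```python
import torch
import torch.nn as nn

class CitationEncoder(nn.Module):
    """BiLSTM over token vectors followed by dot-product attention pooling,
    yielding a single vector z for the citation context."""
    def __init__(self, input_dim=1124, hidden_dim=50):
        super().__init__()
        # input_dim assumes 100-d GloVe concatenated with 1024-d ELMo.
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Learned query vector w for dot-product attention.
        self.w = nn.Parameter(torch.randn(2 * hidden_dim))

    def forward(self, x):                # x: (batch, seq_len, input_dim)
        h, _ = self.bilstm(x)            # h: (batch, seq_len, 2*hidden_dim)
        scores = h @ self.w              # (batch, seq_len)
        alpha = torch.softmax(scores, dim=1)
        z = (alpha.unsqueeze(-1) * h).sum(dim=1)   # (batch, 2*hidden_dim)
        return z

# z = CitationEncoder()(torch.randn(8, 40, 1124))  # -> shape (8, 100)
```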
In particular, we leverage a large number of scientific papers for which the section information is known for each citation to automatically generate large amounts of training data for this scaffold task. Multitask learning as defined by BIBREF13 is an approach to inductive transfer learning that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It requires the model to have at least some sharable parameters between the tasks. In a general setting in our model, we have a main task $Task^{(1)}$ and $n-1$ auxiliary tasks $Task^{(i)}$ . As shown in Figure 2 , each scaffold task will have its task-specific parameters for effective classification and the parameters for the lower layers of the network are shared across tasks. We use a Multi Layer Perceptron (MLP) for each task and then a softmax layer to obtain prediction probabilites. In particular, given the vector $z̭$ we pass it to $n$ MLPs and obtain $n$ output vectors $y̭^{(i)}$ : $ y̭^{(i)} = \operatorname{softmax}(\mathrm {MLP}^{(i)}(z̭)) $ We are only interested in the output $y̭^{(1)}$ and the rest of outputs $(y̭^{(2)}, ..., y̭^{(n)})$ are regarding the scaffold tasks and only used in training to inform the model of knowledge in the structure of the scientific documents. For each task, we output the class with the highest probability in $y̭$ . An alternative inference method is to sample from the output distribution. 0.5pt 1.0pt Training Let $\mathcal {D}_1$ be the labeled dataset for the main task $Task^{(1)}$ , and $\mathcal {D}_i$ denote the labeled datasets corresponding to the scaffold task $Task^{(i)}$ where $i\in \lbrace 2,...,n\rbrace $ . Similarly, let $\mathcal {L}_1$ and $\mathcal {L}_i$ be the main loss and the loss of the auxiliary task $i$ , respectively. The final loss of the model is: $$\small \mathcal {L}=\sum _{(x̭,y̭)\in \mathcal {D}_1} \mathcal {L}_1(x̭,y̭) + \sum _{i=2}^n \lambda _i \sum _{(x̭,y̭)\in \mathcal {D}_i} \mathcal {L}_i(x̭,y̭),$$ (Eq. 15) where $\lambda _i$ is a hyper-parameter specifying the sensitivity of the parameters of the model to each specific task. Here we have two scaffold tasks and hence $n{=}3$ . $\lambda _i$ could be tuned based on performance on validation set (see § "Experiments" for details). We train this model jointly across tasks and in an end-to-end fashion. In each training epoch, we construct mini-batches with the same number of instances from each of the $n$ tasks. We compute the total loss for each mini-batch as described in Equation 15 , where $\mathcal {L}_i{=}0$ for all instances of other tasks $j{\ne }i$ . We compute the gradient of the loss for each mini-batch and tune model parameters using the AdaDelta optimizer BIBREF14 with gradient clipping threshold of 5.0. We stop training the model when the development macro F1 score does not improve for five consecutive epochs. Data We compare our results on two datasets from different scientific domains. While there has been a long history of studying citation intents, there are only a few existing publicly available datasets on the task of citation intent classification. We use the most recent and comprehensive (ACL-ARC citations dataset) by BIBREF7 as a benchmark dataset to compare the performance of our model to previous work. In addition, to address the limited scope and size of this dataset, we introduce SciCite, a new dataset of citation intents that addresses multiple scientific domains and is more than five times larger than ACL-ARC. 
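The multitask objective in Equation 15 can be sketched as a shared encoder with one small MLP head per task and scaffold losses scaled by the corresponding λ; the class counts (ACL-ARC intents, citation worthiness, normalized section titles) and the λ values below are placeholders consistent with the setup described here, not the exact configuration.

```python
import torch
import torch.nn as nn

class ScaffoldModel(nn.Module):
    """Shared encoder with one classification head for the main task
    (citation intent) and one head per scaffold task."""
    def __init__(self, encoder, enc_dim=100, n_classes=(6, 2, 5), hidden=20):
        super().__init__()
        self.encoder = encoder
        self.heads = nn.ModuleList([
            # Single-layer MLP: dropout between input and hidden, then ReLU.
            nn.Sequential(nn.Dropout(0.2), nn.Linear(enc_dim, hidden),
                          nn.ReLU(), nn.Linear(hidden, c))
            for c in n_classes
        ])

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

def total_loss(model, batches, lambdas=(1.0, 0.1, 0.05)):
    """batches: list of (x, y, task_id) mini-batches, one per task.
    lambdas[0] = 1 corresponds to the main-task loss; the remaining values
    weight the scaffold losses."""
    ce = nn.CrossEntropyLoss()
    loss = 0.0
    for x, y, task_id in batches:
        loss = loss + lambdas[task_id] * ce(model(x, task_id), y)
    return loss
```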
Below is a description of both datasets. ACL-ARC citations dataset ACL-ARC is a dataset of citation intents released by BIBREF7 . The dataset is based on a sample of papers from the ACL Anthology Reference Corpus BIBREF15 and includes 1,941 citation instances from 186 papers and is annotated by domain experts in the NLP field. The data was split into three standard stratified sets of train, validation, and test with 85% of data used for training and remaining 15% divided equally for validation and test. Each citation unit includes information about the immediate citation context, surrounding context, as well as information about the citing and cited paper. The data includes six intent categories outlined in Table 2 . SciCite dataset Most existing datasets contain citation categories that are too fine-grained. Some of these intent categories are very rare or not useful in meta analysis of scientific publications. Since some of these fine-grained categories only cover a minimal percentage of all citations, it is difficult to use them to gain insights or draw conclusions on impacts of papers. Furthermore, these datasets are usually domain-specific and are relatively small (less than 2,000 annotated citations). To address these limitations, we introduce SciCite, a new dataset of citation intents that is significantly larger, more coarse-grained and general-domain compared with existing datasets. Through examination of citation intents, we found out many of the categories defined in previous work such as motivation, extension or future work, can be considered as background information providing more context for the current research topic. More interesting intent categories are a direct use of a method or comparison of results. Therefore, our dataset provides a concise annotation scheme that is useful for navigating research topics and machine reading of scientific papers. We consider three intent categories outlined in Table 1 : Background, Method and ResultComparison. Below we describe data collection and annotation details. Citation intent of sentence extractions was labeled through the crowdsourcing platform Figure Eight. We selected a sample of papers from the Semantic Scholar corpus, consisting of papers in general computer science and medicine domains. Citation contexts were extracted using science-parse. The annotators were asked to identify the intent of a citation, and were directed to select among three citation intent options: Method, ResultComparison and Background. The annotation interface also included a dummy option Other which helps improve the quality of annotations of other categories. We later removed instances annotated with the Other option from our dataset (less than 1% of the annotated data), many of which were due to citation contexts which are incomplete or too short for the annotator to infer the citation intent. We used 50 test questions annotated by a domain expert to ensure crowdsource workers were following directions and disqualify annotators with accuracy less than 75%. Furthermore, crowdsource workers were required to remain on the annotation page (five annotations) for at least ten seconds before proceeding to the next page. Annotations were dynamically collected. The annotations were aggregated along with a confidence score describing the level of agreement between multiple crowdsource workers. The confidence score is the agreement on a single instance weighted by a trust score (accuracy of the annotator on the initial 50 test questions). 
To only collect high quality annotations, instances with confidence score of $\le $ 0.7 were discarded. In addition, a subset of the dataset with 100 samples was re-annotated by a trained, expert annotator to check for quality, and the agreement rate with crowdsource workers was 86%. Citation contexts were annotated by 850 crowdsource workers who made a total of 29,926 annotations and individually made between 4 and 240 annotations. Each sentence was annotated, on average, 3.74 times. This resulted in a total 9,159 crowdsourced instances which were divided to training and validation sets with 90% of the data used for the training set. In addition to the crowdsourced data, a separate test set of size 1,861 was annotated by a trained, expert annotator to ensure high quality of the dataset. Data for scaffold tasks For the first scaffold (citation worthiness), we sample sentences from papers and consider the sentences with citations as positive labels. We also remove the citation markers from those sentences such as numbered citations (e.g., [1]) or name-year combinations (e.g, Lee et al (2012)) to not make the second task artificially easy by only detecting citation markers. For the second scaffold (citation section title), respective to each test dataset, we sample citations from the ACL-ARC corpus and Semantic Scholar corpus and extract the citation context as well as their corresponding sections. We manually define regular expression patterns mappings to normalized section titles: “introduction”, “related work”, “method”, “experiments”, “conclusion”. Section titles which did not map to any of the aforementioned titles were excluded from the dataset. Overall, the size of the data for scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (citation worthiness) while on SciCite is about 91K and 73K for section title and citation worthiness scaffolds, respectively. Implementation We implement our proposed scaffold framework using the AllenNLP library BIBREF16 . For word representations, we use 100-dimensional GloVe vectors BIBREF17 trained on a corpus of 6B tokens from Wikipedia and Gigaword. For contextual representations, we use ELMo vectors released by BIBREF18 with output dimension size of 1,024 which have been trained on a dataset of 5.5B tokens. We use a single-layer BiLSTM with a hidden dimension size of 50 for each direction. For each of scaffold tasks, we use a single-layer MLP with 20 hidden nodes , ReLU BIBREF19 activation and a Dropout rate BIBREF20 of 0.2 between the hidden and input layers. The hyperparameters $\lambda _i$ are tuned for best performance on the validation set of the respective datasets using a 0.0 to 0.3 grid search. For example, the following hyperparameters are used for the ACL-ARC. Citation worthiness saffold: $\lambda _2{=}0.08$ , $\lambda _3{=}0$ , section title scaffold: $\lambda _3{=}0.09$ , $\lambda _2{=}0$ ; both scaffolds: $\lambda _2{=}0.1$ , $\lambda _3{=}0.05$ . Batch size is 8 for ACL-ARC dataset and 32 for SciCite dataset (recall that SciCite is larger than ACL-ARC). We use Beaker for running the experiments. On the smaller dataset, our best model takes approximately 30 minutes per epoch to train (training time without ELMo is significantly faster). It is known that multiple runs of probabilistic deep learning models can have variance in overall scores BIBREF21 . We control this by setting random-number generator seeds; the reported overall results are average of multiple runs with different random seeds. 
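The construction of the scaffold data described above can be sketched roughly as follows: stripping citation markers for the citation-worthiness task and normalizing section headings with regular expressions. The patterns below are illustrative guesses, not the authors' exact expressions.

```python
import re

# Citation markers: numbered ([12], [1, 2]) and name-year (Lee et al (2012)).
CITATION_MARKER = re.compile(
    r"\[\d+(?:\s*,\s*\d+)*\]"
    r"|\(?\b[A-Z][A-Za-z-]+(?: et al\.?)?,?\s*\(?\d{4}\)?\)?"
)

# Mapping from raw headings to the five normalized section titles.
SECTION_PATTERNS = [
    (re.compile(r"intro", re.I), "introduction"),
    (re.compile(r"related|previous work", re.I), "related work"),
    (re.compile(r"method|model|approach", re.I), "method"),
    (re.compile(r"experiment|result|evaluation", re.I), "experiments"),
    (re.compile(r"conclu|discussion", re.I), "conclusion"),
]

def strip_citation_markers(sentence: str) -> str:
    """Positive examples for the citation-worthiness scaffold: sentences that
    contained a citation, with the marker itself removed."""
    return CITATION_MARKER.sub("", sentence).strip()

def normalize_section(title: str):
    """Map a raw section heading to a normalized title; return None (and
    drop the citation) if nothing matches."""
    for pattern, norm in SECTION_PATTERNS:
        if pattern.search(title):
            return norm
    return None
```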
To facilitate reproducibility, we release our code, data, and trained models. Baselines We compare our results to several baselines including the model with state-of-the-art performance on the ACL-ARC dataset. [leftmargin=6pt] BiLSTM Attention (with and without ELMo). This baseline uses a similar architecture to our proposed neural multitask learning framework, except that it only optimizes the network for the main loss regarding the citation intent classification ( $\mathcal {L}_1$ ) and does not include the structural scaffolds. We experiment with two variants of this model: with and without using the contextualized word vector representations (ELMo) of BIBREF18 . This baseline is useful for evaluating the effect of adding scaffolds in controlled experiments. BIBREF7 . To make sure our results are competitive with state-of-the-art results on this task, we also compare our model to BIBREF7 which has the best reported results on the ACL-ARC dataset. BIBREF7 incorporate a variety of features, ranging from pattern-based features to topic-modeling features, to citation graph features. They also incorporate section titles and relative section position in the paper as features. Our implementation of this model achieves a macro-averaged F1 score of 0.526 using 10-fold cross-validation, which is in line with the highest reported results in BIBREF7 : 0.53 using leave-one-out cross validation. We were not able to use leave-one-out cross validation in our experiments since it is impractical to re-train each variant of our deep learning models thousands of times. Therefore, we opted for a standard setup of stratified train/validation/test data splits with 85% data used for training and the rest equally split between validation and test. Results Our main results for the ACL-ARC dataset BIBREF7 is shown in Table 3 . We observe that our scaffold-enhanced models achieve clear improvements over the state-of-the-art approach on this task. Starting with the `BiLSTM-Attn' baseline with a macro F1 score of 51.8, adding the first scaffold task in `BiLSTM-Attn + section title scaffold' improves the F1 score to 56.9 ( $\Delta {=}5.1$ ). Adding the second scaffold in `BiLSTM-Attn + citation worthiness scaffold' also results in similar improvements: 56.3 ( $\Delta {=}4.5$ ). When both scaffolds are used simultaneously in `BiLSTM-Attn + both scaffolds', the F1 score further improves to 63.1 ( $\Delta {=}11.3$ ), suggesting that the two tasks provide complementary signal that is useful for citation intent prediction. The best result is achieved when we also add ELMo vectors BIBREF18 to the input representations in `BiLSTM-Attn w/ ELMo + both scaffolds', achieving an F1 of 67.9, a major improvement from the previous state-of-the-art results of BIBREF7 54.6 ( $\Delta {=}13.3$ ). We note that the scaffold tasks provide major contributions on top of the ELMo-enabled baseline ( $\Delta {=}$ 13.6), demonstrating the efficacy of using structural scaffolds for citation intent prediction. We note that these results were obtained without using hand-curated features or additional linguistic resources as used in BIBREF7 . We also experimented with adding features used in BIBREF7 to our best model and not only we did not see any improvements, but we observed at least 1.7% decline in performance. This suggests that these additional manual features do not provide the model with any additional useful signals beyond what the model already learns from the data. 
Table 4 shows the main results on the SciCite dataset, where we see similar patterns. Each scaffold task improves model performance. Adding both scaffolds results in further improvements, and the best results are obtained by using the ELMo representation in addition to both scaffolds. Note that this dataset is more than five times larger than ACL-ARC; therefore, the performance numbers are generally higher and the F1 gains are generally smaller, since it is easier for the models to learn optimal parameters utilizing the larger annotated data. On this dataset, the best baseline is the neural baseline with the addition of ELMo contextual vectors, achieving an F1 score of 82.6, followed by BIBREF7 , which is expected because neural models generally achieve higher gains when more training data is available and because BIBREF7 was not designed with the SciCite dataset in mind. The breakdown of results by intent on the ACL-ARC and SciCite datasets is shown in Tables 5 and 6 , respectively. Generally, we observe that results on categories with more instances are higher. For example, on ACL-ARC, the results on the Background category are the highest as this category is the most common. Conversely, the results on the FutureWork category are the lowest. This category has the fewest data points (see distribution of the categories in Table 2 ) and thus it is harder for the model to learn the optimal parameters for correct classification in this category. Analysis To gain more insight into why the scaffolds help the model improve citation intent classification, we examine the attention weights assigned to inputs for our best proposed model (`BiLSTM-Attn w/ ELMo + both scaffolds') compared with the best neural baseline (`BiLSTM-Attn w/ ELMo'). We conduct this analysis for examples from both datasets. Figure 3 shows example input citations along with heatmaps of the attention weights for these inputs resulting from our model versus the baseline. For the first example (Figure 3 ) the true label is FutureWork. We observe that our model puts more weight on the words surrounding the word “future”, which is plausible given the true label. On the other hand, the baseline model attends most to the word “compare” and consequently incorrectly predicts a Compare label. In the second example (Figure 3 ) the true label is ResultComparison. The baseline incorrectly classifies it as Background, likely due to attending to another part of the sentence (“analyzed seprately”). Our model correctly classifies this instance by putting more attention weight on words that relate to comparison of the results. This suggests that our model is more successful in learning optimal parameters for representing the citation text and classifying its respective intent compared with the baseline. Note that the only difference between our model and the neural baseline is the inclusion of the structural scaffolds, which suggests the effectiveness of the scaffolds in informing the main task with signals relevant for citation intent classification. We next investigate errors made by our best model (Figure 4 plots classification errors). One general error pattern is that the model has a greater tendency to make false positive errors in the Background category, likely because this category dominates both datasets. It is interesting that for the ACL-ARC dataset some prediction errors are due to the model failing to properly differentiate the Use category from Background.
We found out that some of these errors would have been possibly prevented by using additional context. Table 7 shows a sample of such classification errors. For the citation in the first row of the table, the model is likely distracted by “model in (citation)” and “ILP formulation from (citation)” deeming the sentence is referring to the use of another method from a cited paper and it misses the first part of the sentence describing the motivation. This is likely due to the small number of training instances in the Motivation category, preventing the model to learn such nuances. For the examples in the second and third row, it is not clear if it is possible to make the correct prediction without additional context. And similarly in the last row the instance seems ambiguous without accessing to additional context. Similarly as shown in Figure 4 two of FutureWork labels are wrongly classified. One of them is illustrated in the forth row of Table 7 where perhaps additional context could have helped the model in identifying the correct label. One possible way to prevent this type of errors, is to provide the model with an additional input, modeling the extended surrounding context. We experimented with encoding the extended surrounding context using a BiLSTM and concatenating it with the main citation context vector (z), but it resulted in a large decline in overall performance likely due to the overall noise introduced by the additional context. A possible future work is to investigate alternative effective approaches for incorporating the surrounding extended context. Related Work There is a large body of work studying the intent of citations and devising categorization systems BIBREF22 , BIBREF4 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF8 , BIBREF26 , BIBREF27 . Most of these efforts provide citation categories that are too fine-grained, some of which rarely occur in papers. Therefore, they are hardly useful for automated analysis of scientific publications. To address these problems and to unify previous efforts, in a recent work, BIBREF7 proposed a six category system for citation intents. In this work, we focus on two schemes: (1) the scheme proposed by BIBREF7 and (2) an additional, more coarse-grained general-purpose category system that we propose (details in § "Data" ). Unlike other schemes that are domain-specific, our scheme is general and naturally fits in scientific discourse in multiple domains. Early works in automated citation intent classification were based on rule-based systems (e.g., BIBREF23 , BIBREF28 ). Later, machine learning methods based on linguistic patterns and other hand-engineered features from citation context were found to be effective. For example, BIBREF8 proposed use of “cue phrases”, a set of expressions that talk about the act of presenting research in a paper. BIBREF9 relied on lexical, structural, and syntactic features and a linear SVM for classification. Researchers have also investigated methods of finding cited spans in the cited papers. Examples include feature-based methods BIBREF29 , domain-specific knowledge BIBREF30 , and a recent CNN-based model for joint prediction of cited spans and citation function BIBREF31 . We also experimented with CNNs but found the attention BiLSTM model to work significantly better. 
BIBREF7 expanded all pre-existing feature-based efforts on citation intent classification by proposing a comprehensive set of engineered features, including boostrapped patterns, topic modeling, dependency-based, and metadata features for the task. We argue that we can capture necessary information from the citation context using a data driven method, without the need for hand-engineered domain-dependent features or external resources. We propose a novel scaffold neural model for citation intent classification to incorporate structural information of scientific discourse into citations, borrowing the “scaffold” terminology from BIBREF32 who use auxiliary syntactic tasks for semantic problems. Conclusions and future work In this work, we show that structural properties related to scientific discourse can be effectively used to inform citation intent classification. We propose a multitask learning framework with two auxiliary tasks (predicting section titles and citation worthiness) as two scaffolds related to the main task of citation intent prediction. Our model achieves state-of-the-art result (F1 score of 67.9%) on the ACL-ARC dataset with 13.3 absolute increase over the best previous results. We additionally introduce SciCite, a new large dataset of citation intents and also show the effectiveness of our model on this dataset. Our dataset, unlike existing datasets that are designed based on a specific domain, is more general and fits in scientific discourse from multiple scientific domains. We demonstrate that carefully chosen auxiliary tasks that are inherently relevant to a main task can be leveraged to improve the performance on the main task. An interesting line of future work is to explore the design of such tasks or explore the properties or similarities between the auxiliary and the main tasks. Another relevant line of work is adapting our model to other domains containing documents with similar linked structured such as Wikipedia articles. Future work may benefit from replacing ELMo with other types of contextualized representations such as BERT in our scaffold model. For example, at the time of finalizing the camera ready version of this paper, BIBREF33 showed that a BERT contextualized representation model BIBREF34 trained on scientific text can achieve promising results on the SciCite dataset. Acknowledgments We thank Kyle Lo, Dan Weld, and Iz Beltagy for helpful discussions, Oren Etzioni for feedback on the paper, David Jurgens for helping us with their ACL-ARC dataset and reproducing their results, and the three anonymous reviewers for their comments and suggestions. Computations on beaker.org were supported in part by credits from Google Cloud.
Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset.
be7f52c4f2bad20e728785a357c383853d885d94
be7f52c4f2bad20e728785a357c383853d885d94_0
Q: What is the size of ACL-ARC datasets? Text: Introduction Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work BIBREF0 , BIBREF1 . They are also typically used as the main measure for assessing impact of scientific publications, venues, and researchers BIBREF2 . The nature of citations can be different. Some citations indicate direct use of a method while some others merely serve as acknowledging a prior work. Therefore, identifying the intent of citations (Figure 1 ) is critical in improving automated analysis of academic literature and scientific impact measurement BIBREF1 , BIBREF3 . Other applications of citation intent classification are enhanced research experience BIBREF4 , information retrieval BIBREF5 , summarization BIBREF6 , and studying evolution of scientific fields BIBREF7 . In this work, we approach the problem of citation intent classification by modeling the language expressed in the citation context. A citation context includes text spans in a citing paper describing a referenced work and has been shown to be the primary signal in intent classification BIBREF8 , BIBREF9 , BIBREF7 . Existing models for this problem are feature-based, modeling the citation context with respect to a set of predefined hand-engineered features (such as linguistic patterns or cue phrases) and ignoring other signals that could improve prediction. In this paper we argue that better representations can be obtained directly from data, sidestepping problems associated with external features. To this end, we propose a neural multitask learning framework to incorporate knowledge into citations from the structure of scientific papers. In particular, we propose two auxiliary tasks as Istructural scaffolds to improve citation intent prediction: (1) predicting the section title in which the citation occurs and (2) predicting whether a sentence needs a citation. Unlike the primary task of citation intent prediction, it is easy to collect large amounts of training data for scaffold tasks since the labels naturally occur in the process of writing a paper and thus, there is no need for manual annotation. On two datasets, we show that the proposed neural scaffold model outperforms existing methods by large margins. Our contributions are: (i) we propose a neural scaffold framework for citation intent classification to incorporate into citations knowledge from structure of scientific papers; (ii) we achieve a new state-of-the-art of 67.9% F1 on the ACL-ARC citations benchmark, an absolute 13.3% increase over the previous state-of-the-art BIBREF7 ; and (iii) we introduce SciCite, a new dataset of citation intents which is at least five times as large as existing datasets and covers a variety of scientific domains. Model We propose a neural multitask learning framework for classification of citation intents. In particular, we introduce and use two structural scaffolds, auxiliary tasks related to the structure of scientific papers. The auxiliary tasks may not be of interest by themselves but are used to inform the main task. Our model uses a large auxiliary dataset to incorporate this structural information available in scientific documents into the citation intents. The overview of our model is illustrated in Figure 2 . Let $C$ denote the citation and $x̭$ denote the citation context relevant to $C$ . 
We encode the tokens in the citation context of size $n$ as $x̭=\lbrace x̭_1, ..., x̭_n\rbrace $ , where $x̭_i\in \mathcal {R}^{d_1}$ is a word vector of size $d_1$ which concatenates non-contextualized word representations BIBREF10 and contextualized embeddings BIBREF11 , i.e.: $x̭_i = \big [x̭_i^{\text{GloVe}};x̭_i^{\text{ELMo}}\big ]$ We then use a bidirectional long short-term memory BIBREF12 (BiLSTM) network with hidden size of $d_2$ to obtain a contextual representation of each token vector with respect to the entire sequence: $ h̭_i = \big [\overrightarrow{\mathrm {LSTM}}(x̭, i);\overleftarrow{\mathrm {LSTM}}(x̭, i)\big ],$ where $ h̭ \in \mathcal {R}^{(n, 2d_2)} $ and $\overrightarrow{\mathrm {LSTM}}(x̭,i)$ processes $x̭$ from left to write and returns the LSTM hidden state at position $i$ (and vice versa for the backward direction $\overleftarrow{\mathrm {LSTM}}$ ). We then use an attention mechanism to get a single vector representing the whole input sequence: $ z̭ = \sum _{i=1}^n\alpha _i h̭_i, \quad \alpha _i = \operatorname{softmax}(w̭^\top h̭_i),$ where $w̭$ is a parameter served as the query vector for dot-product attention. So far we have obtained the citation representation as a vector $z̭$ . Next, we describe our two proposed structural scaffolds for citation intent prediction. Structural scaffolds In scientific writing there is a connection between the structure of scientific papers and the intent of citations. To leverage this connection for more effective classification of citation intents, we propose a multitask framework with two structural scaffolds (auxiliary tasks) related to the structure of scientific documents. A key point for our proposed scaffolds is that they do not need any additional manual annotation as labels for these tasks occur naturally in scientific writing. The structural scaffolds in our model are the following: The first scaffold task that we consider is “citation worthiness” of a sentence, indicating whether a sentence needs a citation. The language expressed in citation sentences is likely distinctive from regular sentences in scientific writing, and such information could also be useful for better language modeling of the citation contexts. To this end, using citation markers such as “[12]” or “Lee et al (2010)”, we identify sentences in a paper that include citations and the negative samples are sentences without citation markers. The goal of the model for this task is to predict whether a particular sentence needs a citation. The second scaffold task relates to predicting the section title in which a citation appears. Scientific documents follow a standard structure where the authors typically first introduce the problem, describe methodology, share results, discuss findings and conclude the paper. The intent of a citation could be relevant to the section of the paper in which the citation appears. For example, method-related citations are more likely to appear in the methods section. Therefore, we use the section title prediction as a scaffold for predicting citation intents. Note that this scaffold task is different than simply adding section title as an additional feature in the input. We are using the section titles from a larger set of data than training data for the main task as a proxy to learn linguistic patterns that are helpful for citation intents. 
In particular, we leverage a large number of scientific papers for which the section information is known for each citation to automatically generate large amounts of training data for this scaffold task. Multitask learning as defined by BIBREF13 is an approach to inductive transfer learning that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It requires the model to have at least some sharable parameters between the tasks. In a general setting in our model, we have a main task $Task^{(1)}$ and $n-1$ auxiliary tasks $Task^{(i)}$ . As shown in Figure 2 , each scaffold task will have its task-specific parameters for effective classification and the parameters for the lower layers of the network are shared across tasks. We use a Multi Layer Perceptron (MLP) for each task and then a softmax layer to obtain prediction probabilites. In particular, given the vector $z̭$ we pass it to $n$ MLPs and obtain $n$ output vectors $y̭^{(i)}$ : $ y̭^{(i)} = \operatorname{softmax}(\mathrm {MLP}^{(i)}(z̭)) $ We are only interested in the output $y̭^{(1)}$ and the rest of outputs $(y̭^{(2)}, ..., y̭^{(n)})$ are regarding the scaffold tasks and only used in training to inform the model of knowledge in the structure of the scientific documents. For each task, we output the class with the highest probability in $y̭$ . An alternative inference method is to sample from the output distribution. 0.5pt 1.0pt Training Let $\mathcal {D}_1$ be the labeled dataset for the main task $Task^{(1)}$ , and $\mathcal {D}_i$ denote the labeled datasets corresponding to the scaffold task $Task^{(i)}$ where $i\in \lbrace 2,...,n\rbrace $ . Similarly, let $\mathcal {L}_1$ and $\mathcal {L}_i$ be the main loss and the loss of the auxiliary task $i$ , respectively. The final loss of the model is: $$\small \mathcal {L}=\sum _{(x̭,y̭)\in \mathcal {D}_1} \mathcal {L}_1(x̭,y̭) + \sum _{i=2}^n \lambda _i \sum _{(x̭,y̭)\in \mathcal {D}_i} \mathcal {L}_i(x̭,y̭),$$ (Eq. 15) where $\lambda _i$ is a hyper-parameter specifying the sensitivity of the parameters of the model to each specific task. Here we have two scaffold tasks and hence $n{=}3$ . $\lambda _i$ could be tuned based on performance on validation set (see § "Experiments" for details). We train this model jointly across tasks and in an end-to-end fashion. In each training epoch, we construct mini-batches with the same number of instances from each of the $n$ tasks. We compute the total loss for each mini-batch as described in Equation 15 , where $\mathcal {L}_i{=}0$ for all instances of other tasks $j{\ne }i$ . We compute the gradient of the loss for each mini-batch and tune model parameters using the AdaDelta optimizer BIBREF14 with gradient clipping threshold of 5.0. We stop training the model when the development macro F1 score does not improve for five consecutive epochs. Data We compare our results on two datasets from different scientific domains. While there has been a long history of studying citation intents, there are only a few existing publicly available datasets on the task of citation intent classification. We use the most recent and comprehensive (ACL-ARC citations dataset) by BIBREF7 as a benchmark dataset to compare the performance of our model to previous work. In addition, to address the limited scope and size of this dataset, we introduce SciCite, a new dataset of citation intents that addresses multiple scientific domains and is more than five times larger than ACL-ARC. 
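The multitask objective described above (one small MLP head per task on top of the shared representation z, with scaffold losses scaled by lambda as in Eq. 15) can be sketched as follows. This is a schematic rendering that assumes cross-entropy losses and a single shared encoder; it is not the authors' training code.

import torch
import torch.nn as nn

class TaskHeads(nn.Module):
    # One MLP + softmax head per task; task 0 is citation intent, the others are scaffolds.
    def __init__(self, z_dim, classes_per_task, hidden=20, dropout=0.2):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Dropout(dropout), nn.Linear(z_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, c))
            for c in classes_per_task
        ])

    def forward(self, z, task_id):
        return self.heads[task_id](z)   # unnormalized logits; softmax is applied inside the loss

def joint_loss(encoder, heads, task_batches, lambdas):
    # task_batches: {task_id: (x, y)} mini-batches, one per task, as in Eq. 15.
    # The main-task weight is fixed to 1.0; scaffold weights come from lambdas.
    ce = nn.CrossEntropyLoss()
    total = 0.0
    for task_id, (x, y) in task_batches.items():
        weight = 1.0 if task_id == 0 else lambdas[task_id]
        total = total + weight * ce(heads(encoder(x), task_id), y)
    return total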
Below is a description of both datasets. ACL-ARC citations dataset ACL-ARC is a dataset of citation intents released by BIBREF7 . The dataset is based on a sample of papers from the ACL Anthology Reference Corpus BIBREF15 and includes 1,941 citation instances from 186 papers and is annotated by domain experts in the NLP field. The data was split into three standard stratified sets of train, validation, and test with 85% of data used for training and remaining 15% divided equally for validation and test. Each citation unit includes information about the immediate citation context, surrounding context, as well as information about the citing and cited paper. The data includes six intent categories outlined in Table 2 . SciCite dataset Most existing datasets contain citation categories that are too fine-grained. Some of these intent categories are very rare or not useful in meta analysis of scientific publications. Since some of these fine-grained categories only cover a minimal percentage of all citations, it is difficult to use them to gain insights or draw conclusions on impacts of papers. Furthermore, these datasets are usually domain-specific and are relatively small (less than 2,000 annotated citations). To address these limitations, we introduce SciCite, a new dataset of citation intents that is significantly larger, more coarse-grained and general-domain compared with existing datasets. Through examination of citation intents, we found out many of the categories defined in previous work such as motivation, extension or future work, can be considered as background information providing more context for the current research topic. More interesting intent categories are a direct use of a method or comparison of results. Therefore, our dataset provides a concise annotation scheme that is useful for navigating research topics and machine reading of scientific papers. We consider three intent categories outlined in Table 1 : Background, Method and ResultComparison. Below we describe data collection and annotation details. Citation intent of sentence extractions was labeled through the crowdsourcing platform Figure Eight. We selected a sample of papers from the Semantic Scholar corpus, consisting of papers in general computer science and medicine domains. Citation contexts were extracted using science-parse. The annotators were asked to identify the intent of a citation, and were directed to select among three citation intent options: Method, ResultComparison and Background. The annotation interface also included a dummy option Other which helps improve the quality of annotations of other categories. We later removed instances annotated with the Other option from our dataset (less than 1% of the annotated data), many of which were due to citation contexts which are incomplete or too short for the annotator to infer the citation intent. We used 50 test questions annotated by a domain expert to ensure crowdsource workers were following directions and disqualify annotators with accuracy less than 75%. Furthermore, crowdsource workers were required to remain on the annotation page (five annotations) for at least ten seconds before proceeding to the next page. Annotations were dynamically collected. The annotations were aggregated along with a confidence score describing the level of agreement between multiple crowdsource workers. The confidence score is the agreement on a single instance weighted by a trust score (accuracy of the annotator on the initial 50 test questions). 
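The aggregation step described above (an agreement score weighted by each worker's trust, i.e. their accuracy on the 50 expert-labeled test questions) can be illustrated as below. The exact Figure Eight formula is not spelled out in the text, so this trust-weighted majority vote is an assumed approximation.

from collections import defaultdict

def aggregate_with_confidence(judgments):
    # judgments: list of (label, trust) pairs for one instance, where trust is the
    # annotator's accuracy on the 50 expert-labeled test questions.
    # Assumed reading of "agreement weighted by a trust score".
    weight = defaultdict(float)
    for label, trust in judgments:
        weight[label] += trust
    best = max(weight, key=weight.get)
    confidence = weight[best] / sum(weight.values())
    return best, confidence    # instances are kept only if confidence exceeds 0.7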
To only collect high quality annotations, instances with confidence score of $\le $ 0.7 were discarded. In addition, a subset of the dataset with 100 samples was re-annotated by a trained, expert annotator to check for quality, and the agreement rate with crowdsource workers was 86%. Citation contexts were annotated by 850 crowdsource workers who made a total of 29,926 annotations and individually made between 4 and 240 annotations. Each sentence was annotated, on average, 3.74 times. This resulted in a total 9,159 crowdsourced instances which were divided to training and validation sets with 90% of the data used for the training set. In addition to the crowdsourced data, a separate test set of size 1,861 was annotated by a trained, expert annotator to ensure high quality of the dataset. Data for scaffold tasks For the first scaffold (citation worthiness), we sample sentences from papers and consider the sentences with citations as positive labels. We also remove the citation markers from those sentences such as numbered citations (e.g., [1]) or name-year combinations (e.g, Lee et al (2012)) to not make the second task artificially easy by only detecting citation markers. For the second scaffold (citation section title), respective to each test dataset, we sample citations from the ACL-ARC corpus and Semantic Scholar corpus and extract the citation context as well as their corresponding sections. We manually define regular expression patterns mappings to normalized section titles: “introduction”, “related work”, “method”, “experiments”, “conclusion”. Section titles which did not map to any of the aforementioned titles were excluded from the dataset. Overall, the size of the data for scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (citation worthiness) while on SciCite is about 91K and 73K for section title and citation worthiness scaffolds, respectively. Implementation We implement our proposed scaffold framework using the AllenNLP library BIBREF16 . For word representations, we use 100-dimensional GloVe vectors BIBREF17 trained on a corpus of 6B tokens from Wikipedia and Gigaword. For contextual representations, we use ELMo vectors released by BIBREF18 with output dimension size of 1,024 which have been trained on a dataset of 5.5B tokens. We use a single-layer BiLSTM with a hidden dimension size of 50 for each direction. For each of scaffold tasks, we use a single-layer MLP with 20 hidden nodes , ReLU BIBREF19 activation and a Dropout rate BIBREF20 of 0.2 between the hidden and input layers. The hyperparameters $\lambda _i$ are tuned for best performance on the validation set of the respective datasets using a 0.0 to 0.3 grid search. For example, the following hyperparameters are used for the ACL-ARC. Citation worthiness saffold: $\lambda _2{=}0.08$ , $\lambda _3{=}0$ , section title scaffold: $\lambda _3{=}0.09$ , $\lambda _2{=}0$ ; both scaffolds: $\lambda _2{=}0.1$ , $\lambda _3{=}0.05$ . Batch size is 8 for ACL-ARC dataset and 32 for SciCite dataset (recall that SciCite is larger than ACL-ARC). We use Beaker for running the experiments. On the smaller dataset, our best model takes approximately 30 minutes per epoch to train (training time without ELMo is significantly faster). It is known that multiple runs of probabilistic deep learning models can have variance in overall scores BIBREF21 . We control this by setting random-number generator seeds; the reported overall results are average of multiple runs with different random seeds. 
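A sketch of the scaffold-data preparation described above: stripping citation markers for the citation-worthiness task and normalizing section titles for the section-title task. The regular expressions and the keyword-to-title mapping are illustrative assumptions; the paper does not list its exact patterns.

import re

# Hypothetical marker patterns; the paper does not publish its exact regular expressions.
NUMERIC_CITATION = re.compile(r"\[\d+(?:\s*,\s*\d+)*\]")                    # e.g. [1] or [3, 7]
NAME_YEAR_CITATION = re.compile(r"\b[A-Z][a-z]+(?: et al\.?)? \(\d{4}\)")   # e.g. Lee et al (2012)

def strip_citation_markers(sentence):
    # Remove markers so the citation-worthiness scaffold cannot cheat by spotting them.
    sentence = NUMERIC_CITATION.sub("", sentence)
    sentence = NAME_YEAR_CITATION.sub("", sentence)
    return re.sub(r"\s{2,}", " ", sentence).strip()

# Illustrative keyword mapping to the five normalized section titles used in the paper.
SECTION_MAP = [
    (re.compile(r"intro", re.I), "introduction"),
    (re.compile(r"related|previous work|background", re.I), "related work"),
    (re.compile(r"method|model|approach", re.I), "method"),
    (re.compile(r"experiment|result|evaluation", re.I), "experiments"),
    (re.compile(r"conclu|discussion", re.I), "conclusion"),
]

def normalize_section_title(title):
    for pattern, normalized in SECTION_MAP:
        if pattern.search(title):
            return normalized
    return None   # unmapped titles are excluded from the scaffold dataset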
To facilitate reproducibility, we release our code, data, and trained models. Baselines We compare our results to several baselines including the model with state-of-the-art performance on the ACL-ARC dataset. [leftmargin=6pt] BiLSTM Attention (with and without ELMo). This baseline uses a similar architecture to our proposed neural multitask learning framework, except that it only optimizes the network for the main loss regarding the citation intent classification ( $\mathcal {L}_1$ ) and does not include the structural scaffolds. We experiment with two variants of this model: with and without using the contextualized word vector representations (ELMo) of BIBREF18 . This baseline is useful for evaluating the effect of adding scaffolds in controlled experiments. BIBREF7 . To make sure our results are competitive with state-of-the-art results on this task, we also compare our model to BIBREF7 which has the best reported results on the ACL-ARC dataset. BIBREF7 incorporate a variety of features, ranging from pattern-based features to topic-modeling features, to citation graph features. They also incorporate section titles and relative section position in the paper as features. Our implementation of this model achieves a macro-averaged F1 score of 0.526 using 10-fold cross-validation, which is in line with the highest reported results in BIBREF7 : 0.53 using leave-one-out cross validation. We were not able to use leave-one-out cross validation in our experiments since it is impractical to re-train each variant of our deep learning models thousands of times. Therefore, we opted for a standard setup of stratified train/validation/test data splits with 85% data used for training and the rest equally split between validation and test. Results Our main results for the ACL-ARC dataset BIBREF7 is shown in Table 3 . We observe that our scaffold-enhanced models achieve clear improvements over the state-of-the-art approach on this task. Starting with the `BiLSTM-Attn' baseline with a macro F1 score of 51.8, adding the first scaffold task in `BiLSTM-Attn + section title scaffold' improves the F1 score to 56.9 ( $\Delta {=}5.1$ ). Adding the second scaffold in `BiLSTM-Attn + citation worthiness scaffold' also results in similar improvements: 56.3 ( $\Delta {=}4.5$ ). When both scaffolds are used simultaneously in `BiLSTM-Attn + both scaffolds', the F1 score further improves to 63.1 ( $\Delta {=}11.3$ ), suggesting that the two tasks provide complementary signal that is useful for citation intent prediction. The best result is achieved when we also add ELMo vectors BIBREF18 to the input representations in `BiLSTM-Attn w/ ELMo + both scaffolds', achieving an F1 of 67.9, a major improvement from the previous state-of-the-art results of BIBREF7 54.6 ( $\Delta {=}13.3$ ). We note that the scaffold tasks provide major contributions on top of the ELMo-enabled baseline ( $\Delta {=}$ 13.6), demonstrating the efficacy of using structural scaffolds for citation intent prediction. We note that these results were obtained without using hand-curated features or additional linguistic resources as used in BIBREF7 . We also experimented with adding features used in BIBREF7 to our best model and not only we did not see any improvements, but we observed at least 1.7% decline in performance. This suggests that these additional manual features do not provide the model with any additional useful signals beyond what the model already learns from the data. 
0.5pt 1.0pt Table 4 shows the main results on SciCite dataset, where we see similar patterns. Each scaffold task improves model performance. Adding both scaffolds results in further improvements. And the best results are obtained by using ELMo representation in addition to both scaffolds. Note that this dataset is more than five times larger in size than the ACL-ARC, therefore the performance numbers are generally higher and the F1 gains are generally smaller since it is easier for the models to learn optimal parameters utilizing the larger annotated data. On this dataset, the best baseline is the neural baseline with addition of ELMo contextual vectors achieving an F1 score of 82.6 followed by BIBREF7 , which is expected because neural models generally achieve higher gains when more training data is available and because BIBREF7 was not designed with the SciCite dataset in mind. The breakdown of results by intent on ACL-ARC and SciCite datasets is respectively shown in Tables 5 and 6 . Generally we observe that results on categories with more number of instances are higher. For example on ACL-ARC, the results on the Background category are the highest as this category is the most common. Conversely, the results on the FutureWork category are the lowest. This category has the fewest data points (see distribution of the categories in Table 2 ) and thus it is harder for the model to learn the optimal parameters for correct classification in this category. Analysis To gain more insight into why the scaffolds are helping the model in improved citation intent classification, we examine the attention weights assigned to inputs for our best proposed model (`BiLSTM-Attn w/ ELMo + both scaffolds') compared with the best neural baseline (`BiLSTM-Attn w/ ELMO'). We conduct this analysis for examples from both datasets. Figure 3 shows an example input citation along with the horizontal line and the heatmap of attention weights for this input resulting from our model versus the baseline. For first example ( 3 ) the true label is FutureWork. We observe that our model puts more weight on words surrounding the word “future” which is plausible given the true label. On the other hand, the baseline model attends most to the words “compare” and consequently incorrectly predicts a Compare label. In second example ( 3 ) the true label is ResultComparison. The baseline incorrectly classifies it as a Background, likely due to attending to another part of the sentence (“analyzed seprately”). Our model correctly classifies this instance by putting more attention weights on words that relate to comparison of the results. This suggests that the our model is more successful in learning optimal parameters for representing the citation text and classifying its respective intent compared with the baseline. Note that the only difference between our model and the neural baseline is inclusion of the structural scaffolds. Therefore, suggesting the effectiveness the scaffolds in informing the main task of relevant signals for citation intent classification. 0.5pt 1.0pt We next investigate errors made by our best model (Figure 4 plots classification errors). One general error pattern is that the model has more tendency to make false positive errors in the Background category likely due to this category dominating both datasets. It's interesting that for the ACL-ARC dataset some prediction errors are due to the model failing to properly differentiate the Use category with Background. 
We found out that some of these errors would have been possibly prevented by using additional context. Table 7 shows a sample of such classification errors. For the citation in the first row of the table, the model is likely distracted by “model in (citation)” and “ILP formulation from (citation)” deeming the sentence is referring to the use of another method from a cited paper and it misses the first part of the sentence describing the motivation. This is likely due to the small number of training instances in the Motivation category, preventing the model to learn such nuances. For the examples in the second and third row, it is not clear if it is possible to make the correct prediction without additional context. And similarly in the last row the instance seems ambiguous without accessing to additional context. Similarly as shown in Figure 4 two of FutureWork labels are wrongly classified. One of them is illustrated in the forth row of Table 7 where perhaps additional context could have helped the model in identifying the correct label. One possible way to prevent this type of errors, is to provide the model with an additional input, modeling the extended surrounding context. We experimented with encoding the extended surrounding context using a BiLSTM and concatenating it with the main citation context vector (z), but it resulted in a large decline in overall performance likely due to the overall noise introduced by the additional context. A possible future work is to investigate alternative effective approaches for incorporating the surrounding extended context. Related Work There is a large body of work studying the intent of citations and devising categorization systems BIBREF22 , BIBREF4 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF8 , BIBREF26 , BIBREF27 . Most of these efforts provide citation categories that are too fine-grained, some of which rarely occur in papers. Therefore, they are hardly useful for automated analysis of scientific publications. To address these problems and to unify previous efforts, in a recent work, BIBREF7 proposed a six category system for citation intents. In this work, we focus on two schemes: (1) the scheme proposed by BIBREF7 and (2) an additional, more coarse-grained general-purpose category system that we propose (details in § "Data" ). Unlike other schemes that are domain-specific, our scheme is general and naturally fits in scientific discourse in multiple domains. Early works in automated citation intent classification were based on rule-based systems (e.g., BIBREF23 , BIBREF28 ). Later, machine learning methods based on linguistic patterns and other hand-engineered features from citation context were found to be effective. For example, BIBREF8 proposed use of “cue phrases”, a set of expressions that talk about the act of presenting research in a paper. BIBREF9 relied on lexical, structural, and syntactic features and a linear SVM for classification. Researchers have also investigated methods of finding cited spans in the cited papers. Examples include feature-based methods BIBREF29 , domain-specific knowledge BIBREF30 , and a recent CNN-based model for joint prediction of cited spans and citation function BIBREF31 . We also experimented with CNNs but found the attention BiLSTM model to work significantly better. 
BIBREF7 expanded all pre-existing feature-based efforts on citation intent classification by proposing a comprehensive set of engineered features, including boostrapped patterns, topic modeling, dependency-based, and metadata features for the task. We argue that we can capture necessary information from the citation context using a data driven method, without the need for hand-engineered domain-dependent features or external resources. We propose a novel scaffold neural model for citation intent classification to incorporate structural information of scientific discourse into citations, borrowing the “scaffold” terminology from BIBREF32 who use auxiliary syntactic tasks for semantic problems. Conclusions and future work In this work, we show that structural properties related to scientific discourse can be effectively used to inform citation intent classification. We propose a multitask learning framework with two auxiliary tasks (predicting section titles and citation worthiness) as two scaffolds related to the main task of citation intent prediction. Our model achieves state-of-the-art result (F1 score of 67.9%) on the ACL-ARC dataset with 13.3 absolute increase over the best previous results. We additionally introduce SciCite, a new large dataset of citation intents and also show the effectiveness of our model on this dataset. Our dataset, unlike existing datasets that are designed based on a specific domain, is more general and fits in scientific discourse from multiple scientific domains. We demonstrate that carefully chosen auxiliary tasks that are inherently relevant to a main task can be leveraged to improve the performance on the main task. An interesting line of future work is to explore the design of such tasks or explore the properties or similarities between the auxiliary and the main tasks. Another relevant line of work is adapting our model to other domains containing documents with similar linked structured such as Wikipedia articles. Future work may benefit from replacing ELMo with other types of contextualized representations such as BERT in our scaffold model. For example, at the time of finalizing the camera ready version of this paper, BIBREF33 showed that a BERT contextualized representation model BIBREF34 trained on scientific text can achieve promising results on the SciCite dataset. Acknowledgments We thank Kyle Lo, Dan Weld, and Iz Beltagy for helpful discussions, Oren Etzioni for feedback on the paper, David Jurgens for helping us with their ACL-ARC dataset and reproducing their results, and the three anonymous reviewers for their comments and suggestions. Computations on beaker.org were supported in part by credits from Google Cloud.
includes 1,941 citation instances from 186 papers
536e4a39b654b78228bf55fd09d1b433e0dae447
536e4a39b654b78228bf55fd09d1b433e0dae447_0
Q: Is the affect of a word affected by context? Text: Introduction This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ Affect refers to the experience of a feeling or emotion BIBREF0 , BIBREF1 . This definition includes emotions, sentiments, personality, and moods. The importance of affect analysis in human communication and interactions has been discussed by Picard Picard1997. Historically, affective computing has focused on studying human communication and reactions through multi-modal data gathered through various sensors. The study of human affect from text and other published content is an important topic in language understanding. Word correlation with social and psychological processes is discussed by Pennebaker Pennebaker2011. Preotiuc-Pietro et al. perspara17nlpcss studied personality and psycho-demographic preferences through Facebook and Twitter content. Sentiment analysis in Twitter with a detailed discussion on human affect BIBREF2 and affect analysis in poetry BIBREF3 have also been explored. Human communication not only contains semantic and syntactic information but also reflects the psychological and emotional states. Examples include the use of opinion and emotion words BIBREF4 . The analysis of affect in interpersonal communication such as emails, chats, and longer written articles is necessary for various applications including the study of consumer behavior and psychology, understanding audiences and opinions in computational social science, and more recently for dialogue systems and conversational agents. This is a open research space today. Traditional natural language understanding systems rely on statistical language modeling and semantic word distributions such as WORDNET BIBREF5 to understand relationships across different words. There has been a resurgence of research efforts towards creating word distributions that capture multi-dimensional word semantics BIBREF6 , BIBREF7 . Sedoc et al. affnorms17eacl introduce the notion of affect features in word distributions but their approach is limited to creating enriched representations, and no comments on the utility of the new word distribution is presented. Beyond word-semantics, deep learning research in natural language understanding, is focused towards sentence representations using encoder-decoder models BIBREF8 , integrating symbolic knowledge to language models BIBREF9 , and some recent works in augmenting neural language modeling with affective information to emotive text generation BIBREF4 . These works however do not introduce distributional affective word representations that not only reflect affective content but are also superior for related downstream natural language tasks such as sentiment analysis and personality detection. We introduce Aff2Vec, affect-enriched word distributions trained on lexical resources coupled with semantic word distributions. Aff2Vec captures opinions and affect information in the representation using post-processing approaches. Figure 1 illustrates how Aff2Vec captures affective relationships using a t-SNE visualization of the word space. Aff2Vec can be trained using any affect space, we focus on the Valence–Arousal–Dominance dimensions but the approach is generalizable to other space. Our experiments show that Aff2Vec out performs vanilla embedding spaces for both intrinsic word–similarity tasks as well as extrinsic natural language applications. 
Main contributions of this paper include: Aff2Vec: Affect-enriched word representations using post-processing techniques. We show that Aff2Vec outperforms the state-of-the-art in both intrinsic word similarity metrics as well as downstream natural language tasks including Sentiment analysis, Personality detection, and Frustration detection in interpersonal communication. ENRON-FFP Dataset: We introduce the ENRON-FFP Email dataset with Frustration, Formality, and Politeness tags gathered using a crowd-sourced human perception study. The remainder of the paper is organized as follows. The prior art for enriched word distributions is discussed in Section "Related Work" . Aff2Vec is introduced in section "Aff2Vec: Affect–enriched Word Distributions" . We present the crowd-sourcing study for the ENRON-FFP Dataset in section "Dataset: ENRON-FFP" and section "Experiments" discusses the experimental setup. Section "Results" presents the evaluation of Aff2Vec for various intrinsic and extrinsic tasks. A discussion on the distributional word representations is presented in section "Discussion" before concluding in section "Conclusion" . Related Work The use of lexical semantic information (lexical resources) to improve distributional representations is recent. Methods like BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 achieve this by using word similarity and relational knowledge to modify the prior or add a regularization term. We call such methods `pre-training methods', as they alter the training procedure for word representations. Such methods require a change in the loss function while training the embeddings, hence are computationally expensive. The other set of word distribution enhancements are done post-training. These methods aim to include external information using normalizations and modifications to the vanilla word distributions. Methods such as Retrofitting BIBREF14 which tries to drag similar words closer together, where notion of similarity is taken from word relation knowledge found in semantic lexica (e.g. WordNet) fall in this category. Counterfitting BIBREF15 on the other hand initiates from SimLex-999 tuned embeddings, injects antonyms and synonym constraints to improve word representations. This paper introduces post-training techniques on vanilla, retrofitted, and counterfitted embeddings to include affective information in the distributions. Our work falls in the post-training category, hence no direct comparison with the pre-trained approaches is presented in this paper. Recent work has explored approaches to adapt general-purpose lexica for specific contexts and affects. Studies have recognized the limited applicability of general purpose lexica such as ANEW BIBREF16 to identify affect in verbs and adverbs, as they focus heavily on adjectives. Recognizing that general-purpose lexica often detect sentiment which is incongruous with context, Ribeiro et al. ribeiro2016sentibench proposed a sentiment damping method which utilizes the average sentiment strength over a document to damp any abnormality in the derived sentiment strength. Similarly, Blitzer et al. blitzer2007biographies argued that words like `predictable' induced a negative connotation in book reviews, while `must-read' implied a highly positive sentiment. This paper doesn't focus on building yet another affect lexicon but studies the consequences of including affect information in distributional word representations that aim at defining relational relationships across all words in large contexts and vocabularies. 
Automatic expansion of affect ratings has been approached with the intuition that words closer in the distributional space would have similar ratings BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . Recent work by Sedoc et al.affnorms17eacl uses Signed Spectral Clustering to differentiate between words which are contextually similar but display opposite affect. Whereas BIBREF21 uses a graph–based method inspired by label propagation. While our approach follows the nature of the task defined in Sedoc et al., we propose a generalized method to enrich content with affective information. They focus on distinguishing the polarities. Our method incorporates both semantic and affect information hence creating embeddings that can also be used for semantic similarity tasks. Note that Sedoc et al. do not include any semantic information in their modeling. Aff2Vec: Affect–enriched Word Distributions Aff2Vec aims at incorporating affective information in word representations. We leverage the Warriner's lexicon BIBREF22 in the Valence–Arousal–Dominance space for this work. The proposed work is generalizable to other affect spaces. This section presents two approaches for affect–enrichment of word distributions. Warriner's lexicon: This is a affect lexicon with 13915 english words. It contains real-valued scored for Valence, Arousal, and Dominance (VAD) on a scale of $1-9$ each. 1, 5, 9 correspond to the low, moderate (i.e. neutral), and high values for each dimension respectively. The lexicon does not contain common English words such as stop words and proper nouns. For such out–of–dictionary words we assume a neutral affect vector $\vec{a}=[5,5,5]$ . Affect-APPEND Consider word embeddings $W$ , the aim is to introduce affective information to this space using the affect embedding space, $A$ . The word vectors $W$ , with dimension $D$ are concatenated with affect vectors $A$ with dimension $F$ , thus resulting in a $D+F$ dimensional enriched representation. The process for this concatenation is described here: 1. Normalize word vector $W$ and affect vector $A$ using their L2-Norms (Equation 7 , ). This reduces the individual vectors to unit-length. $$x_i = \dfrac{x_i}{\sqrt{\sum _{k = 1}^{D} x_{ik}^2}} ~~\forall x_i \in W, ~~~~a_i = \dfrac{a_i}{\sqrt{\sum _{k = 1}^{F} a_{ik}^2}} ~~\forall a_i \in A$$ (Eq. 7) 2. Concatenate the regularized word vectors $x_i$ with regularized affect vectors $a_i$ . $$WA(w) = W(w) \oplus A(w)$$ (Eq. 8) 3. Standardize (1 variance, 0 mean) the $D+F$ dimensional embeddings to achieve uniform distribution. $$y_i = \dfrac{y_i - \mu }{\sigma } ~~~~ \forall y_i \in WA$$ (Eq. 9) where $\mu $ and $\sigma $ represent the mean and standard deviation respectively. 4. The enriched space $WA$ is then reduced to original $D$ dimensional vector. We use Principal Component Analysis for the dimensionality reduction. Affect-STRENGTH In this approach, the strength in the antonym/synonym relationships of the words is incorporated in the word distribution space. Hence, we leverage the Retrofitted Word Embeddings for this approach BIBREF14 . Retrofitting: Let $V = \lbrace w_1, w_2, w_3,..., w_n\rbrace $ be a vocabulary and $\Omega $ be an ontology which encodes semantic relations between words present in $V$ (e.g. WORDNET). This ontology $\Omega $ is represented as an undirected graph $(V,E)$ with words as vertices and $(w_i, w_j)$ as edges indicating the semantic relationship of interest. 
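Before the retrofitting-based Affect-STRENGTH variant is developed further below, here is a minimal NumPy sketch of the Affect-APPEND steps above (Eqs. 7-9): L2-normalize, concatenate, standardize, then project back to D dimensions with PCA. The neutral [5, 5, 5] default for out-of-dictionary words follows the text; the function name and input layout are assumptions.

import numpy as np
from sklearn.decomposition import PCA

def affect_append(W, A, neutral=(5.0, 5.0, 5.0)):
    # W: (V, D) word vectors; A: dict mapping word index -> VAD vector from the Warriner lexicon.
    V, D = W.shape
    affect = np.tile(np.asarray(neutral), (V, 1))             # neutral VAD for out-of-lexicon words
    for idx, vad in A.items():
        affect[idx] = vad
    W_norm = W / np.linalg.norm(W, axis=1, keepdims=True)      # step 1: L2-normalize word vectors
    A_norm = affect / np.linalg.norm(affect, axis=1, keepdims=True)   # ... and affect vectors
    WA = np.hstack([W_norm, A_norm])                           # step 2: concatenate (D + F dims)
    WA = (WA - WA.mean(axis=0)) / WA.std(axis=0)               # step 3: standardize
    return PCA(n_components=D).fit_transform(WA)               # step 4: reduce back to D dims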
Each word $w_i \in V$ is represented as a vector representation $\hat{q}_i \in R^d$ learnt using a data–driven approach (e.g. Word2Vec or GloVe) where $d$ is the length of the word vectors. Let $\hat{Q}$ be the matrix collection of these vector representations. The objective is to learn the matrix $ Q = (q_1,..., q_n) $ such that the word vectors ( $q_i$ ) are both close to their counterparts in $\hat{Q}$ and to adjacent vertices in $\Omega $ . The distance between a pair of vectors is defined to be Euclidean, hence the objective function for minimization is $$\Psi (Q) = \sum _{i=1}^{n} {\Bigg [ \alpha _i {\Vert q_i - \hat{q}_i\Vert }^2 + \sum _{(i,j) \in E} {\beta _{ij}{\Vert q_i - q_j\Vert }^2} \Bigg ] }$$ (Eq. 12) where $\alpha $ and $\beta $ are hyper parameters and control the relative strengths of the two associations. $\Psi $ is a convex function in $Q$ and its global optimal solution can be found by using an iterative update method. By setting $\frac{\partial \Psi (Q)}{\partial q_i} = 0$ , the online updates are as follows: $$q_i = \frac{\sum _{j:(i,j) \in E} {\beta _{ij}q_j + \alpha _i\hat{q}_i}}{\sum _{j:(i,j) \in E} {\beta _{ij} + \alpha _i}}$$ (Eq. 13) We propose two ways to modify $\beta _{ij}$ in equation 12 in order to incorporate affective strength in the edge weights connecting two retrofitted vectors to each other. Affect-cStrength: In this approach, the affective strength is considered as a function of all $F$ affect dimensions. $$S(w_i, w_j) = 1 - \dfrac{\Vert a_{i} - a_{j}\Vert }{\sqrt{\sum _{f=1}^{F}{max\_dist_f^{2}}}}$$ (Eq. 14) where $a_i$ and $a_j$ are $F$ dimensional vectors in $A$ and $max\_dist_f$ is defined as the maximum possible distance between two vectors in $f^{th}$ dimension ( $= 9.0 - 1.0 = 8.0$ for VAD dimensions). Affect-iStrength: Here, each dimension is treated individually. For every dimension $f$ in $A$ , we add an edge between neighbors in the Ontology $\Omega $ where the strength of that edge is given by $S_{f}(w_i, w_j)$ : $$S_{f}(w_i, w_j) = 1 - \dfrac{|a_{if} - a_{jf}|}{max\_dist_{f}}, ~~~~ S(w_i, w_j) = \sum _{f=1}^{F}{S_{f}(w_i, w_j)}$$ (Eq. 15) $\beta _{ij}$ from equation 13 is normalized with this strength function as $\beta _{ij} = \beta _{ij} * S(w_i, w_j)$ , where $S(w_i,w_j)$ is defined by either Affect-cStrength or Affect-iStrength. Dataset: ENRON-FFP We introduce an email dataset, a subset of the ENRON data BIBREF31 , with tags about interpersonal communication traits, namely, Formality, Politeness, and Frustration. The dataset provides the text, user information, as well as the network information for email exchanges between Enron employees. Human Perceptions and Definitions: Tone or affects such as frustration and politeness are highly subjective measures. In this work, we do not attempt to introduce or standardize an accurate definition for frustration (or formality and politeness). Instead, we assume that these are defined by human perception, and each individual may differ in their understanding of these metrics. This approach of using untrained human judgments has been used in prior studies of pragmatics in text data BIBREF32 , BIBREF33 and is a recommended way of gathering gold-standard annotations BIBREF34 . The tagged data is then used to predict the formality, frustration, and politeness tags using Aff2Vec embeddings. Dataset Annotation: We conducted a crowd sourced experiment using Amazon's Mechanical Turk. 
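Returning to the Affect-STRENGTH formulation (Eqs. 12-15), a rough sketch of the iterative retrofitting update with affect-weighted edge strengths is given below; the ENRON-FFP annotation details continue after it. The choice of alpha_i = 1 and beta_ij = 1/deg(i), scaled by the Affect-cStrength of Eq. 14, follows the common retrofitting convention and is an assumption rather than the authors' exact setting.

import numpy as np

def affect_cstrength(a_i, a_j, max_dist=8.0):
    # Eq. 14: joint strength over all F affect dimensions (VAD range 1-9, so max distance 8 per dim).
    F = len(a_i)
    return 1.0 - np.linalg.norm(np.asarray(a_i) - np.asarray(a_j)) / np.sqrt(F * max_dist ** 2)

def retrofit_with_affect(Q_hat, edges, affect, iters=10, alpha=1.0):
    # Q_hat: (V, D) original vectors; edges: dict i -> list of neighbor indices from the ontology;
    # affect: (V, F) VAD vectors. beta_ij = (1/deg(i)) * S(w_i, w_j) is an assumed parameterization.
    Q = Q_hat.copy()
    for _ in range(iters):
        for i, neighbors in edges.items():
            if not neighbors:
                continue
            betas = np.array([affect_cstrength(affect[i], affect[j]) / len(neighbors)
                              for j in neighbors])
            num = (betas[:, None] * Q[neighbors]).sum(axis=0) + alpha * Q_hat[i]   # Eq. 13 numerator
            Q[i] = num / (betas.sum() + alpha)                                     # Eq. 13 update
    return Q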
The analysis presented in this section is based on 1050 emails that were tagged across multiple experiments. Table 1 provides an overview of the data statistics of the annotated data. We follow the annotation protocol of the Likert Scale BIBREF35 for all three dimensions. Each email is considered as a single data point and only the text in the email body is provided for tagging. Frustration is tagged on a 3 point scale with neutral being equated to `not frustrated'; `frustrated' and `very frustrated' are marked with $-1$ and $-2$ respectively. Formality and politeness follow a 5 point scale from $-2$ to $+2$ where both extremes mark the higher degree of presence and absence of the respective dimension. Table 2 shows some example emails from the dataset. Inter-annotator Agreement: To measure whether the individual intuition of the affect dimensions is consistent with other annotators' judgment, we use interclass correlation to quantify the ordinal ratings. This measure accounts for the fact that we may have different group of annotators for each data point. Each data point has 10 distinct annotations. Agreements reported for 3 class and 5 class annotations $0.506 \pm 0.05$ , $0.73 \pm 0.02$ , and $0.64 \pm 0.03$ for frustration, formality, and politeness respectively. The agreement measures are similar to those reported for other such psycholinguistic tagging tasks. Experiments Two sets of experiments are presented to evaluate Aff2Vec embeddings - Intrinsic evaluation using word similarity tasks and extrinsic evaluation using multiple NLP applications. We focus on 3 vanilla word embeddings: GloVe BIBREF7 , Word2Vec-SkipGram BIBREF36 , and Paragram-SL999 BIBREF37 . The vocabulary and embeddings used in our experiments resonate with the experimental setup by Mrkšić et al.mrkvsic2016counter (76427 words). Intrinsic Evaluation Word similarity is a standard task used to evaluate embeddings BIBREF15 , BIBREF14 , BIBREF38 . In this paper, we evaluate the embeddings on benchmark datasets given in Table 1 . We report the Spearman's rank correlation coefficient between rankings produced by our model (based on cosine similarity of the pair of words) against the benchmark human rankings for each dataset. Extrinsic Evaluation Although intrinsic tasks are popular, performance of word embeddings on these benchmarks does not reflect directly into the downstream tasks BIBREF41 . BIBREF42 , BIBREF43 suggest that intrinsic tasks should not be considered as gold standards but as a tool to improve the model. We test the utility of the Aff2Vec on 4 distinct natural language understanding tasks: Affect Prediction (FFP-Prediction): The experiment is to predict the formality, politeness, and frustration in email. We introduce the ENRON-FFP dataset for this task in section "Dataset: ENRON-FFP" . A basic CNN model is used for the prediction. The purpose of this experiment is to evaluate the quality of the embeddings and not necessarily the model architecture. The CNN is hence not optimized for this task. Embeddings trained on the ENRON dataset (ENRON-Trainable) are used as a baseline. Personality Detection: This task is to predict human personality from text. The big five personality dimensions BIBREF44 are used for this experiment. The 5 personality dimensions include Extroversion (EXT), Neurotic-ism (NEU), Agreeableness (AGR), Conscientiousness (CON), and Openness (OPEN). Stream-of-consciousness essay dataset by Pennebaker et al. 
pennebaker1999linguistic contains 2468 anonymous essays tagged with personality traits of the author. We use this dataset. Majumder et al majumder2017deep propose a CNN model for this prediction. We use their best results as baseline and report the performance of Aff2Vec on their default implementation. Sentiment Analysis: The Stanford Sentiment Treebank (SST) BIBREF45 contains sentiment labels on sentences from movie reviews. This dataset in its binary form is split into training, validation, and test sets with 6920, 872, and 1821 samples, respectively. We report the performance on a Deep Averaging Network (DAN) BIBREF46 with default parameters on the SST dataset and compare against refined embeddings specifically created for sentiment analysis. Implementation by Yu et al yu2017refining is used for the refined embeddings. Emotion Intensity Task (WASSA): WASSA shared task on emotion intensity BIBREF47 requires to determine the intensity of a particular emotion (anger, fear, joy, or sadness) in a tweet. This intensity score can be seen as an approximation of the emotion intensity of the author or as felt by the reader. We train a BiLSTM-CNN–based model for this regression task with embedding dimensions as features.. Vanilla embeddings are used as a baseline for this experiment. Qualitative Evaluation: Noise@k Affect-enriched embeddings perform better as they move semantically similar but affectively dissimilar words away from each other in the vector space. We demonstrate this effect through two measures that capture noise in the neighborhood of a word. Polarity-Noise@k (PN@k) BIBREF40 calculates the number of top $k$ nearest neighbors of a word with opposite polarity for the affect dimension under consideration. Granular-Noise@k (GN@k) captures the average difference between a word and its top $k$ nearest neighbors for a particular affect dimension ( $f$ ). $$GN_i@k = \dfrac{\sum _{j \in kNN_i}{|a_if - a_jf|}}{k}$$ (Eq. 33) where $a_i$ , $a_j$ are $F$ –dimensional vectors in $A$ and $kNN_i$ denotes the top $k$ nearest neighbors of word $i$ . This is done for each word in the affect lexicon. Results All experiments are compared against the vanilla word embeddings, embeddings with counterfitting, and embeddings with retrofitting. Table 3 summarizes the results of the Intrinsic word–similarity tasks. For the pre–trained word embeddings, Paragram-SL999 outperformed GloVe and Word2Vec on most metrics. Both retrofitting and counterfitting procedures show better or at par performance on all datasets except for WordSim-353. Addition of affect information to different versions of GloVe consistently improves performance whereas the only significant improvement for Paragram-SL999 variants is observed on the SimLex-999 and SimVerb-3500 datasets. To the best of our knowledge, $\rho =0.74$ reported by BIBREF15 represents the current state–of–the–art for SimLex-999 and inclusion of affect information to these embeddings yields higher performance ( $\rho = 0.75$ ). Similarly, for the SimVerb-3500 dataset, Paragram+Counterfitting $\oplus $ Affect embeddings beat the state–of–the–art scores. Amongst Affect-APPEND and Affect-STRENGTH, Affect-APPEND out performs the rest in most cases for GloVe and Word2vec. However, Affect-STRENGTH variations perform slightly better for the retrofitted Paragram embeddings. The results for the Extrinsic tasks are reported in Table 4 . We report the performance for GloVe and Word2Vec with Affect-APPEND variants. 
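A sketch of the Granular-Noise@k diagnostic defined above, computed for one affect dimension over all words (in practice the paper restricts this to words in the affect lexicon). The dense cosine-similarity matrix is a simplification that is fine for a lexicon-sized vocabulary; function and argument names are assumptions.

import numpy as np

def granular_noise_at_k(embeddings, affect, f, k=10):
    # embeddings: (V, D) matrix; affect: (V, F) VAD scores; f: index of the affect dimension.
    # For each word, average |a_if - a_jf| over its top-k cosine neighbors, then average over words.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)                  # exclude the word itself
    noise = []
    for i in range(len(X)):
        knn = np.argpartition(-sims[i], k)[:k]       # indices of the k nearest neighbors
        noise.append(np.abs(affect[i, f] - affect[knn, f]).mean())
    return float(np.mean(noise))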
For FFP-Prediction, Affect-APPEND reports the lowest Mean Squared Error for Frustration and Politeness. However, in the case of Formality, the counterfitting variant reports the lowest error. For the personality detection, Affect-APPEND variants report best performance for NEU, AGR, and OPEN classes. For CON, Glove beats the best results in BIBREF39 . Evaluation against the Sentiment Analysis(SA) task shows that Affect-APPEND variants report highest accuracies. The final experiment reported here is the WASSA-EmoInt task. Affect-APPEND and retrofit variants out perform the vanilla embeddings. To summarize, the extrinsic evaluation supports the hypothesis that affect–enriched embeddings improve performance for all NLP tasks. Further, the word similarity metrics show that Aff2Vec is not specific to sentiment or affect–related tasks but is at par with accepted embedding quality metrics. Qualitative Evaluation: Table 5 reports the average Polarity-Noise@10 and Granular-Noise@10 for GloVe and Word2Vec variants. Note that lower the noise better the performance. The Affect-APPEND report the lowest noise for both cases. This shows that the introduction of affect dimensions in the word distributions intuitively captures psycholinguistic and in particular polarity properties in the vocabulary space. The rate of change of noise with varying $k$ provides insights into (1) how similar are the embedding spaces and (2) how robust are the new representations to the noise - how well is the affect captured in the new embeddings. Figure 2 shows the granular noise@k for Valence, Arousal, and Dominance respectively. Noise@k for the Aff2Vec i.e. the Affect-APPEND variants, specifically, $\oplus $ Affect and Couterfitting $\oplus $ Affect has lower noise even for a higher $k$ . The growth rate for all variants is similar and reduces with an increase in the value of $k$ . A similar behavior is observed for Polarity-Noise@k. Discussion Experiments give an empirical evaluation of the proposed embeddings, none of these provide an insight about the change in the distributional representations of the associated words. Semantic relationship capture the synonym like information. We study how the neighborhood of a certain word changes based on the different word distribution techniques used to create the corresponding representations. Table 6 shows the top five nearest neighbors based on the representations used. While SENTI-Wordnet represents synonyms more than affectively similar words, the affect–enriched embeddings provide a combination of both affective similarity and semantic similarity. The variance in the ranking of words also captures how different schemes capture the intuition of word distributions. Such an analysis can be used to build automated natural language generation and text modification systems with varying objectives. Conclusion We present a novel, simple yet effective method to create affect–enriched word embeddings using affect and semantic lexica. The proposed embeddings outperform the state–of–the–art in benchmark intrinsic evaluations as well as extrinsic applications including sentiment, personality, and affect prediction. We introduce a new human–annotated dataset with formality, politeness, and frustration tags on the publicly available ENRON email data. We are currently exploring the effect of dimension size on the performance of the enriched embeddings as well as the use of Aff2Vec for complex tasks such as text generation.
Unanswerable
212495af630c16745d0fcb614119d75327952271
212495af630c16745d0fcb614119d75327952271_0
Q: asdfasdaf Text: Introduction The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering. Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding. Related Work From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages. Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. 
BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach. The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21. Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks. Model and Data In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale. Model and Data ::: Masked Language Models. We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. 
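The exponentially smoothed language sampling mentioned above (rates proportional to per-language sentence counts, smoothed with alpha = 0.3) can be written out as a short helper. The formula follows the standard XLM recipe and the constant comes from the text; the function name and example counts are illustrative.

import numpy as np

def language_sampling_probs(sentence_counts, alpha=0.3):
    # p_i proportional to (n_i / sum_j n_j) ** alpha.
    # alpha = 1 recovers frequency-proportional sampling; smaller alpha upweights
    # low-resource languages. alpha = 0.3 is the value used for XLM-R.
    counts = np.asarray(sentence_counts, dtype=np.float64)
    p = counts / counts.sum()
    q = p ** alpha
    return q / q.sum()

# Example with one high-resource and one low-resource corpus:
# language_sampling_probs([1_000_000, 10_000]) -> roughly [0.80, 0.20] instead of [0.99, 0.01]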
Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper. Model and Data ::: Scaling to a hundred languages. XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages. Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively. Model and Data ::: Scaling the Amount of Training Data. Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use it to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili. Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model. Evaluation We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models. Evaluation ::: Cross-lingual Natural Language Inference (XNLI). The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multiligual model on each training set (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project. Evaluation ::: Named Entity Recognition. For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. 
We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32. Evaluation ::: Cross-lingual Question Answering. We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English. Evaluation ::: GLUE Benchmark. Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33, which gathers multiple classification tasks such as MNLI BIBREF4, SST-2 BIBREF34, and QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines. Analysis and Results In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large-scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Transfer-dilution trade-off and Curse of Multilinguality. Model capacity (i.e., the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed-size model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other. We illustrate this trade-off in Figure , which shows XNLI performance versus the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low-resource languages. Beyond this point, the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus. The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality.
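The cross-lingual transfer protocol above (fine-tune on the English set only, then apply the same weights unchanged to the other languages) can be sketched with the publicly released XLM-R checkpoints available through the Hugging Face transformers library. This is an illustration of the protocol, not the authors' training code; the model name, the omitted fine-tuning step, and the German example are our own choices.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A fresh 3-way classification head; it gives random predictions until
# it is fine-tuned (e.g. on the English MultiNLI training data).
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

def predict_nli(premise: str, hypothesis: str) -> int:
    """Score one premise/hypothesis pair. After fine-tuning on English
    only, the same weights are applied as-is to the other XNLI languages
    (zero-shot cross-lingual transfer)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# German example scored by a model never fine-tuned on German data.
print(predict_nli("Der Hund schläft im Garten.", "Ein Tier ruht sich aus."))
```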
The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models). Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: High-resource/Low-resource trade-off. The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocabulary). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Importance of Capacity and Vocabulary Size. In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250K for XLM-R. We further illustrate the importance of this parameter by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocabulary size from 128K to 512K. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Importance of large-scale training with more data. As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance. Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer.
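The exponential smoothing of the sampling rate discussed above follows the rescaling rule of BIBREF1: language $i$ is sampled with probability proportional to $p_i^{\alpha}$, where $p_i$ is its share of the corpus. The sketch below shows how a smaller $\alpha$ shifts probability mass toward low-resource languages; the corpus sizes are invented for illustration and are not the CC-100 statistics.

```python
def sampling_probs(n_sentences: dict, alpha: float = 0.3) -> dict:
    """Exponentially smoothed language sampling: q_i proportional to (n_i / N)^alpha.

    alpha = 1 reproduces the raw corpus proportions; smaller alpha
    up-samples low-resource languages at the expense of high-resource ones.
    """
    total = sum(n_sentences.values())
    weights = {lang: (n / total) ** alpha for lang, n in n_sentences.items()}
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

# Illustrative corpus sizes in sentences (not the real CC-100 counts).
corpus = {"en": 300_000_000, "fr": 100_000_000, "sw": 1_000_000, "ur": 2_000_000}
for alpha in (1.0, 0.7, 0.3):
    probs = sampling_probs(corpus, alpha)
    print(alpha, {lang: round(p, 3) for lang, p in probs.items()})
```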
In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be undertuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Simplifying multilingual tokenization with SentencePiece. The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a SentencePiece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R. Analysis and Results ::: Cross-lingual Understanding Results Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250K vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that XLM-R outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark. Analysis and Results ::: Cross-lingual Understanding Results ::: XNLI. Table shows XNLI results, along with additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model (from 71.3 to 70.7). We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language. XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively.
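A minimal sketch of the SentencePiece setup described above: train a unigram model directly on raw text and apply it without any language-specific pre-tokenization. The training corpus, the small vocabulary, and the character-coverage option below are placeholders and assumptions for illustration; the actual model uses a 250K vocabulary learned from text sampled across all 100 languages.

```python
import sentencepiece as spm

# Train a unigram-LM SentencePiece model on raw text. "corpus.txt" and the
# tiny vocabulary are placeholders; character_coverage is a common setting
# for multilingual text and is our assumption, not a value from the paper.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="xlmr_like_spm",
    vocab_size=8000,
    model_type="unigram",
    character_coverage=0.9995,
)

# Apply the trained model directly to raw text in any language.
sp = spm.SentencePieceProcessor()
sp.load("xlmr_like_spm.model")
print(sp.encode_as_pieces("Ceci est une phrase d'exemple."))
```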
Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38. Analysis and Results ::: Cross-lingual Understanding Results ::: Named Entity Recognition. In Table , we report results of XLM-R and mBERT on CoNLL-2002 and CoNLL-2003. We consider the LSTM + CRF approach from BIBREF31 and the Flair model from BIBREF32 as baselines. We evaluate the performance of the model on each of the target languages in three different settings: (i) train on English data only (en), (ii) train on data in the target language (each), and (iii) train on data in all languages (all). Results of mBERT are reported from BIBREF9. Note that we do not use a linear-chain CRF on top of XLM-R and mBERT representations, which gives an advantage to BIBREF32. Without the CRF, our XLM-R model still performs on par with the state of the art, outperforming BIBREF32 on Dutch by $2.84$ points. On this task, XLM-R also outperforms mBERT by 2.1 F1 on average for cross-lingual transfer, and 1.86 F1 when trained on each language. Training on all languages leads to an average F1 score of 89.18%, outperforming the cross-lingual transfer approach by more than 8.5%. Analysis and Results ::: Cross-lingual Understanding Results ::: Question Answering. We also obtain new state-of-the-art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and EM scores of 70.0% and 52.2%, while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% EM. It even outperforms BERTLarge on English, confirming its strong monolingual performance. Analysis and Results ::: Multilingual versus Monolingual In this section, we compare multilingual XLM models with monolingual BERT models. Analysis and Results ::: Multilingual versus Monolingual ::: GLUE: XLM-R versus RoBERTa. Our goal is to obtain a multilingual model with strong performance on both cross-lingual understanding tasks and natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks. Analysis and Results ::: Multilingual versus Monolingual ::: XNLI: XLM versus BERT. A recurrent criticism of multilingual models is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark.
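MLQA inherits SQuAD-style evaluation, so the F1 and EM numbers above are computed from normalized answer strings. The sketch below shows the standard recipe (lowercasing, stripping punctuation and English articles, token-overlap F1); the official MLQA script adds language-specific normalization that we omit here.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Standard SQuAD-style normalization: lowercase, drop punctuation,
    drop English articles, collapse whitespace (MLQA's official script
    also handles other languages' articles, omitted here)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0
print(f1_score("in the Eiffel Tower", "Eiffel Tower"))   # 0.8
```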
We extend our comparison between multilingual XLM models and monolingual BERT models to 7 languages and report performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We give the multilingual model slightly more capacity in its vocabulary size for a fairer comparison. To our surprise, and backed by further study on internal benchmarks, we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This surprising result shows that the ability of multilingual models to leverage training data from multiple languages for a particular task can overcome the capacity dilution problem and yield better overall performance. Analysis and Results ::: Representation Learning for Low-resource Languages We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model, most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages: 7% and 4.8% improvements on Swahili and Urdu, respectively. Conclusion In this work, we introduced XLM-R, our new state-of-the-art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We expose the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages. Languages and statistics for CC-100 used by XLM-R In this section, we present the list of languages in the CC-100 corpus we created for training XLM-R. We also report statistics such as the number of tokens and the size of each monolingual corpus. Model Architectures and Sizes As we showed in Section SECREF5, capacity is an important parameter for learning strong cross-lingual representations. In the table below, we list multiple monolingual and multilingual models used by the research community and summarize their architectures and total number of parameters.
Unanswerable
2d1c0618b6106a57777b8d6bbf897712d9db7abc
2d1c0618b6106a57777b8d6bbf897712d9db7abc_0
Q: asdfasdf Text: Introduction The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering. Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding. Related Work From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages. Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. 
BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach. The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21. Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks. Model and Data In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale. Model and Data ::: Masked Language Models. We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. 
Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper. Model and Data ::: Scaling to a hundred languages. XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages. Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively. Model and Data ::: Scaling the Amount of Training Data. Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use it to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili. Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model. Evaluation We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models. Evaluation ::: Cross-lingual Natural Language Inference (XNLI). The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multiligual model on each training set (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project. Evaluation ::: Named Entity Recognition. For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. 
We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32. Evaluation ::: Cross-lingual Question Answering. We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English. Evaluation ::: GLUE Benchmark. Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33 which gathers multiple classification tasks, such as MNLI BIBREF4, SST-2 BIBREF34, or QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines. Analysis and Results In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Transfer-dilution trade-off and Curse of Multilinguality. Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other. We illustrate this trade-off in Figure , which shows XNLI performance vs the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus. The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. 
The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models). Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: High-resource/Low-resource trade-off. The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice-versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Importance of Capacity and Vocabulary Size. In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250k for XLM-R. We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Importance of large-scale training with more data. As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance. Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer. 
In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Simplifying multilingual tokenization with Sentence Piece. The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R. Analysis and Results ::: Cross-lingual Understanding Results Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark. Analysis and Results ::: Cross-lingual Understanding Results ::: XNLI. Table shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language. XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively. 
Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38. Analysis and Results ::: Cross-lingual Understanding Results ::: Named Entity Recognition. In Table , we report results of XLM-R and mBERT on CoNLL-2002 and CoNLL-2003. We consider the LSTM + CRF approach from BIBREF31 and the Flair model from BIBREF32 as baselines. We evaluate the performance of the model on each of the target languages in three different settings: (i) train on English data only (en) (ii) train on data in target language (each) (iii) train on data in all languages (all). Results of mBERT are reported from BIBREF9. Note that we do not use a linear-chain CRF on top of XLM-R and mBERT representations, which gives an advantage to BIBREF32. Without the CRF, our XLM-R model still performs on par with the state of the art, outperforming BIBREF32 on Dutch by $2.84$ points. On this task, XLM-R also outperforms mBERT by 2.1 F1 on average for cross-lingual transfer, and 1.86 F1 when trained on each language. Training on all languages leads to an average F1 score of 89.18%, outperforming cross-lingual transfer approach by more than 8.5%. Analysis and Results ::: Cross-lingual Understanding Results ::: Question Answering. We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and accuracy scores of 70.0% and 52.2% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% accuracy. It even outperforms BERT-Large on English, confirming its strong monolingual performance. Analysis and Results ::: Multilingual versus Monolingual In this section, we present results of multilingual XLM models against monolingual BERT models. Analysis and Results ::: Multilingual versus Monolingual ::: GLUE: XLM-R versus RoBERTa. Our goal is to obtain a multilingual model with strong performance on both, cross-lingual understanding tasks as well as natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table , that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks. Analysis and Results ::: Multilingual versus Monolingual ::: XNLI: XLM versus BERT. A recurrent criticism against multilingual model is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark. 
We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We add slightly more capacity in the vocabulary size of the multilingual model for a better comparison. To our surprise - and backed by further study on internal benchmarks - we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance. Analysis and Results ::: Representation Learning for Low-resource Languages We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively. Conclusion In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages. Languages and statistics for CC-100 used by XLM-R In this section we present the list of languages in the CC-100 corpus we created for training XLM-R. We also report statistics such as the number of tokens and the size of each monolingual corpus. Model Architectures and Sizes As we showed in section SECREF5, capacity is an important parameter for learning strong cross-lingual representations. In the table below, we list multiple monolingual and multilingual models used by the research community and summarize their architectures and total number of parameters.
Unanswerable
8e898bec123c70315db44f6c8002adc8bf4486ad
8e898bec123c70315db44f6c8002adc8bf4486ad_0
Q: asdfasd Text: Introduction The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering. Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding. Related Work From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages. Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. 
BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach. The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21. Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks. Model and Data In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale. Model and Data ::: Masked Language Models. We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. 
Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper. Model and Data ::: Scaling to a hundred languages. XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages. Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively. Model and Data ::: Scaling the Amount of Training Data. Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use it to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili. Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model. Evaluation We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models. Evaluation ::: Cross-lingual Natural Language Inference (XNLI). The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multiligual model on each training set (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project. Evaluation ::: Named Entity Recognition. For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. 
We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32. Evaluation ::: Cross-lingual Question Answering. We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English. Evaluation ::: GLUE Benchmark. Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33 which gathers multiple classification tasks, such as MNLI BIBREF4, SST-2 BIBREF34, or QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines. Analysis and Results In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Transfer-dilution trade-off and Curse of Multilinguality. Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other. We illustrate this trade-off in Figure , which shows XNLI performance vs the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus. The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. 
The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models). Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: High-resource/Low-resource trade-off. The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice-versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Importance of Capacity and Vocabulary Size. In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250k for XLM-R. We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Importance of large-scale training with more data. As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance. Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer. 
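The exponentially smoothed language sampling discussed earlier in this section can be written in a few lines. The corpus sizes below are illustrative; $\alpha =0.3$ is the value the paper settles on, while $\alpha =1.0$ corresponds to sampling proportionally to corpus size.

```python
# Sampling probabilities q_lang proportional to p_lang**alpha, where p_lang is
# the share of sentences contributed by each language.
def sampling_probs(sentence_counts, alpha):
    total = sum(sentence_counts.values())
    smoothed = {lang: (count / total) ** alpha for lang, count in sentence_counts.items()}
    z = sum(smoothed.values())
    return {lang: s / z for lang, s in smoothed.items()}

counts = {"en": 300_000_000, "fr": 60_000_000, "sw": 1_000_000, "ur": 700_000}  # illustrative
for alpha in (1.0, 0.7, 0.3):
    q = sampling_probs(counts, alpha)
    print(alpha, {lang: round(p, 3) for lang, p in q.items()})
# Lower alpha shifts probability mass from high-resource (en, fr) towards
# low-resource (sw, ur) languages, which is exactly the trade-off shown in the
# corresponding figure.
```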
In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models. Analysis and Results ::: Improving and Understanding Multilingual Masked Language Models ::: Simplifying multilingual tokenization with Sentence Piece. The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R. Analysis and Results ::: Cross-lingual Understanding Results Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark. Analysis and Results ::: Cross-lingual Understanding Results ::: XNLI. Table shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language. XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively. 
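A minimal sketch of the Sentence Piece (SPM) setup described earlier in this section, i.e., a single unigram model trained on raw multilingual text and applied to every language; the input file, character coverage, and other options are assumptions rather than the authors' exact configuration.

```python
import sentencepiece as spm

# Train one unigram SPM directly on raw, multilingual text (a large sample of
# the CommonCrawl data would be needed for a 250K vocabulary).
spm.SentencePieceTrainer.train(
    input="cc100_sample.txt",      # placeholder file with raw text in many languages
    model_prefix="xlmr_sp",
    vocab_size=250_000,
    model_type="unigram",
    character_coverage=0.9995,     # keep rare scripts reasonably covered (assumed value)
)

# Apply the same model to any language, without language-specific preprocessing.
sp = spm.SentencePieceProcessor()
sp.Load("xlmr_sp.model")
print(sp.EncodeAsPieces("Une phrase en français et 一句中文 handled by one tokenizer."))
```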
Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38. Analysis and Results ::: Cross-lingual Understanding Results ::: Named Entity Recognition. In Table , we report results of XLM-R and mBERT on CoNLL-2002 and CoNLL-2003. We consider the LSTM + CRF approach from BIBREF31 and the Flair model from BIBREF32 as baselines. We evaluate the performance of the model on each of the target languages in three different settings: (i) train on English data only (en); (ii) train on data in the target language (each); (iii) train on data in all languages (all). Results of mBERT are reported from BIBREF9. Note that we do not use a linear-chain CRF on top of XLM-R and mBERT representations, which gives an advantage to BIBREF32. Without the CRF, our XLM-R model still performs on par with the state of the art, outperforming BIBREF32 on Dutch by $2.84$ points. On this task, XLM-R also outperforms mBERT by 2.1 F1 on average for cross-lingual transfer, and 1.86 F1 when trained on each language. Training on all languages leads to an average F1 score of 89.18%, outperforming the cross-lingual transfer approach by more than 8.5%. Analysis and Results ::: Cross-lingual Understanding Results ::: Question Answering. We also obtain new state-of-the-art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and exact match (EM) scores of 70.0% and 52.2% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% EM. It even outperforms BERT-Large on English, confirming its strong monolingual performance. Analysis and Results ::: Multilingual versus Monolingual In this section, we present results of multilingual XLM models against monolingual BERT models. Analysis and Results ::: Multilingual versus Monolingual ::: GLUE: XLM-R versus RoBERTa. Our goal is to obtain a multilingual model with strong performance on both cross-lingual understanding tasks and natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks. Analysis and Results ::: Multilingual versus Monolingual ::: XNLI: XLM versus BERT. A recurrent criticism of multilingual models is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark.
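For readers who want to reproduce the zero-shot cross-lingual transfer setting used in these comparisons with public tooling, the following is a minimal sketch based on the released XLM-R checkpoints via the Hugging Face transformers library (not the authors' original training code); the example sentence pairs and label indices are illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

# 1) Fine-tune on English premise/hypothesis pairs only (training loop omitted).
en = tokenizer(["A man is playing a guitar."],
               ["A person is making music."],
               return_tensors="pt", padding=True, truncation=True)
# loss = model(**en, labels=torch.tensor([0])).loss   # backpropagate as usual

# 2) Evaluate the very same model, unchanged, on another XNLI language (French).
fr = tokenizer(["Un homme joue de la guitare."],
               ["Une personne fait de la musique."],
               return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    prediction = model(**fr).logits.argmax(-1)
print(prediction)  # entailment / neutral / contradiction id after fine-tuning
```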
We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We add slightly more capacity in the vocabulary size of the multilingual model for a better comparison. To our surprise - and backed by further study on internal benchmarks - we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance. Analysis and Results ::: Representation Learning for Low-resource Languages We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively. Conclusion In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages. Languages and statistics for CC-100 used by XLM-R In this section we present the list of languages in the CC-100 corpus we created for training XLM-R. We also report statistics such as the number of tokens and the size of each monolingual corpus. Model Architectures and Sizes As we showed in section SECREF5, capacity is an important parameter for learning strong cross-lingual representations. In the table below, we list multiple monolingual and multilingual models used by the research community and summarize their architectures and total number of parameters.
Unanswerable
4c50f75b1302f749c1351de0782f2d658d4bea70
4c50f75b1302f749c1351de0782f2d658d4bea70_0
Q: How is quality of annotation measured? Text: Introduction Research in emotion analysis from text focuses on mapping words, sentences, or documents to emotion categories based on the models of Ekman1992 or Plutchik2001, which propose the emotion classes of joy, sadness, anger, fear, trust, disgust, anticipation and surprise. Emotion analysis has been applied to a variety of tasks including large scale social media mining BIBREF0, literature analysis BIBREF1, BIBREF2, lyrics and music analysis BIBREF3, BIBREF4, and the analysis of the development of emotions over time BIBREF5. There are at least two types of questions which cannot yet be answered by these emotion analysis systems. Firstly, such systems do not often explicitly model the perspective of understanding the written discourse (reader, writer, or the text's point of view). For example, the headline “Djokovic happy to carry on cruising” BIBREF6 contains an explicit mention of joy carried by the word “happy”. However, it may evoke different emotions in a reader (e.g., if the reader is a supporter of Roger Federer), and the same applies to the author of the headline. To the best of our knowledge, only one work takes this point into consideration BIBREF7. Secondly, these systems do not uncover the structure that can be associated with the emotion description in text. Questions like “Who feels a particular emotion?” or “What causes that emotion?” remain unaddressed. There has been almost no work in this direction, with only a few exceptions in English BIBREF8, BIBREF9 and Mandarin BIBREF10, BIBREF11. With this work, we argue that emotion analysis would benefit from a more fine-grained analysis that considers the full structure of an emotion, similar to the research in aspect-based sentiment analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15. Consider the headline: “A couple infuriated officials by landing their helicopter in the middle of a nature reserve” BIBREF16 depicted in Figure FIGREF1. One could mark “officials” as the experiencer, “a couple” as the target, and “landing their helicopter in the middle of a nature reserve” as the cause of anger. Now let us imagine that the headline starts with “A cheerful couple” instead of “A couple”. A simple approach to emotion detection based on cue words will capture that this sentence contains descriptions of anger (“infuriated”) and joy (“cheerful”). It would, however, fail in attributing correct roles to the couple and the officials; thus, the distinction between their emotion experiences would remain hidden from us. In this study, we focus on an annotation task with the goal of developing a dataset that would enable addressing the issues raised above. Specifically, we introduce the corpus GoodNewsEveryone, a novel dataset of English news headlines collected from 82 different sources analyzed in the Media Bias Chart BIBREF17 and annotated for emotion class, emotion intensity, semantic roles (experiencer, cause, target, cue), and reader perspective. We use semantic roles, since identifying who feels what and why is essentially a semantic role labeling task BIBREF18. The roles we consider are a subset of those defined for the semantic frame for “Emotion” in FrameNet BIBREF19. We focus on news headlines due to their brevity and density of contained information. Headlines often appeal to a reader's emotions, and hence are a potentially good source for emotion analysis.
In addition, news headlines are easy-to-obtain data across many languages, void of the data privacy issues associated with social media and microblogging. Our contributions are: (1) we design a two-phase annotation procedure for emotion structures via crowdsourcing, (2) present the first resource of news headlines annotated for emotions, cues, intensity, experiencers, causes, targets, and reader emotion, and (3) provide results of a baseline model to predict such roles in a sequence labeling setting. We provide our annotations at http://www.romanklinger.de/data-sets/GoodNewsEveryone.zip. Related Work Our annotation is built upon different tasks and inspired by different existing resources; therefore, it combines approaches from each of those. In what follows, we look at related work on each task and specify how it relates to our new corpus. Related Work ::: Emotion Classification Emotion classification deals with mapping words, sentences, or documents to a set of emotions following psychological models such as those proposed by Ekman1992 (anger, disgust, fear, joy, sadness and surprise) or Plutchik2001, or to continuous values of valence, arousal and dominance BIBREF20. One way to create annotated datasets is via expert annotation BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF7. The creators of the ISEAR dataset make use of self-reporting instead, where subjects are asked to describe situations associated with a specific emotion BIBREF25. Crowdsourcing is another popular way to acquire human judgments BIBREF26, BIBREF9, BIBREF27, BIBREF28. Another recent dataset for emotion recognition reproduces the ISEAR dataset in a crowdsourcing setting for both English and German BIBREF29. Lastly, social network platforms play a central role in data acquisition with distant supervision, because they provide a cheap way to obtain large amounts of noisy data BIBREF26, BIBREF9, BIBREF30, BIBREF31. Table TABREF3 shows an overview of resources. More details can be found in Bostan2018. Related Work ::: Emotion Intensity In emotion intensity prediction, the term intensity refers to the degree to which an emotion is experienced. For this task, there are only a few datasets available. To our knowledge, the first dataset annotated for emotion intensity is by Aman2007, who ask experts for ratings, followed by the datasets released for the EmoInt shared tasks BIBREF32, BIBREF28, both annotated via crowdsourcing through best-worst scaling. The annotation task can also be formalized as a classification task, similar to the emotion classification task, where the goal would be to map some textual input to a class from a set of predefined emotion intensity categories. This approach is used by Aman2007, who annotate high, moderate, and low intensity. Related Work ::: Cue or Trigger Words The task of finding a function that segments a textual input and finds the span indicating an emotion category is less researched. Cue or trigger word detection could also be formulated as an emotion classification task for which the set of classes to be predicted is extended to cover other emotion categories with cues. The first work that annotated cues was done manually by one expert and three annotators on the domain of blog posts BIBREF21. Mohammad2014 annotate the cues of emotions in a corpus of $4,058$ electoral tweets from the US via crowdsourcing. Similar in annotation procedure, Yan2016emocues curate a corpus of 15,553 tweets and annotate it with 28 emotion categories, valence, arousal, and cues.
To the best of our knowledge, there is only one work BIBREF8 that leverages the annotations for cues and considers the task of emotion detection where the exact spans that represent the cues need to be predicted. Related Work ::: Emotion Cause Detection Detecting the cause of an expressed emotion in text has received relatively little attention compared to emotion detection. There are only a few works on English that focus on creating resources to tackle this task BIBREF23, BIBREF9, BIBREF8, BIBREF33. The task can be formulated in different ways. One is to define a closed set of potential causes after annotation. Then, cause detection is a classification task BIBREF9. Another setting is to find the cause in the text. This is formulated as segmentation or clause classification BIBREF23, BIBREF8. Finding the cause of an emotion is widely researched on Mandarin in both resource creation and methods. Early works build on rule-based systems BIBREF34, BIBREF35, BIBREF36 which examine correlations between emotions and cause events in terms of linguistic cues. Follow-up works focus on both methods and corpus construction, showing large improvements over the early works BIBREF37, BIBREF38, BIBREF33, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF11. The most recent work on cause extraction is done on Mandarin and formulates the task jointly with emotion detection BIBREF10, BIBREF44, BIBREF45. With the exception of Mohammad2014, who annotate via crowdsourcing, all other datasets are manually labeled, usually by using the W3C Emotion Markup Language. Related Work ::: Semantic Role Labeling of Emotions Semantic role labeling in the context of emotion analysis deals with extracting who feels (experiencer) which emotion (cue, class), towards whom the emotion is expressed (target), and what is the event that caused the emotion (stimulus). The relations are defined akin to FrameNet's Emotion frame BIBREF19. There are two works that annotate semantic roles in the context of emotion. Firstly, Mohammad2014 annotate a dataset of $4,058$ tweets via crowdsourcing. The tweets were published before the U.S. presidential elections in 2012. The semantic roles considered are the experiencer, the stimulus, and the target. However, in the case of tweets, the experiencer is mostly the author of the tweet. Secondly, Kim2018 annotate and release REMAN (Relational EMotion ANnotation), a corpus of $1,720$ paragraphs based on Project Gutenberg. REMAN was manually annotated for spans which correspond to emotion cues and entities/events in the roles of experiencers, targets, and causes of the emotion. They also provide baseline results for the automatic prediction of these structures and show that their models benefit from joint modeling of emotions with their roles in all subtasks. Our work follows Kim2018 in motivation and Mohammad2014 in procedure. Related Work ::: Reader vs. Writer vs. Text Perspective Studying the impact of different annotation perspectives is another little-explored area. There are a few exceptions in sentiment analysis which investigate the relation between the sentiment of a blog post and the sentiment of its comments BIBREF46 or model the emotion of a news reader jointly with the emotion of a comment writer BIBREF47. Fewer works exist in the context of emotion analysis. 5286061 deal with writer's and reader's emotions on online blogs and find that positive reader emotions tend to be linked to positive writer emotions.
Buechel2017b and buechel-hahn-2017-emobank look into the effects of different perspectives on annotation quality and find that the reader perspective yields better inter-annotator agreement values. Data Collection & Annotation We gather the data in three steps: (1) collecting the news and the reactions they elicit in social media, (2) filtering the resulting set to retain relevant items, and (3) sampling the final selection using various metrics. The headlines are then annotated via crowdsourcing in two phases, by three annotators in the first phase and by five annotators in the second phase. As a last step, the annotations are adjudicated to form the gold standard. We describe each step in detail below. Data Collection & Annotation ::: Collecting Headlines The first step consists of retrieving news headlines from the news publishers. We further retrieve content related to a news item from social media: tweets mentioning the headlines together with replies, and Reddit posts that link to the headlines. We use this additional information for the subsampling described later. We manually select all news sources available as RSS feeds (82 out of 124) from the Media Bias Chart BIBREF48, a project that analyzes the reliability (from original fact reporting to containing inaccurate/fabricated information) and political bias (from most extreme left to most extreme right) of U.S. news sources. Our news crawler retrieved daily headlines from the feeds, together with the attached metadata (title, link, and summary of the news article), from March 2019 until October 2019. Every day, after the news collection finished, Twitter was queried for 50 valid tweets for each headline. In addition to that, for each collected tweet, we collect all valid replies and counts of being favorited, retweeted and replied to in the first 24 hours after its publication. The last step in the pipeline is acquiring the top (“hot”) submissions in the /r/news and /r/worldnews subreddits, and their metadata, including the number of up- and downvotes, upvote ratio, number of comments, and the comments themselves. Data Collection & Annotation ::: Filtering & Postprocessing We remove any headlines that have fewer than 6 tokens (e.g., “Small or nothing”, “But Her Emails”, “Red for Higher Ed”), as well as those starting with certain phrases, such as “Ep.”, “Watch Live:”, “Playlist:”, “Guide to”, and “Ten Things”. We also filter out headlines that contain a date (e.g., “Headlines for March 15, 2019”) and headlines that contain words referring to visual content, like “video”, “photo”, “image”, “graphic”, “watch”, etc. Data Collection & Annotation ::: Sampling Headlines We stratify the remaining headlines by source (150 headlines from each source) and subsample equally according to the following strategies: 1) randomly select headlines, 2) select headlines with a high count of emotion terms, 3) select headlines that contain named entities, and 4) select headlines with high impact on social media. Table TABREF16 shows how many headlines are selected by each sampling method in relation to the most dominant emotion (see Section SECREF25). Data Collection & Annotation ::: Sampling Headlines ::: Random Sampling. The goal of the first sampling method is to collect a random sample of headlines that is representative and not biased towards any source or content type. Note that the sample produced using this strategy might not be as rich in emotional content as the other samples. Data Collection & Annotation ::: Sampling Headlines ::: Sampling via NRC.
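The sampling strategy introduced by the heading above (and explained in the paragraph that follows) scores headlines by the number of NRC emotion-lexicon terms they contain. A minimal sketch, assuming a local copy of the word-level NRC EmoLex file in its public tab-separated format; the scoring and the file name are simplifications, not the authors' exact selection procedure.

```python
# Each EmoLex line has the form "word<TAB>emotion<TAB>0/1".
def load_nrc_terms(path="NRC-Emotion-Lexicon-Wordlevel-v0.92.txt"):
    terms = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, emotion, flag = line.rstrip("\n").split("\t")
            if flag == "1" and emotion not in ("positive", "negative"):
                terms.add(word)
    return terms

def emotion_term_count(headline, nrc_terms):
    return sum(tok.lower().strip(".,:;!?\"'") in nrc_terms for tok in headline.split())

# nrc = load_nrc_terms()
# nrc_sample = sorted(headlines, key=lambda h: emotion_term_count(h, nrc), reverse=True)[:k]
```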
For the second sampling strategy we hypothesize that headlines containing emotionally charged words are also likely to contain the structures we aim to annotate. This strategy selects headlines whose words are in the NRC dictionary BIBREF49. Data Collection & Annotation ::: Sampling Headlines ::: Sampling Entities. We further hypothesize that headlines that mention named entities may also contain experiencers or targets of emotions, and therefore, they are likely to present a complete emotion structure. This sampling method yields headlines that contain at least one entity name, according to the recognition from spaCy that is trained on OntoNotes 5 and on Wikipedia corpus. We consider organization names, persons, nationalities, religious, political groups, buildings, countries, and other locations. Data Collection & Annotation ::: Sampling Headlines ::: Sampling based on Reddit & Twitter. The last sampling strategy involves our Twitter and Reddit metadata. This enables us to select and sample headlines based on their impact on social media (under the assumption that this correlates with emotion connotation of the headline). This strategy chooses them equally from the most favorited tweets, most retweeted headlines on Twitter, most replied to tweets on Twitter, as well as most upvoted and most commented on posts on Reddit. Data Collection & Annotation ::: Annotation Procedure Using these sampling and filtering methods, we select $9,932$ headlines. Next, we set up two questionnaires (see Table TABREF17) for the two annotation phases that we describe below. We use Figure Eight. Data Collection & Annotation ::: Annotation Procedure ::: Phase 1: Selecting Emotional Headlines The first questionnaire is meant to determine the dominant emotion of a headline, if that exists, and whether the headline triggers an emotion in a reader. We hypothesize that these two questions help us to retain only relevant headlines for the next, more expensive, annotation phase. During this phase, $9,932$ headlines were annotated by three annotators. The first question of the first phase (P1Q1) is: “Which emotion is most dominant in the given headline?” and annotators are provided a closed list of 15 emotion categories to which the category No emotion was added. The second question (P1Q2) aims to answer whether a given headline would stir up an emotion in most readers and the annotators are provided with only two possible answers (yes or no, see Table TABREF17 and Figure FIGREF1 for details). Our set of 15 emotion categories is an extended set over Plutchik's emotion classes and comprises anger, annoyance, disgust, fear, guilt, joy, love, pessimism, negative surprise, optimism, positive surprise, pride, sadness, shame, and trust. Such a diverse set of emotion labels is meant to provide a more fine-grained analysis and equip the annotators with a wider range of answer choices. Data Collection & Annotation ::: Annotation Procedure ::: Phase 2: Emotion and Role Annotation The annotations collected during the first phase are automatically ranked and the ranking is used to decide which headlines are further annotated in the second phase. Ranking consists of sorting by agreement on P1Q1, considering P1Q2 in the case of ties. The top $5,000$ ranked headlines are annotated by five annotators for emotion class, intensity, reader emotion, and other emotions in case there is not only a dominant emotion. 
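The ranking that decides which headlines enter the second phase (sorting by agreement on P1Q1 and breaking ties with P1Q2) could be sketched as follows; the record layout is illustrative, with three Phase 1 annotations per headline.

```python
from collections import Counter

def rank_key(record):
    # Agreement on the dominant-emotion question (P1Q1) ...
    agreement = Counter(record["p1q1"]).most_common(1)[0][1]
    # ... with the number of "yes" answers to the reader-emotion question (P1Q2)
    # as the tie-breaker.
    p1q2_yes = sum(1 for answer in record["p1q2"] if answer == "yes")
    return (agreement, p1q2_yes)

phase1 = [
    {"headline": "Djokovic happy to carry on cruising",
     "p1q1": ["joy", "joy", "positive surprise"], "p1q2": ["yes", "yes", "no"]},
    {"headline": "City council approves new budget",
     "p1q1": ["no emotion", "joy", "trust"], "p1q2": ["no", "no", "yes"]},
]
ranked = sorted(phase1, key=rank_key, reverse=True)
phase_two_selection = ranked[:5000]   # the top 5,000 headlines go to Phase 2
```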
Along with these closed annotation tasks, the annotators are asked to answer several open questions, namely (1) who is the experiencer of the emotion (if mentioned), (2) what event triggered the annotated emotion (if mentioned), (3) if the emotion had a target, and (4) who or what is the target. The annotators are free to select multiple instances related to the dominant emotion by copy-paste into the answer field. For more details on the exact questions and example of answers, see Table TABREF17. Figure FIGREF1 shows a depiction of the procedure. Data Collection & Annotation ::: Annotation Procedure ::: Quality Control and Results To control the quality, we ensured that a single annotator annotates maximum 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task. We test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95%. The questions were generated based on hand-picked non-ambiguous real headlines through swapping out relevant words from the headline in order to obtain a different annotation, for instance, for “Djokovic happy to carry on cruising”, we would swap “Djokovic” with a different entity, the cue “happy” to a different emotion expression. Further, we exclude Phase 1 annotations that were done in less than 10 seconds and Phase 2 annotations that were done in less than 70 seconds. After we collected all annotations, we found unreliable annotators for both phases in the following way: for each annotator and for each question, we compute the probability with which the annotator agrees with the response chosen by the majority. If the computed probability is more than two standard deviations away from the mean we discard all annotations done by that annotator. On average, 310 distinct annotators needed 15 seconds in the first phase. We followed the guidelines of the platform regarding payment and decided to pay for each judgment $$0.02$ (USD) for Phase 1 (total of $$816.00$ USD). For the second phase, 331 distinct annotators needed on average $\approx $1:17 minutes to perform one judgment. Each judgment was paid with $0.08$$ USD (total $$2,720.00$ USD). Data Collection & Annotation ::: Adjudication of Annotations In this section, we describe the adjudication process we undertook to create the gold dataset and the difficulties we faced in creating a gold set out of the collected annotations. The first step was to discard obviously wrong annotations for open questions, such as annotations in other languages than English, or annotations of spans that were not part of the headline. In the next step, we incrementally apply a set of rules to the annotated instances in a one-or-nothing fashion. Specifically, we incrementally test each instance for a number of criteria in such a way that if at least one criteria is satisfied the instance is accepted and its adjudication is finalized. Instances that do not satisfy at least one criterium are adjudicated manually. Data Collection & Annotation ::: Adjudication of Annotations ::: Relative Majority Rule. This filter is applied to all questions regardless of their type. Effectively, whenever an entire annotation is agreed upon by at least two annotators, we use all parts of this annotation as the gold annotation. 
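A minimal sketch of the relative majority rule described above; treating answers as raw strings and returning None when no value is supported by at least two annotators (so that the later rules or manual adjudication take over) is an assumption about the implementation.

```python
from collections import Counter

def relative_majority(annotations):
    """Return the answer given by at least two annotators, otherwise None.

    `annotations` is the list of raw answer strings for one role of one headline,
    e.g. ["A couple", "None", "A couple", "officials", "their helicopter"].
    Ties between equally frequent answers are not resolved here and would fall
    through to the subsequent adjudication rules.
    """
    value, count = Counter(annotations).most_common(1)[0]
    return value if count >= 2 else None
```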
Given the headline depicted in Figure FIGREF1 with the following target role annotations by different annotators: “A couple”, “None”, “A couple”, “officials”, “their helicopter”. The resulting gold annotation is “A couple” and the adjudication process for the target ends. Data Collection & Annotation ::: Adjudication of Annotations ::: Most Common Subsequence Rule. This rule is only applied to open text questions. It takes the most common smallest string intersection of all annotations. In the headline above, the experiencer annotations “A couple”, “infuriated officials”, “officials”, “officials”, “infuriated officials” would lead to “officials”. Data Collection & Annotation ::: Adjudication of Annotations ::: Longest Common Subsequence Rule. This rule is only applied two different intersections are the most common (previous rule), and these two intersect. We then accept the longest common subsequence. Revisiting the example for deciding on the cause role with the annotations “by landing their helicopter in the nature reserve”, “by landing their helicopter”, “landing their helicopter in the nature reserve”, “a couple infuriated officials”, “infuriated” the adjudicated gold is “landing their helicopter in the nature reserve”. Table TABREF27 shows through examples of how each rule works and how many instances are “solved” by each adjudication rule. Data Collection & Annotation ::: Adjudication of Annotations ::: Noun Chunks For the role of experiencer, we accept only the most-common noun-chunk(s). The annotations that are left after being processed by all the rules described above are being adjudicated manually by the authors of the paper. We show examples for all roles in Table TABREF29. Analysis ::: Inter-Annotator Agreement We calculate the agreement on the full set of annotations from each phase for the two question types, namely open vs. closed, where the first deal with emotion classification and second with the roles cue, experiencer, cause, and target. Analysis ::: Inter-Annotator Agreement ::: Emotion We use Fleiss' Kappa ($\kappa $) to measure the inter-annotator agreement for closed questions BIBREF50, BIBREF51. In addition, we report the average percentage of overlaps between all pairs of annotators (%) and the mean entropy of annotations in bits. Higher agreement correlates with lower entropy. As Table TABREF38 shows, the agreement on the question whether a headline is emotional or not obtains the highest agreement ($0.34$), followed by the question on intensity ($0.22$). The lowest agreement is on the question to find the most dominant emotion ($0.09$). All metrics show comparably low agreement on the closed questions, especially on the question of the most dominant emotion. This is reasonable, given that emotion annotation is an ambiguous, subjective, and difficult task. This aspect lead to the decision of not purely calculating a majority vote label but to consider the diversity in human interpretation of emotion categories and publish the annotations by all annotators. Table TABREF40 shows the counts of annotators agreeing on a particular emotion. We observe that Love, Pride, and Sadness show highest intersubjectivity followed closely by Fear and Joy. Anger and Annoyance show, given their similarity, lower scores. Note that the micro average of the basic emotions (+ love) is $0.21$ for when more than five annotators agree. Analysis ::: Inter-Annotator Agreement ::: Roles Table TABREF41 presents the mean of pair-wise inter-annotator agreement for each role. 
We report average pair-wise Fleiss' $\kappa $, span-based exact $\textrm {F}_1$ over the annotated spans, accuracy, proportional token overlap, and the measure of agreement on set-valued items, MASI BIBREF52. We observe a fair agreement on the open annotation tasks. The highest agreement is for the role of the Experiencer, followed by Cue, Cause, and Target. This seems to correlate with the length of the annotated spans (see Table TABREF42). This finding is consistent with Kim2018. Presumably, Experiencers are easier to annotate as they are often noun phrases, whereas causes can be convoluted relative clauses. Analysis ::: General Corpus Statistics In the following, we report numbers for the adjudicated data set for simplicity of discussion. Please note that we publish all annotations by all annotators and suggest that computational models consider the distribution of annotations instead of a single adjudicated gold standard; the latter would be a simplification that we consider inappropriate. GoodNewsEveryone contains $5,000$ headlines from various news sources described in the Media Bias Chart BIBREF17. Overall, the corpus is composed of $56,612$ words ($354,173$ characters), out of which $17,513$ are unique. Headlines are short, with 11 words on average; the shortest headline contains 6 words and the longest 32, and headline length ranges from 24 to 199 characters. Table TABREF42 presents the total number of adjudicated annotations for each role in relation to the dominant emotion. GoodNewsEveryone consists of $5,000$ headlines, $3,312$ of which have a dominant emotion annotated via majority vote. The remaining $1,688$ headlines ended in ties for the most dominant emotion category and were adjudicated manually. The emotion category Negative Surprise has the highest number of annotations, while Love has the lowest. In most cases, Cues are single tokens (e. g., “infuriates”, “slams”), while Cause has the largest proportion of annotations spanning more than seven tokens (65% of all annotations in this category). For the role of Experiencer, we see the lowest number of annotations (19%), which is a very different result from the one presented by Kim2018, where Experiencer was the most frequently annotated role. We hypothesize that this is an effect of the domain we annotated: explicit experiencers are more likely to occur in literature (as literary characters) than in news headlines. The Cue and Cause relations dominate the dataset (27% each), followed by Target relations (25%). Table TABREF42 also shows how many times each emotion triggered a certain relation. Negative Surprise and Positive Surprise trigger the most Experiencer, Cause, and Target relations, which is due to the prevalence of annotations for these emotions in the dataset. Further, Figure FIGREF44 shows the distances of the different roles from the cue. Causes and targets are predominantly realized to the right of the cue, while the experiencer occurs more often to its left. Baseline As an estimate of the difficulty of the task, we provide baseline results. We formulate the task as sequence labeling of emotion cues, mentions of experiencers, targets, and causes with a bidirectional long short-term memory network with a CRF layer (biLSTM-CRF) that uses ELMo embeddings as input and an IOB alphabet as output. The results are shown in Table TABREF45.
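A minimal sketch of this sequence-labeling formulation follows. For brevity, the ELMo input is replaced by a plain trainable embedding layer and the CRF layer is omitted in favour of per-token softmax classification; both are simplifications of the biLSTM-CRF described above, and the hyperparameters shown are placeholders.

```python
import torch
import torch.nn as nn

TAGS = ["O",
        "B-cue", "I-cue", "B-experiencer", "I-experiencer",
        "B-cause", "I-cause", "B-target", "I-target"]

class BiLSTMTagger(nn.Module):
    """BiLSTM tagger over IOB labels (CRF layer and ELMo input omitted)."""

    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_tags=len(TAGS)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):          # (batch, seq_len)
        x = self.emb(token_ids)            # (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)                # (batch, seq_len, 2 * hidden)
        return self.out(h)                 # (batch, seq_len, n_tags)

# Per-token cross-entropy training step on dummy data.
model = BiLSTMTagger(vocab_size=20000)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)   # -100 marks padding positions
tokens = torch.randint(1, 20000, (2, 11))          # two headlines, 11 tokens each
gold_tags = torch.randint(0, len(TAGS), (2, 11))
logits = model(tokens)
loss = loss_fn(logits.view(-1, len(TAGS)), gold_tags.view(-1))
loss.backward()
```

A CRF decoding layer on top of these logits would recover the biLSTM-CRF setup used for the reported numbers.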
Conclusion & Future Work We introduce GoodNewsEveryone, a corpus of $5,000$ headlines annotated for emotion categories, semantic roles, and reader perspective. Such a dataset enables answering instance-based questions, such as “who is experiencing what emotion, and why?”, or more general questions, like “what are typical causes of joy in the media?”. To annotate the headlines, we employ a two-phase procedure and use crowdsourcing. To obtain a gold dataset, we aggregate the annotations through automatic heuristics. As the evaluation of the inter-annotator agreement and the baseline model results shows, annotating structures that encompass emotions together with their corresponding roles is a very difficult task. However, we also note that developing such a resource via crowdsourcing has its limitations: due to the subjective nature of emotions, it is very challenging to devise an annotation methodology that reduces dissenting annotations for the domain of headlines. We release the raw dataset, the aggregated gold dataset, the carefully designed questionnaires, and baseline models as a freely available repository (partially only after acceptance of the paper). The released dataset will be useful for social science scholars, since it contains valuable information about the interactions of emotions in news headlines and gives interesting insights into the language of emotion expression in media. The dataset also provides a new testbed for structured prediction models. We are currently investigating the dataset to understand the interaction between media bias and the annotated emotions and roles. Acknowledgements This research has been conducted within the CRETA project (http://www.creta.uni-stuttgart.de/), which is funded by the German Ministry for Education and Research (BMBF), and partially funded by the German Research Council (DFG), project SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). We thank Enrica Troiano and Jeremy Barnes for fruitful discussions.
Annotators had to pass a set of test questions, annotations completed too quickly were excluded, and annotators whose agreement with the majority deviated by more than two standard deviations from the mean were discarded.
a8e0796c1ac353d428d84f4506a92b51bce51b87
a8e0796c1ac353d428d84f4506a92b51bce51b87_0
Q: On what data is the model evaluated? Text: Introduction Reasoning over multi-relational data is a key concept in Artificial Intelligence and knowledge graphs have appeared at the forefront as an effective tool to model such multi-relational data. Knowledge graphs have found increasing importance due to its wider range of important applications such as information retrieval BIBREF0 , natural language processing BIBREF1 , recommender systems BIBREF2 , question-answering BIBREF3 and many more. This has led to the increased efforts in constructing numerous large-scale Knowledge Bases (e.g. Freebase BIBREF4 , DBpedia BIBREF5 , Google's Knowledge graph BIBREF6 , Yago BIBREF7 and NELL BIBREF8 ), that can cater to these applications, by representing information available on the web in relational format. All knowledge graphs share common drawback of incompleteness and sparsity and hence most existing relational learning techniques focus on using observed triplets in an incomplete graph to infer unobserved triplets for that graph BIBREF9 . Neural embedding techniques that learn vector space representations of entities and relationships have achieved remarkable success in this task. However, these techniques only focus on learning from a single graph. In addition to incompleteness property, these knowledge graphs also share a set of overlapping entities and relationships with varying information about them. This makes a compelling case to design a technique that can learn over multiple graphs and eventually aid in constructing a unified giant graph out of them. While research on learning representations over single graph has progressed rapidly in recent years BIBREF10 , BIBREF6 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , there is a conspicuous lack of principled approach to tackle the unique challenges involved in learning across multiple graphs. One approach to multi-graph representation learning could be to first solve graph alignment problem to merge the graphs and then use existing relational learning methods on merged graph. Unfortunately, graph alignment is an important but still unsolved problem and there exist several techniques addressing its challenges BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 in limited settings. The key challenges for the graph alignment problem emanate from the fact that the real world data are noisy and intricate in nature. The noisy or sparse data make it difficult to learn robust alignment features, and data abundance leads to computational challenges due to the combinatorial permutations needed for alignment. These challenges are compounded in multi-relational settings due to heterogeneous nodes and edges in such graphs. Recently, deep learning has shown significant impact in learning useful information over noisy, large-scale and heterogeneous graph data BIBREF19 . We, therefore, posit that combining graph alignment task with deep representation learning across multi-relational graphs has potential to induce a synergistic effect on both tasks. Specifically, we identify that a key component of graph alignment process—entity linkage—also plays a vital role in learning across graphs. For instance, the embeddings learned over two knowledge graphs for an actor should be closer to one another compared to the embeddings of all the other entities. Similarly, the entities that are already aligned together across the two graphs should produce better embeddings due to the shared context and data. 
To model this phenomenon, we propose LinkNBed, a novel deep learning framework that jointly performs representation learning and graph linkage task. To achieve this, we identify key challenges involved in the learning process and make the following contributions to address them: Knowledge Graph Representation A knowledge graph $\mathcal {G}$ comprises of set of facts represented as triplets ( $e^s,r,e^o$ ) denoting the relationship $r$ between subject entity $e^s$ and object entity $e^o$ . Associated to this knowledge graph, we have a set of attributes that describe observed characteristics of an entity. Attributes are represented as set of key-value pairs for each entity and an attribute can have null (missing) value for an entity. We follow Open World Assumption - triplets not observed in knowledge graph are considered to be missing but not false. We assume that there are no duplicate triplets or self-loops. Multi-Graph Relational Learning Definition. Given a collection of knowledge graphs $\mathcal {G}$ , Multi-Graph Relational Learning refers to the the task of learning information rich representations of entities and relationships across graphs. The learned embeddings can further be used to infer new knowledge in the form of link prediction or learn new labels in the form of entity linkage. We motivate our work with the setting of two knowledge graphs where given two graphs $G_1, G_2 \in \mathcal {G}$ , the task is to match an entity $e_{G_1} \in G_1$ to an entity $e_{G_2} \in G_2$ if they represent the same real-world entity. We discuss a straightforward extension of this setting to more than two graphs in Section 7. Notations. Let $X$ and $Y$ represent realization of two such knowledge graphs extracted from two different sources. Let $n_e^X$ and $n_e^Y$ represent number of entities in $X$ and $Y$ respectively. Similarly, $n_r^X$ and $n_r^Y$ represent number of relations in $X$ and $Y$ . We combine triplets from both $Y$0 and $Y$1 to obtain set of all observed triplets $Y$2 where $Y$3 is total number of available records across from both graphs. Let $Y$4 and $Y$5 be the set of all entities and all relations in $Y$6 respectively. Let $Y$7 and $Y$8 . In addition to $Y$9 , we also have set of linkage labels $n_e^X$0 for entities between $n_e^X$1 and $n_e^X$2 . Each record in $n_e^X$3 is represented as triplet ( $n_e^X$4 , $n_e^X$5 , $n_e^X$6 ) where $n_e^X$7 when the entities are matched and $n_e^X$8 otherwise. Proposed Method: LinkNBed We present a novel inductive multi-graph relational learning framework that learns a set of aggregator functions capable of ingesting various contextual information for both entities and relationships in multi-relational graph. These functions encode the ingested structural and semantic information into low-dimensional entity and relation embeddings. Further, we use these representations to learn a relational score function that computes how two entities are likely to be connected in a particular relationship. The key idea behind this formulation is that when a triplet is observed, the relationship between the two entities can be explained using various contextual information such as local neighborhood features of both entities, attribute features of both entities and type information of the entities which participate in that relationship. 
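The multi-graph training data described in the notation above can be represented with two simple record types; the field names, identifiers, and the example fact below are purely illustrative and are not taken from the authors' code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    subject: str    # e^s
    relation: str   # r
    obj: str        # e^o
    graph: str      # "X" or "Y": which source graph the fact was observed in

@dataclass(frozen=True)
class LinkageLabel:
    entity_x: str   # entity from graph X
    entity_y: str   # entity from graph Y
    matched: int    # 1 if both refer to the same real-world entity, else 0

# D: union of observed triplets from both graphs; L: partial cross-graph labels.
D = [
    Triplet("imdbX/han_solo", "appears_in", "imdbX/star_wars", graph="X"),
    Triplet("fbY/han_solo", "character_of", "fbY/star_wars", graph="Y"),
]
L = [LinkageLabel("imdbX/han_solo", "fbY/han_solo", matched=1)]
```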
We outline two key insights for establishing the relationships between embeddings of the entities over multiple graphs in our framework: Insight 1 (Embedding Similarity): If the two entities $e^X \in X$ and $e^Y \in Y$ represent the same real-world entity then their embeddings $\mathbf {e^X}$ and $\mathbf {e^Y}$ will be close to each other. Insight 2 (Semantic Replacement): For a given triplet $t = (e^s, r, e^o) \in X$ , denote $g(t)$ as the function that computes a relational score for $t$ using entity and relation embeddings. If there exists a matching entity $e^{s^{\prime }} \in Y$ for $e^s \in X$ , denote $t^{\prime } = (e^{s^{\prime }},r,e^o)$ obtained after replacing $e^s$ with $e^{s^{\prime }}$ . In this case, $g(t) \sim g(t^{\prime })$ i.e. score of triplets $t$ and $g(t)$0 will be similar. For a triplet $(e^s, r , e^o) \in \mathcal {D}$ , we describe encoding mechanism of LinkNBed as three-layered architecture that computes the final output representations of $\mathbf {z}^{r}, \mathbf {z}^{e^s}, \mathbf {z}^{e^o}$ for the given triplet. Figure 1 provides an overview of LinkNBed architecture and we describe the three steps below: Atomic Layer Entities, Relations, Types and Attributes are first encoded in its basic vector representations. We use these basic representations to derive more complex contextual embeddings further. Entities, Relations and Types. The embedding vectors corresponding to these three components are learned as follows: ves = f(WE es ) veo = f(WE eo ) vr = f(WR r ) vt = f(WT t ) where $\mathbf {v^{e^s}}$ , $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$ . $\mathbf {e^s}$ , $\mathbf {e^o} \in \mathbb {R}^{n}$ are “one-hot" representations of $e^s$ and $e^o$ respectively. $\mathbf {v^{r}} \in \mathbb {R}^{k}$ and $\mathbf {r} \in \mathbb {R}^{m}$ is “one-hot" representation of $r$ . $\mathbf {v^{t}} \in \mathbb {R}^{q}$ and $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$0 is "one-hot" representation of $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$1 . $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$2 , $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$3 and $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$4 are the entity, relation and type embedding matrices respectively. $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$5 is a nonlinear activation function (Relu in our case). $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$6 , $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$7 and $\mathbf {v^{e^o}} \in \mathbb {R}^{d}$8 can be initialized randomly or using pre-trained word embeddings or vector compositions based on name phrases of components BIBREF20 . Attributes. For a given attribute $a$ represented as key-value pair, we use paragraph2vec BIBREF21 type of embedding network to learn attribute embedding. Specifically, we represent attribute embedding vector as: a = f(Wkey akey + Wval aval ) where $\mathbf {a} \in \mathbb {R}^{y}$ , $\mathbf {a_{key}} \in \mathbb {R}^{u}$ and $\mathbf {a_{val}} \in \mathbb {R}^{v}$ . $\mathbf {W^{key}} \in \mathbb {R}^{y \times u}$ and $\mathbf {W^{val}} \in \mathbb {R}^{y \times v}$ . $\mathbf {a_{key}}$ will be “one-hot" vector and $\mathbf {a_{val}}$ will be feature vector. Note that the dimensions of the embedding vectors do not necessarily need to be the same. Contextual Layer While the entity and relationship embeddings described above help to capture very generic latent features, embeddings can be further enriched to capture structural information, attribute information and type information to better explain the existence of a fact. 
Such information can be modeled as context of nodes and edges in the graph. To this end, we design the following canonical aggregator function that learns various contextual information by aggregating over relevant embedding vectors: c(z) = AGG({z', z' C(z)}) where $\mathbf {c}(z)$ is the vector representation of the aggregated contextual information for component $z$ . Here, component $z$ can be either an entity or a relation. $C(z)$ is the set of components in the context of $z$ and $\mathbf {z^{\prime }}$ correspond to the vector embeddings of those components. AGG is the aggregator function which can take many forms such Mean, Max, Pooling or more complex LSTM based aggregators. It is plausible that different components in a context may have varied impact on the component for which the embedding is being learned. To account for this, we employ a soft attention mechanism where we learn attention coefficients to weight components based on their impact before aggregating them. We modify Eq. "Implementation Details" as: c(z) = AGG(q(z) * {z', z' C(z)}) where q(z) = (z)z' C(z) (z') and $\theta _z$ 's are the parameters of attention model. Following contextual information is modeled in our framework: Entity Neighborhood Context $\mathbf {N_c}(e) \in \mathbb {R}^d$ . Given a triplet $(e^s,r,e^o)$ , the neighborhood context for an entity $e^s$ will be the nodes located near $e^s$ other than the node $e^o$ . This will capture the effect of local neighborhood in the graph surrounding $e^s$ that drives $e^s$ to participate in fact $(e^s,r,e^o)$ . We use Mean as aggregator function. As there can be large number of neighbors, we collect the neighborhood set for each entity as a pre-processing step using a random walk method. Specifically, given a node $e$ , we run $k$ rounds of random-walks of length $(e^s,r,e^o)$0 following BIBREF22 and create set $(e^s,r,e^o)$1 by adding all unique nodes visited across these walks. This context can be similarly computed for object entity. Entity Attribute Context $\mathbf {A_c}(e) \in \mathbb {R}^y$ . For an entity $e$ , we collect all attribute embeddings for $e$ obtained from Atomic Layer and learn aggregated information over them using Max operator given in Eq. "Implementation Details" . Relation Type Context $\mathbf {T_c}(r) \in \mathbb {R}^q$ . We use type context for relation embedding i.e. for a given relationship $r$ , this context aims at capturing the effect of type of entities that have participated in this relationship. For a given triplet $(e^s, r , e^o)$ , type context for relationship $r$ is computed by aggregation with mean over type embeddings corresponding to the context of $r$ . Appendix C provides specific forms of contextual information. 
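A small NumPy sketch of the attention-weighted aggregation used in this contextual layer follows. The exact parameterization of the attention scores is not fully recoverable from the text, so a simple bilinear scoring function is assumed here; the attention coefficients sum to one, so the weighted sum plays the role of the Mean aggregator.

```python
import numpy as np

def softmax(scores):
    scores = scores - scores.max()
    exp = np.exp(scores)
    return exp / exp.sum()

def aggregate_context(z, context, theta):
    """Attention-weighted aggregation c(z) over the context of a component.

    z       : (d,)   embedding of the component (entity or relation)
    context : (m, d) embeddings of the m components in C(z)
    theta   : (d, d) attention parameters (a bilinear form is an assumption)
    """
    scores = context @ (theta @ z)       # one unnormalised score per context item
    attn = softmax(scores)               # attention coefficients q(z)
    return (attn[:, None] * context).sum(axis=0)   # weighted mean over the context

# Example: neighborhood context of an entity from a handful of sampled neighbors.
d = 8
rng = np.random.default_rng(0)
neighbor_embeddings = rng.normal(size=(5, d))
entity_embedding = rng.normal(size=d)
theta = rng.normal(size=(d, d))
n_c = aggregate_context(entity_embedding, neighbor_embeddings, theta)
```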
Representation Layer Having computed the atomic and contextual embeddings for a triplet $(e^s, r, e^o)$ , we obtain the final embedded representations of entities and relation in the triplet using the following formulation: $ \mathbf {z^{e^s}} &= \sigma (\underbrace{\mathbf {W_1v^{e^s}}}_\text{Subject Entity Embedding} + \underbrace{\mathbf {W_2 N_c}(e^s)}_\text{Neighborhood Context}\\ &+ \underbrace{\mathbf {W_3 A_c}(e^s))}_\text{Subject Entity Attributes} $ $ \mathbf {z^{e^o}} &= \sigma (\underbrace{\mathbf {W_1v^{e^o}}}_\text{Object Entity Embedding} + \underbrace{\mathbf {W_2 N_c}(e^o)}_\text{Neighborhood Context}\\ &+ \underbrace{\mathbf {W_3 A_c}(e^o))}_\text{Object Entity Attributes} $ zr = (W4vrRelation Embedding + W5 Tc(r))Entity Type Context where $\mathbf {W_1}, \mathbf {W_2} \in \mathbb {R}^{d \times d}$ , $\mathbf {W_3} \in \mathbb {R}^{d \times y}$ , $\mathbf {W_4} \in \mathbb {R}^{d \times k}$ and $\mathbf {W_5} \in \mathbb {R}^{d \times q}$ . $\sigma $ is nonlinear activation function – generally Tanh or Relu. Following is the rationale for our formulation: An entity's representation can be enriched by encoding information about the local neighborhood features and attribute information associated with the entity in addition to its own latent features. Parameters $\mathbf {W_1}, \mathbf {W_2}, \mathbf {W_3}$ learn to capture these different aspects and map them into the entity embedding space. Similarly, a relation's representation can be enriched by encoding information about entity types that participate in that relationship in addition to its own latent features. Parameters $\mathbf {W_4}, \mathbf {W_5}$ learn to capture these aspects and map them into the relation embedding space. Further, as the ultimate goal is to jointly learn over multiple graphs, shared parameterization in our model facilitate the propagation of information across graphs thereby making it a graph-independent inductive model. The flexibility of the model stems from the ability to shrink it (to a very simple model considering atomic entity and relation embeddings only) or expand it (to a complex model by adding different contextual information) without affecting any other step in the learning procedure. Relational Score Function Having observed a triplet $(e^s,r, e^o)$ , we first use Eq. 7, 8 and 9 to compute entity and relation representations. We then use these embeddings to capture relational interaction between two entities using the following score function $g(\cdot )$ : g(es, r, eo) = (zrT (zes zeo)) where $\mathbf {z}^{r}, \mathbf {z}^{e^s}, \mathbf {z}^{e^o} \in \mathbb {R}^d$ are $d$ -dimensional representations of entity and relationships as described below. $\sigma $ is the nonlinear activation function and $\odot $ represent element-wise product. Objective Function The complete parameter space of the model can be given by: $\mathbf {\Omega = \lbrace \lbrace W_i\rbrace _{i=1}^5, W^E, W^R, W^{key}, W^{val}, W^t ,\Theta \rbrace }$ . To learn these parameters, we design a novel multi-task objective function that jointly trains over two graphs. As identified earlier, the goal of our model is to leverage the available linkage information across graphs for optimizing the entity and relation embeddings such that they can explain the observed triplets across the graphs. Further, we want to leverage these optimized embeddings to match entities across graphs and expand the available linkage information. 
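Before the two loss terms are introduced below, the representation layer and the relational score function defined above can be transcribed into a minimal NumPy sketch; the shapes follow the definitions in the text, and Tanh is chosen as the nonlinearity (the paper allows Tanh or ReLU).

```python
import numpy as np

def sigma(x):
    return np.tanh(x)   # nonlinearity; the paper uses Tanh or ReLU

def entity_repr(v_e, n_c, a_c, W1, W2, W3):
    """z^e = sigma(W1 v^e + W2 N_c(e) + W3 A_c(e))"""
    return sigma(W1 @ v_e + W2 @ n_c + W3 @ a_c)

def relation_repr(v_r, t_c, W4, W5):
    """z^r = sigma(W4 v^r + W5 T_c(r))"""
    return sigma(W4 @ v_r + W5 @ t_c)

def relational_score(z_s, z_r, z_o):
    """g(e^s, r, e^o) = sigma(z_r^T (z_s * z_o)) with element-wise product."""
    return sigma(np.dot(z_r, z_s * z_o))
```

The same score function is reused for the corrupted and label-perturbed triplets in the two losses that follow.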
To achieve this goal, we define following two different loss functions catering to each learning task and jointly optimize over them as a multi-task objective to learn model parameters: Relational Learning Loss. This is conventional loss function used to learn knowledge graph embeddings. Specifically, given a p-th triplet $(e^s, r, e^o)_p$ from training set $\mathcal {D}$ , we sample $C$ negative samples by replacing either head or tail entity and define a contrastive max margin function as shown in BIBREF20 : $$\begin{split} L_{rel} &= \sum \limits _{c=1}^{C} \max (0, \gamma - g(e^s_p,r_p,e^o_p) \\ &+ g^{\prime }(e^s_c,r_p,e^o_p)) \end{split}$$ (Eq. 13) where, $\gamma $ is margin, $e^s_c$ represent corrupted entity and $g^{\prime }(e^s_c,r_p,e^o_p)$ represent corrupted triplet score. Linkage Learning Loss: We design a novel loss function to leverage pairwise label set $\mathcal {L}$ . Given a triplet $(e^s_X, r_X, e^o_X)$ from knowledge graph $X$ , we first find the entity $e_Y^+$ from graph $Y$ that represent the same real-world entity as $e^s_X$ . We then replace $e^s_X$ with $e_Y^+$ and compute score $g(e_Y^+,r_X,e^o_X)$ . Next, we find set of all entities $E_Y^-$ from graph $(e^s_X, r_X, e^o_X)$0 that has a negative label with entity $(e^s_X, r_X, e^o_X)$1 . We consider them analogous to the negative samples we generated for Eq. 13 . We then propose the label learning loss function as: $$\begin{split} L_{lab} &= \sum \limits _{z=1}^{Z} \max (0, \gamma - g(e_Y^+,r_X,e^o_X) \\ &+ (g^{\prime }(e_Y^-,r_X,e^o_X)_z)) \end{split}$$ (Eq. 14) where, $Z$ is the total number of negative labels for $e_X$ . $\gamma $ is margin which is usually set to 1 and $e_Y^- \in E_Y^-$ represent entity from graph $Y$ with which entity $e^s_X$ had a negative label. Please note that this applies symmetrically for the triplets that originate from graph $Y$ in the overall dataset. Note that if both entities of a triplet have labels, we will include both cases when computing the loss. Eq. 14 is inspired by Insight 1 and Insight 2 defined earlier in Section 2. Given a set $\mathcal {D}$ of $N$ observed triplets across two graphs, we define complete multi-task objective as: $$\mathbf {L}(\mathbf {\Omega }) = \sum \limits _{i=1}^{N} [b \cdot L_{rel} + (1-b) \cdot L_{lab}] + \lambda \left\Vert \mathbf {\Omega } \right\Vert _2^2$$ (Eq. 15) where $\mathbf {\Omega }$ is set of all model parameters and $\lambda $ is regularization hyper-parameter. $b$ is weight hyper-parameter used to attribute importance to each task. We train with mini-batch SGD procedure (Algorithm "Objective Function" ) using Adam Optimizer. [t!] LinkNBed mini-batch Training Input: Mini-batch $\mathcal {M}$ , Negative Sample Size $C$ , Negative Label Size $Z$ , Attribute data $att\_data$ , Neighborhood data $nhbr\_data$ , Type data $type\_data$ , Positive Label Dict $pos\_dict$ , Negative Label Dict $\lambda $0 Output: Mini-batch Loss $\lambda $1 . $\lambda $2 score_pos = []; score_neg = []; score_pos_lab = []; score_neg_lab = [] $\lambda $3 to size( $\lambda $4 ) input_tuple = $\lambda $5 = ( $\lambda $6 ) sc = compute_triplet_score( $\lambda $7 ) (Eq. 
"Relational Score Function" ) score_pos.append(sc) $\lambda $8 to $\lambda $9 Select $b$0 from entity list such that $b$1 and $b$2 and $b$3 sc_neg = compute_triplet_score( $b$4 ) score_neg.append(sc_neg) $b$5 in $b$6 $b$7 = positive label for $b$8 sc_pos_l = compute_triplet_score( $b$9 ) score_pos_lab.append(sc_pos_l) $\mathcal {M}$0 to $\mathcal {M}$1 Select $\mathcal {M}$2 from $\mathcal {M}$3 sc_neg_l = compute_triplet_score( $\mathcal {M}$4 ) score_neg_lab.append(sc_neg_l) $\mathcal {M}$5 compute_minibatch_loss(score_pos, score_neg, score_pos_lab, score_neg_lab) (Eq. 15 ) Back-propagate errors and update parameters $\mathcal {M}$6 return $\mathcal {M}$7 Missing Positive Labels. It is expensive to obtain positive labels across multiple graphs and hence it is highly likely that many entities will not have positive labels available. For those entities, we will modify Eq. 14 to use the original triplet $(e^s_X, r_X, e^o_X)$ in place of perturbed triplet $g(e_Y^+,r_X,e^o_X)$ for the positive label. The rationale here again arises from Insight 2 wherein embeddings of two duplicate entities should be able to replace each other without affecting the score. Training Time Complexity. Most contextual information is pre-computed and available to all training steps which leads to constant time embedding lookup for those context. But for attribute network, embedding needs to be computed for each attribute separately and hence the complexity to compute score for one triplet is $\mathcal {O}(2a)$ where $a$ is number of attributes. Also for training, we generate $C$ negative samples for relational loss function and use $Z$ negative labels for label loss function. Let $k = C + Z$ . Hence, the training time complexity for a set of $n$ triplets will be $\mathcal {O}(2ak*n)$ which is linear in number of triplets with a constant factor as $ak << n$ for real world knowledge graphs. This is desirable as the number of triplets tend to be very large per graph in multi-relational settings. Memory Complexity. We borrow notations from BIBREF9 and describe the parameter complexity of our model in terms of the number of each component and corresponding embedding dimension requirements. Let $H_a = 2*N_eH_e + N_rH_r + N_tH_t + N_kH_k + N_vH_v$ . The parameter complexity of our model is: $H_a * (H_b + 1)$ . Here, $N_e$ , $N_r$ , $N_t$ , $N_k$ , $N_v$ signify number of entities, relations, types, attribute keys and vocab size of attribute values across both datasets. Here $H_b$ is the output dimension of the hidden layer. Datasets We evaluate LinkNBed and baselines on two real world knowledge graphs: D-IMDB (derived from large scale IMDB data snapshot) and D-FB (derived from large scale Freebase data snapshot). Table 1 provides statistics for our final dataset used in the experiments. Appendix B.1 provides complete details about dataset processing. Baselines We compare the performance of our method against state-of-the-art representation learning baselines that use neural embedding techniques to learn entity and relation representation. Specifically, we consider compositional methods of RESCAL BIBREF10 as basic matrix factorization method, DISTMULT BIBREF14 as simple multiplicative model good for capturing symmetric relationships, and Complex BIBREF11 , an upgrade over DISTMULT that can capture asymmetric relationships using complex valued embeddings. 
We also compare against translational model of STransE that combined original structured embedding with TransE and has shown state-of-art performance in benchmark testing BIBREF23 . Finally, we compare with GAKE BIBREF24 , a model that captures context in entity and relationship representations. In addition to the above state-of-art models, we analyze the effectiveness of different components of our model by comparing with various versions that use partial information. Specifically, we report results on following variants: LinkNBed - Embed Only. Only use entity embeddings, LinkNBed - Attr Only. Only use Attribute Context, LinkNBed - Nhbr Only. Only use Neighborhood Context, LinkNBed - Embed + Attr. Use both Entity embeddings and Attribute Context, LinkNBed - Embed + Nhbr. Use both Entity embeddings and Neighbor Context and LinkNBed - Embed All. Use all three Contexts. Evaluation Scheme We evaluate our model using two inference tasks: Link Prediction. Given a test triplet $(e^s, r, e^o)$ , we first score this triplet using Eq. "Relational Score Function" . We then replace $e^o$ with all other entities in the dataset and filter the resulting set of triplets as shown in BIBREF12 . We score the remaining set of perturbed triplets using Eq. "Relational Score Function" . All the scored triplets are sorted based on the scores and then the rank of the ground truth triplet is used for the evaluation. We use this ranking mechanism to compute HITS@10 (predicted rank $\le $ 10) and reciprocal rank ( $\frac{1}{rank}$ ) of each test triplet. We report the mean over all test samples. Entity Linkage. In alignment with Insight 2, we pose a novel evaluation scheme to perform entity linkage. Let there be two ground truth test sample triplets: $(e_X, e_Y^+, 1)$ representing a positive duplicate label and $(e_X, e_Y^-, 0)$ representing a negative duplicate label. Algorithm "Evaluation Scheme" outlines the procedure to compute linkage probability or score $q$ ( $ \in [0,1]$ ) for the pair $(e_X, e_Y)$ . We use $L1$ distance between the two vectors analogous to Mean Absolute Error (MAE). In lieu of hard-labeling test pairs, we use score $q$ to compute Area Under the Precision-Recall Curve (AUPRC). [t!] Entity Linkage Score Computation Input: Test pair – $(e_X \in X, e_Y \in Y)$ . Output: Linkage Score – $q$ . 1. Collect all triplets involving $e_X$ from graph $X$ and all triplets involving $e_Y$ from graph $Y$ into a combined set $\mathcal {O}$ . Let $|\mathcal {O}| = k$ . 2. Construct $S_{orig} \in \mathbb {R}^k$ . For each triplet $o \in \mathcal {O}$ , compute score $q$0 using Eq. "Relational Score Function" and store the score in $q$1 . 3. Create triplet set $q$2 as following: $q$3 contain $q$4 Replace $q$5 with $q$6 to create perturbed triplet $q$7 and store it in $q$8 $q$9 contain $e_X$0 Replace $e_X$1 with $e_X$2 to create perturbed triplet $e_X$3 and store it in $e_X$4 4. Construct $e_X$5 . For each triplet $e_X$6 , compute score $e_X$7 using Eq. "Relational Score Function" and store the score in $e_X$8 . 5. Compute $e_X$9 . Elements in $X$0 and $X$1 have one-one correspondence so take the mean absolute difference: $X$2 = $X$3 - $X$4 return $X$5 For the baselines and the unsupervised version (with no labels for entity linkage) of our model, we use second stage multilayer Neural Network as classifier for evaluating entity linkage. Appendix B.2 provides training configuration details. Predictive Analysis Link Prediction Results. 
We train LinkNBed model jointly across two knowledge graphs and then perform inference over individual graphs to report link prediction reports. For baselines, we train each baseline on individual graphs and use parameters specific to the graph to perform link prediction inference over each individual graph. Table 2 shows link prediction performance for all methods. Our model variant with attention mechanism outperforms all the baselines with $4.15\%$ improvement over single graph state-of-the-art Complex model on D-IMDB and $8.23\%$ improvement on D-FB dataset. D-FB is more challenging dataset to learn as it has a large set of sparse relationships, types and attributes and it has an order of magnitude lesser relational evidence (number of triplets) compared to D-IMDB. Hence, LinkNBed's pronounced improvement on D-FB demonstrates the effectiveness of the model. The simplest version of LinkNBed with only entity embeddings resembles DISTMULT model with different objective function. Hence closer performance of those two models aligns with expected outcome. We observed that the Neighborhood context alone provides only marginal improvements while the model benefits more from the use of attributes. Despite being marginal, attention mechanism also improves accuracy for both datasets. Compared to the baselines which are obtained by trained and evaluated on individual graphs, our superior performance demonstrates the effectiveness of multi-graph learning. Entity Linkage Results. We report entity linkage results for our method in two settings: a.) Supervised case where we train using both the objective functions. b.) Unsupervised case where we learn with only the relational loss function. The latter case resembles the baseline training where each model is trained separately on two graphs in an unsupervised manner. For performing the entity linkage in unsupervised case for all models, we first train a second stage of simple neural network classifier and then perform inference. In the supervised case, we use Algorithm "Evaluation Scheme" for performing the inference. Table 3 demonstrates the performance of all methods on this task. Our method significantly outperforms all the baselines with $33.86\%$ over second best baseline in supervised case and $17.35\%$ better performance in unsupervised case. The difference in the performance of our method in two cases demonstrate that the two training objectives are helping one another by learning across the graphs. GAKE's superior performance on this task compared to the other state-of-the-art relational baselines shows the importance of using contextual information for entity linkage. Performance of other variants of our model again demonstrate that attribute information is more helpful than neighborhood context and attention provides marginal improvements. We provide further insights with examples and detailed discussion on entity linkage task in Appendix A. Neural Embedding Methods for Relational Learning Compositional Models learn representations by various composition operators on entity and relational embeddings. These models are multiplicative in nature and highly expressive but often suffer from scalability issues. 
Initial models include RESCAL BIBREF10 that uses a relation specific weight matrix to explain triplets via pairwise interactions of latent features, Neural Tensor Network BIBREF20 , more expressive model that combines a standard NN layer with a bilinear tensor layer and BIBREF6 that employs a concatenation-projection method to project entities and relations to lower dimensional space. Later, many sophisticated models (Neural Association Model BIBREF25 , HoLE BIBREF26 ) have been proposed. Path based composition models BIBREF27 and contextual models GAKE BIBREF24 have been recently studied to capture more information from graphs. Recently, model like Complex BIBREF11 and Analogy BIBREF28 have demonstrated state-of-the art performance on relational learning tasks. Translational Models ( BIBREF29 , BIBREF30 , BIBREF12 , BIBREF31 , BIBREF32 , BIBREF13 ) learn representation by employing translational operators on the embeddings and optimizing based on their score. They offer an additive and efficient alternative to expensive multiplicative models. Due to their simplicity, they often loose expressive power. For a comprehensive survey of relational learning methods and empirical comparisons, we refer the readers to BIBREF9 , BIBREF23 , BIBREF33 and BIBREF14 . None of these methods address multi-graph relational learning and cannot be adapted to tasks like entity linkage in straightforward manner. Entity Resolution in Relational Data Entity Resolution refers to resolving entities available in knowledge graphs with entity mentions in text. BIBREF34 proposed entity disambiguation method for KB population, BIBREF35 learns entity embeddings for resolution, BIBREF36 propose a sophisticated DNN architecture for resolution, BIBREF37 proposes entity resolution across multiple social domains, BIBREF38 jointly embeds text and knowledge graph to perform resolution while BIBREF39 proposes Attention Mechanism for Collective Entity Resolution. Learning across multiple graphs Recently, learning over multiple graphs have gained traction. BIBREF15 divides a multi-relational graph into multiple homogeneous graphs and learns associations across them by employing product operator. Unlike our work, they do not learn across multiple multi-relational graphs. BIBREF40 provides logic based insights for cross learning, BIBREF16 does pairwise entity matching across multi-relational graphs and is very expensive, BIBREF41 learns embeddings to support multi-lingual learning and Big-Align BIBREF17 tackles graph alignment problem efficiently for bipartite graphs. None of these methods learn latent representations or jointly train graph alignment and learning which is the goal of our work. Concluding Remarks and Future Work We present a novel relational learning framework that learns entity and relationship embeddings across multiple graphs. The proposed representation learning framework leverage an efficient learning and inference procedure which takes into account the duplicate entities representing the same real-world entity in a multi-graph setting. We demonstrate superior accuracies on link prediction and entity linkage tasks compared to the existing approaches that are trained only on individual graphs. We believe that this work opens a new research direction in joint representation learning over multiple knowledge graphs. Many data driven organizations such as Google and Microsoft take the approach of constructing a unified super-graph by integrating data from multiple sources. 
Such unification has shown to significantly help in various applications, such as search, question answering, and personal assistance. To this end, there exists a rich body of work on linking entities and relations, and conflict resolution (e.g., knowledge fusion BIBREF6 . Still, the problem remains challenging for large scale knowledge graphs and this paper proposes a deep learning solution that can play a vital role in this construction process. In real-world setting, we envision our method to be integrated in a large scale system that would include various other components for tasks like conflict resolution, active learning and human-in-loop learning to ensure quality of constructed super-graph. However, we point out that our method is not restricted to such use cases—one can readily apply our method to directly make inference over multiple graphs to support applications like question answering and conversations. For future work, we would like to extend the current evaluation of our work from a two-graph setting to multiple graphs. A straightforward approach is to create a unified dataset out of more than two graphs by combining set of triplets as described in Section 2, and apply learning and inference on the unified graph without any major change in the methodology. Our inductive framework learns functions to encode contextual information and hence is graph independent. Alternatively, one can develop sophisticated approaches with iterative merging and learning over pairs of graphs until exhausting all graphs in an input collection. Acknowledgments We would like to give special thanks to Ben London, Tong Zhao, Arash Einolghozati, Andrew Borthwick and many others at Amazon for helpful comments and discussions. We thank the reviewers for their valuable comments and efforts towards improving our manuscript. This project was supported in part by NSF(IIS-1639792, IIS-1717916). Discussion and Insights on Entity Linkage Task Entity linkage task is novel in the space of multi-graph learning and yet has not been tackled by any existing relational learning approaches. Hence we analyze our performance on the task in more detail here. We acknowledge that baseline methods are not tailored to the task of entity linkage and hence their low performance is natural. But we observe that our model performs well even in the unsupervised scenario where essentially the linkage loss function is switched off and our model becomes a relational learning baseline. We believe that the inductive ability of our model and shared parameterization helps to capture knowledge across graphs and allows for better linkage performance. This outcome demonstrates the merit in multi-graph learning for different inference tasks. Having said that, we admit that our results are far from comparable to state-of-the-art linkage results (Das et al., 2017) and much work needs to be done to advance representation and relational learning methods to support effective entity linkage. But we note that our model works for multiple types of entities in a very heterogeneous environment with some promising results which serves as an evidence to pursue this direction for entity linkage task. We now discuss several use-case scenarios where our model did not perform well to gain insights on what further steps can be pursued to improve over this initial model: Han Solo with many attributes (False-negative example). Han Solo is a fictional character in Star Wars and appears in both D-IMDB and D-FB records. 
We have a positive label for this sample but we do not predict it correctly. Our model combines multiple components to effectively learn across graphs. Hence we investigated all the components to check for the failures. One observation we have is the mismatch in the amount of attributes across the two datasets. Further, this is compounded by multi-value attributes. As described, we use paragraph2vec like model to learn attribute embeddings where for each attribute, we aggregate over all its values. This seems to be computing embeddings that are very noisy. As we have seen attributes are affecting the final result with high impact and hence learning very noisy attributes is not helping. Further, the mismatch in number of types is also an issue. Even after filtering the types, the difference is pretty large. Types are also included as attributes and they contribute context to relation embeddings. We believe that the skew in type difference is making the model learn bad embeddings. Specifically this happens in cases where lot of information is available like Han Solo as it lead to the scenario of abundant noisy data. With our investigation, we believe that contextual embeddings need further sophistication to handle such scenarios. Further, as we already learn relation, type and attribute embeddings in addition to entity embeddings, aligning relations, types and attributes as integral task could also be an important future direction. Alfred Pennyworth is never the subject of matter (False-negative example). In this case, we observe a new pattern which was found in many other examples. While there are many triples available for this character in D-IMDB, very few triplets are available in D-FB. This skew in availability of data hampers the learning of deep network which ends up learning very different embeddings for two realizations. Further, we observe another patter where Alfred Pennyworth appears only as an object in all those few triplets of D-FB while it appears as both subject and object in D-IMDB. Accounting for asymmetric relationships in an explicit manner may become helpful for this scenario. Thomas Wayne is Martha Wayne! (False-positive example). This is the case of abundance of similar contextual information as our model predicts Thomas Wayne and Martha Wayne to be same entity. Both the characters share a lot of context and hence many triples and attributes, neighborhood etc. are similar for of them eventually learning very similar embeddings. Further as we have seen before, neighborhood has shown to be a weak context which seems to hamper the learning in this case. Finally, the key insight here is to be able to attend to the very few discriminative features for the entities in both datasets (e.g. male vs female) and hence a more sophisticated attention mechanism would help. In addition to the above specific use cases, we would like to discuss insights on following general concepts that naturally occur when learning over multiple graphs: Additional Dataset Details We perform light pre-processing on the dataset to remove self-loops from triples, clean the attributes to remove garbage characters and collapse CVT (Compound Value Types) entities into single triplets. Further we observe that there is big skew in the number of types between D-IMDB and D-FB. D-FB contains many non-informative type information such as $\#base.*$ . We remove all such non-informative types from both datasets which retains 41 types in D-IMDB and 324 types in D-FB. 
This filtering does not reduce the number of entities or triples by significant number (less than 1000 entities filtered) For comparing at scale with baselines, we further reduce dataset using similar techniques adopted in producing widely accepted FB-15K or FB-237K. Specifically, we filter relational triples such that both entities in a triple contained in our dataset must appear in more than $k$ triples. We use $k=50$ for D-FB and $k=100$ for D-IMDB as D-IMDB has orders of magnitude more triples compared to D-FB in our curated datasets. We still maintain the overall ratio of the number of triples between the two datasets. Positive and Negative Labels. We obtain 500662 positive labels using the existing links between the two datasets. Note that any entity can have only one positive label. We also generate 20 negative labels for each entity using the following method: (i) randomly select 10 entities from the other graph such that both entities belong to the same type and there exist no positive label between entities (ii) randomly select 10 entities from the other graph such that both entities belong to different types. Training Configurations We performed hyper-parameter grid search to obtain the best performance of our method and finally used the following configuration to obtain the reported results: – Entity Embedding Size: 256, Relation Embedding Size=64, Attribute Embedding Size = 16, Type Embedding Size = 16, Attribute Value Embedding Size = 512. We tried multiple batch sizes with very minor difference in performance and finally used size of 2000. For hidden units per layer, we use size = 64. We used $C=50$ negative samples and $Z=20$ negative labels. The learning rate was initialized as 0.01 and then decayed over epochs. We ran our experiments for 5 epochs after which the training starts to convert as the dataset is very large. We use loss weights $b$ as 0.6 and margin as 1. Further, we use $K = 50$ random walks of length $l = 3$ for each entity We used a train/test split of 60%/40% for both the triples set and labels set. For baselines, we used the implementations provided by the respective authors and performed grid search for all methods according to their requirements. Contextual Information Formulations Here we describe exact formulation of each context that we used in our work. Neighborhood Context: Given a triplet $(e^s,r,e^o)$ , the neighborhood context for an entity $e^s$ will be all the nodes at 1-hop distance from $e^s$ other than the node $e^o$ . This will capture the effect of other nodes in the graph surrounding $e^s$ that drives $e^s$ to participate in fact $(e^s,r,e^o)$ . Concretely, we define the neighborhood context of $e^s$ as follows: Nc(es) = 1ne' e' N(es) e' eo ve' where $\mathcal {N}(e^s)$ is the set of all entities in neighborhood of $e^s$ other than $e^o$ . We collect the neighborhood set for each entity as a pre-processing step using a random walk method. Specifically, given a node $e$ , we run $k$ rounds of random-walks of length $l$ and create the neighborhood set $\mathcal {N}(e)$ by adding all unique nodes visited across these walks. Please note that we can also use $\max $ function in ( "Contextual Information Formulations" ) instead of sum. $\mathbf {N_c}(e^s) \in \mathbb {R}^d$ and the context can be similarly computed for object entity. Attribute Context. For an entity $e^s$ , the corresponding attribute context is defined as Ac(es) = 1na i=1na aies where $n_a$ is the number of attributes. $\mathbf {a_i^{e^s}}$ is the embedding for attribute $i$ . 
$\mathbf {A_c}(e^s) \in \mathbb {R}^y$. Type Context. We use type context mainly for relationships, i.e., for a given relationship $r$, this context aims at capturing the effect of the types of entities that have participated in this relationship. For a given triplet $(e^s, r, e^o)$, we define the type context for relationship $r$ as: $\mathbf {T_c}(r) = \frac{1}{n_t^r} \sum _{i=1}^{n_t^r} \mathbf {v_i^{t^{\prime }}}$ where $n_t^r$ is the total number of types of entities that have participated in relationship $r$ and $\mathbf {v_i^{t^{\prime }}}$ is the type embedding that corresponds to type $t^{\prime }$. $\mathbf {T_c}(r) \in \mathbb {R}^q$.
D-IMDB (derived from a large-scale IMDB data snapshot), D-FB (derived from a large-scale Freebase data snapshot)
160e6d2fc6e04bb0b4ee8d59c06715355dec4a17
160e6d2fc6e04bb0b4ee8d59c06715355dec4a17_0
Q: What accuracy score do they obtain? Text: Introduction Social media such as Facebook, Twitter, and Short Text Messaging Service (SMS) are popular channels for getting feedback from consumers on products and services. In Pakistan, with the emergence of e-government practices, SMS is being used for getting feedback from the citizens on different public services with the aim to reduce petty corruption and deficient delivery in services. Automatic classification of these SMS into predefined categories can greatly decrease the response time on complaints and consequently improve the public services rendered to the citizens. While Urdu is the national language of Pakistan, English is treated as the official language of the country. This leads to the development of a distinct dialect of communication known as Roman Urdu, which utilizes English alphabets to write Urdu. Hence, the SMS texts contain multilingual text written in the non-native script and informal diction. The utilization of two or more languages simultaneously is known as multilingualism BIBREF0. Consequently, alternation of two languages in a single conversation, a phenomenon known as code-switching, is inevitable for a multilingual speaker BIBREF1. Factors like informal verbiage, improper grammar, variation in spellings, code-switching, and short text length make the problem of automatic bilingual SMS classification highly challenging. In Natural Language Processing (NLP), deep learning has revolutionized the modeling and understanding of human languages. The richness, expressiveness, ambiguities, and complexity of the natural language can be addressed by deep neural networks without the need to produce complex engineered features BIBREF2. Deep learning models have been successfully used in many NLP tasks involving multilingual text. A Convolutional Neural Network (CNN) based model for sentiment classification of a multilingual dataset was proposed in BIBREF3. However, a particular record in the dataset belonged to one language only. In our case, a record can have either one or two languages. There is very little published work on this specific setting. One way to classify bilingual text is to normalize the different variations of a word to a standard spelling before training the model BIBREF4. However, such normalization requires external resources such as lexical database, and Roman Urdu is under-resourced in this context. Another approach for an under-resourced language is to adapt the resources from resource-rich language BIBREF5. However, such an approach is not generalizable in the case of Roman Urdu text as it is an informal language with no proper grammatical rules and dictionary. More recent approach utilizes code-switching annotations to improve the predictive performance of the model, where each word is annotated with its respective language label. Such an approach is not scalable for large data as annotation task becomes tedious. In this paper, we propose a multi-cascaded deep learning network, called as McM for multi-class classification of bilingual short text. Our goal is to achieve this without any prior knowledge of the language, code-switching indication, language translation, normalizing lexical variations, or language transliteration. In multilingual text classification, previous approaches employ a single deep learning architecture, such as CNN or Long Short Term Memory (LSTM) for feature learning and classification. 
McM, on the other hand, employs three cascades (aka feature learners) to learn rich textual representations from three perspectives. These representations are then forwarded to a small discriminator network for final prediction. We compare the performance of the proposed model with existing CNN-based model for multilingual text classification BIBREF3. We report a series of experiments using 3 kinds of embedding initialization approaches as well as the effect of attention mechanism BIBREF6. The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research. The rest of the paper is organized as follows. Section SECREF2 defines the dataset acquiring process and provides an explanation of the class labels. In section SECREF3, the architecture of the proposed model, its hyperparameters, and the experimental setup is discussed. We discuss the results in section SECREF4 and finally, concluding remarks are presented in section SECREF5. . Dataset Acquisition and Description The dataset consists of SMS feedbacks of the citizens of Pakistan on different public services availed by them. The objective of collecting these responses is to measure the performance of government departments rendering different public services. Preprocessing of the data is kept minimal. All records having only single word in SMS were removed as cleaning step. To construct the “gold standard", $313,813$ samples are manually annotated into 12 predefined categories by two annotators in supervision of a domain-expert. Involvement of the domain-expert was to ensure the practicality and quality of the “gold standard". Finally, stratified sampling method was opted for splitting the data into train and test partitions with $80-20$ ratio (i.e., $80\%$ records for training and $20\%$ records for testing). This way, training split has $251,050$ records while testing split has $62,763$ records. The rationale behind stratified sampling was to maintain the ratio of every class in both splits. The preprocessed and annotated data along with train and test split is made available . Note that the department names and service availed by the citizens is mapped to an integer identifier for anonymity. Class label ratios, corresponding labels, and it's description are presented in Table TABREF1. Proposed Model and Experimentation The proposed model, named McM, is mainly inspired by the findings by Reimers, N., & Gurevych (2017) , who concluded that deeper model have minimal effect on the predictive performance of the model BIBREF7. McM manifests a wider model, which employ three feature learners (cascades) that are trained for classification independently (in parallel). The input text is first mapped to embedding matrix of size $l \times d$ where $l$ denotes the number of words in the text while $d$ is dimensions of the embedding vector for each of these words. More formally, let $\mathcal {T} \in \lbrace w_1, w_2, ..., w_l\rbrace $ be the input text with $l$ words, embedding matrix is defined by ${X} \in \mathbb {R}^{l \times d}$. 
This representation is then fed to three feature learners, which are trained with local supervision. The learned features are then forwarded to discriminator network for final prediction as shown in Fig. FIGREF3. Each of these components are discussed in subsequent subsections. Proposed Model and Experimentation ::: Stacked-CNN Learner CNN learner is employed to learn $n$-gram features for identification of relationships between words. A 1-d convolution filter is used with a sliding window (kernel) of size $k$ (number of $n$-grams) in order to extract the features. A filter $W$ is defined as $W \in \mathbb {R}^{k \times d}$ for the convolution function. The word vectors starting from the position $j$ to the position $j + k -1$ are processed by the filter $W$ at a time. The window $h_j$ is expressed as: Where, the $\oplus $ represents the concatenation of word vectors. The number of filters are usually decided empirically. Each filter convolves with one window at a time to generate a feature map $f_j$ for that specific window as: Where, the $\odot $ represents convolution operation, $b$ is a bias term, and $\sigma $ is a nonlinear transformation function ReLU, which is defined as $\sigma (x) = max(x,0)$. The feature maps of each window are concatenated across all filters to get a high level vector representation and fed as input to next CNN layer. Output of second CNN layer is followed by (i) global max-pooling to remove low activation information from feature maps of all filters, and (ii) global average-pooling to get average activation across all the $n$-grams. These two outputs are then concatenated and forwarded to a small feedforward network having two fully-connected layers, followed by a softmax layer for prediction of this particular learner. Dropout and batch-normalization layers are repeatedly used between both fully-connected layers to avoid features co-adaptation BIBREF8, BIBREF9. Proposed Model and Experimentation ::: Stacked-LSTM Learner The traditional methods in deep learning do not account for previous information while processing current input. LSTM, however, is able to memorize past information and correlate it with current information BIBREF10. LSTM structure has memory cells (aka LSTM cells) that store the information selectively. Each word is treated as one time step and is fed to LSTM in a sequential manner. While processing the input at current time step $X_t$, LSTM also takes into account the previous hidden state $h_{t-1}$. The LSTM represents each time step with an input, a memory, and an output gate, denoted as $i_t, f_t$ and $o_t$ respectively. The hidden state $h_t$ of input $X_t$ for each time step $t$ is given by: Where, the $*$ is element-wise multiplication and $\sigma $ is sigmoid activation function. Stacked-LSTM learner is comprised of two LSTM layers. Let ${H_1}$ be a matrix consisting of output vectors $\lbrace h_1, h_2, ..., h_l\rbrace $ that the first LSTM layer produced, denoting output at each time steps. This matrix is fed to second LSTM layer. Similarly, second layer produces another output matrix $H_2$ which is used to apply global max-pooling and global-average pooling. These two outputs are concatenated and forwarded to a two layered feedforward network for intermediate supervision (prediction), identical to previously described stacked-CNN learner. Proposed Model and Experimentation ::: LSTM Learner LSTM learner is employed to learn long-term dependencies of the text as described in BIBREF10. 
This learner encodes complete input text recursively. It takes one word vector at a time as input and outputs a single vector. The dimensions of the output vector are equal to the number of LSTM units deployed. This encoded text representation is then forwarded to a small feedforward network, identical to aforementioned two learners, for intermediate supervision in order to learn features. This learner differs from stacked-LSTM learner as it learns sentence features, and not average and max features of all time steps (input words). Proposed Model and Experimentation ::: Discriminator Network The objective of discriminator network is to aggregate features learned by each of above described three learners and squash them into a small network for final prediction. The discriminator employs two fully-connected layers with batch-normalization and dropout layer along with ReLU activation function for non-linearity. The softmax activation function with categorical cross-entropy loss is used on the final prediction layer to get probabilities of each class. The class label is assigned based on maximum probability. This is treated as final prediction of the proposed model. The complete architecture, along with dimensions of each output is shown in Fig. FIGREF3. Proposed Model and Experimentation ::: Experimental Setup Pre-trained word embeddings on massive data, such as GloVe BIBREF11, give boost to predictive performance for multi-class classification BIBREF12. However, such embeddings are limited to English language only with no equivalence for Roman Urdu. Therefore, in this study, we avoid using any word-based pre-trained embeddings to give equal treatment to words of each language. We perform three kinds of experiments. (1) Embedding matrix is constructed using ELMo embeddings BIBREF13, which utilizes characters to form word vectors and produces a word vector with $d = 1024$. We call this variation of the model McM$_\textsubscript {E}$. (2) Embedding matrix is initialized randomly for each word with word vector of size $d = 300$. We refer this particular model as McM$_\textsubscript {R}$. (3) We train domain specific embeddings using word2vec with word vector of size $d = 300$ as suggested in original study BIBREF14. We refer to this particular model as McM$_\textsubscript {D}$. Furthermore, we also introduce soft-attention BIBREF6 between two layers of CNN and LSTM (in respective feature learner) to evaluate effect of attention on bilingual text classification. Attention mechanism “highlights" (assigns more weight) a particular word that contributes more towards correct classification. We refer to attention based experiments with subscript $A$ for all three embedding initializations. This way, a total of 6 experiments are performed with different variations of the proposed model. To mitigate effect of random initialization of network weights, we fix the random seed across all experiments. We train each model for 20 epochs and create a checkpoint at epoch with best predictive performance on test split. We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification. For McM, the choices of number of convolutional filters, number of hidden units in first dense layer, number of hidden units in second dense layer, and recurrent units for LSTM are made empirically. 
Rest of the hyperparameters were selected by performing grid search using $20\%$ stratified validation set from training set on McM$_\textsubscript {R}$. Available choices and final selected parameters are mentioned in Table TABREF18. These choices remained same for all experiments and the validation set was merged back into training set. Proposed Model and Experimentation ::: Evaluation Metrics We employed the standard metrics that are widely adapted in the literature for measuring multi-class classification performance. These metrics are accuracy, precision, recall, and F1-score, where latter three can be computed using micro-average or macro-average strategies BIBREF15. In micro-average strategy, each instance holds equal weight and outcomes are aggregated across all classes to compute a particular metric. This essentially means that the outcome would be influenced by the frequent class, if class distribution is skewed. In macro-average however, metrics for each class are calculated separately and then averaged, irrespective of their class label occurrence ratio. This gives each class equal weight instead of each instance, consequently favoring the under-represented classes. In our particular dataset, it is more plausible to favor smaller classes (i.e., other than “Appreciation" and “Satisfied") to detect potential complaints. Therefore, we choose to report macro-average values for precision, recall, and F1-score which are defined by (DISPLAY_FORM20), (DISPLAY_FORM21), and (DISPLAY_FORM22) respectively. Results and Discussion Before evaluating the McM, we first tested the baseline model on our dataset. Table TABREF23 presents results of baseline and all variations of our experiments. We focus our discussion on F1-score as accuracy is often misleading for dataset with unbalanced class distribution. However, for completeness sake, all measures are reported. It is observed from the results that baseline model performs worst among all the experiments. The reason behind this degradation in performance can be traced back to the nature of the texts in the datasets (i.e., datasets used in original paper of baseline model BIBREF3 and in our study). The approach in base model measure the performance of the model on multilingual dataset in which there is no code-switching involved. The complete text belongs to either one language or the other. However, in our case, the SMS text can have code-switching between two language, variation of spelling, or non-standard grammar. Baseline model is simple 1 layered CNN model that is unable to tackle such challenges. On the other hand, McM learns the features from multiple perspectives, hence feature representations are richer, which consequently leads to a superior predictive performance. As every learner in McM is also supervised, all 4 components of the proposed model (i.e., stacked-CNN learner, stacked-LSTM learner, LSTM-learner, and discriminator) can also be compared with each other. In our experiments, the best performing variation of the proposed model is McM$_\textsubscript {D}$. On this particular setting, discriminator is able to achieve an F1-score of $0.69$ with precision and recall values of $0.72$ and $0.68$ respectively. Other components of McM also show the highest stats for all performance measures. However, for McM$_\textsubscript {DA}$, a significant reduction in performance is observed, although, attention-based models have been proven to show improvement in performance BIBREF6. 
Investigating the reason behind this drop in performance is beyond the scope of this study. The model variations trained on ELMo embedding have second highest performance. Discriminator of McM$_\textsubscript {E}$ achieves an F1-score of $0.66$, beating other learners in this experiment. However, reduction in performance is persistent when attention is used for McM$_\textsubscript {EA}$. Regarding the experiments with random embedding initialization, McM$_\textsubscript {R}$ shows similar performance to McM$_\textsubscript {EA}$, while McM$_\textsubscript {RA}$ performs the worst. It is worth noting that in each experiment, discriminator network stays on top or performs equally as compared to other components in terms of F1-score. This is indication that discriminator network is able to learn richer representations of text as compared to methods where only single feature learner is deployed. Furthermore, the results for testing error for each component (i.e., 3 learners and a discriminator network) for all 4 variations of the proposed model are presented in Fig. FIGREF24. It is evident that the least error across all components is achieved by McM$_\textsubscript {D}$ model. Turning now to individual component performance, in ELMo embeddings based two models, lowest error is achieved by discriminator network, closely followed by stacked LSTM learner and stacked-CNN learner, while LSTM learner has the highest error. As far as model variations with random embeddings initializations are concerned, most interesting results are observed. As shown in subplot (c) and (d) in Fig. FIGREF24, McM$_\textsubscript {R}$ and McM$_\textsubscript {RA}$ tend to overfit. After second epoch, the error rate for all components of these two variations tend to increase drastically. However, it shows minimum error for discriminator in both variations, again proving that the features learned through multiple cascades are more robust and hold greater discriminative power. Note that in all 6 variations of experiments, the error of discriminator network is the lowest as compared to other components of McM. Hence it can be deduced that learning features through multiple perspectives and aggregating them for final prediction is more fruitful as compared to single method of learning. Concluding Remarks In this work, a new large-scale dataset and a novel deep learning architecture for multi-class classification of bilingual (English-Roman Urdu) text with code-switching is presented. The dataset is intended for enhancement of petty corruption detection in public offices and provides grounds for future research in this direction. While deep learning architecture is proposed for multi-class classification of bilingual SMS without utilizing any external resource. Three word embedding initialization techniques and soft-attention mechanism is also investigated. The observations from extensive experimentation led us to conclude that: (1) word embeddings vectors generated through characters tend to favor bilingual text classification as compared to random embedding initialization, (2) the attention mechanism tend to decrease the predictive performance of the model, irrespective of embedding types used, (3) using features learned through single perspective yield poor performance for bilingual text with code-switching, (4) training domain specific embeddings on a large corpus and using them to train the model achieves the highest performance. 
With regards to future work, we intend to investigate the reason behind degradation of model performance with soft-attention.
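To make the cascade structure described above concrete, the following is a condensed PyTorch sketch of the overall wiring: three feature learners (stacked CNN, stacked LSTM, and a plain LSTM encoder), each with its own auxiliary classification head for local supervision, plus a discriminator over the concatenated features. Layer sizes, vocabulary size, and the choice of framework are assumptions for illustration; the soft-attention variants, batch normalization placement, and exact feedforward heads of the paper are omitted.

# Illustrative sketch only; not the authors' released implementation.
import torch
import torch.nn as nn

class McMSketch(nn.Module):
    def __init__(self, vocab_size=30000, d=300, n_classes=12, filters=64, lstm_units=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        # Stacked-CNN learner: two 1-d convolution layers over word embeddings.
        self.cnn = nn.Sequential(
            nn.Conv1d(d, filters, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(filters, filters, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Stacked-LSTM learner: two LSTM layers, pooled over time steps.
        self.stacked_lstm = nn.LSTM(d, lstm_units, num_layers=2, batch_first=True)
        # Plain LSTM learner: final hidden state only (sentence encoding).
        self.lstm = nn.LSTM(d, lstm_units, batch_first=True)
        # One auxiliary classification head per learner (local supervision).
        self.head_cnn = nn.Linear(2 * filters, n_classes)
        self.head_slstm = nn.Linear(2 * lstm_units, n_classes)
        self.head_lstm = nn.Linear(lstm_units, n_classes)
        # Discriminator aggregates the three learned feature vectors.
        self.discriminator = nn.Sequential(
            nn.Linear(2 * filters + 2 * lstm_units + lstm_units, 128),
            nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, n_classes),
        )

    def forward(self, tokens):                      # tokens: (batch, seq_len) int64
        x = self.emb(tokens)                        # (batch, seq_len, d)
        c = self.cnn(x.transpose(1, 2))             # (batch, filters, seq_len)
        f_cnn = torch.cat([c.max(dim=2).values, c.mean(dim=2)], dim=1)
        h, _ = self.stacked_lstm(x)                 # (batch, seq_len, lstm_units)
        f_slstm = torch.cat([h.max(dim=1).values, h.mean(dim=1)], dim=1)
        _, (h_n, _) = self.lstm(x)
        f_lstm = h_n[-1]                            # (batch, lstm_units)
        logits = self.discriminator(torch.cat([f_cnn, f_slstm, f_lstm], dim=1))
        # Each auxiliary head would receive its own cross-entropy loss during training;
        # the discriminator logits give the final prediction.
        return logits, self.head_cnn(f_cnn), self.head_slstm(f_slstm), self.head_lstm(f_lstm)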
the best-performing model obtained an accuracy of 0.86
2c88b46c7e3a632cfa10b7574276d84ecec7a0af
2c88b46c7e3a632cfa10b7574276d84ecec7a0af_0
Q: What is their baseline model? Text: Introduction Social media such as Facebook, Twitter, and Short Text Messaging Service (SMS) are popular channels for getting feedback from consumers on products and services. In Pakistan, with the emergence of e-government practices, SMS is being used for getting feedback from the citizens on different public services with the aim to reduce petty corruption and deficient delivery in services. Automatic classification of these SMS into predefined categories can greatly decrease the response time on complaints and consequently improve the public services rendered to the citizens. While Urdu is the national language of Pakistan, English is treated as the official language of the country. This leads to the development of a distinct dialect of communication known as Roman Urdu, which utilizes English alphabets to write Urdu. Hence, the SMS texts contain multilingual text written in the non-native script and informal diction. The utilization of two or more languages simultaneously is known as multilingualism BIBREF0. Consequently, alternation of two languages in a single conversation, a phenomenon known as code-switching, is inevitable for a multilingual speaker BIBREF1. Factors like informal verbiage, improper grammar, variation in spellings, code-switching, and short text length make the problem of automatic bilingual SMS classification highly challenging. In Natural Language Processing (NLP), deep learning has revolutionized the modeling and understanding of human languages. The richness, expressiveness, ambiguities, and complexity of the natural language can be addressed by deep neural networks without the need to produce complex engineered features BIBREF2. Deep learning models have been successfully used in many NLP tasks involving multilingual text. A Convolutional Neural Network (CNN) based model for sentiment classification of a multilingual dataset was proposed in BIBREF3. However, a particular record in the dataset belonged to one language only. In our case, a record can have either one or two languages. There is very little published work on this specific setting. One way to classify bilingual text is to normalize the different variations of a word to a standard spelling before training the model BIBREF4. However, such normalization requires external resources such as lexical database, and Roman Urdu is under-resourced in this context. Another approach for an under-resourced language is to adapt the resources from resource-rich language BIBREF5. However, such an approach is not generalizable in the case of Roman Urdu text as it is an informal language with no proper grammatical rules and dictionary. More recent approach utilizes code-switching annotations to improve the predictive performance of the model, where each word is annotated with its respective language label. Such an approach is not scalable for large data as annotation task becomes tedious. In this paper, we propose a multi-cascaded deep learning network, called as McM for multi-class classification of bilingual short text. Our goal is to achieve this without any prior knowledge of the language, code-switching indication, language translation, normalizing lexical variations, or language transliteration. In multilingual text classification, previous approaches employ a single deep learning architecture, such as CNN or Long Short Term Memory (LSTM) for feature learning and classification. 
McM, on the other hand, employs three cascades (aka feature learners) to learn rich textual representations from three perspectives. These representations are then forwarded to a small discriminator network for final prediction. We compare the performance of the proposed model with existing CNN-based model for multilingual text classification BIBREF3. We report a series of experiments using 3 kinds of embedding initialization approaches as well as the effect of attention mechanism BIBREF6. The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research. The rest of the paper is organized as follows. Section SECREF2 defines the dataset acquiring process and provides an explanation of the class labels. In section SECREF3, the architecture of the proposed model, its hyperparameters, and the experimental setup is discussed. We discuss the results in section SECREF4 and finally, concluding remarks are presented in section SECREF5. . Dataset Acquisition and Description The dataset consists of SMS feedbacks of the citizens of Pakistan on different public services availed by them. The objective of collecting these responses is to measure the performance of government departments rendering different public services. Preprocessing of the data is kept minimal. All records having only single word in SMS were removed as cleaning step. To construct the “gold standard", $313,813$ samples are manually annotated into 12 predefined categories by two annotators in supervision of a domain-expert. Involvement of the domain-expert was to ensure the practicality and quality of the “gold standard". Finally, stratified sampling method was opted for splitting the data into train and test partitions with $80-20$ ratio (i.e., $80\%$ records for training and $20\%$ records for testing). This way, training split has $251,050$ records while testing split has $62,763$ records. The rationale behind stratified sampling was to maintain the ratio of every class in both splits. The preprocessed and annotated data along with train and test split is made available . Note that the department names and service availed by the citizens is mapped to an integer identifier for anonymity. Class label ratios, corresponding labels, and it's description are presented in Table TABREF1. Proposed Model and Experimentation The proposed model, named McM, is mainly inspired by the findings by Reimers, N., & Gurevych (2017) , who concluded that deeper model have minimal effect on the predictive performance of the model BIBREF7. McM manifests a wider model, which employ three feature learners (cascades) that are trained for classification independently (in parallel). The input text is first mapped to embedding matrix of size $l \times d$ where $l$ denotes the number of words in the text while $d$ is dimensions of the embedding vector for each of these words. More formally, let $\mathcal {T} \in \lbrace w_1, w_2, ..., w_l\rbrace $ be the input text with $l$ words, embedding matrix is defined by ${X} \in \mathbb {R}^{l \times d}$. 
This representation is then fed to three feature learners, which are trained with local supervision. The learned features are then forwarded to discriminator network for final prediction as shown in Fig. FIGREF3. Each of these components are discussed in subsequent subsections. Proposed Model and Experimentation ::: Stacked-CNN Learner CNN learner is employed to learn $n$-gram features for identification of relationships between words. A 1-d convolution filter is used with a sliding window (kernel) of size $k$ (number of $n$-grams) in order to extract the features. A filter $W$ is defined as $W \in \mathbb {R}^{k \times d}$ for the convolution function. The word vectors starting from the position $j$ to the position $j + k -1$ are processed by the filter $W$ at a time. The window $h_j$ is expressed as: Where, the $\oplus $ represents the concatenation of word vectors. The number of filters are usually decided empirically. Each filter convolves with one window at a time to generate a feature map $f_j$ for that specific window as: Where, the $\odot $ represents convolution operation, $b$ is a bias term, and $\sigma $ is a nonlinear transformation function ReLU, which is defined as $\sigma (x) = max(x,0)$. The feature maps of each window are concatenated across all filters to get a high level vector representation and fed as input to next CNN layer. Output of second CNN layer is followed by (i) global max-pooling to remove low activation information from feature maps of all filters, and (ii) global average-pooling to get average activation across all the $n$-grams. These two outputs are then concatenated and forwarded to a small feedforward network having two fully-connected layers, followed by a softmax layer for prediction of this particular learner. Dropout and batch-normalization layers are repeatedly used between both fully-connected layers to avoid features co-adaptation BIBREF8, BIBREF9. Proposed Model and Experimentation ::: Stacked-LSTM Learner The traditional methods in deep learning do not account for previous information while processing current input. LSTM, however, is able to memorize past information and correlate it with current information BIBREF10. LSTM structure has memory cells (aka LSTM cells) that store the information selectively. Each word is treated as one time step and is fed to LSTM in a sequential manner. While processing the input at current time step $X_t$, LSTM also takes into account the previous hidden state $h_{t-1}$. The LSTM represents each time step with an input, a memory, and an output gate, denoted as $i_t, f_t$ and $o_t$ respectively. The hidden state $h_t$ of input $X_t$ for each time step $t$ is given by: Where, the $*$ is element-wise multiplication and $\sigma $ is sigmoid activation function. Stacked-LSTM learner is comprised of two LSTM layers. Let ${H_1}$ be a matrix consisting of output vectors $\lbrace h_1, h_2, ..., h_l\rbrace $ that the first LSTM layer produced, denoting output at each time steps. This matrix is fed to second LSTM layer. Similarly, second layer produces another output matrix $H_2$ which is used to apply global max-pooling and global-average pooling. These two outputs are concatenated and forwarded to a two layered feedforward network for intermediate supervision (prediction), identical to previously described stacked-CNN learner. Proposed Model and Experimentation ::: LSTM Learner LSTM learner is employed to learn long-term dependencies of the text as described in BIBREF10. 
This learner encodes complete input text recursively. It takes one word vector at a time as input and outputs a single vector. The dimensions of the output vector are equal to the number of LSTM units deployed. This encoded text representation is then forwarded to a small feedforward network, identical to aforementioned two learners, for intermediate supervision in order to learn features. This learner differs from stacked-LSTM learner as it learns sentence features, and not average and max features of all time steps (input words). Proposed Model and Experimentation ::: Discriminator Network The objective of discriminator network is to aggregate features learned by each of above described three learners and squash them into a small network for final prediction. The discriminator employs two fully-connected layers with batch-normalization and dropout layer along with ReLU activation function for non-linearity. The softmax activation function with categorical cross-entropy loss is used on the final prediction layer to get probabilities of each class. The class label is assigned based on maximum probability. This is treated as final prediction of the proposed model. The complete architecture, along with dimensions of each output is shown in Fig. FIGREF3. Proposed Model and Experimentation ::: Experimental Setup Pre-trained word embeddings on massive data, such as GloVe BIBREF11, give boost to predictive performance for multi-class classification BIBREF12. However, such embeddings are limited to English language only with no equivalence for Roman Urdu. Therefore, in this study, we avoid using any word-based pre-trained embeddings to give equal treatment to words of each language. We perform three kinds of experiments. (1) Embedding matrix is constructed using ELMo embeddings BIBREF13, which utilizes characters to form word vectors and produces a word vector with $d = 1024$. We call this variation of the model McM$_\textsubscript {E}$. (2) Embedding matrix is initialized randomly for each word with word vector of size $d = 300$. We refer this particular model as McM$_\textsubscript {R}$. (3) We train domain specific embeddings using word2vec with word vector of size $d = 300$ as suggested in original study BIBREF14. We refer to this particular model as McM$_\textsubscript {D}$. Furthermore, we also introduce soft-attention BIBREF6 between two layers of CNN and LSTM (in respective feature learner) to evaluate effect of attention on bilingual text classification. Attention mechanism “highlights" (assigns more weight) a particular word that contributes more towards correct classification. We refer to attention based experiments with subscript $A$ for all three embedding initializations. This way, a total of 6 experiments are performed with different variations of the proposed model. To mitigate effect of random initialization of network weights, we fix the random seed across all experiments. We train each model for 20 epochs and create a checkpoint at epoch with best predictive performance on test split. We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification. For McM, the choices of number of convolutional filters, number of hidden units in first dense layer, number of hidden units in second dense layer, and recurrent units for LSTM are made empirically. 
Rest of the hyperparameters were selected by performing grid search using $20\%$ stratified validation set from training set on McM$_\textsubscript {R}$. Available choices and final selected parameters are mentioned in Table TABREF18. These choices remained same for all experiments and the validation set was merged back into training set. Proposed Model and Experimentation ::: Evaluation Metrics We employed the standard metrics that are widely adapted in the literature for measuring multi-class classification performance. These metrics are accuracy, precision, recall, and F1-score, where latter three can be computed using micro-average or macro-average strategies BIBREF15. In micro-average strategy, each instance holds equal weight and outcomes are aggregated across all classes to compute a particular metric. This essentially means that the outcome would be influenced by the frequent class, if class distribution is skewed. In macro-average however, metrics for each class are calculated separately and then averaged, irrespective of their class label occurrence ratio. This gives each class equal weight instead of each instance, consequently favoring the under-represented classes. In our particular dataset, it is more plausible to favor smaller classes (i.e., other than “Appreciation" and “Satisfied") to detect potential complaints. Therefore, we choose to report macro-average values for precision, recall, and F1-score which are defined by (DISPLAY_FORM20), (DISPLAY_FORM21), and (DISPLAY_FORM22) respectively. Results and Discussion Before evaluating the McM, we first tested the baseline model on our dataset. Table TABREF23 presents results of baseline and all variations of our experiments. We focus our discussion on F1-score as accuracy is often misleading for dataset with unbalanced class distribution. However, for completeness sake, all measures are reported. It is observed from the results that baseline model performs worst among all the experiments. The reason behind this degradation in performance can be traced back to the nature of the texts in the datasets (i.e., datasets used in original paper of baseline model BIBREF3 and in our study). The approach in base model measure the performance of the model on multilingual dataset in which there is no code-switching involved. The complete text belongs to either one language or the other. However, in our case, the SMS text can have code-switching between two language, variation of spelling, or non-standard grammar. Baseline model is simple 1 layered CNN model that is unable to tackle such challenges. On the other hand, McM learns the features from multiple perspectives, hence feature representations are richer, which consequently leads to a superior predictive performance. As every learner in McM is also supervised, all 4 components of the proposed model (i.e., stacked-CNN learner, stacked-LSTM learner, LSTM-learner, and discriminator) can also be compared with each other. In our experiments, the best performing variation of the proposed model is McM$_\textsubscript {D}$. On this particular setting, discriminator is able to achieve an F1-score of $0.69$ with precision and recall values of $0.72$ and $0.68$ respectively. Other components of McM also show the highest stats for all performance measures. However, for McM$_\textsubscript {DA}$, a significant reduction in performance is observed, although, attention-based models have been proven to show improvement in performance BIBREF6. 
Investigating the reason behind this drop in performance is beyond the scope of this study. The model variations trained on ELMo embedding have second highest performance. Discriminator of McM$_\textsubscript {E}$ achieves an F1-score of $0.66$, beating other learners in this experiment. However, reduction in performance is persistent when attention is used for McM$_\textsubscript {EA}$. Regarding the experiments with random embedding initialization, McM$_\textsubscript {R}$ shows similar performance to McM$_\textsubscript {EA}$, while McM$_\textsubscript {RA}$ performs the worst. It is worth noting that in each experiment, discriminator network stays on top or performs equally as compared to other components in terms of F1-score. This is indication that discriminator network is able to learn richer representations of text as compared to methods where only single feature learner is deployed. Furthermore, the results for testing error for each component (i.e., 3 learners and a discriminator network) for all 4 variations of the proposed model are presented in Fig. FIGREF24. It is evident that the least error across all components is achieved by McM$_\textsubscript {D}$ model. Turning now to individual component performance, in ELMo embeddings based two models, lowest error is achieved by discriminator network, closely followed by stacked LSTM learner and stacked-CNN learner, while LSTM learner has the highest error. As far as model variations with random embeddings initializations are concerned, most interesting results are observed. As shown in subplot (c) and (d) in Fig. FIGREF24, McM$_\textsubscript {R}$ and McM$_\textsubscript {RA}$ tend to overfit. After second epoch, the error rate for all components of these two variations tend to increase drastically. However, it shows minimum error for discriminator in both variations, again proving that the features learned through multiple cascades are more robust and hold greater discriminative power. Note that in all 6 variations of experiments, the error of discriminator network is the lowest as compared to other components of McM. Hence it can be deduced that learning features through multiple perspectives and aggregating them for final prediction is more fruitful as compared to single method of learning. Concluding Remarks In this work, a new large-scale dataset and a novel deep learning architecture for multi-class classification of bilingual (English-Roman Urdu) text with code-switching is presented. The dataset is intended for enhancement of petty corruption detection in public offices and provides grounds for future research in this direction. While deep learning architecture is proposed for multi-class classification of bilingual SMS without utilizing any external resource. Three word embedding initialization techniques and soft-attention mechanism is also investigated. The observations from extensive experimentation led us to conclude that: (1) word embeddings vectors generated through characters tend to favor bilingual text classification as compared to random embedding initialization, (2) the attention mechanism tend to decrease the predictive performance of the model, irrespective of embedding types used, (3) using features learned through single perspective yield poor performance for bilingual text with code-switching, (4) training domain specific embeddings on a large corpus and using them to train the model achieves the highest performance. 
With regards to future work, we intend to investigate the reason behind degradation of model performance with soft-attention.
the model proposed in BIBREF3
6ff240d985bbe96b9d5042c9b372b4e8f498f264
6ff240d985bbe96b9d5042c9b372b4e8f498f264_0
Q: What is the size of the dataset? Text: Introduction Social media such as Facebook, Twitter, and Short Text Messaging Service (SMS) are popular channels for getting feedback from consumers on products and services. In Pakistan, with the emergence of e-government practices, SMS is being used for getting feedback from the citizens on different public services with the aim to reduce petty corruption and deficient delivery in services. Automatic classification of these SMS into predefined categories can greatly decrease the response time on complaints and consequently improve the public services rendered to the citizens. While Urdu is the national language of Pakistan, English is treated as the official language of the country. This leads to the development of a distinct dialect of communication known as Roman Urdu, which utilizes English alphabets to write Urdu. Hence, the SMS texts contain multilingual text written in the non-native script and informal diction. The utilization of two or more languages simultaneously is known as multilingualism BIBREF0. Consequently, alternation of two languages in a single conversation, a phenomenon known as code-switching, is inevitable for a multilingual speaker BIBREF1. Factors like informal verbiage, improper grammar, variation in spellings, code-switching, and short text length make the problem of automatic bilingual SMS classification highly challenging. In Natural Language Processing (NLP), deep learning has revolutionized the modeling and understanding of human languages. The richness, expressiveness, ambiguities, and complexity of the natural language can be addressed by deep neural networks without the need to produce complex engineered features BIBREF2. Deep learning models have been successfully used in many NLP tasks involving multilingual text. A Convolutional Neural Network (CNN) based model for sentiment classification of a multilingual dataset was proposed in BIBREF3. However, a particular record in the dataset belonged to one language only. In our case, a record can have either one or two languages. There is very little published work on this specific setting. One way to classify bilingual text is to normalize the different variations of a word to a standard spelling before training the model BIBREF4. However, such normalization requires external resources such as lexical database, and Roman Urdu is under-resourced in this context. Another approach for an under-resourced language is to adapt the resources from resource-rich language BIBREF5. However, such an approach is not generalizable in the case of Roman Urdu text as it is an informal language with no proper grammatical rules and dictionary. More recent approach utilizes code-switching annotations to improve the predictive performance of the model, where each word is annotated with its respective language label. Such an approach is not scalable for large data as annotation task becomes tedious. In this paper, we propose a multi-cascaded deep learning network, called as McM for multi-class classification of bilingual short text. Our goal is to achieve this without any prior knowledge of the language, code-switching indication, language translation, normalizing lexical variations, or language transliteration. In multilingual text classification, previous approaches employ a single deep learning architecture, such as CNN or Long Short Term Memory (LSTM) for feature learning and classification. 
McM, on the other hand, employs three cascades (aka feature learners) to learn rich textual representations from three perspectives. These representations are then forwarded to a small discriminator network for final prediction. We compare the performance of the proposed model with existing CNN-based model for multilingual text classification BIBREF3. We report a series of experiments using 3 kinds of embedding initialization approaches as well as the effect of attention mechanism BIBREF6. The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research. The rest of the paper is organized as follows. Section SECREF2 defines the dataset acquiring process and provides an explanation of the class labels. In section SECREF3, the architecture of the proposed model, its hyperparameters, and the experimental setup is discussed. We discuss the results in section SECREF4 and finally, concluding remarks are presented in section SECREF5. . Dataset Acquisition and Description The dataset consists of SMS feedbacks of the citizens of Pakistan on different public services availed by them. The objective of collecting these responses is to measure the performance of government departments rendering different public services. Preprocessing of the data is kept minimal. All records having only single word in SMS were removed as cleaning step. To construct the “gold standard", $313,813$ samples are manually annotated into 12 predefined categories by two annotators in supervision of a domain-expert. Involvement of the domain-expert was to ensure the practicality and quality of the “gold standard". Finally, stratified sampling method was opted for splitting the data into train and test partitions with $80-20$ ratio (i.e., $80\%$ records for training and $20\%$ records for testing). This way, training split has $251,050$ records while testing split has $62,763$ records. The rationale behind stratified sampling was to maintain the ratio of every class in both splits. The preprocessed and annotated data along with train and test split is made available . Note that the department names and service availed by the citizens is mapped to an integer identifier for anonymity. Class label ratios, corresponding labels, and it's description are presented in Table TABREF1. Proposed Model and Experimentation The proposed model, named McM, is mainly inspired by the findings by Reimers, N., & Gurevych (2017) , who concluded that deeper model have minimal effect on the predictive performance of the model BIBREF7. McM manifests a wider model, which employ three feature learners (cascades) that are trained for classification independently (in parallel). The input text is first mapped to embedding matrix of size $l \times d$ where $l$ denotes the number of words in the text while $d$ is dimensions of the embedding vector for each of these words. More formally, let $\mathcal {T} \in \lbrace w_1, w_2, ..., w_l\rbrace $ be the input text with $l$ words, embedding matrix is defined by ${X} \in \mathbb {R}^{l \times d}$. 
This representation is then fed to three feature learners, which are trained with local supervision. The learned features are then forwarded to discriminator network for final prediction as shown in Fig. FIGREF3. Each of these components are discussed in subsequent subsections. Proposed Model and Experimentation ::: Stacked-CNN Learner CNN learner is employed to learn $n$-gram features for identification of relationships between words. A 1-d convolution filter is used with a sliding window (kernel) of size $k$ (number of $n$-grams) in order to extract the features. A filter $W$ is defined as $W \in \mathbb {R}^{k \times d}$ for the convolution function. The word vectors starting from the position $j$ to the position $j + k -1$ are processed by the filter $W$ at a time. The window $h_j$ is expressed as: Where, the $\oplus $ represents the concatenation of word vectors. The number of filters are usually decided empirically. Each filter convolves with one window at a time to generate a feature map $f_j$ for that specific window as: Where, the $\odot $ represents convolution operation, $b$ is a bias term, and $\sigma $ is a nonlinear transformation function ReLU, which is defined as $\sigma (x) = max(x,0)$. The feature maps of each window are concatenated across all filters to get a high level vector representation and fed as input to next CNN layer. Output of second CNN layer is followed by (i) global max-pooling to remove low activation information from feature maps of all filters, and (ii) global average-pooling to get average activation across all the $n$-grams. These two outputs are then concatenated and forwarded to a small feedforward network having two fully-connected layers, followed by a softmax layer for prediction of this particular learner. Dropout and batch-normalization layers are repeatedly used between both fully-connected layers to avoid features co-adaptation BIBREF8, BIBREF9. Proposed Model and Experimentation ::: Stacked-LSTM Learner The traditional methods in deep learning do not account for previous information while processing current input. LSTM, however, is able to memorize past information and correlate it with current information BIBREF10. LSTM structure has memory cells (aka LSTM cells) that store the information selectively. Each word is treated as one time step and is fed to LSTM in a sequential manner. While processing the input at current time step $X_t$, LSTM also takes into account the previous hidden state $h_{t-1}$. The LSTM represents each time step with an input, a memory, and an output gate, denoted as $i_t, f_t$ and $o_t$ respectively. The hidden state $h_t$ of input $X_t$ for each time step $t$ is given by: Where, the $*$ is element-wise multiplication and $\sigma $ is sigmoid activation function. Stacked-LSTM learner is comprised of two LSTM layers. Let ${H_1}$ be a matrix consisting of output vectors $\lbrace h_1, h_2, ..., h_l\rbrace $ that the first LSTM layer produced, denoting output at each time steps. This matrix is fed to second LSTM layer. Similarly, second layer produces another output matrix $H_2$ which is used to apply global max-pooling and global-average pooling. These two outputs are concatenated and forwarded to a two layered feedforward network for intermediate supervision (prediction), identical to previously described stacked-CNN learner. Proposed Model and Experimentation ::: LSTM Learner LSTM learner is employed to learn long-term dependencies of the text as described in BIBREF10. 
This learner encodes complete input text recursively. It takes one word vector at a time as input and outputs a single vector. The dimensions of the output vector are equal to the number of LSTM units deployed. This encoded text representation is then forwarded to a small feedforward network, identical to aforementioned two learners, for intermediate supervision in order to learn features. This learner differs from stacked-LSTM learner as it learns sentence features, and not average and max features of all time steps (input words). Proposed Model and Experimentation ::: Discriminator Network The objective of discriminator network is to aggregate features learned by each of above described three learners and squash them into a small network for final prediction. The discriminator employs two fully-connected layers with batch-normalization and dropout layer along with ReLU activation function for non-linearity. The softmax activation function with categorical cross-entropy loss is used on the final prediction layer to get probabilities of each class. The class label is assigned based on maximum probability. This is treated as final prediction of the proposed model. The complete architecture, along with dimensions of each output is shown in Fig. FIGREF3. Proposed Model and Experimentation ::: Experimental Setup Pre-trained word embeddings on massive data, such as GloVe BIBREF11, give boost to predictive performance for multi-class classification BIBREF12. However, such embeddings are limited to English language only with no equivalence for Roman Urdu. Therefore, in this study, we avoid using any word-based pre-trained embeddings to give equal treatment to words of each language. We perform three kinds of experiments. (1) Embedding matrix is constructed using ELMo embeddings BIBREF13, which utilizes characters to form word vectors and produces a word vector with $d = 1024$. We call this variation of the model McM$_\textsubscript {E}$. (2) Embedding matrix is initialized randomly for each word with word vector of size $d = 300$. We refer this particular model as McM$_\textsubscript {R}$. (3) We train domain specific embeddings using word2vec with word vector of size $d = 300$ as suggested in original study BIBREF14. We refer to this particular model as McM$_\textsubscript {D}$. Furthermore, we also introduce soft-attention BIBREF6 between two layers of CNN and LSTM (in respective feature learner) to evaluate effect of attention on bilingual text classification. Attention mechanism “highlights" (assigns more weight) a particular word that contributes more towards correct classification. We refer to attention based experiments with subscript $A$ for all three embedding initializations. This way, a total of 6 experiments are performed with different variations of the proposed model. To mitigate effect of random initialization of network weights, we fix the random seed across all experiments. We train each model for 20 epochs and create a checkpoint at epoch with best predictive performance on test split. We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification. For McM, the choices of number of convolutional filters, number of hidden units in first dense layer, number of hidden units in second dense layer, and recurrent units for LSTM are made empirically. 
Rest of the hyperparameters were selected by performing grid search using $20\%$ stratified validation set from training set on McM$_\textsubscript {R}$. Available choices and final selected parameters are mentioned in Table TABREF18. These choices remained same for all experiments and the validation set was merged back into training set. Proposed Model and Experimentation ::: Evaluation Metrics We employed the standard metrics that are widely adapted in the literature for measuring multi-class classification performance. These metrics are accuracy, precision, recall, and F1-score, where latter three can be computed using micro-average or macro-average strategies BIBREF15. In micro-average strategy, each instance holds equal weight and outcomes are aggregated across all classes to compute a particular metric. This essentially means that the outcome would be influenced by the frequent class, if class distribution is skewed. In macro-average however, metrics for each class are calculated separately and then averaged, irrespective of their class label occurrence ratio. This gives each class equal weight instead of each instance, consequently favoring the under-represented classes. In our particular dataset, it is more plausible to favor smaller classes (i.e., other than “Appreciation" and “Satisfied") to detect potential complaints. Therefore, we choose to report macro-average values for precision, recall, and F1-score which are defined by (DISPLAY_FORM20), (DISPLAY_FORM21), and (DISPLAY_FORM22) respectively. Results and Discussion Before evaluating the McM, we first tested the baseline model on our dataset. Table TABREF23 presents results of baseline and all variations of our experiments. We focus our discussion on F1-score as accuracy is often misleading for dataset with unbalanced class distribution. However, for completeness sake, all measures are reported. It is observed from the results that baseline model performs worst among all the experiments. The reason behind this degradation in performance can be traced back to the nature of the texts in the datasets (i.e., datasets used in original paper of baseline model BIBREF3 and in our study). The approach in base model measure the performance of the model on multilingual dataset in which there is no code-switching involved. The complete text belongs to either one language or the other. However, in our case, the SMS text can have code-switching between two language, variation of spelling, or non-standard grammar. Baseline model is simple 1 layered CNN model that is unable to tackle such challenges. On the other hand, McM learns the features from multiple perspectives, hence feature representations are richer, which consequently leads to a superior predictive performance. As every learner in McM is also supervised, all 4 components of the proposed model (i.e., stacked-CNN learner, stacked-LSTM learner, LSTM-learner, and discriminator) can also be compared with each other. In our experiments, the best performing variation of the proposed model is McM$_\textsubscript {D}$. On this particular setting, discriminator is able to achieve an F1-score of $0.69$ with precision and recall values of $0.72$ and $0.68$ respectively. Other components of McM also show the highest stats for all performance measures. However, for McM$_\textsubscript {DA}$, a significant reduction in performance is observed, although, attention-based models have been proven to show improvement in performance BIBREF6. 
Investigating the reason behind this drop in performance is beyond the scope of this study. The model variations trained on ELMo embedding have second highest performance. Discriminator of McM$_\textsubscript {E}$ achieves an F1-score of $0.66$, beating other learners in this experiment. However, reduction in performance is persistent when attention is used for McM$_\textsubscript {EA}$. Regarding the experiments with random embedding initialization, McM$_\textsubscript {R}$ shows similar performance to McM$_\textsubscript {EA}$, while McM$_\textsubscript {RA}$ performs the worst. It is worth noting that in each experiment, discriminator network stays on top or performs equally as compared to other components in terms of F1-score. This is indication that discriminator network is able to learn richer representations of text as compared to methods where only single feature learner is deployed. Furthermore, the results for testing error for each component (i.e., 3 learners and a discriminator network) for all 4 variations of the proposed model are presented in Fig. FIGREF24. It is evident that the least error across all components is achieved by McM$_\textsubscript {D}$ model. Turning now to individual component performance, in ELMo embeddings based two models, lowest error is achieved by discriminator network, closely followed by stacked LSTM learner and stacked-CNN learner, while LSTM learner has the highest error. As far as model variations with random embeddings initializations are concerned, most interesting results are observed. As shown in subplot (c) and (d) in Fig. FIGREF24, McM$_\textsubscript {R}$ and McM$_\textsubscript {RA}$ tend to overfit. After second epoch, the error rate for all components of these two variations tend to increase drastically. However, it shows minimum error for discriminator in both variations, again proving that the features learned through multiple cascades are more robust and hold greater discriminative power. Note that in all 6 variations of experiments, the error of discriminator network is the lowest as compared to other components of McM. Hence it can be deduced that learning features through multiple perspectives and aggregating them for final prediction is more fruitful as compared to single method of learning. Concluding Remarks In this work, a new large-scale dataset and a novel deep learning architecture for multi-class classification of bilingual (English-Roman Urdu) text with code-switching is presented. The dataset is intended for enhancement of petty corruption detection in public offices and provides grounds for future research in this direction. While deep learning architecture is proposed for multi-class classification of bilingual SMS without utilizing any external resource. Three word embedding initialization techniques and soft-attention mechanism is also investigated. The observations from extensive experimentation led us to conclude that: (1) word embeddings vectors generated through characters tend to favor bilingual text classification as compared to random embedding initialization, (2) the attention mechanism tend to decrease the predictive performance of the model, irrespective of embedding types used, (3) using features learned through single perspective yield poor performance for bilingual text with code-switching, (4) training domain specific embeddings on a large corpus and using them to train the model achieves the highest performance. 
With regard to future work, we intend to investigate the reason behind the degradation of model performance when soft-attention is used.
$0.3$ million records
30dad5d9b4a03e56fa31f932c879aa56e11ed15b
30dad5d9b4a03e56fa31f932c879aa56e11ed15b_0
Q: What is the 12 class bilingual text? Text: Introduction Social media such as Facebook, Twitter, and Short Text Messaging Service (SMS) are popular channels for getting feedback from consumers on products and services. In Pakistan, with the emergence of e-government practices, SMS is being used for getting feedback from the citizens on different public services with the aim to reduce petty corruption and deficient delivery in services. Automatic classification of these SMS into predefined categories can greatly decrease the response time on complaints and consequently improve the public services rendered to the citizens. While Urdu is the national language of Pakistan, English is treated as the official language of the country. This leads to the development of a distinct dialect of communication known as Roman Urdu, which utilizes English alphabets to write Urdu. Hence, the SMS texts contain multilingual text written in the non-native script and informal diction. The utilization of two or more languages simultaneously is known as multilingualism BIBREF0. Consequently, alternation of two languages in a single conversation, a phenomenon known as code-switching, is inevitable for a multilingual speaker BIBREF1. Factors like informal verbiage, improper grammar, variation in spellings, code-switching, and short text length make the problem of automatic bilingual SMS classification highly challenging. In Natural Language Processing (NLP), deep learning has revolutionized the modeling and understanding of human languages. The richness, expressiveness, ambiguities, and complexity of the natural language can be addressed by deep neural networks without the need to produce complex engineered features BIBREF2. Deep learning models have been successfully used in many NLP tasks involving multilingual text. A Convolutional Neural Network (CNN) based model for sentiment classification of a multilingual dataset was proposed in BIBREF3. However, a particular record in the dataset belonged to one language only. In our case, a record can have either one or two languages. There is very little published work on this specific setting. One way to classify bilingual text is to normalize the different variations of a word to a standard spelling before training the model BIBREF4. However, such normalization requires external resources such as lexical database, and Roman Urdu is under-resourced in this context. Another approach for an under-resourced language is to adapt the resources from resource-rich language BIBREF5. However, such an approach is not generalizable in the case of Roman Urdu text as it is an informal language with no proper grammatical rules and dictionary. More recent approach utilizes code-switching annotations to improve the predictive performance of the model, where each word is annotated with its respective language label. Such an approach is not scalable for large data as annotation task becomes tedious. In this paper, we propose a multi-cascaded deep learning network, called as McM for multi-class classification of bilingual short text. Our goal is to achieve this without any prior knowledge of the language, code-switching indication, language translation, normalizing lexical variations, or language transliteration. In multilingual text classification, previous approaches employ a single deep learning architecture, such as CNN or Long Short Term Memory (LSTM) for feature learning and classification. 
McM, on the other hand, employs three cascades (aka feature learners) to learn rich textual representations from three perspectives. These representations are then forwarded to a small discriminator network for final prediction. We compare the performance of the proposed model with existing CNN-based model for multilingual text classification BIBREF3. We report a series of experiments using 3 kinds of embedding initialization approaches as well as the effect of attention mechanism BIBREF6. The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research. The rest of the paper is organized as follows. Section SECREF2 defines the dataset acquiring process and provides an explanation of the class labels. In section SECREF3, the architecture of the proposed model, its hyperparameters, and the experimental setup is discussed. We discuss the results in section SECREF4 and finally, concluding remarks are presented in section SECREF5. . Dataset Acquisition and Description The dataset consists of SMS feedbacks of the citizens of Pakistan on different public services availed by them. The objective of collecting these responses is to measure the performance of government departments rendering different public services. Preprocessing of the data is kept minimal. All records having only single word in SMS were removed as cleaning step. To construct the “gold standard", $313,813$ samples are manually annotated into 12 predefined categories by two annotators in supervision of a domain-expert. Involvement of the domain-expert was to ensure the practicality and quality of the “gold standard". Finally, stratified sampling method was opted for splitting the data into train and test partitions with $80-20$ ratio (i.e., $80\%$ records for training and $20\%$ records for testing). This way, training split has $251,050$ records while testing split has $62,763$ records. The rationale behind stratified sampling was to maintain the ratio of every class in both splits. The preprocessed and annotated data along with train and test split is made available . Note that the department names and service availed by the citizens is mapped to an integer identifier for anonymity. Class label ratios, corresponding labels, and it's description are presented in Table TABREF1. Proposed Model and Experimentation The proposed model, named McM, is mainly inspired by the findings by Reimers, N., & Gurevych (2017) , who concluded that deeper model have minimal effect on the predictive performance of the model BIBREF7. McM manifests a wider model, which employ three feature learners (cascades) that are trained for classification independently (in parallel). The input text is first mapped to embedding matrix of size $l \times d$ where $l$ denotes the number of words in the text while $d$ is dimensions of the embedding vector for each of these words. More formally, let $\mathcal {T} \in \lbrace w_1, w_2, ..., w_l\rbrace $ be the input text with $l$ words, embedding matrix is defined by ${X} \in \mathbb {R}^{l \times d}$. 
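For illustration, a minimal sketch of this mapping for the randomly initialized embedding variant is shown below; the example SMS, vocabulary, and seed are placeholders, not values from the paper.

```python
# Map a code-switched SMS to its l x d embedding matrix X (random-initialization variant).
import numpy as np

d = 300                                    # embedding dimension
sms = "bijli ka bill zyada aya hai"        # illustrative Roman-Urdu SMS
tokens = sms.split()                       # l words

# Hypothetical vocabulary built from the training corpus.
vocab = {w: i for i, w in enumerate(["<pad>", "<unk>"] + tokens)}
rng = np.random.default_rng(seed=13)       # fixed seed, mirroring the experimental setup
embedding_table = rng.normal(scale=0.1, size=(len(vocab), d))   # lookup table e^c

ids = [vocab.get(w, vocab["<unk>"]) for w in tokens]
X = embedding_table[ids]                   # X has shape (l, d) = (6, 300)
```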
This representation is then fed to three feature learners, which are trained with local supervision. The learned features are then forwarded to discriminator network for final prediction as shown in Fig. FIGREF3. Each of these components are discussed in subsequent subsections. Proposed Model and Experimentation ::: Stacked-CNN Learner CNN learner is employed to learn $n$-gram features for identification of relationships between words. A 1-d convolution filter is used with a sliding window (kernel) of size $k$ (number of $n$-grams) in order to extract the features. A filter $W$ is defined as $W \in \mathbb {R}^{k \times d}$ for the convolution function. The word vectors starting from the position $j$ to the position $j + k -1$ are processed by the filter $W$ at a time. The window $h_j$ is expressed as: Where, the $\oplus $ represents the concatenation of word vectors. The number of filters are usually decided empirically. Each filter convolves with one window at a time to generate a feature map $f_j$ for that specific window as: Where, the $\odot $ represents convolution operation, $b$ is a bias term, and $\sigma $ is a nonlinear transformation function ReLU, which is defined as $\sigma (x) = max(x,0)$. The feature maps of each window are concatenated across all filters to get a high level vector representation and fed as input to next CNN layer. Output of second CNN layer is followed by (i) global max-pooling to remove low activation information from feature maps of all filters, and (ii) global average-pooling to get average activation across all the $n$-grams. These two outputs are then concatenated and forwarded to a small feedforward network having two fully-connected layers, followed by a softmax layer for prediction of this particular learner. Dropout and batch-normalization layers are repeatedly used between both fully-connected layers to avoid features co-adaptation BIBREF8, BIBREF9. Proposed Model and Experimentation ::: Stacked-LSTM Learner The traditional methods in deep learning do not account for previous information while processing current input. LSTM, however, is able to memorize past information and correlate it with current information BIBREF10. LSTM structure has memory cells (aka LSTM cells) that store the information selectively. Each word is treated as one time step and is fed to LSTM in a sequential manner. While processing the input at current time step $X_t$, LSTM also takes into account the previous hidden state $h_{t-1}$. The LSTM represents each time step with an input, a memory, and an output gate, denoted as $i_t, f_t$ and $o_t$ respectively. The hidden state $h_t$ of input $X_t$ for each time step $t$ is given by: Where, the $*$ is element-wise multiplication and $\sigma $ is sigmoid activation function. Stacked-LSTM learner is comprised of two LSTM layers. Let ${H_1}$ be a matrix consisting of output vectors $\lbrace h_1, h_2, ..., h_l\rbrace $ that the first LSTM layer produced, denoting output at each time steps. This matrix is fed to second LSTM layer. Similarly, second layer produces another output matrix $H_2$ which is used to apply global max-pooling and global-average pooling. These two outputs are concatenated and forwarded to a two layered feedforward network for intermediate supervision (prediction), identical to previously described stacked-CNN learner. Proposed Model and Experimentation ::: LSTM Learner LSTM learner is employed to learn long-term dependencies of the text as described in BIBREF10. 
This learner encodes complete input text recursively. It takes one word vector at a time as input and outputs a single vector. The dimensions of the output vector are equal to the number of LSTM units deployed. This encoded text representation is then forwarded to a small feedforward network, identical to aforementioned two learners, for intermediate supervision in order to learn features. This learner differs from stacked-LSTM learner as it learns sentence features, and not average and max features of all time steps (input words). Proposed Model and Experimentation ::: Discriminator Network The objective of discriminator network is to aggregate features learned by each of above described three learners and squash them into a small network for final prediction. The discriminator employs two fully-connected layers with batch-normalization and dropout layer along with ReLU activation function for non-linearity. The softmax activation function with categorical cross-entropy loss is used on the final prediction layer to get probabilities of each class. The class label is assigned based on maximum probability. This is treated as final prediction of the proposed model. The complete architecture, along with dimensions of each output is shown in Fig. FIGREF3. Proposed Model and Experimentation ::: Experimental Setup Pre-trained word embeddings on massive data, such as GloVe BIBREF11, give boost to predictive performance for multi-class classification BIBREF12. However, such embeddings are limited to English language only with no equivalence for Roman Urdu. Therefore, in this study, we avoid using any word-based pre-trained embeddings to give equal treatment to words of each language. We perform three kinds of experiments. (1) Embedding matrix is constructed using ELMo embeddings BIBREF13, which utilizes characters to form word vectors and produces a word vector with $d = 1024$. We call this variation of the model McM$_\textsubscript {E}$. (2) Embedding matrix is initialized randomly for each word with word vector of size $d = 300$. We refer this particular model as McM$_\textsubscript {R}$. (3) We train domain specific embeddings using word2vec with word vector of size $d = 300$ as suggested in original study BIBREF14. We refer to this particular model as McM$_\textsubscript {D}$. Furthermore, we also introduce soft-attention BIBREF6 between two layers of CNN and LSTM (in respective feature learner) to evaluate effect of attention on bilingual text classification. Attention mechanism “highlights" (assigns more weight) a particular word that contributes more towards correct classification. We refer to attention based experiments with subscript $A$ for all three embedding initializations. This way, a total of 6 experiments are performed with different variations of the proposed model. To mitigate effect of random initialization of network weights, we fix the random seed across all experiments. We train each model for 20 epochs and create a checkpoint at epoch with best predictive performance on test split. We re-implement the model proposed in BIBREF3, and use it as a baseline for our problem. The rationale behind choosing this particular model as a baseline is it's proven good predictive performance on multilingual text classification. For McM, the choices of number of convolutional filters, number of hidden units in first dense layer, number of hidden units in second dense layer, and recurrent units for LSTM are made empirically. 
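For the domain-specific variant, such embeddings can be trained with gensim's word2vec implementation; the sketch below is illustrative, and the corpus path and training options are assumptions rather than the settings used in the experiments.

```python
# Train domain-specific 300-d word2vec embeddings on the SMS corpus (McM_D variant).
from gensim.models import Word2Vec

# Hypothetical pre-tokenized corpus: one SMS per line, whitespace-tokenized.
with open("sms_corpus.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

w2v = Word2Vec(
    sentences,
    vector_size=300,   # d = 300; the argument is named `size` in gensim < 4.0
    window=5,          # assumption
    min_count=1,       # assumption: keep rare Roman Urdu spelling variants
    workers=4,
)

# Lookup table used to initialize the embedding layer of the classifier.
embedding_table = {word: w2v.wv[word] for word in w2v.wv.index_to_key}
```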
Rest of the hyperparameters were selected by performing grid search using $20\%$ stratified validation set from training set on McM$_\textsubscript {R}$. Available choices and final selected parameters are mentioned in Table TABREF18. These choices remained same for all experiments and the validation set was merged back into training set. Proposed Model and Experimentation ::: Evaluation Metrics We employed the standard metrics that are widely adapted in the literature for measuring multi-class classification performance. These metrics are accuracy, precision, recall, and F1-score, where latter three can be computed using micro-average or macro-average strategies BIBREF15. In micro-average strategy, each instance holds equal weight and outcomes are aggregated across all classes to compute a particular metric. This essentially means that the outcome would be influenced by the frequent class, if class distribution is skewed. In macro-average however, metrics for each class are calculated separately and then averaged, irrespective of their class label occurrence ratio. This gives each class equal weight instead of each instance, consequently favoring the under-represented classes. In our particular dataset, it is more plausible to favor smaller classes (i.e., other than “Appreciation" and “Satisfied") to detect potential complaints. Therefore, we choose to report macro-average values for precision, recall, and F1-score which are defined by (DISPLAY_FORM20), (DISPLAY_FORM21), and (DISPLAY_FORM22) respectively. Results and Discussion Before evaluating the McM, we first tested the baseline model on our dataset. Table TABREF23 presents results of baseline and all variations of our experiments. We focus our discussion on F1-score as accuracy is often misleading for dataset with unbalanced class distribution. However, for completeness sake, all measures are reported. It is observed from the results that baseline model performs worst among all the experiments. The reason behind this degradation in performance can be traced back to the nature of the texts in the datasets (i.e., datasets used in original paper of baseline model BIBREF3 and in our study). The approach in base model measure the performance of the model on multilingual dataset in which there is no code-switching involved. The complete text belongs to either one language or the other. However, in our case, the SMS text can have code-switching between two language, variation of spelling, or non-standard grammar. Baseline model is simple 1 layered CNN model that is unable to tackle such challenges. On the other hand, McM learns the features from multiple perspectives, hence feature representations are richer, which consequently leads to a superior predictive performance. As every learner in McM is also supervised, all 4 components of the proposed model (i.e., stacked-CNN learner, stacked-LSTM learner, LSTM-learner, and discriminator) can also be compared with each other. In our experiments, the best performing variation of the proposed model is McM$_\textsubscript {D}$. On this particular setting, discriminator is able to achieve an F1-score of $0.69$ with precision and recall values of $0.72$ and $0.68$ respectively. Other components of McM also show the highest stats for all performance measures. However, for McM$_\textsubscript {DA}$, a significant reduction in performance is observed, although, attention-based models have been proven to show improvement in performance BIBREF6. 
Investigating the reason behind this drop in performance is beyond the scope of this study. The model variations trained on ELMo embedding have second highest performance. Discriminator of McM$_\textsubscript {E}$ achieves an F1-score of $0.66$, beating other learners in this experiment. However, reduction in performance is persistent when attention is used for McM$_\textsubscript {EA}$. Regarding the experiments with random embedding initialization, McM$_\textsubscript {R}$ shows similar performance to McM$_\textsubscript {EA}$, while McM$_\textsubscript {RA}$ performs the worst. It is worth noting that in each experiment, discriminator network stays on top or performs equally as compared to other components in terms of F1-score. This is indication that discriminator network is able to learn richer representations of text as compared to methods where only single feature learner is deployed. Furthermore, the results for testing error for each component (i.e., 3 learners and a discriminator network) for all 4 variations of the proposed model are presented in Fig. FIGREF24. It is evident that the least error across all components is achieved by McM$_\textsubscript {D}$ model. Turning now to individual component performance, in ELMo embeddings based two models, lowest error is achieved by discriminator network, closely followed by stacked LSTM learner and stacked-CNN learner, while LSTM learner has the highest error. As far as model variations with random embeddings initializations are concerned, most interesting results are observed. As shown in subplot (c) and (d) in Fig. FIGREF24, McM$_\textsubscript {R}$ and McM$_\textsubscript {RA}$ tend to overfit. After second epoch, the error rate for all components of these two variations tend to increase drastically. However, it shows minimum error for discriminator in both variations, again proving that the features learned through multiple cascades are more robust and hold greater discriminative power. Note that in all 6 variations of experiments, the error of discriminator network is the lowest as compared to other components of McM. Hence it can be deduced that learning features through multiple perspectives and aggregating them for final prediction is more fruitful as compared to single method of learning. Concluding Remarks In this work, a new large-scale dataset and a novel deep learning architecture for multi-class classification of bilingual (English-Roman Urdu) text with code-switching is presented. The dataset is intended for enhancement of petty corruption detection in public offices and provides grounds for future research in this direction. While deep learning architecture is proposed for multi-class classification of bilingual SMS without utilizing any external resource. Three word embedding initialization techniques and soft-attention mechanism is also investigated. The observations from extensive experimentation led us to conclude that: (1) word embeddings vectors generated through characters tend to favor bilingual text classification as compared to random embedding initialization, (2) the attention mechanism tend to decrease the predictive performance of the model, irrespective of embedding types used, (3) using features learned through single perspective yield poor performance for bilingual text with code-switching, (4) training domain specific embeddings on a large corpus and using them to train the model achieves the highest performance. 
With regard to future work, we intend to investigate the reason behind the degradation of model performance when soft-attention is used.
Appreciation, Satisfied, Peripheral complaint, Demanded inquiry, Corruption, Lagged response, Unresponsive, Medicine payment, Adverse behavior, Grievance ascribed and Obnoxious/irrelevant
54c9147ffd57f1f7238917b013444a9743f0deb8
54c9147ffd57f1f7238917b013444a9743f0deb8_0
Q: Which are the sequence model architectures this method can be transferred across? Text: Introduction Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by those neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4. Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not previously segmented. Thus, one common practice in Chinese NER is first performing word segmentation using an existing CWS system and then applying a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment the query sequence. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the following NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)" as an example, where “/" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)" and “长江大桥 (Yangtze River Bridge)" as two entities of the location type. Instead, it is possible to incorrectly treat “南京 (Nanjing)" as a location entity and predict “江大桥 (Daqiao Jiang)" to be a person's name. Therefore, some works resort to performing Chinese NER directly on the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0. A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed to incorporating word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for the character if it matches multiple words of the lexicon, they proposed to preserving all matched words of the character, leaving the following NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparative or better performance on Chinese NER over existing methods. Although successful, there exists a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed. That is, its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets. In this work, we aim to find a easier way to achieve the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence to the character-based NER model. The first principle of our method design is to achieve a fast inference speed. To this end, we propose to encoding the matched words, obtained from the lexicon, into the representations of characters. 
Compared with Lattice-LSTM, this method is more concise and easier to implement. It can avoid complicated model architecture design thus has much faster inference speed. It can also be quickly adapted to any appropriate neural architectures without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to successfully introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER. Generic Character-based Neural Architecture for Chinese NER In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below. Generic Character-based Neural Architecture for Chinese NER ::: Character Representation Layer For a character-based Chinese NER model, the smallest unit of a sentence is a character and the sentence is seen as a character sequence $s=\lbrace c_1, \cdots , c_n\rbrace \in \mathcal {V}_c$, where $\mathcal {V}_c$ is the character vocabulary. Each character $c_i$ is represented using a dense vector (embedding): where $\mathbf {e}^{c}$ denotes the character embedding lookup table. Generic Character-based Neural Architecture for Chinese NER ::: Character Representation Layer ::: Char + bichar. In addition, BIBREF0 has proved that character bigrams are useful for representing characters, especially for those methods not use word information. Therefore, it is common to augment the character representation with bigram information by concatenating bigram embeddings with character embeddings: where $\mathbf {e}^{b}$ denotes the bigram embedding lookup table, and $\oplus $ denotes the concatenation operation. The sequence of character representations $\mathbf {\mathrm {x}}_i^c$ form the matrix representation $\mathbf {\mathrm {x}}^s=\lbrace \mathbf {\mathrm {x}}_1^c, \cdots , \mathbf {\mathrm {x}}_n^c\rbrace $ of $s$. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: LSTM-based The bidirectional long-short term memory network (BiLSTM) is one of the most commonly used architectures for sequence modeling BIBREF10, BIBREF3, BIBREF11. It contains two LSTM BIBREF12 cells that model the sequence in the left-to-right (forward) and right-to-left (backward) directions with two distinct sets of parameters. Here, we precisely show the definition of the forward LSTM: where $\sigma $ is the element-wise sigmoid function and $\odot $ represents element-wise product. 
$\mathbf {\mathrm {\mathrm {W}}} \in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h\times (k_h+k_w)}}$ and $\mathbf {\mathrm {\mathrm {b}}}\in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h}}$ are trainable parameters. The backward LSTM shares the same definition as the forward one but in an inverse sequence order. The concatenated hidden states at the $i^{th}$ step of the forward and backward LSTMs $\mathbf {\mathrm {h}}_i=[\overrightarrow{\mathbf {\mathrm {h}}}_i \oplus \overleftarrow{\mathbf {\mathrm {h}}}_i]$ forms the context-dependent representation of $c_i$. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: CNN-based Another popular architecture for sequence modeling is the convolution network BIBREF13, which has been proved BIBREF14 to be effective for Chinese NER. In this work, we apply a convolutional layer to model trigrams of the character sequence and gradually model its multigrams by stacking multiple convolutional layers. Specifically, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $\mathbf {\mathrm {F}}^l \in \mathbb {R}^{k_l \times k_c \times 3}$ denote the corresponding filter used in this layer. To obtain the hidden representation $\mathbf {\mathrm {h}}^{l+1}_i$ of $c_i$ in the $(l+1)^{th}$ layer, it takes the convolution of $\mathbf {\mathrm {F}}^l$ over the 3-gram representation: where $\mathbf {\mathrm {h}}^l_{<i-1, i+1>} = [\mathbf {\mathrm {h}}^l_{i-1}; \mathbf {\mathrm {h}}^l_{i}; \mathbf {\mathrm {h}}^l_{i+1}]$ and $\langle A,B \rangle _i=\mbox{Tr}(AB[i, :, :]^T)$. This operation applies $L$ times, obtaining the final context-dependent representation, $\mathbf {\mathrm {h}}_i = \mathbf {\mathrm {h}}_i^L$, of $c_i$. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: Transformer-based Transformer BIBREF15 is originally proposed for sequence transduction, on which it has shown several advantages over the recurrent or convolutional neural networks. Intrinsically, it can also be applied to the sequence labeling task using only its encoder part. In similar, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $f^l$ denote a feedforward module used in this layer. To obtain the hidden representation matrix $\mathbf {\mathrm {h}}^{l+1}$ of $s$ in the $(l+1)^{th}$ layer, it takes the self-attention of $\mathbf {\mathrm {h}}^l$: where $d^l$ is the dimension of $\mathbf {\mathrm {h}}^l_i$. This process applies $L$ times, obtaining $\mathbf {\mathrm {h}}^L$. After that, the position information of each character $c_i$ is introduced into $\mathbf {\mathrm {h}}^L_i$ to obtain its final context-dependent representation $\mathbf {\mathrm {h}}_i$: where $PE_i=sin(i/1000^{2j/d^L}+j\%2\cdot \pi /2)$. We recommend you to refer to the excellent guides “The Annotated Transformer.” for more implementation detail of this architecture. 
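For reference, a single encoder block of this kind can be written compactly in Keras. The sketch below is a generic transformer encoder layer over the character representations, with hypothetical dimensions, and is not the exact configuration used in the experiments.

```python
# One transformer encoder block mapping character representations h^l to h^(l+1).
from tensorflow.keras import layers, Model

def transformer_encoder_block(h, num_heads=4, ff_dim=256, drop=0.1):
    d_model = h.shape[-1]
    # Multi-head self-attention over the character sequence.
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model // num_heads)(h, h)
    attn = layers.Dropout(drop)(attn)
    h = layers.LayerNormalization()(h + attn)      # residual connection + layer norm
    # Position-wise feed-forward module f^l.
    ff = layers.Dense(ff_dim, activation="relu")(h)
    ff = layers.Dense(d_model)(ff)
    ff = layers.Dropout(drop)(ff)
    return layers.LayerNormalization()(h + ff)     # residual connection + layer norm

# Stack L = 2 blocks over (batch, seq_len, d) character representations.
x = layers.Input(shape=(None, 128))                # hypothetical representation size d = 128
h = transformer_encoder_block(transformer_encoder_block(x))
encoder = Model(x, h)
```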
Generic Character-based Neural Architecture for Chinese NER ::: Label Inference Layer On top of the sequence modeling layer, a sequential conditional random field (CRF) BIBREF16 layer is applied to perform label inference for the character sequence as a whole: where $\mathcal {Y}_s$ denotes all possible label sequences of $s$, $\phi _{t}({y}^\prime , {y}|\mathbf {\mathrm {s}})=\exp (\mathbf {w}^T_{{y}^\prime , {y}} \mathbf {\mathrm {h}}_t + b_{{y}^\prime , {y}})$, where $\mathbf {w}_{{y}^\prime , {y}}$ and $ b_{{y}^\prime , {y}}$ are trainable parameters corresponding to the label pair $({y}^\prime , {y})$, and $\mathbf {\theta }$ denotes model parameters. For label inference, it searches for the label sequence $\mathbf {\mathrm {y}}^{*}$ with the highest conditional probability given the input sequence ${s}$: which can be efficiently solved using the Viterbi algorithm BIBREF17. Lattice-LSTM for Chinese NER Lattice-LSTM designs to incorporate word lexicon into the character-based neural sequence labeling model. To achieve this purpose, it first performs lexicon matching on the input sentence. It will add an directed edge from $c_i$ to $c_j$, if the sub-sequence $\lbrace c_i, \cdots , c_j\rbrace $ of the sentence matches a word of the lexicon for $i < j$. And it preserves all lexicon matching results on a character by allowing the character to connect with multiple characters. Concretely, for a sentence $\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $, if both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match a word of the lexicon, it will add a directed edge from $c_1$ to $c_4$ and a directed edge from $c_2$ to $c_4$. This practice will turn the input form of the sentence from a chained sequence into a graph. To model the graph-based input, Lattice-LSTM accordingly modifies the LSTM-based sequence modeling layer. Specifically, let $s_{<*, j>}$ denote the list of sub-sequences of a sentence $s$ that match the lexicon and end with $c_j$, $\mathbf {\mathrm {h}}_{<*, j>}$ denote the corresponding hidden state list $\lbrace \mathbf {\mathrm {h}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $, and $\mathbf {\mathrm {c}}_{<*, j>}$ denote the corresponding memory cell list $\lbrace \mathbf {\mathrm {c}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $. In Lattice-LSTM, the hidden state $\mathbf {\mathrm {h}}_j$ and memory cell $\mathbf {\mathrm {c}}_j$ of $c_j$ are now updated by: where $f$ is a simplified representation of the function used by Lattice-LSTM to perform memory update. Note that, in the updating process, the inputs now contains current step character representation $\mathbf {\mathrm {x}}_j^c$, last step hidden state $\mathbf {\mathrm {h}}_{j-1}$ and memory cell $\mathbf {\mathrm {c}}_{j-1}$, and lexicon matched sub-sequences $s_{<*, j>}$ and their corresponding hidden state and memory cell lists, $\mathbf {\mathrm {h}}_{<*, j>}$ and $\mathbf {\mathrm {c}}_{<*, j>}$. We refer you to the paper of Lattice-LSTM BIBREF0 for more detail of the implementation of $f$. A problem of Lattice-LSTM is that its speed of sequence modeling is much slower than the normal LSTM architecture since it has to additionally model $s_{<*, j>}$, $\mathbf {\mathrm {h}}_{<*, j>}$, and $\mathbf {\mathrm {c}}_{<*, j>}$ for memory update. In addition, considering the implementation of $f$, it is hard for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). 
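The lexicon matching itself is inexpensive; a minimal sketch of enumerating all matched sub-sequences of a character sequence is given below (the toy lexicon is illustrative). The cost of Lattice-LSTM comes not from this step but from feeding the resulting graph into the modified LSTM update.

```python
# Enumerate every sub-sequence of the sentence that matches a word in the lexicon.
def lexicon_matches(chars, lexicon, max_word_len=4):
    """Return (start, end) spans, inclusive, whose characters form a lexicon word."""
    spans = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_word_len, len(chars))):
            if "".join(chars[i:j + 1]) in lexicon:
                spans.append((i, j))
    return spans

# Toy example with an illustrative lexicon.
sentence = list("南京市长江大桥")
lexicon = {"南京", "南京市", "市长", "长江", "大桥", "长江大桥", "江大桥"}
print(lexicon_matches(sentence, lexicon))
# [(0, 1), (0, 2), (2, 3), (3, 4), (3, 6), (4, 6), (5, 6)]
```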
This raises the necessity to design a simpler way to achieve the function of Lattice-LSTM for incorporating the word lexicon into the character-based NER model. Proposed Method In this section, we introduce our method, which aims to keep the merit of Lattice-LSTM and at the same time, make the computation efficient. We will start the description of our method from our thinking on Lattice-LSTM. From our view, the advance of Lattice-LSTM comes from two points. The first point is that it preserve all possible matching words for each character. This can avoid the error propagation introduced by heuristically choosing a matching result of the character to the NER system. The second point is that it can introduce pre-trained word embeddings to the system, which bring great help to the final performance. While the disadvantage of Lattice-LSTM is that it turns the input form of a sentence from a chained sequence into a graph. This will greatly increase the computational cost for sentence modeling. Therefore, the design of our method should try to keep the chained input form of the sentence and at the same time, achieve the above two advanced points of Lattice-LSTM. With this in mind, our method design was firstly motivated by the Softword technique, which was originally used for incorporating word segmentation information into downstream tasks BIBREF18, BIBREF19. Precisely, the Softword technique augments the representation of a character with the embedding of its corresponding segmentation label: Here, $seg(c_j) \in \mathcal {Y}_{seg}$ denotes the segmentation label of the character $c_j$ predicted by the word segmentor, $\mathbf {e}^{seg}$ denotes the segmentation label embedding lookup table, and commonly $\mathcal {Y}_{seg}=\lbrace \text{B}, \text{M}, \text{E}, \text{S}\rbrace $ with B, M, E indicating that the character is the beginning, middle, and end of a word, respectively, and S indicating that the character itself forms a single-character word. The first idea we come out based on the Softword technique is to construct a word segmenter using the lexicon and allow a character to have multiple segmentation labels. Take the sentence $s=\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $ as an example. If both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_3, c_4\rbrace $ match a word of the lexicon, then the segmentation label sequence of $s$ using the lexicon is $segs(s)=\lbrace \lbrace \text{B}\rbrace , \lbrace \text{M}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{E}\rbrace , \lbrace \text{O}\rbrace \rbrace $. Here, $segs(s)_1=\lbrace \text{B}\rbrace $ indicates that there is at least one sub-sequence of $s$ matching a word of the lexicon and beginning with $c_1$, $segs(s)_3=\lbrace \text{B}, \text{M}\rbrace $ means that there is at least one sub-sequence of $s$ matching the lexicon and beginning with $c_3$ and there is also at least one lexicon matched sub-sequence in the middle of which $c_3$ occurs, and $segs(s)_5=\lbrace \text{O}\rbrace $ means that there is no sub-sequence of $s$ that matches the lexicon and contains $c_5$. The character representation is then obtained by: where $\mathbf {e}^{seg}(segs(s)_j)$ is a 5-dimensional binary vector with each dimension corresponding to an item of $\lbrace \text{B, M, E, S, O\rbrace }$. We call this method as ExSoftword in the following. However, through the analysis of ExSoftword, we can find out that the ExSoftword method cannot fully inherit the two merits of Lattice-LSTM. 
Firstly, it cannot not introduce pre-trained word embeddings. Secondly, though it tries to keep all the lexicon matching results by allowing a character to have multiple segmentation labels, it still loses lots of information. In many cases, we cannot restore the matching results from the segmentation label sequence. Consider the case that in the sentence $s=\lbrace c_1, c_2, c_3, c_4\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match the lexicon. In this case, $segs(s) = \lbrace \lbrace \text{B}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{M}, \text{E}\rbrace , \lbrace \text{E}\rbrace \rbrace $. However, based on $segs(s)$ and $s$, we cannot say that it is $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ matching the lexicon since we will obtain the same segmentation label sequence when $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2,c_3\rbrace $ match the lexicon. To this end, we propose to preserving not only the possible segmentation labels of a character but also their corresponding matched words. Specifically, in this improved method, each character $c$ of a sentence $s$ corresponds to four word sets marked by the four segmentation labels “BMES". The word set $\rm {B}(c)$ consists of all lexicon matched words on $s$ that begin with $c$. Similarly, $\rm {M}(c)$ consists of all lexicon matched words in the middle of which $c$ occurs, $\rm {E}(c)$ consists of all lexicon matched words that end with $c$, and $\rm {S}(c)$ is the single-character word comprised of $c$. And if a word set is empty, we will add a special word “NONE" to it to indicate this situation. Consider the sentence $s=\lbrace c_1, \cdots , c_5\rbrace $ and suppose that $\lbrace c_1, c_2\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $, $\lbrace c_2, c_3, c_4\rbrace $, and $\lbrace c_2, c_3, c_4, c_5\rbrace $ match the lexicon. Then, for $c_2$, $\rm {B}(c_2)=\lbrace \lbrace c_2, c_3, c_4\rbrace , \lbrace c_2, c_3, c_4, c_5\rbrace \rbrace $, $\rm {M}(c_2)=\lbrace \lbrace c_1, c_2, c_3\rbrace \rbrace $, $\rm {E}(c_2)=\lbrace \lbrace c_1, c_2\rbrace \rbrace $, and $\rm {S}(c_2)=\lbrace NONE\rbrace $. In this way, we can now introduce the pre-trained word embeddings and moreover, we can exactly restore the matching results from the word sets of each character. The next step of the improved method is to condense the four word sets of each character into a fixed-dimensional vector. In order to retain information as much as possible, we choose to concatenate the representations of the four word sets to represent them as a whole and add it to the character representation: Here, $\mathbf {v}^s$ denotes the function that maps a single word set to a dense vector. This also means that we should map each word set into a fixed-dimensional vector. To achieve this purpose, we first tried the mean-pooling algorithm to get the vector representation of a word set $\mathcal {S}$: Here, $\mathbf {e}^w$ denotes the word embedding lookup table. However, the empirical studies, as depicted in Table TABREF31, show that this algorithm performs not so well . Through the comparison with Lattice-LSTM, we find out that in Lattice-LSTM, it applies a dynamic attention algorithm to weigh each matched word related to a single character. Motivated by this practice, we propose to weighing the representation of each word in the word set to get the pooling representation of the word set. 
However, considering the computational efficiency, we do not want to apply a dynamical weighing algorithm, like attention, to get the weight of each word. With this in mind, we propose to using the frequency of the word as an indication of its weight. The basic idea beneath this algorithm is that the more times a character sequence occurs in the data, the more likely it is a word. Note that, the frequency of a word is a static value and can be obtained offline. This can greatly accelerate the calculation of the weight of each word (e.g., using a lookup table). Specifically, let $w_c$ denote the character sequence constituting $w$ and $z(w)$ denote the frequency of $w_c$ occurring in the statistic data set (in this work, we combine training and testing data of a task to construct the statistic data set. Of course, if we have unlabelled data for the task, we can take the unlabeled data as the statistic data set). Note that, we do not add the frequency of $w$ if $w_c$ is covered by that of another word of the lexicon in the sentence. For example, suppose that the lexicon contains both “南京 (Nanjing)" and “南京市 (Nanjing City)". Then, when counting word frequency on the sequence “南京市长江大桥", we will not add the frequency of “南京" since it is covered by “南京市" in the sequence. This can avoid the situation that the frequency of “南京" is definitely higher than “南京市". Finally, we get the weighted representation of the word set $\mathcal {S}$ by: where Here, we perform weight normalization on all words of the four word sets to allow them compete with each other across sets. Further, we have tried to introducing a smoothing to the weight of each word to increase the weights of infrequent words. Specifically, we add a constant $c$ into the frequency of each word and re-define $\mathbf {v}^s$ by: where We set $c$ to the value that there are 10% of training words occurring less than $c$ times within the statistic data set. In summary, our method mainly contains the following four steps. Firstly, we scan each input sentence with the word lexicon, obtaining the four 'BMES' word sets for each character of the sentence. Secondly, we look up the frequency of each word counted on the statistic data set. Thirdly, we obtain the vector representation of the four word sets of each character according to Eq. (DISPLAY_FORM22), and add it to the character representation according to Eq. (DISPLAY_FORM20). Finally, based on the augmented character representations, we perform sequence labeling using any appropriate neural sequence labeling model, like LSTM-based sequence modeling layer + CRF label inference layer. Experiments ::: Experiment Design Firstly, we performed a development study on our method with the LSTM-based sequence modeling layer, in order to compare the implementations of $\mathbf {v}^s$ and to determine whether or not to use character bigrams in our method. Decision made in this step will be applied to the following experiments. Secondly, we verified the computational efficiency of our method compared with Lattice-LSTM and LR-CNN BIBREF20, which is a followee of Lattice-LSTM for faster inference speed. Thirdly, we verified the effectiveness of our method by comparing its performance with that of Lattice-LSTM and other comparable models on four benchmark Chinese NER data sets. Finally, we verified the applicability of our method to different sequence labeling models. 
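For illustration, a minimal sketch of these four steps is given below: match the lexicon, build the "BMES" word sets for each character, weight each matched word by its precomputed frequency, and concatenate the pooled word-set vectors to the character representation. The embeddings, lexicon, and frequency counts are placeholders, the counts are assumed to be gathered offline with the coverage rule described above, and the normalization is a simplified version of the weighting scheme.

```python
import numpy as np

def bmes_word_sets(chars, lexicon, max_word_len=4):
    """Build the B/M/E/S sets of lexicon-matched words for every character."""
    sets = [{"B": set(), "M": set(), "E": set(), "S": set()} for _ in chars]
    for i in range(len(chars)):
        for j in range(i, min(i + max_word_len, len(chars))):
            word = "".join(chars[i:j + 1])
            if word not in lexicon:
                continue
            if i == j:
                sets[i]["S"].add(word)
            else:
                sets[i]["B"].add(word)
                sets[j]["E"].add(word)
                for k in range(i + 1, j):
                    sets[k]["M"].add(word)
    return sets

def augment_char(char_vec, char_sets, word_emb, freq, dim):
    """Concatenate frequency-weighted pooled vectors of the four word sets to the char vector.

    Weights are normalized jointly over the four sets so that matched words compete with
    each other across sets; an empty set (the "NONE" case) contributes a zero vector here.
    """
    total = sum(freq.get(w, 1) for s in char_sets.values() for w in s) or 1.0
    pooled = []
    for tag in "BMES":
        words = char_sets[tag]
        if not words:
            pooled.append(np.zeros(dim))
            continue
        pooled.append(sum(freq.get(w, 1) * word_emb[w] for w in words) / total)
    return np.concatenate([char_vec] + pooled)

# Toy usage: embeddings, lexicon, and frequency counts are placeholders.
dim = 50
rng = np.random.default_rng(0)
lexicon = {"南京", "南京市", "长江大桥", "大桥"}
word_emb = {w: rng.normal(size=dim) for w in lexicon}
freq = {"南京市": 120, "南京": 80, "长江大桥": 15, "大桥": 40}

chars = list("南京市长江大桥")
sets = bmes_word_sets(chars, lexicon)
char_vecs = rng.normal(size=(len(chars), dim))     # placeholder character embeddings
augmented = np.stack([augment_char(char_vecs[i], sets[i], word_emb, freq, dim)
                      for i in range(len(chars))])
print(augmented.shape)                              # (7, 250): d_char + 4 * d_word
```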
Experiments ::: Experiment Setup Most experimental settings in this work follow the protocols of Lattice-LSTM BIBREF0, including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-completed, we concisely illustrate some primary settings of this work. Experiments ::: Experiment Setup ::: Datasets The methods were evaluated on four Chinese NER datasets, including OntoNotes BIBREF21, MSRA BIBREF22, Weibo NER BIBREF23, BIBREF24, and Resume NER BIBREF0. OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data. For OntoNotes, gold segmentation is also available for development and testing data. Weibo NER and Resume NER are from social media and resume, respectively. There is no gold standard segmentation in these two datasets. Table TABREF26 shows statistic information of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words. Experiments ::: Experiment Setup ::: Implementation Detail When applying the LSTM-based sequence modeling layer, we followed most implementation protocols of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and LSTM layer number. The hidden size was set to 100 for Weibo and 256 for the rest three datasets. The learning rate was set to 0.005 for Weibo and Resume and 0.0015 for OntoNotes and MSRA with Adamax BIBREF25. When applying the CNN- and transformer- based sequence modeling layers, most hyper-parameters were the same as those used in the LSTM-based model. In addition, the layer number $L$ for the CNN-based model was set to 4, and that for transformer-based model was set to 2 with h=4 parallel attention layers. Kernel number $k_f$ of the CNN-based model was set to 512 for MSRA and 128 for the other datasets in all layers. Experiments ::: Development Experiments In this experiment, we compared the implementations of $\mathbf {v}^s$ with the LSTM-based sequence modeling layer. In addition, we study whether or not character bigrams can bring improvement to our method. Table TABREF31 shows performance of three implementations of $\mathbf {v}^s$ without using character bigrams. From the table, we can see that the weighted pooling algorithm performs generally better than the other two implementations. Of course, we may obtain better results with the smoothed weighted pooling algorithm by reducing the value of $c$ (when $c=0$, it is equivalent to the weighted pooling algorithm). We did not do so for two reasons. The first one is to guarantee the generality of our system for unexplored tasks. The second one is that the performance of the weighted pooling algorithm is good enough compared with other state-of-the-art baselines. Therefore, in the following experiments, we in default applied the weighted pooling algorithm to implement $\mathbf {v}^s$. Figure FIGREF32 shows the F1-score of our method against the number of training iterations when using character bigram or not. From the figure, we can see that additionally introducing character bigrams cannot bring considerable improvement to our method. A possible explanation of this phenomenon is that the introduced word information by our proposed method has covered the bichar information. Therefore, in the following experiments, we did not use bichar in our method. 
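For completeness, the smoothing constant $c$ discussed above can be approximated from the word-frequency distribution; a small sketch with a placeholder frequency dictionary (one way to realize the 10% rule, not necessarily the exact procedure used):

```python
import numpy as np

def choose_smoothing_constant(word_freq, quantile=0.10):
    """Pick c so that roughly `quantile` of the training words occur fewer than c times."""
    return float(np.quantile(np.array(list(word_freq.values())), quantile))

def smoothed_weights(words, word_freq, c):
    """Smoothed frequency weights; c = 0 recovers the plain weighted pooling."""
    z = np.array([word_freq.get(w, 0) + c for w in words], dtype=float)
    return z / z.sum()

word_freq = {"南京市": 120, "南京": 80, "长江大桥": 15, "大桥": 40}   # placeholder counts
c = choose_smoothing_constant(word_freq)
print(c, smoothed_weights(["南京市", "长江大桥"], word_freq, c))
```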
Experiments ::: Computational Efficiency Study Table TABREF34 shows the inference speed of our method when implementing the sequnece modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. The speed was evaluated by average sentences per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used an CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed than those with the LSTM-based and Transformer-based sequence model layer. Experiments ::: Effectiveness Study Table TABREF37$-$TABREF43 show the performance of method with the LSTM-based sequence modeling layer compared with Lattice-LSTM and other comparative baselines. Experiments ::: Effectiveness Study ::: OntoNotes. Table TABREF37 shows results on OntoNotes, which has gold segmentation for both training and testing data. The methods of the “Gold seg" and "Auto seg" group are word-based that build on the gold word segmentation results and the automatic segmentation results, respectively. The automatic segmentation results were generated by the segmenter trained on training data of OntoNotes. Methods of the "No seg" group are character-based. From the table, we can obtain several informative observations. First, by replacing the gold segmentation with the automatically generated segmentation, the F1-score of the Word-based (LSTM) + char + bichar model decreased from 75.77% to 71.70%. This shows the problem of the practice that treats the predicted word segmentation result as the true one for the word-based Chinese NER. Second, the Char-based (LSTM)+bichar+ExSoftword model achieved a 71.89% to 72.40% improvement over the Char-based (LSTM)+bichar+softword baseline on the F1-score. This indicates the feasibility of the naive extension of ExSoftword to softword. However, it still greatly underperformed Lattice-LSTM, showing its deficiency in utilizing word information. Finally, our proposed method, which is a further extension of Exsoftword, obtained a statistically significant improvement over Lattice-LSTM and even performed similarly to those word-based methods with gold segmentation, verifying its effectiveness on this data set. Experiments ::: Effectiveness Study ::: MSRA. Table TABREF40 shows results on MSRA. The word-based methods were built on the automatic segmentation results generated by the segmenter trained on training data of MSRA. Compared methods included the best statistical models on this data set, which leveraged rich handcrafted features BIBREF28, BIBREF29, BIBREF30, character embedding features BIBREF31, and radical features BIBREF32. From the table, we observe that our method obtained a statistically significant improvement over Lattice-LSTM and other comparative baselines on the recall and F1-score, verifying the effectiveness of our method on this data set. Experiments ::: Effectiveness Study ::: Weibo/Resume. Table TABREF42 shows results on Weibo NER, where NE, NM, and Overall denote F1-scores for named entities, nominal entities (excluding named entities) and both, respectively. 
The existing state-of-the-art system BIBREF19 explored rich embedding features, cross-domain data, and semi-supervised data. From the table, we can see that our proposed method achieved considerable improvement over the compared baselines on this data set. Table TABREF43 shows results on Resume. Consistent with observations on the other three tested data sets, our proposed method significantly outperformed Lattice-LSTM and the other comparable methods on this data set. Experiments ::: Transferability Study Table TABREF46 shows the performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer-based architectures. In addition, our method with each of the different sequence modeling layers consistently outperformed its corresponding ExSoftword baseline. This shows that our method is applicable to different neural sequence modeling architectures for exploiting lexicon information. Conclusion In this work, we address the computational efficiency of utilizing a word lexicon in Chinese NER. To achieve a high-performing NER system with fast inference speed, we proposed adding lexicon information to the character representations while keeping the input form of a sentence as a chained sequence. Experimental studies on four benchmark Chinese NER datasets show that our method obtains faster inference speed than the comparative methods while achieving high performance. They also show that our method can be applied to different neural sequence labeling models for Chinese NER.
The sequence model architectures to which this method is transferred are LSTM- and Transformer-based models
16f71391335a5d574f01235a9c37631893cd3bb0
16f71391335a5d574f01235a9c37631893cd3bb0_0
Q: What percentage of improvement in inference speed is obtained by the proposed method over the newest state-of-the-art methods? Text: Introduction Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by those neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4. Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not previously segmented. Thus, one common practice in Chinese NER is first performing word segmentation using an existing CWS system and then applying a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment the query sequence. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the following NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)" as an example, where “/" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)" and “长江大桥 (Yangtze River Bridge)" as two entities of the location type. Instead, it is possible to incorrectly treat “南京 (Nanjing)" as a location entity and predict “江大桥 (Daqiao Jiang)" to be a person's name. Therefore, some works resort to performing Chinese NER directly on the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0. A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed to incorporating word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for the character if it matches multiple words of the lexicon, they proposed to preserving all matched words of the character, leaving the following NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparative or better performance on Chinese NER over existing methods. Although successful, there exists a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed. That is, its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets. In this work, we aim to find a easier way to achieve the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence to the character-based NER model. The first principle of our method design is to achieve a fast inference speed. 
To this end, we propose to encoding the matched words, obtained from the lexicon, into the representations of characters. Compared with Lattice-LSTM, this method is more concise and easier to implement. It can avoid complicated model architecture design thus has much faster inference speed. It can also be quickly adapted to any appropriate neural architectures without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to successfully introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER. Generic Character-based Neural Architecture for Chinese NER In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below. Generic Character-based Neural Architecture for Chinese NER ::: Character Representation Layer For a character-based Chinese NER model, the smallest unit of a sentence is a character and the sentence is seen as a character sequence $s=\lbrace c_1, \cdots , c_n\rbrace \in \mathcal {V}_c$, where $\mathcal {V}_c$ is the character vocabulary. Each character $c_i$ is represented using a dense vector (embedding): where $\mathbf {e}^{c}$ denotes the character embedding lookup table. Generic Character-based Neural Architecture for Chinese NER ::: Character Representation Layer ::: Char + bichar. In addition, BIBREF0 has proved that character bigrams are useful for representing characters, especially for those methods not use word information. Therefore, it is common to augment the character representation with bigram information by concatenating bigram embeddings with character embeddings: where $\mathbf {e}^{b}$ denotes the bigram embedding lookup table, and $\oplus $ denotes the concatenation operation. The sequence of character representations $\mathbf {\mathrm {x}}_i^c$ form the matrix representation $\mathbf {\mathrm {x}}^s=\lbrace \mathbf {\mathrm {x}}_1^c, \cdots , \mathbf {\mathrm {x}}_n^c\rbrace $ of $s$. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: LSTM-based The bidirectional long-short term memory network (BiLSTM) is one of the most commonly used architectures for sequence modeling BIBREF10, BIBREF3, BIBREF11. It contains two LSTM BIBREF12 cells that model the sequence in the left-to-right (forward) and right-to-left (backward) directions with two distinct sets of parameters. 
Here, we precisely show the definition of the forward LSTM: where $\sigma $ is the element-wise sigmoid function and $\odot $ represents element-wise product. $\mathbf {\mathrm {\mathrm {W}}} \in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h\times (k_h+k_w)}}$ and $\mathbf {\mathrm {\mathrm {b}}}\in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h}}$ are trainable parameters. The backward LSTM shares the same definition as the forward one but in an inverse sequence order. The concatenated hidden states at the $i^{th}$ step of the forward and backward LSTMs $\mathbf {\mathrm {h}}_i=[\overrightarrow{\mathbf {\mathrm {h}}}_i \oplus \overleftarrow{\mathbf {\mathrm {h}}}_i]$ forms the context-dependent representation of $c_i$. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: CNN-based Another popular architecture for sequence modeling is the convolution network BIBREF13, which has been proved BIBREF14 to be effective for Chinese NER. In this work, we apply a convolutional layer to model trigrams of the character sequence and gradually model its multigrams by stacking multiple convolutional layers. Specifically, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $\mathbf {\mathrm {F}}^l \in \mathbb {R}^{k_l \times k_c \times 3}$ denote the corresponding filter used in this layer. To obtain the hidden representation $\mathbf {\mathrm {h}}^{l+1}_i$ of $c_i$ in the $(l+1)^{th}$ layer, it takes the convolution of $\mathbf {\mathrm {F}}^l$ over the 3-gram representation: where $\mathbf {\mathrm {h}}^l_{<i-1, i+1>} = [\mathbf {\mathrm {h}}^l_{i-1}; \mathbf {\mathrm {h}}^l_{i}; \mathbf {\mathrm {h}}^l_{i+1}]$ and $\langle A,B \rangle _i=\mbox{Tr}(AB[i, :, :]^T)$. This operation applies $L$ times, obtaining the final context-dependent representation, $\mathbf {\mathrm {h}}_i = \mathbf {\mathrm {h}}_i^L$, of $c_i$. Generic Character-based Neural Architecture for Chinese NER ::: Sequence Modeling Layer ::: Transformer-based Transformer BIBREF15 is originally proposed for sequence transduction, on which it has shown several advantages over the recurrent or convolutional neural networks. Intrinsically, it can also be applied to the sequence labeling task using only its encoder part. In similar, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $f^l$ denote a feedforward module used in this layer. To obtain the hidden representation matrix $\mathbf {\mathrm {h}}^{l+1}$ of $s$ in the $(l+1)^{th}$ layer, it takes the self-attention of $\mathbf {\mathrm {h}}^l$: where $d^l$ is the dimension of $\mathbf {\mathrm {h}}^l_i$. This process applies $L$ times, obtaining $\mathbf {\mathrm {h}}^L$. After that, the position information of each character $c_i$ is introduced into $\mathbf {\mathrm {h}}^L_i$ to obtain its final context-dependent representation $\mathbf {\mathrm {h}}_i$: where $PE_i=sin(i/1000^{2j/d^L}+j\%2\cdot \pi /2)$. We recommend you to refer to the excellent guides “The Annotated Transformer.” for more implementation detail of this architecture. 
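As a concrete illustration of the interface shared by the three sequence modeling layers, below is a minimal PyTorch-style sketch of the character representation layer followed by the BiLSTM variant. Layer sizes and names are our own choices for illustration, not those of the released implementation; the CNN- and Transformer-based variants would replace the BiLSTM module while keeping the same input/output shapes.

```python
import torch
import torch.nn as nn

class CharBiLSTMEncoder(nn.Module):
    """Character representation layer + BiLSTM sequence modeling layer.

    A minimal sketch of the generic architecture described above; the CNN- or
    Transformer-based variants would simply swap out self.bilstm while keeping
    the same (batch, seq_len, hidden_dim) output interface.
    """
    def __init__(self, vocab_size, char_dim=50, hidden_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, char_dim)   # e^c lookup table
        self.bilstm = nn.LSTM(char_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) indices of characters c_1..c_n
        x = self.char_emb(char_ids)      # (batch, seq_len, char_dim)
        h, _ = self.bilstm(x)            # (batch, seq_len, hidden_dim)
        return h                         # context-dependent representations h_i
```

The label inference layer described next consumes these hidden vectors $h_i$ to score label sequences.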
Generic Character-based Neural Architecture for Chinese NER ::: Label Inference Layer On top of the sequence modeling layer, a sequential conditional random field (CRF) BIBREF16 layer is applied to perform label inference for the character sequence as a whole: where $\mathcal {Y}_s$ denotes all possible label sequences of $s$, $\phi _{t}({y}^\prime , {y}|\mathbf {\mathrm {s}})=\exp (\mathbf {w}^T_{{y}^\prime , {y}} \mathbf {\mathrm {h}}_t + b_{{y}^\prime , {y}})$, where $\mathbf {w}_{{y}^\prime , {y}}$ and $ b_{{y}^\prime , {y}}$ are trainable parameters corresponding to the label pair $({y}^\prime , {y})$, and $\mathbf {\theta }$ denotes model parameters. For label inference, it searches for the label sequence $\mathbf {\mathrm {y}}^{*}$ with the highest conditional probability given the input sequence ${s}$: which can be efficiently solved using the Viterbi algorithm BIBREF17. Lattice-LSTM for Chinese NER Lattice-LSTM designs to incorporate word lexicon into the character-based neural sequence labeling model. To achieve this purpose, it first performs lexicon matching on the input sentence. It will add an directed edge from $c_i$ to $c_j$, if the sub-sequence $\lbrace c_i, \cdots , c_j\rbrace $ of the sentence matches a word of the lexicon for $i < j$. And it preserves all lexicon matching results on a character by allowing the character to connect with multiple characters. Concretely, for a sentence $\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $, if both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match a word of the lexicon, it will add a directed edge from $c_1$ to $c_4$ and a directed edge from $c_2$ to $c_4$. This practice will turn the input form of the sentence from a chained sequence into a graph. To model the graph-based input, Lattice-LSTM accordingly modifies the LSTM-based sequence modeling layer. Specifically, let $s_{<*, j>}$ denote the list of sub-sequences of a sentence $s$ that match the lexicon and end with $c_j$, $\mathbf {\mathrm {h}}_{<*, j>}$ denote the corresponding hidden state list $\lbrace \mathbf {\mathrm {h}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $, and $\mathbf {\mathrm {c}}_{<*, j>}$ denote the corresponding memory cell list $\lbrace \mathbf {\mathrm {c}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $. In Lattice-LSTM, the hidden state $\mathbf {\mathrm {h}}_j$ and memory cell $\mathbf {\mathrm {c}}_j$ of $c_j$ are now updated by: where $f$ is a simplified representation of the function used by Lattice-LSTM to perform memory update. Note that, in the updating process, the inputs now contains current step character representation $\mathbf {\mathrm {x}}_j^c$, last step hidden state $\mathbf {\mathrm {h}}_{j-1}$ and memory cell $\mathbf {\mathrm {c}}_{j-1}$, and lexicon matched sub-sequences $s_{<*, j>}$ and their corresponding hidden state and memory cell lists, $\mathbf {\mathrm {h}}_{<*, j>}$ and $\mathbf {\mathrm {c}}_{<*, j>}$. We refer you to the paper of Lattice-LSTM BIBREF0 for more detail of the implementation of $f$. A problem of Lattice-LSTM is that its speed of sequence modeling is much slower than the normal LSTM architecture since it has to additionally model $s_{<*, j>}$, $\mathbf {\mathrm {h}}_{<*, j>}$, and $\mathbf {\mathrm {c}}_{<*, j>}$ for memory update. In addition, considering the implementation of $f$, it is hard for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). 
This raises the necessity to design a simpler way to achieve the function of Lattice-LSTM for incorporating the word lexicon into the character-based NER model. Proposed Method In this section, we introduce our method, which aims to keep the merit of Lattice-LSTM and at the same time, make the computation efficient. We will start the description of our method from our thinking on Lattice-LSTM. From our view, the advance of Lattice-LSTM comes from two points. The first point is that it preserve all possible matching words for each character. This can avoid the error propagation introduced by heuristically choosing a matching result of the character to the NER system. The second point is that it can introduce pre-trained word embeddings to the system, which bring great help to the final performance. While the disadvantage of Lattice-LSTM is that it turns the input form of a sentence from a chained sequence into a graph. This will greatly increase the computational cost for sentence modeling. Therefore, the design of our method should try to keep the chained input form of the sentence and at the same time, achieve the above two advanced points of Lattice-LSTM. With this in mind, our method design was firstly motivated by the Softword technique, which was originally used for incorporating word segmentation information into downstream tasks BIBREF18, BIBREF19. Precisely, the Softword technique augments the representation of a character with the embedding of its corresponding segmentation label: Here, $seg(c_j) \in \mathcal {Y}_{seg}$ denotes the segmentation label of the character $c_j$ predicted by the word segmentor, $\mathbf {e}^{seg}$ denotes the segmentation label embedding lookup table, and commonly $\mathcal {Y}_{seg}=\lbrace \text{B}, \text{M}, \text{E}, \text{S}\rbrace $ with B, M, E indicating that the character is the beginning, middle, and end of a word, respectively, and S indicating that the character itself forms a single-character word. The first idea we come out based on the Softword technique is to construct a word segmenter using the lexicon and allow a character to have multiple segmentation labels. Take the sentence $s=\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $ as an example. If both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_3, c_4\rbrace $ match a word of the lexicon, then the segmentation label sequence of $s$ using the lexicon is $segs(s)=\lbrace \lbrace \text{B}\rbrace , \lbrace \text{M}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{E}\rbrace , \lbrace \text{O}\rbrace \rbrace $. Here, $segs(s)_1=\lbrace \text{B}\rbrace $ indicates that there is at least one sub-sequence of $s$ matching a word of the lexicon and beginning with $c_1$, $segs(s)_3=\lbrace \text{B}, \text{M}\rbrace $ means that there is at least one sub-sequence of $s$ matching the lexicon and beginning with $c_3$ and there is also at least one lexicon matched sub-sequence in the middle of which $c_3$ occurs, and $segs(s)_5=\lbrace \text{O}\rbrace $ means that there is no sub-sequence of $s$ that matches the lexicon and contains $c_5$. The character representation is then obtained by: where $\mathbf {e}^{seg}(segs(s)_j)$ is a 5-dimensional binary vector with each dimension corresponding to an item of $\lbrace \text{B, M, E, S, O\rbrace }$. We call this method as ExSoftword in the following. However, through the analysis of ExSoftword, we can find out that the ExSoftword method cannot fully inherit the two merits of Lattice-LSTM. 
Firstly, it cannot not introduce pre-trained word embeddings. Secondly, though it tries to keep all the lexicon matching results by allowing a character to have multiple segmentation labels, it still loses lots of information. In many cases, we cannot restore the matching results from the segmentation label sequence. Consider the case that in the sentence $s=\lbrace c_1, c_2, c_3, c_4\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match the lexicon. In this case, $segs(s) = \lbrace \lbrace \text{B}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{M}, \text{E}\rbrace , \lbrace \text{E}\rbrace \rbrace $. However, based on $segs(s)$ and $s$, we cannot say that it is $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ matching the lexicon since we will obtain the same segmentation label sequence when $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2,c_3\rbrace $ match the lexicon. To this end, we propose to preserving not only the possible segmentation labels of a character but also their corresponding matched words. Specifically, in this improved method, each character $c$ of a sentence $s$ corresponds to four word sets marked by the four segmentation labels “BMES". The word set $\rm {B}(c)$ consists of all lexicon matched words on $s$ that begin with $c$. Similarly, $\rm {M}(c)$ consists of all lexicon matched words in the middle of which $c$ occurs, $\rm {E}(c)$ consists of all lexicon matched words that end with $c$, and $\rm {S}(c)$ is the single-character word comprised of $c$. And if a word set is empty, we will add a special word “NONE" to it to indicate this situation. Consider the sentence $s=\lbrace c_1, \cdots , c_5\rbrace $ and suppose that $\lbrace c_1, c_2\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $, $\lbrace c_2, c_3, c_4\rbrace $, and $\lbrace c_2, c_3, c_4, c_5\rbrace $ match the lexicon. Then, for $c_2$, $\rm {B}(c_2)=\lbrace \lbrace c_2, c_3, c_4\rbrace , \lbrace c_2, c_3, c_4, c_5\rbrace \rbrace $, $\rm {M}(c_2)=\lbrace \lbrace c_1, c_2, c_3\rbrace \rbrace $, $\rm {E}(c_2)=\lbrace \lbrace c_1, c_2\rbrace \rbrace $, and $\rm {S}(c_2)=\lbrace NONE\rbrace $. In this way, we can now introduce the pre-trained word embeddings and moreover, we can exactly restore the matching results from the word sets of each character. The next step of the improved method is to condense the four word sets of each character into a fixed-dimensional vector. In order to retain information as much as possible, we choose to concatenate the representations of the four word sets to represent them as a whole and add it to the character representation: Here, $\mathbf {v}^s$ denotes the function that maps a single word set to a dense vector. This also means that we should map each word set into a fixed-dimensional vector. To achieve this purpose, we first tried the mean-pooling algorithm to get the vector representation of a word set $\mathcal {S}$: Here, $\mathbf {e}^w$ denotes the word embedding lookup table. However, the empirical studies, as depicted in Table TABREF31, show that this algorithm performs not so well . Through the comparison with Lattice-LSTM, we find out that in Lattice-LSTM, it applies a dynamic attention algorithm to weigh each matched word related to a single character. Motivated by this practice, we propose to weighing the representation of each word in the word set to get the pooling representation of the word set. 
However, considering the computational efficiency, we do not want to apply a dynamical weighing algorithm, like attention, to get the weight of each word. With this in mind, we propose to using the frequency of the word as an indication of its weight. The basic idea beneath this algorithm is that the more times a character sequence occurs in the data, the more likely it is a word. Note that, the frequency of a word is a static value and can be obtained offline. This can greatly accelerate the calculation of the weight of each word (e.g., using a lookup table). Specifically, let $w_c$ denote the character sequence constituting $w$ and $z(w)$ denote the frequency of $w_c$ occurring in the statistic data set (in this work, we combine training and testing data of a task to construct the statistic data set. Of course, if we have unlabelled data for the task, we can take the unlabeled data as the statistic data set). Note that, we do not add the frequency of $w$ if $w_c$ is covered by that of another word of the lexicon in the sentence. For example, suppose that the lexicon contains both “南京 (Nanjing)" and “南京市 (Nanjing City)". Then, when counting word frequency on the sequence “南京市长江大桥", we will not add the frequency of “南京" since it is covered by “南京市" in the sequence. This can avoid the situation that the frequency of “南京" is definitely higher than “南京市". Finally, we get the weighted representation of the word set $\mathcal {S}$ by: where Here, we perform weight normalization on all words of the four word sets to allow them compete with each other across sets. Further, we have tried to introducing a smoothing to the weight of each word to increase the weights of infrequent words. Specifically, we add a constant $c$ into the frequency of each word and re-define $\mathbf {v}^s$ by: where We set $c$ to the value that there are 10% of training words occurring less than $c$ times within the statistic data set. In summary, our method mainly contains the following four steps. Firstly, we scan each input sentence with the word lexicon, obtaining the four 'BMES' word sets for each character of the sentence. Secondly, we look up the frequency of each word counted on the statistic data set. Thirdly, we obtain the vector representation of the four word sets of each character according to Eq. (DISPLAY_FORM22), and add it to the character representation according to Eq. (DISPLAY_FORM20). Finally, based on the augmented character representations, we perform sequence labeling using any appropriate neural sequence labeling model, like LSTM-based sequence modeling layer + CRF label inference layer. Experiments ::: Experiment Design Firstly, we performed a development study on our method with the LSTM-based sequence modeling layer, in order to compare the implementations of $\mathbf {v}^s$ and to determine whether or not to use character bigrams in our method. Decision made in this step will be applied to the following experiments. Secondly, we verified the computational efficiency of our method compared with Lattice-LSTM and LR-CNN BIBREF20, which is a followee of Lattice-LSTM for faster inference speed. Thirdly, we verified the effectiveness of our method by comparing its performance with that of Lattice-LSTM and other comparable models on four benchmark Chinese NER data sets. Finally, we verified the applicability of our method to different sequence labeling models. 
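Before moving to the experiments, the four steps above can be summarized in a short Python/NumPy sketch. It is an illustration under our own assumptions: the brute-force matching loop, the handling of the special word "NONE", and the exact normalization constant are simplifications (for instance, the subtlety of not counting a word covered by a longer matched word is omitted), and `emb` is assumed to map every lexicon word, including "NONE", to a pre-trained vector.

```python
import numpy as np

def bmes_word_sets(sentence, lexicon):
    """For each character, collect lexicon-matched words into B/M/E/S sets.

    sentence: list of characters; lexicon: set of word strings.
    Brute-force matching for clarity; a trie would be used in practice.
    """
    n = len(sentence)
    sets = [{"B": set(), "M": set(), "E": set(), "S": set()} for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            word = "".join(sentence[i:j + 1])
            if word not in lexicon:
                continue
            if i == j:
                sets[i]["S"].add(word)
            else:
                sets[i]["B"].add(word)
                sets[j]["E"].add(word)
                for k in range(i + 1, j):
                    sets[k]["M"].add(word)
    for s in sets:                      # empty sets receive the special word "NONE"
        for key in "BMES":
            if not s[key]:
                s[key].add("NONE")
    return sets

def weighted_set_vector(word_set, freq, emb, total_weight):
    """Frequency-weighted pooling of one word set (sketch of Eq. DISPLAY_FORM22)."""
    vec = sum(freq.get(w, 1) * emb[w] for w in word_set)
    return vec / total_weight

def augment_char(char_vec, sets_for_char, freq, emb):
    """Concatenate the character vector with the pooled B, M, E, S vectors."""
    all_words = set().union(*sets_for_char.values())
    Z = sum(freq.get(w, 1) for w in all_words)   # normalize across the four sets
    pooled = [weighted_set_vector(sets_for_char[k], freq, emb, Z) for k in "BMES"]
    return np.concatenate([char_vec] + pooled)
```

In practice, the word frequencies $z(w)$ are counted offline and stored in a lookup table, so the pooling adds only a weighted sum of embeddings per character on top of the standard character representation.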
Experiments ::: Experiment Setup Most experimental settings in this work follow the protocols of Lattice-LSTM BIBREF0, including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-completed, we concisely illustrate some primary settings of this work. Experiments ::: Experiment Setup ::: Datasets The methods were evaluated on four Chinese NER datasets, including OntoNotes BIBREF21, MSRA BIBREF22, Weibo NER BIBREF23, BIBREF24, and Resume NER BIBREF0. OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data. For OntoNotes, gold segmentation is also available for development and testing data. Weibo NER and Resume NER are from social media and resume, respectively. There is no gold standard segmentation in these two datasets. Table TABREF26 shows statistic information of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words. Experiments ::: Experiment Setup ::: Implementation Detail When applying the LSTM-based sequence modeling layer, we followed most implementation protocols of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and LSTM layer number. The hidden size was set to 100 for Weibo and 256 for the rest three datasets. The learning rate was set to 0.005 for Weibo and Resume and 0.0015 for OntoNotes and MSRA with Adamax BIBREF25. When applying the CNN- and transformer- based sequence modeling layers, most hyper-parameters were the same as those used in the LSTM-based model. In addition, the layer number $L$ for the CNN-based model was set to 4, and that for transformer-based model was set to 2 with h=4 parallel attention layers. Kernel number $k_f$ of the CNN-based model was set to 512 for MSRA and 128 for the other datasets in all layers. Experiments ::: Development Experiments In this experiment, we compared the implementations of $\mathbf {v}^s$ with the LSTM-based sequence modeling layer. In addition, we study whether or not character bigrams can bring improvement to our method. Table TABREF31 shows performance of three implementations of $\mathbf {v}^s$ without using character bigrams. From the table, we can see that the weighted pooling algorithm performs generally better than the other two implementations. Of course, we may obtain better results with the smoothed weighted pooling algorithm by reducing the value of $c$ (when $c=0$, it is equivalent to the weighted pooling algorithm). We did not do so for two reasons. The first one is to guarantee the generality of our system for unexplored tasks. The second one is that the performance of the weighted pooling algorithm is good enough compared with other state-of-the-art baselines. Therefore, in the following experiments, we in default applied the weighted pooling algorithm to implement $\mathbf {v}^s$. Figure FIGREF32 shows the F1-score of our method against the number of training iterations when using character bigram or not. From the figure, we can see that additionally introducing character bigrams cannot bring considerable improvement to our method. A possible explanation of this phenomenon is that the introduced word information by our proposed method has covered the bichar information. Therefore, in the following experiments, we did not use bichar in our method. 
Across the 4 datasets, the best-performing proposed model (CNN-based) achieved an average 363% improvement in inference speed over the state-of-the-art method (LR-CNN)
33f72c8da22dd7d1378d004cbd8d2dcd814a5291
33f72c8da22dd7d1378d004cbd8d2dcd814a5291_0
Q: What is the metric that is measures in this paper? Text: Introduction Current speech and language technologies based on Deep Neural Networks (DNNs) BIBREF0 require large quantities of transcribed data and additional linguistic resources (phonetic dictionary, transcribed data). Yet, for many languages in the world, such resources are not available and gathering them would be very difficult due to a lack of stable and widespread orthography BIBREF1 . The goal of Zero-resource technologies is to build speech and language systems in an unknown language by using only raw speech data BIBREF2 . The Zero Resource challenges (2015 and 2017) focused on discovering invariant sub-word representations (Track 1) and audio terms (Track 2) in an unsupervised fashion. Several teams have proposed to use terms discovered in Track 2 to provide DNNs with pairs of same versus different words as a form of weak or self supervision for Track 1: correspondence auto-encoders BIBREF3 , BIBREF4 , siamese networks BIBREF5 , BIBREF6 . This paper extends and complements the ABnet Siamese network architecture proposed by BIBREF7 , BIBREF5 for the sub-word modelling task. DNN contributions typically focus on novel architectures or objective functions. Here, we study an often overlooked component of Siamese networks: the sampling procedure which chooses the set of pairs of same versus different tokens. To assess how each parameter contributes to the algorithm performance, we conduct a comprehensive set of experiments with a large range of variations in one parameter, holding constant the quantity of available data and the other parameters. We find that frequency compression of the word types has a particularly important effect. This is congruent with other frequency compression techniques used in NLP, for instance in the computation of word embeddings (word2vec BIBREF8 ). Besides, Levy et al. BIBREF9 reveals that the performance differences between word-embedding algorithms are due more to the choice of the hyper-parameters, than to the embedding algorithms themselves. In this study, we first show that, using gold word-level annotations on the Buckeye corpus, a flattened frequency range gives the best results on phonetic learning in a Siamese network. Then, we show that the hyper-parameters that worked best with gold annotations yield improvements in the zero-resource scenario (unsupervised pairs) as well. Specifically, they improve on the state-of-the-art obtained with siamese and auto-encoder architectures. Methods We developed a new package abnet3 using the pytorch framework BIBREF10 . The code is open-sourced (BSD 3-clause) and available on github, as is the code for the experiments for this paper. Data preparation For the weakly-supervised study, we use 4 subsets of the Buckeye BIBREF11 dataset from the ZeroSpeech 2015 challenge BIBREF2 with, respectively, 1%, 10%, 50%, and 100% of the original data (see Table 1 ). The original dataset is composed of American English casual conversations recorded in the laboratory, with no overlap, no speech noises, separated in two splits: 12 speakers for training and 2 speakers for test. A Voice Activity Detection file indicates the onset and offset of each utterance and enables to discard silence portions of each file. We use the orthographic transcription from word-level annotations to determine same and different pairs to train the siamese networks. 
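To illustrate how such pairs can be derived from the word-level annotations, the sketch below enumerates same-type token pairs and samples different-type ones. It is a hedged illustration only: the token format, names, and sampling choices are ours, and the actual pipeline additionally DTW-aligns same pairs at the frame level, as described below.

```python
import random
from itertools import combinations

def make_pairs(tokens, n_different):
    """Build 'same' and 'different' word-token pairs from word-level annotations.

    tokens: list of (word_type, speaker_id, start_time, end_time) tuples.
    Returns (same_pairs, different_pairs); assumes the corpus contains at
    least two distinct word types.
    """
    by_type = {}
    for tok in tokens:
        by_type.setdefault(tok[0], []).append(tok)

    same_pairs = []
    for word_type, toks in by_type.items():
        same_pairs.extend(combinations(toks, 2))   # every pair of same-type tokens

    different_pairs = []
    while len(different_pairs) < n_different:
        a, b = random.sample(tokens, 2)
        if a[0] != b[0]:
            different_pairs.append((a, b))
    return same_pairs, different_pairs
```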
In the fully unsupervised setting, we obtain pairs of same and different words from the Track 2 baseline of the 2015 ZeroSpeech challenge BIBREF2 : the Spoken Term Discovery system from BIBREF12 . We use both the original files from the baseline, and a rerun of the algorithm with systematic variations on its similarity threshold parameter. For the speech signal pre-processing, frames are taken every 10ms and each one is encoded by a 40 log-energy Mel-scale filterbank representing 25ms of speech (Hamming windowed), without deltas or delta-delta coefficients. The input to the Siamese network is a stack of 7 successive filterbank frames. The features are mean-variance normalized per file, using the VAD information. ABnet A Siamese network is a type of neural network architecture that is used for representation learning, initially introduced for signature verification BIBREF13 . It contains 2 subnetworks sharing the same architecture and weights. In our case, to obtain the training information, we use the lexicon of words to learn an embedding of speech sounds which is more representative of the linguistic properties of the signal at the sub-word level (phoneme structure) and invariant to non-linguistic ones (speaker ID, channel, etc). A token $t$ is from a specific word type $w$ (ex: “the”,“process” etc.) pronounced by a specific speaker $s$ . The input to the network during training is a pair of stacked frames of filterbank features $x_1$ and $x_2$ and we use as label $y = {1}(\lbrace w_1 = w_2\rbrace )$ . For pairs of identical words, we realign them at the frame level using the Dynamic Time Warping (DTW) algorithm BIBREF14 . Based on the alignment paths from the DTW algorithm, the sequences of the stacked frames are then presented as the entries of the siamese network. Dissimilar pairs are aligned along the shortest word, e.g. the longest word is trimmed. With these notions of similarity, we can learn a representation where the distance between the two outputs of the siamese network $e(x_1)$ and $e(x_2)$ try to respect as much as possible the local constraints between $x_1$ and $x_2$ . To do so, ABnet is trained with the margin cosine loss function: $w$0 For a clear and fair comparison between the sampling procedures we fixed the network architecture and loss function as in BIBREF5 . The subnetwork is composed of 2 hidden layers with 500 units, with the Sigmoid as non-linearity and a final embedding layer of 100 units. For regularization, we use the Batch Normalization technique BIBREF15 , with a loss margin $\gamma =0.5$ . All the experiments are carried using the Adam training procedure BIBREF16 and early-stopping on a held-out validation set of $30\%$ of spoken words. We sample the validation set in the same way as the training set. Sampling The sampling strategy refers to the way pairs of tokens are fed to the Siamese network. Sampling every possible pairs of tokens becomes quickly intractable as the dataset grows (cf. Table 1 ). There are four different possible configurations for a pair of word tokens $(t_1,t_2) $ : whether, or not, the tokens are from the same word type, $w_1 = w_2$ . and whether, or not, the tokens are pronounced by the same speaker, $s_1 = s_2$ . Each specific word type $w$ is characterized by the total number of occurrences $n_w$ it has been spoken in the whole corpus. Then, is deduced the frequency of appearances $f_w \propto n_w$ , and $r_w$ its frequency rank in the given corpus. 
We want to sample a pair of word tokens, in our framework we sample independently these 2 tokens. We define the probability to sample a specific token word type $w$ as a function of $n_w$ . We introduce the function $\phi $ as the sampling compression function: $$\mathbb {P}(w) = \frac{\phi (n_w)}{\sum \limits _{\forall w^{\prime }}\phi (n_{w^{\prime }})}$$ (Eq. 7) When a specific word type $w$ is selected according to these probabilities, a token $t$ is selected randomly from the specific word type $w$ . The usual strategy to select pairs to train siamese networks is to randomly pick two tokens from the whole list of training tokens examples BIBREF13 , BIBREF17 , BIBREF5 . In this framework, the sampling function corresponds $\phi : n \rightarrow n$ . Yet, there is a puzzling phenomenon in human language, there exists an empirical law for the distribution of words, also known as the Zipf's law BIBREF18 . Words types appear following a power law relationship between the frequency $f_w$ and the corresponding rank $r_w$ : a few very high-frequency types account for almost all tokens in a natural corpus (most of them are function words such as “the”,“a”,“it”, etc.) and there are many word types with a low frequency of appearances (“magret”,“duck”,“hectagon”). The frequency $f_t$ of type $t$ scales with its corresponding $r_t$ following a power law, with a parameter $\alpha $ depending on the language: $t$0 One main effect on the training is the oversampling of word types with high frequency, and this is accentuated with the sampling of two tokens for the siamese. These frequent, usually monosyllabic, word types do not carry the necessary phonetic diversity to learn an embedding robust to rarer co-articulations, and rarer phones. To study and minimize this empirical linguistic trend, we will examine 4 other possibilities for the $\phi $ function that compress the word frequency type: : n [2]n, : n [3]n : n (1+n), : n 1 The first two options minimize the effect of the Zipf's Law on the frequency, but the power law is kept. The $\log $ option removes the power law distribution, yet it keeps a linear weighting as a function of the rank of the types. Finally with the last configuration, the word types are sampled uniformly. Another important variation factor in speech realizations is the speaker identity. We expect that the learning of speech representations to take advantage of word pairs from different speakers, to generalize better to new ones, and improve the ABX performance. $ P^s_{-} = \frac{\# \text{Sampled pairs pronounced by different speakers}}{\# \text{Sampled pairs}} $ Given the natural statistics of the dataset, the number of possible "different" pairs exceeds by a large margin the number of possible "same" pairs ( $\sim 1\%$ of all token pairs for the Buckeye-100%). The siamese loss is such that "Same" pairs are brought together in embedding space, and "Different" pairs are pulled apart. Should we reflect this statistic during the training, or eliminate it by presenting same and different pairs equally? We manipulate systematically the proportion of pairs from different word types fed to the network: $ P^w_{-} = \frac{\# \text{Sampled pairs with non-matching word types}}{\# \text{Sampled pairs}} $ Evaluation with ABX tasks To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 . It only requires to define a dissimilarity function $d$ between speech tokens, no external training algorithm is needed. 
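Before turning to the ABX evaluation, the sampling procedure described above — a compression function $\phi$ applied to word-type counts and a controlled proportion of non-matching word types — can be made concrete with the following sketch. It is a minimal illustration under our own simplifying assumptions: function names are ours, speaker-proportion control ($P^s_{-}$) would be handled analogously and is omitted, and the abnet3 implementation may differ.

```python
import math
import random

PHI = {                      # candidate compression functions phi(n)
    "raw":     lambda n: n,
    "sqrt":    lambda n: n ** 0.5,
    "cbrt":    lambda n: n ** (1.0 / 3.0),
    "log":     lambda n: math.log(1 + n),
    "uniform": lambda n: 1.0,
}

def sample_word_type(counts, phi):
    """Sample a word type w with probability phi(n_w) / sum_w' phi(n_w')."""
    types = list(counts)
    weights = [phi(counts[w]) for w in types]
    return random.choices(types, weights=weights, k=1)[0]

def sample_pair(tokens_by_type, counts, phi=PHI["uniform"], p_diff_word=0.7):
    """Draw one training pair of word tokens (the paper finds P^w_- of 0.7-0.8 optimal)."""
    w1 = sample_word_type(counts, phi)
    if random.random() < p_diff_word:          # "different" pair: resample until types differ
        w2 = sample_word_type(counts, phi)
        while w2 == w1:
            w2 = sample_word_type(counts, phi)
    else:                                      # "same" pair
        w2 = w1
    t1 = random.choice(tokens_by_type[w1])
    t2 = random.choice(tokens_by_type[w2])
    return t1, t2, int(w1 == w2)               # label y = 1 iff word types match
```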
We define the ABX-discriminability of category $x$ from category $y$ as the probability that $A$ and $X$ are further apart than $B$ and $X$ when $A$ and $X$ are from category $x$ and $B$ is from category $y$, according to a dissimilarity function $d$. Here, we focus on phone triplet minimal pairs: sequences of 3 phonemes that differ only in the central one ("beg"-"bag", "api"-"ati", etc.). For the within-speaker task, all the phone triplets belong to the same speaker (i.e., $A$, $B$, and $X$ are all uttered by the same speaker). Finally, the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $A$ and $B$ belong to the same speaker, and $X$ to a different one. The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts and all pairs of central phones and converted to an error rate. Weakly supervised Learning We first analyze the results for the sampling compression function $\phi$ (Figure 1). For all training datasets, we observe a similar pattern on both tasks: compressing the word frequency improves learning and generalization. The results show that, compared to the raw filterbank features baseline, all the trained ABnet networks improve the scores on the phoneme discrimination tasks, even in the $1\%$ scenario. Yet, the improvement with the usual sampling scenario $\phi : n \rightarrow n$ is small on all 4 training datasets. The optimal function for both the within- and across-speaker tasks, in all training configurations, is the uniform function $\phi : n \rightarrow 1$. It yields substantial improvements over the raw filterbanks on the across-speaker ABX task ($5.6$ absolute points and $16.8\%$ relative improvement for the $1\%$-Buckeye training). Adding data improves the performance of the network, but not substantially: the improvement from $1\%$-Buckeye to $100\%$-Buckeye for $\phi : n \rightarrow 1$ is modest in both absolute and relative terms. These results show that frequency compression is clearly beneficial; surprisingly, adding more data is still advantageous, but matters less than the choice of $\phi$. Renshaw et al. BIBREF4 found similar results with a correspondence auto-encoder: training with more data did not yield improvements for their system. We now look at the effect on ABX performance of the proportion of pairs of words pronounced by two different speakers (Figure 2). We start from our best sampling function configuration so far, $\phi : n \rightarrow 1$, and report only the two extreme training settings on the graph. The variations for the 4 different training splits are similar, and still show a positive effect of additional data on the siamese network performance. Counter-intuitively, performance on the ABX tasks does not benefit from pairs from different speakers. It even shows a tendency to increase the ABX error rate: for the $100\%$-Buckeye training, we observe an increase in the ABX error rate (2.9 points and $11.6\%$ relative) between $P_{-}^s=0$ and $P_{-}^s=1$. One hypothesis for this surprising effect is the poor performance of the DTW alignment algorithm applied directly to raw filterbank features of tokens from 2 different speakers. We next study the influence of the proportion of pairs from different word types $P^w_{-}$ (Figure 3).
In all training scenarios, to privilege either only the positive or the negative examples is not the solution. For the different training splits, the optimal number for $P_{-}^w$ is either $0.7$ or $0.8$ in the within and across speaker ABX task. We do not observe a symmetric influence of the positive and negative examples, but it is necessary to keep the same and different pairs. The results collapsed, if the siamese network is provided only with positive labels to match: the network will tend to map all speech tokens to the same vector point and the discriminability is at chance level. Applications to fully unsupervised setting Now, we transfer the findings about sampling from the weakly supervised setting, to the fully unsupervised setting. We report in Table 2 our results for the two ZeroSpeech 2015 BIBREF2 corpus: the same subset of the Buckeye Corpus as earlier and a subset of the NCHLT corpus of Xitsonga BIBREF21 . To train our siamese networks, we use as BIBREF5 , the top-down information from the baseline for the Track 2 (Spoken Term Discovery) of the ZeroSpeech 2015 challenge from BIBREF12 . The resulting clusters are not perfect, whereas we had perfect clusters in our previous analysis. In Thiolière et al. BIBREF5 the sampling is done with : $P^w_{-} = P^s_{-} = 0.5$ , and $\phi = n \rightarrow n$ . This gives us a baseline to compare our sampling method improvements with our own implementation of siamese networks. First, the “discovered” clusters – obtained from spoken term discovery system – don't follow the Zipf's law like the gold clusters. This difference of distributions diminishes the impact of the sampling compression function $\phi $ . We matched state-of-the-art for this challenge only on the ABX task within-speaker for the Buckeye, otherwise the modified DPGMM algorithm proposed by Heck et al. stays the best submissions for the 2015 ZeroSpeech challenge. Finally, we study the influence of the DTW-threshold $\delta $ used in the spoken discovery system on the phonetic discriminability of siamese networks. We start again from our best finding from weakly supervised learning. The clusters found by the Jansen et al. BIBREF12 system are very sensitive to this parameter with a trade-off between the Coverage and the Normalized Edit Distance (NED) introduced by BIBREF24 . We find that ABnet is getting good results across the various outputs of the STD system shown in Table 3 and improves over the filterbanks results in all cases. Obtaining more data with the STD system involves a loss in words quality. In contrast with the weakly supervised setting, there is an optimal trade-off between the amount and quality of discovered words for the sub-word modelling task with siamese networks. Conclusions and Future work We presented a systematic study of the sampling component in siamese networks. In the weakly-supervised setting, we established that the word frequency compression had an important impact on the discriminability performances. We also found that optimal proportions of pairs with different types and speakers are not the ones usually used in siamese networks. We transferred the best parameters to the unsupervised setting to compare our results to the 2015 Zero Resource challenge submissions. It lead to improvements over the previous neural networks architectures, yet the Gaussian mixture methods (DPGMM) remain the state-of-the-art in the phonetic discriminability task. 
In the future, we will study in the same systematic way the influence of sampling in the fully unsupervised setting. We will then try to leverage the better discriminability of our representations obtained with ABnet to improve the spoken term discovery, which relies on frame-level discrimination to find pairs of similar words. Besides, power law distributions are endemic in natural language tasks. It would be interesting to extend this principle to other tasks (for instance, language modeling). Acknowledgements The team's project is funded by the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL* ), Almerys (industrial chair Data Science and Security), Facebook AI Research (Doctoral research contract), Microsoft Research (joint MSR-INRIA center) and a Google Award Grant.
error rate in a minimal pair ABX discrimination task
4e2e19a58e1f2cc5a7b1bc666c1577922454d8c8
4e2e19a58e1f2cc5a7b1bc666c1577922454d8c8_0
Q: Do they only test on one dataset? Text: Introduction Attentional sequence-to-sequence models have become the new standard for machine translation over the last two years, and with the unprecedented improvements in translation accuracy comes a new set of technical challenges. One of the biggest challenges is the high training and decoding costs of these neural machine translation (NMT) system, which is often at least an order of magnitude higher than a phrase-based system trained on the same data. For instance, phrasal MT systems were able achieve single-threaded decoding speeds of 100-500 words/sec on decade-old CPUs BIBREF0 , while BIBREF1 reported single-threaded decoding speeds of 8-10 words/sec on a shallow NMT system. BIBREF2 was able to reach CPU decoding speeds of 100 words/sec for a deep model, but used 44 CPU cores to do so. There has been recent work in speeding up decoding by reducing the search space BIBREF3 , but little in computational improvements. In this work, we consider a production scenario which requires low-latency, high-throughput NMT decoding. We focus on CPU-based decoders, since GPU/FPGA/ASIC-based decoders require specialized hardware deployment and logistical constraints such as batch processing. Efficient CPU decoders can also be used for on-device mobile translation. We focus on single-threaded decoding and single-sentence processing, since multiple threads can be used to reduce latency but not total throughput. We approach this problem from two angles: In Section "Decoder Speed Improvements" , we describe a number of techniques for improving the speed of the decoder, and obtain a 4.4x speedup over a highly efficient baseline. These speedups do not affect decoding results, so they can be applied universally. In Section "Model Improvements" , we describe a simple but powerful network architecture which uses a single RNN (GRU/LSTM) layer at the bottom with a large number of fully-connected (FC) layers on top, and obtains improvements similar to a deep RNN model at a fraction of the training and decoding cost. Data Set The data set we evaluate on in this work is WMT English-French NewsTest2014, which has 380M words of parallel training data and a 3003 sentence test set. The NewsTest2013 set is used for validation. In order to compare our architecture to past work, we train a word-based system without any data augmentation techniques. The network architecture is very similar to BIBREF4 , and specific details of layer size/depth are provided in subsequent sections. We use an 80k source/target vocab and perform standard unk-replacement BIBREF1 on out-of-vocabulary words. Training is performed using an in-house toolkit. Baseline Decoder Our baseline decoder is a standard beam search decoder BIBREF5 with several straightforward performance optimizations: Decoder Speed Improvements This section describes a number of speedups that can be made to a CPU-based attentional sequence-to-sequence beam decoder. Crucially, none of these speedups affect the actual mathematical computation of the decoder, so they can be applied to any network architecture with a guarantee that they will not affect the results. The model used here is similar to the original implementation of BIBREF4 . 
The exact target GRU equation is: $ d_{ij} & = & {\rm tanh}(W_a{h_{i-1}} + V_a{x_i}){\cdot }{\rm tanh}(U_as_j) \\ \alpha _{ij} & = & \frac{e^{d_{ij}}}{\sum _{j^{\prime }}e^{d_{ij^{\prime }}}} \\ c_{i} &=& \sum _{j} \alpha _{ij}s_j \\ u_i & = & \sigma (W_u{h_{i-1}} + V_u{x_i} + U_u{c_i} + b_u) \\ r_i & = & \sigma (W_r{h_{i-1}} + V_r{x_i} + U_r{c_i} + b_r) \\ \hat{h}_i & = & \sigma (r_i{\odot }(W_h{h_{i-1}}) + V_h{x_i} + U_h{c_i} + b_h) \\ h_i & = & u_ih_{i-1} + (1 - u_i)\hat{h}_i $ Where $W_*$ , $V_*$ , $U_*$ , $b_*$ are learned parameters, $s_j$ is the hidden vector of the $j^{\rm th}$ source word, $h_{i-1}$ is the previous target recurrent vector, $x_i$ is the target input (e.g., embedding of previous word). We also denote the various hyperparameters: $b$ for the beam size, $r$ for the recurrent hidden size, $e$ is the embedding size, $|S|$ for the source sentence length, and $|T|$ for the target sentence length, $|E|$ is the vocab size. 16-Bit Matrix Multiplication Although CPU-based matrix multiplication libraries are highly optimized, they typically only operate on 32/64-bit floats, even though DNNs can almost always operate on much lower precision without degredation of accuracy BIBREF7 . However, low-precision math (1-bit to 7-bit) is difficult to implement efficiently on the CPU, and even 8-bit math has limited support in terms of vectorized (SIMD) instruction sets. Here, we use 16-bit fixed-point integer math, since it has first-class SIMD support and requires minimal changes to training. Training is still performed with 32-bit floats, but we clip the weights to the range [-1.0, 1.0] the relu activation to [0.0, 10.0] to ensure that all values fit into 16-bits with high precision. A reference implementation of 16-bit multiplication in C++/SSE2 is provided in the supplementary material, with a thorough description of low-level details. A comparison between our 16-bit integer implementation and Intel MKL's 32-bit floating point multiplication is given in Figure 1 . We can see that 16-bit multiplication is 2x-3x faster than 32-bit multiplication for batch sizes between 2 and 8, which is the typical range of the beam size $b$ . We are able to achieve greater than a 2x speedup in certain cases because we pre-process the weight matrix offline to have optimal memory layout, which is a capability BLAS libraries do not have. Pre-Compute Embeddings In the first hidden layer on the source and target sides, $x_i$ corresponds to word embeddings. Since this is a closed set of values that are fixed after training, the vectors $V{x_i}$ can be pre-computed BIBREF8 for each word in the vocabulary and stored in a lookup table. This can only be applied to the first hidden layer. Pre-computation does increase the memory cost of the model, since we must store $r \times 3$ floats per word instead of $e$ . However, if we only compute the $k$ most frequently words (e.g., $k = 8,000$ ), this reduces the pre-computation memory by 90% but still results in 95%+ token coverage due to the Zipfian distribution of language. Pre-Compute Attention The attention context computation in the GRU can be re-factored as follows: $U{c_i} = U(\sum _j \alpha _{ij}s_j) = \sum _j \alpha _{ij}(Us_j)$ Crucially, the hidden vector representation $s_j$ is only dependent on the source sentence, while $a_{ij}$ is dependent on the target hypothesis. Therefore, the original computation $U{c_i}$ requires total $|T| \times b$ multiplications per sentence, but the re-factored version $Us_j$ only requires total $|S|$ multiplications. 
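A small NumPy sketch of this refactoring is given below; shapes and names are our own, and batching and beam handling are ignored for clarity.

```python
import numpy as np

def precompute_source_projection(U, S):
    """Compute U s_j once per source sentence.

    S: (|S|, r) matrix of source hidden vectors s_j; U: (r, r) weight matrix.
    Returns a (|S|, r) matrix with one projected vector per source word.
    """
    return S @ U.T

def attention_context_projection(alpha, US):
    """U c_i = sum_j alpha_ij (U s_j), reusing the pre-computed projections.

    alpha: (|S|,) attention weights for the current target step.
    US:    (|S|, r) pre-computed U s_j.
    """
    return alpha @ US        # (r,), no per-step multiplication by U
```

Since $Us_j$ depends only on the source sentence, it is computed once per sentence and reused across all target timesteps and beam hypotheses.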
The expectation over $\alpha $ must still be computed at each target timestep, but this is much less expensive than the multiplication by $U$ . SSE & Lookup Tables For the element-wise vector functions use in the GRU, we can use vectorized instructions (SSE/AVX) for the add and multiply functions, and lookup tables for sigmoid and tanh. Reference implementations in C++ are provided in the supplementary material. Merge Recurrent States In the GRU equation, for the first target hidden layer, $x_i$ represents the previously generated word, and $h_{i-1}$ encodes the hypothesis up to two words before the current word. Therefore, if two partial hypotheses in the beam only differ by the last emitted word, their $h_{i-1}$ vectors will be identical. Thus, we can perform matrix multiplication $Wh_{i-1}$ only on the unique $h_{i-1}$ vectors in the beam at each target timestep. For a beam size of $b = 6$ , we measured that the ratio of unique $h_{i-1}$ compared to total $h_{i-1}$ is approximately 70%, averaged over several language pairs. This can only be applied to the first target hidden layer. Speedup Results Cumulative results from each of the preceding speedups are presented in Table 1 , measured on WMT English-French NewsTest2014. The NMT architecture evaluated here uses 3-layer 512-dimensional bidirectional GRU for the source, and a 1-layer 1024-dimensional attentional GRU for the target. Each sentence is decoded independently with a beam of 6. Since these speedups are all mathematical identities excluding quantization noise, all outputs achieve 36.2 BLEU and are 99.9%+ identical. The largest improvement is from 16-bit matrix multiplication, but all speedups contribute a significant amount. Overall, we are able to achieve a 4.4x speedup over a fast baseline decoder. Although the absolute speed is impressive, the model only uses one target layer and is several BLEU behind the SOTA, so the next goal is to maximize model accuracy while still achieving speeds greater than some target, such as 100 words/sec. Model Improvements In NMT, like in many other deep learning tasks, accuracy can be greatly improved by adding more hidden layers, but training and decoding time increase significantly BIBREF11 , BIBREF12 , BIBREF2 . Several past works have noted that convolutional neural networks (CNNs) are significantly less expensive than RNNs, and replaced the source and/or target side with a CNN-based architecture BIBREF13 , BIBREF14 . However, these works have found it is difficult to replace the target side of the model with CNN layers while maintaining high accuracy. The use of a recurrent target is especially important to track attentional coverage and ensure fluency. Here, we propose a mixed model which uses an RNN layer at the bottom to both capture full-sentence context and perform attention, followed by a series of fully-connected (FC) layers applied on top at each timestep. The FC layers can be interpreted as a CNN without overlapping stride. Since each FC layer consists of a single matrix multiplication, it is $1/6^{\rm th}$ the cost of a GRU (or $1/8^{\rm th}$ an LSTM). Additionally, several of the speedups from Section "Decoder Speed Improvements" can only be applied to the first layer, so there is strong incentive to only use a single target RNN. To avoid vanishing gradients, we use ResNet-style skip connections BIBREF15 . These allow very deep models to be trained from scratch and do not require any additional matrix multiplications, unlike highway networks BIBREF16 . 
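A small numpy sketch of the state-merging trick described above, added here for illustration: hypotheses in the beam that share the same previous hidden state $h_{i-1}$ share a single matrix multiplication, and the results are scattered back to the full beam. The deduplication via `np.unique` is a simplification of what an optimized decoder would do.

```python
import numpy as np

def merged_state_matmul(W, H_prev):
    """Compute W @ h for every beam hypothesis, but only once per unique h.

    H_prev: (beam, r) matrix of previous target hidden states; hypotheses that
    differ only in their last emitted word share the same row."""
    uniq, inverse = np.unique(H_prev, axis=0, return_inverse=True)
    projected = uniq @ W.T               # one multiplication per unique state
    return projected[inverse]            # scatter results back to the full beam

rng = np.random.default_rng(1)
W = rng.standard_normal((1024, 1024))
base = rng.standard_normal((4, 1024))
H_prev = base[[0, 0, 1, 2, 2, 3]]        # beam of 6 with only 4 unique states (~70% ratio)
assert np.allclose(merged_state_matmul(W, H_prev), H_prev @ W.T)
```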
With 5 intermediate FC layers, target timestep $i$ is computed as: $ h^B_{i} &=& {\rm AttGRU}(h^B_{i-1}, x_i, S) \\ h^1_{i} &=& {\rm relu}(W^1h^B_i) \\ h^2_{i} &=& {\rm relu}(W^2h^1_i) \\ h^3_{i} &=& {\rm relu}(W^3h^2_i + h^1_i) \\ h^4_{i} &=& {\rm relu}(W^4h^3_i) \\ h^5_{i} &=& {\rm relu}(W^5h^4_i + h^3_i) \\ h^T_{i} &=& {\rm tanh}(W^Th^5_i)\ {\rm {\bf or}}\ {\rm GRU}(h^T_{i-1}, h^5_{i}) \\ y_i &=& {\rm softmax}(Vh^T_{i}) $ We follow BIBREF15 and only use skip connections on every other FC layer, but do not use batch normalization. The same pattern can be used for more FC layers, and the FC layers can be a different size than the bottom or top hidden layers. The top hidden layer can be an RNN or an FC layer. It is important to use relu activations (opposed to tanh) for ResNet-style skip connections. The GRUs still use tanh. Model Results Results using the mixed RNN+FC architecture are shown in Table 2 , using all speedups. We have found that the benefit of using RNN+FC layers on the source is minimal, so we only perform ablation on the target. For the source, we use a 3-layer 512-dim bidi GRU in all models (S1)-(S6). Model (S1) and (S2) are one and two layer baselines. Model (S4), which uses 7 intermediate FC layers, has similar decoding cost to (S2) while doubling the improvement over (S1) to 1.2 BLEU. We see minimal benefit from using a GRU on the top layer (S5) or using more FC layers (S6). In (E1) and (E2) we present 2 and 3 model ensembles of (S4), trained from scratch with different random seeds. We can see that the 2-model ensemble improves results by 0.9 BLEU, but the 3-model ensemble has little additional improvment. Although not presented here, we have found these improvement from decoder speedups and RNN+FC to be consistent across many language pairs. All together, we were able to achieve a BLEU score of 38.3 while decoding at 100 words/sec on a single CPU core. As a point of comparison, BIBREF2 achieves similar BLEU scores on this test set (37.9 to 38.9) and reports a CPU decoding speed of 100 words/sec (0.2226 sents/sec), but parallelizes this decoding across 44 CPU cores. System (S7), which is our re-implementation of BIBREF2 , decodes at 28 words/sec on one CPU core, using all of the speedups described in Section "Decoder Speed Improvements" . BIBREF12 has a similar computational cost to (S7), but we were not able to replicate those results in terms of accuracy. Although we are comparing an ensemble to a single model, we can see ensemble (E1) is over 3x faster to decode than the single model (S7). Additionally, we have found that model (S4) is roughly 3x faster to train than (S7) using the same GPU resources, so (E1) is also 1.5x faster to train than a single model (S7).
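For illustration, a numpy forward pass of one target timestep of the RNN+FC stack defined by the equations above: relu FC layers with ResNet-style skips on every other layer and an FC top layer. The bottom attentional GRU is abstracted as a given hidden vector, and the dimensions are reduced from the paper's sizes to keep the demo cheap.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fc_stack_timestep(h_B, Ws, W_T, V):
    """One target timestep with 5 intermediate FC layers and skip connections
    on every other layer, mirroring the equations above."""
    h1 = relu(Ws[0] @ h_B)
    h2 = relu(Ws[1] @ h1)
    h3 = relu(Ws[2] @ h2 + h1)           # skip connection, no extra matmul
    h4 = relu(Ws[3] @ h3)
    h5 = relu(Ws[4] @ h4 + h3)
    h_T = np.tanh(W_T @ h5)              # top layer as an FC (the GRU variant is analogous)
    return V @ h_T                       # logits; softmax over the vocab applied later

rng = np.random.default_rng(2)
r, vocab = 256, 8000                     # reduced sizes for the demo
Ws = [rng.standard_normal((r, r)) * 0.02 for _ in range(5)]
W_T = rng.standard_normal((r, r)) * 0.02
V = rng.standard_normal((vocab, r)) * 0.02
print(fc_stack_timestep(rng.standard_normal(r), Ws, W_T, V).shape)   # (8000,)
```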
Yes
69ca609e86888c7e4e2e3d33435a0a36f77601b5
69ca609e86888c7e4e2e3d33435a0a36f77601b5_0
Q: What baseline decoder do they use? Text: Introduction Attentional sequence-to-sequence models have become the new standard for machine translation over the last two years, and with the unprecedented improvements in translation accuracy comes a new set of technical challenges. One of the biggest challenges is the high training and decoding cost of these neural machine translation (NMT) systems, which is often at least an order of magnitude higher than that of a phrase-based system trained on the same data. For instance, phrasal MT systems were able to achieve single-threaded decoding speeds of 100-500 words/sec on decade-old CPUs BIBREF0 , while BIBREF1 reported single-threaded decoding speeds of 8-10 words/sec on a shallow NMT system. BIBREF2 was able to reach CPU decoding speeds of 100 words/sec for a deep model, but used 44 CPU cores to do so. There has been recent work in speeding up decoding by reducing the search space BIBREF3 , but little on computational improvements. In this work, we consider a production scenario which requires low-latency, high-throughput NMT decoding. We focus on CPU-based decoders, since GPU/FPGA/ASIC-based decoders require specialized hardware deployment and logistical constraints such as batch processing. Efficient CPU decoders can also be used for on-device mobile translation. We focus on single-threaded decoding and single-sentence processing, since multiple threads can be used to reduce latency but not total throughput. We approach this problem from two angles: In Section "Decoder Speed Improvements" , we describe a number of techniques for improving the speed of the decoder, and obtain a 4.4x speedup over a highly efficient baseline. These speedups do not affect decoding results, so they can be applied universally. In Section "Model Improvements" , we describe a simple but powerful network architecture which uses a single RNN (GRU/LSTM) layer at the bottom with a large number of fully-connected (FC) layers on top, and obtains improvements similar to a deep RNN model at a fraction of the training and decoding cost. Data Set The data set we evaluate on in this work is WMT English-French NewsTest2014, which has 380M words of parallel training data and a 3003-sentence test set. The NewsTest2013 set is used for validation. In order to compare our architecture to past work, we train a word-based system without any data augmentation techniques. The network architecture is very similar to BIBREF4 , and specific details of layer size/depth are provided in subsequent sections. We use an 80k source/target vocab and perform standard unk-replacement BIBREF1 on out-of-vocabulary words. Training is performed using an in-house toolkit. Baseline Decoder Our baseline decoder is a standard beam search decoder BIBREF5 with several straightforward performance optimizations: Decoder Speed Improvements This section describes a number of speedups that can be made to a CPU-based attentional sequence-to-sequence beam decoder. Crucially, none of these speedups affect the actual mathematical computation of the decoder, so they can be applied to any network architecture with a guarantee that they will not affect the results. The model used here is similar to the original implementation of BIBREF4 .
The exact target GRU equation is: $ d_{ij} & = & {\rm tanh}(W_a{h_{i-1}} + V_a{x_i}){\cdot }{\rm tanh}(U_as_j) \\ \alpha _{ij} & = & \frac{e^{d_{ij}}}{\sum _{j^{\prime }}e^{d_{ij^{\prime }}}} \\ c_{i} &=& \sum _{j} \alpha _{ij}s_j \\ u_i & = & \sigma (W_u{h_{i-1}} + V_u{x_i} + U_u{c_i} + b_u) \\ r_i & = & \sigma (W_r{h_{i-1}} + V_r{x_i} + U_r{c_i} + b_r) \\ \hat{h}_i & = & \sigma (r_i{\odot }(W_h{h_{i-1}}) + V_h{x_i} + U_h{c_i} + b_h) \\ h_i & = & u_ih_{i-1} + (1 - u_i)\hat{h}_i $ Where $W_*$ , $V_*$ , $U_*$ , $b_*$ are learned parameters, $s_j$ is the hidden vector of the $j^{\rm th}$ source word, $h_{i-1}$ is the previous target recurrent vector, $x_i$ is the target input (e.g., embedding of previous word). We also denote the various hyperparameters: $b$ for the beam size, $r$ for the recurrent hidden size, $e$ is the embedding size, $|S|$ for the source sentence length, and $|T|$ for the target sentence length, $|E|$ is the vocab size. 16-Bit Matrix Multiplication Although CPU-based matrix multiplication libraries are highly optimized, they typically only operate on 32/64-bit floats, even though DNNs can almost always operate on much lower precision without degredation of accuracy BIBREF7 . However, low-precision math (1-bit to 7-bit) is difficult to implement efficiently on the CPU, and even 8-bit math has limited support in terms of vectorized (SIMD) instruction sets. Here, we use 16-bit fixed-point integer math, since it has first-class SIMD support and requires minimal changes to training. Training is still performed with 32-bit floats, but we clip the weights to the range [-1.0, 1.0] the relu activation to [0.0, 10.0] to ensure that all values fit into 16-bits with high precision. A reference implementation of 16-bit multiplication in C++/SSE2 is provided in the supplementary material, with a thorough description of low-level details. A comparison between our 16-bit integer implementation and Intel MKL's 32-bit floating point multiplication is given in Figure 1 . We can see that 16-bit multiplication is 2x-3x faster than 32-bit multiplication for batch sizes between 2 and 8, which is the typical range of the beam size $b$ . We are able to achieve greater than a 2x speedup in certain cases because we pre-process the weight matrix offline to have optimal memory layout, which is a capability BLAS libraries do not have. Pre-Compute Embeddings In the first hidden layer on the source and target sides, $x_i$ corresponds to word embeddings. Since this is a closed set of values that are fixed after training, the vectors $V{x_i}$ can be pre-computed BIBREF8 for each word in the vocabulary and stored in a lookup table. This can only be applied to the first hidden layer. Pre-computation does increase the memory cost of the model, since we must store $r \times 3$ floats per word instead of $e$ . However, if we only compute the $k$ most frequently words (e.g., $k = 8,000$ ), this reduces the pre-computation memory by 90% but still results in 95%+ token coverage due to the Zipfian distribution of language. Pre-Compute Attention The attention context computation in the GRU can be re-factored as follows: $U{c_i} = U(\sum _j \alpha _{ij}s_j) = \sum _j \alpha _{ij}(Us_j)$ Crucially, the hidden vector representation $s_j$ is only dependent on the source sentence, while $a_{ij}$ is dependent on the target hypothesis. Therefore, the original computation $U{c_i}$ requires total $|T| \times b$ multiplications per sentence, but the re-factored version $Us_j$ only requires total $|S|$ multiplications. 
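A minimal numpy simulation of the 16-bit fixed-point multiplication described above, added for illustration: weights are clipped to [-1.0, 1.0], activations to [0.0, 10.0], both are symmetrically scaled to int16, and products are accumulated in a wide integer type before rescaling. The scale factors and layout are illustrative and not the paper's C++/SSE2 implementation.

```python
import numpy as np

def quantize_int16(x, max_abs):
    """Map float values in [-max_abs, max_abs] to int16 with symmetric scaling."""
    scale = 32767.0 / max_abs
    q = np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)
    return q, scale

def int16_matmul(W, X, w_max=1.0, x_max=10.0):
    """Simulate a 16-bit integer GEMM: quantize, multiply, accumulate wide, rescale."""
    Wq, w_scale = quantize_int16(np.clip(W, -w_max, w_max), w_max)
    Xq, x_scale = quantize_int16(np.clip(X, 0.0, x_max), x_max)
    acc = Wq.astype(np.int64) @ Xq.astype(np.int64)   # wide accumulator avoids overflow
    return acc.astype(np.float64) / (w_scale * x_scale)

# Compare against float multiplication for a beam-sized batch (b = 6).
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(1024, 1024)).astype(np.float32)
X = rng.uniform(0.0, 10.0, size=(1024, 6)).astype(np.float32)
err = np.abs(int16_matmul(W, X) - W.astype(np.float64) @ X.astype(np.float64)).max()
print(f"max abs error vs. float multiply: {err:.4f}")
```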
The expectation over $\alpha $ must still be computed at each target timestep, but this is much less expensive than the multiplication by $U$ . SSE & Lookup Tables For the element-wise vector functions use in the GRU, we can use vectorized instructions (SSE/AVX) for the add and multiply functions, and lookup tables for sigmoid and tanh. Reference implementations in C++ are provided in the supplementary material. Merge Recurrent States In the GRU equation, for the first target hidden layer, $x_i$ represents the previously generated word, and $h_{i-1}$ encodes the hypothesis up to two words before the current word. Therefore, if two partial hypotheses in the beam only differ by the last emitted word, their $h_{i-1}$ vectors will be identical. Thus, we can perform matrix multiplication $Wh_{i-1}$ only on the unique $h_{i-1}$ vectors in the beam at each target timestep. For a beam size of $b = 6$ , we measured that the ratio of unique $h_{i-1}$ compared to total $h_{i-1}$ is approximately 70%, averaged over several language pairs. This can only be applied to the first target hidden layer. Speedup Results Cumulative results from each of the preceding speedups are presented in Table 1 , measured on WMT English-French NewsTest2014. The NMT architecture evaluated here uses 3-layer 512-dimensional bidirectional GRU for the source, and a 1-layer 1024-dimensional attentional GRU for the target. Each sentence is decoded independently with a beam of 6. Since these speedups are all mathematical identities excluding quantization noise, all outputs achieve 36.2 BLEU and are 99.9%+ identical. The largest improvement is from 16-bit matrix multiplication, but all speedups contribute a significant amount. Overall, we are able to achieve a 4.4x speedup over a fast baseline decoder. Although the absolute speed is impressive, the model only uses one target layer and is several BLEU behind the SOTA, so the next goal is to maximize model accuracy while still achieving speeds greater than some target, such as 100 words/sec. Model Improvements In NMT, like in many other deep learning tasks, accuracy can be greatly improved by adding more hidden layers, but training and decoding time increase significantly BIBREF11 , BIBREF12 , BIBREF2 . Several past works have noted that convolutional neural networks (CNNs) are significantly less expensive than RNNs, and replaced the source and/or target side with a CNN-based architecture BIBREF13 , BIBREF14 . However, these works have found it is difficult to replace the target side of the model with CNN layers while maintaining high accuracy. The use of a recurrent target is especially important to track attentional coverage and ensure fluency. Here, we propose a mixed model which uses an RNN layer at the bottom to both capture full-sentence context and perform attention, followed by a series of fully-connected (FC) layers applied on top at each timestep. The FC layers can be interpreted as a CNN without overlapping stride. Since each FC layer consists of a single matrix multiplication, it is $1/6^{\rm th}$ the cost of a GRU (or $1/8^{\rm th}$ an LSTM). Additionally, several of the speedups from Section "Decoder Speed Improvements" can only be applied to the first layer, so there is strong incentive to only use a single target RNN. To avoid vanishing gradients, we use ResNet-style skip connections BIBREF15 . These allow very deep models to be trained from scratch and do not require any additional matrix multiplications, unlike highway networks BIBREF16 . 
With 5 intermediate FC layers, target timestep $i$ is computed as: $ h^B_{i} &=& {\rm AttGRU}(h^B_{i-1}, x_i, S) \\ h^1_{i} &=& {\rm relu}(W^1h^B_i) \\ h^2_{i} &=& {\rm relu}(W^2h^1_i) \\ h^3_{i} &=& {\rm relu}(W^3h^2_i + h^1_i) \\ h^4_{i} &=& {\rm relu}(W^4h^3_i) \\ h^5_{i} &=& {\rm relu}(W^5h^4_i + h^3_i) \\ h^T_{i} &=& {\rm tanh}(W^Th^5_i)\ {\rm {\bf or}}\ {\rm GRU}(h^T_{i-1}, h^5_{i}) \\ y_i &=& {\rm softmax}(Vh^T_{i}) $ We follow BIBREF15 and only use skip connections on every other FC layer, but do not use batch normalization. The same pattern can be used for more FC layers, and the FC layers can be a different size than the bottom or top hidden layers. The top hidden layer can be an RNN or an FC layer. It is important to use relu activations (opposed to tanh) for ResNet-style skip connections. The GRUs still use tanh. Model Results Results using the mixed RNN+FC architecture are shown in Table 2 , using all speedups. We have found that the benefit of using RNN+FC layers on the source is minimal, so we only perform ablation on the target. For the source, we use a 3-layer 512-dim bidi GRU in all models (S1)-(S6). Model (S1) and (S2) are one and two layer baselines. Model (S4), which uses 7 intermediate FC layers, has similar decoding cost to (S2) while doubling the improvement over (S1) to 1.2 BLEU. We see minimal benefit from using a GRU on the top layer (S5) or using more FC layers (S6). In (E1) and (E2) we present 2 and 3 model ensembles of (S4), trained from scratch with different random seeds. We can see that the 2-model ensemble improves results by 0.9 BLEU, but the 3-model ensemble has little additional improvment. Although not presented here, we have found these improvement from decoder speedups and RNN+FC to be consistent across many language pairs. All together, we were able to achieve a BLEU score of 38.3 while decoding at 100 words/sec on a single CPU core. As a point of comparison, BIBREF2 achieves similar BLEU scores on this test set (37.9 to 38.9) and reports a CPU decoding speed of 100 words/sec (0.2226 sents/sec), but parallelizes this decoding across 44 CPU cores. System (S7), which is our re-implementation of BIBREF2 , decodes at 28 words/sec on one CPU core, using all of the speedups described in Section "Decoder Speed Improvements" . BIBREF12 has a similar computational cost to (S7), but we were not able to replicate those results in terms of accuracy. Although we are comparing an ensemble to a single model, we can see ensemble (E1) is over 3x faster to decode than the single model (S7). Additionally, we have found that model (S4) is roughly 3x faster to train than (S7) using the same GPU resources, so (E1) is also 1.5x faster to train than a single model (S7).
a standard beam search decoder BIBREF5 with several straightforward performance optimizations
98eb245c727c0bd050d7686d133fa7cd9d25a0fb
98eb245c727c0bd050d7686d133fa7cd9d25a0fb_0
Q: Was evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation? Text: Introduction The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system. MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14. Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated. Methods In this section, we briefly describe the proposed multimodal speech translation system and its components. Methods ::: Automatic Speech Recognition The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply a temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. 
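As a sketch of the ASR encoder shape described above (not the authors' implementation), the PyTorch module below stacks 6 bidirectional LSTM layers of 320 units per direction, follows each with a tanh projection to 320 dimensions, and skips every other frame in the middle two layers so that T is reduced to roughly T/4. The input feature dimension and the exact placement of the projections are assumptions.

```python
import torch
import torch.nn as nn

class SubsamplingBLSTMEncoder(nn.Module):
    """6-layer bi-LSTM ASR encoder with tanh projections and 2x subsampling
    in the middle two layers (overall T -> ~T/4)."""
    def __init__(self, feat_dim=40, hidden=320, layers=6, subsample_layers=(2, 3)):
        super().__init__()
        self.subsample_layers = set(subsample_layers)
        self.lstms, self.projs = nn.ModuleList(), nn.ModuleList()
        in_dim = feat_dim
        for _ in range(layers):
            self.lstms.append(nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True))
            self.projs.append(nn.Linear(2 * hidden, hidden))
            in_dim = hidden

    def forward(self, x):                      # x: (batch, T, feat_dim)
        for i, (lstm, proj) in enumerate(zip(self.lstms, self.projs)):
            x, _ = lstm(x)
            x = torch.tanh(proj(x))
            if i in self.subsample_layers:
                x = x[:, ::2, :]               # drop every other frame
        return x                               # (batch, ~T/4, 320)

enc = SubsamplingBLSTMEncoder()
print(enc(torch.randn(2, 100, 40)).shape)      # torch.Size([2, 25, 320])
```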
The forward-pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, attention layer and embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t\mathrm {=}0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states. Methods ::: Deliberation-based NMT A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind the deliberation networks BIBREF20 simulates this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\mathcal {H} = \lbrace h_1, h_2,\dots ,h_{N}\rbrace $ on top of which the first pass decoder applies the attention. The pre-softmax hidden states $\lbrace \hat{s}_1,\hat{s}_2,\dots ,\hat{s}_{M}\rbrace $ produced by the decoder leads to a first pass translation $\lbrace \hat{y}_1,\hat{y}_2,\dots , \hat{y}_{M}\rbrace $. The second pass decoder intervenes at this point and generates a second translation by attending separately to both $\mathcal {H}$ and the concatenated state vectors $\lbrace [\hat{s}_1;\hat{y}_1], [\hat{s}_2; \hat{y}_2],\dots ,[\hat{s}_{M}; \hat{y}_{M}]\rbrace $. Two context vectors are produced as a result, and they are joint inputs with $s_{t-1}$ (previous hidden state of ) and $y_{t-1}$ (previous output of ) to to yield $s_t$ and then $y_t$. A transformer-based deliberation architecture is proposed by BIBREF1. It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\mathcal {H}$ and the first-pass pre-softmax hidden states $\mathcal {\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\hat{Y}$ is not used for the second-pass attention. Methods ::: Multimodality ::: Visual Features We experiment with three types of video features, namely average-pooled vector representations (), convolutional layer outputs (), and Ten-Hot action category embeddings (). The features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video. In order to obtain the features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The features, taken at the CONV$_4$ layer of the network, has a $7 \times 7 \times 2048$ dimensionality. Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. 
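The ten-hot construction just described can be sketched as follows (an illustration only; the embedding matrix and its dimensionality are placeholders): keep the embeddings of the 10 most probable action categories and zero out the remaining rows.

```python
import numpy as np

def ten_hot_category_features(class_probs, category_embeddings, k=10):
    """Zero out all but the top-k most probable action categories.

    class_probs: (339,) softmax output of the action CNN for one video.
    category_embeddings: (339, emb_dim) embeddings of the category labels.
    Returns a (339, emb_dim) matrix with only the top-k rows non-zero."""
    top_k = np.argsort(class_probs)[-k:]
    mask = np.zeros_like(class_probs)
    mask[top_k] = 1.0
    return category_embeddings * mask[:, None]

rng = np.random.default_rng(4)
probs = rng.random(339)
probs /= probs.sum()
emb = rng.standard_normal((339, 300))          # emb_dim = 300 is an assumption
feat = ten_hot_category_features(probs, emb)
print((np.abs(feat).sum(axis=1) > 0).sum())    # 10 non-zero category rows
```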
We call this representation ten-hot action category embeddings (). Methods ::: Multimodality ::: Integration Approaches Encoder with Additive Visual Conditioning (-) In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder (-). This projection is strictly linear from the 2048-D features to the 1024-D space in which the self attention hidden states reside, and the projection matrix is learned jointly with the translation model. Decoder with Visual Attention (-) In order to accommodate attention to visual features at the decoder side and inspired by BIBREF25, we insert one layer of visual cross attention at a decoder block immediately before the fully-connected layer. We name the transformer decoder with such an extra layer as –, where this layer is immediately after the textual attention to the encoder output. Specifically, we experiment with attention to , and features separately. The visual attention is distributed across the 49 video regions in , the 339 action category word embeddings in , or the 32 rows in where we reshape the 2048-D vector into a $32 \times 64$ matrix. Methods ::: Multimodality ::: Multimodal Transformers The vanilla text-only transformer (-) is used as a baseline, and we design two variants: with additive visual conditioning (-) and with attention to visual features (-). A -features a -and a vanilla transformer decoder (-), therefore utilising visual information only at the encoder side. In contrast, a -is configured with a -and a –, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches. Methods ::: Multimodality ::: Multimodal Deliberation Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -. Additive () & Cascade () Textual Deliberation In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise. For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities. It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7. Additive Visual Conditioning (-) & Visual Attention (-) -and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5. 
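A minimal numpy sketch of the visual cross-attention layer described above, assuming a single attention head and a simple scaled dot-product form: the layer sits after the textual cross-attention and before the fully-connected layer, attending over the 49 convolutional video regions. The projections and the residual connection are simplifications of the full multi-head transformer layer.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def visual_cross_attention(dec_states, visual_feats, Wq, Wk, Wv):
    """Single-head visual attention over video regions, with a residual connection.

    dec_states: (tgt_len, 1024) outputs of the textual attention layer.
    visual_feats: (49, 2048) convolutional features (7x7 grid flattened)."""
    Q = dec_states @ Wq
    K = visual_feats @ Wk
    V = visual_feats @ Wv
    return dec_states + scaled_dot_attention(Q, K, V)

rng = np.random.default_rng(5)
d_model = 1024
Wq = rng.standard_normal((d_model, d_model)) * 0.02
Wk = rng.standard_normal((2048, d_model)) * 0.02
Wv = rng.standard_normal((2048, d_model)) * 0.02
out = visual_cross_attention(rng.standard_normal((12, d_model)),
                             rng.standard_normal((49, 2048)), Wq, Wk, Wv)
print(out.shape)                               # (12, 1024)
```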
For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -. For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information. Experiments ::: Dataset We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus. The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with beam size of 10. The pre-processing pipeline for the ASR is different from the MT pipeline in the sense that the punctuations are removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER. Experiments ::: Training We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -. We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7. Results & Analysis ::: Quantitative Results We report tokenised results obtained using the multeval toolkit BIBREF32. 
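For reference, the warm-up behaviour mentioned above (base learning rate 0.02 with 8,000 warm-up steps for Adam) is sketched below assuming the standard inverse-square-root ("Noam") schedule used with transformer_big; the exact constant factors applied by the toolkit may differ, so this is only an illustration of the shape of the schedule.

```python
def noam_learning_rate(step, base_lr=0.02, warmup_steps=8000, d_model=1024):
    """Linear warm-up followed by inverse-square-root decay (constants assumed)."""
    step = max(step, 1)
    return base_lr * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

for s in (1, 4000, 8000, 20000, 100000):
    print(s, round(noam_learning_rate(s), 8))
```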
We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging. The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output. For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations. Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline. Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which defies the findings of BIBREF7. We leave this to future investigations. Results & Analysis ::: Incongruence Analysis To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19. Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\downarrow $ 0.5 for –and $\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -. Therefore, the determination can be made that and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual clues during translation. Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers only a 0.1-loss with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention but without getting higher-quality translation as a result. 
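The incongruent-decoding probe can be summarised by the sketch below (an illustration, not the authors' evaluation script): translate the test set once with the correct video features and once with features shuffled across videos, then compare corpus-level BLEU. `model.translate` is a hypothetical batch-inference call standing in for whatever entry point the toolkit exposes, and sacrebleu is used here in place of multeval.

```python
import random
import sacrebleu

def incongruence_probe(model, sources, video_feats, references, seed=7):
    """Compare congruent vs. incongruent decoding for a multimodal MT model."""
    congruent = model.translate(sources, video_feats)          # hypothetical API

    shuffled = list(video_feats)
    random.Random(seed).shuffle(shuffled)                      # mismatch features and sentences
    incongruent = model.translate(sources, shuffled)

    bleu_c = sacrebleu.corpus_bleu(congruent, [references]).score
    bleu_i = sacrebleu.corpus_bleu(incongruent, [references]).score
    return bleu_c, bleu_i, bleu_c - bleu_i    # a large drop suggests the visual features are used
```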
The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder. Conclusions We explored a series of transformers and deliberation based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system with the remaining ones marked as contrastive ones. The primary system obtained a of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance. Acknowledgements This work was supported by the MultiMT (H2020 ERC Starting Grant No. 678017) and MMVC (Newton Fund Institutional Links Grant, ID 352343575) projects.
BLEU scores
537a786794604ecc473fb3ef6222e0c3cb81f772
537a786794604ecc473fb3ef6222e0c3cb81f772_0
Q: What dataset was used in this work? Text: Introduction The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system. MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14. Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated. Methods In this section, we briefly describe the proposed multimodal speech translation system and its components. Methods ::: Automatic Speech Recognition The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply a temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. 
The forward-pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, attention layer and embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t\mathrm {=}0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states. Methods ::: Deliberation-based NMT A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind the deliberation networks BIBREF20 simulates this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\mathcal {H} = \lbrace h_1, h_2,\dots ,h_{N}\rbrace $ on top of which the first pass decoder applies the attention. The pre-softmax hidden states $\lbrace \hat{s}_1,\hat{s}_2,\dots ,\hat{s}_{M}\rbrace $ produced by the decoder leads to a first pass translation $\lbrace \hat{y}_1,\hat{y}_2,\dots , \hat{y}_{M}\rbrace $. The second pass decoder intervenes at this point and generates a second translation by attending separately to both $\mathcal {H}$ and the concatenated state vectors $\lbrace [\hat{s}_1;\hat{y}_1], [\hat{s}_2; \hat{y}_2],\dots ,[\hat{s}_{M}; \hat{y}_{M}]\rbrace $. Two context vectors are produced as a result, and they are joint inputs with $s_{t-1}$ (previous hidden state of ) and $y_{t-1}$ (previous output of ) to to yield $s_t$ and then $y_t$. A transformer-based deliberation architecture is proposed by BIBREF1. It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\mathcal {H}$ and the first-pass pre-softmax hidden states $\mathcal {\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\hat{Y}$ is not used for the second-pass attention. Methods ::: Multimodality ::: Visual Features We experiment with three types of video features, namely average-pooled vector representations (), convolutional layer outputs (), and Ten-Hot action category embeddings (). The features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video. In order to obtain the features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The features, taken at the CONV$_4$ layer of the network, has a $7 \times 7 \times 2048$ dimensionality. Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. 
We call this representation ten-hot action category embeddings (). Methods ::: Multimodality ::: Integration Approaches Encoder with Additive Visual Conditioning (-) In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder (-). This projection is strictly linear from the 2048-D features to the 1024-D space in which the self attention hidden states reside, and the projection matrix is learned jointly with the translation model. Decoder with Visual Attention (-) In order to accommodate attention to visual features at the decoder side and inspired by BIBREF25, we insert one layer of visual cross attention at a decoder block immediately before the fully-connected layer. We name the transformer decoder with such an extra layer as –, where this layer is immediately after the textual attention to the encoder output. Specifically, we experiment with attention to , and features separately. The visual attention is distributed across the 49 video regions in , the 339 action category word embeddings in , or the 32 rows in where we reshape the 2048-D vector into a $32 \times 64$ matrix. Methods ::: Multimodality ::: Multimodal Transformers The vanilla text-only transformer (-) is used as a baseline, and we design two variants: with additive visual conditioning (-) and with attention to visual features (-). A -features a -and a vanilla transformer decoder (-), therefore utilising visual information only at the encoder side. In contrast, a -is configured with a -and a –, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches. Methods ::: Multimodality ::: Multimodal Deliberation Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -. Additive () & Cascade () Textual Deliberation In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise. For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities. It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7. Additive Visual Conditioning (-) & Visual Attention (-) -and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5. 
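A minimal sketch of the additive visual conditioning described above (illustrative only): the 2048-D pooled video feature is linearly projected to the 1024-D model dimension and added to every position of the encoder output. Biases and any normalisation are omitted here as assumptions.

```python
import numpy as np

def additive_visual_conditioning(encoder_states, video_feat, W_proj):
    """Add a linear projection of the pooled video feature to every encoder position.

    encoder_states: (src_len, 1024) transformer encoder outputs.
    video_feat: (2048,) average-pooled action feature for the whole video.
    W_proj: (2048, 1024) projection learned jointly with the translation model."""
    return encoder_states + video_feat @ W_proj    # broadcast over source positions

rng = np.random.default_rng(6)
H = rng.standard_normal((17, 1024))
v = rng.standard_normal(2048)
W = rng.standard_normal((2048, 1024)) * 0.01
print(additive_visual_conditioning(H, v, W).shape)  # (17, 1024)
```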
For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -. For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information. Experiments ::: Dataset We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus. The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with beam size of 10. The pre-processing pipeline for the ASR is different from the MT pipeline in the sense that the punctuations are removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER. Experiments ::: Training We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -. We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7. Results & Analysis ::: Quantitative Results We report tokenised results obtained using the multeval toolkit BIBREF32. 
We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging. The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output. For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations. Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline. Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which defies the findings of BIBREF7. We leave this to future investigations. Results & Analysis ::: Incongruence Analysis To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19. Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\downarrow $ 0.5 for –and $\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -. Therefore, the determination can be made that and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual clues during translation. Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers only a 0.1-loss with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention but without getting higher-quality translation as a result. 
The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder. Conclusions We explored a series of transformers and deliberation based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system with the remaining ones marked as contrastive ones. The primary system obtained a of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance. Acknowledgements This work was supported by the MultiMT (H2020 ERC Starting Grant No. 678017) and MMVC (Newton Fund Institutional Links Grant, ID 352343575) projects.
How2
dc5ff2adbe1a504122e3800c9ca1d348de391c94
dc5ff2adbe1a504122e3800c9ca1d348de391c94_0
Q: How do they evaluate the sentence representations? Text: Introduction Learning sentence representations from unlabelled data is becoming increasingly prevalent in both the machine learning and natural language processing research communities, as it efficiently and cheaply allows knowledge extraction that can successfully transfer to downstream tasks. Methods built upon the distributional hypothesis BIBREF0 and distributional similarity BIBREF1 can be roughly categorised into two types: Word-prediction Objective: The objective pushes the system to make better predictions of words in a given sentence. As the nature of the objective is to predict words, these are also called generative models. In one of the two classes of models of this type, an encoder-decoder model is learnt using a corpus of contiguous sentences BIBREF2 , BIBREF3 , BIBREF4 to make predictions of the words in the next sentence given the words in the current one. After training, the decoder is usually discarded as it is only needed during training and is not designed to produce sentence representations. In the other class of models of this type, a large language model is learnt BIBREF5 , BIBREF6 , BIBREF7 on unlabelled corpora, which could be an autoregressive model or a masked language model, which gives extremely powerful language encoders but requires massive computing resources and training time. Similarity-based Objective: The objective here relies on a predefined similarity function to enforce the model to produce more similar representations for adjacent sentences than those that are not BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Therefore, the inductive biases introduced by the two key components, the differential similarity function and the context window, in the objective crucially determine the quality of learnt representations and what information of sentences can be encoded in them. To avoid tuning the inductive biases in the similarity-based objective, we follow the word-prediction objective with an encoder and a decoder, and we are particularly interested in exploiting invertible decoding functions, which can then be used as additional encoders during testing. The contribution of our work is summarised as follows: Related Work Learning vector representations for words with a word embedding matrix as the encoder and a context word embedding matrix as the decoder BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 can be considered as a word-level example of our approach, as the models learn to predict the surrounding words in the context given the current word, and the context word embeddings can also be utilised to augment the word embeddings BIBREF14 , BIBREF16 . We are thus motivated to explore the use of sentence decoders after learning instead of ignoring them as most sentence encoder-decoder models do. Our approach is to invert the decoding function in order to use it as another encoder to assist the original encoder. In order to make computation of the inverse function well-posed and tractable, careful design of the decoder is needed. A simple instance of an invertible decoder is a linear projection with an orthonormal square matrix, whose transpose is its inverse. A family of bijective transformations with non-linear functions BIBREF17 , BIBREF18 , BIBREF19 can also be considered as it empowers the decoder to learn a complex data distribution. 
In our paper, we exploit two types of plausible decoding functions, including linear projection and bijective functions with neural networks BIBREF17 , and with proper design, the inverse of each of the decoding functions can be derived without expensive inverse calculation after learning. Thus, the decoder function can be utilised along with the encoder for building sentence representations. We show that the ensemble of the encoder and the inverse of the decoder outperforms each of them. Model Design Our model has similar structure to that of skip-thought BIBREF2 and, given the neighbourhood hypothesis BIBREF20 , learns to decode the next sentence given the current one instead of predicting both the previous sentence and the next one at the same time. Training Objective Given the finding BIBREF4 that neither an autoregressive nor an RNN decoder is necessary for learning sentence representations that excel on downstream tasks as the autoregressive decoders are slow to train and the quality of the generated sequences is not highly correlated with that of the representations of the sentences, our model only learns to predict words in the next sentence in a non-autoregressive fashion. Suppose that the $i$ -th sentence $S_i=\lbrace w_1,w_2,...,w_{N_i}\rbrace $ has $N_i$ words, and $S_{i+1}$ has $N_{i+1}$ words. The learning objective is to maximise the averaged log-likelihood for all sentence pairs: $$\ell _{S_{i+i}|S_i}(\phi ,\theta )=\frac{1}{N_{i+1}}\sum _{w_j\in S_{i+1}}\log P(w_j|S_i) \nonumber $$ (Eq. 5) where $\theta $ and $\phi $ contain the parameters in the encoder $f_\text{en}(S_i;\theta )$ and the decoder $f_\text{de}(_i;\phi )$ respectively. The forward computation of our model for a given sentence pair $\lbrace S_i, S_{i+1}\rbrace $ , in which the words in $S_i$ are the input to the learning system and the words in $S_{i+1}$ are targets is defined as: $$_i &= f_\text{en}(S_i;\theta ) \nonumber \\ _i &= f_\text{de}(_i;\phi ) \nonumber $$ (Eq. 6) where $_i$ is the vector representation of $S_i$ , and $_i$ is the vector output of the decoder which will be compared with the vector representations of words in the next sentence $S_{i+1}$ . Since calculating the likelihood of generating each word involves a computationally demanding softmax function, the negative sampling method BIBREF12 is applied to replace the softmax, and $\log P(w_j|s_i)$ is calculated as: $$\log \sigma (_i^\top _{w_j}) + \sum _{k=1}^{K}\mathbb {E}_{w_k\sim P_e(w)}\log \sigma (-_i^\top _{w_k}) \nonumber $$ (Eq. 7) where $_{w_k}\in ^{d_}$ is the pretrained vector representation for $w_k$ , the empirical distribution $P_e(w)$ is the unigram distribution of words in the training corpus raised to power 0.75 as suggested in the prior work BIBREF21 , and $K$ is the number of negative samples. In this case, we enforce the output of the decoder $_i$ to have the same dimensionality as the pretrained word vectors $_{w_j}$ . The loss function is summed over all contiguous sentence pairs in the training corpus. For simplicity, we omit the subscription for indexing the sentences in the following sections. Encoder The encoder $f_\text{en}(S;\theta )$ is a bi-directional Gated Recurrent Unit BIBREF22 with $d$ -dimensions in each direction. It processes word vectors in an input sentence $\lbrace _{w_1},_{w_2},...,_{w_{N}}\rbrace $ sequentially according to the temporal order of the words, and generates a sequence of hidden states. 
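Before moving on to how the encoder's hidden states are used, the negative-sampling objective defined above can be made concrete with a short PyTorch sketch; the tensor names and shapes are ours, and the real implementation may batch the computation differently.

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(z, target_vecs, noise_vecs):
    """Negative-sampling approximation of the averaged log-likelihood.

    z           : (d,)     decoder output for the current sentence
    target_vecs : (N, d)   pretrained vectors of the words in the next sentence
    noise_vecs  : (N, K, d) vectors of K negative samples per target word,
                  drawn from the unigram distribution raised to the power 0.75
    """
    pos = F.logsigmoid(target_vecs @ z)                # (N,)
    neg = F.logsigmoid(-(noise_vecs @ z)).sum(dim=1)   # (N,)
    # Average over the target words and negate to obtain a loss to minimise.
    return -(pos + neg).mean()
```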
During learning, in order to reduce the computation load, only the last hidden state serves as the sentence representation $\in ^{d_}$ , where $d_=2d$ . Decoder As the goal is to reuse the decoding function $f_{\text{de}}()$ as another plausible encoder for building sentence representations after learning rather than ignoring it, one possible solution is to find the inverse function of the decoder function during testing, which is noted as $f^{-1}_{\text{de}}()$ . In order to reduce the complexity and the running time during both training and testing, the decoding function $f_{\text{de}}()$ needs to be easily invertible. Here, two types of decoding functions are considered and explored. In this case, the decoding function is a linear projection, which is $= f_{\text{de}}()=+ $ , where $\in ^{d_\times d_}$ is a trainable weight matrix and $\in ^{d_\times 1}$ is the bias term. As $f_\text{de}$ is a linear projection, the simplest situation is when $$ is an orthogonal matrix and its inverse is equal to its transpose. Often, as the dimensionality of vector $$ doesn't necessarily need to match that of word vectors $$ , $$ is not a square matrix . To enforce invertibility on $$ , a row-wise orthonormal regularisation on $$ is applied during learning, which leads to $^\top =$ , where $$0 is the identity matrix, thus the inverse function is simply $$1 , which is easily computed. The regularisation formula is $$2 , where $$3 is the Frobenius norm. Specifically, the update rule BIBREF23 for the regularisation is: $$:=(1+\beta )-\beta (^\top )\nonumber $$ (Eq. 12) The usage of the decoder during training and testing is defined as follows: $$\text{Training:} \hspace{2.84544pt} & = f_{\text{de}}()=+ \nonumber \\ \text{Testing:} \hspace{2.84544pt} & = f_\text{de}^{-1}()=^\top (- ) \nonumber $$ (Eq. 13) Therefore, the decoder is also utilised after learning to serve as a linear encoder in addition to the RNN encoder. A general case is to use a bijective function as the decoder, as the bijective functions are naturally invertible. However, the inverse of a bijective function could be hard to find and its calculation could also be computationally intense. A family of bijective transformation was designed in NICE BIBREF17 , and the simplest continuous bijective function $f:^D\rightarrow ^D$ and its inverse $f^{-1}$ is defined as: $$h: \hspace{14.22636pt} _1 &= _1, & _2 &= _2+m(_1) \nonumber \\ h^{-1}: \hspace{14.22636pt} _1 &= _1, & _2 &= _2-m(_1) \nonumber $$ (Eq. 15) where $_1$ is a $d$ -dimensional partition of the input $\in ^D$ , and $m:^d\rightarrow ^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named as an `additive coupling layer' BIBREF17 , which has unit Jacobian determinant. To allow the learning system to explore more powerful transformation, we follow the design of the `affine coupling layer' BIBREF24 : $$h: \hspace{5.69046pt} _1 &= _1, & _2 &= _2 \odot \text{exp}(s(_1)) + t(_1) \nonumber \\ h^{-1}: \hspace{5.69046pt} _1 &= _1, & _2 &= (_2-t(_1)) \odot \text{exp}(-s(_1)) \nonumber $$ (Eq. 16) where $s:^d\rightarrow ^{D-d}$ and $t:^d\rightarrow ^{D-d}$ are both neural networks with linear output units. The requirement of the continuous bijective transformation is that, the dimensionality of the input $$ and the output $$ need to match exactly. In our case, the output $\in ^{d_}$ of the decoding function $f_{\text{de}}$ has lower dimensionality than the input $\in ^{d_}$ does. 
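The affine coupling layer just defined, together with its inverse, can be sketched in PyTorch as follows. This is a simplified illustration assuming the input is split into two halves and that $s$ and $t$ are small feed-forward networks; the authors' exact configuration may differ.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling layer: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1)."""

    def __init__(self, dim, hidden):
        super().__init__()
        self.d = dim // 2  # split the input into two partitions
        self.s = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.d))
        self.t = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        y2 = x2 * torch.exp(self.s(x1)) + self.t(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        # The inverse needs no expensive computation: it reuses s and t.
        y1, y2 = y[:, :self.d], y[:, self.d:]
        x2 = (y2 - self.t(y1)) * torch.exp(-self.s(y1))
        return torch.cat([y1, x2], dim=1)
```

In practice several such coupling layers are stacked with the partition swapped between layers, so that every output unit can depend on every input unit.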
Our solution is to add an orthonormal regularised linear projection before the bijective function to transform the vector representation of a sentence to the desired dimension. The usage of the decoder that is composed of a bijective function and a regularised linear projection during training and testing is defined as: $$\text{Training:} \hspace{2.84544pt} & = f_{\text{de}}() = h(+ ) \nonumber \\ \text{Testing:} \hspace{2.84544pt} & = f_\text{de}^{-1}() = ^\top (h^{-1}() - ) \nonumber $$ (Eq. 17) Using Decoder in the Test Phase As the decoder is easily invertible, it is also used to produce vector representations. The post-processing step BIBREF25 that removes the top principal component is applied on the representations from $f_\text{en}$ and $f^{-1}_\text{de}$ individually. In the following sections, $_\text{en}$ denotes the post-processed representation from $f_\text{en}$ , and $_\text{de}$ from $f^{-1}_\text{de}$ . Since $f_\text{en}$ and $f^{-1}_\text{de}$ naturally process sentences in distinctive ways, it is reasonable to expect that the ensemble of $_\text{en}$ and $_\text{de}$ will outperform each of them. Experimental Design Experiments are conducted in PyTorch BIBREF26 , with evaluation using the SentEval package BIBREF27 with modifications to include the post-processing step. Word vectors $_{w_j}$ are initialised with FastText BIBREF15 , and fixed during learning. Unlabelled Corpora Two unlabelled corpora, including BookCorpus BIBREF28 and UMBC News Corpus BIBREF29 , are used to train models with invertible decoders. These corpora are referred as B, and U in Table 3 and 5 . The UMBC News Corpus is roughly twice as large as the BookCorpus, and the details are shown in Table 1 . Unsupervised Evaluation The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 . The cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset. Supervised Evaluation It includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 . In these tasks, MR, CR, SST, SUBJ, MPQA and MRPC are binary classification tasks, TREC is a multi-class classification task. SICK and MRPC require the same feature engineering method BIBREF44 in order to compose a vector from vector representations of two sentences to indicate the difference between them. Hyperparameter Tuning The hyperparameters are tuned on the averaged scores on STS14 of the model trained on BookCorpus, thus it is marked with a $^\star $ in tables to indicate potential overfitting. The hyperparameter setting for our model is summarised as follows: the batch size $N=512$ , the dimension of sentence vectors $d_=2048$ , the dimension of word vectors $d_{_{w_j}}=300$ , the number of negative samples $K=5$ , and the initial learning rate is $5\times 10^{-4}$ which is kept fixed during learning. The Adam optimiser BIBREF45 with gradient clipping BIBREF46 is applied for stable learning. 
Each model in our experiment is only trained for one epoch on the given training corpus. $\beta $ in the invertible constraint of the linear projection is set to be $0.01$ , and after learning, all 300 eigenvalues are close to 1. For the bijective transformation, in order to make sure that each output unit is influenced by all input units, we stack four affine coupling layers in the bijective transformation BIBREF17 . The non-linear mappings $s$ and $t$ are both neural networks with one hidden layer with the rectified linear activation function. Representation Pooling Various pooling functions are applied to produce vector representations for input sentences. For unsupervised evaluation tasks, as recommended in previous studies BIBREF14 , BIBREF50 , BIBREF51 , a global mean-pooling function is applied on both the output of the RNN encoder $f_\text{en}$ to produce a vector representation $_\text{en}$ and the inverse of the decoder $f_\text{de}^{-1}$ to produce $_\text{de}$ . For supervised evaluation tasks, three pooling functions, including global max-, min-, and mean-pooling, are applied on top of the encoder and the outputs from three pooling functions are concatenated to serve as a vector representation for a given sentence. The same representation pooling strategy is applied on the inverse of the decoder. The reason for applying different representation pooling strategies for two categories of tasks is: (1) cosine similarity of two vector representations is directly calculated in unsupervised evaluation tasks to determine the textual similarity of two sentences, and it suffers from the curse-of-dimensionality BIBREF52 , which leads to more equidistantly distributed representations for higher dimensional vector representations decreasing the difference among similarity scores. (2) given Cover's theorem BIBREF53 and the blessings-of-dimensionality property, it is more likely for the data points to be linearly separable when they are presented in high dimensional space, and in the supervised evaluation tasks, high dimensional vector representations are preferred as a linear classifier will be learnt to evaluate how likely the produced sentence representations are linearly separable; (3) in our case, both the encoder and the inverse of the decoder are capable of producing a vector representation per time step in a given sentence, although during training, only the last one is regarded as the sentence representation for the fast training speed, it is more reasonable to make use of all representations at all time steps with various pooling functions to compute a vector representations to produce high-quality sentence representations that excel the downstream tasks. Discussion It is worth discussing the motivation of the model design and the observations in our experiments. As mentioned as one of the take-away messages BIBREF54 , to demonstrate the effectiveness of the invertible constraint, the comparison of our model with the constraint and its own variants use the same word embeddings from FastText BIBREF15 and have the same dimensionaility of sentence representations during learning, and use the same classifier on top of the produced representations with the same hyperparameter settings. Overall, given the performance of the inverse of each decoder presented in Table 3 and 5 , it is reasonable to state that the inverse of the decoder provides high-quality sentence representations as well as the encoder does. 
However, there is no significant difference between the two decoders in terms of the performance on the downstream tasks. In this section, observations and thoughts are presented based on the analyses of our model with the invertible constraint. Effect of Invertible Constraint The motivation of enforcing the invertible constraint on the decoder during learning is to make it usable and potentially helpful during testing in terms of boosting the performance of the lone RNN encoder in the encoder-decoder models (instead of ignoring the decoder part after learning). Therefore, it is important to check the necessity of the invertible constraint on the decoders. A model with the same hyperparameter settings but without the invertible constraint is trained as the baseline model, and macro-averaged results that summarise the same type of tasks are presented in Table 2 . As noted in the prior work BIBREF55 , there exists significant inconsistency between the group of unsupervised tasks and the group of supervised ones, it is possible for a model to excel on one group of tasks but fail on the other one. As presented in our table, the inverse of the decoder tends to perform better than the encoder on unsupervised tasks, and the situation reverses when it comes to the supervised ones. In our model, the invertible constraint helps the RNN encoder $f_\text{en}$ to perform better on the unsupervised evaluation tasks, and helps the inverse of the decoder $f_\text{de}^{-1}$ to provide better results on single sentence classification tasks. An interesting observation is that, by enforcing the invertible constraint, the model learns to sacrifice the performance of $f_\text{de}^{-1}$ and improve the performance of $f_\text{en}$ on unsupervised tasks to mitigate the gap between the two encoding functions, which leads to more aligned vector representations between $f_\text{en}$ and $f_\text{de}^{-1}$ . Effect on Ensemble Although encouraging the invertible constraint leads to slightly poorer performance of $f_\text{de}^{-1}$ on unsupervised tasks, it generally leads to better sentence representations when the ensemble of the encoder $f_\text{en}$ and the inverse of the decoder $f_\text{de}^{-1}$ is considered. Specifically, for unsupervised tasks, the ensemble is an average of two vector representations produced from two encoding functions during the testing time, and for supervised tasks, the concatenation of two representations is regarded as the representation of a given sentence. The ensemble method is recommended in prior work BIBREF14 , BIBREF16 , BIBREF51 , BIBREF56 , BIBREF4 , BIBREF54 . As presented in Table 2 , on unsupervised evaluation tasks (STS12-16 and SICK14), the ensemble of two encoding functions is averaging, which benefits from aligning representations from $f_\text{en}$ and $f_\text{de}^{-1}$ by enforcing the invertible constraint. While in the learning system without the invertible constraint, the ensemble of two encoding functions provides worse performance than $f_\text{de}^{-1}$ . On supervised evaluation tasks, as the ensemble method is concatenation and a linear model is applied on top of the concatenated representations, as long as the two encoding functions process sentences distinctively, the linear classifier is capable of picking relevant feature dimensions from both encoding functions to make good predictions, thus there is no significant difference between our model with and without invertible constraint. 
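For concreteness, the pooling and ensembling conventions used in these comparisons can be sketched as below. The sketch assumes a sequence of hidden states of shape (T, d) from either encoding function and already post-processed sentence vectors z_en and z_de; the function names are ours.

```python
import torch

def pool_unsupervised(states):
    # Global mean-pooling over time steps; used when two sentence vectors are
    # compared directly by cosine similarity (STS-style tasks).
    return states.mean(dim=0)

def pool_supervised(states):
    # Concatenated max-, min- and mean-pooling; the higher-dimensional vector
    # suits the linear classifiers trained on the downstream tasks.
    return torch.cat([states.max(dim=0).values,
                      states.min(dim=0).values,
                      states.mean(dim=0)], dim=-1)

def ensemble_unsupervised(z_en, z_de):
    # Averaging the two representations; this benefits from the alignment
    # between encoder and inverted decoder encouraged by the invertible constraint.
    return 0.5 * (z_en + z_de)

def ensemble_supervised(z_en, z_de):
    # Concatenation; the downstream linear classifier can then pick useful
    # feature dimensions from either encoding function.
    return torch.cat([z_en, z_de], dim=-1)
```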
Effect of Learning Recent research BIBREF54 showed that the improvement on the supervised evaluation tasks led by learning from labelled or unlabelled corpora is rather insignificant compared to random initialised projections on top of pretrained word vectors. Another interesting direction of research that utilises probabilistic random walk models on the unit sphere BIBREF57 , BIBREF25 , BIBREF58 derived several simple yet effective post-processing methods that operate on pretrained word vectors and are able to boost the performance of the averaged word vectors as the sentence representation on unsupervised tasks. While these papers reveal interesting aspects of the downstream tasks and question the need for optimising a learning objective, our results show that learning on unlabelled corpora helps. On unsupervised evaluation tasks, in order to show that learning from an unlabelled corpus helps, the performance of our learnt representations should be directly compared with the pretrained word vectors, FastText in our system, at the same dimensionality with the same post-processing BIBREF25 . The word vectors are scattered in the 300-dimensional space, and our model has a decoder that is learnt to project a sentence representation $\in ^{d_}$ to $=f_\text{de}(;\phi )\in ^{300}$ . The results of our learnt representations and averaged word vectors with the same postprocessing are presented in Table 4 . As shown in the Table 4 , the performance of our learnt system is better than FastText at the same dimensionality. It is worth mentioning that, in our system, the final representation is an average of postprocessed word vectors and the learnt representations $$ , and the invertible constraint guarantees that the ensemble of both gives better performance. Otherwise, as discussed in the previous section, an ensemble of postprocessed word vectors and some random encoders won't necessarily lead to stronger results. Table 3 also provides evidence for the effectiveness of learning on the unsupervised evaluation tasks. On supervised evaluation tasks, we agree that higher dimensional vector representations give better results on the downstream tasks. Compared to random projections with $4096\times 6$ output dimensions, learning from unlabelled corpora leverages the distributional similarity BIBREF1 at the sentence-level into the learnt representations and potentially helps capture the meaning of a sentence. In our system, the raw representations are in 2400-dimensional space, and the use of various pooling functions expands it to $2048\times 6$ dimensions, which is half as large as the random projection dimension and still yields better performance. Both our models and random projections with no training are presented in Table 5 . The evidence from both sets of downstream tasks support our argument that learning from unlabelled corpora helps the representations capture meaning of sentences. However, current ways of incorporating the distributional hypothesis only utilise it as a weak and noisy supervision, which might limit the quality of the learnt sentence representations. Conclusion Two types of decoders, including an orthonormal regularised linear projection and a bijective transformation, whose inverses can be derived effortlessly, are presented in order to utilise the decoder as another encoder in the testing phase. 
The experiments and comparisons are conducted on two large unlabelled corpora, and the performance on the downstream tasks shows the high usability and generalisation ability of the decoders in testing. Analyses show that the invertible constraint enforced on the decoder encourages each one to learn from the other one during learning, and provides improved encoding functions after learning. Ensemble of the encoder and the inverse of the decoder gives even better performance when the invertible constraint is applied on the decoder side. Furthermore, by comparing with prior work, we argue that learning from unlabelled corpora indeed helps to improve the sentence representations, although the current way of utilising corpora might not be optimal. We view this as unifying the generative and discriminative objectives for unsupervised sentence representation learning, as it is trained with a generative objective which when inverted can be seen as creating a discriminative target. Our proposed method in our implementation doesn't provide extremely good performance on the downstream tasks, but we see our method as an opportunity to fuse all possible components in a model, even a usually discarded decoder, to produce sentence representations. Future work could potentially expand our work into end-to-end invertible model that is able to produce high-quality representations by omnidirectional computations. Acknowledgements Many Thanks to Andrew Ying for helpful clarifications on several concepts.
The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 . The cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset., Supervised Evaluation It includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 .
04b43deab0fd753e3419ed8741c10f652b893f02
04b43deab0fd753e3419ed8741c10f652b893f02_0
Q: What are the two decoding functions? Text: Introduction Learning sentence representations from unlabelled data is becoming increasingly prevalent in both the machine learning and natural language processing research communities, as it efficiently and cheaply allows knowledge extraction that can successfully transfer to downstream tasks. Methods built upon the distributional hypothesis BIBREF0 and distributional similarity BIBREF1 can be roughly categorised into two types: Word-prediction Objective: The objective pushes the system to make better predictions of words in a given sentence. As the nature of the objective is to predict words, these are also called generative models. In one of the two classes of models of this type, an encoder-decoder model is learnt using a corpus of contiguous sentences BIBREF2 , BIBREF3 , BIBREF4 to make predictions of the words in the next sentence given the words in the current one. After training, the decoder is usually discarded as it is only needed during training and is not designed to produce sentence representations. In the other class of models of this type, a large language model is learnt BIBREF5 , BIBREF6 , BIBREF7 on unlabelled corpora, which could be an autoregressive model or a masked language model, which gives extremely powerful language encoders but requires massive computing resources and training time. Similarity-based Objective: The objective here relies on a predefined similarity function to enforce the model to produce more similar representations for adjacent sentences than those that are not BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Therefore, the inductive biases introduced by the two key components, the differential similarity function and the context window, in the objective crucially determine the quality of learnt representations and what information of sentences can be encoded in them. To avoid tuning the inductive biases in the similarity-based objective, we follow the word-prediction objective with an encoder and a decoder, and we are particularly interested in exploiting invertible decoding functions, which can then be used as additional encoders during testing. The contribution of our work is summarised as follows: Related Work Learning vector representations for words with a word embedding matrix as the encoder and a context word embedding matrix as the decoder BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 can be considered as a word-level example of our approach, as the models learn to predict the surrounding words in the context given the current word, and the context word embeddings can also be utilised to augment the word embeddings BIBREF14 , BIBREF16 . We are thus motivated to explore the use of sentence decoders after learning instead of ignoring them as most sentence encoder-decoder models do. Our approach is to invert the decoding function in order to use it as another encoder to assist the original encoder. In order to make computation of the inverse function well-posed and tractable, careful design of the decoder is needed. A simple instance of an invertible decoder is a linear projection with an orthonormal square matrix, whose transpose is its inverse. A family of bijective transformations with non-linear functions BIBREF17 , BIBREF18 , BIBREF19 can also be considered as it empowers the decoder to learn a complex data distribution. 
In our paper, we exploit two types of plausible decoding functions, including linear projection and bijective functions with neural networks BIBREF17 , and with proper design, the inverse of each of the decoding functions can be derived without expensive inverse calculation after learning. Thus, the decoder function can be utilised along with the encoder for building sentence representations. We show that the ensemble of the encoder and the inverse of the decoder outperforms each of them. Model Design Our model has similar structure to that of skip-thought BIBREF2 and, given the neighbourhood hypothesis BIBREF20 , learns to decode the next sentence given the current one instead of predicting both the previous sentence and the next one at the same time. Training Objective Given the finding BIBREF4 that neither an autoregressive nor an RNN decoder is necessary for learning sentence representations that excel on downstream tasks as the autoregressive decoders are slow to train and the quality of the generated sequences is not highly correlated with that of the representations of the sentences, our model only learns to predict words in the next sentence in a non-autoregressive fashion. Suppose that the $i$ -th sentence $S_i=\lbrace w_1,w_2,...,w_{N_i}\rbrace $ has $N_i$ words, and $S_{i+1}$ has $N_{i+1}$ words. The learning objective is to maximise the averaged log-likelihood for all sentence pairs: $$\ell _{S_{i+i}|S_i}(\phi ,\theta )=\frac{1}{N_{i+1}}\sum _{w_j\in S_{i+1}}\log P(w_j|S_i) \nonumber $$ (Eq. 5) where $\theta $ and $\phi $ contain the parameters in the encoder $f_\text{en}(S_i;\theta )$ and the decoder $f_\text{de}(_i;\phi )$ respectively. The forward computation of our model for a given sentence pair $\lbrace S_i, S_{i+1}\rbrace $ , in which the words in $S_i$ are the input to the learning system and the words in $S_{i+1}$ are targets is defined as: $$_i &= f_\text{en}(S_i;\theta ) \nonumber \\ _i &= f_\text{de}(_i;\phi ) \nonumber $$ (Eq. 6) where $_i$ is the vector representation of $S_i$ , and $_i$ is the vector output of the decoder which will be compared with the vector representations of words in the next sentence $S_{i+1}$ . Since calculating the likelihood of generating each word involves a computationally demanding softmax function, the negative sampling method BIBREF12 is applied to replace the softmax, and $\log P(w_j|s_i)$ is calculated as: $$\log \sigma (_i^\top _{w_j}) + \sum _{k=1}^{K}\mathbb {E}_{w_k\sim P_e(w)}\log \sigma (-_i^\top _{w_k}) \nonumber $$ (Eq. 7) where $_{w_k}\in ^{d_}$ is the pretrained vector representation for $w_k$ , the empirical distribution $P_e(w)$ is the unigram distribution of words in the training corpus raised to power 0.75 as suggested in the prior work BIBREF21 , and $K$ is the number of negative samples. In this case, we enforce the output of the decoder $_i$ to have the same dimensionality as the pretrained word vectors $_{w_j}$ . The loss function is summed over all contiguous sentence pairs in the training corpus. For simplicity, we omit the subscription for indexing the sentences in the following sections. Encoder The encoder $f_\text{en}(S;\theta )$ is a bi-directional Gated Recurrent Unit BIBREF22 with $d$ -dimensions in each direction. It processes word vectors in an input sentence $\lbrace _{w_1},_{w_2},...,_{w_{N}}\rbrace $ sequentially according to the temporal order of the words, and generates a sequence of hidden states. 
During learning, in order to reduce the computation load, only the last hidden state serves as the sentence representation $\in ^{d_}$ , where $d_=2d$ . Decoder As the goal is to reuse the decoding function $f_{\text{de}}()$ as another plausible encoder for building sentence representations after learning rather than ignoring it, one possible solution is to find the inverse function of the decoder function during testing, which is noted as $f^{-1}_{\text{de}}()$ . In order to reduce the complexity and the running time during both training and testing, the decoding function $f_{\text{de}}()$ needs to be easily invertible. Here, two types of decoding functions are considered and explored. In this case, the decoding function is a linear projection, which is $= f_{\text{de}}()=+ $ , where $\in ^{d_\times d_}$ is a trainable weight matrix and $\in ^{d_\times 1}$ is the bias term. As $f_\text{de}$ is a linear projection, the simplest situation is when $$ is an orthogonal matrix and its inverse is equal to its transpose. Often, as the dimensionality of vector $$ doesn't necessarily need to match that of word vectors $$ , $$ is not a square matrix . To enforce invertibility on $$ , a row-wise orthonormal regularisation on $$ is applied during learning, which leads to $^\top =$ , where $$0 is the identity matrix, thus the inverse function is simply $$1 , which is easily computed. The regularisation formula is $$2 , where $$3 is the Frobenius norm. Specifically, the update rule BIBREF23 for the regularisation is: $$:=(1+\beta )-\beta (^\top )\nonumber $$ (Eq. 12) The usage of the decoder during training and testing is defined as follows: $$\text{Training:} \hspace{2.84544pt} & = f_{\text{de}}()=+ \nonumber \\ \text{Testing:} \hspace{2.84544pt} & = f_\text{de}^{-1}()=^\top (- ) \nonumber $$ (Eq. 13) Therefore, the decoder is also utilised after learning to serve as a linear encoder in addition to the RNN encoder. A general case is to use a bijective function as the decoder, as the bijective functions are naturally invertible. However, the inverse of a bijective function could be hard to find and its calculation could also be computationally intense. A family of bijective transformation was designed in NICE BIBREF17 , and the simplest continuous bijective function $f:^D\rightarrow ^D$ and its inverse $f^{-1}$ is defined as: $$h: \hspace{14.22636pt} _1 &= _1, & _2 &= _2+m(_1) \nonumber \\ h^{-1}: \hspace{14.22636pt} _1 &= _1, & _2 &= _2-m(_1) \nonumber $$ (Eq. 15) where $_1$ is a $d$ -dimensional partition of the input $\in ^D$ , and $m:^d\rightarrow ^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named as an `additive coupling layer' BIBREF17 , which has unit Jacobian determinant. To allow the learning system to explore more powerful transformation, we follow the design of the `affine coupling layer' BIBREF24 : $$h: \hspace{5.69046pt} _1 &= _1, & _2 &= _2 \odot \text{exp}(s(_1)) + t(_1) \nonumber \\ h^{-1}: \hspace{5.69046pt} _1 &= _1, & _2 &= (_2-t(_1)) \odot \text{exp}(-s(_1)) \nonumber $$ (Eq. 16) where $s:^d\rightarrow ^{D-d}$ and $t:^d\rightarrow ^{D-d}$ are both neural networks with linear output units. The requirement of the continuous bijective transformation is that, the dimensionality of the input $$ and the output $$ need to match exactly. In our case, the output $\in ^{d_}$ of the decoding function $f_{\text{de}}$ has lower dimensionality than the input $\in ^{d_}$ does. 
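For reference, the orthonormal-regularised linear decoder described earlier in this section, its use as a second (linear) encoder at test time, and the regularisation update can be sketched as follows; matrix and variable names are ours, and the sketch operates on single vectors rather than batches.

```python
import torch

def decode_train(z, W, b):
    # Training: project the sentence representation into the word-vector
    # space, where it is compared against pretrained word vectors.
    return W @ z + b

def decode_as_encoder(x, W, b):
    # Testing: with row-wise orthonormal W (W W^T = I), the inverse reduces to
    # the transpose, so the decoder doubles as an additional linear encoder.
    return W.t() @ (x - b)

def orthonormal_update(W, beta=0.01):
    # W := (1 + beta) W - beta (W W^T) W, pushing the rows of W towards an
    # orthonormal set during learning (beta = 0.01 in the reported experiments).
    return (1 + beta) * W - beta * (W @ W.t()) @ W
```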
Our solution is to add an orthonormal regularised linear projection before the bijective function to transform the vector representation of a sentence to the desired dimension. The usage of the decoder that is composed of a bijective function and a regularised linear projection during training and testing is defined as: $$\text{Training:} \hspace{2.84544pt} & = f_{\text{de}}() = h(+ ) \nonumber \\ \text{Testing:} \hspace{2.84544pt} & = f_\text{de}^{-1}() = ^\top (h^{-1}() - ) \nonumber $$ (Eq. 17) Using Decoder in the Test Phase As the decoder is easily invertible, it is also used to produce vector representations. The post-processing step BIBREF25 that removes the top principal component is applied on the representations from $f_\text{en}$ and $f^{-1}_\text{de}$ individually. In the following sections, $_\text{en}$ denotes the post-processed representation from $f_\text{en}$ , and $_\text{de}$ from $f^{-1}_\text{de}$ . Since $f_\text{en}$ and $f^{-1}_\text{de}$ naturally process sentences in distinctive ways, it is reasonable to expect that the ensemble of $_\text{en}$ and $_\text{de}$ will outperform each of them. Experimental Design Experiments are conducted in PyTorch BIBREF26 , with evaluation using the SentEval package BIBREF27 with modifications to include the post-processing step. Word vectors $_{w_j}$ are initialised with FastText BIBREF15 , and fixed during learning. Unlabelled Corpora Two unlabelled corpora, including BookCorpus BIBREF28 and UMBC News Corpus BIBREF29 , are used to train models with invertible decoders. These corpora are referred as B, and U in Table 3 and 5 . The UMBC News Corpus is roughly twice as large as the BookCorpus, and the details are shown in Table 1 . Unsupervised Evaluation The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 . The cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset. Supervised Evaluation It includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 . In these tasks, MR, CR, SST, SUBJ, MPQA and MRPC are binary classification tasks, TREC is a multi-class classification task. SICK and MRPC require the same feature engineering method BIBREF44 in order to compose a vector from vector representations of two sentences to indicate the difference between them. Hyperparameter Tuning The hyperparameters are tuned on the averaged scores on STS14 of the model trained on BookCorpus, thus it is marked with a $^\star $ in tables to indicate potential overfitting. The hyperparameter setting for our model is summarised as follows: the batch size $N=512$ , the dimension of sentence vectors $d_=2048$ , the dimension of word vectors $d_{_{w_j}}=300$ , the number of negative samples $K=5$ , and the initial learning rate is $5\times 10^{-4}$ which is kept fixed during learning. The Adam optimiser BIBREF45 with gradient clipping BIBREF46 is applied for stable learning. 
Each model in our experiment is only trained for one epoch on the given training corpus. $\beta $ in the invertible constraint of the linear projection is set to be $0.01$ , and after learning, all 300 eigenvalues are close to 1. For the bijective transformation, in order to make sure that each output unit is influenced by all input units, we stack four affine coupling layers in the bijective transformation BIBREF17 . The non-linear mappings $s$ and $t$ are both neural networks with one hidden layer with the rectified linear activation function. Representation Pooling Various pooling functions are applied to produce vector representations for input sentences. For unsupervised evaluation tasks, as recommended in previous studies BIBREF14 , BIBREF50 , BIBREF51 , a global mean-pooling function is applied on both the output of the RNN encoder $f_\text{en}$ to produce a vector representation $_\text{en}$ and the inverse of the decoder $f_\text{de}^{-1}$ to produce $_\text{de}$ . For supervised evaluation tasks, three pooling functions, including global max-, min-, and mean-pooling, are applied on top of the encoder and the outputs from three pooling functions are concatenated to serve as a vector representation for a given sentence. The same representation pooling strategy is applied on the inverse of the decoder. The reason for applying different representation pooling strategies for two categories of tasks is: (1) cosine similarity of two vector representations is directly calculated in unsupervised evaluation tasks to determine the textual similarity of two sentences, and it suffers from the curse-of-dimensionality BIBREF52 , which leads to more equidistantly distributed representations for higher dimensional vector representations decreasing the difference among similarity scores. (2) given Cover's theorem BIBREF53 and the blessings-of-dimensionality property, it is more likely for the data points to be linearly separable when they are presented in high dimensional space, and in the supervised evaluation tasks, high dimensional vector representations are preferred as a linear classifier will be learnt to evaluate how likely the produced sentence representations are linearly separable; (3) in our case, both the encoder and the inverse of the decoder are capable of producing a vector representation per time step in a given sentence, although during training, only the last one is regarded as the sentence representation for the fast training speed, it is more reasonable to make use of all representations at all time steps with various pooling functions to compute a vector representations to produce high-quality sentence representations that excel the downstream tasks. Discussion It is worth discussing the motivation of the model design and the observations in our experiments. As mentioned as one of the take-away messages BIBREF54 , to demonstrate the effectiveness of the invertible constraint, the comparison of our model with the constraint and its own variants use the same word embeddings from FastText BIBREF15 and have the same dimensionaility of sentence representations during learning, and use the same classifier on top of the produced representations with the same hyperparameter settings. Overall, given the performance of the inverse of each decoder presented in Table 3 and 5 , it is reasonable to state that the inverse of the decoder provides high-quality sentence representations as well as the encoder does. 
However, there is no significant difference between the two decoders in terms of the performance on the downstream tasks. In this section, observations and thoughts are presented based on the analyses of our model with the invertible constraint. Effect of Invertible Constraint The motivation of enforcing the invertible constraint on the decoder during learning is to make it usable and potentially helpful during testing in terms of boosting the performance of the lone RNN encoder in the encoder-decoder models (instead of ignoring the decoder part after learning). Therefore, it is important to check the necessity of the invertible constraint on the decoders. A model with the same hyperparameter settings but without the invertible constraint is trained as the baseline model, and macro-averaged results that summarise the same type of tasks are presented in Table 2 . As noted in the prior work BIBREF55 , there exists significant inconsistency between the group of unsupervised tasks and the group of supervised ones, it is possible for a model to excel on one group of tasks but fail on the other one. As presented in our table, the inverse of the decoder tends to perform better than the encoder on unsupervised tasks, and the situation reverses when it comes to the supervised ones. In our model, the invertible constraint helps the RNN encoder $f_\text{en}$ to perform better on the unsupervised evaluation tasks, and helps the inverse of the decoder $f_\text{de}^{-1}$ to provide better results on single sentence classification tasks. An interesting observation is that, by enforcing the invertible constraint, the model learns to sacrifice the performance of $f_\text{de}^{-1}$ and improve the performance of $f_\text{en}$ on unsupervised tasks to mitigate the gap between the two encoding functions, which leads to more aligned vector representations between $f_\text{en}$ and $f_\text{de}^{-1}$ . Effect on Ensemble Although encouraging the invertible constraint leads to slightly poorer performance of $f_\text{de}^{-1}$ on unsupervised tasks, it generally leads to better sentence representations when the ensemble of the encoder $f_\text{en}$ and the inverse of the decoder $f_\text{de}^{-1}$ is considered. Specifically, for unsupervised tasks, the ensemble is an average of two vector representations produced from two encoding functions during the testing time, and for supervised tasks, the concatenation of two representations is regarded as the representation of a given sentence. The ensemble method is recommended in prior work BIBREF14 , BIBREF16 , BIBREF51 , BIBREF56 , BIBREF4 , BIBREF54 . As presented in Table 2 , on unsupervised evaluation tasks (STS12-16 and SICK14), the ensemble of two encoding functions is averaging, which benefits from aligning representations from $f_\text{en}$ and $f_\text{de}^{-1}$ by enforcing the invertible constraint. While in the learning system without the invertible constraint, the ensemble of two encoding functions provides worse performance than $f_\text{de}^{-1}$ . On supervised evaluation tasks, as the ensemble method is concatenation and a linear model is applied on top of the concatenated representations, as long as the two encoding functions process sentences distinctively, the linear classifier is capable of picking relevant feature dimensions from both encoding functions to make good predictions, thus there is no significant difference between our model with and without invertible constraint. 
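Throughout these comparisons, the representations from both encoding functions are first post-processed by removing the top principal component. A minimal numpy sketch of that step is given below; whether the representations are mean-centred before estimating the component is a detail that varies between formulations and is omitted here.

```python
import numpy as np

def remove_top_pc(X):
    """Remove each row's projection onto the dominant direction of X.

    X : (n_sentences, d) matrix of sentence representations.
    """
    # Top right-singular vector of X = the dominant direction of the set.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    u = Vt[0]                          # (d,)
    return X - np.outer(X @ u, u)      # subtract the component along u
```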
Effect of Learning Recent research BIBREF54 showed that the improvement on the supervised evaluation tasks led by learning from labelled or unlabelled corpora is rather insignificant compared to random initialised projections on top of pretrained word vectors. Another interesting direction of research that utilises probabilistic random walk models on the unit sphere BIBREF57 , BIBREF25 , BIBREF58 derived several simple yet effective post-processing methods that operate on pretrained word vectors and are able to boost the performance of the averaged word vectors as the sentence representation on unsupervised tasks. While these papers reveal interesting aspects of the downstream tasks and question the need for optimising a learning objective, our results show that learning on unlabelled corpora helps. On unsupervised evaluation tasks, in order to show that learning from an unlabelled corpus helps, the performance of our learnt representations should be directly compared with the pretrained word vectors, FastText in our system, at the same dimensionality with the same post-processing BIBREF25 . The word vectors are scattered in the 300-dimensional space, and our model has a decoder that is learnt to project a sentence representation $\in ^{d_}$ to $=f_\text{de}(;\phi )\in ^{300}$ . The results of our learnt representations and averaged word vectors with the same postprocessing are presented in Table 4 . As shown in the Table 4 , the performance of our learnt system is better than FastText at the same dimensionality. It is worth mentioning that, in our system, the final representation is an average of postprocessed word vectors and the learnt representations $$ , and the invertible constraint guarantees that the ensemble of both gives better performance. Otherwise, as discussed in the previous section, an ensemble of postprocessed word vectors and some random encoders won't necessarily lead to stronger results. Table 3 also provides evidence for the effectiveness of learning on the unsupervised evaluation tasks. On supervised evaluation tasks, we agree that higher dimensional vector representations give better results on the downstream tasks. Compared to random projections with $4096\times 6$ output dimensions, learning from unlabelled corpora leverages the distributional similarity BIBREF1 at the sentence-level into the learnt representations and potentially helps capture the meaning of a sentence. In our system, the raw representations are in 2400-dimensional space, and the use of various pooling functions expands it to $2048\times 6$ dimensions, which is half as large as the random projection dimension and still yields better performance. Both our models and random projections with no training are presented in Table 5 . The evidence from both sets of downstream tasks support our argument that learning from unlabelled corpora helps the representations capture meaning of sentences. However, current ways of incorporating the distributional hypothesis only utilise it as a weak and noisy supervision, which might limit the quality of the learnt sentence representations. Conclusion Two types of decoders, including an orthonormal regularised linear projection and a bijective transformation, whose inverses can be derived effortlessly, are presented in order to utilise the decoder as another encoder in the testing phase. 
The experiments and comparisons are conducted on two large unlabelled corpora, and the performance on the downstream tasks shows the high usability and generalisation ability of the decoders in testing. Analyses show that the invertible constraint enforced on the decoder encourages each one to learn from the other one during learning, and provides improved encoding functions after learning. Ensemble of the encoder and the inverse of the decoder gives even better performance when the invertible constraint is applied on the decoder side. Furthermore, by comparing with prior work, we argue that learning from unlabelled corpora indeed helps to improve the sentence representations, although the current way of utilising corpora might not be optimal. We view this as unifying the generative and discriminative objectives for unsupervised sentence representation learning, as it is trained with a generative objective which when inverted can be seen as creating a discriminative target. Our proposed method in our implementation doesn't provide extremely good performance on the downstream tasks, but we see our method as an opportunity to fuse all possible components in a model, even a usually discarded decoder, to produce sentence representations. Future work could potentially expand our work into end-to-end invertible model that is able to produce high-quality representations by omnidirectional computations. Acknowledgements Many Thanks to Andrew Ying for helpful clarifications on several concepts.
a linear projection and a bijective function with continuous transformation through the 'affine coupling layer' of Dinh et al. (2016).
515e10a71d78ccd9c7dc93cd942924a4c85d3a30
515e10a71d78ccd9c7dc93cd942924a4c85d3a30_0
Q: How is language modelling evaluated? Text: Introduction Swiss German (“Schwyzerdütsch” or “Schwiizertüütsch”, abbreviated “GSW”) is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population BIBREF0. Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large as orthography is more based on local pronunciations and emerging conventions than on a unique grammar. Even though Swiss German is widely spread in Switzerland, there are still few natural language processing (NLP) corpora, studies or tools available BIBREF1. This lack of resources may be explained by the small pool of speakers (less than one percent of the world population), but also the many intrinsic difficulties of Swiss German, including the lack of official writing rules, the high variability across different dialects, and the informal context in which texts are commonly written. Furthermore, there is no official top-level domain (TLD) for Swiss German on the Internet, which renders the automatic collection of Swiss German texts more difficult. To automate the treatment of Swiss German and foster its adoption in online services such as automatic speech recognition (ASR), we gathered the largest corpus of written Swiss German to date by crawling the web using a customized tool. We highlight the difficulties for finding Swiss German on the web and demonstrate in an experimental evaluation how our text corpus can be used to significantly improve an important NLP task that is a fundamental part of the ASR process: language modeling. Related Work Few GSW corpora already exists. Although they are very valuable for research on specific aspects of the Swiss German language, they are either highly specialized BIBREF2 BIBREF3 BIBREF4, rather small BIBREF1 (7,305 sentences), or do not offer full sentences BIBREF5. To our knowledge, the only comprehensive written Swiss German corpus to date comes from the Leipzig corpora collection initiative BIBREF6 offering corpora for more than 136 languages. The Swiss German data has two sources: the Alemannic Wikipedia and web crawls on the .ch domain in 2016 and 2017, leading to a total of 175,399 unique sentences. While the Leipzig Web corpus for Swiss German is of considerable size, we believe this number does not reflect the actual amount of GSW available on the Internet. Furthermore, the enforced sentence structures do not represent the way Swiss German speakers write online. In this paper, we thus aim at augmenting the Leipzig Web corpus by looking further than the .ch domain and by using a suite of tools specifically designed for retrieving Swiss German. The idea of using the web as a vast source of linguistic data has been around for decades BIBREF7 and many authors have already addressed its importance for low-resources languages BIBREF8. A common technique is to send queries made of mid-frequency $n$-grams to a search engine to gather bootstrap URLs, which initiate a crawl using a breadth-first strategy in order to gather meaningful information, such as documents or words BIBREF9, BIBREF5. Existing tools and studies, however, have requirements that are inadequate for the case of Swiss German. 
For example, GSW is not a language known to search engines BIBREF9, does not have specific TLDs BIBREF10, and lacks good language identification models. Also, GSW documents are too rare to use bootstrapping techniques BIBREF8. Finally, as GSW is scarce and mostly found in comments sections or as part of multilingual web pages (e.g. High German), we cannot afford to “privilege precision over recall” BIBREF11 by focusing on the main content of a page. As a consequence, our method is based on known techniques that are adapted to deal with those peculiarities. Furthermore, it was designed for having a human in the loop. Its iterative nature makes it possible to refine each step of the tool chain as our knowledge of GSW improves. Proposed System The two main components of our proposed system are shown in Figure FIGREF1: a seeder that gathers potentially interesting URLs using a Search Engine and a crawler that extracts GSW from web pages, linked together by a MongoDB database. The system is implemented in Python 3, with the full code available on GitHub. Due to the exploratory nature of the task, the tool chain is executed in an iterative manner, allowing us to control and potentially improve the process punctually. Proposed System ::: Language Identification Language identification (LID) is a central component of the pipeline, as it has a strong influence on the final result. In addition, readily available tools are not performing at a satisfying level. For these reasons we created a tailor-made LID system for this situation. LID has been extensively studied over the past decades BIBREF12 and has achieved impressive results on long monolingual documents in major languages such as English. However, the task becomes more challenging when the pool of training data is small and of high variability, and when the unit of identification is only a sentence. Free pretrained LIDs supporting GSW such as FastText BIBREF13 are trained on the Alemannic Wikipedia, which encompasses not only GSW, but also German dialects such as Badisch, Elsässisch, Schwäbisch and Vorarlbergisch. This makes the precision of the model insufficient for our purposes. The dataset used to build our Swiss German LID is based on the Leipzig text corpora BIBREF6, mostly focusing on the texts gathered from the Internet. In preliminary experiments, we have chosen eight language classes shown in Table TABREF4, which give precedence to languages closely related to Swiss German in their structure. In this Table, GSW_LIKE refers to a combination of dialects that are similar to Swiss German but for which we did not have sufficient resources to model classes on their own. A total of 535,000 sentences are considered for LID with an equal distribution over the eight classes. The 66,684 GSW sentences originate from the Leipzig web corpus 2017 and have been refined during preliminary experiments to exclude obvious non-GSW contents. We use 75% of the data for training, 10% for optimizing system parameters, and 15% for testing the final performance. Using a pretrained German BERT model BIBREF14 and fine-tuning it on our corpus, we obtain a high LID accuracy of 99.58%. GSW is most confused with German (0.04%) and GSW_LIKE (0.04%). We have also validated the LID system on SMS sentences BIBREF2, where it proves robust for sentences as short as five words. Proposed System ::: The Seeder Query generation has already been extensively studied BIBREF15, BIBREF9. 
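Before turning to query generation, the sentence-level use of the fine-tuned LID model described above could look roughly like the sketch below (using the transformers library; the checkpoint path and the index of the GSW class are placeholders that depend on the actual fine-tuning setup).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholders: a local checkpoint of the German BERT fine-tuned on the eight
# LID classes, and the index of the GSW class in its label mapping.
MODEL_DIR = "path/to/gsw-lid-bert"
GSW_INDEX = 0

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def gsw_probability(sentence: str) -> float:
    """Probability that a single sentence is Swiss German."""
    inputs = tokenizer(sentence, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, GSW_INDEX].item()

# Seeds or sentences whose GSW probability falls below 95% can then be
# filtered out, as described for the seeder below.
```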
In the case of Swiss German, we tested three different approaches: (a) most frequent trigrams, (b) selection of 2 to 7 random words weighted by their frequency distribution and (c) human-generated queries. When comparing the corpora generated by 100 seeds of each type, we did not observe significant differences in terms of quantity or quality for the three seeding strategies. On a positive side, $50\%$ of the sentences were different from one seed strategy to the other, suggesting for an approach where strategies are mixed. However, we also observed that (a) tends to yield more similar queries over time and (c) is too time-consuming for practical use. Considering these observations, we privileged the following approach: Start with a list of sentences, either from a bootstrap dataset or from sentences from previous crawls using one single sentence per unique URL; Compute the frequency over the vocabulary, normalizing words to lower case and discarding those having non-alphabetic characters; Filter out words appearing only once or present in German or English vocabularies; Generate query seeds by sampling 3 words with a probability following their frequency distribution; Exclude seeds with more than two single-letter words or having a GSW probability below 95% (see Section SECREF3). Initial sentences come from the Leipzig web corpus 2017, filtered by means of the LID described in Section SECREF3 Each seed is submitted to startpage.com, a Google Search proxy augmented with privacy features. To ensure GSW is not auto-corrected to High German, each word is first surrounded by double quotes. The first 20 new URLs, i.e. URLs that were never seen before, are saved for further crawling. Proposed System ::: The Crawler The crawler starts with a list of URLs and metadata taken either from a file or from the MongoDB instance, and are added to a task queue with a depth of 0. As illustrated in Figure FIGREF1, each task consists of a series of steps that will download the page content, extract well-formed GSW sentences and add links found on the page to the task queue. At different stages of this pipeline, a decider can intervene in order to stop the processing early. A crawl may also be limited to a given depth, usually set to 3. Proposed System ::: The Crawler ::: Scrape The raw HTML content is fetched and converted to UTF-8 using a mixture of requests and BeautifulSoup. Boilerplate removal such as navigation and tables uses jusText BIBREF16, but ignores stop words filtering as such a list is not available for GSW. The output is a UTF-8 text containing newlines. Proposed System ::: The Crawler ::: Normalize This stage tries to fix remaining encoding issues using ftfy BIBREF17 and to remove unicode emojis. Another important task is to normalize the unicode code points used for accents, spaces, dashes, quotes etc., and strip any invisible characters. To further improve the usability of the corpus and to simplify tokenization, we also try to enforce one single convention for spaces around quotes and colons, e.g. colons after closing quote, no space inside quotes. Proposed System ::: The Crawler ::: Split To split text into sentences, we implemented Moses' split-sentences.perl in Python and changed it in three main ways: existing newlines are preserved, colons and semi-colons are considered segmentation hints and sentences are not required to start with an uppercase. The latter is especially important as GSW is mostly found in comments where people tend to write fast and without proper casing/punctuation. 
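As a rough illustration of these splitting rules (a simplified sketch rather than the authors' actual port of split-sentences.perl, and deliberately ignoring the non-breaking prefixes discussed next), the segmentation could look as follows in Python:

```python
import re

def split_sentences(text):
    """Simplified segmentation: keep existing newlines as hard boundaries,
    treat colons and semicolons as segmentation hints, and do not require a
    sentence to start with an uppercase letter."""
    sentences = []
    for line in text.splitlines():        # existing newlines are preserved
        line = line.strip()
        if not line:
            continue
        # split after ., !, ?, : or ; when followed by whitespace
        for part in re.split(r"(?<=[.!?:;])\s+", line):
            part = part.strip()
            if part:
                sentences.append(part)
    return sentences
```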
The list of non-breaking prefixes used is a concatenation of the English and German prefixes found in Moses with few additions. Proposed System ::: The Crawler ::: Filter Non- or bad- sentences are identified based on a list of $20+$ rules that normal sentences should obey. Most rules are specified in the form of regular expression patterns and boundaries of acceptable occurrences, few compare the ratio of occurrence between two patterns. Examples of such rules in natural language are: “no more than one hashtag”, “no word with more than 30 characters”, “the ratio capitalized/lowercase words is below 1.5”. Proposed System ::: The Crawler ::: Language ID Using the LID described in Section SECREF3, sentences with a GSW probability of less than 92% are discarded. This threshold is low on purpose in order to favor recall over precision. Proposed System ::: The Crawler ::: Link filter This component is used to exclude or transform outgoing links found in a page based on duplicates, URL composition, but also specific rules for big social media sites or known blogs. Examples are the exclusion of unrelated national TLDs (.af, .nl, ...) and known media extensions (.pdf, .jpeg, etc.), the stripping of session IDs in URL parameters, and the homogenization of subdomains for sites such as Twitter. Note that filtering is based only on the URL and therefore does not handle redirects or URLs pointing to the same page. This leads to extra work during the crawling, but keeps the whole system simple. Proposed System ::: The Crawler ::: Decide A decider has three main decisions to take. First, based on the metadata associated with an URL, should it be visited? In practice, we visit only new URLs, but the tool is designed in a way such that a recrawl is possible if the page is detected as highly dynamic. The second decision arises at the end of the processing, where the page can be either saved or blacklisted. To favor recall, we currently keep any URL with at least one GSW sentence. Finally, the decider can choose to visit the outgoing links or not. After some trials, we found that following links from pages with more than two new GSW sentences is a reasonable choice, as pages with less sentences are often quotes or false positives. Proposed System ::: The Crawler ::: Duplicates During the crawl, the uniqueness of sentences and URLs considers only exact matches. However, when exporting the results, near-duplicate sentences are removed by first stripping any non-letter (including spaces) and making a lowercase comparison. We tried other near-duplicate approaches, but found that they also discarded meaningful writing variations. State of the Swiss German Web Table TABREF14 shows the results of running the system three times using 100 seeds on a virtual machine with 5 CPU cores and no GPUs. As expected, the first iteration yields the most new sentences. Unfortunately, the number of newly discovered hosts and sentences decreases exponentially as the system runs, dropping to 20K sentences on the third iteration. This result emphasizes the fact that the amount of GSW on the web is very limited. The third iteration took also significantly longer, which highlights the difficulties of crawling the web. In this iteration, some URLs had as much as 12 thousand outgoing links that we had to visit before discarding. Another problem arises on web sites where query parameters are used in URLs to encode cookie information and on which duplicate hypotheses cannot be solved unless visiting the links. 
On each new search engine query, we go further down the list of results as the top ones may already be known. As such, the percentage of pertinent URLs retrieved (% good, see decider description in Section SECREF13) slowly decreases at each iteration. It is however still above 55% of the retrieved URLs on the third run, indicating a good quality of the seeds. The SwissCrawl Text Corpus Using the proposed system, we were able to gather more than half a million unique GSW sentences from around the web. The crawling took place between September and November 2019. The corpus is available for download in the form of a CSV file with four columns: text, url, crawl_proba, date, with crawl_proba being the GSW probability returned by the LID system (see Section SECREF3). The SwissCrawl Text Corpus ::: Contents The corpus is composed of 562,524 sentences from 62K URLs among 3,472 domains. The top ten domains (see Table TABREF18) are forums and social media sites. They account for 46% of the whole corpus. In general, we consider a GSW probability of $\ge {99}\%$, to be indeed Swiss German with high confidence. This represents more than 89% of the corpus (500K) (see Figure FIGREF19). The sentence length varies between 25 and 998 characters with a mean of $92\pm 55$ and a median of 77 (see Figure FIGREF20), while the number of words lies between 4 and 222, with a mean of $16\pm 10$ and a median of 14. This highlights a common pattern in Swiss German writings: used mostly in informal contexts, sentences tend to be short and to include many symbols, such as emojis or repetitive punctuation. Very long sentences are usually lyrics that lack proper punctuation and thus could not be segmented properly. We however decided to keep them in the final corpus, as they could be useful in specific tasks and are easy to filter out otherwise. Besides the normalization described in SECREF13, no cleaning nor post-processing is applied to the sentences. This is a deliberate choice to avoid losing any information that could be pertinent for a given task or for further selection. As a result, the mean letter density is 80% and only 61% of sentences both start with an uppercase letter and end with a common punctuation mark (.!?). Finally, although we performed no human validation per se, we actively monitored the crawling process to spot problematic domains early. This allowed to blacklist some domains entirely, for example those serving embedded PDFs (impossible to parse properly) or written in very close German dialects. The SwissCrawl Text Corpus ::: Discussion Table TABREF23 shows some hand-picked examples. As most of our sources are social medias and forums, the writing style is often colloquial, interspersed with emojis and slang. This perfectly reflects the use of GSW in real life, where speakers switch to High German in formal conversations. In general, the quality of sentences is good, with few false positives mostly in High German or German dialects, rarer still in Dutch or Luxembourgian. The presence of specific structures in the sentences are often the cause of such mistakes, as they yield strong GSW cues. For example: High German with spelling mistakes or broken words; GSW named entities (“Ueli Aeschbacher”, “Züri”); The presence of many umlauts and/or short words; The repetition of letters, also used to convey emotions. The quality of the corpus highly depends on the text extraction step, which itself depends on the HTML structure of the pages. 
As there are no enforced standards and each website has its own needs, it is impossible to handle all edge cases. For example, some sites use hidden <span> elements to hold information, which become part of the extracted sentences. This is true for watson.ch and was dealt with using a specific rule, but there are still instances we did not detect. Splitting text into sentences is not a trivial task. Typical segmentation mistakes come from the use of ASCII emojis as punctuation marks (see text sample 3 in Table TABREF23), which are very common in forums. They are hard to detect due to the variability of each individual style. We defined duplicates as having the exact same letters. As such, some sentences may differ by one umlaut and some may be the truncation of others (e.g. excerpts with ellipsis). Finally, the corpus also contains poems and lyrics. Sometimes repetitive and especially hard to segment, they are still an important source of Swiss German online. In any case, they may be filtered out using cues in the sentence length and the URLs. Swiss German Language Modeling To demonstrate the effectiveness of the SwissCrawl corpus, we conducted a series of experiments for the NLP task of language modeling. The whole code is publicly available on GitHub. Using the GPT-2 BIBREF18 model in its base configuration (12 layers, 786 hidden states, 12 heads, 117M parameters), we trained three models using different training data: Leipzig unique sentences from the Leipzig GSW web; SwissCrawl sentences with a GSW probability $\ge {99}\%$ (see Section SECREF17); Both the union of 1) and 2). For each model, the vocabulary is generated using Byte Pair Encoding (BPE) BIBREF19 applied on the training set. The independent test sets are composed of 20K samples from each source. Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs better on the test set they have been trained on. When applied to a different test set, both see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and raises by twenty on the combined test set. The best results are achieved by combining both corpora: while the perplexity on our corpus only marginally improves (from $49.5$ to $45.9$), the perplexity on the Leipzig corpus improves significantly (from $47.6$ to $30.5$, a 36% relative improvement). Conclusion In this paper, we presented the tools developed to gather the most comprehensive collection of written Swiss German to our knowledge. It represents Swiss German in the way it is actually used in informal contexts, both with respect to the form (punctuation, capitalization, ...) and the content (slang, elliptic sentences, ...). We have demonstrated how this new resource can significantly improve Swiss German language modeling. We expect that other NLP tasks, such as LID and eventually machine translation, will also be able to profit from this new resource in the future. Our experiments support the reasoning that Swiss German is still scarce and very hard to find online. Still, the Internet is in constant evolution and we aim to keep increasing the corpus size by rerunning the tool chain at regular intervals. Another line of future development is the customization of the tools for big social media platforms such as Facebook and Twitter, where most of the content is only accessible through specific APIs.
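For reference, perplexity here is the exponential of the model's average per-token cross-entropy on held-out text. A minimal sketch of such an evaluation, assuming the Hugging Face transformers library and a hypothetical fine-tuned GPT-2 checkpoint path (the publicly released code mentioned above is the authoritative implementation), is shown below:

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hypothetical path; the actual checkpoints come from training on Leipzig,
# SwissCrawl, or their union as described above.
model = GPT2LMHeadModel.from_pretrained("path/to/gsw-gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("path/to/gsw-gpt2")
model.eval()

def corpus_perplexity(sentences):
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for sentence in sentences:
            ids = tokenizer(sentence, return_tensors="pt").input_ids
            if ids.size(1) < 2:
                continue
            # The model returns the mean cross-entropy over predicted tokens.
            loss = model(ids, labels=ids).loss
            n = ids.size(1) - 1          # number of predicted tokens
            total_nll += loss.item() * n
            total_tokens += n
    return math.exp(total_nll / total_tokens)

print(corpus_perplexity(["Das isch nume es Bispil.", "Hoi zäme, wie gahts?"]))
```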
perplexity of the models
fbabde18ebec5852e3d46b1f8ce0afb42350ce62
fbabde18ebec5852e3d46b1f8ce0afb42350ce62_0
Q: Why there is only user study to evaluate the model? Text: Introduction In recent years, Twitter, a social media platform with hundreds of millions of users, has become a major source of news for people BIBREF0 . This is especially true for breaking-news about real-world events BIBREF1 . The 2011 Japanese earthquake, the 2013 Boston marathon bombings, and the 2015 Paris shootings are just three examples of an event where Twitter played major role in the dissemination of information. However, given the great volume of tweets generated during these events, it becomes extremely difficult to make sense of all the information that is being shared. In this paper, we present a semi-automatic tool that combines state-of-the-art natural language processing and clustering algorithms in a novel way, enabling users to efficiently and accurately identify and track stories that spread on Twitter about particular events. The output of our system can also be used by rumor verification systems to substantiate the veracity of rumors on Twitter BIBREF2 . A lot of the messages that are posted on Twitter about events are not directly related to any stories about the event itself. For instance, a tweet talking about how scared someone is about an event, does not contain useful information about the event itself. Tweets that are related to a story usually contain assertions. Therefore, our method focused on first identifying assertions about events. These assertions could be anything from eye-witness testimony, to false rumors, or reports from the media or law enforcement agencies. What is an Assertion? An assertion is an utterance that commits the speaker to the truth of the expressed proposition. For example, the tweet, "there is a third bomber on the roof" contains an assertion, while the tweet, "I hate reporters!" does not (it contains an expression). It has been shown than more than half of tweets about events do not contain assertions BIBREF3 . Thus, by filtering all non-assertions tweets, we can drastically reduce the number of tweets that need to be analysed for story detection. System Overview An overview of the system can be seen in Figure 1 . In the figure, the modules that are fully automatic are shown in blue while the modules requiring manual input are shown in green. Currently, the system only works on Twitter, though we plan to expand it to cover other publicly available social media platforms, such as Reddit. The first module in the system is a standard boolean query search, specified by the user. The purpose of this query is to limit the scope of the data that is being analysed to one event. This query can be about anything but works best if it is about a well-defined event. For example, in this figure, the query is Boston AND Bombing, which picks out tweets about the 2013 Boston marathon bombings. These tweets are next passed to the “automatic” parts of the system, an Assertion Detector module and a Hierarchical Clustering module. Raw tweets about an event feed directly into the assertion detector, which automatically filters the tweets for only those containing assertions (tweets not containing assertions are discarded at this stage). These tweets are then clustered in a hierarchical manner, based on the their semantic similarity. In theory, these clusters should mostly contain tweets that are making similar assertions. The hierarchical clusters (and their contents, including the text and the meta-data of the tweets they contain) are passed to the user-facing, interactive exploration tool. 
The exploration tool can be used to identify, investigate, and track stories, that are spreading about an event on Twitter. Detecting Assertions in Tweets Assertions are a class of speech-acts. In order to detect assertions in tweets, a speech-act classifier is needed. We manually annotated $7,000$ tweets about several different real-world events. We labelled these tweets as either containing assertions or not. Of the $7,000$ tweets, $3,290$ ( $47\%$ ) of those tweets containing assertions and the rest containing one of the other speech-acts. These tweets were used to train a state-of-the-art supervised Twitter speech-act classifier, developed by Vosoughi et al. BIBREF3 . Since, we were interested in only detecting assertions, we turned the speech-act classifier to a binary assertion classifier (by collapsing all the non-assertion classes into one class). We evaluated the classifier using 20-fold cross validation, with the F-score for classifying assertions being $.86$ . The performance of this classifier is better illustrated by its ROC curve in Figure 2 . Hierarchical Clustering of Tweets The next part in the automatic processing pipeline is the hierarchical clustering of semantically similar tweets, in order to group together tweets making similar assertions. The output of hierarchical clustering can best be described as a dendrogram.At the lowest level of the dendrogram, all tweets belong to their own clusters. At the very root of the tree is a single cluster, containing all the tweets. Users can explore the clusters at any level. A partition lower in the tree (further from the root) would yield more clusters, with each cluster containing fewer number of tweets. Conversely, a partition higher in the tree would yield less clusters, with each containing greater number of tweets. Depending on the event, there could be thousands of clusters at different levels. It will be up to the users to decide how to best cut-off and explore the clusters. For example, if the event in question is a very local event, meaning that there are not many tweets about the event, then perhaps a partition higher in the tree would be more useful and vice-versa. Hierarchical clustering of text documents is a well-studied problem. However, as was the case with speech-act classification, the noisy, unconventional and most importantly short nature of the language used on Twitter, greatly reduce the performance of conventional hierarchical document clustering methods. Thus, we developed a novel hierarchical clustering method for Twitter, using very recent advances in Twitter natural language processing techniques. In the next sections, we will describe a conventional hierarchical clustering method, followed by our novel method. Both methods were implemented so that the performance of our novel method could be benchmarked. Conventional Method Generally speaking, there are two strategies for hierarchical clustering: Agglomerative: This is a "bottom up" approach; each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. Divisive: This is a "top down" approach; all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy. The complexity of agglomerative clustering is polynomial at $O(n^{3})$ , while the complexity of divisive clustering is exponential at $O(2^{n})$ . Given the potentially large number tweets about an event, we decided to use Hierarchical Agglomerative Clustering (HAC), given its lower complexity. 
To do any sort of clustering of documents (such as tweets), a similarity function is needed to measure the similarity between different documents and decide which clusters to merge or divide. A standard metric used to measure similarity between text documents is TF-IDF combined with cosine similarity. TF-IDF is a method of converting text into numbers so that it can be represented meaningfully by a vector. TF-IDF is the product of two statistics, TF or Term Frequency and IDF or Inverse Document Frequency. Using TF-IDF, a vector for each document is derived. The set of documents in our collection is then viewed as a set of vectors in a vector space, with each term having its own axis. Similarity between two documents is measured using cosine similarity. With this similarity function, we can hierarchically cluster tweets using HAC. Novel Method TF-IDF combined with cosine similarity is usually a good method of measuring similarity between documents. However, tweets are short (140 characters), irregular texts whose topic-level information can hardly be expressed by TF-IDF representations. An alternative is to use a similarity metric that is adapted to this platform. We implemented the Twitter paraphrase identification method proposed recently by Asli Eyecioglu and Bill Keller BIBREF4 (winners of SemEval-2015 in this category) to measure similarity between pairs of tweets. This method is for identifying Twitter paraphrase pairs, where paraphrase identification is defined as "the task of deciding whether two given text fragments have the same meaning". The method takes a pair of tweets and makes a binary judgement on whether these two tweets are paraphrases. For example, the tweets "Amber alert gave me a damn heart attack" and "That Amber alert scared the crap out of me" are a paraphrase pair, while the tweets "My phone is annoying me with these amber alert" and "Am I the only one who dont get Amber alert" are not a paraphrase pair. We used a dataset called the Twitter Paraphrase Corpus (TPC) BIBREF5 for training and testing our model. The dataset contains 18K tweet pairs for training plus 1K test pairs, with $35\%$ of the pairs being paraphrases and $65\%$ non-paraphrases. We trained a linear SVM classifier using the features proposed in that paper. These features are based on the overlap of word-level and character-level n-grams. To begin, the text in each tweet is cleaned and represented as a set of tokens, where a token can be a character or word unigram or bigram. The overlap features are then created using set operations: the size of the union of the tokens, the size of the intersection of the tokens, and the sizes of the token sets themselves. Of all the combinations of overlap features, the following six were shown to be the most informative: union of word unigrams, union of character bigrams, intersection of word unigrams, intersection of character bigrams, and the sizes of tweet 1 and tweet 2. The linear SVM trained on these features achieved an F-score of $.67$. Besides its good performance, this method is well suited to our use case since both feature extraction and classification are extremely fast. Given that the number of tweets about a particular event can sometimes be in the millions, this is extremely important. All possible pairs of tweets that make it past our assertion detector (which is $\binom{N}{2}$ pairs, N being the number of tweets containing assertions) are passed through this binary classifier, to be classified as paraphrases or not. 
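A compact sketch of these overlap features and the classifier (a simplified re-implementation for illustration, assuming scikit-learn; the exact token cleaning and training setup follow BIBREF4 and may differ in detail) is given below:

```python
from sklearn.svm import LinearSVC

def token_sets(text):
    """Word unigram set and character bigram set of a lowercased tweet."""
    words = text.lower().split()
    word_unigrams = set(words)
    chars = text.lower().replace(" ", "")
    char_bigrams = {chars[i:i + 2] for i in range(len(chars) - 1)}
    return word_unigrams, char_bigrams

def overlap_features(t1, t2):
    w1, c1 = token_sets(t1)
    w2, c2 = token_sets(t2)
    return [
        len(w1 | w2),   # union of word unigrams
        len(c1 | c2),   # union of character bigrams
        len(w1 & w2),   # intersection of word unigrams
        len(c1 & c2),   # intersection of character bigrams
        len(w1),        # size of tweet 1
        len(w2),        # size of tweet 2
    ]

def train_paraphrase_classifier(pairs, labels):
    """pairs: list of (tweet1, tweet2); labels: 1 = paraphrase, 0 = not."""
    X = [overlap_features(a, b) for a, b in pairs]
    clf = LinearSVC()
    clf.fit(X, labels)
    return clf
```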
The results are used to create an undirected graph, with each of the $N$ tweets being represented as a node, and edges between nodes representing paraphrase pairs. This graph is used to construct hierarchical clusters of tweets. Given this undirected graph of tweets, we can use efficient community detection methods, to detect communities, or "clusters" of tweets with similar meaning assertions. We used a method called the, Louvain BIBREF6 for this purpose. The Louvain method is a simple and efficient method for community detection in very large networks. It uses a greedy optimization method that runs in time $O(n\log {}n)$ , outperforming other community detection methods in terms of computation time, while performing on par, if not better, than other methods when it comes to the accuracy and quality of the extracted communities BIBREF7 . It is however the speed of this method which is its main advantage, as it takes only two minutes to analyze a typical network of 2 million nodes. This is very important for applications requiring real-time clustering, such as ours. Also, crucial for our task, the Louvain method generates hierarchical structures. The idea behind the method is the greedy optimization of the modularity of the graph. Modularity is defined as a value between $-1$ and 1 that measures the density of links inside communities compared to the links between communities. The method consists of two steps. First, the method looks for "small" communities by optimizing modularity locally. Second, it aggregates nodes belonging to the same community and builds a new network whose nodes are the communities. These steps are repeated iteratively until a maximum of modularity is attained and a hierarchy of communities is produced BIBREF6 . These hierarchical communities are analogous to the hierarchical clusters generated by HAC, in that these communities contain similar assertions. Conclusions In this paper, we presented a semi-automatic tool that can be used to efficiently identify stories about real-world events on Twitter. This is an important problem since Twitter and other social media platforms have become one of the main sources of news for many people. Given a user-specified query about an event, our tool automatically detects and clusters assertions about that event on Twitter. The system uses a Twitter speech-act classifier, in conjunction with a novel hierarchical clustering method for tweets. Instead of relying on traditional hierarchical methods which perform poorly on tweets, our method works by first creating a similarity graph of tweets (using recent advances in Twitter NLP tools) and then applying a very fast community detection algorithm on the graph. The system is not only faster, but it also provides higher quality clusters (less noisy and more coherent), making it easier for users to quickly sort through thousands of tweets.
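For completeness, the graph construction and community detection step described above can be sketched as follows, assuming a recent version of networkx (which ships a Louvain implementation; it also exposes louvain_partitions for the intermediate, hierarchical levels):

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def cluster_assertions(tweets, is_paraphrase):
    """Build the paraphrase graph and extract clusters of similar assertions.

    tweets: list of tweet texts that passed the assertion detector.
    is_paraphrase: function (tweet_a, tweet_b) -> bool, e.g. the SVM above.
    """
    g = nx.Graph()
    g.add_nodes_from(range(len(tweets)))
    for i in range(len(tweets)):
        for j in range(i + 1, len(tweets)):
            if is_paraphrase(tweets[i], tweets[j]):
                g.add_edge(i, j)
    # One level of the hierarchy; lower the resolution for coarser clusters.
    communities = louvain_communities(g, resolution=1.0, seed=0)
    return [[tweets[i] for i in community] for community in communities]
```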
Unanswerable
52e8c9ed66ace1780e41815260af1309064d20de
52e8c9ed66ace1780e41815260af1309064d20de_0
Q: What datasets are used to evaluate the model? Text: Introduction While machine learning methods conventionally model functions given sample inputs and outputs, a subset of statistical relational learning(SRL) BIBREF0 , BIBREF1 approaches specifically aim to model “things” (entities) and relations between them. These methods usually model human knowledge which is structured in the form of multi-relational Knowledge Graphs(KG). KGs allow semantically rich queries in search engines, natural language processing (NLP) and question answering. However, they usually miss a substantial portion of true relations, i.e. they are incomplete. Therefore, the prediction of missing links/relations in KGs is a crucial challenge for SRL approaches. A KG usually consists of a set of facts. A fact is a triple (head, relation, tail) where head and tail are called entities. Among the SRL models, distance based knowledge graph embeddings are popular because of their simplicity, their low number of parameters, and their efficiency on large scale datasets. Specifically, their simplicity allows integrating them into many models. Previous studies have integrated them with logical rule embeddings BIBREF2 , have adopted them to encode temporal information BIBREF3 and have applied them to find equivalent entities between multi-language datasets BIBREF4 . Since the introduction of the first multi-relational distance based method BIBREF5 many variations were published (e.g., TransH BIBREF6 , TransR BIBREF7 , TransD BIBREF8 , STransE BIBREF9 ) that – despite their improvement in the accuracy of the model – suffer from several inherent limitations of TransE that restrict their expressiveness. As BIBREF10 , BIBREF11 noted, within the family of distance based embeddings, usually reflexive relations are forced to be symmetric and transitive. In addition, those approaches are unable to learn symmetric relations. In this work, we put forward a distance based approach that addresses the limitations of these distance based models. Since our approach consists of several distances as objectives, we dubbed it multi-distance embeddings (MDE). We show that 1. TransE and MDE are fully expressive, 2. MDE is able to learn several relational patterns, 3. It is extendable, 4. It shows competitive performance in the empirical evaluations and 5. We also develop an algorithm to find the limits for the limit-based loss function to use in embedding models. Background and Notation Given the set of all entities $\mathcal {E}$ and the set of all relations $\mathcal {R}$ , we define a fact as a triple of the form $(\mathbf {h}, \mathbf {r}, \mathbf {t})$ in which $\mathbf {\mathbf {h}}$ is the head and $\mathbf {t}$ is the tail, $\mathbf {h,t} \in \mathcal {E}$ and $\mathbf {r} \in \mathcal {R}$ is a relation. A knowledge graph $\mathcal {KG}$ is a subset of all true facts $\mathcal {KG} \in \zeta $ and is represented by a set of triples. An embedding is a function from an entity or a relation to their latent representation which is one or several vectors or tensors of numbers. A relational learning model is made of an embedding function and a prediction function that given a triple $(\mathbf {h}, \mathbf {r}, \mathbf {t})$ it determines if $\mathcal {R}$0 . We represent embedding representation of an entity $\mathcal {R}$1 , with a lowercase letter $\mathcal {R}$2 if it is a vector and with uppercase letters $\mathcal {R}$3 if it is a metric. A ground truth, in the closed world assumption, is the full assignment of truth values to all triples. 
A relational learning model is fully expressive if it can express any ground truth, i.e, there exists an assignment of values to the embeddings of the entities and relations that accurately separates the correct and incorrect triples. The ability to encode different patterns in the relations can show the generalization power of a model: Definition 1. A relation r is symmetric (anti-symmetric) if $\forall x, y \ r(x, y) \Rightarrow r(y, x) \wedge r(x, y) \Rightarrow \lnot r(y, x)$ . A clause with such a structure has a symmetry (antisymmetry) pattern. Definition 2. A relation $r_1$ is inverse to relation $r_2$ if $\forall x, y$ $r_2(x, y) \Rightarrow r_1(y, x)$ . A clause with such a form has an inversion pattern. Definition 3. A relation $r_1$ is composed of relation $r_2$ and relation $r_3$ if $\forall x, y, z \ \ r_2(x, y) \wedge r_3(y, z) \Rightarrow r_1(x, z)$ A clause with such a form has a composition pattern. Related Work Tensor factorization and multiplicative models define the score of triples via pairwise multiplication of embeddings. Dismult BIBREF12 simply multiplies the embedding vectors of a triple element by element $\langle h,t,r \rangle $ as the score function. Since multiplication of real numbers is symmetric, Dismult can not distinguish displacement of head relation and tail entities and therefore it can not model anti-symmetric relations. To solve this limitation SimplE BIBREF10 collaborates the reverse of relations to Dismult and ties a relation and its inverse. ComplEx BIBREF13 solves the same issue of DistMult by the idea that multiplication of complex values is not symmetric. By introducing complex-valued embeddings instead of real-valued vectors to dismult, the score of a triple in ComplEx is $Re(h^\top diag(r)\bar{t})$ with $\bar{t}$ the conjugate of t and $Re(.)$ is the real part of a complex value. In RESCAL BIBREF14 instead of a vector, a matrix represents $r$ , and performs outer products of $h$ and $t$ vectors to this matrix so that its score function becomes $h^\top R t$ . A simplified version of RESCAL is HolE BIBREF15 that defines a vector for $r$ and performs circular correlation of $h$ and $Re(h^\top diag(r)\bar{t})$0 has been found equivalent BIBREF16 to ComplEx. In Latent distance approaches the score function is the distance between embedding vectors of entities and relations. In the view of social network analysis, BIBREF17 originally proposed distance of entities $-d(h, t)$ as the score function for modeling uni-relational graphs where $d(., .)$ means any arbitrary distance, such as Euclidean distance. SE BIBREF18 generalizes the distance for multi-relational data by incorporating a pair of relation matrices into it. TransE BIBREF5 represents relation and entities of a triple by a vector that has this relation $$\parallel h+r-t \parallel _p$$ (Eq. 1) with $\parallel . \parallel _p$ is the norm $p$ . To better distinguish entities with complex relations, TransH BIBREF19 projects the vector of head and tail to a relation-specific hyperplane. Similarly, TransR follows the idea with relation-specific spaces and extends the distance function to $\parallel M_r h+r- M_r t \parallel _p$ . RotatE BIBREF11 combines translation and rotation and defines the distance of a $t$ from tail $h$ which is rotated the amount $r$ as the score function of a triple $-d(h \circ r, t)$ with $\circ $ an Hadamard product. Neural network methods train a neural network to learn the interaction of the h, r and t. 
ER-MLP BIBREF20 is a two layer feedforward neural network considering $h$ , $r$ and $t$ vectors in the input. NTN BIBREF21 is neural tensor network that concatenates head $h$ and tail $t$ vectors and feeds them to the first layer that has $r$ as weight. In another layer, it combine $h$ and $t$ with a tensor $R$ that represents $\textbf {r}$ and finally, for each relation, it defines an output layer $r$ to represent relation embeddings. In SME BIBREF22 relation $r$ is once combined with the head $h$ to get $t$0 , and similarly once with the tail $t$1 to get $t$2 . SME defines a score function by the dot product of this two functions in the hidden layer. In the linear SME, $t$3 is equal to $t$4 , and in the bilinear version, it is $t$5 . Here, $t$6 refers to weight matrix and $t$7 is a bias vector. MDE: Multi Distance Embedding Method A method to put together different views to the input samples is to incorporate the different formulations of samples from the different models as one learning model. In contrast to ensemble approaches that incorporate models by training independently and testing together, multi-objective optimization models (MOE) BIBREF23 join in the minimization step. The most common method of generating multi-objective optimization models is the weighted sum approach: $U = \sum _{i=1}^{k} w_i F_i (x).$ Here, we propose this approach for distance (score) functions. This combination is usually practical for the objective functions, but adding contrasting score functions can diminish the scores. To tackle this challenge we represent the same entities with independent variables in different distance functions. The idea of using more than one vector representation is not new. In canonical Polyadic (CP) decomposition BIBREF24 , each entity $e$ is represented by two vectors $h_e$ , $t_e \in \mathbb {R}^d$ , and for each relation $r$ has a single embedding vector $ v_r \in \mathbb {R}^d $ . In CP, the two embedding vectors for entities are learned independent from each other, i.e., observing $(e_1 , r , e_2 )$ only updates $h_{e1}$ and $t_{e2}$ , not $t_{e1}$ and $h_{e2}$ . We observe that using independent vectors for entity and relations we are able to define independent score functions. Following the idea, we equip distance based embeddings with the exploration of more aspects of the data simply using more distance functions. This simple technique resolves some of its deficiencies and improve its generalization power. Symmetric Relations Learning It is possible to easily check that Formulation $\parallel h+r-t \parallel $ is anti-symmetric but as we show it in the next Section, it is not capable of learning symmetric relations. We add the distance function 2 to enable it to learn symmetric relations. $$\parallel h+t-r \parallel _p$$ (Eq. 2) Inverse Relation Learning Beside the symmetric relations, many relations in knowledge graphs are indicative of a bi-directional relation which is not necessarily symmetric. For example, let $IsAuthorOf(a,t)$ represent if an author $a$ is an author in a topic $t$ and $Likes(p, t)$ represents if a person likes a topic. A third relation $Knows(p, a)$ represents if a person $p$ knows an author $a$ . Observations about the $Likes(.,.)$ relation and the inverse of $IsAuthorOf(.,.)$ influence the third relation $Knows(p, a)$ , indicating that the inverse of a relation could be interesting to be learned. We take advantage of the independent vectors again this time to learn the inverse of relations. 
We define ( 3 ) as: $$\parallel t + r - h\parallel _p$$ (Eq. 3) While learning the symmetric relations is practiced in multiplicative learning models (e.g. in BIBREF25 ) and inverse of relations has been used in machine learning models (e.g. in BIBREF10 , BIBREF26 , providing a way to have them together in distance based embeddings is a novel contribution. Model Definition: MDE considers three vectors $e_i, e_j, e_k \in \mathbb {R}^d$ as the embedding vector of each entity $\textbf {e}$ (similar to CP and SimplE), and three vectors $r_i, r_j, r_k \in \mathbb {R}^d$ for each relation $\textbf {r}$ . The score function of MDE for a triple $(\textbf {h}$ $ \textbf {r}$ $\textbf {t})$ is defined as wighted sum of above scores: $$Score_{MDE} = w_1 \parallel h_i + r_i - t_i \parallel _p~+~ w_2 \parallel h_j + t_j - r_j \parallel _p~+~ w_3 \parallel t_k + r_k - h_k \parallel _p - \psi $$ (Eq. 4) where $n$ refers to $L_1$ or $L_2$ norm and $\psi \in \mathbb {R^+}$ is a positive constant. SimplE BIBREF10 also adds a second score function to Dismult to handle the antisymmetry pattern. However, in SimplE, the relation vectors of the two scores are tied together, in contrast to MDE that the entity and relation vectors in are independent( which allows the summation of contrasting scores.). MDE is simply proposing the weighted sum for distances and is not limited to the above distance functions. In our experiments, we consider a fourth score, which we explain it in Proposition 6. Guided limit based Loss While Margin ranking loss minimizes the sum of error over all the training samples BIBREF27 noticed that, when applying the margin-based ranking loss to translation embeddings, it is possible that the score of correct a triplet is not small enough to hold the $h + r - t$ relation. In order to the scores of positive triples become lower than those of negative ones, they defined limited based loss which limits that the error in all the positive (negative) samples become less than a limit. BIBREF28 defines such limit for negative samples as well, so that their score stay greater than a limit. However the limit based loss resolves this issue of margin ranking loss, it does not provide a way to find the optimal limits. Therefore for each dataset and hyper-parameter change the fixed limits should be found by try and error. To address this issue, we define a moving-limit loss function denoted by $loss_{guided}$ . The aim of this approach is to find a balance between two goals. 1. To make the error of a correct triple zero (following the idea of the distance functions). 2. To increase the margin between the limits for positive and negative samples as match as possible (following Structural risk minimization principle BIBREF29 to maximize the margin between the positive and negative samples). We minimize the limit for objective of negative samples, with the condition that the error for the objective of positive samples stay a small value. Therefore we extend the limit based loss to $$loss_{guided} = \lim _{\delta ^{\prime } \rightarrow \delta + \alpha } \lim _{\delta \rightarrow \gamma _1} \beta _1 \sum _{\tau \in {T}^+} [f(\tau )- (\gamma _1 - \delta )]_+ + \beta _2 \sum _{\tau ^{\prime }\in {T}^-} [(\gamma _2 - \delta ^{\prime }) - f(\tau ^{\prime })]_+$$ (Eq. 6) where $[.]_+ = max(., 0). \gamma _1 , \gamma _2$ are small positive values and $\delta _0, \delta ^{\prime }_0 = 0$ . $\beta _1, \beta _2 >0$ represent constrains to represent the importance of the positive and negative samples. 
${T}^+, {T}^-$ denote the sets of positive and negative samples. $\alpha$ denotes a margin between $\gamma _1$ and $\gamma _2$. In this formulation we aim to find a $\gamma _1$ near zero, such that the positive samples gain zero error (the idea behind distance based embeddings), and to increase $\gamma _2$ as much as possible in order to maximize the margin between the positive and negative losses. To apply the limits, we first set $\gamma _1$ and $\gamma _2$ to positive values. If, after several iterations, the positive loss ($loss^+$) does not decrease, the limit for the positive samples has been set too small; therefore, we increase both $\gamma _1$ and $\gamma _2$. Whenever $loss^+$ becomes zero during the iterations, we increase $\delta$ by a fixed amount $\xi$, so that $\delta = \delta + \xi$. We apply the constraint $f(\tau ) - f(\tau ^{\prime }) \ge \gamma _2 - \gamma _1$ so that the proposed loss function preserves the characteristic of the margin-based ranking loss. We perform a similar comparison for the negative loss ($loss^-$) to decrease $\delta ^{\prime }$. The details of the dynamic limit loss are given in Algorithm 1 (Dynamic Limit Loss): the algorithm takes the initial limits, the step size $\xi$ and a threshold as input and, while the training iterations are not finished, applies the updates described above to $\delta$ and $\delta ^{\prime }$ in each iteration before recomputing the loss from Equation 6. The loops in the algorithm become feasible provided that we select an optimizer with an adaptive learning rate BIBREF30, which adapts the learning rate after each iteration. Fully Expressiveness To prove the full expressiveness of TransE, we define an upper bound $\alpha$ for the dimension of entities and relations. Here we prove the expressiveness of TransE with the upper bound $\alpha$ and a small modification of its loss function. In Section "Relieving Limitations on Translation Embeddings", we further discuss previously published negative results on the full expressiveness of TransE. Proposition 1. For any ground truth over entities $\mathcal {E}$ and relations $\mathcal {R}$ containing $\alpha$ true facts, there exists a TransE model using limit-based loss with arbitrary limits $\gamma _1$ for positive samples and $\gamma _2$ for negative samples ($\gamma _2 \ge \gamma _1$), and with embedding vectors of size $\alpha + 1$, representing that ground truth. We prove the $\alpha + 1$ bound by setting the arbitrary $\gamma _1$ and $\gamma _2$. As the base of the induction, let $\alpha$ be zero (empty set of triples). For every entity $e_i$, $e_j$ and relation $r_j$, to preserve the relations in p-norm: $ \parallel h_{e_i} + v_{r_j} - t_{e_k}\parallel _p \,\ \ge \gamma _2 \text{~~~for negative samples and ~~~}$ $ \parallel h_{e_i} + v_{r_j} - t_{e_k}\parallel _p \,\ \le \gamma _1 \text{~~~for positive samples. ~~~}$ It is enough to set the values of the entities and the relation to one and to set $2 \ge \gamma _1 \ge 1$, $\gamma _2 \ge 1$ and $\gamma _2 \ge \gamma _1$. Therefore, there exists an assignment of values to embedding vectors of size 1 that can represent the ground truth. In the induction step from $n$ to $n+1$, where $\alpha = n \,(1 \le n \le |\mathcal {R}| |\mathcal {E}|^2)$, we prove that for any ground truth there exists an assignment of values to embedding vectors of size $n + 1$ that represents this ground truth. Let $(e_i, r_j, e_k)$ be a fact that is not assigned true by the ground truth of step $n$. 
Let $\parallel h_{e_i} + v_{r_j} - t_{e_k}\parallel _p = q $ , where $h_{e_i}$ , $v_{r_j}$ and $t_{e_k}$ are vectors that represent this fact. We add an element to the end of all embedding vectors and set it to 0. This increases the vector sizes to $n+1$0 but does not change any scores. For $n+1$1 , it is enough to set the last element of $n+1$2 to 1, $n+1$3 to 1 and $n+1$4 to be 1 and assign $n+1$5 and we arbitrary set $n+1$6 . This ensures that $n+1$7 for the new vectors, and no other score is affected. Corollary 1. MDE is fully expressive in the same way as TransE is fully expressiveness using the limit-based loss function. For the proof we only need to follow the proof of Proposition 1 and set the supplementary distances to zero. Corollary 2. The proof of Proposition 1 is extendable to the family of translation based embeddings with the score function $\parallel A_r h + r - B_r t \parallel _p$ , where $h,r,t \in \mathbb {R}^d$ , $A$ and $B$ are matrices $\in \mathbb {R}^{d^{\prime }\times d}$ given that they apply limit-based loss function, because they all can recreate the score of TransE in the induction. Modeling Relational Patterns In this section, we show that the proposed model not only is capable of learning inverse and composition patterns it can also learn symmetric and antisymmetric relations. We prove the capability of one of the objectives of MDE in learning these patterns, and afterward, we show that in the following propositions (3,4,5) it is enough to prove that one of the distances learns a pattern. Let $r_1, r_2, r_3$ be relation vector representations and $e_i$ $e_j$ $e_k$ are entity representations. A relation $r_1$ between $(e_i, e_k)$ exists when a triple $(e_i , r_1 , e_k )$ exists and we show it by $r_1(e_i, e_k )$ . Formally, we have the following results: . Proposition 2. Entities with symmetry/antisymmetry pattern can encoded by MDE. If $r_1(e_i, e_j)$ and $r_1(e_j, e_i)$ hold, in equation 2 we have $e_i$0 $e_i$1 Proposition 3. Entities with inversion pattern can encoded by MDE. If $r_1(e_i, e_j)$ and $r_2(e_j, e_i)$ hold, from equation 1 we have $ e_i + r_1 = e_j \wedge e_j + r_2 = e_i \Rightarrow r_1 = - r_2 $ Proposition 4. Entities with the composition pattern can be encoded by MDE. If $r_1(e_i, e_k)$ , $r_2(e_i, e_j)$ and, $r_3(e_j, e_k)$ hold, from equation 1 we have $ e_i + r_1 = e_k \wedge e_i + r_2 = e_j \wedge e_j + r_3 = e_k \Rightarrow r_2 + r_3 = r_1 $ We first constrain $\gamma _1, \gamma _2, w_1, w_2, w_3$ , such that learning a fact by one of the distances in 4 is enough to classify a fact correctly. Proposition 5. There exist $\psi $ and $\gamma _1, \gamma _2 \ge 0$ ( $\gamma _1 \ge \gamma _2$ ) that only if one of the three distances esitmates a fact is true based on the Proposition 3, 4, or 5 , the main distance(score) also predicts it as a true fact. It is enough to show there is at least one set of boundaries for the positive and negative samples that follows the constraints. It is easy to check that equation 3 can learn the same patterns that equation 1 can learn.Therefore the cases to prove are when two of the distance functions $s_1$ and $s_3$ from the equations 1 , 3 classify a fact negative $N$ and the third distance function $s_2$ from equation 3 classify it as positive $P$ , and the case that $s_1$ and $s_3$ classify a fact as positive and $s_2$ classify it as negative. 
We set $w_1 = w_3 = 1/4$ and $w_2 = 1/2$ and assume that $s_3$0 is the values estimated by the score function of MDE, we have: $$a > N/2 \ge \gamma _2/2 \wedge \gamma _1/2 > P/2 \ge 0 \Rightarrow a + \gamma _1/2 > Sum + \psi \ge \gamma _2/2$$ (Eq. 9) There exist $a= 2$ and $\gamma _1 = \gamma _2= 2$ and $\psi = 1$ that satisfy $\gamma _1 > Sum \ge 0 $ and the inequality 9 . It can be easily checked that without introduction of $\psi $ , there is no value of $Sum$ that can satisfy both $\gamma _1 > Sum \ge 0 $ and the inequality 9 and its value is calculated based on the values of $\gamma _1$ , $\gamma _2$ and $a$ . In case that future studies discover new interesting distances, this proposition shows how to basically integrate them into MDE. Relieving Limitations on Translation Embeddings Several studies highlighted limited expressiveness for transitional embedding models. Here we show that not only some of their claims are not accurate, we prove that the MDE solves some of those limitations that apply on TransE. While BIBREF31 attempted to prove that RESCAL subsumes the TransE. Their proof is applied to a specific version of TransE, with the score function, $S_{T1}$ which only applies norm $L_2$ and its relation to the vector produced by the score function of TransE $S_{TransE}$ is $ S_{T1} = -\sqrt{S_{TransE}}$ . However, first, TransE can be used with norm $L_1$ and; second, the provided proof is made for $S_{T1}$ which is always less than $S_{TransE}$ . Therefore the declared theory does not relate TransE to RESCAL and does not limit the expressiveness of TransE. The comparison of empirical results of RESCAL and TransE also confirms the incorrectness of this deduction. Another study by BIBREF10 presents the existing restrictions over several translation models by showing that reflexive relations in TransE are also symmetric and transitive. We should remark that these limitations are on generalization power not the expressivity of the model. Nevertheless, here we present two of these restrictions and show how MDE removes them by inclusion of new distance functions over relations and entities. Proposition 6. Below restrictions of translation based embeddings approaches BIBREF10 do not apply to the MDE. These restrictions include: R1: if a relation $r$ is reflexive, on $\Delta \in \mathcal {E}$ , $r$ it will be also symmetric on $L_2$0 , R2: if $L_2$1 is reflexive on $L_2$2 , $L_2$3 it will be also be transitive on $L_2$4 . R1: For such reflexive $r_1$ , if $r_1(e_i, e_i)$ then $r_l(e_j, e_j)$ . Since definition of MDE is open to weighted sum with new scores. We add the score $\parallel h - r \circ t \parallel _p$ . (which is similar to $\parallel h \circ r - t \parallel _p$ the score of BIBREF11 ) In this equation we have: $e_i = r_1 e_i \wedge e_j = r_1 e_j \Rightarrow r_1 = U \lnot \Rightarrow e_i = r_1 e_j$ where $U$ is unit tensor. R2: For such reflexive $r_1$ , if $r_1(e_i, e_j)$ and $r_l(e_j, e_k)$ then $r_1(e_j, e_i)$ and $r_l(e_k, e_j)$ . In equation above we have: $ e_i = r_1 e_j \wedge e_j = r_1 e_k \Rightarrow e_i = r_1 r_1 e_j e_k \wedge r_i = U \Rightarrow e_i = e_j e_k \lnot \Rightarrow e_i + e_k = r_l $ Time Complexity and Parameter Growth Considering the ever growth of knowledge graphs and the expansion of the web, it is crucial that the time and memory complexity of a relational mode be minimal. Despite the limitations in expressivity, TransE is one of the popular models on large datasets due to its scalability. 
With $O(d)$ time complexity, where d is the size of embedding vectors, it is more efficient than RESCAL, NTN and the neural network models. Similar to TransE, the time complexity of MDE is $O(d)$ . Due to additive construction of MDE, inclusion of more distance functions keeps the time complexity linear in the size of vector embeddings. Experiments Datasets: We experimented on four standard datasets: WN18 and FB15k are extracted by BIBREF5 from Wordnet BIBREF32 Freebase BIBREF33 . We used the same train/valid/test sets as in BIBREF5 . WN18 contains 40,943 entities, 18 relations and 141,442 train triples. FB15k contains 14,951 entities, 1,345 relations and 483,142 train triples. In order to test the expressiveness ability rather than relational pattern learning power of models, FB15k-237 BIBREF34 and WN18RR BIBREF35 exclude the triples with inverse relations from FB15k and WN18 which reduced the size of their training data to 56% and 61% respectively. Baselines: We compare MDE with several state-of-the-art relational learning approaches. Our baselines include, TransE, TransH, TransD, TransR, STransE, DistMult, NTN, RESCAL, ER-MLP, and ComplEx and SimplE. We report the results of TransE, DistMult, and ComplEx from BIBREF13 and the results of TransR and NTN from BIBREF36 , and ER-MLP from BIBREF15 . The results on the inverse relation excluded datasets are from BIBREF11 , Table 13 for TransE and RotatE and the rest are from BIBREF35 . Evaluation Settings: We evaluate the link prediction performance by ranking the score of each test triple against its versions with replaced head, and once for tail. Then we compute the hit at N (hit@N), mean rank(MR) and mean reciprocal rank (MRR) of these rankings. MR is a more robust measure than MRR since in MRR few very good results can influence the overall score. Implementation: We implemented MDE in PyTorch. Following BIBREF18 , we generated one negative example per positive example for all the datasets. We used Adadelta BIBREF30 as the optimizer and fine-tuned the hyperparameters on the validation dataset. The ranges of the hyperparameters are set as follows: embedding dimension 25, 50, 100, batch size 100, 150, iterations 1000, 1500, 2500, 3600. We set the initial learning rate on all datasets to 10. The best embedding size and $\gamma _1$ and $\gamma _2$ and $\beta _1$ and $\beta _2$ values on WN18 were 50 and 1.9, 1.9, 2 and 1 respectively and for FB15k were 100, 14, 14, 1, 1. The best found embedding size and $\gamma _1$ and $\gamma _2$ and $\beta _1$ and $\beta _2$ values on FB15k-237 were 100, 5.6, 5.6, 1 and 1 respectively and for WN18RR were 50, 2, 2, 5 and 1. In Algorithm 1, we defined $threshold =$ 0.05 and $\xi =$ 0.1. In the equation ( 4 ), we used $\gamma _2$0 = 1.2 for all the experiments. Entity Prediction Results Table 1 and Table 2 show the result of our experiment. Due to the hard limit in the limit based loss, the mean rank of MDE is much lower than other methods. Comparison of MDE and TransE and other distance based models confirms the improved ability of MDE in learning different patterns. Negative Sampling and Data Augmentation Models: Currently, the training datasets for link prediction evaluation miss negative samples. Therefore, models generate their own negative samples while training. In consequence, the method of negative sample generation influences the result of the models. 
This problem is crucial because the ranking results of the models are close and the reported works make an unfair comparison of the dataset construction rather than the relational learning models. This is considerable when comparing results to ConvE that generates all possible combinations of object entities to generate samples(e.g., we observed in each iteration, it generates $\approx 26000$ negative samples per one positive sample when training on WN18RR). ComplEx and SimplE also generate 10 negative samples per one positive sample on FB15K. Here, we use 1 negative per positive, for MDE. To show the effect of dataset construction we compare this models with an experiment of TransE and RotatE on FB15k-237 that applies 256 negative samples per one positive sample BIBREF11 . Although these models perform better than the TransE on FB15K(Table1), they produce lower rankings on FB15k-237(Table2) in the more fair comparison conditions. Conclusion In this study, we showed that not only some of the claimed limitations on the expressiveness of score based embeddings do not hold, but also demonstrated how MDE relieves the expressiveness restriction of TransE. Finally, we proved that with the proper loss function translation embedding methods are fully expressive. Besides MDE, BIBREF11 and BIBREF10 , most of the existing models are unable to model all the three relation patterns. Indeed, TransE cannot model the symmetry pattern, DisMult has a problem learning the antisymmetry, ComplEx cannot infer composition rules. Here, we showed a general method to override these limitations of the older models. We demonstrated and validated our contributions via both theoretical proofs and empirical results.
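For reference, the additive score of Equation 4 translates directly into a few lines of tensor code; the sketch below assumes PyTorch and treats the weights $w_1, w_2, w_3$, the constant $\psi$ and the norm $p$ as the hyperparameters discussed above:

```python
import torch

def mde_score(h_i, r_i, t_i, h_j, r_j, t_j, t_k, r_k, h_k,
              w1, w2, w3, psi, p=2):
    """Weighted sum of the three translation distances (Eq. 4).

    Each argument is a batch of embedding vectors of shape (batch, d);
    a lower score indicates a more plausible triple.
    """
    s1 = torch.norm(h_i + r_i - t_i, p=p, dim=-1)   # Eq. 1: antisymmetry/composition
    s2 = torch.norm(h_j + t_j - r_j, p=p, dim=-1)   # Eq. 2: symmetric relations
    s3 = torch.norm(t_k + r_k - h_k, p=p, dim=-1)   # Eq. 3: inverse relations
    return w1 * s1 + w2 * s2 + w3 * s3 - psi
```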
WN18 and FB15k
3ee721c3531bf1b9a1356a40205d088c9a7a44fc
3ee721c3531bf1b9a1356a40205d088c9a7a44fc_0
Q: How did they gather the data? Text: Introduction Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web. The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, has resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, M2M BIBREF1 and FRAMES BIBREF2. However, existing datasets for multi-domain task-oriented dialogue do not sufficiently capture a number of challenges that arise with scaling virtual assistants in production. These assistants need to support a large BIBREF3, constantly increasing number of services over a large number of domains. In comparison, existing public datasets cover few domains. Furthermore, they define a single static API per domain, whereas multiple services with overlapping functionality, but heterogeneous interfaces, exist in the real world. To highlight these challenges, we introduce the Schema-Guided Dialogue (SGD) dataset, which is, to the best of our knowledge, the largest public task-oriented dialogue corpus. It exceeds existing corpora in scale, with over 16000 dialogues in the training set spanning 26 services belonging to 16 domains (more details in Table TABREF2). Further, to adequately test the models' ability to generalize in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants. We also propose the schema-guided paradigm for task-oriented dialogue, advocating building a single unified dialogue model for all services and APIs. Using a service's schema as input, the model would make predictions over this dynamic set of intents and slots present in the schema. This setting enables effective sharing of knowledge among all services, by relating the semantic information in the schemas, and allows the model to handle unseen services and APIs. Under the proposed paradigm, we present a novel architecture for multi-domain dialogue state tracking. By using large pretrained models like BERT BIBREF4, our model can generalize to unseen services and is robust to API changes, while achieving state-of-the-art results on the original and updated BIBREF5 MultiWOZ datasets. Related Work Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. The Dialogue State Tracking Challenges BIBREF7, BIBREF8, BIBREF9, BIBREF10 contributed to the creation of dialogue datasets with increasing complexity. 
Other notable related datasets include WOZ2.0 BIBREF11, FRAMES BIBREF2, M2M BIBREF1 and MultiWOZ BIBREF0. These datasets have utilized a variety of data collection techniques, falling within two broad categories: Wizard-of-Oz This setup BIBREF12 connects two crowd workers playing the roles of the user and the system. The user is provided a goal to satisfy, and the system accesses a database of entities, which it queries as per the user's preferences. WOZ2.0, FRAMES and MultiWOZ, among others, have utilized such methods. Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically. As virtual assistants incorporate diverse domains, recent work has focused on zero-shot modeling BIBREF13, BIBREF14, BIBREF15, domain adaptation and transfer learning techniques BIBREF16. Deep-learning based approaches have achieved state of the art performance on dialogue state tracking tasks. Popular approaches on small-scale datasets estimate the dialogue state as a distribution over all possible slot-values BIBREF17, BIBREF11 or individually score all slot-value combinations BIBREF18, BIBREF19. Such approaches are not practical for deployment in virtual assistants operating over real-world services having a very large and dynamic set of possible values. Addressing these concerns, approaches utilizing a dynamic vocabulary of slot values have been proposed BIBREF20, BIBREF21, BIBREF22. The Schema-Guided Dialogue Dataset An important goal of this work is to create a benchmark dataset highlighting the challenges associated with building large-scale virtual assistants. Table TABREF2 compares our dataset with other public datasets. Our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services. The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset. The Schema-Guided Dialogue Dataset ::: Services and APIs We define the schema for a service as a combination of intents and slots with additional constraints, with an example in Figure FIGREF7. We implement all services using a SQL engine. 
For constructing the underlying tables, we sample a set of entities from Freebase and obtain the values for slots defined in the schema from the appropriate attribute in Freebase. We decided to use Freebase to sample real-world entities instead of synthetic ones since entity attributes are often correlated (e.g, a restaurant's name is indicative of the cuisine served). Some slots like event dates/times and available ticket counts, which are not present in Freebase, are synthetically sampled. To reflect the constraints present in real-world services and APIs, we impose a few other restrictions. First, our dataset does not expose the set of all possible slot values for some slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Our dataset specifically identifies such slots as non-categorical and does not provide a set of all possible values for these. We also ensure that the evaluation sets have a considerable fraction of slot values not present in the training set to evaluate the models in the presence of new values. Some slots like gender, number of people, day of the week etc. are defined as categorical and we specify the set of all possible values taken by them. However, these values are not assumed to be consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for gender slot. Second, real-world services can only be invoked with a limited number of slot combinations: e.g. restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. However, existing datasets simplistically allow service calls with any given combination of slot values, thus giving rise to flows unsupported by actual services or APIs. As in Figure FIGREF7, the different service calls supported by a service are listed as intents. Each intent specifies a set of required slots and the system is not allowed to call this intent without specifying values for these required slots. Each intent also lists a set of optional slots with default values, which the user can override. The Schema-Guided Dialogue Dataset ::: Dialogue Simulator Framework The dialogue simulator interacts with the services to generate dialogue outlines. Figure FIGREF9 shows the overall architecture of our dialogue simulator framework. It consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. These dialogue acts can take a slot or a slot-value pair as argument. Figure FIGREF13 shows all dialogue acts supported by the agents. At the start of a conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. We identified over 200 distinct scenarios for the training set, each comprising up to 5 intents. For multi-domain dialogues, we also identify combinations of slots whose values may be transferred when switching intents e.g. the 'address' slot value in a restaurant service could be transferred to the 'destination' slot for a taxi service invoked right after. The user agent then generates the dialogue acts to be output in the next turn. It may retrieve arguments i.e. 
slot values for some of the generated acts by accessing either the service schema or the raw SQL backend. The acts, combined with the respective parameters yield the corresponding user actions. Next, the system agent generates the next set of actions using a similar procedure. Unlike the user agent, however, the system agent has restricted access to the services (denoted by dashed line), e.g. it can only query the services by supplying values for all required slots for some service call. This helps us ensure that all generated flows are valid. After an intent is fulfilled through a series of user and system actions, the user agent queries the scenario to proceed to the next intent. Alternatively, the system may suggest related intents e.g. reserving a table after searching for a restaurant. The simulator also allows for multiple intents to be active during a given turn. While we skip many implementation details for brevity, it is worth noting that we do not include any domain-specific constraints in the simulation automaton. All domain-specific constraints are encoded in the schema and scenario, allowing us to conveniently use the simulator across a wide variety of domains and services. The Schema-Guided Dialogue Dataset ::: Dialogue Paraphrasing The dialogue paraphrasing framework converts the outlines generated by the simulator into a natural conversation. Figure FIGREF11a shows a snippet of the dialogue outline generated by the simulator, containing a sequence of user and system actions. The slot values present in these actions are in a canonical form because they obtained directly from the service. However, users may refer to these values in various different ways during the conversation, e.g., “los angeles" may be referred to as “LA" or “LAX". To introduce these natural variations in the slot values, we replace different slot values with a randomly selected variation (kept consistent across user turns in a dialogue) as shown in Figure FIGREF11b. Next we define a set of action templates for converting each action into a utterance. A few examples of such templates are shown below. These templates are used to convert each action into a natural language utterance, and the resulting utterances for the different actions in a turn are concatenated together as shown in Figure FIGREF11c. The dialogue transformed by these steps is then sent to the crowd workers. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence. In our paraphrasing task, the crowd workers are instructed to exactly repeat the slot values in their paraphrases. This not only helps us verify the correctness of the paraphrases, but also lets us automatically obtain slot spans in the generated utterances by string search. This automatic slot span generation greatly reduced the annotation effort required, with little impact on dialogue naturalness, thus allowing us to collect more data with the same resources. Furthermore, it is important to note that this entire procedure preserves all other annotations obtained from the simulator including the dialogue state. Hence, no further annotation is needed. The Schema-Guided Dialogue Dataset ::: Dataset Analysis With over 16000 dialogues in the training set, the Schema-Guided Dialogue dataset is the largest publicly available annotated task-oriented dialogue dataset. The annotations include the active intents and dialogue states for each user utterance and the system actions for every system utterance. 
We have a few other annotations like the user actions but we withhold them from the public release. These annotations enable our dataset to be used as benchmark for tasks like intent detection, dialogue state tracking, imitation learning of dialogue policy, dialogue act to text generation etc. The schemas contain semantic information about the schema and the constituent intents and slots, in the form of natural language descriptions and other details (example in Figure FIGREF7). The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on an average. These numbers are also reflected in Figure FIGREF13 showing the histogram of dialogue lengths on the training set. Table TABREF5 shows the distribution of dialogues across the different domains. We note that the dataset is largely balanced in terms of the domains and services covered, with the exception of Alarm domain, which is only present in the development set. Figure FIGREF13 shows the frequency of dialogue acts contained in the dataset. Note that all dialogue acts except INFORM, REQUEST and GOODBYE are specific to either the user or the system. The Schema-Guided Approach Virtual assistants aim to support a large number of services available on the web. One possible approach is to define a large unified schema for the assistant, to which different service providers can integrate with. However, it is difficult to come up with a common schema covering all use cases. Having a common schema also complicates integration of tail services with limited developer support. We propose the schema-guided approach as an alternative to allow easy integration of new services and APIs. Under our proposed approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF7 shows an example). These descriptions are used to obtain a semantic representation of these schema elements. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. For example, Figure FIGREF14 shows how dialogue state representation for the same dialogue can vary for two different services. Here, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept. There are many advantages to this approach. First, using a single model facilitates representation and transfer of common knowledge across related services. Second, since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. Third, it is robust to changes like addition of new intents or slots to the service. Zero-Shot Dialogue State Tracking Models in the schema-guided setting can condition on the pertinent services' schemas using descriptions of intents and slots. These models, however, also need access to representations for potentially unseen inputs from new services. Recent pretrained models like ELMo BIBREF23 and BERT BIBREF4 can help, since they are trained on very large corpora. Building upon these, we present our zero-shot schema-guided dialogue state tracking model. Zero-Shot Dialogue State Tracking ::: Model We use a single model, shared among all services and domains, to make these predictions. 
We first encode all the intents, slots and slot values for categorical slots present in the schema into an embedded representation. Since different schemas can have differing numbers of intents or slots, predictions are made over dynamic sets of schema elements by conditioning them on the corresponding schema embeddings. This is in contrast to existing models which make predictions over a static schema and are hence unable to share knowledge across domains and services. They are also not robust to changes in schema and require the model to be retrained with new annotated data upon addition of a new intent, slot, or in some cases, a slot value to a service. Zero-Shot Dialogue State Tracking ::: Model ::: Schema Embedding This component obtains the embedded representations of intents, slots and categorical slot values in each service schema. Table TABREF18 shows the sequence pairs used for embedding each schema element. These sequence pairs are fed to a pretrained BERT encoder shown in Figure FIGREF20 and the output $\mathbf {u}_{\texttt {CLS}}$ is used as the schema embedding. For a given service with $I$ intents and $S$ slots, let $\lbrace \mathbf {i}_j\rbrace $, ${1 \le j \le I}$ and $\lbrace \mathbf {s}_j\rbrace $, ${1 \le j \le S}$ be the embeddings of all intents and slots respectively. As a special case, we let $\lbrace \mathbf {s}^n_j\rbrace $, ${1 \le j \le N \le S}$ denote the embeddings for the $N$ non-categorical slots in the service. Also, let $\lbrace \textbf {v}_j^k\rbrace $, $1 \le j \le V^k$ denote the embeddings for all possible values taken by the $k^{\text{th}}$ categorical slot, $1 \le k \le C$, with $C$ being the number of categorical slots and $N + C = S$. All these embeddings are collectively called schema embeddings. Zero-Shot Dialogue State Tracking ::: Model ::: Utterance Encoding Like BIBREF24, we use BERT to encode the user utterance and the preceding system utterance to obtain utterance pair embedding $\mathbf {u} = \mathbf {u}_{\texttt {CLS}}$ and token level representations $\mathbf {t}_1, \mathbf {t}_2 \cdots \mathbf {t}_M$, $M$ being the total number of tokens in the two utterances. The utterance and schema embeddings are used together to obtain model predictions using a set of projections (defined below). Zero-Shot Dialogue State Tracking ::: Model ::: Projection Let $\mathbf {x}, \mathbf {y} \in \mathbb {R}^d$. For a task $K$, we define $\mathbf {l} = \mathcal {F}_K(\mathbf {x}, \mathbf {y}, p)$ as a projection transforming $\mathbf {x}$ and $\mathbf {y}$ into the vector $\mathbf {l} \in \mathbb {R}^p$ using Equations DISPLAY_FORM22-. Here, $\mathbf {h_1},\mathbf {h_2} \in \mathbb {R}^d$, $W^K_i$ and $b^K_i$ for $1 \le i \le 3$ are trainable parameters of suitable dimensions and $A$ is the activation function. We use $\texttt {gelu}$ BIBREF25 activation as in BERT. Zero-Shot Dialogue State Tracking ::: Model ::: Active Intent For a given service, the active intent denotes the intent requested by the user and currently being fulfilled by the system. It takes the value “NONE" if no intent for the service is currently being processed. Let $\mathbf {i}_0$ be a trainable parameter in $\mathbb {R}^d$ for the “NONE" intent. We define the intent network as below. The logits $l^{j}_{\text{int}}$ are normalized using softmax to yield a distribution over all $I$ intents and the “NONE" intent. During inference, we predict the highest probability intent as active. 
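The active intent prediction described above can be illustrated with a small sketch. The exact projection equations are not reproduced in this text, so a plain dot product stands in for the learned projection; the array names and shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_active_intent(utterance_emb: np.ndarray,
                          intent_embs: np.ndarray,
                          none_intent_emb: np.ndarray) -> int:
    """Pick the active intent (or NONE) for one service.

    utterance_emb:   (d,) embedding of the utterance pair.
    intent_embs:     (I, d) schema embeddings of the service's intents.
    none_intent_emb: (d,) embedding standing in for the "NONE" intent.

    A dot product is used here as a stand-in for the learned projection;
    the real model uses trainable projection layers.
    """
    candidates = np.vstack([none_intent_emb[None, :], intent_embs])  # (I+1, d)
    logits = candidates @ utterance_emb                              # (I+1,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                             # softmax
    return int(np.argmax(probs))  # 0 means "NONE", j > 0 means intent j-1

# Toy example with d = 4 and I = 2 intents.
rng = np.random.default_rng(0)
print(predict_active_intent(rng.normal(size=4),
                            rng.normal(size=(2, 4)),
                            rng.normal(size=4)))
```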
Zero-Shot Dialogue State Tracking ::: Model ::: Requested Slots These are the slots whose values are requested by the user in the current utterance. Projection $\mathcal {F}_{\text{req}}$ predicts logit $l^j_{\text{req}}$ for the $j^{\text{th}}$ slot. Obtained logits are normalized using sigmoid to get a score in $[0,1]$. During inference, all slots with $\text{score} > 0.5$ are predicted as requested. Zero-Shot Dialogue State Tracking ::: Model ::: User Goal We define the user goal as the user constraints specified over the dialogue context till the current user utterance. Instead of predicting the entire user goal after each user utterance, we predict the difference between the user goal for the current turn and preceding user turn. During inference, the predicted user goal updates are accumulated to yield the predicted user goal. We predict the user goal updates in two stages. First, for each slot, a distribution of size 3 denoting the slot status and taking values none, dontcare and active is obtained by normalizing the logits obtained in equation DISPLAY_FORM28 using softmax. If the status of a slot is predicted to be none, its assigned value is assumed to be unchanged. If the prediction is dontcare, then the special dontcare value is assigned to it. Otherwise, a slot value is predicted and assigned to it in the second stage. In the second stage, equation is used to obtain a logit for each value taken by each categorical slot. Logits for a given categorical slot are normalized using softmax to get a distribution over all possible values. The value with the maximum mass is assigned to the slot. For each non-categorical slot, logits obtained using equations and are normalized using softmax to yield two distributions over all tokens. These two distributions respectively correspond to the start and end index of the span corresponding to the slot. The indices $p \le q$ maximizing $start[p] + end[q]$ are predicted to be the span boundary and the corresponding value is assigned to the slot. Zero-Shot Dialogue State Tracking ::: Evaluation We consider the following metrics for evaluation of the dialogue state tracking task: Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted. Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped. Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. The slots which have a non-empty assignment in the ground truth dialogue state are considered for accuracy. This is the average accuracy of predicting the value of a slot correctly. A fuzzy matching score is used for non-categorical slots to reward partial matches with the ground truth. Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a turn correctly. For non-categorical slots a fuzzy matching score is used. Zero-Shot Dialogue State Tracking ::: Evaluation ::: Performance on other datasets We evaluate our model on public datasets WOZ2.0, MultiWOZ 2.0 and the updated MultiWOZ 2.1 BIBREF5. As results in Table TABREF37 show, our model performs competitively on all these datasets. Furthermore, we obtain state-of-the-art joint goal accuracies of 0.516 on MultiWOZ 2.0 and 0.489 on MultiWOZ 2.1 test sets respectively, exceeding the best-known results of 0.486 and 0.456 on these datasets as reported in BIBREF5. 
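The non-categorical slot decoding described above, choosing indices $p \le q$ that maximize $start[p] + end[q]$, is a small self-contained procedure. The sketch below is illustrative and assumes the two token-level distributions have already been computed.

```python
from typing import List, Tuple

def decode_span(start: List[float], end: List[float]) -> Tuple[int, int]:
    """Return indices p <= q maximizing start[p] + end[q] (brute force)."""
    best_p, best_q, best_score = 0, 0, float("-inf")
    for p, s in enumerate(start):
        for q in range(p, len(end)):
            score = s + end[q]
            if score > best_score:
                best_p, best_q, best_score = p, q, score
    return best_p, best_q

# Toy distributions over 5 tokens: the best span here is tokens 1..2.
print(decode_span([0.10, 0.60, 0.20, 0.05, 0.05],
                  [0.05, 0.10, 0.70, 0.10, 0.05]))
```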
Zero-Shot Dialogue State Tracking ::: Evaluation ::: Performance on SGD The model performs well for Active Intent Accuracy and Requested Slots F1 across both seen and unseen services, shown in Table TABREF37. For joint goal and average goal accuracy, the model performs better on seen services compared to unseen ones (Figure FIGREF38). The main reason for this performance difference is a significantly higher OOV rate for slot values of unseen services. Zero-Shot Dialogue State Tracking ::: Evaluation ::: Performance on different domains (SGD) The model performance also varies across various domains. The performance for the different domains is shown in (Table TABREF39) below. We observe that one of the factors affecting the performance across domains is still the presence of the service in the training data (seen services). Among the seen services, those in the `Events' domain have a very low OOV rate for slot values and the largest number of training examples which might be contributing to the high joint goal accuracy. For unseen services, we notice that the `Services' domain has a lower joint goal accuracy because of higher OOV rate and higher average turns per dialogue. For `Services' and `Flights' domains, the difference between joint goal accuracy and average accuracy indicates a possible skew in performance across slots where the performance on a few of the slots is much worse compared to all the other slots, thus considerably degrading the joint goal accuracy. The `RideSharing' domain also exhibits poor performance, since it possesses the largest number of the possible slot values across the dataset. We also notice that for categorical slots, with similar slot values (e.g. “Psychologist" and “Psychiatrist"), there is a very weak signal for the model to distinguish between the different classes, resulting in inferior performance. Discussion It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below. Fewer Annotation Errors: All annotations are automatically generated, so these errors are rare. In contrast, BIBREF5 reported annotation errors in 40% of turns in MultiWOZ 2.0 which utilized a Wizard-of-Oz setup. Simpler Task: The crowd worker task of paraphrasing a readable utterance for each turn is simple. The error-prone annotation task requiring skilled workers is not needed. Low Cost: The simplicity of the crowd worker task and lack of an annotation task greatly cut data collection costs. Better Coverage: A wide variety of dialogue flows can be collected and specific usecases can be targeted. Conclusions We presented the Schema-Guided Dialogue dataset to encourage scalable modeling approaches for virtual assistants. We also introduced the schema-guided paradigm for task-oriented dialogue that simplifies the integration of new services and APIs with large scale virtual assistants. Building upon this paradigm, we present a scalable zero-shot dialogue state tracking model achieving state-of-the-art results. Conclusions ::: Acknowledgements The authors thank Guan-Lin Chao for help with model design and implementation, and Amir Fayazi and Maria Wang for help with data collection.
simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers
6dcbe941a3b0d5193f950acbdc574f1cfb007845
6dcbe941a3b0d5193f950acbdc574f1cfb007845_0
Q: What are the domains covered in the dataset? Text: Introduction Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web. The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, has resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, M2M BIBREF1 and FRAMES BIBREF2. However, existing datasets for multi-domain task-oriented dialogue do not sufficiently capture a number of challenges that arise with scaling virtual assistants in production. These assistants need to support a large BIBREF3, constantly increasing number of services over a large number of domains. In comparison, existing public datasets cover few domains. Furthermore, they define a single static API per domain, whereas multiple services with overlapping functionality, but heterogeneous interfaces, exist in the real world. To highlight these challenges, we introduce the Schema-Guided Dialogue (SGD) dataset, which is, to the best of our knowledge, the largest public task-oriented dialogue corpus. It exceeds existing corpora in scale, with over 16000 dialogues in the training set spanning 26 services belonging to 16 domains (more details in Table TABREF2). Further, to adequately test the models' ability to generalize in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants. We also propose the schema-guided paradigm for task-oriented dialogue, advocating building a single unified dialogue model for all services and APIs. Using a service's schema as input, the model would make predictions over this dynamic set of intents and slots present in the schema. This setting enables effective sharing of knowledge among all services, by relating the semantic information in the schemas, and allows the model to handle unseen services and APIs. Under the proposed paradigm, we present a novel architecture for multi-domain dialogue state tracking. By using large pretrained models like BERT BIBREF4, our model can generalize to unseen services and is robust to API changes, while achieving state-of-the-art results on the original and updated BIBREF5 MultiWOZ datasets. Related Work Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. 
Alarm Bank Bus Calendar Event Flight Home Hotel Media Movie Music RentalCar Restaurant RideShare Service Travel Weather
544b68f6f729e5a62c2461189682f9e4307a05c6
544b68f6f729e5a62c2461189682f9e4307a05c6_0
Q: What is their baseline? Text: Introduction India is a highly diverse multilingual country. People of different regions use their own regional languages, which gives India the world's second highest number of languages. The languages spoken in India belong to several language families, the two main ones being the Indo-Aryan languages, spoken by 78.05 percent of Indians BIBREF0, and the Dravidian languages, spoken by 19.64 percent BIBREF0. Hindi and Gujarati are among the constitutional languages of India, with nearly 601,688,479 speakers BIBREF0, almost 59 percent of the total population BIBREF0. The Constitution of India, under Article 343, offers English as a second additional official language, although it has only 226,449 Indian speakers BIBREF0, nearly 0.02 percent of the total population BIBREF0. Communication and information exchange among people is necessary for sharing knowledge, feelings, opinions, facts, and thoughts. Variations of English are used globally for human communication, and the content available on the Internet is overwhelmingly dominated by English. Only 20 percent of the world population speaks English, while in India the share is only 0.02 percent BIBREF0. It is not possible to rely on human translators in a country with this much language diversity. In order to bridge this vast language gap, we need effective and accurate computational approaches that require minimum human intervention. This task can be effectively addressed using machine translation. Machine Translation (MT) is described as the task of computationally translating human spoken or natural language text or speech from one language to another with minimum human intervention. Machine translation aims to generate translations that have the same meaning as the source sentence and are grammatically correct in the target language. Initial work on MT started in the early 1950s BIBREF1 and has advanced rapidly since the 1990s due to the availability of more computational capacity and training data. Since then, a number of approaches have been proposed to achieve increasingly accurate machine translation, such as rule-based translation, knowledge-based translation, corpus-based translation, hybrid translation, and statistical machine translation (SMT) BIBREF1. All of these approaches have their own merits and demerits. Among them, SMT, which is a subcategory of corpus-based translation, is widely used as it is able to produce better results than other previously available techniques. The use of neural networks in machine translation has become popular in recent years, and this technique is known as Neural Machine Translation (NMT). In recent years, many works have been carried out on NMT, though relatively little has been done for Indian languages BIBREF1. We found that the NMT approach for Indic languages is still a challenging task, especially for bilingual machine translation. In our past research work, we developed a sequence-to-sequence model based machine translation system for the Hindi language BIBREF2. In this work, we have improved that model and applied it to the English-Gujarati language pair. We have developed a system that uses a neural model based on the attention mechanism. Our proposed attention-based NMT model is evaluated with the metrics BLEU, perplexity and TER. 
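Since the proposed model is evaluated with BLEU (alongside perplexity and TER), a minimal sentence-level BLEU computation is sketched below with NLTK; the example sentences and the smoothing choice are illustrative assumptions, not the evaluation setup used in this work.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical tokenized reference and system output for one sentence pair.
reference = [["the", "weather", "is", "pleasant", "today"]]
hypothesis = ["the", "weather", "is", "nice", "today"]

# Smoothing avoids zero scores when higher-order n-grams do not match.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```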
In section 2 overview of related work carried out in the domain of machine translation is described in brief, section 3 gives fundamentals of machine translation process with neural network using attention mechanism, section 4 gives a comparative analysis of various automatic evaluation matrices, section 5 introduce the proposed bilingual neural machine translation models, section 6 shows the implementation and generated results with our attention based NMT model is shown in section 7, conclusion of the paper is presented in section 8. Related work The process of translating text from source language to target language automatically with machine without any external human intervention is generally referred as Machine Translation(MT). It will basically convert sequence of words from source language to another sequence of words in target language without altering meaning of source words. Initial work in the field of machine translation was conceived by researchers at IBM research laboratory in the early '50s. They have also provided a successful demonstration in 1956 for machine translation systemBIBREF1. But soon automatic language processing advisory committee of American government reported that machine translation task is infeasible to scale due to the amount of resource it requires. A new breakthrough in machine translation came only after 1979 where domain-specific translation system was implemented for weather bulletin translation from English to FrenchBIBREF3 BIBREF4. In the year 1991, researchers from IIT Kanpur has developed Angla Bharati-I machine translation system BIBREF5BIBREF6. It was a general purpose translation system with domain customization. It is specifically designed for translating English to Hindi. In the year of 1999, CDAC developed a machine translation system named MANTRA BIBREF5, that uses the transfer-based machine translation. The system is developed for working on English-Gujarati, English-Hindi, English-Bengali and English-Telugu data pairs. Later the system is upgraded to AnglaBharati-II BIBREF5BIBREF6 using a hybrid approach of machine translation in 2004. In AnglaBharati-II, the efficiency of the system is improved compared to AnglaBharati-I. Machine Translation Machine translation can be stated as the process of translating source language into target language considering the grammatical structure of the source language. The 1990s was marked as the breakthrough of a fairly new approaches to challenge and eventually improve the already established methodologies. This approach of machine translation was based on generating insights from large amount of available parallel corpuses. Example based Machine Translation was first proposed in 1981, but was developed from about 1990 onwards BIBREF7. The core idea is to reuse existing translations for generating a new translationBIBREF8. Machine Translation ::: Statistical Machine Translation Statistics based approach for machine translation does not utilize any traditional linguistic data. It basically works on the principle of probability. Here, the word in a source language corresponds to other similar word(s) in the given target language. However it requires a large corpus of reliable translations consisting in both source and target language sentences. This approach is similar to the methods of the IBM research group, which had initial success for speech recognition and Machine Translation in the early 1990s BIBREF7. 
Machine Translation ::: Rule-based Machine Translation Normally all the languages used by humans for communication consist of certain amount of grammatical rules. If we are able to model these rules into a system, we can generate the natural fluent sentences in target language. Rule-based machine translation system tries to model the same approach for machine translation by mapping source and target language sentences using necessary rules. However to translate Indian languages large number of rules with different context are required BIBREF9. Machine Translation ::: Phrase-based Machine Translation A phrase is a small group of words which have some special meaning. Phrase-based machine translation system contains a phrase table, which has a list of translated sentences between source and target language. In addition to that, it is having information about how we can rearrange translation of multiple phrases to generate a meaningful target language sentence. But, these types of machine translation systems were unable to produce human-like natural language sentences as it is not possible to have all combination of different phrase every time in modelBIBREF9. Machine Translation ::: Neural Machine Translation Neural Machine Translation is one of the most recent approaches of machine translation that use a neural network based on the conditional probability of translating a given source language input to a given target language output as shown in Figure FIGREF5. NMT is more appealing as it requires less knowledge related to the structure of source as well as target language. It has outperformed traditional MT models in large-scale translation tasks such as English to German and English to French BIBREF10. In recent years various architectures are proposed to achieve neural network based machine translation such as, simple encoder-decoder based model, RNN based model and LSTM model that learn problems with long-range temporal dependencies and the most advanced neural model for machine translation is Attention mechanism-based model. Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_t+1$ and the input for position $t$ BIBREF12. This inherently sequential nature of RNN makes impossible to apply parallelization within training examples. But for longer sequence lengths, it becomes critical as memory constraints limits batching across examplesBIBREF13. One of the major drawback of models that works on sequence-to-sequence model is that it is not able to generate words that are rarely encountered in input corpus. For solving this problem, attention mechanism can be applied in traditional sequence-to-sequence model. It allows modeling of dependencies without regard to their distance in the input or output. The concept of “attention" has gained popularity recently in training of neural networks, allowing models to learn alignments between different modalities, e.g., between image objects and agent actions in the dynamic control problem BIBREF13. As shown in Figure FIGREF6, it also provides context which will become helpful for generating more natural looking sentences including rare words. Recently, attentional NMT models have dominated the field of machine translation. They are pushing the boundary of translation performance by continuing new development in NMT architectures. 
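To make the attention description above concrete, here is a minimal NumPy sketch of dot-product attention over a set of encoder hidden states. This illustrates the general mechanism rather than the authors' implementation; the dimensions, variable names, and the dot-product scoring function are assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dot_product_attention(decoder_state, encoder_states):
    """Return a context vector and attention weights for one decoding step.

    decoder_state:  shape (hidden_dim,)          current decoder hidden state
    encoder_states: shape (src_len, hidden_dim)  one hidden state per source position
    """
    scores = encoder_states @ decoder_state   # (src_len,) alignment scores
    weights = softmax(scores)                 # attention distribution over the source
    context = weights @ encoder_states        # (hidden_dim,) weighted sum of encoder states
    return context, weights

# Toy example: 5 source positions, hidden size 8.
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 8))
decoder_state = rng.normal(size=(8,))
context, weights = dot_product_attention(decoder_state, encoder_states)
print(weights.round(3), context.shape)
```

The resulting context vector is combined with the decoder state (typically by concatenation) before predicting the next target token, which is how the mechanism supplies source-side context, including for rare words, at every decoding step.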
Evaluation Matrices We can compare the performance of any machine translation model by comparing it across various evaluation matrices. In this paper, the following evaluation matrices are used for estimating the performance of our model. Evaluation Matrices ::: Translation error rate Translation error rate or TER measures the amount of editing it requires to match the human-generated output. It was designed for evaluating the output of machine translation avoiding the knowledge intensiveness of meaning-based approaches. This method provides more meaningful insights when there is a large number of reference sentences available in the dataset. We can find TER of any translated sentences using the following equation BIBREF14: Evaluation Matrices ::: Perplexity Matrix Perplexity is a measure of language model performance based on average probability. Perplexity can be defined as the inverse probability of the sentences available in test data, normalized by the number of words in generated sentences. It can be calculated using following equation BIBREF15: Evaluation Matrices ::: BLEU BLEU uses the basic concepts of n-gram precision to calculate similarity between reference and generated sentence. It correlates highly with human expert review as it uses the average score of all result in test dataset rather than providing result of each sentence. BLEU score can be computed using the following equation BIBREF16: Proposed System As shown in Figure FIGREF13, our proposed model is divided into mainly three different parts. Encoder, Decoder and Attention mechanism. Our encoder has two LSTM layers with 128 units of LSTM cells. This encoder will output encoded word embedding vector. This embedding vector is provided as input to decoder. Decoder is also consist of two LSTM layers with 128 units of lstm cells. It will take encoded vector and produce the output using beam search method. Whenever any output is produced the value of hidden state is compared with all input states to derive weights for attention mechanism. Based on attention weights, context vector is calculated and it is given as additional input to decoder for generating context relevant translation based on previous outcomes. Implementation ::: Datasets In order to work with neural networks we require large amount of training data. As neural networks are learning with experience, more the experience accurate the learning is. Wide range of work has been carried out for non Indian languages. So enough amount of parallel corpus is available like English-French, English German, etc. But on Indian languages most of corpus was available only for English-Hindi language pair. The only dataset available for Gujarati language is OPUSBIBREF17, which is a collection of translated texts from user manual of the open source software. So in order to create machine translation system that works on conversational level we have created our new dataset. The created "eng_guj_parallel_corpus" contains nearly 65000 sentences in parallel format. We have also made it available for all researchers as open source dataset and can be downloaded from https://github.com/shahparth123/eng_guj_parallel_corpus. It is collection of sentences describing the activity or scenes in both Gujarati and English language. Implementation ::: Experiment Setup For our experiment we have used Google Cloud's n1-highmem-2 instance with Intel Xeon E5 processor, 13 GB of primary memory and Tesla K80(2496 CUDA Core) GPU with 12GB of GPU memory. 
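The TER, perplexity, and BLEU equations referenced in the evaluation-metrics passage above are not reproduced in the text. For reference, the standard formulations are summarized below; these are the conventional definitions rather than the exact typesetting of the cited works.

```latex
% Translation Error Rate: edit operations needed to turn the hypothesis into a reference
\mathrm{TER} = \frac{\#\ \text{edits}}{\text{average number of reference words}}

% Perplexity of a language model over a test sequence w_1 \dots w_N
\mathrm{PPL} = P(w_1,\dots,w_N)^{-1/N}
             = \exp\Big(-\tfrac{1}{N}\sum_{i=1}^{N}\log P(w_i \mid w_{<i})\Big)

% BLEU: brevity penalty times the geometric mean of modified n-gram precisions p_n
\mathrm{BLEU} = \mathrm{BP}\cdot\exp\Big(\sum_{n=1}^{N} w_n \log p_n\Big),
\qquad
\mathrm{BP} = \begin{cases} 1 & c > r \\ e^{\,1 - r/c} & c \le r \end{cases}
```

Here $c$ and $r$ are the lengths of the candidate and reference translations, and $w_n$ are uniform weights (typically $1/4$ for BLEU-4).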
The TensorFlow deep learning library is used for creating and training the deep neural networks BIBREF18. Results and Discussion ::: Results In our experiment we trained the proposed neural machine translation model on "eng_guj_parallel_corpus" for 37,000 epochs. Sample outputs of the proposed model are given in Figures FIGREF17 and FIGREF18. As seen in the figures, in most cases our model produces results comparable to a human translator. BLEU scores for our model and Google's Neural Machine Translation are compared in Table TABREF19. Conclusion Conventional machine translation approaches are fast and efficient, and they deliver good accuracy within their limited scope of application. However, they struggle to generate target sentences with human-like fluency. Neural machine translation has overcome many of the difficulties associated with conventional approaches. Still, widely used NMT architectures such as the sequence-to-sequence model lose accuracy dramatically in realistic settings when they encounter rare words. To overcome this limitation, we equip the model with an attention mechanism, which improves accuracy. We achieve an average BLEU score of 59.73 on the training corpus and 40.33 on the test corpus of the parallel English-Gujarati corpus of 65,000 sentences.
Google's Neural Machine Translation
f887d5b7cf2bcc1412ef63bff4146f7208818184
f887d5b7cf2bcc1412ef63bff4146f7208818184_0
Q: Do they use the cased or uncased BERT model? Text: Introduction SemEval Task 4 BIBREF1 tasked participating teams with identifying news articles that are misleading to their readers, a phenomenon often associated with “fake news” distributed by partisan sources BIBREF2 . We approach the problem through transfer learning to fine-tune a model for the document classification task. We use the BERT model based on the implementation of the github repository pytorch-pretrained-bert on some of the data provided by Task 4 of SemEval. BERT has been used to learn useful representations for a variety of natural language tasks, achieving state of the art performance in these tasks after being fine-tuned BIBREF0 . It is a language representation model that is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. Thus, it may be able to adequately account for complex characteristics as such blind, prejudiced reasoning and extreme bias that are important to reliably identifying hyperpartisanship in articles. We show that BERT performs well on hyperpartisan sentiment classification. We use unsupervised learning on the set of 600,000 source-labeled articles provided as part of the task, then train using supervised learning for the 645 hand-labeled articles. We believe that learning on source-labeled articles would bias our model to learn the partisanship of a source, instead of the article. Additionally, the accuracy of the model on validation data labeled by article differs heavily when the articles are labeled by publisher. Thus, we decided to use a small subset of the hand-labeled articles as our validation set for all of our experiments. As the articles are too large for the model to be trained on the full text each time, we consider the number of word-pieces that the model uses from each article a hyperparameter. A second major issue we explore is what information the model is using to make decisions. This is particularly important for BERT because neural models are often viewed like black boxes. This view is problematic for a task like hyperpartisan news detection where users may reasonably want explanations as to why an article was flagged. We specifically explore how much of the article is needed by the model, how consistent the model behaves on an article, and whether the model focuses on individual words and phrases or if it uses more global understanding. We find that the model only needs a short amount of context (100 word pieces), is very consistent throughout an article, and most of the model's accuracy arises from locally examining the article. In this paper, we demonstrate the effectiveness of BERT models for the hyperpartisan news classification task, with validation accuracy as high as 85% and test accuracy as high as 77% . We also make significant investigations into the importance of different factors relating to the articles and training in BERT's success. The remainder of this paper is organized as follows. Section SECREF2 describes previous work on the BERT model and semi-supervised learning. Section SECREF3 outlines our model, data, and experiments. Our results are presented in Section SECREF4 , with their ramifications discussed in Section SECREF5 . We close with an introduction to our system's namesake, fictional journalist Clint Buchanan, in Section SECREF6 . Related Work We build upon the Bidirectional Encoder Representations from Transformers (BERT) model. 
BERT is a deep bidirectional transformer that has been successfully tuned to a variety of tasks BIBREF0 . BERT functions as a language model over character sequences, with tokenization as described by BIBREF3 . The transformer architecture BIBREF4 is based upon relying on self-attention layers to encode a sequence. To allow the language model to be trained in a bidirectional manner instead of predicting tokens autoregressively, BERT was pre-trained to fill in the blanks for a piece of text, also known as the Cloze task BIBREF5 . Due to the small size of our training data, it was necessary to explore techniques from semi-supervised learning. BIBREF6 found pre-training a model as a language model on a larger corpus to be beneficial for a variety of experiments. We also investigated the use of self-training BIBREF7 to increase our effective training dataset size. Lastly, the motivation of examining the effective context of our classification model was based on BIBREF8 . It was found that much higher performance than expected was achieved on the ImageNet dataset BIBREF9 by aggregating predictions from local patches. This revealed that typical ImageNet models could acquire most of their performance from local decisions. Methodology Next, we describe the variations of the BERT model used in our experiments, the data we used, and details of the setup of each of our experiments. Model We adjust the standard BERT model for the hyperpartisan news task, evaluating its performance both on a validation set we construct and on the test set provided by Task 4 at SemEval. The training of the model follows the methodology of the original BERT paper. We choose to experiment with the use of the two different pre-trained versions of the BERT model, BERT-LARGE and BERT-BASE. The two differ in the number of layers and hidden sizes in the underlying model. BERT-BASE consists of 12 layers and 110 million parameters, while BERT-LARGE consists of 24 layers and 340 million parameters. Training and Test Sets We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level. Due to an intrinsic limitation of the BERT model, we are unable to consider sequences of longer than 512 word pieces for classification problems. These word pieces refer to the byte-pair encoding that BERT relies on for tokenization. These can be actual words, but less common words may be split into subword pieces BIBREF3 . The longest article in the training set contains around 6500 word pieces. To accommodate this model limitation, we work with truncated versions of the articles. We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model. Experiments We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces. We also explore whether and how the BERT models we use classify different parts of each individual article. 
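A minimal sketch of the fine-tuning setup described above, written against the Hugging Face transformers library rather than the pytorch-pretrained-bert repository the passage cites. The checkpoint name (the passage does not say whether the cased or uncased model was used), the 250 word-piece limit, the batch size, and the learning rate are all placeholder assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizerFast, BertForSequenceClassification

CHECKPOINT = "bert-base-uncased"  # placeholder: cased vs. uncased is not stated
MAX_LEN = 250                     # word pieces kept per article (100/250/500 are tried)

tokenizer = BertTokenizerFast.from_pretrained(CHECKPOINT)
model = BertForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

def encode(articles, labels):
    # Truncate each article to MAX_LEN word pieces; BERT cannot exceed 512 anyway.
    enc = tokenizer(articles, truncation=True, max_length=MAX_LEN,
                    padding="max_length", return_tensors="pt")
    return TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))

train_articles = ["example hyperpartisan article ...", "example mainstream article ..."]
train_labels = [1, 0]
loader = DataLoader(encode(train_articles, train_labels), batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for input_ids, attention_mask, labels in loader:
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    out.loss.backward()   # cross-entropy over the two classes
    optimizer.step()
    optimizer.zero_grad()
```

In practice the loop above would run for many epochs over the 516-article training split, with the validation split used for model selection.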
Since the model can only consider a limited number of word pieces and not a full article, we test how the model judges different sections of the same article. Here, we are interested in the extent to which the same class will be assigned to each segment of an article. Finally, we test whether the model's behavior varies if we randomly shuffle word-pieces from the articles during training. Our goal in this experiment is to understand whether the model focuses on individual words and phrases or if it achieves more global understanding. We alter the the size of the chunks to be shuffled ( INLINEFORM0 ) in each iteration of this experiment, from shuffling individual word-pieces ( INLINEFORM1 ) to shuffling larger multiword chunks. Results Our results are primarily based on a validation set we constructed using the last 20% of the hand-labeled articles. It is important to note that our validation set was fairly unbalanced. About 72% of articles were not hyperpartisan and this mainly arose because we were not provided with a balanced set of hand-labeled articles. The small validation split ended up increasing the imbalance in exchange for training on a more balanced set. The test accuracies we report were done on SemEval Task 4's balanced test dataset. Importance of Pre-training Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model. We evaluated this model on the SemEval 2019 Task 4: Hyperpartisan News Detection competition's pan19-hyperpartisan-news-detection-by-article-test-dataset-2018-12-07 dataset using TIRA BIBREF10 . Our model, with a maximium sequence length of 250, had an accuracy of INLINEFORM0 . It had higher precision ( INLINEFORM1 ) than recall ( INLINEFORM2 ), for an overall F1-score of INLINEFORM3 . Importance of Sequence Length Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate. Table TABREF9 shows that the model consistently performed best at a sequence length of 100. This is a discrepancy from BERT-BASE indicating that the larger model struggled more with training on a small amount of long sequences. For our best trained BERT-LARGE, we submitted the model for evaluation on TIRA. Surprisingly, the test performance (75.1%) of the larger model was worse than the base model. The experiments in BIBREF0 consistently found improvements when using the large model. 
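The unsupervised pre-training step above reuses BERT's Cloze-style masked-language-model objective on the unlabeled articles. Below is a simplified sketch of the standard BERT masking recipe (15% of tokens selected; of those, 80% become [MASK], 10% a random token, 10% left unchanged); it is illustrative rather than the authors' code, and the token IDs in the demo are made up.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, special_ids, mask_prob=0.15):
    """Return (masked_input, labels) for one masked-LM example.

    Positions that are not selected get label -100 so a cross-entropy loss
    configured with ignore_index=-100 skips them.
    """
    masked, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if tok in special_ids or random.random() > mask_prob:
            continue
        labels[i] = tok
        r = random.random()
        if r < 0.8:                        # 80%: replace with [MASK]
            masked[i] = mask_id
        elif r < 0.9:                      # 10%: replace with a random token
            masked[i] = random.randrange(vocab_size)
        # else 10%: keep the original token unchanged
    return masked, labels

# Toy usage with made-up IDs (101=[CLS], 102=[SEP], 103=[MASK]).
inp, lab = mask_tokens([101, 7592, 2088, 2003, 2307, 102],
                       mask_id=103, vocab_size=30522, special_ids={101, 102})
print(inp, lab)
```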
The main distinction here is a smaller training dataset than in their tasks. The experiments in the remaining sections use the same hyperparameters as the optimal BERT-LARGE. Model Consistency Due to the small training dataset, we tried self-training to increase our effective training set. We trained the model for 40 epochs. For the remaining 60 epochs, after each epoch we had the model make predictions on five slices of 500 unlabeled articles. If an article had the same prediction for more than four slices, we added it to the labeled training data. The model always added every article to the training set, though, since it always made the same prediction for all 5 slices. This caused self-training to be ineffective, but also revealed that the model's predictions were very consistent across segments of a single article. Effective Model Context Finally, we investigate whether the model's accuracy primarily arose from examining words or short phrases, or if the decisions were more global. We permuted the word pieces in the article at various levels of granularity. At the finest level (permute_ngrams = 1), we permuted every single word piece, forcing the model to process a bag of word pieces. At coarser levels, ngrams were permuted. As the sequence length for these experiments was 100, permute_ngrams = 100 corresponds to no permutation. The results can be found in TABREF13 . Accuracy drops a lot with only a bag of word pieces, but still reaches 67.4%. Also, most of the accuracy of the model (within 2%) is achieved with only 4-grams of word pieces, so the model is not getting much of a boost from global content. Discussion Our successful results demonstrate the adaptability of the BERT model to different tasks. With a relatively small training set of articles, we were able to train models with high accuracy on both the validation set and the test set. Our models classified different parts of a given article identically, demonstrating that the overall hyperpartisan aspects were similar across an article. In addition, the model had significantly lower accuracy when word pieces were shuffled around, but that accuracy was almost entirely restored when shuffling around chunks of four or more word pieces, suggesting that most of the important features can already be extracted at this level. In future work, we we would like to make use of the entire article. Naively, running this over each chunk would be computationally infeasible, so it may be worth doing a full pass on a few chunks and cheaper computations on other chunks. Namesake Our system is named after Clint Buchanan, a fictional journalist on the soap opera One Life to Live. Following the unbelievable stories of Clint and his associates may be one of the few tasks more difficult than identifying hyperpartisan news.
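The effective-context experiment above shuffles word pieces in chunks of size permute_ngrams before training. A small sketch of that permutation over a token list; the chunking details are assumptions consistent with the description, where permute_ngrams = 1 yields a bag of word pieces and permute_ngrams equal to the sequence length leaves the input unchanged.

```python
import random

def permute_ngrams(word_pieces, n):
    """Split the sequence into consecutive n-gram chunks and shuffle the chunks.

    n == 1 reduces the input to a bag of word pieces; n >= len(word_pieces)
    leaves the sequence unchanged because there is only a single chunk.
    """
    chunks = [word_pieces[i:i + n] for i in range(0, len(word_pieces), n)]
    random.shuffle(chunks)
    return [tok for chunk in chunks for tok in chunk]

pieces = ["the", "sen", "##ator", "blasted", "the", "bill", "on", "tuesday"]
print(permute_ngrams(pieces, 1))    # fully shuffled word pieces
print(permute_ngrams(pieces, 4))    # 4-gram chunks shuffled
print(permute_ngrams(pieces, 100))  # unchanged
```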
Unanswerable
ace60950ccd6076bf13e12ee2717e50bc038a175
ace60950ccd6076bf13e12ee2717e50bc038a175_0
Q: How are the two different models trained? Text: Introduction SemEval Task 4 BIBREF1 tasked participating teams with identifying news articles that are misleading to their readers, a phenomenon often associated with “fake news” distributed by partisan sources BIBREF2 . We approach the problem through transfer learning to fine-tune a model for the document classification task. We use the BERT model based on the implementation of the github repository pytorch-pretrained-bert on some of the data provided by Task 4 of SemEval. BERT has been used to learn useful representations for a variety of natural language tasks, achieving state of the art performance in these tasks after being fine-tuned BIBREF0 . It is a language representation model that is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. Thus, it may be able to adequately account for complex characteristics as such blind, prejudiced reasoning and extreme bias that are important to reliably identifying hyperpartisanship in articles. We show that BERT performs well on hyperpartisan sentiment classification. We use unsupervised learning on the set of 600,000 source-labeled articles provided as part of the task, then train using supervised learning for the 645 hand-labeled articles. We believe that learning on source-labeled articles would bias our model to learn the partisanship of a source, instead of the article. Additionally, the accuracy of the model on validation data labeled by article differs heavily when the articles are labeled by publisher. Thus, we decided to use a small subset of the hand-labeled articles as our validation set for all of our experiments. As the articles are too large for the model to be trained on the full text each time, we consider the number of word-pieces that the model uses from each article a hyperparameter. A second major issue we explore is what information the model is using to make decisions. This is particularly important for BERT because neural models are often viewed like black boxes. This view is problematic for a task like hyperpartisan news detection where users may reasonably want explanations as to why an article was flagged. We specifically explore how much of the article is needed by the model, how consistent the model behaves on an article, and whether the model focuses on individual words and phrases or if it uses more global understanding. We find that the model only needs a short amount of context (100 word pieces), is very consistent throughout an article, and most of the model's accuracy arises from locally examining the article. In this paper, we demonstrate the effectiveness of BERT models for the hyperpartisan news classification task, with validation accuracy as high as 85% and test accuracy as high as 77% . We also make significant investigations into the importance of different factors relating to the articles and training in BERT's success. The remainder of this paper is organized as follows. Section SECREF2 describes previous work on the BERT model and semi-supervised learning. Section SECREF3 outlines our model, data, and experiments. Our results are presented in Section SECREF4 , with their ramifications discussed in Section SECREF5 . We close with an introduction to our system's namesake, fictional journalist Clint Buchanan, in Section SECREF6 . Related Work We build upon the Bidirectional Encoder Representations from Transformers (BERT) model. 
BERT is a deep bidirectional transformer that has been successfully tuned to a variety of tasks BIBREF0 . BERT functions as a language model over character sequences, with tokenization as described by BIBREF3 . The transformer architecture BIBREF4 is based upon relying on self-attention layers to encode a sequence. To allow the language model to be trained in a bidirectional manner instead of predicting tokens autoregressively, BERT was pre-trained to fill in the blanks for a piece of text, also known as the Cloze task BIBREF5 . Due to the small size of our training data, it was necessary to explore techniques from semi-supervised learning. BIBREF6 found pre-training a model as a language model on a larger corpus to be beneficial for a variety of experiments. We also investigated the use of self-training BIBREF7 to increase our effective training dataset size. Lastly, the motivation of examining the effective context of our classification model was based on BIBREF8 . It was found that much higher performance than expected was achieved on the ImageNet dataset BIBREF9 by aggregating predictions from local patches. This revealed that typical ImageNet models could acquire most of their performance from local decisions. Methodology Next, we describe the variations of the BERT model used in our experiments, the data we used, and details of the setup of each of our experiments. Model We adjust the standard BERT model for the hyperpartisan news task, evaluating its performance both on a validation set we construct and on the test set provided by Task 4 at SemEval. The training of the model follows the methodology of the original BERT paper. We choose to experiment with the use of the two different pre-trained versions of the BERT model, BERT-LARGE and BERT-BASE. The two differ in the number of layers and hidden sizes in the underlying model. BERT-BASE consists of 12 layers and 110 million parameters, while BERT-LARGE consists of 24 layers and 340 million parameters. Training and Test Sets We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level. Due to an intrinsic limitation of the BERT model, we are unable to consider sequences of longer than 512 word pieces for classification problems. These word pieces refer to the byte-pair encoding that BERT relies on for tokenization. These can be actual words, but less common words may be split into subword pieces BIBREF3 . The longest article in the training set contains around 6500 word pieces. To accommodate this model limitation, we work with truncated versions of the articles. We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model. Experiments We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces. We also explore whether and how the BERT models we use classify different parts of each individual article. 
Since the model can only consider a limited number of word pieces and not a full article, we test how the model judges different sections of the same article. Here, we are interested in the extent to which the same class will be assigned to each segment of an article. Finally, we test whether the model's behavior varies if we randomly shuffle word-pieces from the articles during training. Our goal in this experiment is to understand whether the model focuses on individual words and phrases or if it achieves more global understanding. We alter the the size of the chunks to be shuffled ( INLINEFORM0 ) in each iteration of this experiment, from shuffling individual word-pieces ( INLINEFORM1 ) to shuffling larger multiword chunks. Results Our results are primarily based on a validation set we constructed using the last 20% of the hand-labeled articles. It is important to note that our validation set was fairly unbalanced. About 72% of articles were not hyperpartisan and this mainly arose because we were not provided with a balanced set of hand-labeled articles. The small validation split ended up increasing the imbalance in exchange for training on a more balanced set. The test accuracies we report were done on SemEval Task 4's balanced test dataset. Importance of Pre-training Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model. We evaluated this model on the SemEval 2019 Task 4: Hyperpartisan News Detection competition's pan19-hyperpartisan-news-detection-by-article-test-dataset-2018-12-07 dataset using TIRA BIBREF10 . Our model, with a maximium sequence length of 250, had an accuracy of INLINEFORM0 . It had higher precision ( INLINEFORM1 ) than recall ( INLINEFORM2 ), for an overall F1-score of INLINEFORM3 . Importance of Sequence Length Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate. Table TABREF9 shows that the model consistently performed best at a sequence length of 100. This is a discrepancy from BERT-BASE indicating that the larger model struggled more with training on a small amount of long sequences. For our best trained BERT-LARGE, we submitted the model for evaluation on TIRA. Surprisingly, the test performance (75.1%) of the larger model was worse than the base model. The experiments in BIBREF0 consistently found improvements when using the large model. 
The main distinction here is a smaller training dataset than in their tasks. The experiments in the remaining sections use the same hyperparameters as the optimal BERT-LARGE. Model Consistency Due to the small training dataset, we tried self-training to increase our effective training set. We trained the model for 40 epochs. For the remaining 60 epochs, after each epoch we had the model make predictions on five slices of 500 unlabeled articles. If an article had the same prediction for more than four slices, we added it to the labeled training data. The model always added every article to the training set, though, since it always made the same prediction for all 5 slices. This caused self-training to be ineffective, but also revealed that the model's predictions were very consistent across segments of a single article. Effective Model Context Finally, we investigate whether the model's accuracy primarily arose from examining words or short phrases, or if the decisions were more global. We permuted the word pieces in the article at various levels of granularity. At the finest level (permute_ngrams = 1), we permuted every single word piece, forcing the model to process a bag of word pieces. At coarser levels, ngrams were permuted. As the sequence length for these experiments was 100, permute_ngrams = 100 corresponds to no permutation. The results can be found in TABREF13 . Accuracy drops a lot with only a bag of word pieces, but still reaches 67.4%. Also, most of the accuracy of the model (within 2%) is achieved with only 4-grams of word pieces, so the model is not getting much of a boost from global content. Discussion Our successful results demonstrate the adaptability of the BERT model to different tasks. With a relatively small training set of articles, we were able to train models with high accuracy on both the validation set and the test set. Our models classified different parts of a given article identically, demonstrating that the overall hyperpartisan aspects were similar across an article. In addition, the model had significantly lower accuracy when word pieces were shuffled around, but that accuracy was almost entirely restored when shuffling around chunks of four or more word pieces, suggesting that most of the important features can already be extracted at this level. In future work, we we would like to make use of the entire article. Naively, running this over each chunk would be computationally infeasible, so it may be worth doing a full pass on a few chunks and cheaper computations on other chunks. Namesake Our system is named after Clint Buchanan, a fictional journalist on the soap opera One Life to Live. Following the unbelievable stories of Clint and his associates may be one of the few tasks more difficult than identifying hyperpartisan news.
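The self-training procedure described above (pseudo-label an unlabeled article only when its slices receive the same prediction) can be sketched as follows. The model.predict interface, the way an article is cut into five slices, and the demo classifier are assumptions; the agreement threshold follows the passage's "more than four of five slices".

```python
from collections import Counter

def slice_article(article, n_slices=5):
    # Cut the article into n_slices roughly equal contiguous chunks of words.
    words = article.split()
    k = max(1, len(words) // n_slices)
    return [" ".join(words[i:i + k]) for i in range(0, len(words), k)][:n_slices]

def self_training_round(model, labeled, unlabeled, min_agree=5):
    """One pseudo-labeling pass: add an article with its predicted label only if
    at least min_agree of its five slices receive the same prediction."""
    newly_labeled = []
    for article in unlabeled:
        preds = [model.predict(s) for s in slice_article(article)]
        label, count = Counter(preds).most_common(1)[0]
        if count >= min_agree:
            newly_labeled.append((article, label))
    return labeled + newly_labeled

# Demo with a trivial stand-in classifier (an assumption, not the real model).
class ConstantModel:
    def predict(self, text):
        return 1  # always "hyperpartisan"

augmented = self_training_round(ConstantModel(), labeled=[],
                                unlabeled=["one two three four five six seven eight"])
print(augmented)
```

As the passage notes, the real model agreed on all five slices for every article, so every article was added and the procedure degenerated into plain pseudo-labeling.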
They pre-train the models on the 600,000 publisher-labeled articles as an unsupervised dataset and then fine-tune them on the small hand-labeled training set.
2e1660405bde64fb6c211e8753e52299e269998f
2e1660405bde64fb6c211e8753e52299e269998f_0
Q: How long is the dataset? Text: Introduction SemEval Task 4 BIBREF1 tasked participating teams with identifying news articles that are misleading to their readers, a phenomenon often associated with “fake news” distributed by partisan sources BIBREF2 . We approach the problem through transfer learning to fine-tune a model for the document classification task. We use the BERT model based on the implementation of the github repository pytorch-pretrained-bert on some of the data provided by Task 4 of SemEval. BERT has been used to learn useful representations for a variety of natural language tasks, achieving state of the art performance in these tasks after being fine-tuned BIBREF0 . It is a language representation model that is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. Thus, it may be able to adequately account for complex characteristics as such blind, prejudiced reasoning and extreme bias that are important to reliably identifying hyperpartisanship in articles. We show that BERT performs well on hyperpartisan sentiment classification. We use unsupervised learning on the set of 600,000 source-labeled articles provided as part of the task, then train using supervised learning for the 645 hand-labeled articles. We believe that learning on source-labeled articles would bias our model to learn the partisanship of a source, instead of the article. Additionally, the accuracy of the model on validation data labeled by article differs heavily when the articles are labeled by publisher. Thus, we decided to use a small subset of the hand-labeled articles as our validation set for all of our experiments. As the articles are too large for the model to be trained on the full text each time, we consider the number of word-pieces that the model uses from each article a hyperparameter. A second major issue we explore is what information the model is using to make decisions. This is particularly important for BERT because neural models are often viewed like black boxes. This view is problematic for a task like hyperpartisan news detection where users may reasonably want explanations as to why an article was flagged. We specifically explore how much of the article is needed by the model, how consistent the model behaves on an article, and whether the model focuses on individual words and phrases or if it uses more global understanding. We find that the model only needs a short amount of context (100 word pieces), is very consistent throughout an article, and most of the model's accuracy arises from locally examining the article. In this paper, we demonstrate the effectiveness of BERT models for the hyperpartisan news classification task, with validation accuracy as high as 85% and test accuracy as high as 77% . We also make significant investigations into the importance of different factors relating to the articles and training in BERT's success. The remainder of this paper is organized as follows. Section SECREF2 describes previous work on the BERT model and semi-supervised learning. Section SECREF3 outlines our model, data, and experiments. Our results are presented in Section SECREF4 , with their ramifications discussed in Section SECREF5 . We close with an introduction to our system's namesake, fictional journalist Clint Buchanan, in Section SECREF6 . Related Work We build upon the Bidirectional Encoder Representations from Transformers (BERT) model. 
BERT is a deep bidirectional transformer that has been successfully tuned to a variety of tasks BIBREF0 . BERT functions as a language model over character sequences, with tokenization as described by BIBREF3 . The transformer architecture BIBREF4 is based upon relying on self-attention layers to encode a sequence. To allow the language model to be trained in a bidirectional manner instead of predicting tokens autoregressively, BERT was pre-trained to fill in the blanks for a piece of text, also known as the Cloze task BIBREF5 . Due to the small size of our training data, it was necessary to explore techniques from semi-supervised learning. BIBREF6 found pre-training a model as a language model on a larger corpus to be beneficial for a variety of experiments. We also investigated the use of self-training BIBREF7 to increase our effective training dataset size. Lastly, the motivation of examining the effective context of our classification model was based on BIBREF8 . It was found that much higher performance than expected was achieved on the ImageNet dataset BIBREF9 by aggregating predictions from local patches. This revealed that typical ImageNet models could acquire most of their performance from local decisions. Methodology Next, we describe the variations of the BERT model used in our experiments, the data we used, and details of the setup of each of our experiments. Model We adjust the standard BERT model for the hyperpartisan news task, evaluating its performance both on a validation set we construct and on the test set provided by Task 4 at SemEval. The training of the model follows the methodology of the original BERT paper. We choose to experiment with the use of the two different pre-trained versions of the BERT model, BERT-LARGE and BERT-BASE. The two differ in the number of layers and hidden sizes in the underlying model. BERT-BASE consists of 12 layers and 110 million parameters, while BERT-LARGE consists of 24 layers and 340 million parameters. Training and Test Sets We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level. Due to an intrinsic limitation of the BERT model, we are unable to consider sequences of longer than 512 word pieces for classification problems. These word pieces refer to the byte-pair encoding that BERT relies on for tokenization. These can be actual words, but less common words may be split into subword pieces BIBREF3 . The longest article in the training set contains around 6500 word pieces. To accommodate this model limitation, we work with truncated versions of the articles. We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model. Experiments We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces. We also explore whether and how the BERT models we use classify different parts of each individual article. 
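The 80/20 split of the 645 hand-labeled articles described above yields 516 training and 129 validation articles; the tiny sketch below just makes those counts explicit, with placeholder strings standing in for the articles.

```python
# Illustrative stand-ins for the 645 hand-labeled articles.
hand_labeled = [f"article_{i}" for i in range(645)]

split = int(0.8 * len(hand_labeled))                       # first 80% -> 516 articles
train, valid = hand_labeled[:split], hand_labeled[split:]  # remaining 129 for validation
print(len(train), len(valid))                              # 516 129
```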
Since the model can only consider a limited number of word pieces and not a full article, we test how the model judges different sections of the same article. Here, we are interested in the extent to which the same class will be assigned to each segment of an article. Finally, we test whether the model's behavior varies if we randomly shuffle word-pieces from the articles during training. Our goal in this experiment is to understand whether the model focuses on individual words and phrases or if it achieves more global understanding. We alter the the size of the chunks to be shuffled ( INLINEFORM0 ) in each iteration of this experiment, from shuffling individual word-pieces ( INLINEFORM1 ) to shuffling larger multiword chunks. Results Our results are primarily based on a validation set we constructed using the last 20% of the hand-labeled articles. It is important to note that our validation set was fairly unbalanced. About 72% of articles were not hyperpartisan and this mainly arose because we were not provided with a balanced set of hand-labeled articles. The small validation split ended up increasing the imbalance in exchange for training on a more balanced set. The test accuracies we report were done on SemEval Task 4's balanced test dataset. Importance of Pre-training Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model. We evaluated this model on the SemEval 2019 Task 4: Hyperpartisan News Detection competition's pan19-hyperpartisan-news-detection-by-article-test-dataset-2018-12-07 dataset using TIRA BIBREF10 . Our model, with a maximium sequence length of 250, had an accuracy of INLINEFORM0 . It had higher precision ( INLINEFORM1 ) than recall ( INLINEFORM2 ), for an overall F1-score of INLINEFORM3 . Importance of Sequence Length Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate. Table TABREF9 shows that the model consistently performed best at a sequence length of 100. This is a discrepancy from BERT-BASE indicating that the larger model struggled more with training on a small amount of long sequences. For our best trained BERT-LARGE, we submitted the model for evaluation on TIRA. Surprisingly, the test performance (75.1%) of the larger model was worse than the base model. The experiments in BIBREF0 consistently found improvements when using the large model. 
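The accuracy, precision, recall, and F1 figures reported above follow the standard binary-classification definitions; the sketch below computes them from scratch, with purely illustrative labels and predictions rather than the system's actual outputs.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = hyperpartisan)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, precision, recall, f1

# Illustrative labels only.
print(binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```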
The main distinction here is a smaller training dataset than in their tasks. The experiments in the remaining sections use the same hyperparameters as the optimal BERT-LARGE. Model Consistency Due to the small training dataset, we tried self-training to increase our effective training set. We trained the model for 40 epochs. For the remaining 60 epochs, after each epoch we had the model make predictions on five slices of 500 unlabeled articles. If an article had the same prediction for more than four slices, we added it to the labeled training data. The model always added every article to the training set, though, since it always made the same prediction for all 5 slices. This caused self-training to be ineffective, but also revealed that the model's predictions were very consistent across segments of a single article. Effective Model Context Finally, we investigate whether the model's accuracy primarily arose from examining words or short phrases, or if the decisions were more global. We permuted the word pieces in the article at various levels of granularity. At the finest level (permute_ngrams = 1), we permuted every single word piece, forcing the model to process a bag of word pieces. At coarser levels, ngrams were permuted. As the sequence length for these experiments was 100, permute_ngrams = 100 corresponds to no permutation. The results can be found in TABREF13 . Accuracy drops a lot with only a bag of word pieces, but still reaches 67.4%. Also, most of the accuracy of the model (within 2%) is achieved with only 4-grams of word pieces, so the model is not getting much of a boost from global content. Discussion Our successful results demonstrate the adaptability of the BERT model to different tasks. With a relatively small training set of articles, we were able to train models with high accuracy on both the validation set and the test set. Our models classified different parts of a given article identically, demonstrating that the overall hyperpartisan aspects were similar across an article. In addition, the model had significantly lower accuracy when word pieces were shuffled around, but that accuracy was almost entirely restored when shuffling around chunks of four or more word pieces, suggesting that most of the important features can already be extracted at this level. In future work, we we would like to make use of the entire article. Naively, running this over each chunk would be computationally infeasible, so it may be worth doing a full pass on a few chunks and cheaper computations on other chunks. Namesake Our system is named after Clint Buchanan, a fictional journalist on the soap opera One Life to Live. Following the unbelievable stories of Clint and his associates may be one of the few tasks more difficult than identifying hyperpartisan news.
645 hand-labeled articles; 600,000 publisher-labeled articles
82a28c1ed7988513d5984f6dcacecb7e90f64792
82a28c1ed7988513d5984f6dcacecb7e90f64792_0
Q: How big are negative effects of proposed techniques on high-resource tasks? Text: Introduction Multiple tasks may often benefit from others by leveraging more available data. For natural language tasks, a simple approach is to pre-train embeddings BIBREF0, BIBREF1 or a language model BIBREF2, BIBREF3 over a large corpus. The learnt representations may then be used for upstream tasks such as part-of-speech tagging or parsing, for which there is less annotated data. Alternatively, multiple tasks may be trained simultaneously with either a single model or by sharing some model components. In addition to potentially benefit from multiple data sources, this approach also reduces the memory use. However, multi-task models of similar size as single-task baselines often under-perform because of their limited capacity. The underlying multi-task model learns to improve on harder tasks, but may hit a plateau, while simpler (or data poor) tasks can be over-trained (over-fitted). Regardless of data complexity, some tasks may be forgotten if the schedule is improper, also known as catastrophic forgetting BIBREF4. In this paper, we consider multilingual neural machine translation (NMT), where both of the above pathological learning behaviors are observed, sub-optimal accuracy on high-resource, and forgetting on low-resource language pairs. Multilingual NMT models are generally trained by mixing language pairs in a predetermined fashion, such as sampling from each task uniformly BIBREF5 or in proportion to dataset sizes BIBREF6. While results are generally acceptable with a fixed schedule, it leaves little control over the performance of each task. We instead consider adaptive schedules that modify the importance of each task based on their validation set performance. The task schedule may be modified explicitly by controlling the probability of each task being sampled. Alternatively, the schedule may be fixed, with the impact of each task controlled by scaling the gradients or the learning rates. In this case, we highlight important subtleties that arise with adaptive learning rate optimizers such as Adam BIBREF7. Our proposed approach improves the low-resource pair accuracy while keeping the high resource accuracy intact within the same multi-task model. Explicit schedules A common approach for multi-task learning is to train on each task uniformly BIBREF5. Alternatively, each task may be sampled following a fixed non-uniform schedule, often favoring either a specific task of interest or tasks with larger amounts of data BIBREF6, BIBREF8. Kipperwasser and Ballesteros BIBREF8 also propose variable schedules that increasingly favor some tasks over time. As all these schedules are pre-defined (as a function of the training step or amount of available training data), they offer limited control over the performance of all tasks. As such, we consider adaptive schedules that vary based on the validation performance of each task during training. To do so, we assume that the baseline validation performance of each task, if trained individually, is known in advance. When training a multi-task model, validation scores are continually recorded in order to adjust task sampling probabilities. The unnormalized score $w_i$ of task $i$ is given by where $s_i$ is the latest validation BLEU score and $b_i$ is the (approximate) baseline performance. Tasks that perform poorly relative to their baseline will be over-sampled, and vice-versa for language pairs with good performance. 
The hyper-parameter $\alpha $ controls how agressive oversampling is, while $\epsilon $ prevents numerical errors and slightly smooths out the distribution. Final probabilities are simply obtained by dividing the raw scores by their sum. Implicit schedules Explicit schedules may possibly be too restrictive in some circumstances, such as models trained on a very high number of tasks, or when one task is sampled much more often than others. Instead of explicitly varying task schedules, a similar impact may be achieved through learning rate or gradient manipulation. For example, the GradNorm BIBREF9 algorithm scales task gradients based on the magnitude of the gradients as well as on the training losses. As the training loss is not always a good proxy for validation and test performance, especially compared to a single-task baseline, we continue using validation set performance to guide gradient scaling factors. Here, instead of the previous weighting schemes, we consider one that satisfies the following desiderata. In addition to favoring tasks with low relative validation performance, we specify that task weights are close to uniform early on, when performance is still low on all tasks. We also as set a minimum task weight to avoid catastrophic forgetting. Task weights $w_i, i=1,...,N$, follow where $S_i = \frac{s_i}{b_i}$ and $\overline{S}$ is the average relative score $(\sum _{j=1}^N S_j)/N$. $\gamma $ sets the floor to prevent catastrophic forgetting, $\alpha $ adjusts how quickly and strongly the schedule may deviate from uniform, while a small $\beta $ emphasizes deviations from the mean score. With two tasks, the task weights already sum up to two, as in GradNorm BIBREF9. With more tasks, the weights may be adjusted so their their sum matches the number of tasks. Implicit schedules ::: Optimization details Scaling either the gradients $g_t$ or the per-task learning rates $\alpha $ is equivalent with standard stochastic gradient descent, but not with adaptive optimizers such as Adam BIBREF7, whose update rule is given in Eq. DISPLAY_FORM5. Moreover, sharing or not the optimizer accumulators (eg. running average of 1st and 2nd moment $\hat{m}_t$ and $\hat{v}_t$ of the gradients) is also impactful. Using separate optimizers and simultaneously scaling the gradients of individual tasks is ineffective. Indeed, Adam is scale-insensitive because the updates are divided by the square root of the second moment estimate $\hat{v}_t$. The opposite scenario, a shared optimizer across tasks with scaled learning rates, is also problematic as the momentum effect ($\hat{m}_t$) will blur all tasks together at every update. All experiments we present use distinct optimizers, with scaled learning rates. The converse, a shared optimizer with scaled gradients, could also potentially be employed. Experiments ::: Data We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subwords units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10. 
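The explicit validation-based schedule described above maps each task's validation BLEU and baseline to a sampling probability. Since the exact scoring equation is not reproduced in the text, the sketch below uses one plausible form consistent with the description ($w_i = (b_i/s_i)^{\alpha} + \epsilon$, then normalized) and should be read as an illustration rather than the paper's formula.

```python
import random

def task_probabilities(bleu, baselines, alpha=1.0, eps=0.05):
    """Map validation BLEU scores to sampling probabilities.

    Assumed form: w_i = (b_i / s_i)**alpha + eps, normalized to sum to 1.
    Tasks below their baseline (s_i < b_i) are oversampled; eps keeps every
    task's probability strictly positive and smooths the distribution.
    """
    raw = [(b / max(s, 1e-6)) ** alpha + eps for s, b in zip(bleu, baselines)]
    total = sum(raw)
    return [w / total for w in raw]

def sample_task(probs):
    # Draw the index of the task whose batch is used for the next update.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# En-De trailing its baseline, En-Fr matching it.
probs = task_probabilities(bleu=[22.0, 35.0], baselines=[24.0, 35.0], alpha=2.0)
print(probs)                 # En-De gets the larger sampling probability
print(sample_task(probs))
```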
Experiments ::: Data We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subword units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10. Experiments ::: Models All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines. As the source language is the same for both tasks, in subsequent experiments only the encoder is shared BIBREF5. For En-Fr, 10% dropout is applied as in BIBREF11. After observing severe overfitting on En-De in early experiments, the rate is increased to 25% for this lower-resource task. All models are trained on 16 GPUs, using the Adam optimizer with an inverse-square-root learning rate schedule BIBREF11 and warmup. Experiments ::: Results The main results are summarized in Table TABREF10. Considering the amount of training data, we trained the single-task baselines for 400K and 600K steps for En-De and En-Fr respectively, while multi-task models were trained for 900K steps. All reported scores are the average of the last 20 checkpoints. Within each general schedule type, model selection was performed by maximizing the average development BLEU score across the two tasks. With uniform sampling, results improve by more than 1 BLEU point on En-De, but there is a significant degradation on En-Fr. Sampling En-Fr with a 75% probability gives similar results on En-De, but the En-Fr performance is now comparable to the baseline. Explicit adaptive scheduling behaves similarly on En-De and somewhat trails the En-Fr baseline. For implicit schedules, GradNorm performs reasonably strongly on En-De, but suffers on En-Fr, although slightly less than with uniform sampling. Implicit validation-based scheduling still improves upon the En-De baseline, but less than the other approaches. On En-Fr, this approach performs about as well as the baseline and the multilingual model with a fixed 75% En-Fr sampling probability. Overall, adaptive approaches satisfy our desiderata of satisfactory performance on both tasks, but a hyper-parameter search over constant schedules led to slightly better results. One main appeal of adaptive models is their potential ability to scale much better to a very large number of tasks, where a large hyper-parameter search would prove prohibitively expensive. Additional results are presented in the appendix.
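For reference, the base configuration listed in the Models subsection maps directly onto the stock PyTorch Transformer module. The snippet below is only a sketch of that configuration; the experiments themselves were presumably run with a dedicated NMT codebase, and embeddings, output projection, and the shared-encoder setup are not shown here.

```python
import torch

# Transformer "base" configuration as described above: 6 encoder and 6 decoder
# layers, model dimension 512, feed-forward dimension 2048, 8 attention heads,
# and 10% dropout as used for the En-Fr setting.
transformer_base = torch.nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=2048,
    dropout=0.1,
)
```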
Discussion and other related work To train multi-task vision models, Liu et al. BIBREF13 propose a similar dynamic weight average approach. Task weights are controlled by the ratio between a recent training loss and the loss at a previous time step, so that tasks that progress faster will be downweighted, while straggling ones will be upweighted. This approach contrasts with the curriculum learning framework proposed by Matiisen et al. BIBREF14, where tasks with faster progress are preferred. Loss progress, as well as a few other signals, was also employed by Graves et al. BIBREF15, who formulated curriculum learning as a multi-armed bandit problem. One advantage of using progress as a signal is that the final baseline losses are not needed. Dynamic weight average could also be adapted to employ a validation metric as opposed to the training loss. Alternatively, uncertainty may be used to adjust multi-task weights BIBREF16. Sener and Koltun BIBREF17 discuss multi-task learning as a multi-objective optimization. Their objective tries to achieve Pareto optimality, so that a solution to a multi-task problem cannot improve on one task without hurting another. Their approach is learning-based and, contrary to ours, does not require a somewhat ad hoc mapping between task performance (or progress) and task weights. However, Pareto optimality of the training losses does not guarantee Pareto optimality of the evaluation metrics. Xu et al. present AutoLoss BIBREF18, which uses reinforcement learning to train a controller that determines the optimization schedule. In particular, they apply their framework to (single language pair) NMT with auxiliary tasks. With implicit scheduling approaches, the effective learning rates are still dominated by the underlying predefined learning rate schedule. For single tasks, hypergradient descent BIBREF19 adjusts the global learning rate by considering the direction of the gradient and of the previous update. This technique could likely be adapted for multi-task learning, as long as the tasks are sampled randomly. Tangentially, adaptive approaches may behave poorly if validation performance varies much faster than the rate at which it is computed. Figure FIGREF36 (appendix) illustrates a scenario, with an alternative parameter sharing scheme, where BLEU scores and task probabilities oscillate wildly. As one task is favored, the other is catastrophically forgotten. When new validation scores are computed, the sampling weights change drastically, and the first task now begins to be forgotten. Conclusion We have presented adaptive schedules for multilingual machine translation, where task weights are controlled by validation BLEU scores. The schedules may either be explicit, directly changing how tasks are sampled, or implicit, adjusting the optimization process. Compared to single-task baselines, performance improved on the low-resource En-De task and was comparable on the high-resource En-Fr task. For future work, in order to increase the utility of adaptive schedulers, it would be beneficial to explore their use on a much larger number of simultaneous tasks. In this scenario, they may prove more useful, as a hyper-parameter search over fixed schedules would become cumbersome. Impact of hyper-parameters In this appendix, we present the impact of various hyper-parameters for the different schedule types. Figure FIGREF11 illustrates the effect of sampling ratios in explicit constant scheduling. We vary the sampling ratio for a task from 10% to 90% and evaluate the development and test BLEU scores obtained by using this fixed schedule throughout training. Considering the disproportionate dataset sizes of the two tasks (1:40), oversampling the high-resource task yields better overall performance for both tasks. While a uniform sampling ratio (50%-50%) favors the low-resource task, more balanced results are obtained with a 75%-25% split favoring the high-resource task. Explicit Dev-Based schedule results are illustrated in Figure FIGREF16 below, where we explore varying the $\alpha $ and $\epsilon $ parameters to control oversampling and forgetting. Implicit validation-based scheduling progress Here we present how the task weights, learning rates and validation BLEU scores evolve over time with an implicit schedule. For the implicit schedule hyper-parameters, we set $\alpha =16$, $\beta =0.1$, $\gamma =0.05$, with baselines $b_i$ of 24 and 35 for En-De and En-Fr respectively.
For the best performing model, we used an inverse-square-root learning rate schedule BIBREF11 with a learning rate of 1.5 and 40K warm-up steps. Task weights are adaptively changed by the scheduler during training (Figure FIGREF31 top-left), and the predicted weights are used to adjust the learning rates for each task (Figure FIGREF31 top-right). Following Eq. DISPLAY_FORM3, the computed relative scores for each task, $S_j$, are illustrated in Figure FIGREF31 bottom-left. Finally, the progression of the validation set BLEU scores, with their corresponding baselines shown as solid horizontal lines, is given in Figure FIGREF31 bottom-right. Possible training instabilities This appendix presents a failed experiment exhibiting wild oscillations. All encoder parameters were tied, as well as the first four layers of the decoder and the softmax. An explicit schedule was employed.
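For completeness, the underlying learning rate schedule quoted above can be sketched as follows. How the stated rate of 1.5 enters the schedule is not spelled out in the text, so treating it as a multiplicative scale on the usual Transformer-style inverse-square-root schedule is an assumption.

```python
def inverse_sqrt_lr(step, warmup_steps=40000, scale=1.5, d_model=512):
    """Inverse-square-root schedule with linear warmup (Transformer-style).

    Linear warmup for the first `warmup_steps` updates, then decay
    proportional to the inverse square root of the step number.
    """
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```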
The negative effects were insignificant: performance on the high-resource En-Fr task remained comparable to its single-task baseline.
d4a6f5034345036dbc2d4e634a8504f79d42ca69
d4a6f5034345036dbc2d4e634a8504f79d42ca69_0
Q: What datasets are used for experiments? Text: (same paper text as above)
The WMT'14 English-French (En-Fr) and English-German (En-De) datasets.
54fa5196d0e6d5e84955548f4ef51bfd9b707a32
54fa5196d0e6d5e84955548f4ef51bfd9b707a32_0
Q: Are these techniques used in training multilingual models, and on what languages? Text: (same paper text as above)
English to French and English to German
a997fc1a62442fd80d1873cd29a9092043f025ad
a997fc1a62442fd80d1873cd29a9092043f025ad_0
Q: What non-adaptive baselines are used? Text: (same paper text as above)
Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers