Schema of the rows below (column name, type, and string-length range as reported by the dataset viewer):
question_id: string, 40 characters (identifier hash)
question: string, 4 to 171 characters
answer: sequence (list of answer strings)
evidence: sequence (list of lists of supporting passages; may be empty)
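As a reading aid for the flattened rows that follow, here is a minimal, hypothetical sketch in plain Python of how records with this schema could be represented and parsed. It assumes the dump has been exported as JSON lines; the file name qa_dump.jsonl and the QARecord class are illustrative assumptions, not part of the original dataset description.

```python
import json
from dataclasses import dataclass
from typing import List

@dataclass
class QARecord:
    question_id: str           # 40-character identifier hash
    question: str              # natural-language question (4-171 characters)
    answer: List[str]          # one or more answer strings (may be "Unanswerable")
    evidence: List[List[str]]  # lists of supporting passages (may be empty)

def load_records(path: str) -> List[QARecord]:
    """Parse a JSON-lines export where each non-empty line holds one record."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            row = json.loads(line)
            records.append(QARecord(
                question_id=row["question_id"],
                question=row["question"],
                answer=row["answer"],
                evidence=row["evidence"],
            ))
    return records

if __name__ == "__main__":
    for rec in load_records("qa_dump.jsonl"):  # hypothetical file name
        print(rec.question_id, rec.question)
```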
05671d068679be259493df638d27c106e7dd36d0
What is the performance proposed model achieved on MathQA?
[ "Operation accuracy: 71.89\nExecution accuracy: 55.95" ]
[ [ "Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline." ] ]
a3a871ca2417b2ada9df1438d282c45e4b4ad668
How do previous methods perform on the Switchboard Dialogue Act and DailyDialog datasets?
[ "Table TABREF20 , Table TABREF22, Table TABREF23" ]
[ [ "We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human to human telephonic conversations about the given topic. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which is different from us and previous work. The difference in the number of labels is mainly due to the special label “+”, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing with BIBREF26, which concatenated the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. It is critical for fair comparison because there are nearly 8% data has the label “+”. Lacking standard splits, we followed the training/validation/test splits by BIBREF14. DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style. It covers various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenized, and then sentences were tokenized by WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the Out-of-Vocabulary problem.", "In this section, we evaluate the proposed approaches on SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. It is worth noting that BIBREF10 combined GloVeBIBREF28 and pre-trained ELMo representationsBIBREF29 as word embeddings. However, in our work, we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. For baseline models, both CNN and LSTM, got similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to do recognition based on single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed LSTM and CNN models for single sentence classification. However, it was still much lower than the models based on context information. It indicates that context information is crucial in the DA recognition task. BERT can boost performance in a large margin. However, it costs too much time and resources. In this reason, we chose LSTM as our utterance encoder in further experiment.", "To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result. It is worth noting that it is actually the same as single sentence classification when $P = 0$ (without any context provided). First, we set $W$ to 1 to discuss how the length of context padding will affect. As seen in the result, the accuracy increased when more context padding was used for both LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. 
Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With sliding window size increasing, the more context was involved together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when window size was 1 and context padding length was 5. When window size increased, the performances of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently, which achieved the best performance when window size was 10, and the model was more stable and robust to the different setting of window size.", "The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models. From table TABREF18 we can see that, the average dialogue length $|U|$ in DailyDialog is much shorter than the average length of SwDA. So, in our experiment, we set the maximum of the $W$ to 10, which almost covers the whole utterances in the dialogue. Using the same way as SwDA dataset, we, first, set W to 1 and increased the length of context padding. As seen, modeling local context information, hierarchical models yielded significant improvement than sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length P to 2 and increased the size of sliding window size W. From the experiments, we can see that LSTM+Attention always got a little better accuracy than LSTM+BLSTM. With window size increasing, the performances of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For the longer sliding window, the performance of LSTM+LC Attention was still better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performances of these three models dropped without subsequent utterances." ] ]
0fcac64544842dd06d14151df8c72fc6de5d695c
What previous methods is the proposed method compared against?
[ "BLSTM+Attention+BLSTM\nHierarchical BLSTM-CRF\nCRF-ASN\nHierarchical CNN (window 4)\nmLSTM-RNN\nDRLM-Conditional\nLSTM-Softmax\nRCNN\nCNN\nCRF\nLSTM\nBERT" ]
[ [] ]
4e841138f307839fd2c212e9f02489e27a5f830c
What is dialogue act recognition?
[ "DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. " ]
[ [ "Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing DAs of a wholly online service record between customer and agent is beneficial to mine QA-pairs, which are selected and clustered then to expand the knowledge base. DA recognition is challenging due to the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from Switchboard dataset. In this example, utterance “Okay.” corresponds to two different DA labels within different semantic context.", "DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat utterance independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16." ] ]
37103369e5792ece49a71666489016c4cea94cda
Which natural language(s) are studied?
[ "Unanswerable" ]
[ [] ]
479d334b79c1eae3f2aa3701d28aa0d8dd46036a
Does the performance necessarily drop when more control is desired?
[ "Yes" ]
[ [ "Performance INLINEFORM0 Trade-off: To see if the selector affects performance, we also ask human annotators to judge the text fluency. The fluency score is computed as the average number of text being judged as fluent. We include generations from the standard Enc-Dec model. Table TABREF32 shows the best fluency is achieved for Enc-Dec. Imposing a content selector always affects the fluency a bit. The main reason is that when the controllability is strong, the change of selection will directly affect the text realization so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. For SS and Bo.Up, the drop of fluency is significant because of the gap of soft approximation and the independent training procedure. In general, VRS does properly decouple content selection from the enc-dec architecture, with only tiny degrade on the fluency." ] ]
b02d2d351bd2e49d4d59db0a8a6ef23cb90bfbc4
How does the model perform in comparison to end-to-end headline generation models?
[ "Unanswerable" ]
[ [] ]
a035472a5c6cf238bed62b63d28100c546d40bd5
How is the model trained to do only content selection?
[ "target some heuristically extracted contents, treat INLINEFORM1 as a latent variable and co-train selector and generator by maximizing the marginal data likelihood, reinforcement learning to approximate the marginal likelihood, Variational Reinforce-Select (VRS) which applies variational inference BIBREF10 for variance reduction" ]
[ [ "Our goal is to decouple the content selection from the decoder by introducing an extra content selector. We hope the content-level diversity can be fully captured by the content selector for a more interpretable and controllable generation process. Following BIBREF6 , BIBREF16 , we define content selection as a sequence labeling task. Let INLINEFORM0 denote a sequence of binary selection masks. INLINEFORM1 if INLINEFORM2 is selected and 0 otherwise. INLINEFORM3 is assumed to be independent from each other and is sampled from a bernoulli distribution INLINEFORM4 . INLINEFORM6 is the bernoulli parameter, which we estimate using a two-layer feedforward network on top of the source encoder. Text are generated by first sampling INLINEFORM7 from INLINEFORM8 to decide which content to cover, then decode with the conditional distribution INLINEFORM9 . The text is expected to faithfully convey all selected contents and drop unselected ones. Fig. FIGREF8 depicts this generation process. Note that the selection is based on the token-level context-aware embeddings INLINEFORM10 and will maintain information from the surrounding contexts. It encourages the decoder to stay faithful to the original information instead of simply fabricating random sentences by connecting the selected tokens.", "The most intuitive way is training the content selector to target some heuristically extracted contents. For example, we can train the selector to select overlapped words between the source and target BIBREF6 , sentences with higher tf-idf scores BIBREF20 or identified image objects that appear in the caption BIBREF21 . A standard encoder-decoder model is independently trained. In the testing stage, the prediction of the content selector is used to hard-mask the attention vector to guide the text generation in a bottom-up way. Though easy to train, Bottom-up generation has the following two problems: (1) The heuristically extracted contents might be coarse and cannot reflect the variety of human languages and (2) The selector and decoder are independently trained towards different objectives thus might not adapt to each other well.", "INLINEFORM0 as Latent Variable: Another way is to treat INLINEFORM1 as a latent variable and co-train selector and generator by maximizing the marginal data likelihood. By doing so, the selector has the potential to automatically explore optimal selecting strategies best fit for the corresponding generator component.", "Reinforce-select (RS) BIBREF24 , BIBREF9 utilizes reinforcement learning to approximate the marginal likelihood. Specifically, it is trained to maximize a lower bound of the likelihood by applying the Jensen inequalily: DISPLAYFORM0", "We propose Variational Reinforce-Select (VRS) which applies variational inference BIBREF10 for variance reduction. Instead of directly integrating over INLINEFORM0 , it imposes a proposal distribution INLINEFORM1 for importance sampling. The marginal likelihood is lower bounded by: DISPLAYFORM0" ] ]
3213529b6405339dfd0c1d2a0f15719cdff0fa93
What is the baseline model used?
[ "The baseline models used are DrQA modified to support answering no answer questions, DrQA+CoQA which is pre-tuned on CoQA dataset, vanilla BERT, BERT+review tuned on domain reviews, BERT+CoQA tuned on the supervised CoQA data" ]
[ [ "DrQA is a CRC baseline coming with the CoQA dataset. Note that this implementation of DrQA is different from DrQA for SQuAD BIBREF8 in that it is modified to support answering no answer questions by having a special token unknown at the end of the document. So having a span with unknown indicates NO ANSWER. This baseline answers the research question RQ1.", "DrQA+CoQA is the above baseline pre-tuned on CoQA dataset and then fine-tuned on $(\\text{RC})_2$ . We use this baseline to show that even DrQA pre-trained on CoQA is sub-optimal for RCRC. This baseline is used to answer RQ1 and RQ3.", "BERT is the vanilla BERT model directly fine-tuned on $(\\text{RC})_2$ . We use this baseline for ablation study on the effectiveness of pre-tuning. All these BERT's variants are used to answer RQ2.", "BERT+review first tunes BERT on domain reviews using the same objectives as BERT pre-training and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that a simple domain-adaptation of BERT is not good.", "BERT+CoQA first fine-tunes BERT on the supervised CoQA data and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that pre-tuning is very competitive even compared with models trained from large-scale supervised data. This also answers RQ3." ] ]
70afd28b0ecc02eb8e404e7ff9f89879bf71a670
Is this auto translation tool based on neural networks?
[ "Yes" ]
[ [ "CodeInternational: A tool which can translate code between human languages, powered by Google Translate." ] ]
42c02c554ab4ceaf30a8ca770be4f271887554c2
What are results of public code repository study?
[ "Non-English code is a large-scale phenomena., Transliteration is common in identifiers for all languages., Languages clusters into three distinct groups based on how speakers use identifiers/comments/transliteration., Non-latin script users write comments in their L1 script but write identifiers in English., Right-to-left (RTL) language scripts, such as Arabic, have no observed prevalence on GitHub identifiers" ]
[ [ "How do non-English speakers program in a language like Java, where the keywords and core libraries are written in English? We employ a data driven approach to tell the story of non-English code and inform the decisions we made in our auto-translator. We analyzed Java repositories on GitHub, the largest host of source code in the world, where 1.1 million unique users host 2.9 million public Java projects. We downloaded and analyzed the human language used for writing comments (in Java code), naming identifiers (method and variable names), and writing git commit messages. We focused on Java code as it is both one of the most popular source-code languages on GitHub and in the classroom. A selection of results from this study are that:", "Non-English code is a large-scale phenomena.", "Transliteration is common in identifiers for all languages.", "Languages clusters into three distinct groups based on how speakers use identifiers/comments/transliteration.", "Non-latin script users write comments in their L1 script but write identifiers in English.", "Right-to-left (RTL) language scripts, such as Arabic, have no observed prevalence on GitHub identifiers, implying that existing coders who speak RTL languages have substantial barriers in using their native script in code." ] ]
5f0bb32d70ee8e4c4c59dc5c193bc0735fd751cc
Where is the dataset from?
[ "dialogue simulator" ]
[ [ "Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances. Using a dialogue simulator offers us multiple advantages. First, it ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase, thus creating a much diverse dataset. Second, simulated dialogues do not require manual annotation, as opposed to a Wizard-of-Oz setup BIBREF17, which is a common approach utilized in other datasets BIBREF0. It has been shown that such datasets suffer from substantial annotation errors BIBREF18. Thirdly, using a simulator greatly simplifies the data collection task and instructions as only paraphrasing is needed to achieve a natural dialogue. This is particularly important for creating a large dataset spanning multiple domains." ] ]
a88a454ac1a1230263166fd824e5daebb91cb05a
What data augmentation techniques are used?
[ "back translation between English and Chinese" ]
[ [ "Team 9 BIBREF24: This team submitted the winning entry, beating the second-placed team by around 9% in terms of joint goal accuracy. They use two separate models for categorical and non-categorical slots, and treat numerical categorical slots as non-categorical. They also use the entire dialogue history as input. They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy." ] ]
bbaf7cbae88c085faa6bbe3319e4943362fe1ad4
Do all teams use neural networks for their models?
[ "Unanswerable" ]
[ [] ]
a6b99b7f32fb79a7db996fef76e9d83def05c64b
How are the models evaluated?
[ "Active Intent Accuracy, Requested Slot F1, Average Goal Accuracy, Joint Goal Accuracy" ]
[ [ "We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.", "Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.", "Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.", "Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. This is the average accuracy of predicting the value of a slot correctly.", "Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly." ] ]
d47c074012eae27426cd700f841fd8bf490dcc7b
What is the baseline model?
[ "Unanswerable" ]
[ [] ]
b43fa27270eeba3e80ff2a03754628b5459875d6
What domains are present in the data?
[ "Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather" ]
[ [] ]
458dbf217218fcab9153e33045aac08a2c8a38c6
How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?
[ "Total number of annotated data:\nSemeval'15: 10712\nSemeval'16: 28632\nTass'15: 69000\nSentipol'14: 6428" ]
[ [] ]
cebf3e07057339047326cb2f8863ee633a62f49f
In which languages did the approach outperform the reported results?
[ "Arabic, German, Portuguese, Russian, Swedish" ]
[ [ "In BIBREF3 , BIBREF2 , the authors study the effect of translation in sentiment classifiers; they found better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21 , the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. Both, on the technical side, both papers use fine tuned classifiers plus a variety of pre-processing techniques to prove their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, in the overall, B4MSA reaches superior performances regardless of the language. Our approach achieves those performance's levels since it optimizes a set of parameters carefully selected to work on a variety of languages and being robust to informal writing. The latter problem is not properly tackled in many cases." ] ]
ef8099e2bc0ac4abc4f8216740e80f2fa22f41f6
What eight language are reported on?
[ "Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish" ]
[ [ "In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results locate our approach in good positions for all considered competitions; and excellent results in the other five languages tested. Finally, even when our method is almost cross-language, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques." ] ]
1e68a1232ab09b6bff506e442acc8ad742972102
What are the components of the multilingual framework?
[ "text-transformations to the messages, vector space model, Support Vector Machine" ]
[ [ "In a nutshell, B4MSA starts by applying text-transformations to the messages, then transformed text is represented in a vector space model (see Subsection SECREF13 ), and finally, a Support Vector Machine (with linear kernel) is used as the classifier. B4MSA uses a number of text transformations that are categorized in cross-language features (see Subsection SECREF3 ) and language dependent features (see Subsection SECREF9 ). It is important to note that, all the text-transformations considered are either simple to implement or there is a well-known library (e.g. BIBREF6 , BIBREF7 ) to use them. It is important to note that to maintain the cross-language property, we limit ourselves to not use additional knowledge, this include knowledge from affective lexicons or models based on distributional semantics." ] ]
0bd992a6a218331aa771d922e3c7bb60b653949a
Is the proposed method compared to previous methods?
[ "Yes" ]
[ [ "Following the algorithms above, with the consideration of both the advantages and disadvantages of them, in this project, I am going to use a modified method: sound-class based skip-grams with bipartite networks (BipSkip). The whole procedure is quite straightforward and could be divided into three steps. First step: the word pair and their skip-grams are two sets of the bipartite networks. The second step is optional, which is to refine the bipartite network. Before I run the program, I will be asked to input a threshold, which determines if the program should delete the skip-gram nodes linked to fewer word nodes than the threshold itself. According to the experiment, even though I did not input any threshold as one of the parameters, the algorithm could still give the same answer but with more executing time. In the last step, the final generated bipartite graph would be connected to a monopartite graph and partitioned into cognate sets with the help of graph partitioning algorithms. Here I would use Informap algorithm BIBREF12. To make a comparison to this method, I am using CCM and SCA for distance measurement in this experiment, too. UPGMA algorithm would be used accordingly in these two cases." ] ]
052d19b456f1795acbb8463312251869cc5b38da
What metrics are used to evaluate results?
[ "Unanswerable" ]
[ [] ]
7b89515d731d04dd5cbfe9c2ace2eb905c119cbc
Which is the baseline model?
[ "The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system. " ]
[ [ "As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).", "The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones. More precisely, the output units of the AG-RNN-MLT are separated into two groups: an LID group that involves two units corresponding to Assamese and Georgian respectively, and an ASR group that involves $3,349$ bilingual senones that are inherited from an HMM/GMM ASR system trained with the speech data of Assamese and Georgian, following the standard WSJ s5 HMM/GMM recipe of Kaldi. The WSJ s5 nnet3 recipe of Kaldi is then used to train the AG-RNN-LID and AG-RNN-MLT systems." ] ]
1db37e98768f09633dfbc78616992c9575f6dba4
How big is the Babel database?
[ "Unanswerable" ]
[ [] ]
79a28839fee776d2fed01e4ac39f6fedd6c6a143
What is the main contribution of the paper?
[ "Proposing an improved RNN model, the phonetic temporal neural LID approach, based on phonetic features that results in better performance" ]
[ [ "All the present neural LID methods are based on acoustic features, e.g., Mel filter banks (Fbanks) or Mel frequency cepstral coefficients (MFCCs), with phonetic information largely overlooked. This may have significantly hindered the performance of neural LID. Intuitively, it is a long-standing hypothesis that languages can be discriminated between by phonetic properties, either distributional or temporal; additionally, phonetic features represent information at a higher level than acoustic features, and so are more invariant with respect to noise and channels. Pragmatically, it has been demonstrated that phonetic information, either in the form of phone sequences, phone posteriors, or phonetic bottleneck features, can significantly improve LID accuracy in both the conventional PRLM approach BIBREF11 and the more modern i-vector system BIBREF34 , BIBREF35 , BIBREF36 . In this paper, we will investigate the utilization of phonetic information to improve neural LID. The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. This property was historically widely and successfully applied in token-based approaches, e.g., PRLM BIBREF11 , but has been largely overlooked due to the popularity of the i-vector approach." ] ]
da2b43d7d048f3f59adf26a67ce66bd2d8a06326
What training settings did they try?
[ "Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. , experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . , Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\\dashv $ , in addition to the symbols in $\\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions." ]
[ [ "Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. We remark, for the sake of clarity, that our test design is slightly different from the traditional testing approaches used by BIBREF10 , BIBREF9 , BIBREF12 , since we do not consider the shortest sequence in a language whose output was incorrectly predicted by the model, or the largest accepted test set, or the accuracy of the model on a fixed test set.", "It has been shown by BIBREF9 that LSTMs can learn $a^n b^n$ and $a^n b^n c^n$ with 1 and 2 hidden units, respectively. Similarly, BIBREF24 demonstrated that a simple RNN architecture containing a single hidden unit with carefully tuned parameters can develop a canonical linear counting mechanism to recognize the simple context-free language $a^n b^n$ , for $n \\le 250$ . We wanted to explore whether the stability of the networks would improve with an increase in capacity of the LSTM model. We, therefore, varied the number of hidden units in our LSTM models as follows. We experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . The 36 hidden unit case represents an over-parameterized network with more than enough theoretical capacity to recognize all these languages.", "Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\\dashv $ , in addition to the symbols in $\\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions. During testing, we use an output threshold criterion of $0.5$ for the sigmoid output layer to indicate which characters were predicted by the model. We then turn this prediction task into a classification task by accepting a sample if our model predicts all of its output values correctly and rejecting it otherwise." ] ]
b7708cbb50085eb41e306bd2248f1515a5ebada8
How do they get the formal languages?
[ "These are well-known formal languages some of which was used in the literature to evaluate the learning capabilities of RNNs." ]
[ [ "BIBREF7 investigated the learning capabilities of simple RNNs to process and formalize a context-free grammar containing hierarchical (recursively embedded) dependencies: He observed that distinct parts of the networks were able to learn some complex representations to encode certain grammatical structures and dependencies of the context-free grammar. Later, BIBREF8 introduced an RNN with an external stack memory to learn simple context-free languages, such as $a^n b^m$ , $a^nb^ncb^ma^m$ , and $a^{n+m} b^n c^m$ . Similar studies BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 have explored the existence of stable counting mechanisms in simple RNNs, which would enable them to learn various context-free and context-sensitive languages, but none of the RNN architectures proposed in the early days were able to generalize the training set to longer (or more complex) test samples with substantially high accuracy.", "BIBREF9 , on the other hand, proposed a variant of Long Short-Term Memory (LSTM) networks to learn two context-free languages, $a^n b^n$ , $a^n b^m B^m A^n$ , and one strictly context-sensitive language, $a^n b^n c^n$ . Given only a small fraction of samples in a formal language, with values of $n$ (and $m$ ) ranging from 1 to a certain training threshold $N$ , they trained an LSTM model until its full convergence on the training set and then tested it on a more generalized set. They showed that their LSTM model outperformed the previous approaches in capturing and generalizing the aforementioned formal languages. By analyzing the cell states and the activations of the gates in their LSTM model, they further demonstrated that the network learns how to count up and down at certain places in the sample sequences to encode information about the underlying structure of each of these formal languages." ] ]
17988d65e46ff7d756076e9191890aec177b081e
Are the unobserved samples from the same distribution as the training data?
[ "No" ]
[ [ "In the present work, we address these limitations by providing a more nuanced evaluation of the learning capabilities of RNNs. In particular, we investigate the effects of three different aspects of a network's generalization: data distribution, length-window, and network capacity. We define an informative protocol for assessing the performance of RNNs: Instead of training a single network until it has learned its training set and then evaluating it on its test set, as BIBREF9 do in their study, we monitor and test the network's performance at each epoch during the entire course of training. This approach allows us to study the stability of the solutions reached by the network. Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities.", "Previous studies have examined various length distribution models to generate appropriate training sets for each formal language: BIBREF16 , BIBREF11 , BIBREF12 , for instance, used length distributions that were skewed towards having more short sequences than long sequences given a training length-window, whereas BIBREF9 used a uniform distribution scheme to generate their training sets. The latter briefly comment that the distribution of lengths of sequences in the training set does influence the generalization ability and convergence speed of neural networks, and mention that training sets containing abundant numbers of both short and long sequences are learned by networks much more quickly than uniformly distributed regimes. Nevertheless, they do not systematically compare or explicitly report their findings. To study the effect of various length distributions on the learning capability and speed of LSTM models, we experimented with four discrete probability distributions supported on bounded intervals (Figure 2 ) to sample the lengths of sequences for the languages. We briefly recall the probability distribution functions for discrete uniform and Beta-Binomial distributions used in our data generation procedure." ] ]
11c77ee117cb4de825016b6ccff59ff021f84a38
By how much does their model outperform the baseline in the cross-domain evaluation?
[ "$2.2\\%$ absolute accuracy improvement on the laptops test set, $3.6\\%$ accuracy improvement on the restaurants test set" ]
[ [ "To answer RQ3, which is concerned with domain adaptation, we can see in the grayed out cells in tab:results, which correspond to the cross-domain adaption case where the BERT language model is trained on the target domain, that domain adaptation works well with $2.2\\%$ absolute accuracy improvement on the laptops test set and even $3.6\\%$ accuracy improvement on the restaurants test set compared to BERT-base." ] ]
0b92fb692feb35d4b4bf4665f7754d283d6ad5f3
What are the performance results?
[ "results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset, new state-of-the-art on the restaurants dataset with accuracies of $79.19\\%$ and $87.14\\%$, respectively." ]
[ [ "To answer RQ2, which is concerned with in-domain ATSC performance, we see in tab:results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset and new state-of-the-art on the restaurants dataset with accuracies of $79.19\\%$ and $87.14\\%$, respectively. On the restaurants dataset, this corresponds to an absolute improvement of $2.2\\%$ compared to the previous state-of-the-art method BERT-PT. Language model finetuning produces a larger improvement on the restaurants dataset. We think that one reason for that might be that the restaurants domain is underrepresented in the pre-training corpora of BERTBASE. Generally, we find that language model finetuning helps even if the finetuning domain does not match the evaluation domain. We think the reason for this might be that the BERT-base model is pre-trained more on knowledge-based corpora like Wikipedia than on text containing opinions. Another finding is that BERT-ADA Joint performs better on the laptops dataset than BERT-ADA Rest, although the unique amount of laptop reviews are the same in laptops- and joint-corpora. We think that confusion can be created when mixing the domains, but this needs to be investigated further. We also find that the XLNet-base baseline performs generally stronger than BERT-base and even outperforms BERT-ADA Lapt with an accuracy of $79.89\\%$ on the laptops dataset.", "In general, the ATSC task generalizes well cross-domain, with about 2-$3\\%$ drop in accuracy compared to in-domain training. We think the reason for this might be that syntactical relationships between the aspect-target and the phrase expressing sentiment polarity as well as knowing the sentiment-polarity itself are sufficient to solve the ATSC task in many cases." ] ]
521a7042b6308e721a7c8046be5084bc5e8ca246
What is a confusion network or lattice?
[ "graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences" ]
[ [ "A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words. This section describes an extension of BiRNNs to CNs and lattices." ] ]
06776b8dfd1fe27b5376ae44436b367a71ff9912
What dataset is used for training?
[ "Mandarin dataset, Cantonese dataset" ]
[ [ "We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosody effects with longer utterances, we exclude words longer than 4 syllables." ] ]
f1831b2e96ff8ef65b8fde8b4c2ee3e04b7ac4bf
How close do clusters match to ground truth tone categories?
[ "NMI between cluster assignments and ground truth tones for all sylables is:\nMandarin: 0.641\nCantonese: 0.464" ]
[ [ "To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables." ] ]
20ec88c45c1d633adfd7bff7bbf3336d01fb6f37
what are the evaluation metrics?
[ "Precision, Recall, F1" ]
[ [] ]
a4fe5d182ddee24e5bbf222d6d6996b3925060c8
which datasets were used in evaluation?
[ "CoNLL 2003, GermEval 2014, CoNLL 2002, Egunkaria, MUC7, Wikigold, MEANTIME, SONAR-1, Ancora 2.0" ]
[ [] ]
f463db61de40ae86cf5ddd445783bb34f5f8ab67
what are the baselines?
[ "Perceptron model using the local features." ]
[ [ "Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. Our NERC system is publicly available and distributed under the Apache 2.0 License and part of the IXA pipes tools BIBREF38 . Every result reported in this paper is obtained using the conlleval script from the CoNLL 2002 and CoNLL 2003 shared tasks. To guarantee reproducibility of results we also make publicly available the models and the scripts used to perform the evaluations. The system, models and evaluation scripts can be found in the ixa-pipe-nerc website.", "The local features constitute our baseline system on top of which the clustering features are added. We implement the following feature set, partially inspired by previous work BIBREF46 :" ] ]
3d7ab856a5cade7ab374fc2f2713a4d0a30bbd56
What multilingual word representations are used?
[ " a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space" ]
[ [ "We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results." ] ]
212977344f4bf2ae8f060bdac0317db2d1801724
Do the authors identify any cultural differences in irony use?
[ "No" ]
[ [ "From a semantic perspective, despite the language and cultural differences between Arabic and French languages, CNN results show a high performance comparing to the other languages pairs when we train on each of these two languages and test on the other one. Similarly, for the French and English pair, but when we train on French they are quite lower. We have a similar case when we train on Arabic and test on English. We can justify that by, the language presentation of the Arabic and French tweets are quite informal and have many dialect words that may not exist in the pretrained embeddings we used comparing to the English ones (lower embeddings coverage ratio), which become harder for the CNN to learn a clear semantic pattern. Another point is the presence of Arabic dialects, where some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the text-based features can help in the case when the semantic aspect shows weak detection; this is the case for the $Ar\\longrightarrow En$ configuration. It is worthy to mention that the highest result we get in this experiment is from the En$\\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non European languages (cf. (En/Fr)$\\rightarrow $Ar), we obtain similar results than those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0) and best results are achieved by Ar $\\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative way of the ironic tweets." ] ]
0c29d08f766b06ceb2421aa402e71a2d65a5a381
What neural architectures are used?
[ "Convolutional Neural Network (CNN)" ]
[ [ "Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library." ] ]
c9ee70c481c801892556eb6b9fd8ee38197923be
What text-based features are used?
[ "language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities), language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words)" ]
[ [ "Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library." ] ]
a24a7a460fd5e60d71a7e787401c68caa4702df6
What monolingual word representations are used?
[ "AraVec for Arabic, FastText for French, and Word2vec Google News for English." ]
[ [ "Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library." ] ]
e8d1792fc56a32bd4c95f61c2ea4cf29088edd7c
Does the proposed method outperform a baseline?
[ "No" ]
[ [] ]
ceda2a4872132b8e0a526c0f2c701d0df060c3af
What type of RNN is used?
[ "RNN, LSTM" ]
[ [ "Social media mining is considerably an important source of information in many of health applications. This working note presents RNN and LSTM based embedding system for social media health text classification. Due to limited number of tweets, the performance of the proposed method is very less. However, the obtained results are considerable and open the way in future to apply for the social media health text classification. Moreover, the performance of the LSTM based embedding for task 2 is good in comparison to the task 1. This is primarily due to the fact that the target classes of task 1 data set imbalanced. Hence, the proposed method can be applied on large number of tweets corpus in order to attain the best performance.", "Recently, the deep learning methods have performed well BIBREF8 and used in many tasks mainly due to that it doesn't rely on any feature engineering mechanism. However, the performance of deep learning methods implicitly relies on the large amount of raw data sets. To make use of unlabeled data, BIBREF9 proposed semi-supervised approach based on Convolutional neural network for adverse drug event detection. Though the data sets of task 1 and task 2 are limited, this paper proposes RNN and LSTM based embedding method." ] ]
5758ebff49807a51d080b0ce10ba3f86dcf71925
What do they constrain using integer linear programming?
[ "low-rank approximation of the co-occurrence matrix" ]
[ [ "In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity. The ILP framework, being extractive in nature, has demonstrated considerable success on a number of summarization tasks BIBREF20 , BIBREF21 . It generates a summary by selecting a set of sentences from the source documents. The sentences shall maximize the coverage of important source content, while minimizing the redundancy among themselves. At the heart of the algorithm is a sentence-concept co-occurrence matrix, used to determine if a sentence contains important concepts and whether two sentences share the same concepts. We introduce a low-rank approximation to the co-occurrence matrix and optimize it using the proximal gradient method. The resulting system thus allows different sentences to share co-occurrence statistics. For example, “The activity with the bicycle parts\" will be allowed to partially contain “bike elements\" although the latter phrase does not appear in the sentence. The low-rank matrix approximation provides an effective way to implicitly group lexically-diverse but semantically-similar expressions. It can handle out-of-vocabulary expressions and domain-specific terminologies well, hence being a more principled approach than heuristically calculating similarities of word embeddings." ] ]
e84ba95c9a188fda4563f45e53fbc8728d8b5dab
Do they build one model per topic or on all topics?
[ "One model per topic." ]
[ [ "The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs INLINEFORM0 5 turkers INLINEFORM1 (36 tasks INLINEFORM2 1 human summaries for Eng + 44 INLINEFORM3 2 for Stat2015 + 48 INLINEFORM4 2 for Stat2016 + 46 INLINEFORM5 2 for CS2016 + 3 INLINEFORM6 8 for camera + 3 INLINEFORM7 5 for movie + 3 INLINEFORM8 2 for peer + 50 INLINEFORM9 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate as an example, for Stat2015, there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 INLINEFORM11 2 INLINEFORM12 3=264 HITs for Stat2015. Each HIT will be done by 5 different turkers, resulting in 264 INLINEFORM13 5=1,320 comparisons. In total, 306 unique turkers were recruited and on average 27.3 of HITs were completed by one turker. The distribution of the human preference scores is shown in Fig. FIGREF34 ." ] ]
caf9819be516d2c5a7bfafc80882b07517752dfa
Do they quantitavely or qualitatively evalute the output of their low-rank approximation to verify the grouping of lexical items?
[ "They evaluate quantitatively." ]
[ [ "In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. In the following experiments, summary length is set to be the average number of words in human summaries or less. For the matrix completion algorithm, we perform grid search (on a scale of [0, 5] with stepsize 0.5) to tune the hyper-parameter INLINEFORM0 (Eq. EQREF10 ) with a leave-one-lecture-out (for student responses) or leave-one-task-out (for others) cross-validation.", "The results are shown in Table TABREF25 . INLINEFORM0 significantly on all three courses. That is, a bigram does receive a higher partial score in a sentence that contains similar bigram(s) to it than a sentence that does not. Therefore, H1.a holds. For H1.b, we only observe INLINEFORM1 significantly on Stat2016 and there is no significant difference between INLINEFORM2 and INLINEFORM3 on the other two courses. First, the gold-standard data set is still small in the sense that only a limited portion of bigrams in the entire data set are evaluated. Second, the assumption that phrases annotated by different colors are not necessarily unrelated is too strong. For example, “hypothesis testing\" and “H1 and Ho conditions\" are in different colors in the example of Table TABREF15 , but one is a subtopic of the other. An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work. Third, the gold standards are pairs of semantically similar bigrams, while matrix completion captures bigrams that occurs in a similar context, which is not necessarily equivalent to semantic similarity. For example, the sentence “graphs make it easier to understand concepts\" in Table TABREF25 is associated with “hard to\"." ] ]
b1e90a546dc92e96b657fff5dad8e89f4ac6ed5e
Do they evaluate their framework on content of low lexical variety?
[ "No" ]
[ [] ]
f8d32088d17b32b0c877d59965b35c4f51f0ceea
Do the authors report on English datasets only?
[ "Yes" ]
[ [ "We consider a dataset of curated gang and non-gang members' Twitter profiles collected from our previous work BIBREF9 . It was developed by querying the Followerwonk Web service API with location-neutral seed words known to be used by gang members across the U.S. in their Twitter profiles. The dataset was further expanded by examining the friends, follower, and retweet networks of the gang member profiles found by searching for seed words. Specific details about our data curation procedure are discussed in BIBREF9 . Ultimately, this dataset consists of 400 gang member profiles and 2,865 non-gang member profiles. For each user profile, we collected up to most recent 3,200 tweets from their Twitter timelines, profile description text, profile and cover images, and the comments and video descriptions for every YouTube video shared by them. Table 1 provides statistics about the number of words found in each type of feature in the dataset. It includes a total of 821,412 tweets from gang members and 7,238,758 tweets from non-gang members." ] ]
4f0f446bf4518af7f539f6283145135192d5c00b
Which supervised learning algorithms are used in the experiments?
[ "Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM)" ]
[ [ "To build the classifiers we used three different learning algorithms, namely Logistic Regression (LR), Random Forest (RF), and Support Vector Machines (SVM). We used version 0.17.1 of scikit-learn machine learning library for Python to implement the classifiers. An open source tool of Python, Gensim BIBREF19 was used to generate the word embeddings. We compare our results with the two best performing systems reported in BIBREF9 which are the two state-of-the-art models for identifying gang members in Twitter. Both baseline models are built from a random forest classifier trained over term frequencies for unigrams in tweet text, emoji, profile data, YouTube video data and image tags. Baseline Model(1) considers all 3,285 gang and non-gang member profiles in our dataset. Baseline Model(2) considers all Twitter profiles that contain every feature type discussed in Section SECREF2 . Because a Twitter profile may not have every feature type, baseline Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. However, we compare our results to both baseline models and report the improvements." ] ]
663b36f99ad2422f4d3a8c6398ebf55ceab7770d
How in YouTube content translated into a vector format?
[ "words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline" ]
[ [ "We obtain word vectors of size 300 from the learned word embeddings. To represent a Twitter profile, we retrieve word vectors for all the words that appear in a particular profile including the words appear in tweets, profile description, words extracted from emoji, cover and profile images converted to textual formats, and words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline. Those word vectors are combined to compute the final feature vector for the Twitter profile. To combine the word vectors, we consider five different methods. Letting the size of a word vector be INLINEFORM0 , for a Twitter profile INLINEFORM1 with INLINEFORM2 unique words and the vector of the INLINEFORM3 word in INLINEFORM4 denoted by INLINEFORM5 , we compute the feature vector for the Twitter profile INLINEFORM6 by:" ] ]
be595b2017545b0359db6abf4914a155bdd10d23
How is the ground truth of gang membership established in this dataset?
[ " text in their tweets and profile descriptions, their emoji use, their profile images, and music interests embodied by links to YouTube music videos, can help a classifier distinguish between gang and non-gang member profiles" ]
[ [ "In our previous work BIBREF9 , we curated what may be the largest set of gang member profiles to study how gang member Twitter profiles can be automatically identified based on the content they share online. A data collection process involving location neutral keywords used by gang members, with an expanded search of their retweet, friends and follower networks, led to identifying 400 authentic gang member profiles on Twitter. Our study discovered that the text in their tweets and profile descriptions, their emoji use, their profile images, and music interests embodied by links to YouTube music videos, can help a classifier distinguish between gang and non-gang member profiles. While a very promising INLINEFORM0 measure with low false positive rate was achieved, we hypothesize that the diverse kinds and the multitude of features employed (e.g. unigrams of tweet text) could be amenable to an improved representation for classification. We thus explore the possibility of mapping these features into a considerably smaller feature space through the use of word embeddings." ] ]
79b174d20ea5dd4f35e25c9425fb97f40e27cd6f
Do they evaluate ablated versions of their CNN+RNN model?
[ "No" ]
[ [] ]
21a96b328b43a568f9ba74cbc6d4689dbc4a3d7b
Do they single out a validation set from the fixed SRE training set?
[ "No" ]
[ [ "Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named total variability matrix BIBREF0 . We trained on each acoustic feature a full covariance, gender-independent UBM model with 2048 Gaussians followed by a 600-dimensional i-vector extractor to establish our MFCC- and PLP-based i-vector systems. The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 ." ] ]
30803eefd7cdeb721f47c9ca72a5b1d750b8e03b
How well does their system perform on the development set of SRE?
[ "EER 16.04, Cmindet 0.6012, Cdet 0.6107" ]
[ [ "In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . In order to quantify the contribution of the different system components we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP, however, in fusion we can see that NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we can see performance degradation in MFCC as well as fusion, however, PLP seems not to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in either LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set." ] ]
442f8da2c988530e62e4d1d52c6ec913e3ec5bf1
Which are the novel languages on which SRE placed emphasis on?
[ "Cebuano and Mandarin, Tagalog and Cantonese" ]
[ [ "The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the linguistic data consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phase I,II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1 . We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in the training in the following sections." ] ]
ae60079da9d3d039965368acbb23c6283bc3da94
Does this approach perform better than context-based word embeddings?
[ "Yes" ]
[ [ "To verify the word embeddings learned by our model we use the task of synonym discovery, whereby we analyze if it is possible to identify a pair of words as synonyms only through their embedding vectors. Synonym discovery is a common task in research; and it has been used before to test word embedding schemes BIBREF0 . We compare the performance of our Chinese word embedding vectors in the task of synonym discovery against another set of embedding vectors that was constructed with a co-occurrence model BIBREF1 . We also investigate the performance of synonym discovery with the Sino-Korean word embeddings by our method. Our test results shows that our approach out-performs the previous model.", "Our embeddings also proved to perform better than our benchmark dataset. Figure shows the distribution of the similarity measure between pairs of synonyms and random pairs of words in the benchmark dataset. In this sample, almost 32% of synonyms show a similarity score that places them away from zero, while 5% of random pairs of words are placed outside of that range. Table compares performance, and dimensionality in both strategies to learn embeddings." ] ]
83f567489da49966af3dc5df2d9d20232bb8cb1e
Have the authors tried this approach on other languages?
[ "No" ]
[ [ "We believe that our model can help expand our understanding of word embedding; and also help reevaluate the value of etymology in data mining and machine learning. We are excited to see etymological graphs used in other ways to extract knowledge. We also are especially interested in seeing this model applied to different languages." ] ]
ff0f77392abc905fe76e0b8c28a76dfb0372a0ec
What features did they train on?
[ "direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset" ]
[ [ "Our features consisted of direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset." ] ]
6c4cd8da5b4b298f29af3123b58d9a5d4b02180b
How big is the test set?
[ "Unanswerable" ]
[ [] ]
ed4fb6bce855ca932548689e45fde21f26a71035
What is coattention?
[ "Unanswerable" ]
[ [] ]
4cc5ba404d6a47363f119d9db7266157d3bb246b
What off-the-shelf QA model was used to answer sub-questions?
[ "$\\textsc {BERT}_{\\textsc {BASE}}$ ensemble from BIBREF3" ]
[ [ "We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\\textsc {BERT}_{\\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\\textsc {RoBERTa}_{\\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\\textsc {RoBERTa}_{\\textsc {LARGE}}$ ensemble)." ] ]
1d72770d075b22411ec86d8bdee532f8c643740b
How large is the improvement over the baseline?
[ "3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, 10 F1 gain on the out-of-domain dev set." ]
[ [ "Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set)." ] ]
af1439c68b28c27848203f863675946380d28943
What is the strong baseline that this work outperforms?
[ "RoBERTa baseline" ]
[ [ "Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set).", "We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \\in \\lbrace 1, \\dots , P \\rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows:", "We use $\\textsc {RoBERTa}_{\\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\\textsc {BERT}_{\\textsc {BASE}}$ ensemble from BIBREF3." ] ]
046ff04d1018447b22e00acb125125cae5a23fb7
Which dataset do they use?
[ "small_parallel_enja, Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF5" ]
[ [ "We used two different corpora for the experiments: small_parallel_enja and Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF5. small_parallel_enja is a small-scale corpus that is consist of sentences filtered sentence length 4 to 16 words, and ASPEC is a mid-scale corpus of the scientific paper domain. Table TABREF21 shows their detailed statistics." ] ]
5a06f11aa75a8affde3d595c40fb03e06769e368
Do they trim the search space of possible output sequences?
[ "No" ]
[ [] ]
ffbd6f583692db66b719a846ba2b7f6474df481a
Which model architecture do they use to build a model?
[ "model is composed of an encoder (§SECREF5) and a decoder with the attention mechanism (§SECREF7) that are both implemented using recurrent neural networks (RNNs)" ]
[ [ "The model is composed of an encoder (§SECREF5) and a decoder with the attention mechanism (§SECREF7) that are both implemented using recurrent neural networks (RNNs); the encoder converts source words into a sequence of vectors, and the decoder generates target language words one-by-one with the attention mechanism based on the conditional probability shown in the equation DISPLAY_FORM2 and DISPLAY_FORM3. The details are described below." ] ]
74fe054f5243c8593ddd2c0628f91657246b7dfa
Do they compare simultaneous translation performance to regular machine translation?
[ "No" ]
[ [] ]
cc2b98b46497c71e955e844fb36e9ef6e2784640
Which metrics do they use to evaluate simultaneous translation?
[ "BLEU BIBREF8, RIBES BIBREF9, token-level delay" ]
[ [ "We used “Wait-k” models and general NMT models as baseline models. General NMT models were attention-based encoder-decoder and it translated sentences from full-length source sentences (called Full Sentence). For evaluation metrics, we used BLEU BIBREF8 and RIBES BIBREF9 to measure translation accuracy, and token-level delay to measure latency. We used Kytea BIBREF10 as a tokenize method for evaluations of Japanese translation accuracy." ] ]
6959e87cf2668a03854da3f042c87e6fdb2ade8a
How big are FigureQA and DVQA datasets?
[ "Unanswerable" ]
[ [] ]
a7f07ae48eed084c3144214228f4ecb72bc0a0e3
What models other than SAN-VOES are trained on new PlotQA dataset?
[ "IMG-only, QUES-only, SAN, SANDY, VOES-Oracle, VOES" ]
[ [ "We compare the performance of the following models:", "- IMG-only: This is a simple baseline where we just pass the image through a VGG19 and use the embedding of the image to predict the answer from a fixed vocabulary.", "- QUES-only: This is a simple baseline where we just pass the question through a LSTM and use the embedding of the question to predict the answer from a fixed vocabulary.", "- SANBIBREF2: This is a state of the art VQA model which is an encoder-decoder model with a multi-layer stacked attention BIBREF26 mechanism. It obtains a representation for the image using a deep CNN and a representation for the query using LSTM. It then uses the query representation to locate relevant regions in the image and uses this to pick an answer from a fixed vocabulary.", "- SANDYBIBREF1: This is the best performing model on the DVQA dataset and is a variant of SAN. Unfortunately, the code for this model is not available and the description in the paper was not detailed enough for us to reimplement it. Hence, we report the numbers for this model only on DVQA (from the original paper).", "- VOES: This is our model as described in section SECREF3 which is specifically designed for questions which do not have answers from a fixed vocabulary.", "- VOES-Oracle: blackThis is our model where the first three stages of VOES are replaced by an Oracle, i.e., the QA model answers questions on a table that has been generated using the ground truth annotations of the plot. With this we can evaluate the performance of the WikiTableQA model when it is not affected by the VED model's errors.", "- SAN-VOES: Given the complementary strengths of SAN-VQA and VOES, we train a hybrid model with a binary classifier which given a question decides whether to use the SAN or the VOES model. The data for training this binary classifier is generated by comparing the predictions of a trained SAN model and a trained VOES model on the training dataset. For a given question, the label is set to 1 (pick SAN) if the performance of SAN was better than that of VOES. We ignore questions where there is a tie. The classifier is a simple LSTM based model which computes a representation for the question using an LSTM and uses this representation to predict 1/0. At test time, we first pass the question through this model and depending on the output of this model use SAN or VOES." ] ]
eced6a6dffe43c28e6d06ab87eed98c135f285a3
Do the authors report only on English language data?
[ "Yes" ]
[ [ "We segment a hashtag into meaningful English phrases. The `#' character is removed from the tweet text. As for example, #killthebill is transformed into kill the bill.", "In order to achieve this, we use a dictionary of English words. We recursively break the hashtagged phrase into segments and match the segments in the dictionary until we get a complete set of meaningful words. This is important since many users tend to post tweets where the actual message of the tweet is expressed in form of terse hashtagged phrases.", "The sentiment of url: Since almost all the articles are written in well-formatted english, we analyze the sentiment of the first paragraph of the article using Standford Sentiment Analysis tool BIBREF4 . It predicts sentiment for each sentence within the article. We calculate the fraction of sentences that are negative, positive, and neutral and use these three values as features." ] ]
7fdeef2b1c8f6bd5d7c3a44e533d8aae2bbc155f
What dataset of tweets is used?
[ "tweets about `ObamaCare' in USA collected during march 2010" ]
[ [ "Our dataset contains tweets about `ObamaCare' in USA collected during march 2010. It is divided into three subsets (train, dev, and test). Some tweets are manually annotated with one of the following classes." ] ]
be074c880263f56e0d4a8f42d9a95d2d77ac2280
What external sources of information are used?
[ "landing pages of URLs" ]
[ [ "In this report we have presented a sentiment analysis tool for Twitter posts. We have discussed the characteristics of Twitter that make existing sentiment analyzers perform poorly. The model proposed in this report has addressed the challenges by using normalization methods and features specific to this media. We show that using external knowledge outside the tweet text (from landing pages of URLs) and user features can significantly improve performance. We have presented experimental results and comparison with state-of-the-art tools." ] ]
2a57fdc7e985311989b6829c1ceb201096e5c809
What linguistic features are used?
[ "Parts of Speech (POS) tags, Prior polarity of the words, Capitalization, Negation, Text Feature" ]
[ [ "We use two basic features:", "Parts of Speech (POS) tags: We use the POS tagger of NLTK to tag the tweet texts BIBREF0 . We use counts of noun, adjective, adverb, verb words in a tweet as POS features.", "Prior polarity of the words: We use a polarity dictionary BIBREF3 to get the prior polarity of words. The dictionary contains positive, negative and neutral words along with their polarity strength (weak or strong). The polarity of a word is dependent on its POS tag. For example, the word `excuse' is negative when used as `noun' or `adjective', but it carries a positive sense when used as a `verb'. We use the tags produced by NLTK postagger while selecting the prior polarity of a word from the dictionary. We also employ stemming (Porter Stemmer implementation from NLTK) while performing the dictionary lookup to increase number of matches. We use the counts of weak positive words, weak negative words, strong positive words and strong negative words in a tweet as features.", "We have also explored some advanced features that helps improve detecting sentiment of tweets.", "Emoticons: We use the emoticon dictionary from BIBREF2 , and count the positive and negtive emocicons for each tweet.", "The sentiment of url: Since almost all the articles are written in well-formatted english, we analyze the sentiment of the first paragraph of the article using Standford Sentiment Analysis tool BIBREF4 . It predicts sentiment for each sentence within the article. We calculate the fraction of sentences that are negative, positive, and neutral and use these three values as features.", "Hashtag: We count the number of hashtags in each tweet.", "Capitalization: We assume that capitalization in the tweets has some relationship with the degree of sentiment. We count the number of words with capitalization in the tweets.", "Retweet: This is a boolean feature indicating whether the tweet is a retweet or not.", "User Mention: A boolean feature indicating whether the tweet contains a user mention.", "Negation: Words like `no', `not', `won't' are called negation words since they negate the meaning of the word that is following it. As for example `good' becomes `not good'. We detect all the negation words in the tweets. If a negation word is followed by a polarity word, then we negate the polarity of that word. For example, if `good' is preceeded by a `not', we change the polarity from `weak positive' to `weak negative'.", "Text Feature: We use tf-idf based text features to predict the sentiment of a tweet. We perform tf-idf based scoring of words in a tweet and the hashtags present in the tweets. We use the tf-idf vectors to train a classifier and predict the sentiment. This is then used as a stacked prediction feature in the final classifier." ] ]
53807f435d33fe5ce65f5e7bda7f77712194f6ab
What are the key issues around whether the gold standard data produced in such an annotation is reliable?
[ " only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics, low-effort responses from crowdworkers" ]
[ [ "Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31.", "Another related area are meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively ”fraudsters” or ”cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorf's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined. BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly." ] ]
2ec9c1590c96f17a66c7d4eb95dc5d3a447cb973
How were the machine learning papers from ArXiv sampled?
[ "sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph), filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive), filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive)" ]
[ [ "We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repositories, originally for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which mostly selected from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined." ] ]
208e667982160cfbce49ef49ad96f6ab094292ac
What are the core best practices of structured content analysis?
[ "“coding scheme” is defined, coders are trained with the coding scheme, Training sometimes results in changes to the coding scheme, calculation of “inter-annotator agreement” or “inter-rater reliability.”, there is a process of “reconciliation” for disagreements" ]
[ [ "Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based." ] ]
35eb8464e934a2769debe14148667c61bf1da243
In what sense is data annotation similar to structured content analysis?
[ "structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items." ]
[ [ "Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.", "Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based." ] ]
5774e019101415a43e0b5a780179fd897fc013fd
What additional information is found in the dataset?
[ "the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet" ]
[ [ "We collected COVID-19 related Arabic tweets from January 1, 2020 until April 15, 2020, using Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 million tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter’s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag. Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic." ] ]
fc33a09401d12f4fe2338b391301380d34a60e5f
Is the dataset focused on a region?
[ "Yes" ]
[ [] ]
1b046ec7f0e1a33e77078bedef7e83c5c07b61de
Over what period of time were the tweets collected?
[ "from January 1, 2020 until April 15, 2020" ]
[ [ "We collected COVID-19 related Arabic tweets from January 1, 2020 until April 15, 2020, using Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 million tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter’s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag. Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic." ] ]
55fb92afa118450f09329764efe22612676c2d85
Are the tweets location-specific?
[ "Yes" ]
[ [] ]
19cdce39e8265e7806212eeee2fd55f8ef2f3d47
How big is the dataset?
[ "more than 3,934,610 million tweets" ]
[ [ "We collected COVID-19 related Arabic tweets from January 1, 2020 until April 15, 2020, using Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 million tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter’s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag. Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic." ] ]
524abe0ab77db168d5b2f0b68dba0982ac5c1d8e
Do the authors suggest any future extensions to this work?
[ "Yes" ]
[ [ "The main contributions of this paper are (1) to overcome twitter challenges of acronyms, short text, ambiguity and synonyms, (2) to identify the set of word-pairs to be used as features for live event detection, (3) to build an end-to-end framework that can detect the events lively according to the word counts. This work can be applied to similar problems, where specific tweets can be associated with life events such as disease outbreak or stock market fluctuation. This work can be extended to predict future events with one day in advance, where we will use the same method for feature selection in addition to to time series analysis of the historical patterns of the word-pairs." ] ]
858c51842fc3c1f3e6d2d7d853c94f6de27afade
Which of the classifiers showed the best performance?
[ "Logistic regression" ]
[ [] ]
7c9c73508da628d58aaadb258f3a9d4cc2a8a9b3
Were any other word similar metrics, besides Jaccard metric, tested?
[ "Yes" ]
[ [] ]
7b2bf0c1a24a2aa01d49f3c7e1bdc7401162c116
How are the keywords associated with events such as protests selected?
[ "By using a Bayesian approach and by using word-pairs, where they extract all the pairs of co-occurring words within each tweet. They search for the words that achieve the highest number of spikes matching the days of events." ]
[ [ "We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even a made-up word, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet. This allows us to recognize the context of the word ('Messi','strike' ) is different than ('labour','strike').", "According to the distributional semantic hypothesis, event-related words are likely to be used on the day of an event more frequently than any normal day before or after the event. This will form a spike in the keyword count magnitude along the timeline as illustrated in Figure FIGREF6 . To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. We use the Jaccard similarity metric as it values the spikes matching events and penalizes spikes with no event and penalizes events without spikes. Separate words can be noisy due to the misuse of the term by people, especially in big data environments. So, we rather used the word-pairs as textual features in order to capture the context of the word. For example, this can differentiate between the multiple usages of the word “strike” within the contexts of “lightning strike”, “football strike” and “labour strike”" ] ]
e09e89b3945b756609278dcffb5f89d8a52a02cd
How many speeches are in the dataset?
[ "5575 speeches" ]
[ [] ]
0cf5132ac7904b7b81e17938d5815f70926a5180
What classification models were used?
[ "fastText and SVM BIBREF16" ]
[ [ "Text classification is a core task to many applications, like spam detection, sentiment analysis or smart replies. We used fastText and SVM BIBREF16 for preliminary experiments. We have pre-processed the text removing punctuation's and lowering the case. Facebook developers have developed fastText BIBREF17 which is a library for efficient learning of word representations and sentence classification. The reason we have used fastText is because of its promising results in BIBREF18 ." ] ]
1d860d7f615b9ca404c504f9df4231a702f840ef
Do any speeches not fall in these categories?
[ "Unanswerable" ]
[ [] ]
ed7985e733066cd067b399c36a3f5b09e532c844
What is different in BERT-gen from standard BERT?
[ "They use a left-to-right attention mask so that the input tokens can only attend to other input tokens, and the target tokens can only attend to the input tokens and already generated target tokens." ]
[ [ "As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the representations of the previous tokens. To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens." ] ]
cd8de03eac49fd79b9d4c07b1b41a165197e1adb
How are multimodal representations combined?
[ "The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards." ]
[ [ "To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052\\hspace{-1.00006pt}\\times \\hspace{-1.00006pt}768$. The $N$ object regions detected in an image, are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding." ] ]
63850ac98a47ae49f0f49c1c1a6e45c6c447272c
What is the problem with existing metrics that they are trying to address?
[ "Answer with content missing: (whole introduction) However, recent\nstudies observe the limits of ROUGE and find in\nsome cases, it fails to reach consensus with human.\njudgment (Paulus et al., 2017; Schluter, 2017)." ]
[ [ "Building Extractive CNN/Daily Mail" ] ]
313087c69caeab2f58e7abd62664d3bd93618e4e
How do they evaluate their proposed metric?
[ "manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference," ]
[ [ "As shown in Table TABREF13, there is almost no discrimination among the last four methods under ROUGE-1 F1, and their rankings under ROUGE-1/2/L are quite different. In contrast, FAR shows that UnifiedSum(E) covers the most facets. Although FAR is supposed to be favored as FAMs are already manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference, we rank UnifiedSum(E), NeuSum, and Lead-3 in Table TABREF15. The order of the 1st rank in the human evaluation coincides with FAR. FAR also has higher Spearman's coefficient $\\rho $ than ROUGE (0.457 vs. 0.44, n=30, threshold=0.362 at 95% significance)." ] ]
8ec2ca6c7f60c46eedac1fe0530b5c4448800fec
What is a facet?
[ "Unanswerable" ]
[ [] ]
cfbccb51f0f8f8f125b40168ed66384e2a09762b
How are discourse embeddings analyzed?
[ "They perform t-SNE clustering to analyze discourse embeddings" ]
[ [ "To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work." ] ]
feb4e92ff1609f3a5e22588da66532ff689f3bcc
What was the previous state-of-the-art?
[ "character bigram CNN classifier" ]
[ [ "In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically," ] ]
f10325d022e3f95223f79ab00f8b42e3bb7ca040
How are discourse features incorporated into the model?
[ "They derive entity grid with grammatical relations and RST discourse relations and concatenate them with pooling vector for the char-bigrams before feeding to the resulting vector to the softmax layer." ]
[ [ "CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer." ] ]
5e65bb0481f3f5826291c7cc3e30436ab4314c61
What discourse features are used?
[ "Entity grid with grammatical relations and RST discourse relations." ]
[ [ "CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer." ] ]