question_id: string (length 40)
question: string (length 4-171)
answer: sequence
evidence: sequence
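Each flattened row below consists of a question_id, a question, an answer list, and an evidence list. As a minimal sketch of how such records could be read, assuming they are stored one JSON object per line (the file name qa_records.jsonl and the helper load_records are hypothetical, not part of the dataset shown here):

```python
import json

# Minimal sketch, not the dataset's official loader: it assumes each record is
# stored as one JSON object per line with the four fields listed above. The
# file name "qa_records.jsonl" is hypothetical.
def load_records(path="qa_records.jsonl"):
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            records.append({
                "question_id": row["question_id"],  # 40-character identifier
                "question": row["question"],        # free-text question (4-171 chars)
                "answer": row["answer"],            # list of answer strings
                "evidence": row["evidence"],        # list of evidence-paragraph lists
            })
    return records

if __name__ == "__main__":
    for rec in load_records()[:3]:
        print(rec["question_id"], "->", rec["question"])
        print("  answer:", rec["answer"])
```

Note that an answer of [ "Unanswerable" ] paired with an empty evidence list ([ [] ]) marks a question that the source paper does not answer.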
98eb245c727c0bd050d7686d133fa7cd9d25a0fb
What evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation?
[ "BLEU scores" ]
[ [ "We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7." ] ]
537a786794604ecc473fb3ef6222e0c3cb81f772
What dataset was used in this work?
[ "How2" ]
[ [ "The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system." ] ]
dc5ff2adbe1a504122e3800c9ca1d348de391c94
How do they evaluate the sentence representations?
[ "The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 .\n\nThe cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset., Supervised Evaluation\nIt includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 ." ]
[ [ "Unsupervised Evaluation", "The unsupervised tasks include five tasks from SemEval Semantic Textual Similarity (STS) in 2012-2016 BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 and the SemEval2014 Semantic Relatedness task (SICK-R) BIBREF35 .", "The cosine similarity between vector representations of two sentences determines the textual similarity of two sentences, and the performance is reported in Pearson's correlation score between human-annotated labels and the model predictions on each dataset.", "Supervised Evaluation", "It includes Semantic relatedness (SICK) BIBREF35 , SemEval (STS-B) BIBREF36 , paraphrase detection (MRPC) BIBREF37 , question-type classification (TREC) BIBREF38 , movie review sentiment (MR) BIBREF39 , Stanford Sentiment Treebank (SST) BIBREF40 , customer product reviews (CR) BIBREF41 , subjectivity/objectivity classification (SUBJ) BIBREF42 , opinion polarity (MPQA) BIBREF43 ." ] ]
04b43deab0fd753e3419ed8741c10f652b893f02
What are the two decoding functions?
[ "a linear projection and a bijective function with continuous transformation though ‘affine coupling layer’ of (Dinh et al.,2016). " ]
[ [ "In this case, the decoding function is a linear projection, which is $= f_{\\text{de}}()=+ $ , where $\\in ^{d_\\times d_}$ is a trainable weight matrix and $\\in ^{d_\\times 1}$ is the bias term.", "A family of bijective transformation was designed in NICE BIBREF17 , and the simplest continuous bijective function $f:^D\\rightarrow ^D$ and its inverse $f^{-1}$ is defined as:", "$$h: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2+m(_1) \\nonumber \\\\ h^{-1}: \\hspace{14.22636pt} _1 &= _1, & _2 &= _2-m(_1) \\nonumber $$ (Eq. 15)", "where $_1$ is a $d$ -dimensional partition of the input $\\in ^D$ , and $m:^d\\rightarrow ^{D-d}$ is an arbitrary continuous function, which could be a trainable multi-layer feedforward neural network with non-linear activation functions. It is named as an `additive coupling layer' BIBREF17 , which has unit Jacobian determinant. To allow the learning system to explore more powerful transformation, we follow the design of the `affine coupling layer' BIBREF24 :", "$$h: \\hspace{5.69046pt} _1 &= _1, & _2 &= _2 \\odot \\text{exp}(s(_1)) + t(_1) \\nonumber \\\\ h^{-1}: \\hspace{5.69046pt} _1 &= _1, & _2 &= (_2-t(_1)) \\odot \\text{exp}(-s(_1)) \\nonumber $$ (Eq. 16)", "where $s:^d\\rightarrow ^{D-d}$ and $t:^d\\rightarrow ^{D-d}$ are both neural networks with linear output units.", "The requirement of the continuous bijective transformation is that, the dimensionality of the input $$ and the output $$ need to match exactly. In our case, the output $\\in ^{d_}$ of the decoding function $f_{\\text{de}}$ has lower dimensionality than the input $\\in ^{d_}$ does. Our solution is to add an orthonormal regularised linear projection before the bijective function to transform the vector representation of a sentence to the desired dimension." ] ]
515e10a71d78ccd9c7dc93cd942924a4c85d3a30
How is language modelling evaluated?
[ "perplexity of the models" ]
[ [ "Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs better on the test set they have been trained on. When applied to a different test set, both see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and raises by twenty on the combined test set." ] ]
fbabde18ebec5852e3d46b1f8ce0afb42350ce62
Why is there only a user study to evaluate the model?
[ "Unanswerable" ]
[ [] ]
52e8c9ed66ace1780e41815260af1309064d20de
What datasets are used to evaluate the model?
[ "WN18 and FB15k" ]
[ [ "Datasets: We experimented on four standard datasets: WN18 and FB15k are extracted by BIBREF5 from Wordnet BIBREF32 Freebase BIBREF33 . We used the same train/valid/test sets as in BIBREF5 . WN18 contains 40,943 entities, 18 relations and 141,442 train triples. FB15k contains 14,951 entities, 1,345 relations and 483,142 train triples. In order to test the expressiveness ability rather than relational pattern learning power of models, FB15k-237 BIBREF34 and WN18RR BIBREF35 exclude the triples with inverse relations from FB15k and WN18 which reduced the size of their training data to 56% and 61% respectively. Baselines: We compare MDE with several state-of-the-art relational learning approaches. Our baselines include, TransE, TransH, TransD, TransR, STransE, DistMult, NTN, RESCAL, ER-MLP, and ComplEx and SimplE. We report the results of TransE, DistMult, and ComplEx from BIBREF13 and the results of TransR and NTN from BIBREF36 , and ER-MLP from BIBREF15 . The results on the inverse relation excluded datasets are from BIBREF11 , Table 13 for TransE and RotatE and the rest are from BIBREF35 ." ] ]
3ee721c3531bf1b9a1356a40205d088c9a7a44fc
How did they gather the data?
[ "simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers " ]
[ [ "Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.", "It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below." ] ]
6dcbe941a3b0d5193f950acbdc574f1cfb007845
What are the domains covered in the dataset?
[ "Alarm\nBank\nBus\nCalendar\nEvent\nFlight\nHome\nHotel\nMedia\nMovie\nMusic\nRentalCar\nRestaurant\nRideShare\nService\nTravel\nWeather" ]
[ [ "The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset." ] ]
544b68f6f729e5a62c2461189682f9e4307a05c6
What is their baseline?
[ "Google's Neural Machine Translation" ]
[ [ "As seen in figures, in most of the cases our model produces comparable result with human translator. Result for BLEU score for our model and Google's Neural Machine Translation is compared in table TABREF19:" ] ]
f887d5b7cf2bcc1412ef63bff4146f7208818184
Do they use the cased or uncased BERT model?
[ "Unanswerable" ]
[ [] ]
ace60950ccd6076bf13e12ee2717e50bc038a175
How are the two different models trained?
[ "They pre-train the models using 600000 articles as an unsupervised dataset and then fine-tune the models on small training set." ]
[ [ "We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model.", "We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces.", "Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate." ] ]
2e1660405bde64fb6c211e8753e52299e269998f
How long is the dataset?
[ "645, 600000" ]
[ [ "We focus primarily on the smaller data set of 645 hand-labeled articles provided to task participants, both for training and for validation. We take the first 80% of this data set for our training set and the last 20% for the validation set. Since the test set is also hand-labeled we found that the 645 articles are much more representative of the final test set than the articles labeled by publisher. The model's performance on articles labeled by publisher was not much above chance level.", "Our first experiment was checking the importance of pre-training. We pre-trained BERT-base on the 600,000 articles without labels by using the same Cloze task BIBREF5 that BERT had originally used for pre-training. We then trained the model on sequence lengths of 100, 250 and 500. The accuracy for each sequence length after 100 epochs is shown in TABREF7 and is labeled as UP (unsupervised pre-training). The other column shows how well BERT-base trained without pre-training. We found improvements for lower sequence lengths, but not at 500 word pieces. Since the longer chunk should have been more informative, and since our hand-labeled training set only contained 516 articles, this likely indicates that BERT experiences training difficulty when dealing with long sequences on such a small dataset. As the cost to do pre-training was only a one time cost all of our remaining experiments use a pre-trained model." ] ]
82a28c1ed7988513d5984f6dcacecb7e90f64792
How big are the negative effects of the proposed techniques on high-resource tasks?
[ "The negative effects were insignificant." ]
[ [ "We have presented adaptive schedules for multilingual machine translation, where task weights are controlled by validation BLEU scores. The schedules may either be explicit, directly changing how task are sampled, or implicit by adjusting the optimization process. Compared to single-task baselines, performance improved on the low-resource En-De task and was comparable on high-resource En-Fr task." ] ]
d4a6f5034345036dbc2d4e634a8504f79d42ca69
What datasets are used for experiments?
[ "the WMT'14 English-French (En-Fr) and English-German (En-De) datasets." ]
[ [ "We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subwords units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10." ] ]
54fa5196d0e6d5e84955548f4ef51bfd9b707a32
Are these techniques used in training multilingual models, and on what languages?
[ "English to French and English to German" ]
[ [ "We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely. Words are split into subwords units with a joint vocabulary of 32K tokens. BLEU scores are computed on the tokenized output with multi-bleu.perl from Moses BIBREF10." ] ]
a997fc1a62442fd80d1873cd29a9092043f025ad
What non-adaptive baselines are used?
[ "Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers" ]
[ [ "All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines. As the source language pair is the same for both tasks, in subsequent experiments, only the encoder is shared BIBREF5. For En-Fr, 10% dropout is applied as in BIBREF11. After observing severe overfitting on En-De in early experiments, the rate is increased to 25% for this lower-resource task. All models are trained on 16 GPUs, using Adam optimizer with a learning rate schedule (inverse square root BIBREF11) and warmup." ] ]
9a4aa0e4096c73cd2c3b1eab437c1bf24ae7bf03
What text sequences are associated with each vertex?
[ "abstracts, sentences" ]
[ [ "Although these methods have demonstrated performance gains over structure-only network embeddings, the relationship between text sequences for a pair of vertices is accounted for solely by comparing their sentence embeddings. However, as shown in Figure 1 , to assess the similarity between two research papers, a more effective strategy would compare and align (via local-weighting) individual important words (keywords) within a pair of abstracts, while information from other words (e.g., stop words) that tend to be less relevant can be effectively ignored (down-weighted). This alignment mechanism is difficult to accomplish in models where text sequences are first embedded into a common space and then compared in pairs BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 .", "We propose to learn a semantic-aware Network Embedding (NE) that incorporates word-level alignment features abstracted from text sequences associated with vertex pairs. Given a pair of sentences, our model first aligns each word within one sentence with keywords from the other sentence (adaptively up-weighted via an attention mechanism), producing a set of fine-grained matching vectors. These features are then accumulated via a simple but efficient aggregation function, obtaining the final representation for the sentence. As a result, the word-by-word alignment features (as illustrated in Figure 1 ) are explicitly and effectively captured by our model. Further, the learned network embeddings under our framework are adaptive to the specific (local) vertices that are considered, and thus are context-aware and especially suitable for downstream tasks, such as link prediction. Moreover, since the word-by-word matching procedure introduced here is highly parallelizable and does not require any complex encoding networks, such as Long Short-Term Memory (LSTM) or Convolutional Neural Networks (CNNs), our framework requires significantly less time for training, which is attractive for large-scale network applications." ] ]
1d1ab5d8a24dfd15d95a5a7506ac0456d1192209
How long does it take for the model to run?
[ "Unanswerable" ]
[ [] ]
09a993756d2781a89f7ec5d7992f812d60e24232
Do they report results only on English data?
[ "Yes" ]
[ [ "Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library BIBREF10 , while for tweets we used the NLTK tweets tokenizer BIBREF11 . For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling. We update the weights using SGD with a linearly decaying learning rate.", "Unsupervised Similarity Evaluation Results. In Table TABREF19 , we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance. Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items. Also, C-PHRASE uses data three times the size of the Toronto book corpus. Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table TABREF21 , despite the fact that we use no parse tree information. Official STS 2017 benchmark. In the official results of the most recent edition of the STS 2017 benchmark BIBREF35 , our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method." ] ]
37eba8c3cfe23778498d95a7dfddf8dfb725f8e2
Which other unsupervised models are used for comparison?
[ "Sequential (Denoising) Autoencoder, TF-IDF BOW, SkipThought, FastSent, Siamese C-BOW, C-BOW, C-PHRASE, ParagraphVector" ]
[ [ "We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.", "Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.", "We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.", "The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.", "BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.", "The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. 
Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .", "FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.", "In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.", "Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.", "In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. Table TABREF21 in the last column shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.", "Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material." ] ]
cdf1bf4b202576c39e063921f6b63dc9e4d6b1ff
What metric is used to measure performance?
[ "Accuracy and F1 score for supervised tasks, Pearson's and Spearman's correlation for unsupervised tasks" ]
[ [ "We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.", "Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.", "Unsupervised Similarity Evaluation. We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs. The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images." ] ]
03f4e5ac5a9010191098d6d66ed9bbdfafcbd013
How do the n-gram features incorporate compositionality?
[ "by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words" ]
[ [ "We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.", "Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0" ] ]
9a9338d0e74fd315af643335e733445031bd7656
Which dataset do they use?
[ " AMI IHM meeting corpus" ]
[ [ "Experiments were conducted using the AMI IHM meeting corpus BIBREF18 to evaluated the speech recognition performance of various language models. The Kaldi training data configuration was used. A total of 78 hours of speech was used in acoustic model training. This consists of about 1M words of acoustic transcription. Eight meetings were excluded from the training set and used as the development and test sets." ] ]
3103502cf07726d3eeda34f31c0bdf1fc0ae964e
How do Zipf and Herdan-Heap's laws differ?
[ "Zipf's law describes change of word frequency rate, while Heaps-Herdan describes different word number in large texts (assumed that Hepas-Herdan is consequence of Zipf's)" ]
[ [ "Statistical characterization of languages has been a field of study for decadesBIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Even simple quantities, like letter frequency, can be used to decode simple substitution cryptogramsBIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11. These kind of universal results have long piqued the interest of physicists and mathematicians, as well as linguistsBIBREF12, BIBREF13, BIBREF14. Indeed, a large amount of effort has been devoted to try to understand the origin of Zipf's law, in some cases arguing that it arises from the fact that texts carry information BIBREF15, all the way to arguing that it is the result of mere chance BIBREF16, BIBREF17. Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary -that is, the set of different words- grows with the size of a text, the number of which, empirically, has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence Zipf's law. BIBREF20, BIBREF21" ] ]
aaec98481defc4c230f84a64cdcf793d89081a76
What was the best performing baseline?
[ "Lead-3" ]
[ [ "We compared several summarization methods which can be categorized into three groups: unsupervised, non-neural supervised, and neural supervised methods. For the unsupervised methods, we tested:", "As a baseline, we used Lead-N which selects INLINEFORM0 leading sentences as the summary. For all methods, we extracted 3 sentences as the summary since it is the median number of sentences in the gold summaries that we found in our exploratory analysis.", "Table TABREF26 shows the test INLINEFORM0 score of ROUGE-1, ROUGE-2, and ROUGE-L of all the tested models described previously. The mean and standard deviation (bracketed) of the scores are computed over the 5 folds. We put the score obtained by an oracle summarizer as Oracle. Its summaries are obtained by using the true labels. This oracle summarizer acts as the upper bound of an extractive summarizer on our dataset. As we can see, in general, every scenario of NeuralSum consistently outperforms the other models significantly. The best scenario is NeuralSum with word embedding size of 300, although its ROUGE scores are still within one standard deviation of NeuralSum with the default word embedding size. Lead-3 baseline performs really well and outperforms almost all the other models, which is not surprising and even consistent with other work that for news summarization, Lead-N baseline is surprisingly hard to beat. Slightly lower than Lead-3 are LexRank and Bayes, but their scores are still within one standard deviation of each other so their performance are on par. This result suggests that a non-neural supervised summarizer is not better than an unsupervised one, and thus if labeled data are available, it might be best to opt for a neural summarizer right away. We also want to note that despite its high ROUGE, every NeuralSum scenario scores are still considerably lower than Oracle, hinting that it can be improved further. Moreover, initializing with FastText pre-trained embedding slightly lowers the scores, although they are still within one standard deviation. This finding suggests that the effect of FastText pre-trained embedding is unclear for our case." ] ]
69b41524dc5820143e45f2f3545cd5c0a70e2922
Which approaches did they use?
[ "SumBasic, Lsa, LexRank, TextRank, Bayes, Hmm, MaxEnt, NeuralSum, Lead-N" ]
[ [ "We compared several summarization methods which can be categorized into three groups: unsupervised, non-neural supervised, and neural supervised methods. For the unsupervised methods, we tested:", "SumBasic, which uses word frequency to rank sentences and selects top sentences as the summary BIBREF13 , BIBREF14 .", "Lsa, which uses latent semantic analysis (LSA) to decompose the term-by-sentence matrix of a document and extracts sentences based on the result. We experimented with the two approaches proposed in BIBREF15 and BIBREF16 respectively.", "LexRank, which constructs a graph representation of a document, where nodes are sentences and edges represent similarity between two sentences, and runs PageRank algorithm on that graph and extracts sentences based on the resulting PageRank values BIBREF17 . In the original implementation, sentences shorter than a certain threshold are removed. Our implementation does not do this removal to reduce the number of tunable hyperparameters. Also, it originally uses cross-sentence informational subsumption (CSIS) during sentence selection stage but the paper does not explain it well. Instead, we used an approximation to CSIS called cross-sentence word overlap described in BIBREF18 by the same authors.", "TextRank, which is very similar to LexRank but computes sentence similarity based on the number of common tokens BIBREF19 .", "Bayes, which represents each sentence as a feature vector and uses naive Bayes to classify them BIBREF5 . The original paper computes TF-IDF score on multi-word tokens that are identified automatically using mutual information. We did not do this identification, so our TF-IDF computation operates on word tokens.", "Hmm, which uses hidden Markov model where states correspond to whether the sentence should be extracted BIBREF20 . The original work uses QR decomposition for sentence selection but our implementation does not. We simply ranked the sentences by their scores and picked the top 3 as the summary.", "MaxEnt, which represents each sentence as a feature vector and leverages maximum entropy model to compute the probability of a sentence should be extracted BIBREF21 . The original approach puts a prior distribution over the labels but we put the prior on the weights instead. Our implementation still agrees with the original because we employed a bias feature which should be able to learn the prior label distribution.", "As for the neural supervised method, we evaluated NeuralSum BIBREF11 using the original implementation by the authors. We modified their implementation slightly to allow for evaluating the model with ROUGE. Note that all the methods are extractive. Our implementation code for all the methods above is available online.", "As a baseline, we used Lead-N which selects INLINEFORM0 leading sentences as the summary. For all methods, we extracted 3 sentences as the summary since it is the median number of sentences in the gold summaries that we found in our exploratory analysis." ] ]
72122e0bc5da1d07c0dadb3401aab2acd748424d
What is the size of the dataset?
[ "20K" ]
[ [ "In this work, we introduce IndoSum, a new benchmark dataset for Indonesian text summarization, and evaluated several well-known extractive single-document summarization methods on the dataset. The dataset consists of online news articles and has almost 200 times more documents than the next largest one of the same domain BIBREF2 . To encourage further research in this area, we make our dataset publicly available. In short, the contribution of this work is two-fold:", "We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has the title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of 2 native speakers of Indonesian. There are 6 categories in total: Entertainment, Inspiration, Sport, Showbiz, Headline, and Tech. A sample article-summary pair is shown in Fig. FIGREF4 ." ] ]
1af4d56eeaf74460ca2c621a2ad8a5d8dbac491c
Did they use a crowdsourcing platform for the summaries?
[ "No" ]
[ [ "We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has the title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of 2 native speakers of Indonesian. There are 6 categories in total: Entertainment, Inspiration, Sport, Showbiz, Headline, and Tech. A sample article-summary pair is shown in Fig. FIGREF4 ." ] ]
3f5f74c39a560b5d916496e05641783c58af2c5d
How are the synthetic examples generated?
[ "Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation and randomly drop out" ]
[ [ "One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(, \\tilde{})$ by randomly perturbing 1.8 million segments $$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\\tilde{}$. Let us describe those techniques." ] ]
07f5e360e91b99aa2ed0284d7d6688335ed53778
Do they measure the number of created No-Arc long sequences?
[ "No" ]
[ [] ]
11dde2be9a69a025f2fc29ce647201fb5a4df580
By how much does the new parser outperform the current state-of-the-art?
[ "Proposed method achieves 94.5 UAS and 92.4 LAS compared to 94.3 and 92.2 of best state-of-the -art greedy based parser. Best state-of-the art parser overall achieves 95.8 UAS and 94.6 LAS." ]
[ [ "Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).", "We even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. This may be because our system takes into consideration any permissible attachment between the focus word INLINEFORM0 and any word in INLINEFORM1 at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, ( INLINEFORM2 ), as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing." ] ]
bcce5eef9ddc345177b3c39c469b4f8934700f80
Do they evaluate only on English datasets?
[ "Yes" ]
[ [ "Twitter data: We used the Twitter API to scrap tweets with hashtags. For instance, for Bitcoin, the #BTC and #Bitcoin tags were used. The Twitter API only allows a maximum of 450 requests per 15 minute and historical data up to 7 days. Throughout our project we collect data for almost 30 days. Bitcoin had about 25000 tweets per day amounting to a total of approximately 10 MB of data daily. For each tweet, the ID, text, username, number of followers, number of retweets, creation date and time was also stored. All non-English tweets were filtered out by the API. We further processed the full tweet text by removing links, images, videos and hashtags to feed in to the algorithm." ] ]
d3092f78bdbe7e741932e3ddf997e8db42fa044c
What experimental evaluation is used?
[ "root mean square error between the actual and the predicted price of Bitcoin for every minute" ]
[ [ "Once the KryptoOracle engine was bootstrapped with historical data, the real time streamer was started. The real-time tweets scores were calculated in the same way as the historical data and summed up for a minute and sent to the machine learning model with the Bitcoin price in the previous minute and the rolling average price. It predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS value was calculated and the machine learning model updated itself to predict with better understanding the next value. All the calculated values were then stored back to the Spark training RDD for storage. The RDD persisted all the data while training and check-pointed itself to the Hive database after certain period of time." ] ]
e2427f182d7cda24eb7197f7998a02bc80550f15
How is the architecture fault-tolerant?
[ "By using Apache Spark which stores all executions in a lineage graph and recovers to the previous steady state from any fault" ]
[ [ "KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. Spark applications run as separate processes on different clusters and are coordinated by the Spark object also referred to as the SparkContext. This element is the main driver of the program which connects with the cluster manager and helps acquire executors on different nodes to allocate resource across applications. Spark is highly scalable, being 100x faster than Hadoop on large datasets, and provides out of the box libraries for both streaming and machine learning.", "Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie in the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require for the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state." ] ]
0457242fb2ec33446799de229ff37eaad9932f2a
Which elements of the platform are modular?
[ "handling large volume incoming data, sentiment analysis on tweets and predictive online learning" ]
[ [ "In this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety." ] ]
5e997d4499b18f1ee1ef6fa145cadbc018b8dd87
What is the source of memes?
[ "Google Images, Reddit Memes Dataset" ]
[ [ "We built a dataset for the task of hate speech detection in memes with 5,020 images that were weakly labeled into hate or non-hate memes, depending on their source. Hate memes were retrieved from Google Images with a downloading tool. We used the following queries to collect a total of 1,695 hate memes: racist meme (643 memes), jew meme (551 memes), and muslim meme (501 Memes). Non-hate memes were obtained from the Reddit Memes Dataset . We assumed that all memes in the dataset do not contain any hate message, as we considered that average Reddit memes do not belong to this class. A total of 3,325 non-hate memes were collected. We split the dataset into train (4266 memes) and validation (754 memes) subsets. The splits were random and the distribution of classes in the two subsets is the same. We didn't split the dataset into three subsets because of the small amount of data we had and decided to rely on the validation set metrics." ] ]
12c7d79d2a26af2d445229d0c8ba3ba1aab3f5b5
Is the dataset multimodal?
[ "Yes" ]
[ [ "The text and image encodings were combined by concatenation, which resulted in a feature vector of 4,864 dimensions. This multimodal representation was afterward fed as input into a multi-layer perceptron (MLP) with two hidden layer of 100 neurons with a ReLU activation function. The last single neuron with no activation function was added at the end to predict the hate speech detection score." ] ]
98daaa9eaa1e1e574be336b8933b861bfd242e5e
How is each instance of the dataset annotated?
[ "weakly labeled into hate or non-hate memes, depending on their source" ]
[ [ "We built a dataset for the task of hate speech detection in memes with 5,020 images that were weakly labeled into hate or non-hate memes, depending on their source. Hate memes were retrieved from Google Images with a downloading tool. We used the following queries to collect a total of 1,695 hate memes: racist meme (643 memes), jew meme (551 memes), and muslim meme (501 Memes). Non-hate memes were obtained from the Reddit Memes Dataset . We assumed that all memes in the dataset do not contain any hate message, as we considered that average Reddit memes do not belong to this class. A total of 3,325 non-hate memes were collected. We split the dataset into train (4266 memes) and validation (754 memes) subsets. The splits were random and the distribution of classes in the two subsets is the same. We didn't split the dataset into three subsets because of the small amount of data we had and decided to rely on the validation set metrics." ] ]
a93196fb0fb5f8202912971e14552fd7828976db
Which dataset do they use for text modelling?
[ "Penn Treebank (PTB), end-to-end (E2E) text generation corpus" ]
[ [ "We evaluate our model on two public datasets, namely, Penn Treebank (PTB) BIBREF9 and the end-to-end (E2E) text generation corpus BIBREF10, which have been used in a number of previous works for text generation BIBREF0, BIBREF5, BIBREF11, BIBREF12. PTB consists of more than 40,000 sentences from Wall Street Journal articles whereas the E2E dataset contains over 50,000 sentences of restaurant reviews. The statistics of these two datasets are summarised in Table TABREF11." ] ]
983c2fe7bdbf471bb8b15db858fd2cbec86b96a5
Do they compare against state of the art text generation?
[ "Yes" ]
[ [ "We compare our HR-VAE model with three strong baselines using VAE for text modelling:", "VAE-LSTM-base: A variational autoencoder model which uses LSTM for both encoder and decoder. KL annealing is used to tackled the latent variable collapse issue BIBREF0;", "VAE-CNN: A variational autoencoder model with a LSTM encoder and a dilated CNN decoder BIBREF7;", "vMF-VAE: A variational autoencoder model using LSTM for both encoder and decoder where the prior distribution is the von Mises-Fisher (vMF) distribution rather than a Gaussian distribution BIBREF5." ] ]
a5418e4af99a2cbd6b7a2b8041388a2d01b8efb2
How do they evaluate generated text quality?
[ "Loss analysis. To conduct a more thorough evaluation, we further investigate model behaviours in terms of both reconstruction loss and KL loss, as shown in Figure FIGREF14. These plots were obtained based on the E2E training set using the inputless setting." ]
[ [ "Loss analysis. To conduct a more thorough evaluation, we further investigate model behaviours in terms of both reconstruction loss and KL loss, as shown in Figure FIGREF14. These plots were obtained based on the E2E training set using the inputless setting." ] ]
9e9e9e0a52563b42e96b8c89ea12f5a916daa7f0
Was the system only evaluated over the second shared task?
[ "No" ]
[ [ "In recent years, there has been a rapid growth in the usage of social media. People post their day-to-day happenings on regular basis. BIBREF0 propose four tasks for detecting drug names, classifying medication intake, classifying adverse drug reaction and detecting vaccination behavior from tweets. We participated in the Task2 and Task4." ] ]
b540cd4fe9dc4394f64d5b76b0eaa4d9e30fb728
Could you tell me more about the metrics used for performance evaluation?
[ "BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy" ]
[ [ "The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence. We use the standard overall accuracy to evaluate the performance.", "The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences. The relations with types were compared to annotated data. We use the standard micro-average precision, recall, and F1-score metrics.", "The aim of the named entity recognition task is to predict mention spans given in the text BIBREF20 . The results are evaluated through a comparison of the set of mention spans annotated within the document with the set of mention spans predicted by the model. We evaluate the results by using the strict version of precision, recall, and F1-score. For disjoint mentions, all spans also must be strictly correct. To construct the dataset, we used spaCy to split the text into a sequence of tokens when the original datasets do not provide such information.", "The sentence similarity task is to predict similarity scores based on sentence pairs. Following common practice, we evaluate similarity by using Pearson correlation coefficients.", "HoC (the Hallmarks of Cancers corpus) consists of 1,580 PubMed abstracts annotated with ten currently known hallmarks of cancer BIBREF27 . Annotation was performed at sentence level by an expert with 15+ years of experience in cancer research. We use 315 ( $\\sim $ 20%) abstracts for testing and the remaining abstracts for training. For the HoC task, we followed the common practice and reported the example-based F1-score on the abstract level BIBREF28 , BIBREF29 ." ] ]
41173179efa6186eef17c96f7cbd8acb29105b0e
which tasks are used in BLUE benchmark?
[ "Inference task\nThe aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence, Document multilabel classification\nThe multilabel classification task predicts multiple labels from the texts., Relation extraction\nThe aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences., Named entity recognition\nThe aim of the named entity recognition task is to predict mention spans given in the text , Sentence similarity\nThe sentence similarity task is to predict similarity scores based on sentence pairs" ]
[ [] ]
0bd683c51a87a110b68b377e9a06f0a3e12c8da0
What are the tasks that this method has shown improvements?
[ "bilingual dictionary induction, monolingual and cross-lingual word similarity, and cross-lingual hypernym discovery" ]
[ [ "Our experimental results show that the proposed additional transformation does not only benefit cross-lingual evaluation tasks, but, perhaps surprisingly, also monolingual ones. In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.", "Tables 2 and 3 show the monolingual and cross-lingual word similarity results, respectively. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. Moreover, in cases where no improvement is obtained, the differences tend to be minimal, with the exception of RG-65, but this is a very small test set for which larger variations can thus be expected. In contrast, there are a few cases where substantial gains were obtained by using our model. This is most notable for English WordSim and SimLex in the monolingual setting.", "As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. The higher scores indicate that the two monolingual embedding spaces become more tightly integrated because of our additional transformation. It is worth highlighting here the case of English-Finnish, where the gains obtained in $P@5$ and $P@10$ are considerable. This might indicate that our approach is especially useful for morphologically richer languages such as Finnish, where the limitations of the previous bilingual mappings are most apparent.", "The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available." ] ]
a979749e59e6e300a453d8a8b1627f97101799de
Why does the model improve in monolingual spaces as well?
[ "because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space" ]
[ [ "As a complement of this analysis we show some qualitative results which give us further insights on the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice." ] ]
b10632eaa0ca48f86522d8ec38b1d702cb0b8c01
What are the categories being extracted?
[ "Unanswerable" ]
[ [] ]
8fa7011e7beaa9fb4083bf7dd75d1216f9c7b2eb
Do the authors test their annotation projection techniques on tasks other than AMR?
[ "No" ]
[ [ "Cross-lingual techniques can cope with the lack of labeled data on languages when this data is available in at least one language, usually English. The annotation projection method, which we follow in this work, is one way to address this problem. It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis BIBREF29 but it has also been used for dependency parsing BIBREF30 , role labeling BIBREF31 , BIBREF32 and semantic parsing BIBREF26 . Another common thread of cross-lingual work is model transfer, where parameters are shared across languages BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 ." ] ]
e0b7acf4292b71725b140f089c6850aebf2828d2
How is annotation projection done when languages have different word order?
[ "Word alignments are generated for parallel text, and aligned words are assumed to also share AMR node alignments." ]
[ [ "AMR is not grounded in the input sentence, therefore there is no need to change the AMR annotation when projecting to another language. We think of English labels for the graph nodes as ones from an independent language, which incidentally looks similar to English. However, in order to train state-of-the-art AMR parsers, we also need to project the alignments between AMR nodes and words in the sentence (henceforth called AMR alignments). We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.", "Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR aligned with an AMR node, then the target word is also aligned to that AMR node. More formally, let $S = s_1 \\dots s_{\\vert s \\vert }$ be the source language sentence and $T = t_1 \\dots t_{\\vert t \\vert }$ be the target language sentence; $A_s(\\cdot )$ be the AMR alignment mapping word tokens in $S$ to the set of AMR nodes that are triggered by it; $A_t(\\cdot )$ be the same function for $T$ ; $v$ be a node in the AMR graph; and finally, $W(\\cdot )$ be an alignment that maps a word in $S$ to a subset of words in $T$ . Then, the AMR projection assumption is: $T = t_1 \\dots t_{\\vert t \\vert }$0", "Word alignments were generated using fast_align BIBREF10 , while AMR alignments were generated with JAMR BIBREF11 . AMREager BIBREF12 was chosen as the pre-existing English AMR parser. AMREager is an open-source AMR parser that needs only minor modifications for re-use with other languages. Our multilingual adaptation of AMREager is available at http://www.github.com/mdtux89/amr-eager-multilingual. It requires tokenization, POS tagging, NER tagging and dependency parsing, which for English, German and Chinese are provided by CoreNLP BIBREF13 . We use Freeling BIBREF14 for Spanish, as CoreNLP does not provide dependency parsing for this language. Italian is not supported in CoreNLP: we use Tint BIBREF15 , a CoreNLP-compatible NLP pipeline for Italian." ] ]
b6ffa18d49e188c454188669987b0a4807ca3018
What is the reasoning method that is used?
[ "SPARQL" ]
[ [ "The DBpedia lookup service, which is based on the Spotlight index BIBREF18 , is used for entity lookup (retrieval). The DBpedia SPARQL endpoint is used for query answering and reasoning. The reported results are based on the following settings: the Adam optimizer together with cross-entropy loss are used for network training; $d_r$ and $d_a$ are set to 200 and 50 respectively; $N_0$ is set to 1200; word2vec trained with the latest Wikipedia article dump is adopted for word embedding; and ( $T_s$ , $T_p$ , $T_l$ ) are set to (12, 4, 12) for S-Lite and (12, 4, 15) for R-Lite. The experiments are run on a workstation with Intel(R) Xeon(R) CPU E5-2670 @2.60GHz, with programs implemented by Tensorflow." ] ]
2b61893b22ac190c94c2cb129e86086888347079
What KB is used in this work?
[ "DBpedia" ]
[ [ "In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art." ] ]
a996b6aee9be88a3db3f4127f9f77a18ed10caba
What's the precision of the system?
[ "0.8320 on semantic typing, 0.7194 on entity matching" ]
[ [] ]
65e2f97f2fe8eb5c2fa41cb95c02b577e8d6e5ee
How did they measure effectiveness?
[ "number of dialogs that resulted in launching a skill divided by total number of dialogs" ]
[ [ "We trained the DQN agent using an $\\epsilon $-greedy policy with $\\epsilon $ decreasing linearly from 1 to $0.1$ over $100,000$ steps. Additionally, we tuned a window size to include previous dialog turns as input and set $\\gamma $ to $0.9$. We ran the method 30 times for $150,000$ steps, and in each run, after every 10,000 steps, we sampled $3,000$ dialog episodes with no exploration to evaluate the performance. The optimal parameters were found using Hyperopt BIBREF23 (see Appendix B). Figure FIGREF9 shows the simulation results during training. The Y-axis in the figure is the success rate of the agent (measured in terms of number of dialogs that resulted in launching a skill divided by total number of dialogs), and the X-axis is the number of learning steps. Given our choice of reward function, the increase in success rate is indicative of the agent learning to improve its policy over time. Furthermore, the RL agent outperformed the rule-based agent with average success rate of $68.00\\% (\\pm 2\\%$) in simulation." ] ]
83f14af3ccca4ab9deb4c6d208f624d1e79dc7eb
Which of the two ensembles yields the best performance?
[ "Answer with content missing: (Table 2) CONCAT ensemble" ]
[ [ "Figure FIGREF5 shows that E-BERT performs comparable to BERT and ERNIE on unfiltered LAMA. However, E-BERT is less affected by filtering on LAMA-UHN, suggesting that its performance is more strongly due to factual knowledge. Recall that we lack entity embeddings for 46% of Google-RE subjects, i.e., E-BERT cannot improve over BERT on almost half of the Google-RE tuples." ] ]
0154d8be772193bfd70194110f125813057413a4
What are the two ways of ensembling BERT and E-BERT?
[ "mean-pooling their outputs (AVG), concatenating the entity and its name with a slash symbol (CONCAT)" ]
[ [ "We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is." ] ]
e737cfe0f6cfc6d3ac6bec32231d9c893bfc3fc9
How is it determined that a fact is easy-to-guess?
[ " filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch), person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them" ]
[ [ "Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This simple heuristic deletes up to 81% of triples from individual relations (see Appendix for statistics and examples).", "Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them. Consider our previous example (Jean_Marais, native-language, French). We whitespace-tokenize the subject name into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean_Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given nonetheless, then we consider this sufficient evidence for factual knowledge." ] ]
42be49b883eba268e3dbc5c3ff4631442657dcbb
How is dependency parsing empirically verified?
[ " At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks." ]
[ [ "We evaluate our model on two benchmark treebanks, English Penn Treebank (PTB) and Chinese Penn Treebank (CTB5.1) following standard data splitting BIBREF30, BIBREF31. POS tags are predicted by the Stanford Tagger BIBREF32. For constituent parsing, we use the standard evalb tool to evaluate the F1 score. For dependency parsing, we apply Stanford basic dependencies (SD) representation BIBREF4 converted by the Stanford parser. Following previous work BIBREF27, BIBREF33, we report the results without punctuations for both treebanks.", "Tables TABREF17, TABREF18 and TABREF19 compare our model to existing state-of-the-art, in which indicator Separate with our model shows the results of our model learning constituent or dependency parsing separately, (Sum) and (Concat) respectively represent the results with the indicated input token representation setting. On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately. In addition, we also augment our model with ELMo BIBREF48 or a larger version of BERT BIBREF49 as the sole token representation to compare with other pre-training models. Since BERT is based on sub-word, we only take the last sub-word vector of the word in the last layer of BERT as our sole token representation $x_i$. Moreover, our single model of BERT achieves competitive performance with other ensemble models.", "Multitask learning (MTL) is a natural solution in neural models for multiple inputs and multiple outputs, which is adopted in this work to decode constituent and dependency in a single model. BIBREF15 indicates that when tasks are sufficiently similar, especially with syntactic nature, MTL would be useful. In contrast to previous work on deep MTL BIBREF16, BIBREF17, our model focuses on more related tasks and benefits from the strong inherent relation. At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks." ] ]
8d4f0815f8a23fe45c298c161fc7a27f3bb0d338
How are different network components evaluated?
[ "For different numbers of shared layers, the results are in Table TABREF14. We respectively disable the constituent and the dependency parser to obtain a separate learning setting for both parsers in our model. " ]
[ [ "Shared Self-attention Layers As our model providing two outputs from one input, there is a bifurcation setting for how much shared part should be determined. Both constituent and dependency parsers share token representation and 8 self-attention layers at most. Assuming that either parser always takes input information flow through 8 self-attention layers as shown in Figure FIGREF4, then the number of shared self-attention layers varying from 0 to 8 may reflect the shared degree in the model. When the number is set to 0, it indicates only token representation is shared for both parsers trained for the joint loss through each own 8 self-attention layers. When the number is set to less than 8, for example, 6, then it means that both parsers first shared 6 layers from token representation then have individual 2 self-attention layers.", "For different numbers of shared layers, the results are in Table TABREF14. We respectively disable the constituent and the dependency parser to obtain a separate learning setting for both parsers in our model. The comparison in Table TABREF14 indicates that even though without any shared self-attention layers, joint training of our model may significantly outperform separate learning mode. At last, the best performance is still obtained from sharing full 8 self-attention layers." ] ]
a6665074b067abb2676d5464f36b2cb07f6919d3
What are the performances obtained for PTB and CTB?
[ ". On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing., On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing." ]
[ [ "Tables TABREF17, TABREF18 and TABREF19 compare our model to existing state-of-the-art, in which indicator Separate with our model shows the results of our model learning constituent or dependency parsing separately, (Sum) and (Concat) respectively represent the results with the indicated input token representation setting. On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately. In addition, we also augment our model with ELMo BIBREF48 or a larger version of BERT BIBREF49 as the sole token representation to compare with other pre-training models. Since BERT is based on sub-word, we only take the last sub-word vector of the word in the last layer of BERT as our sole token representation $x_i$. Moreover, our single model of BERT achieves competitive performance with other ensemble models." ] ]
b0fbd4b0f02b877a0d3df1d8ccc47d90dd49147c
What are the models used to perform constituency and dependency parsing?
[ "token representation, self-attention encoder,, Constituent Parsing Decoder, Dependency Parsing Decoder" ]
[ [ "Using an encoder-decoder backbone, our model may be regarded as an extension of the constituent parsing model of BIBREF18 as shown in Figure FIGREF4. The difference is that in our model both constituent and dependency parsing share the same token representation and shared self-attention layers and each has its own individual Self-Attention Layers and subsequent processing layers. Our model includes four modules: token representation, self-attention encoder, constituent and dependency parsing decoder.", "Our Model ::: Constituent Parsing Decoder", "Our Model ::: Dependency Parsing Decoder" ] ]
3288a50701a80303fd71c8c5ede81cbee14fa2c7
Is the proposed layer smaller in parameters than a Transformer?
[ "No" ]
[ [] ]
22b8836cb00472c9780226483b29771ae3ebdc87
What is the new initialization method proposed in this paper?
[ "They initialize their word and entity embeddings with vectors pre-trained over a large corpus of unlabeled data." ]
[ [ "Training our model implicitly embeds the vocabulary of words and collection of entities in a common space. However, we found that explicitly initializing these embeddings with vectors pre-trained over a large collection of unlabeled data significantly improved performance (see Section \"Effects of initialized embeddings and corrupt-sampling schemes\" ). To this end, we implemented an approach based on the Skip-Gram with Negative-Sampling (SGNS) algorithm by mikolov2013distributed that simultaneously trains both word and entity vectors." ] ]
540e9db5595009629b2af005e3c06610e1901b12
How was a quality control performed so that the text is noisy but the annotations are accurate?
[ "The authors believe that the Wikilinks corpus contains ground truth annotations while being noisy. They discard mentions that cannot have ground-truth verified by comparison with Wikipedia." ]
[ [ "Wikilinks can be seen as a large-scale, naturally-occurring, crowd-sourced dataset where thousands of human annotators provide ground truths for mentions of interest. This means that the dataset contains various kinds of noise, especially due to incoherent contexts. The contextual noise presents an interesting test-case that supplements existing datasets that are sourced from mostly coherent and well-formed text.", "We prepare our dataset from the local-context version of Wikilinks, and resolve ground-truth links using a Wikipedia dump from April 2016. We use the page and redirect tables for resolution, and keep the database pageid column as a unique identifier for Wikipedia entities. We discard mentions where the ground-truth could not be resolved (only 3% of mentions)." ] ]
bd1a3c651ca2b27f283d3f36df507ed4eb24c2b0
Is it a neural model? How is it trained?
[ "No, it is a probabilistic model trained by finding feature weights through gradient ascent" ]
[ [ "Here we describe the components of our probabilistic model of question generation. Section \"Compositionality and computability\" describes two key elements of our approach, compositionality and computability, as reflected in the choice to model questions as programs. Section \"A grammar for producing questions\" describes a grammar that defines the space of allowable questions/programs. Section \"Probabilistic generative model\" specifies a probabilistic generative model for sampling context-sensitive, relevant programs from this space. The remaining sections cover optimization, the program features, and alternative models (Sections \"Optimization\" - \"Alternative models\" ).", "The details of optimization are as follows. First, a large set of 150,000 questions is sampled in order to approximate the gradient at each step via importance sampling. Second, to run the procedure for a given model and training set, we ran 100,000 iterations of gradient ascent at a learning rate of 0.1. Last, for the purpose of evaluating the model (computing log-likelihood), the importance sampler is also used to approximate the normalizing constant in Eq. 14 via the estimator $Z \\approx \\mathbb {E}_{x\\sim q}[\\frac{p(x;\\mathbf {\\theta })}{q(x)}]$ ." ] ]
5a2c0c55a43dcc0b9439d330d2cbe1d5d444bf36
How do people engage in Twitter threads on different types of news?
[ "Unanswerable" ]
[ [] ]
0c78d2fe8bc5491b5fd8a2166190c59eba069ced
How are the clusters related to security, violence and crime identified?
[ "Yes" ]
[ [] ]
d2473c039ab85f8e9e99066894658381ae852e16
What are the features used to customize target user interaction?
[ "image feature, question feature, label vector for the user's answer" ]
[ [ "We represent each instance of image, question, and user choice as a triplet consisting of image feature, question feature, and the label vector for the user's answer. In addition, collecting multiple choices from identical users enables us to represent any two instances by the same user as a pair of triplets, assuming source-target relation. With these pairs of triplets, we can train the system to predict a user's choice on a new image and a new question, given the same user's choice on the previous image and its associated question. User's choice $x_{ans_i}$ is represented as one-hot vector where the size of the vector is equal to the number of possible choices. We refer to the fused feature representation of this triplet consisting of image, question, and the user's choice as choice vector.", "As discussed earlier, we attempt to reflect user's interest by asking questions that provide visual context. The foremost prerequisite for the interactive questions to perform that function is the possibility of various answers or interpretations. In other words, a question whose answer is so obvious that it can be answered in an identical way would not be valid as an interactive question. In order to make sure that each generated question allows for multiple possible answers, we internally utilize the VQA module. The question generated by the VQG module is passed on to VQA module, where the probability distribution $p_{ans}$ for all candidate answers $C$ is determined. If the most likely candidate $c_i=\\max p_{ans}$ , where $c_i \\in C$ , has a probability of being answer over a certain threshold $\\alpha $ , then the question is considered to have a single obvious answer, and is thus considered ineligible. The next question generated by VQG is passed on to VQA to repeat the same process until the the following requirement is met:", "In our experiments, we set $\\alpha $ as 0.33. We also excluded the yes/no type of questions. Figure 4 illustrates an example of a question where the most likely answer had a probability distribution over the threshold (and is thus ineligible), and another question whose probability distribution over the candidate answers was more evenly distributed (and thus proceeds to narrative generation stage)." ] ]
5d6cc65b73f428ea2a499bcf91995ef5441f63d4
How do they evaluate the quality of the generated output?
[ "Through human evaluation where they are asked to evaluate the generated output on a likert scale." ]
[ [ "Automatic evaluation of Natural Language Generation (NLG) systems is a challenging task BIBREF22. For QG, $n$-gram based similarity metrics are commonly used. These measures evaluate how similar the generated text is to the corresponding reference(s). While they are known to suffer from several shortcomings BIBREF29, BIBREF30, they allow to evaluate specific properties of the developed models. In this work, the metrics detailed below are proposed and we evaluate their quality through a human evaluation in subsection SECREF32.", "In addition to the automatic metrics, we proceeded to a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also to measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pre-training (PT) or (RL), and all combinations of those methods.", "Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 likert scale, to measure to what extent the generated questions were: answerable by looking at their context; grammatically correct; how much external knowledge is required to answer; relevant to their context; and, semantically sound. The results of the human evaluation are reported in Table TABREF33." ] ]
0a8bc204a76041a25cee7e9f8e2af332a17da67a
What automated metrics do the authors investigate?
[ "BLEU, Self-BLEU, n-gram based score, probability score" ]
[ [ "One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).", "Within the field of Computational Creativity, Diversity is considered a desirable property BIBREF31. Indeed, generating always the same question such as “What is the meaning of the universe?\" would be an undesirable behavior, reminiscent of the “collapse mode\" observed in Generative Adversarial Networks (GAN) BIBREF32. Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence $s_i$, a BLEU score is computed using $s_i$ as hypothesis while the other generated sentences are used as reference. When averaged over all the references, it thus provides a measure of how diverse the sentences are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper.", "Therefore, given a question-context pair as input to a QA model, two type of metrics can be computed:", "n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.", "probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer." ] ]
81686454f215e28987c7ad00ddce5ffe84b37195
What supervised models are experimented with?
[ "Unanswerable" ]
[ [] ]
fc06502fa62803b62f6fd84265bfcfb207c1113b
Who annotated the data?
[ "annotators who were not security experts, researchers in either NLP or computer security" ]
[ [ "We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .", "Once we had defined the annotation standard, we annotated datasets from Darkode, Hack Forums, Blackhat, and Nulled as described in Table TABREF3 . Three people annotated every post in the Darkode training, Hack Forums training, Blackhat test, and Nulled test sets; these annotations were then merged into a final annotation by majority vote. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation." ] ]
ce807a42370bfca10fa322d6fa772e4a58a8dca1
What are the four forums the data comes from?
[ "Darkode, Hack Forums, Blackhat and Nulled." ]
[ [] ]
f91835f17c0086baec65ebd99d12326ae1ae87d2
How do they obtain parsed source sentences?
[ "Stanford CoreNLP BIBREF11 " ]
[ [ "We use an off-the-shelf parser, in this case Stanford CoreNLP BIBREF11 , to create binary constituency parses. These parses are linearized as shown in Table TABREF6 . We tokenize the opening parentheses with the node label (so each node label begins with a parenthesis) but keep the closing parentheses separate from the words they follow. For our task, the parser failed on one training sentence of 5.9 million, which we discarded, and succeeded on all test sentences. It took roughly 16 hours to parse the 5.9 million training sentences." ] ]
14e78db206a8180ea637774aa572b073e3ffa219
What kind of encoders are used for the parsed source sentence?
[ "RNN encoders" ]
[ [ "We propose a multi-source framework for injecting linearized source parses into NMT. This model consists of two identical RNN encoders with no shared parameters, as well as a standard RNN decoder. For each target sentence, two versions of the source sentence are used: the sequential (standard) version and the linearized parse (lexicalized or unlexicalized). Each of these is encoded simultaneously using the encoders; the encodings are then combined and used as input to the decoder. We combine the source encodings using the hierarchical attention combination proposed by libovicky2017attention. This consists of a separate attention mechanism for each encoder; these are then combined using an additional attention mechanism over the two separate context vectors. This multi-source method is thus able to combine the advantages of both standard RNN-based encodings and syntactic encodings." ] ]
bc1e3f67d607bfc7c4c56d6b9763d3ae7f56ad5b
What is the performance drop of their model when there is no parsed input?
[ " improvements of up to 1.5 BLEU over the seq2seq baseline" ]
[ [ "The proposed models are compared against two baselines. The first, referred to here as seq2seq, is the standard RNN-based neural machine translation system with attention BIBREF0 . This baseline does not use the parsed data.", "The multi-source systems improve strongly over both baselines, with improvements of up to 1.5 BLEU over the seq2seq baseline and up to 1.1 BLEU over the parse2seq baseline. In addition, the lexicalized multi-source systems yields slightly higher BLEU scores than the unlexicalized multi-source systems; this is surprising because the lexicalized systems have significantly longer sequences than the unlexicalized ones. Finally, it is interesting to compare the seq2seq and parse2seq baselines. Parse2seq outperforms seq2seq by only a small amount compared to multi-source; thus, while adding syntax to NMT can be helpful, some ways of doing so are more effective than others." ] ]
e8e00b4c0673af5ab02ec82563105e4157cc54bb
How were their results compared to state-of-the-art?
[ "transformer model achieves higher BLEU score than both Attention encoder-decoder and sequence-sequence model" ]
[ [ "Table TABREF48 shows the BLEU score of all three models based on English-Hindi, Hindi-English on CFILT's test dataset respectively. From the results which we get, it is evident that the transformer model achieves higher BLEU score than both Attention encoder-decoder and sequence-sequence model. Attention encoder-decoder achieves better BLEU score and sequence-sequence model performs the worst out of the three which further consolidates the point that if we are dealing with long source and target sentences then attention mechanism is very much required to capture long term dependencies and we can solely rely on the attention mechanism, overthrowing recurrent cells completely for the machine translation task." ] ]
18ad60f97f53af64cb9db2123c0d8846c57bfa4a
What supports the claim that injected CNN into recurent units will enhance ability of the model to catch local context and reduce ambiguities?
[ "word embeddings to generate a new feature, i.e., summarizing a local context" ]
[ [ "The embedding-wise convolution is to apply a convolution filter w $\\in \\mathbb {R}^{\\mathcal {\\\\}k*d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as", "By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ will be generated. In this paper, we apply a same-length convolution (length of the sentence does not change), i.e. $c \\in \\mathbb {R}^{\\mathcal {\\\\}n*1}$. Then we apply $d$ filters with the same window size to obtain multiple feature maps. So the final output of CNN has the shape of $C \\in \\mathbb {R}^{\\mathcal {\\\\}n*d}$, which is exactly the same size as $n$ word embeddings, which enables us to do exact word-level attention in various tasks." ] ]
87357448ce4cae3c59d4570a19c7a9df4c086bd8
How is CNN injected into recurrent units?
[ "The most simple one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations. Then a GRU layer is applied afterward." ]
[ [ "In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.", "The most simple one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations. Then a GRU layer is applied afterward. We call this model as shallow fusion, because the CNN and RNN are applied linearly without changing inner architectures of both." ] ]
1ccc4f63268aa7841cc6fd23535c9cbe85791007
Are there some results better than state of the art on these tasks?
[ "Yes" ]
[ [ "To verify the effectiveness of our CRU model, we utilize it into two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling, and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit by our CRU model. To further demonstrate the effectiveness of our model, we also tested our CRU in reading comprehension tasks with a strengthened baseline system originated from Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin, and set up new state-of-the-art performances on related datasets. The main contributions of our work are listed as follows." ] ]
afe34e553c3c784dbf02add675b15c27638cdd45
Do experiment results show consistent significant improvement of new approach over traditional CNN and RNN models?
[ "Yes" ]
[ [ "The experimental results are shown in Table TABREF35. As we mentioned before, all RNNs in these models are bi-directional, because we wonder if our bi-CRU could still give substantial improvements over bi-GRU which could capture both history and future information. As we can see that, all variants of our CRU model could give substantial improvements over the traditional GRU model, where a maximum gain of 2.7%, 1.0%, and 1.9% can be observed in three datasets, respectively. We also found that though we adopt a straightforward classification model, our CRU model could outperform the state-of-the-art systems by 0.6%, 0.7%, and 0.8% gains respectively, which demonstrate its effectiveness. By employing more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvements, which is beyond the scope of this paper." ] ]
3f46d8082a753265ec2a88ae8f1beb6651e281b6
What datasets are used for testing sentiment classification and reading comprehension?
[ "CBT NE/CN, MR Movie reviews, IMDB Movie reviews, SUBJ" ]
[ [ "In the sentiment classification task, we tried our model on the following public datasets.", "MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.", "IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.", "SUBJ$^1$ Movie review labeled with subjective or objective BIBREF20.", "We also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25. The CRU model used in these experiments is the deep-enhanced type with the convolutional filter length of 3. In the re-ranking step, we also utilized three features: Global LM, Local LM, Word-class LM, as proposed by BIBREF10, and all LMs are 8-gram trained by SRILM toolkit BIBREF27. For other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setups as BIBREF10 to make the experiments more comparable." ] ]
63d9b12dc3ff3ceb1aed83ce11371bca8aac4e8f
So we do not use pre-trained embedding in this case?
[ "Yes" ]
[ [] ]
0bd864f83626a0c60f5e96b73fb269607afc7c09
How are sentence embeddings incorporated into the speech recognition system?
[ "BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector that is used to calculate a gated embedding and is later combined with the output of the decoder h to provide the gated activations for the next hidden layer." ]
[ [ "There exist many word/sentence embeddings which are publicly available. We can broadly classify them into two categories: (1) non-contextual word embeddings, and (2) contextual word embeddings. Non-contextual word embeddings, such as Word2Vec BIBREF1 , GloVe BIBREF39 , fastText BIBREF17 , maps each word independently on the context of the sentence where the word occur in. Although it is easy to use, it assumes that each word represents a single meaning which is not true in real-word. Contextualized word embeddings, sentence embeddings, such as deep contextualized word representations BIBREF20 , BERT BIBREF22 , encode the complex characteristics and meanings of words in various context by jointly training a bidirectional language model. The BERT model proposed a masked language model training approach enabling them to also learn good “sentence” representation in order to predict the masked word.", "In this work, we explore both types of embeddings to learn conversational-context embeddings as illustrated in Figure 1 . The first method is to use word embeddings, fastText, to generate 300-dimensional embeddings from 10k-dimensional one-hot vector or distribution over words of each previous word and then merge into a single context vector, $e^k_{context}$ . Since we also consider multiple word/utterance history, we consider two simple ways to merge multiple embeddings (1) mean, and (2) concatenation. The second method is to use sentence embeddings, BERT. It is used to a generate single 786-dimensional sentence embedding from 10k-dimensional one-hot vector or distribution over previous words and then merge into a single context vector with two different merging methods. Since our A2W model uses a restricted vocabulary of 10k as our output units and which is different from the external embedding models, we need to handle out-of-vocabulary words. For fastText, words that are missing in the pretrained embeddings we map them to a random multivariate normal distribution with the mean as the sample mean and variance as the sample variance of the known words. For BERT, we use its provided tokenizer to generates byte pair encodings to handle OOV words.", "Using this approach, we can obtain a more dense, informative, fixed-length vectors to encode conversational-context information, $e^k_{context}$ to be used in next $k$ -th utterance prediction.", "We use contextual gating mechanism in our decoder network to combine the conversational-context embeddings with speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of multiple utterances that occur in a conversation. Using these contextual gates can be beneficial to decide how to weigh the different embeddings, conversational-context, word and speech embeddings. Rather than merely concatenating conversational-context embeddings BIBREF6 , contextual gating can achieve more improvement because its increased representational power using multiplicative interactions.", "Figure 2 illustrates our proposed contextual gating mechanism. Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , and let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of current $k$ -th utterance $x^k_{1:T}$ and $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for $n$ -number of preceding utterances ${s_{k-1-n:k-1}}$ . Then using a gating mechanism:", "$$g = \\sigma (e_c, e_w, e_s)$$ (Eq. 
15)", "where $\\sigma $ is a 1 hidden layer DNN with $\\texttt {sigmoid}$ activation, the gated embedding $e$ is calcuated as", "$$e = g \\odot (e_c, e_w, e_s) \\\\ h = \\text{LSTM}(e)$$ (Eq. 16)", "and fed into the LSTM decoder hidden layer. The output of the decoder $h$ is then combined with conversational-context embedding $e_c$ again with a gating mechanism,", "$$g = \\sigma (e_C, h) \\\\ \\hat{h} = g \\odot (e_c, h)$$ (Eq. 17)", "Then the next hidden layer takes these gated activations, $\\hat{h}$ , and so on." ] ]
c77d6061d260f627f2a29a63718243bab5a6ed5a
How different is the dataset size of source and target?
[ "the training dataset is large while the target dataset is usually much smaller" ]
[ [ "The procedure of transfer learning in this work is straightforward and includes two steps. The first step is to pre-train the model on one MCQA dataset referred to as the source task, which usually contains abundant training data. The second step is to fine-tune the same model on the other MCQA dataset, which is referred to as the target task, that we actually care about, but that usually contains much less training data. The effectiveness of transfer learning is evaluated by the model's performance on the target task." ] ]
4c7b29f6e3cc1e902959a1985146ccc0b15fe521
How do you find the entity descriptions?
[ "Wikipedia" ]
[ [ "Adding another source: description-based embeddings. While in this paper, we focus on the contexts and names of entities, there is a textual source of information about entities in KBs which we can also make use of: descriptions of entities. We extract Wikipedia descriptions of FIGMENT entities filtering out the entities ( $\\sim $ 40,000 out of $\\sim $ 200,000) without description." ] ]
b34c60eb4738e0439523bcc679fe0fe70ceb8bde
How is OpenBookQA different from other natural language QA?
[ "in the OpenBookQA setup the open book part is much larger, the open book part is much larger (than a small paragraph) and is not complete as additional common knowledge may be required" ]
[ [ "The OpenBookQA dataset has a collection of questions and four answer choices for each question. The dataset comes with 1326 facts representing an open book. It is expected that answering each question requires at least one of these facts. In addition it requires common knowledge. To obtain relevant common knowledge we use an IR system BIBREF6 front end to a set of knowledge rich sentences. Compared to reading comprehension based QA (RCQA) setup where the answers to a question is usually found in the given small paragraph, in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete as additional common knowledge may be required. This leads to multiple challenges. First, finding the relevant facts in an open book (which is much bigger than the small paragraphs in the RCQA setting) is a challenge. Then, finding the relevant common knowledge using the IR front end is an even bigger challenge, especially since standard IR approaches can be misled by distractions. For example, Table 1 shows a sample question from the OpenBookQA dataset. We can see the retrieved missing knowledge contains words which overlap with both answer options A and B. Introduction of such knowledge sentences increases confusion for the question answering model. Finally, reasoning involving both facts from open book, and common knowledge leads to multi-hop reasoning with respect to natural language text, which is also a challenge." ] ]
9623884915b125d26e13e8eeebe9a0f79d56954b
At what text unit/level were documents processed?
[ "documents are segmented into paragraphs and processed at the paragraph level" ]
[ [ "We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:" ] ]
77db56fee07b01015a74413ca31f19bea7203f0b
What evaluation metrics were used for presenting results?
[ "F$_1$, precision, and recall" ]
[ [ "Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above. We show metrics across all content elements (micro-averaged) as well as broken down by types. For the property lease agreements, we show results on all documents (left) and only over those with unseen templates (right). Examining these results, we see that although there is some degradation in effectiveness between all documents and only unseen templates, it appears that BERT is able to generalize to previously-unseen expressions of the content elements. Specifically, it is not the case that the model is simply memorizing fixed patterns or key phrases—otherwise, we could just craft a bunch of regular expression patterns for this task. This is a nice result that shows off the power of modern neural NLP models." ] ]
c309e87c9e08cf847f31e554577d6366faec1ea0
Was the structure of regulatory filings exploited when training the model?
[ "No" ]
[ [ "We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:" ] ]
81cee2fc6edd9b7bc65bbf6b4aa35782339e6cff
What type of documents are supported by the annotation platform?
[ "Variety of formats supported (PDF, Word...), user can define content elements of document" ]
[ [ "All the capabilities described in this paper come together in an end-to-end cloud-based platform that we have built. The platform has two main features: First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents; a screenshot is shown in Figure FIGREF12. We have invested substantial effort in making the interface as easy to use as possible; for example, annotating content elements is as easy as selecting text from the document. Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators." ] ]
79620a2b4b121b6d3edd0f7b1d4a8cc7ada0b516
What are the state-of-the-art models for the task?
[ "To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset" ]
[ [ "Results of our proposed method, the top three methods in the original Fake News Challenge, and the best-performing methods since the challenge's conclusion on the FNC-I test set are displayed in Table TABREF12. A confusion matrix for our method is presented in the Appendix. To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset. Notably, since the conclusion of the Fake News Challenge in 2017, the weighted-accuracy error-rate has decreased by 8%, signifying improved performance of NLP models and innovations in the domain of stance detection, as well as a continued interest in combating the spread of disinformation." ] ]
2555ca85ff6b56bd09c3919aa6b277eb7a4d4631
Which datasets are used for evaluation?
[ "Stanford Sentiment Treebank" ]
[ [ "In our experiments, we use as input the 2210 tokenized sentences of the Stanford Sentiment Treebank test set BIBREF2 , preprocessing them by lowercasing as was done in BIBREF8 . On five-class sentiment prediction of full sentences (very negative, negative, neutral, positive, very positive) the model achieves 46.3% accuracy, and for binary classification (positive vs. negative, ignoring neutral sentences) the test accuracy is 82.9%." ] ]
d028dcef22cdf0e86f62455d083581d025db1955
What are the strong baselines you have?
[ "optimize single task with no synthetic data" ]
[ [ "We optimized our single-task baseline to get a strong baseline in order to exclude better results in multi-task learning in comparison to single-task learning only because of these two following points: network parameters suit the multi-task learning approach better and a better randomness while training in the multi-task learning. To exclude the first point, we tested different hyperparameters for the single-task baseline. We tested all the combinations of the following hyperparameter values: 256, 512, or 1024 as the sizes for the hidden states of the LSTMs, 256, 512, or 1024 as word embedding sizes, and a dropout of 30 %, 40 %, or 50 %. We used subword units generated by byte-pair encoding (BPE) BIBREF16 as inputs for our model. To avoid bad subword generation for the synthetic datasets, in addition to the training dataset, we considered the validation and test dataset for the generating of the BPE merge operations list. We trained the configurations for 14 epochs and trained every configuration three times. We chose the training with the best quality with regard to the validation F1-score to exclude disadvantages of a bad randomness. We got the best quality with regard to the F1-score with 256 as the size of the hidden states of the LSTMs, 1024 as word embedding size, and a dropout of 30 %. For the batch size, we used 64." ] ]
593e307d9a9d7361eba49484099c7a8147d3dade
What are causal attribution networks?
[ "networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans" ]
[ [ "In this work we compare causal attribution networks derived from three datasets. A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\\rightarrow $ “sickness”) of the causal attribution network." ] ]
6f8881e60fdaca7c1b35a5acc7125994bb1206a3
How accurate is their predictive model?
[ "Unanswerable" ]
[ [] ]
6a7370dd12682434248d006ffe0a72228c439693
How large are the language sets that can be explored using this approach?
[ "Unanswerable" ]
[ [] ]
a71ebd8dc907d470f6bd3829fa949b15b29a0631
How did they ask if a tweet was racist?
[ "if it includes negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture." ]
[ [ "The classification of racist insults presents us with the problem of giving an adequate definition of racism. More so than in other domains, judging whether an utterance is an act of racism is highly personal and does not easily fit a simple definition. The Belgian anti-racist law forbids discrimination, violence and crime based on physical qualities (like skin color), nationality or ethnicity, but does not mention textual insults based on these qualities. Hence, this definition is not adequate for our purposes, since it does not include the racist utterances one would find on social media; few utterances that people might perceive as racist are actually punishable by law, as only utterances which explicitly encourage the use of violence are illegal. For this reason, we use a common sense definition of racist language, including all negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture. In this, we follow paolo2015racist, bonilla2002linguistics and razavi2010offensive, who show that racism is no longer strictly limited to physical or ethnic qualities, but can also include social and cultural aspects." ] ]
1546356a8c5893dc2d298dcbd96d0307731dd54d
What other cross-lingual approaches is the model compared to?
[ "The baseline model BIBREF5 we compare with regards the output space of the model as a subset INLINEFORM2 where INLINEFORM3 is the set of all tag sets seen in this training data." ]
[ [ "Data-driven models for morphological analysis are constructed using training data INLINEFORM0 consisting of INLINEFORM1 training examples. The baseline model BIBREF5 we compare with regards the output space of the model as a subset INLINEFORM2 where INLINEFORM3 is the set of all tag sets seen in this training data. Specifically, they solve the task as a multi-class classification problem where the classes are individual tag sets. In low-resource scenarios, this indicates that INLINEFORM4 and even for those tag sets existing in INLINEFORM5 we may have seen very few training examples. The conditional probability of a sequence of tag sets given the sentence is formulated as a 0th order CRF. DISPLAYFORM0" ] ]
9f5507a8c835c4671020d7d310fff2930d44e75a
What languages are explored?
[ "Danish/Swedish (da/sv), Russian/Bulgarian (ru/bg), Finnish/Hungarian (fi/hu), Spanish/Portuguese (es/pt)" ]
[ [ "We used the Universal Dependencies Treebank UD v2.1 BIBREF0 for our experiments. We picked four low-resource/high-resource language pairs, each from a different family: Danish/Swedish (da/sv), Russian/Bulgarian (ru/bg), Finnish/Hungarian (fi/hu), Spanish/Portuguese (es/pt). Picking languages from different families would ensure that we obtain results that are on average consistent across languages." ] ]