Columns:
question_id: string (length 40)
question: string (length 4 to 171)
answer: sequence
evidence: sequence
6cd01609c8afb425fbed941441a2528123352940
Do they show an example of usage for INFODENS?
[ "Yes" ]
[ [ "The framework can be used as a standalone toolkit without any modifications given the implemented features and classifiers. For example, it can be used to extract features for usage with other machine learning tools, or to evaluate given features with the existing classifiers or regressors. Extending the framework with new feature extractors or classifiers is as simple as a drag and drop placement of the new code files into the feature_extractor and classifer directories respectively. The framework will then detect the new extensions dynamically at runtime. In this section we explore how each use case is handled.", "Since a main use case for the framework is extracting engineered and learned features, it was designed such that developing a new feature extractor would require minimal effort. Figure FIGREF19 demonstrates a simple feature extractor that retrieves the sentence length. More complicated features and learned features are provided in the repository which can be used as a guide for developers. Documentation for adding classifiers and format writers is described in the Wiki of the repository but is left out of this paper due to the limited space." ] ]
7a70fb11cb3449749f0c2c06101965bf5d02c54a
What kind of representation exploration does INFODENS provide?
[ "Unanswerable" ]
[ [] ]
03b939ad70593f6475c56e9be73ba409d33faa62
What models do they compare to?
[ "LEAD, QUERY_SIM, MultiMR, SVR, DocEmb, ISOLATION" ]
[ [ "To evaluate the summarization performance of AttSum, we implement rich extractive summarization methods. Above all, we introduce two common baselines. The first one just selects the leading sentences to form a summary. It is often used as an official baseline of DUC, and we name it “LEAD”. The other system is called “QUERY_SIM”, which directly ranks sentences according to its TF-IDF cosine similarity to the query. In addition, we implement two popular extractive query-focused summarization methods, called MultiMR BIBREF2 and SVR BIBREF20 . MultiMR is a graph-based manifold ranking method which makes uniform use of the sentence-to-sentence relationships and the sentence-to-query relationships. SVR extracts both query-dependent and query-independent features and applies Support Vector Regression to learn feature weights. Note that MultiMR is unsupervised while SVR is supervised. Since our model is totally data-driven, we introduce a recent summarization system DocEmb BIBREF9 that also just use deep neural network features to rank sentences. It initially works for generic summarization and we supplement the query information to compute the document representation.", "To verify the effectiveness of the joint model, we design a baseline called ISOLATION, which performs saliency ranking and relevance ranking in isolation. Specifically, it directly uses the sum pooling over sentence embeddings to represent the document cluster. Therefore, the embedding similarity between a sentence and the document cluster could only measure the sentence saliency. To include the query information, we supplement the common hand-crafted feature TF-IDF cosine similarity to the query. This query-dependent feature, together with the embedding similarity, are used in sentence ranking. ISOLATION removes the attention mechanism, and mixtures hand-crafted and automatically learned features. All these methods adopt the same sentence selection process illustrated in Section \"Sentence Selection\" for a fair comparison." ] ]
940873658ee64e131cafcf9b3d26a45a98920cc2
What is the optimal trading strategy based on reinforcement learning?
[ "Unanswerable" ]
[ [] ]
3d39e57e90903b776389f1b01ca238a6feb877f3
Do the authors give any examples of major events which draw the public's attention and the impact they have on stock price?
[ "Yes" ]
[ [ "Figure FIGREF6 shows the relationship between Tesla stock return and stock sentiment score. According the distribution of the sentiment score, the sentiment on Tesla is slightly skewed towards positive during the testing period. The price has been increased significantly during the testing period, which reflected the positive sentiment. The predicting power of sentiment score is more significant when the sentiment is more extreme and less so when the sentiment is neutral." ] ]
69ef007fc131b04b5b71b0b446db2f77f434f1b3
Which tweets are used to output the daily sentiment signal?
[ "Tesla and Ford are investigated on how Twitter sentiment could impact the stock price" ]
[ [ "There are two options of getting the Tweets. First, Twitter provides an API to download the Tweets. However, rate limit and history limit make it not an option for this paper. Second, scrapping Tweets directly from Twitter website. Using the second option, the daily Tweets for stocks of interest from 2015 January to 2017 June were downloaded.", "For this reason, two companies from the same industry, Tesla and Ford are investigated on how Twitter sentiment could impact the stock price. Tesla is an electronic car company that shows consecutive negative operating cash flow and net income but carries very high expectation from the public. Ford, is a traditional auto maker whose stock prices has been stabilized to represent the company fundamentals.", "To translate each tweet into a sentiment score, the Stanford coreNLP software was used. Stanford CoreNLP is designed to make linguistic analysis accessible to the general public. It provides named Entity Recognition, co-reference and basic dependencies and many other text understanding applications. An example that illustrate the basic functionality of Stanford coreNLP is shown in Figure. FIGREF5" ] ]
20df24165b881f97dc1ac32f343939554dd68011
What is the baseline machine learning prediction approach?
[ "linear logistic regression to a set of stock technical signals" ]
[ [ "To evaluate if the sentiment feature improves the prediction accuracy, a baseline model is defined. The baseline applies linear logistic regression to a set of stock technical signals to predict the following day’s stock return sign (+/‐). No sentiment features are provided to the baseline model." ] ]
551f77b58c48ee826d78b4bf622bb42b039eca8c
What are the weaknesses of their proposed interpretability quantification method?
[ "can be biased by dataset used and may generate categories which are suboptimal compared to human designed categories" ]
[ [ "Our interpretability measurements are based on our proposed dataset SEMCAT, which was designed to be a comprehensive dataset that contains a diverse set of word categories. Yet, it is possible that the precise interpretability scores that are measured here are biased by the dataset used. In general, two main properties of the dataset can affect the results: category selection and within-category word selection. To examine the effects of these properties on interpretability evaluations, we create alternative datasets by varying both category selection and word selection for SEMCAT. Since SEMCAT is comprehensive in terms of the words it contains for the categories, these datasets are created by subsampling the categories and words included in SEMCAT. Since random sampling of words within a category may perturb the capacity of the dataset in reflecting human judgement, we subsample r% of the words that are closest to category centers within each category, where $r \\in \\lbrace 40,60,80,100\\rbrace $ . To examine the importance of number of categories in the dataset we randomly select $m$ categories from SEMCAT where $m \\in \\lbrace 30,50,70,90,110\\rbrace $ . We repeat the selection 10 times independently for each $m$ .", "In contrast to the category coverage, the effects of within-category word coverage on interpretability scores can be more complex. Starting with few words within each category, increasing the number of words is expected to more uniformly sample from the word distribution, more accurately reflect the semantic relations within each category and thereby enhance interpretability scores. However, having categories over-abundant in words might inevitably weaken semantic correlations among them, reducing the discriminability of the categories and interpretability of the embedding. Table 3 shows that, interestingly, changing the category coverage has different effects on the interpretability scores of different types of embeddings. As category word coverage increases, interpretability scores for random embedding gradually decrease while they monotonically increase for the GloVe embedding. For semantic spaces $\\mathcal {I}$ and $\\mathcal {I}^*$ , interpretability scores increase as the category coverage increases up to 80 $\\%$ of that of SEMCAT, then the scores decrease. This may be a result of having too comprehensive categories as argued earlier, implying that categories with coverage of around 80 $\\%$ of SEMCAT are better suited for measuring interpretability. However, it should be noted that the change in the interpretability scores for different word coverages might be effected by non-ideal subsampling of category words. Although our word sampling method, based on words' distances to category centers, is expected to generate categories that are represented better compared to random sampling of category words, category representations might be suboptimal compared to human designed categories." ] ]
74cd51a5528c6c8e0b634f3ad7a9ce366dfa5706
What advantages does their proposed method of quantifying interpretability have over the human-in-the-loop evaluation they compare to?
[ "it is less expensive and quantifies interpretability using continuous values rather than binary evaluations" ]
[ [ "In the word embedding literature, the problem of interpretability has been approached via several different routes. For learning sparse, interpretable word representations from co-occurrence variant matrices, BIBREF21 suggested algorithms based on non-negative matrix factorization (NMF) and the resulting representations are called non-negative sparse embeddings (NNSE). To address memory and scale issues of the algorithms in BIBREF21 , BIBREF22 proposed an online method of learning interpretable word embeddings. In both studies, interpretability was evaluated using a word intrusion test introduced in BIBREF20 . The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension. As an alternative method to incorporate human judgement, BIBREF23 proposed joint non-negative sparse embedding (JNNSE), where the aim is to combine text-based similarity information among words with brain activity based similarity information to improve interpretability. Yet, this approach still requires labor-intensive collection of neuroimaging data from multiple subjects.", "In addition to investigating the semantic distribution in the embedding space, a word category dataset can be also used to quantify the interpretability of the word embeddings. In several studies, BIBREF21 , BIBREF22 , BIBREF20 , interpretability is evaluated using the word intrusion test. In the word intrusion test, for each embedding dimension, a word set is generated including the top 5 words in the top ranks and a noisy word (intruder) in the bottom ranks of that dimension. The intruder is selected such that it is in the top ranks of a separate dimension. Then, human editors are asked to determine the intruder word within the generated set. The editors' performances are used to quantify the interpretability of the embedding. Although evaluating interpretability based on human judgements is an effective approach, word intrusion is an expensive method since it requires human effort for each evaluation. Furthermore, the word intrusion test does not quantify the interpretability levels of the embedding dimensions, instead it yields a binary decision as to whether a dimension is interpretable or not. However, using continuous values is more adequate than making binary evaluations since interpretability levels may vary gradually across dimensions." ] ]
4bf4374135c39d10dafece4bed8ef547dc3bf9f0
How do they generate a graphic representation of a query from a query?
[ "Unanswerable" ]
[ [] ]
e2a637f1d93e1ea9f29c96ff0fc6bc017209065b
How do they gather data for the query explanation problem?
[ "hand crafted by users" ]
[ [ "WikiTableQuestions BIBREF1 is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was constructed by selecting data tables from Wikipedia that contained at least 8 rows and 5 columns. Amazon Mechanical Turk workers were then tasked with writing trivia questions about each table. In contrast to common NLIDB benchmarks BIBREF2 , BIBREF0 , BIBREF15 , WikiTableQuestions contains 22,033 questions and is an order of magnitude larger than previous state-of-the-art datasets. Its questions were not designed by predefined templates but were hand crafted by users, demonstrating high linguistic variance. Compared to previous datasets on knowledge bases it covers nearly 4,000 unique column headers, containing far more relations than closed domain datasets BIBREF15 , BIBREF2 and datasets for querying knowledge bases BIBREF16 . Its questions cover a wide range of domains, requiring operations such as table lookup, aggregation, superlatives (argmax, argmin), arithmetic operations, joins and unions. The complexity of its questions can be shown in Tables TABREF6 and TABREF66 ." ] ]
b3bd217287b8c765b0d461dc283afec779dbf039
Which query explanation method was preferred by the users in terms of correctness?
[ "hybrid approach" ]
[ [ "Results in Table TABREF64 show the correctness rates of these scenarios. User correctness score is superior to that of the baseline parser by 7.5% (from 37.1% to 44.6%), while the hybrid approach outscores both with a correctness of 48.7% improving the baseline by 11.6%. For the user and hybrid correctness we used a INLINEFORM0 test to measure significance. Random queries and tables included in the experiment are presented in Table TABREF66 . We also include a comparison of the top ranked query of the baseline parser compared to that of the user." ] ]
e8647f9dc0986048694c34ab9ce763b3167c3deb
Do they conduct a user study where they show an NL interface with and without their explanation?
[ "No" ]
[ [ "We have examined the effect to which our query explanations can help users improve the correctness of a baseline NL interface. Our user study compares the correctness of three scenarios:", "Parser correctness - our baseline is the percentage of examples where the top query returned by the semantic parser was correct.", "User correctness - the percentage of examples where the user selected a correct query from the top-7 generated by the parser.", "Hybrid correctness - correctness of queries returned by a combination of the previous two scenarios. The system returns the query marked by the user as correct; if the user marks all queries as incorrect it will return the parser's top candidate." ] ]
a0876fcbcb5a5944b412613e885703f14732676c
How do the users in the user studies evaluate reliability of a NL interface?
[ "Unanswerable" ]
[ [] ]
84d36bca06786070e49d3db784e42a51dd573d36
What was the task given to workers?
[ "conceptualization task" ]
[ [ "We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017." ] ]
7af01e2580c332e2b5e8094908df4e43a29c8792
How was lexical diversity measured?
[ "By computing number of unique responses and number of responses divided by the number of unique responses to that question for each of the questions" ]
[ [ "To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.", "Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique but when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B)." ] ]
c78f18606524539e4c573481e5bf1e0a242cc33c
How many responses did they obtain?
[ "1001" ]
[ [ "We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017." ] ]
0cf6d52d7eafd43ff961377572bccefc29caf612
What crowdsourcing platform was used?
[ "AMT" ]
[ [ "We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017." ] ]
ddd6ba43c4e1138156dd2ef03c25a4c4a47adad0
Are results reported only for English data?
[ "No" ]
[ [ "We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. We also analyze the output text and the output label distribution of the models, showing the power of the proposed approach. Finally, we show the cases where the correspondences learned by the proposed approach are still problematic, which can be explained based on the approach we adopt.", "Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. The whole dataset is split into three parts, with 2,400,591 pairs in PART I for training, 10,666 pairs in PART II for validation, and 1,106 pairs in PART III for testing. The authors of the dataset have manually annotated the relevance scores, ranging from 1 to 5, of the text-summary pairs in PART II and PART III. They suggested that only pairs with scores no less than three should be used for evaluation, which leaves 8,685 pairs in PART II, and 725 pairs in PART III. From the statistics of the PART II and PART III, we can see that more than 20% of the pairs are dropped to maintain semantic quality. It indicates that the training set, which has not been manually annotated and checked, contains a huge quantity of unrelated text-summary pairs." ] ]
bd99aba3309da96e96eab3e0f4c4c8c70b51980a
Which existing models does this approach outperform?
[ "RNN-context, SRB, CopyNet, RNN-distract, DRGD" ]
[ [] ]
73bb8b7d7e98ccb88bb19ecd2215d91dd212f50d
What human evaluation method is proposed?
[ "comparing the summary with the text instead of the reference and labeling the candidate bad if it is incorrect or irrelevant" ]
[ [ "More detailed explanation is introduced in Section SECREF2 . Another problem for abstractive text summarization is that the system summary cannot be easily evaluated automatically. ROUGE BIBREF9 is widely used for summarization evaluation. However, as ROUGE is designed for extractive text summarization, it cannot deal with summary paraphrasing in abstractive text summarization. Besides, as ROUGE is based on the reference, it requires high-quality reference summary for a reasonable evaluation, which is also lacking in the existing dataset for Chinese social media text summarization. We argue that for proper evaluation of text generation task, human evaluation cannot be avoided. We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both of the problems of paraphrasing and lack of high-quality reference. The contributions of this work are summarized as follows:", "For text summarization, a common automatic evaluation method is ROUGE BIBREF9 . The generated summary is evaluated against the reference summary, based on unigram recall (ROUGE-1), bigram recall (ROUGE-2), and recall of longest common subsequence (ROUGE-L). To facilitate comparison with the existing systems, we adopt ROUGE as the automatic evaluation method. The ROUGE is calculated on the character level, following the previous work BIBREF1 . However, for abstractive text summarization, the ROUGE is sub-optimal, and cannot assess the semantic consistency between the summary and the source content, especially when there is only one reference for a piece of text. The reason is that the same content may be expressed in different ways with different focuses. Simple word match cannot recognize the paraphrasing. It is the case for all of the existing large-scale datasets. Besides, as aforementioned, ROUGE is calculated on the character level in Chinese text summarization, making the metrics favor the models on the character level in practice. In Chinese, a word is the smallest semantic element that can be uttered in isolation, not a character. In the extreme case, the generated text could be completely intelligible, but the characters could still match. In theory, calculating ROUGE metrics on the word level could alleviate the problem. However, word segmentation is also a non-trivial task for Chinese. There are many kinds of segmentation rules, which will produce different ROUGE scores. We argue that it is not acceptable to introduce additional systematic bias in automatic evaluations, and automatic evaluation for semantically related tasks can only serve as a reference. To avoid the deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can get an accuracy of the good summaries. The proposed evaluation is very simple and straight-forward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . 
In the human evaluation, the text-summary pairs are dispatched to two human annotators who are native speakers of Chinese. As in our setting the summary is evaluated against the reference, the number of the pairs needs to be manually evaluated is four times the number of the pairs in the test set, because we need to compare four systems in total. To decrease the workload and get a hint about the annotation quality at the same time, we adopt the following procedure. We first randomly select 100 pairs in the validation set for the two human annotators to evaluate. Each pair is annotated twice, and the inter-annotator agreement is checked. We find that under the protocol, the inter-annotator agreement is quite high. In the evaluation of the test set, a pair is only annotated once to accelerate evaluation. To further maintain consistency, summaries of the same source content will not be distributed to different annotators." ] ]
86e3136271a7b93991c8de5d310ab15a6ac5ab8c
How is human evaluation performed, what were the criteria?
[ "(1) Good (3 points): The response is grammatical, semantically relevant to the query, and more importantly informative and interesting, (2) Acceptable (2 points): The response is grammatical, semantically relevant to the query, but too trivial or generic, (3) Failed (1 point): The response has grammar mistakes or irrelevant to the query" ]
[ [ "Three annotators from a commercial annotation company are recruited to conduct our human evaluation. Responses from different models are shuffled for labeling. 300 test queries are randomly selected out, and annotators are asked to independently score the results of these queries with different points in terms of their quality: (1) Good (3 points): The response is grammatical, semantically relevant to the query, and more importantly informative and interesting; (2) Acceptable (2 points): The response is grammatical, semantically relevant to the query, but too trivial or generic (e.g.,“我不知道(I don't know)\", “我也是(Me too)”, “我喜欢(I like it)\" etc.); (3) Failed (1 point): The response has grammar mistakes or irrelevant to the query." ] ]
b48cd91219429f910b1ea6fcd6f4bd143ddf096f
What automatic metrics are used?
[ "BLEU, Distinct-1 & distinct-2" ]
[ [ "To evaluate the responses generated by all compared methods, we compute the following automatic metrics on our test set:", "BLEU: BLEU-n measures the average n-gram precision on a set of reference responses. We report BLEU-n with n=1,2,3,4.", "Distinct-1 & distinct-2 BIBREF5: We count the numbers of distinct uni-grams and bi-grams in the generated responses and divide the numbers by the total number of generated uni-grams and bi-grams in the test set. These metrics can be regarded as an automatic metric to evaluate the diversity of the responses." ] ]
4f1a5eed730fdcf0e570f9118fc09ef2173c6a1b
What other kinds of generation models are used in experiments?
[ " Seq2seq, CVAE, Hierarchical Gated Fusion Unit (HGFU), Mechanism-Aware Neural Machine (MANM)" ]
[ [ "In our work, we focus on comparing various methods that model $p(\\mathbf {y}|\\mathbf {x})$ differently. We compare our proposed discrete CVAE (DCVAE) with the two-stage sampling approach to three categories of response generation models:", "Baselines: Seq2seq, the basic encoder-decoder model with soft attention mechanism BIBREF30 used in decoding and beam search used in testing; MMI-bidi BIBREF5, which uses the MMI to re-rank results from beam search.", "CVAE BIBREF14: We adjust the original work which is for multi-round conversation for our single-round setting. For a fair comparison, we utilize the same keywords used in our network pre-training as the knowledge-guided features in this model.", "Other enhanced encoder-decoder models: Hierarchical Gated Fusion Unit (HGFU) BIBREF12, which incorporates a cue word extracted using pointwise mutual information (PMI) into the decoder to generate meaningful responses; Mechanism-Aware Neural Machine (MANM) BIBREF13, which introduces latent embeddings to allow for multiple diverse response generation." ] ]
4bdad5a20750c878d1a891ef255621f6172b6a79
How does discrete latent variable has an explicit semantic meaning to improve the CVAE on short-text conversation?
[ "we connect each latent variable with a word in the vocabulary, thus each latent variable has an exact semantic meaning." ]
[ [ "We find that BIBREF27 zhao2018unsupervised make use of a set of discrete variables that define high-level attributes of a response. Although they interpret meanings of the learned discrete latent variables by clustering data according to certain classes (e.g. dialog acts), such latent variables still have no exact meanings. In our model, we connect each latent variable with a word in the vocabulary, thus each latent variable has an exact semantic meaning. Besides, they focus on multi-turn dialogue generation and presented an unsupervised discrete sentence representation learning method learned from the context while our concentration is primarily on single-turn dialogue generation with no context information." ] ]
2e3265d83d2a595293ed458152d3ee76ad19e244
What news dataset was used?
[ "collection of headlines published by HuffPost BIBREF12 between 2012 and 2018" ]
[ [ "The News Category Dataset BIBREF11 is a collection of headlines published by HuffPost BIBREF12 between 2012 and 2018, and was obtained online from Kaggle BIBREF13. The full dataset contains 200k news headlines with category labels, publication dates, and short text descriptions. For this analysis, a sample of roughly 33k headlines spanning 23 categories was used. Further analysis can be found in table SECREF12 in the appendix." ] ]
c2432884287dca4af355698a543bc0db67a8c091
How do they determine similarity between predicted word and topics?
[ "number of relevant output words as a function of the headline’s category label" ]
[ [ "To test the proposed methods ability to generate unsupervised words, it was necessary to devise a method of measuring word relevance. Topic modeling was used based on the assumption that words found in the same topic are more relevant to one another then words from different topics BIBREF14. The complete 200k headline dataset BIBREF11 was modeled using a Naïve Bayes Algorithm BIBREF15 to create a word-category co-occurrence model. The top 200 most relevant words were then found for each category and used to create the topic table SECREF12. It was assumed that each category represented its own unique topic.", "The number of relevant output words as a function of the headline’s category label were measured, and can be found in figure SECREF4. The results demonstrate that the proposed method could correctly identify new words relevant to the input topic at a signal to noise ratio of 4 to 1." ] ]
226ae469a65611f041de3ae545be0e386dba7d19
What is the language model pre-trained on?
[ "Wikipedea Corpus and BooksCorpus" ]
[ [ "This method is unique since it avoids needing a prior dataset by using the information found within the weights of a general language model. Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base for instance, has 110 Million parameters and was trained on both Wikipedea Corpus and BooksCorpus BIBREF0, a combined collection of over 3 Billion words. The full potential of such vastly trained general language models is still unfolding. This paper demonstrates that by carefully prompting and analysing these models, it is possible to extract new information from them, and extend short-text analysis beyond the limitations posed by word count." ] ]
8ad815b29cc32c1861b77de938c7269c9259a064
What languages are represented in the dataset?
[ "EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO" ]
[ [ "We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset." ] ]
3f9ef59ac06db3f99b8b6f082308610eb2d3626a
Which existing language ID systems are tested?
[ "langid.py library, encoder-decoder EquiLID system, GRU neural network LanideNN system, CLD2, CLD3" ]
[ [ "For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. We obtained publicly-available implementations of each of these algorithms, and test them all against our three datasets. In Table TABREF33, we report each algorithm's accuracy and F1 score, the two metrics usually reported in the LID literature. We also included precision and recall values, which are necessary for computing F1 score. And finally we included the speed in number of messages handled per second. This metric is not often discussed in the LID literature, but is of particular importance when dealing with a massive dataset such as ours or a massive streaming source such as Twitter." ] ]
203d322743353aac8a3369220e1d023a78c2cae3
How was the one year worth of data collected?
[ "Unanswerable" ]
[ [] ]
557d1874f736d9d487eb823fe8f6dab4b17c3c42
Which language family does Mboshi belong to?
[ "Bantu" ]
[ [ "Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side,we simply consider the default segmentation into words." ] ]
f41c401a4c6e1be768f8e68f774af3661c890ffd
Does the paper report any alignment-only baseline?
[ "Yes" ]
[ [ "Recall that our goal is to discover words in an unsegmented stream of target characters (or phonemes) in the under-resourced language. In this section, we first describe a baseline method inspired by the “align to segment” of BIBREF12, BIBREF13. We then propose two extensions providing the model with a signal relevant to the segmentation process, so as to move towards a joint learning of segmentation and alignment." ] ]
09cd7ae01fe97bba230c109d0234fee80a1f013b
What is the dataset used in the paper?
[ "French-Mboshi 5K corpus" ]
[ [ "Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side,we simply consider the default segmentation into words." ] ]
be3e020ba84bc53dfb90b8acaf549004b66e31e2
How is the word segmentation task evaluated?
[ "precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF), exact-match (X) metric" ]
[ [ "We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46." ] ]
24014a040447013a8cf0c0f196274667320db79f
What are performance compared to former models?
[ "model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF" ]
[ [ "Table TABREF11 presents the results on 14 treebanks from the CoNLL shared tasks. Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF.", "Following BIBREF23, we report results on the test sets of 12 different languages from the UD treebanks along with the current state-of-the-art: BIAF and STACKPTR. Although both BIAF and STACKPTR parsers have achieved relatively high parsing accuracies on the 12 languages and have all UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF." ] ]
9aa52b898d029af615b95b18b79078e9bed3d766
How faster is training and decoding compared to former models?
[ "Proposed vs best baseline:\nDecoding: 8541 vs 8532 tokens/sec\nTraining: 8h vs 8h" ]
[ [ "In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on PTB training and development set using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed and actually even accounts for a significant portion of the runtime." ] ]
c431c142f5b82374746a2b2f18b40c6874f7131d
What datasets was the method evaluated on?
[ "WMT18 EnDe bitext, WMT16 EnRo bitext, WMT15 EnFr bitext, We perform our experiments on WMT18 EnDe bitext, WMT16 EnRo bitext, and WMT15 EnFr bitext respectively. We use WMT Newscrawl for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr). For bitext, we filter out empty sentences and sentences longer than 250 subwords. We remove pairs whose whitespace-tokenized length ratio is greater than 2. This results in about 5.0M pairs for EnDe, and 0.6M pairs for EnRo. We do not filter the EnFr bitext, resulting in 41M sentence pairs." ]
[ [ "To support this hypothesis, we first demonstrate that the permutation and word-dropping noise used by BIBREF19 do not improve or significantly degrade NMT accuracy, corroborating that noise might act as an indicator that the source is back-translated, without much loss in mutual information between the source and target. We then train models on WMT English-German (EnDe) without BT noise, and instead explicitly tag the synthetic data with a reserved token. We call this technique “Tagged Back-Translation\" (TaggedBT). These models achieve equal to slightly higher performance than the noised variants. We repeat these experiments with WMT English-Romanian (EnRo), where NoisedBT underperforms standard BT and TaggedBT improves over both techniques. We demonstrate that TaggedBT also allows for effective iterative back-translation with EnRo, a technique which saw quality losses when applied with standard back-translation.", "We perform our experiments on WMT18 EnDe bitext, WMT16 EnRo bitext, and WMT15 EnFr bitext respectively. We use WMT Newscrawl for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr). For bitext, we filter out empty sentences and sentences longer than 250 subwords. We remove pairs whose whitespace-tokenized length ratio is greater than 2. This results in about 5.0M pairs for EnDe, and 0.6M pairs for EnRo. We do not filter the EnFr bitext, resulting in 41M sentence pairs." ] ]
7835d8f578386834c02e2c9aba78a345059d56ca
Is the model evaluated against a baseline?
[ "No" ]
[ [] ]
32e78ca99ba8b8423d4b21c54cd5309cb92191fc
How many people are employed for the subjective evaluation?
[ "14 volunteers" ]
[ [ "We collected a total of 657 ratings by 14 volunteers, 5 Italian and 9 non-Italian listeners, spread over the 24 clips and three testing conditions. We conducted a statistical analysis of the data with linear mixed-effects models using the lme4 package for R BIBREF31. We analyzed the naturalness score (response variable) against the following two-level fixed effects: dubbing system A vs. B, system A vs. C, and system B vs. C. We run separate analysis for Italian and non-Italian listeners. In our mixed models, listeners and video clips are random effects, as they represent a tiny sample of the respective true populationsBIBREF31. We keep models maximal, i.e. with intercepts and slopes for each random effect, end remove terms required to avoid singularities BIBREF32. Each model is fitted by maximum likelihood and significance of intercepts and slopes are computed via t-test." ] ]
ffc5ad48b69a71e92295a66a9a0ff39548ab3cf1
What other embedding models are tested?
[ "GloVe embeddings trained by BIBREF10 on Wikipedia and Gigaword 5 (vocab: 400K, dim: 300), w2v-gn, Word2vec BIBREF5 trained on the Google News dataset (vocab: 3M, dim: 300), DeepWalk , node2vec" ]
[ [ "In our experiments, we used WordNet 3.0 BIBREF9 as our external knowledge base INLINEFORM0 . For word embeddings, we experimented with two popular models: (1) GloVe embeddings trained by BIBREF10 on Wikipedia and Gigaword 5 (vocab: 400K, dim: 300), and (2) w2v-gn, Word2vec BIBREF5 trained on the Google News dataset (vocab: 3M, dim: 300).", "Our coverage enhancement starts by transforming the knowledge base INLINEFORM0 into a vector space representation that is comparable to that of the corpus-based space INLINEFORM1 . To this end, we use two techniques for learning low-dimensional feature spaces from knowledge graphs: DeepWalk and node2vec. DeepWalk uses a stream of short random walks in order to extract local information for a node from the graph. By treating these walks as short sentences and phrases in a special language, the approach learns latent representations for each node. Similarly, node2vec learns a mapping of nodes to continuous vectors that maximizes the likelihood of preserving network neighborhoods of nodes. Thanks to a flexible objective that is not tied to a particular sampling strategy, node2vec reports improvements over DeepWalk on multiple classification and link prediction datasets. For both these systems we used the default parameters and set the dimensionality of output representation to 100. Also, note than nodes in the semantic graph of WordNet represent synsets. Hence, a polysemous word would correspond to multiple nodes. In our experiments, we use the MaxSim assumption of BIBREF11 in order to map words to synsets." ] ]
1024f22110c436aa7a62a1022819bfe62dc0d336
How is performance measured?
[ "To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. " ]
[ [ "To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. A high performance on this benchmark shows that the mapping has been successful in placing semantically similar terms near to each other whereas dissimilar terms are relatively far apart in the space. We repeat the computation for each pair in the reverse direction." ] ]
f062723bda695716aa7cb0f27675b7fc0d302d4d
How are rare words defined?
[ "judged by 10 raters on a [0,10] scale" ]
[ [ "In order to verify the reliability of our technique in coverage expansion for infrequent words we did a set of experiments on the Rare Word similarity dataset BIBREF6 . The dataset comprises 2034 pairs of rare words, such as ulcerate-change and nurturance-care, judged by 10 raters on a [0,10] scale. Table TABREF15 shows the results on the dataset for three pre-trained word embeddings (cf. § SECREF2 ), in their initial form as well as when enriched with additional words from WordNet." ] ]
b13d0e463d5eb6028cdaa0c36ac7de3b76b5e933
What datasets are used to evaluate the model?
[ "WN18 and FB15k" ]
[ [] ]
50e3fd6778dadf8ec0ff589aa8b18c61bdcacd41
What other datasets are used?
[ "WikiText-TL-39" ]
[ [ "To pretrain BERT and GPT-2 language models, as well as an AWD-LSTM language model for use in ULMFiT, a large unlabeled training corpora is needed. For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18. We form training-validation-test splits of 70%-15%-15% from this corpora." ] ]
c5980fe1a0c53bce1502cc674c8a2ed8c311f936
What is the size of the dataset?
[ "3,206" ]
[ [ "We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera." ] ]
7d3c036ec514d9c09c612a214498fc99bf163752
What is the source of the dataset?
[ "Online sites tagged as fake news site by Verafiles and NUJP and news website in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera" ]
[ [ "We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera." ] ]
ef7b62a705f887326b7ebacbd62567ee1f2129b3
What were the baselines?
[ "Siamese neural network consisting of an embedding layer, a LSTM layer and a feed-forward layer with ReLU activations" ]
[ [ "We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.", "We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long-Short Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations." ] ]
23d0637f8ae72ae343556ab135eedc7f4cb58032
How do they show that acquiring names of places helps self-localization?
[ "unsupervised morphological analyzer capable of using lattices improved the accuracy of phoneme recognition and word segmentation, Consequently, this result suggests that this word segmentation method considers the multiple hypothesis of speech recognition as a whole and reduces uncertainty such as variability in recognition" ]
[ [ "Table TABREF54 shows the results of PAR. Table TABREF55 presents examples of the word segmentation results of the three considered methods. We found that the unsupervised morphological analyzer capable of using lattices improved the accuracy of phoneme recognition and word segmentation. Consequently, this result suggests that this word segmentation method considers the multiple hypothesis of speech recognition as a whole and reduces uncertainty such as variability in recognition by using the syllable recognition results in the lattice format." ] ]
21c104d14ba3db7fe2cd804a191f9e6258208235
How do they evaluate how their model acquired words?
[ "PAR score" ]
[ [ "Accuracy of acquired phoneme sequences representing the names of places", "We evaluated whether the names of places were properly learned for the considered teaching places. This experiment assumes a request for the best phoneme sequence INLINEFORM0 representing the self-position INLINEFORM1 for a robot. The robot moves close to each teaching place. The probability of a word INLINEFORM2 when the self-position INLINEFORM3 of the robot is given, INLINEFORM4 , can be obtained by using equation ( EQREF37 ). The word having the best probability was selected. We compared the PAR with the correct phoneme sequence and a selected name of the place. Because “kiqchiN” and “daidokoro” were taught for the same place, the word whose PAR was the higher score was adopted.", "Fig. FIGREF63 shows the results of PAR for the word considered the name of a place. SpCoA (latticelm), the proposed method using the results of unsupervised word segmentation on the basis of the speech recognition results in the lattice format, showed the best PAR score. In the 1-best and BoS methods, a part syllable sequence of the name of a place was more minutely segmented as shown in Table TABREF55 . Therefore, the robot could not learn the name of the teaching place as a coherent phoneme sequence. In contrast, the robot could learn the names of teaching places more accurately by using the proposed method." ] ]
d557752c4706b65dcdb7718272180c59d77fb7a7
Which method do they use for word segmentation?
[ "unsupervised word segmentation method latticelm" ]
[ [ "The proposed method can learn words related to places from the utterances of sentences. We use an unsupervised word segmentation method latticelm that can directly segment words from the lattices of the speech recognition results of the uttered sentences BIBREF22 . The lattice can represent to a compact the set of more promising hypotheses of a speech recognition result, such as N-best, in a directed graph format. Unsupervised word segmentation using the lattices of syllable recognition is expected to be able to reduce the variability and errors in phonemes as compared to NPYLM BIBREF13 , i.e., word segmentation using the 1-best speech recognition results." ] ]
1bdf7e9f3f804930b2933ebd9207a3e000b27742
Does their model start with any prior knowledge of words?
[ "No" ]
[ [ "The objectives of this study were to build a robot that learns words related to places and efficiently utilizes this learned vocabulary in self-localization. Lexical acquisition related to places is expected to enable a robot to improve its spatial cognition. A schematic representation depicting the target task of this study is shown in Fig. FIGREF3 . This study assumes that a robot does not have any vocabularies in advance but can recognize syllables or phonemes. The robot then performs self-localization while moving around in the environment, as shown in Fig. FIGREF3 (a). An utterer speaks a sentence including the name of the place to the robot, as shown in Fig. FIGREF3 (b). For the purposes of this study, we need to consider the problems of self-localization and lexical acquisition simultaneously." ] ]
a74886d789a5d7ebcf7f151bdfb862c79b6b8a12
What were the baselines?
[ "a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30" ]
[ [ "The base sentence embedding model is a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 . We opt for this strong baseline sentence encoding model, as opposed to engineering sentence embeddings that work particularly well for this dataset, to showcase the dataset. We would expect pre-trained contextual encoding models, e.g. ELMO BIBREF33 , ULMFit BIBREF34 , BERT BIBREF35 , to offer complementary performance gains, as has been shown for a few recent papers BIBREF36 , BIBREF37 ." ] ]
e9ccc74b1f1b172224cf9f01e66b1fa9e34d2593
What metadata is included?
[ "besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish data and claim date" ]
[ [] ]
2948015c2a5cd6a7f2ad99b4622f7e4278ceb0d4
How many expert journalists were there?
[ "Unanswerable" ]
[ [] ]
c33d0bc5484c38de0119c8738ffa985d1bd64424
Do the images have multilingual annotations or monolingual ones?
[ "monolingual" ]
[ [ "We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages." ] ]
93b1b94b301a46251695db8194a2536639a22a88
Could you learn such embedding simply from the image annotations and without using visual information?
[ "Yes" ]
[ [ "Another approach for generating query and image representations is treating images as a black box. Without using pixel data, how well can we do? Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries. This method serves as a baseline to our initial approach leveraging image understanding." ] ]
e8029ec69b0b273954b4249873a5070c2a0edb8a
How much important is the visual grounding in the learning of the multilingual representations?
[ "performance is significantly degraded without pixel data" ]
[ [] ]
f4e17b14318b9f67d60a8a2dad1f6b506a10ab36
How is the generative model evaluated?
[ "Comparing BLEU score of model with and without attention" ]
[ [ "We train an LSTM with and without attention on the training set. After training, we take the best model in terms of BLEU score BIBREF16 on the development set and calculate the BLEU score on the test set. To our surprise, we found that using attention yields only a marginally higher BLEU score (43.1 vs. 42.8). We suspect that this is due to the fact that generating entailed sentences has a larger space of valid target sequences, which makes the use of BLEU problematic and penalizes correct solutions. Hence, we manually annotated 100 random test sentences and decided whether the generated sentence can indeed be inferred from the source sentence. We found that sentences generated by an LSTM with attention are substantially more accurate ( $82\\%$ accuracy) than those generated from an LSTM baseline ( $71.7\\%$ ). To gain more insights into the model's capabilities, we turn to a thorough qualitative analysis of the attention LSTM model in the remainder of this paper." ] ]
fac052c4ad6b19a64d7db32fd08df38ad2e22118
How do they evaluate their method?
[ "Calinski-Harabasz score, t-SNE, UMAP" ]
[ [ "In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representation of tweets, we fix the number of features to 24 for all methods and feed these representations as an input to 3 different clustering algorithms namely, k-means, Ward and spectral clustering with cluster numbers of 10, 20 and 50. Distance metric for k-means clustering is chosen to be euclidean and the linkage criteria for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increases the within-cluster variance in a hierarchical manner. For spectral clustering, Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction as well as concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use Calinski-Harabasz score BIBREF42 , also known as the variance ratio criterion. CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. CH score has a range of $[0, +\\infty ]$ and a higher CH score corresponds to a better clustering. Computational complexity of calculating CH score is $\\mathcal {O}(N)$ .", "For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF52 and Uniform Manifold Approximation and Projection (UMAP) BIBREF53 mappings of the learned representations as well. Implementation of this study is done in Python (version 3.6) using scikit-learn and TensorFlow libraries BIBREF54 , BIBREF55 on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of autoencoders are performed with a single NVIDIA Titan Xp GPU." ] ]
aa54e12ff71c25b7cff1e44783d07806e89f8e54
What is an example of a health-related tweet?
[ "The health benefits of alcohol consumption are more limited than previously thought, researchers say" ]
[ [] ]
1405824a6845082eae0458c94c4affd7456ad0f7
Was the introduced LSTM+CNN model trained on annotated data in a supervised fashion?
[ "Yes" ]
[ [ "We train our models on Sentiment140 and Amazon product reviews. Both of these datasets concentrates on sentiment represented by a short text. Summary description of other datasets for validation are also as below:", "We have provided baseline results for the accuracy of other models against datasets (as shown in Table 1 ) For training the softmax model, we divide the text sentiment to two kinds of emotion, positive and negative. And for training the tanh model, we convert the positive and negative emotion to [-1.0, 1.0] continuous sentiment score, while 1.0 means positive and vice versa. We also test our model on various models and calculate metrics such as accuracy, precision and recall and show the results are in Table 2 . Table 3 , Table 4 , Table 5 , Table 6 and Table 7 . Table 8 are more detail information with precisions and recall of our models against other datasets." ] ]
5be94c7c54593144ba2ac79729d7545f27c79d37
What is the challenge for languages other than English?
[ "not researched as much as English" ]
[ [ "Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. To accomplish this, an appropriate dataset must be constructed, annotated with the guidelines described in BIBREF0 . We, furthermore, set out to analyze the linguistic features that prove hard to detect by analyzing the patterns that prove hard to detect." ] ]
32e8eda2183bcafbd79b22f757f8f55895a0b7b2
How many categories of offensive language were there?
[ "3" ]
[ [ "In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:", "Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section \"Background\" .", "Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!", "Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort." ] ]
b69f0438c1af4b9ed89e531c056d9812d4994016
How large was the dataset of Danish comments?
[ "3600 user-generated comments" ]
[ [ "We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark. In light of the General Data Protection Regulations in Europe (GDPR) and the increased concern for online privacy, we applied some necessary pre-processing steps on our dataset to ensure the privacy of the authors of the comments that were used. Personally identifying content (such as the names of individuals, not including celebrity names) was removed. This was handled by replacing each name of an individual (i.e. author or subject) with @USER, as presented in both BIBREF0 and BIBREF2 . All comments containing any sensitive information were removed. We classify sensitive information as any information that can be used to uniquely identify someone by the following characteristics; racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, and bio-metric data." ] ]
2e9c6e01909503020070ec4faa6c8bf2d6c0af42
Who were the annotators?
[ "the author and the supervisor" ]
[ [ "We base our annotation procedure on the guidelines and schemas presented in BIBREF0 , discussed in detail in section \"Classification Structure\" . As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared. This was used as an opportunity to refine the mutual understanding of the task at hand and to discuss the mismatches in these annotations for each sub-task.", "In light of these findings our internal guidelines were refined so that no post should be labeled as offensive by interpreting any context that is not directly visible in the post itself and that any post containing any form of profanity should automatically be labeled as offensive. These stricter guidelines made the annotation procedure considerably easier while ensuring consistency. The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples." ] ]
fc65f19a30150a0e981fb69c1f5720f0136325b0
Is it known whether Sina Weibo posts are censored by humans or some automatic classifier?
[ "No" ]
[ [ "In cooperation with the ruling regime, Weibo sets strict control over the content published under its service BIBREF0. According to Zhu et al. zhu-etal:2013, Weibo uses a variety of strategies to target censorable posts, ranging from keyword list filtering to individual user monitoring. Among all posts that are eventually censored, nearly 30% of them are censored within 5–30 minutes, and nearly 90% within 24 hours BIBREF1. We hypothesize that the former are done automatically, while the latter are removed by human censors." ] ]
5067e5eb2cddbb34b71e8b74ab9210cd46bb09c5
Which matching features do they employ?
[ "Matching features from matching sentences from various perspectives." ]
[ [ "In this paper, we propose the MIMN model for NLI task. Our model introduces a multi-turns inference mechanism to process multi-perspective matching features. Furthermore, the model employs the memory mechanism to carry proceeding inference information. In each turn, the inference is based on the current matching feature and previous memory. Experimental results on SNLI dataset show that the MIMN model is on par with the state-of-the-art models. Moreover, our model achieves new state-of-the-art results on the MPE and the SCITAL datasets. Experimental results prove that the MIMN model can extract important information from multiple premises for the final judgment. And the model is good at handling the relationships of entailment and contradiction." ] ]
b974523a6cbd3cdc1fa924243ccc9711bbc7070d
How often are the newspaper websites crawled?
[ "RSS feeds in French on a daily basis" ]
[ [ "The Logoscope retrieves newspaper articles from several RSS feeds in French on a daily basis. The newspaper articles are preprocessed such that only the journalistic content is kept. The articles are then segmented into paragraphs and word forms. The resulting forms are filtered based on an exclusion list (French words found in several lexicons and corpora). They are then reordered in such a way that those words which are the most likely new word candidates appear on top, using a supervised classification method which will be described more in detail in Section SECREF71 ." ] ]
03502826f4919e251edba1525f84dd42f21b0253
How much better in terms of JSD measure did their model perform?
[ "Unanswerable" ]
[ [] ]
9368471073c66fefebc04f1820209f563a840240
What does the Jensen-Shannon distance measure?
[ "Unanswerable" ]
[ [] ]
981443fce6167b3f6cadf44f9f108d68c1a3f4ab
Which countries and languages do the political speeches and manifestos come from?
[ "german " ]
[ [ "Ideally one would choose for each topic a sample of reports from the entire political spectrum in order to form an unbiased opinion. But ordering media content with respect to the political spectrum at scale requires automated prediction of political bias. The aim of this study is to provide empirical evidence indicating that leveraging open data sources of german texts, automated political bias prediction is possible with above chance accuracy. These experimental results confirm and extend previous findings BIBREF0 , BIBREF1 ; a novel contribution of this work is a proof of concept which applies this technology to sort news article recommendations according to their political bias." ] ]
6d0f2cce46bc962c6527f7b4a77721799f2455c6
Do changes in policies of the political actors account for all of the mistakes the model made?
[ "Yes" ]
[ [ "In order to investigate the errors the models made confusion matrices were extracted for the predictions on the out-of-domain evaluation data for sentence level predictions (see tab:confusion) as well as topic level predictions (see tab:confusiontopic). One example illustrates that the mistakes the model makes can be associated with changes in the party policy. The green party has been promoting policies for renewable energy and against nuclear energy in their manifestos prior to both legislative periods. Yet the statements of the green party are more often predicted to be from the government parties than from the party that originally promoted these green ideas, reflecting the trend that these legislative periods governing parties took over policies from the green party. This effect is even more pronounced in the topic level predictions: a model trained on data from the 18th Bundestag predicts all manifesto topics of the green party to be from one of the parties of the governing coalition, CDU/CSU or SPD." ] ]
5816ebf15e31bdf70e1de8234132e146d64e31eb
What model are the text features used in to provide predictions?
[ " multinomial logistic regression" ]
[ [ "Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let INLINEFORM0 be the true label, where INLINEFORM1 is the total number of labels and INLINEFORM2 is the concatenation of the weight vectors INLINEFORM3 associated with the INLINEFORM4 th party then DISPLAYFORM0" ] ]
5a9f94ae296dda06c8aec0fb389ce2f68940ea88
By how much does their method outperform the multi-head attention model?
[ "Their average improvement in Character Error Rate over the best MHA model was 0.33 percent points." ]
[ [] ]
85912b87b16b45cde79039447a70bd1f6f1f8361
How large is the corpus they use?
[ "449050" ]
[ [] ]
948327d7aa9f85943aac59e3f8613765861f97ff
Does each attention head in the decoder calculate the same output?
[ "No" ]
[ [ "On the other hand, in the case of MHD, instead of the integration at attention level, we assign multiple decoders for each head and then integrate their outputs to get a final output. Since each attention decoder captures different modalities, it is expected to improve the recognition performance with an ensemble effect. The calculation of the attention weight at the head INLINEFORM0 in Eq. ( EQREF21 ) is replaced with following equation: DISPLAYFORM0" ] ]
cdf7e60150a166d41baed9dad539e3b93b544624
Which distributional methods did they consider?
[ "WeedsPrec BIBREF8, invCL BIBREF11, SLQS model, cosine similarity" ]
[ [ "Most unsupervised distributional approaches for hypernymy detection are based on variants of the Distributional Inclusion Hypothesis BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . Here, we compare to two methods with strong empirical results. As with most DIH measures, they are only defined for large, sparse, positively-valued distributional spaces. First, we consider WeedsPrec BIBREF8 which captures the features of INLINEFORM0 which are included in the set of a broader term's features, INLINEFORM1 : DISPLAYFORM0", "Second, we consider invCL BIBREF11 which introduces a notion of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term. In particular, let INLINEFORM0", "Although most unsupervised distributional approaches are based on the DIH, we also consider the distributional SLQS model based on on an alternative informativeness hypothesis BIBREF10 , BIBREF4 . Intuitively, the SLQS model presupposes that general words appear mostly in uninformative contexts, as measured by entropy. Specifically, SLQS depends on the median entropy of a term's top INLINEFORM0 contexts, defined as INLINEFORM1", "For completeness, we also include cosine similarity as a baseline in our evaluation." ] ]
c06b5623c35b6fa7938340fa340269dc81d061e1
Which benchmark datasets are used?
[ "noun-noun subset of bless, leds BIBREF13, bless, wbless, bibless, hyperlex BIBREF20" ]
[ [ "Detection: In hypernymy detection, the task is to classify whether pairs of words are in a hypernymy relation. For this task, we evaluate all models on five benchmark datasets: First, we employ the noun-noun subset of bless, which contains hypernymy annotations for 200 concrete, mostly unambiguous nouns. Negative pairs contain a mixture of co-hyponymy, meronymy, and random pairs. This version contains 14,542 total pairs with 1,337 positive examples. Second, we evaluate on leds BIBREF13 , which consists of 2,770 noun pairs balanced between positive hypernymy examples, and randomly shuffled negative pairs. We also consider eval BIBREF14 , containing 7,378 pairs in a mixture of hypernymy, synonymy, antonymy, meronymy, and adjectival relations. eval is notable for its absence of random pairs. The largest dataset is shwartz BIBREF2 , which was collected from a mixture of WordNet, DBPedia, and other resources. We limit ourselves to a 52,578 pair subset excluding multiword expressions. Finally, we evaluate on wbless BIBREF15 , a 1,668 pair subset of bless, with negative pairs being selected from co-hyponymy, random, and hyponymy relations. Previous work has used different metrics for evaluating on BLESS BIBREF11 , BIBREF5 , BIBREF6 . We chose to evaluate the global ranking using Average Precision. This allowed us to use the same metric on all detection benchmarks, and is consistent with evaluations in BIBREF4 .", "Direction: In direction prediction, the task is to identify which term is broader in a given pair of words. For this task, we evaluate all models on three datasets described by BIBREF16 : On bless, the task is to predict the direction for all 1337 positive pairs in the dataset. Pairs are only counted correct if the hypernymy direction scores higher than the reverse direction, i.e. INLINEFORM0 . We reserve 10% of the data for validation, and test on the remaining 90%. On wbless, we follow prior work BIBREF17 , BIBREF18 and perform 1000 random iterations in which 2% of the data is used as a validation set to learn a classification threshold, and test on the remainder of the data. We report average accuracy across all iterations. Finally, we evaluate on bibless BIBREF16 , a variant of wbless with hypernymy and hyponymy pairs explicitly annotated for their direction. Since this task requires three-way classification (hypernymy, hyponymy, and other), we perform two-stage classification. First, a threshold is tuned using 2% of the data, identifying whether a pair exhibits hypernymy in either direction. Second, the relative comparison of scores determines which direction is predicted. As with wbless, we report the average accuracy over 1000 iterations.", "Graded Entailment: In graded entailment, the task is to quantify the degree to which a hypernymy relation holds. For this task, we follow prior work BIBREF19 , BIBREF18 and use the noun part of hyperlex BIBREF20 , consisting of 2,163 noun pairs which are annotated to what degree INLINEFORM0 is-a INLINEFORM1 holds on a scale of INLINEFORM2 . For all models, we report Spearman's rank correlation INLINEFORM3 . We handle out-of-vocabulary (OOV) words by assigning the median of the scores (computed across the training set) to pairs with OOV words." ] ]
d325a3c21660dbc481b4e839ff1a2d37dcc7ca46
What hypernymy tasks do they study?
[ "Detection, Direction, Graded Entailment" ]
[ [ "Detection: In hypernymy detection, the task is to classify whether pairs of words are in a hypernymy relation. For this task, we evaluate all models on five benchmark datasets: First, we employ the noun-noun subset of bless, which contains hypernymy annotations for 200 concrete, mostly unambiguous nouns. Negative pairs contain a mixture of co-hyponymy, meronymy, and random pairs. This version contains 14,542 total pairs with 1,337 positive examples. Second, we evaluate on leds BIBREF13 , which consists of 2,770 noun pairs balanced between positive hypernymy examples, and randomly shuffled negative pairs. We also consider eval BIBREF14 , containing 7,378 pairs in a mixture of hypernymy, synonymy, antonymy, meronymy, and adjectival relations. eval is notable for its absence of random pairs. The largest dataset is shwartz BIBREF2 , which was collected from a mixture of WordNet, DBPedia, and other resources. We limit ourselves to a 52,578 pair subset excluding multiword expressions. Finally, we evaluate on wbless BIBREF15 , a 1,668 pair subset of bless, with negative pairs being selected from co-hyponymy, random, and hyponymy relations. Previous work has used different metrics for evaluating on BLESS BIBREF11 , BIBREF5 , BIBREF6 . We chose to evaluate the global ranking using Average Precision. This allowed us to use the same metric on all detection benchmarks, and is consistent with evaluations in BIBREF4 .", "Direction: In direction prediction, the task is to identify which term is broader in a given pair of words. For this task, we evaluate all models on three datasets described by BIBREF16 : On bless, the task is to predict the direction for all 1337 positive pairs in the dataset. Pairs are only counted correct if the hypernymy direction scores higher than the reverse direction, i.e. INLINEFORM0 . We reserve 10% of the data for validation, and test on the remaining 90%. On wbless, we follow prior work BIBREF17 , BIBREF18 and perform 1000 random iterations in which 2% of the data is used as a validation set to learn a classification threshold, and test on the remainder of the data. We report average accuracy across all iterations. Finally, we evaluate on bibless BIBREF16 , a variant of wbless with hypernymy and hyponymy pairs explicitly annotated for their direction. Since this task requires three-way classification (hypernymy, hyponymy, and other), we perform two-stage classification. First, a threshold is tuned using 2% of the data, identifying whether a pair exhibits hypernymy in either direction. Second, the relative comparison of scores determines which direction is predicted. As with wbless, we report the average accuracy over 1000 iterations.", "Graded Entailment: In graded entailment, the task is to quantify the degree to which a hypernymy relation holds. For this task, we follow prior work BIBREF19 , BIBREF18 and use the noun part of hyperlex BIBREF20 , consisting of 2,163 noun pairs which are annotated to what degree INLINEFORM0 is-a INLINEFORM1 holds on a scale of INLINEFORM2 . For all models, we report Spearman's rank correlation INLINEFORM3 . We handle out-of-vocabulary (OOV) words by assigning the median of the scores (computed across the training set) to pairs with OOV words." ] ]
eae13c9693ace504eab1f96c91b16a0627cd1f75
Do they report results only on English data?
[ "Yes" ]
[ [ "We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture." ] ]
bcec22a75c1f899e9fcea4996457cf177c50c4c5
What were the variables in the ablation study?
[ "(i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind" ]
[ [ "To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:", "We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.", "We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.", "We average the results for each set of hyperparameter across three trials with random weight initializations." ] ]
58f50397a075f128b45c6b824edb7a955ee8cba1
How many shared layers are in the system?
[ "1" ]
[ [] ]
9adcc8c4a10fa0d58f235b740d8d495ee622d596
How many additional task-specific layers are introduced?
[ "2 for the ADE dataset and 3 for the CoNLL04 dataset" ]
[ [] ]
91c81807374f2459990e5f9f8103906401abc5c2
What is barycentric Newton diagram?
[ " The basic idea of the visualization, drawing on Isaac Newton’s visualization of the color spectrum BIBREF8 , is to express a mixture in terms of its constituents as represented in barycentric coordinates." ]
[ [ "By using the probability values that emerge from the activation function in the neural network, rather than just the final classification, we can draw a barycentric Newton diagram, as shown in Figure 4 . The basic idea of the visualization, drawing on Isaac Newton’s visualization of the color spectrum BIBREF8 , is to express a mixture in terms of its constituents as represented in barycentric coordinates. This visualization allows an intuitive interpretation of which country a recipe belongs to. If the probability of Japanese is high, the recipe is mapped near the Japanese. The countries on the Newton diagram are placed by spectral graph drawing BIBREF9 , so that similar countries are placed nearby on the circle. The calculation is as follows. First we define the adjacency matrix $W$ as the similarity between two countries. The similarity between country $i$ and $j$ is calculated by cosine similarity of county $i$ vector and $j$ vector. These vector are defined in next section. $W_{ij} = sim(vec_i, vec_j)$ . The degree matrix $D$ is a diagonal matrix where $D_{ii} = \\sum _{j} W_{ij}$ . Next we calculate the eigendecomposition of $D^{-1}W$ . The second and third smallest eingenvalues and corresponded eingevectors are used for placing the countries. Eigenvectors are normalized so as to place the countries on the circle." ] ]
2cc42d14c8c927939a6b8d06f4fdee0913042416
Do they propose any solution to debias the embeddings?
[ "No" ]
[ [ "We have presented the first study on social bias in KG embeddings, and proposed a new metric for measuring such bias. We demonstrated that differences in the distributions of entities in real-world knowledge graphs (there are many more male bankers in Wikidata than female) translate into harmful biases related to professions being encoded in embeddings. Given that KGs are formed of real-world entities, we cannot simply equalize the counts; it is not possible to correct history by creating female US Presidents, etc. In light of this, we suggest that care is needed when applying graph embeddings in NLP pipelines, and work needed to develop robust methods to debias such embeddings." ] ]
b546f14feaa639e43aa64c799dc61b8ef480fb3d
How are these biases found?
[ "Unanswerable" ]
[ [] ]
8568c82078495ab421ecbae38ddd692c867eac09
How many layers of self-attention does the model have?
[ "1, 4, 8, 16, 32, 64" ]
[ [] ]
2ea382c676e418edd5327998e076a8c445d007a5
Is human evaluation performed?
[ "No" ]
[ [ "BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.", "Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.", "Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.", "F1-Entity Score: Datasets used in task-oriented chores have a set of entities which represent user preferences. For example, in the restaurant domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we could use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric used for the evaluation of named-entity recognition models which is the harmonic average of precision and recall of the model. We calculate this metric by micro-averaging over all the system generated responses." ] ]
bd7a95b961af7caebf0430a7c9f675816c9c527f
What are the three datasets used?
[ "DSTC2, M2M-sim-M, M2M-sim-R" ]
[ [ "We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27 which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28 which are M2M-sim-M (dataset in movie domain) and M2M-sim-R (dataset in restaurant domain). M2M stands for Machines Talking to Machines which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained on our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets which we will use to train and evaluate the models." ] ]
f011d6d5287339a35d00cd9ce1dfeabb1f3c0563
Did they experiment with the corpus?
[ "Yes" ]
[ [ "In this section, we propose new simple disentanglement models that perform better than prior methods, and re-examine prior work. The models we consider are:" ] ]
2ba0c7576eb5b84463a59ff190d4793b67f40ccc
How were the feature representations evaluated?
[ "attention probes, using visualizations of the activations created by different pieces of text" ]
[ [ "Our work extends these explorations of the geometry of internal representations. Investigating how BERT represents syntax, we describe evidence that attention matrices contain grammatical representations. We also provide mathematical arguments that may explain the particular form of the parse tree embeddings described in BIBREF8 . Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level. Moreover, much of this semantic information appears to be encoded in a relatively low-dimensional subspace.", "To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing BIBREF11 . An attention probe is a task for a pair of tokens, $(token_i, token_j)$ where the input is a model-wide attention vector formed by concatenating the entries $a_{ij}$ in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words." ] ]
c58e60b99a6590e6b9a34de96c7606b004a4f169
What linguistic features were probed for?
[ "dependency relation between two words, word sense" ]
[ [ "To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing BIBREF11 . An attention probe is a task for a pair of tokens, $(token_i, token_j)$ where the input is a model-wide attention vector formed by concatenating the entries $a_{ij}$ in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words.", "Our first experiment is an exploratory visualization of how word sense affects context embeddings. For data on different word senses, we collected all sentences used in the introductions to English-language Wikipedia articles. (Text outside of introductions was frequently fragmentary.) We created an interactive application, which we plan to make public. A user enters a word, and the system retrieves 1,000 sentences containing that word. It sends these sentences to BERT-base as input, and for each one it retrieves the context embedding for the word from a layer of the user's choosing." ] ]
6a099dfe354a79936b59d651ba0887d9f586eaaf
Does the paper describe experiments with real humans?
[ "Yes" ]
[ [ "Thus, when size adjectives are noisier than color adjectives, the model produces overinformative referring expressions with color, but not with size – precisely the pattern observed in the literature BIBREF5 , BIBREF0 . Note also that no difference in adjective cost is necessary for obtaining the overinformativeness asymmetry, though assuming a greater cost for size than for color does further increase the observed asymmetry. We defer a discussion of costs to Section \"Experiment 1: scene variation in modified referring expressions\" , where we infer the best parameter values for both the costs and the semantic values of size and color, given data from a reference game experiment.", "We recruited 58 pairs of participants (116 participants total) over Amazon's Mechanical Turk who were each paid $1.75 for their participation. Data from another 7 pairs who prematurely dropped out of the experiment and who could therefore not be compensated for their work, were also included. Here and in all other experiments reported in this paper, participants' IP address was limited to US addresses and only participants with a past work approval rate of at least 95% were accepted." ] ]
f748cb05becc60e7d47d34f4c5f94189bc184d33
What are bottleneck features?
[ "Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese, South African English, These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available., The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'." ]
[ [ "One way to re-use information extracted from other multilingual corpora is to use multilingual bottleneck features (BNFs), which has shown to perform well in conventional ASR as well as intrinsic evaluations BIBREF19 , BIBREF26 , BIBREF27 , BIBREF20 , BIBREF28 , BIBREF29 . These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available. The bottom layers of the network are normally shared across all training languages. The network then splits into separate parts for each of the languages, or has a single shared output. The final output layer has phone labels or HMM states as targets. The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'. The intuition is that this layer should capture aspects that are common across all the languages. We use such features from a multilingual neural network in our CNN-DTW keyword spotting approach. The BNFs are trained on a set of well-resourced languages different from the target language." ] ]
1a06b7a2097ebbad0afc787ea0756db6af3dadf4
What languages are considered?
[ "Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese" ]
[ [ "A 6-layer 10-language TDNN was trained on the GlobalPhone corpus, also using 40-high resolution MFCC features as input, as described in BIBREF20 . For speaker adaptation, a 100-dimensional i-vector was appended to the the MFCC input features. The TDNN was trained with a block-softmax, with the hidden layers shared across all languages and a separate output layer for each language. Each of the six hidden layers had 625 dimensions, and was followed by a 39-dimensional bottleneck layer with ReLU activations and batch normalisation. Training was accomplished using the Kaldi Babel receipe using 198 hours of data in 10 languages (Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese) from GlobalPhone." ] ]
390aa2d733bd73699899a37e65c0dee4668d2cd8
Do they compare speed performance of their model compared to the ones using the LID model?
[ "Unanswerable" ]
[ [] ]
86083a02cc9a80b31cac912c42c710de2ef4adfd
How do they obtain language identities?
[ "model is trained to predict language IDs as well as the subwords, we add language IDs in the CS point of transcriptio" ]
[ [ "In this paper, we propose an improved RNN-T model with language bias to alleviate the problem. The model is trained to predict language IDs as well as the subwords. To ensure the model can learn CS information, we add language IDs in the CS point of transcription, as illustrated in Fig. 1. In the figure, we use the arrangements of different geometric icons to represent the CS distribution. Compared with normal text, the tagged data can bias the RNN-T to predict language IDs in CS points. So our method can model the CS distribution directly, no additional LID model is needed. Then we constrain the input word embedding with its corresponding language ID, which is beneficial for model to learn the language identity information from transcription. In the inference process, the predicted language IDs are used to adjust the output posteriors. The experiment results on CS corpus show that our proposed method outperforms the RNN-T baseline (without language bias) significantly. Overall, our best model achieves 16.2% and 12.9% relative error reduction on two test sets, respectively. To our best knowledge, this is the first attempt of using the RNN-T model with language bias as an end-to-end CSSR strategy." ] ]
29e5e055e01fdbf7b90d5907158676dd3169732d
What other multimodal knowledge base embedding methods are there?
[ "merging, concatenating, or averaging the entity and its features to compute its embeddings, graph embedding approaches, matrix factorization to jointly embed KB and textual relations" ]
[ [ "A number of methods utilize an extra type of information as the observed features for entities, by either merging, concatenating, or averaging the entity and its features to compute its embeddings, such as numerical values BIBREF26 (we use KBLN from this work to compare it with our approach using only numerical as extra attributes), images BIBREF27 , BIBREF28 (we use IKRL from the first work to compare it with our approach using only images as extra attributes), text BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , and a combination of text and image BIBREF35 . Further, BIBREF7 address the multilingual relation extraction task to attain a universal schema by considering raw text with no annotation as extra feature and using matrix factorization to jointly embed KB and textual relations BIBREF36 . In addition to treating the extra information as features, graph embedding approaches BIBREF37 , BIBREF38 consider observed attributes while encoding to achieve more accurate embeddings." ] ]
6c4d121d40ce6318ecdc141395cdd2982ba46cff
What is the data selection paper in machine translation?
[ "BIBREF7, BIBREF26 " ]
[ [ "We observe that merely adding more tasks cannot provide much improvement on the target task. Thus, we propose two MTL training algorithms to improve the performance. The first method simply adopts a sampling scheme, which randomly selects training data from the auxiliary tasks controlled by a ratio hyperparameter; The second algorithm incorporates recent ideas of data selection in machine translation BIBREF7 . It learns the sample weights from the auxiliary tasks automatically through language models. Prior to this work, many studies have used upstream datasets to augment the performance of MRC models, including word embedding BIBREF5 , language models (ELMo) BIBREF8 and machine translation BIBREF1 . These methods aim to obtain a robust semantic encoding of both passages and questions. Our MTL method is orthogonal to these methods: rather than enriching semantic embedding with external knowledge, we leverage existing MRC datasets across different domains, which help make the whole comprehension process more robust and universal. Our experiments show that MTL can bring further performance boost when combined with contextual representations from pre-trained language models, e.g., ELMo BIBREF8 .", "We develop a novel re-weighting method to resolve these problems, using ideas inspired by data selection in machine translation BIBREF26 , BIBREF7 . We use $(Q^{k},P^{k},A^{k})$ to represent a data point from the $k$ -th task for $1\\le k\\le K$ , with $k=1$ being the target task. Since the passage styles are hard to evaluate, we only evaluate data points based on $Q^{k}$ and $A^k$ . Note that only data from auxiliary task ( $2\\le k\\le K$ ) is re-weighted; target task data always have weight 1." ] ]