Columns: question_id (string, length 40), question (string, length 4–171), answer (sequence of answer strings), evidence (sequence of supporting-passage lists)
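A minimal sketch of loading and iterating rows with this schema via the Hugging Face datasets library; the repository id and split name below are placeholders, not details confirmed by this preview.

```python
# Load and inspect a few rows of a dataset with the columns listed above.
# "org/qa-with-evidence" and the "train" split are assumed placeholder names.
from datasets import load_dataset

ds = load_dataset("org/qa-with-evidence", split="train")

for row in ds.select(range(3)):
    print(row["question_id"])  # 40-character identifier
    print(row["question"])     # natural-language question (4-171 characters)
    print(row["answer"])       # list of annotated answer strings
    print(row["evidence"])     # list of evidence groups, each a list of supporting passages
```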
b6fb72437e3779b0e523b9710e36b966c23a2a40
How many rules had to be defined?
[ "WikiSQL - 2 rules (SELECT, WHERE)\nSimpleQuestions - 1 rule\nSequentialQA - 3 rules (SELECT, WHERE, COPY)" ]
[ [ "We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match to table cells. After that, if a cell appears at more than one column, we choose the column name with more overlapped words with the question, with a constraint that the number of co-occurred words is larger than 1. By default, a WHERE operator is INLINEFORM0 , except for the case that surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurred words and cannot be same with any WHERE column. By default, the SELECT AGG is NONE, except for matching to any keywords in Table TABREF8 . The coverage of our rule on training set is 78.4%, with execution accuracy of 77.9%.", "Our rule for KBQA is simple without using a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraint that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with maximum number of co-occurred words. The coverage of our rule on training set is 16.0%, with an accuracy of 97.3% for relation prediction.", "The pipeline of rules in SequentialQA is similar to that of WikiSQL. Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on training set is 75.5%, with an accuracy of 38.5%." ] ]
e6469135e0273481cf11a6c737923630bc7ccfca
What datasets are used in this paper?
[ "WikiSQL, SimpleQuestions, SequentialQA" ]
[ [ "Given a natural language INLINEFORM0 and a table INLINEFORM1 with INLINEFORM2 columns and INLINEFORM3 rows as the input, the task is to output a SQL query INLINEFORM4 , which could be executed on table INLINEFORM5 to yield the correct answer of INLINEFORM6 . We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we do not use either SQL queries or answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer.", "Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidences from the knowledge graph. We do our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple. Questions are constructed in a way that subject and relation are mentioned in the question, and that object is the answer. The task requires predicting the entityId and the relation involved in the question.", "We conduct experiments on SequentialQA BIBREF9 which is derived from the WikiTableQuestions dataset BIBREF19 . It contains 6,066 question sequences covering 17,553 question-answer pairs. Each sequence includes 2.9 natural language questions on average. Different from WikiSQL which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct or not." ] ]
06202ab8b28dcf3991523cf163b8844b42b9fc99
How much labeled data is available for these two languages?
[ "10k training and 1k test, 1,101 sentences (26k tokens)" ]
[ [ "The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and twitter messages. We use a split of 10k training and 1k test instances. Due to the Hausa data not being publicly available at the time of writing, we could only perform a limited set of experiments on it.", "The Yorùbá NER data used in this work is the annotated corpus of Global Voices news articles recently released by BIBREF22. The dataset consists of 1,101 sentences (26k tokens) divided into 709 training sentences, 113 validation sentences and 279 test sentences based on 65%/10%/25% split ratio. The named entities in the dataset are personal names (PER), organization (ORG), location (LOC) and date & time (DATE). All other tokens are assigned a tag of \"O\"." ] ]
271019168ed3a2b0ef5e3780b48a1ebefc562b57
What was performance of classifiers before/after using distant supervision?
[ "Bi-LSTM: For low resource <17k clean data: Using distant supervision resulted in huge boost of F1 score (1k eg. ~9 to ~36 wit distant supervision)\nBERT: <5k clean data boost of F1 (1k eg. ~32 to ~47 with distant supervision)" ]
[ [ "The experimental results for Yorùbá are given in Figure FIGREF11. The setting differs from the experiments with Hausa in that there is a small clean training set and additional, distantly-supervised data. For the Bi-LSTM model, adding distantly-supervised labels always helps. In the low-resource settings with 1k and 2k labeled data, it more than doubles the performance. Handling the noise in the distant supervision can result in slight improvements. The noise-cleaning approach struggles somewhat while the confusion matrix architecture does give better results in the majority of the scenarios. Training on 5k labeled data with distantly supervised data and noise handling, one can obtain a performance close to using the full 17k manually labeled token." ] ]
288613077787159e512e46b79190c91cd4e5b04d
What classifiers were used in experiments?
[ "Bi-LSTM, BERT" ]
[ [ "The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions. For noise handling, we experiment with the Confusion Matrix model by BIBREF38 and the Cleaning model by BIBREF39. We repeat all the Bi-LSTM experiments 20 times and report the average F1-score (following the approach by BIBREF41) and the standard error.", "The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end. This has been shown to give better performance than using word features extracted from BERT to train a classifier BIBREF19. The evaluation result is obtained as an average of 5 runs, we report the F1-score and the standard error in the result section." ] ]
cf74ff49dfcdda2cd67a896b4b982a1c3ee51531
In which countries are Hausa and Yorùbá spoken?
[ "Nigeria, Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan, Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil" ]
[ [ "Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Hausa has several dialects but the one regarded as standard Hausa is the Kananci spoken in the ancient city of Kano in Nigeria. Kananci is the dialect popularly used in many local (e.g VON news) and international news media such as BBC, VOA, DW and Radio France Internationale. Hausa is a tone language but the tones are often ignored in writings, the language is written in a modified Latin alphabet. Despite the popularity of Hausa as an important regional language in Africa and it's popularity in news media, it has very little or no labelled data for common NLP tasks such as text classification, named entity recognition and question answering.", "Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil. Yorùbá has several dialects but the written language has been standardized by the 1974 Joint Consultative Committee on Education BIBREF21, it has 25 letters without the Latin characters (c, q, v, x and z) and with additional characters (ẹ, gb, ṣ , ọ). Yorùbá is a tone language and the tones are represented as diacritics in written text, there are three tones in Yorùbá namely low ( \\), mid (“$-$”) and high ($/$). The mid tone is usually ignored in writings. Often time articles written online including news articles like BBC and VON ignore diacritics. Ignoring diacritics makes it difficult to identify or pronounce words except they are in a context. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) will be mapped to owo without diacritics. Similar to the Hausa language, there are few or no labelled datasets for NLP tasks." ] ]
827c58f6cab6c6fe7a6c43bdc71150b61ba0eed4
What is the agreement score of their annotated dataset?
[ " Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\\alpha =0.27$ and $\\alpha =0.29$, Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\\alpha =0.35$ and $\\alpha =0.34$, This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$), The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$)." ]
[ [ "We calculated inter-annotator agreement by Krippendorff's Alpha BIBREF23, which accounts for different annotator pairs and empty values. To also zoom in on the particular agreement by category, we calculated mutual F-scores for each of the categories. This metric is typically used to evaluate system performance by category on gold standard data, but could also be applied to annotation pairs by alternating the roles of the two annotators between classifier and ground truth. A summary of the agreement by categorization is given in Table TABREF10. While both the Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\\alpha =0.27$ and $\\alpha =0.29$. The percent agreement on Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\\alpha =0.35$ and $\\alpha =0.34$. The mutual F-scores show marked differences in agreement by category, where the categories that were annotated most often typically yield a higher score. This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$). The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$). We found that these categories are often confused. After combining the annotations of the two, the stance agreement would be increased to $\\alpha =0.43$." ] ]
58ad7e8f7190e2a4f1588cae9a7842c56b37694d
What is the size of the labelled dataset?
[ "27,534 messages " ]
[ [ "We collected a total of 96,566 tweets from TwiNL, which we filtered in a number of ways. First, retweets were removed, as we wanted to focus on unique messages. This led to a removal of 31% of the messages. Second, we filtered out messages that contain a URL. Such messages often share a news headline and include a URL to refer to the complete news message. As a news headline does not reflect the stance of the person who posted the tweet, we decided to apply this filtering step. It is likely that part of the messages with a URL do include a message composed by the sender itself, but this step helps to clean many unwanted messages. Third, we removed messages that include a word related to animals and traveling (`dier’, animal; `landbouw’, agriculture; and `teek’, tick), as we strictly focus on messages that refer to vaccination that is part of the governmental vaccination program. 27,534 messages were left after filtering. This is the data set that is used for experimentation." ] ]
12eba1598dca14db64dbc8b73484639363a4618e
Which features do they use to model Twitter messages?
[ "word unigrams, bigrams, and trigrams" ]
[ [ "To properly distinguish word tokens and punctuation we tokenized the tweets by means of Ucto, a rule-based tokenizer with good performance on the Dutch language, and with a configuration specific for Twitter. Tokens were lowercased in order to focus on the content. Punctuation was maintained, as well as emoji and emoticons. Such markers could be predictive in the context of a discussion such as vaccination. To account for sequences of words and characters that might carry useful information, we extracted word unigrams, bigrams, and trigrams as features. Features were coded binary, i.e. set to 1 if a feature is seen in a message and set to 0 otherwise. During training, all features apart from the top 15,000 most frequent ones were removed." ] ]
4e468ce13b7f6ac05371c62c08c3cec1cd760517
Do they allow for messages with vaccination-related key terms to be of neutral stance?
[ "Yes" ]
[ [ "The stance towards vaccination was categorized into `Negative’, `Neutral’, `Positive’ and `Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting." ] ]
3fb4334e5a4702acd44bd24eb1831bb7e9b98d31
How big are the datasets used?
[ "Evaluation datasets used:\nCMRC 2018 - 18939 questions, 10 answers\nDRCD - 33953 questions, 5 answers\nNIST MT02/03/04/05/06/08 Chinese-English - Not specified\n\nSource language train data:\nSQuAD - Not specified" ]
[ [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29.", "Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail.", "Translation: We use Google Neural Machine Translation (GNMT) system for translation. We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance." ] ]
a9acd1af4a869c17b95ec489cdb1ba7d76715ea4
Is this a span-based (extractive) QA task?
[ "Yes" ]
[ [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29." ] ]
afa94772fca7978f30973c43274ed826c40369eb
Are the contexts in a language different from the questions?
[ "Unanswerable" ]
[ [] ]
6f2118a0c64d5d2f49eee004d35b956cb330a10e
What datasets are used for training/testing models?
[ "Microsoft Research dataset containing movie, taxi and restaurant domains." ]
[ [ "The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and train/valid/test split is reported in Table TABREF11. At every turn both user and agent acts are annotated, we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act). The size of the sets of acts, slots, and act-slot pairs are also listed in Table TABREF11. Table TABREF12 shows the count of turns with multiple act annotations, which amounts to 23% of the dataset. We use MSR's dialogue management code and knowledge base to obtain the state at each turn and use it as input to every model." ] ]
8a0a51382d186e8d92bf7e78277a1d48958758da
How better is gCAS approach compared to other approaches?
[ "For entity F1 in the movie, taxi and restaurant domain it results in scores of 50.86, 64, and 60.35. For success, it results it outperforms in the movie and restaurant domain with scores of 77.95 and 71.52" ]
[ [] ]
b8dea4a98b4da4ef1b9c98a211210e31d6630cf3
What is specific to gCAS cell?
[ "It has three sequentially connected units to output continue, act and slots generating multi-acts in a doble recurrent manner." ]
[ [ "In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\\textit {continue}, \\textit {act}, \\textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them." ] ]
4146e1d8f79902c0bc034695998b724515b6ac81
What dataset do they evaluate their model on?
[ "CoNLL-2012 shared task BIBREF21 corpus" ]
[ [ "The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0. Following conventional approaches BIBREF9 , BIBREF11 , for each pronoun in the document, we consider candidate $n$ from the previous two sentences and the current sentence. For pronouns, we consider two types of them following BIBREF9 , i.e., third personal pronoun (she, her, he, him, them, they, it) and possessive pronoun (his, hers, its, their, theirs). Table 1 reports the number of the two type pronouns and the overall statistics for the experimental dataset. According to our selection range of candidate $n$ , on average, each pronoun has 4.6 candidates and 1.3 correct references." ] ]
42394c54a950bae8cebecda9de68ee78de69dc0d
What is the source of external knowledge?
[ "counts of predicate-argument tuples from English Wikipedia" ]
[ [ "The second type is the selectional preference (SP) knowledge. For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength. Specifically, we use the English Wikipedia as the base corpus for such counting. Then we parse the entire corpus through the Stanford parser and record all dependency edges in the format of (predicate, argument, relation, number), where predicate is the governor and argument the dependent in the original parsed dependency edge. Later for sentences in the training and test data, we firstly parse each sentence and find out the dependency edge linking $p$ and its corresponding predicate. Then for each candidate $n$ in a sentence, we check the previously created SP knowledge base and find out how many times it appears as the argument of different predicates with the same dependency relation (i.e., nsubj and dobj). The resulted frequency is grouped into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and we use the bucket id as the final SP knowledge. Thus in the previous example:" ] ]
e9d882775a132e172eea68ab6ab4621a924bb6b8
Which of their proposed attention methods works better overall?
[ "attention parsing" ]
[ [ "Tables and summarize the results from the link-prediction experiments on all three datasets, where a different ratio of edges are used for training. Results from models other than GANE are collected from BIBREF9 , BIBREF10 and BIBREF34 . We have also repeated these experiments on our own, and the results are consistent with the ones reported. Note that BIBREF34 did not report results on DMTE. Both GANE variants consistently outperform competing solutions. In the low-training-sample regime our solutions lead by a large margin, and the performance gap closes as the number of training samples increases. This indicates that our OT-based mutual attention framework can yield more informative textual representations than other methods. Note that GANE-AP delivers better results compared with GANE-OT, suggesting the attention parsing mechanism can further improve the low-level mutual attention matrix. More results on Cora and Hepth are provided in the SM." ] ]
6367877c05beebfdbb31e83c1f25dfddf925b6b6
Which dataset of texts do they use?
[ "Cora, Hepth, Zhihu" ]
[ [ "We consider three benchmark datasets: ( INLINEFORM0 ) Cora, a paper citation network with text information, built by BIBREF44 . We prune the dataset so that it only has papers on the topic of machine learning. ( INLINEFORM1 ) Hepth, a paper citation network from Arxiv on high energy physics theory, with paper abstracts as text information. ( INLINEFORM2 ) Zhihu, a Q&A network dataset constructed by BIBREF9 , which has 10,000 active users with text descriptions and their collaboration links. Summary statistics of these three datasets are summarized in Table . Pre-processing protocols from prior studies are used for data preparation BIBREF10 , BIBREF34 , BIBREF9 ." ] ]
d151327c93b67928313f8fad8079a4ff9ef89314
Do they measure how well they perform on longer sequences specifically?
[ "Yes" ]
[ [ "We further explore the effect of INLINEFORM0 -gram length in our model (i.e., the filter size for the covolutional layers used by the attention parsing module). In Figure FIGREF39 we plot the AUC scores for link prediction on the Cora dataset against varying INLINEFORM1 -gram length. The performance peaked around length 20, then starts to drop, indicating a moderate attention span is more preferable. Similar results are observed on other datasets (results not shown). Experimental details on the ablation study can be found in the SM." ] ]
70f9358dc01fd2db01a6b165e0b4e83e4a9141a7
Which other embeddings do they compare against?
[ "MMB, DeepWalk, LINE, Node2vec, TADW, CENE, CANE, WANE, DMTE" ]
[ [ "To demonstrate the effectiveness of the proposed solutions, we evaluated our model along with the following strong baselines. ( INLINEFORM0 ) Topology only embeddings: MMB BIBREF45 , DeepWalk BIBREF1 , LINE BIBREF33 , Node2vec BIBREF46 . ( INLINEFORM1 ) Joint embedding of topology & text: Naive combination, TADW BIBREF5 , CENE BIBREF6 , CANE BIBREF9 , WANE BIBREF10 , DMTE BIBREF34 . A brief summary of these competing models is provided in the Supplementary Material (SM)." ] ]
4a4616e1a9807f32cca9b92ab05e65b05c2a1bf5
What were the sizes of the test sets?
[ "Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences" ]
[ [ "Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively." ] ]
3752bbc5367973ab5b839ded08c57f51336b5c3d
What training data did they use?
[ "Training-22, NLM-180" ]
[ [ "Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively." ] ]
30db81df46474363d5749d7f6a94b7ef95cd3e01
What domains do they experiment with?
[ "Twitter, Yelp reviews and movie reviews" ]
[ [ "With unsupervised domain adaptation, one has access to labeled sentence specificity in one source domain, and unlabeled sentences in all target domains. The goal is to predict the specificity of target domain data. Our source domain is news, the only domain with publicly available labeled data for training BIBREF1 . We crowdsource sentence specificity for evaluation for three target domains: Twitter, Yelp reviews and movie reviews. The data is described in Section SECREF4 ." ] ]
5c26388a2c0b0452d529d5dd565a5375fdabdb70
What games are used to test author's methods?
[ "Lurking Horror, Afflicted, Anchorhead, 9:05, TextWorld games" ]
[ [ "TextWorld uses a grammar to generate similar games. Following BIBREF7, we use TextWorld's “home” theme to generate the games for the question-answering system. TextWorld is a framework that uses a grammar to randomly generate game worlds and quests. This framework also gives us information such as instructions on how to finish the quest, and a list of actions that can be performed at each step based on the current world state. We do not let our agent access this additional solution information or admissible actions list. Given the relatively small quest length for TextWorld games—games can be completed in as little as 5 steps—we generate 50 such games and partition them into train and test sets in a 4:1 ratio. The traces are generated on the training set, and the question-answering system is evaluated on the test set.", "We choose the game, 9:05 as our target task game due to similarities in structure in addition to the vocabulary overlap. Note that there are multiple possible endings to this game and we pick the simplest one for the purpose of training our agent.", "For the horror domain, we choose Lurking Horror to train the question-answering system on. The source and target task games are chosen as Afflicted and Anchorhead respectively. However, due to the size and complexity of these two games some modifications to the games are required for the agent to be able to effectively solve them. We partition each of these games and make them smaller by reducing the final goal of the game to an intermediate checkpoint leading to it. This checkpoints were identified manually using walkthroughs of the game; each game has a natural intermediate goal. For example, Anchorhead is segmented into 3 chapters in the form of objectives spread across 3 days, of which we use only the first chapter. The exact details of the games after partitioning is described in Table TABREF7. For Lurking Horror, we report numbers relevant for the oracle walkthrough. We then pre-prune the action space and use only the actions that are relevant for the sections of the game that we have partitioned out. The majority of the environment is still available for the agent to explore but the game ends upon completion of the chosen intermediate checkpoint." ] ]
184e1f28f96babf468f2bb4e1734f69646590cda
How is the domain knowledge transfer represented as knowledge graph?
[ "the knowledge graph is used to prune this space by ranking actions based on their presence in the current knowledge graph and the relations between the objects in the graph as in BIBREF7" ]
[ [ "The agent also has access to all actions accepted by the game's parser, following BIBREF2. For general interactive fiction environments, we develop our own method to extract this information. This is done by extracting a set of templates accepted by the parser, with the objects or noun phrases in the actions replaces with a OBJ tag. An example of such a template is \"place OBJ in OBJ\". These OBJ tags are then filled in by looking at all possible objects in the given vocabulary for the game. This action space is of the order of $A=\\mathcal {O}(|V| \\times |O|^2)$ where $V$ is the number of action verbs, and $O$ is the number of distinct objects in the world that the agent can interact with. As this is too large a space for a RL agent to effectively explore, the knowledge graph is used to prune this space by ranking actions based on their presence in the current knowledge graph and the relations between the objects in the graph as in BIBREF7" ] ]
71fca845edd33f6e227eccde10db73b99a7e157b
What was the baseline?
[ "the baseline provided by BIBREF8, the baselines provided by the ABSA organizers" ]
[ [ "Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing.", "In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 ." ] ]
93b299acfb6fad104b9ebf4d0585d42de4047051
Which datasets are used?
[ "ABSA SemEval 2014-2016 datasets\nYelp Academic Dataset\nWikipedia dumps" ]
[ [ "Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.", "Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.", "In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 ." ] ]
e755fb599690d0d0c12ddb851ac731a0a7965797
Which six languages are experimented with?
[ "Dutch, French, Russian, Spanish , Turkish, English " ]
[ [ "In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset." ] ]
7e51490a362135267e75b2817de3c38dfe846e21
What shallow local features are extracted?
[ " Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context" ]
[ [ "The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm." ] ]
e98d331faacd50f8ec588d2466b5a85da1f37e6f
Do they compare results against state-of-the-art language models?
[ "Yes" ]
[ [ "All models outperform previously reported results for mlstm BIBREF8 despite lower parameter counts. This is likely due to our relatively small batch size. However, they perform fairly similarly. Encouraged by these results, we built an mgru with both hidden and intermediate state sizes set to that of the original mlstm (700). This version highly surpasses the previous state of the art while still having fewer parameters than previous work." ] ]
fbe22e133fa919f06abd8afbed3395af51d2bfef
Do they integrate the second-order term in the mLSTM?
[ "Unanswerable" ]
[ [] ]
f319f2c3f9339b0ce47478f5aa0c32da387a156e
Which dataset do they train their models on?
[ "Penn Treebank, Text8" ]
[ [ "Character-level language modeling (or character prediction) consists in predicting the next character while reading a document one character at a time. It is a common benchmark for rnn because of the heightened need for shared parametrization when compared to word-level models. We test mgru on two well-known datasets, the Penn Treebank and Text8." ] ]
02417455c05f09d89c2658f39705ac1df1daa0cd
How much does it minimally cost to fine-tune some model according to benchmarking framework?
[ "$1,728" ]
[ [] ]
6ce057d3b88addf97a30cb188795806239491154
What models are included in baseline benchmarking results?
[ "BERT, XLNET RoBERTa, ALBERT, DistilBERT" ]
[ [] ]
4cab33c8dd46002e0ccafda3916b37366a24a394
did they compare with other evaluation metrics?
[ "Yes" ]
[ [] ]
9dd65dca9dffd2bf78ecc22b17824edc885d1fa2
which datasets were used in validation?
[ "Unanswerable" ]
[ [] ]
e91692136033bbc3f19743d0ee5784365746a820
It looks like learning to paraphrase questions, a neural scoring model and a answer selection model cannot be trained end-to-end. How are they trained?
[ "using multiple pivot sentences" ]
[ [ "BIBREF11 revisit bilingual pivoting in the context of neural machine translation (NMT, BIBREF12 , BIBREF13 ) and present a paraphrasing model based on neural networks. At its core, NMT is trained end-to-end to maximize the conditional probability of a correct translation given a source sentence, using a bilingual corpus. Paraphrases can be obtained by translating an English string into a foreign language and then back-translating it into English. NMT-based pivoting models offer advantages over conventional methods such as the ability to learn continuous representations and to consider wider context while paraphrasing." ] ]
94e17980435aaa9fc3b5328f16f3368dc8a736bd
What multimodal representations are used in the experiments?
[ "The second method it to learn a common space for the two modalities before concatenation (project), The first method is concatenation of the text and image representation (concat)" ]
[ [ "Evaluation of Representation Models ::: Experiments ::: Multimodal representation.", "We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation we applied the L2 normalization to each of the modalities. The second method it to learn a common space for the two modalities before concatenation (project).", "The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \\in \\mathbb {R}^{d}$. Once the multimodal representation is produced ($h_{m}$) for the left and right pairs, vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function." ] ]
4d8b3928f89d73895a7655850a227fbac08cdae9
How much better is inference that has addition of image representation compared to text-only representations?
[ " largest improvement ($22-26\\%$ E.R) when text-based unsupervised models are combined with image representations" ]
[ [ "Table TABREF31 summarizes the contribution of the images on text representations in test partition. The contribution is consistent through all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with the multimodal counterpart. For the comparison we chose the best text model for each representation. As expected we obtain the largest improvement ($22-26\\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as the purely unsupervised models. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE." ] ]
b6e4b98fad3681691bcce13f57fb173aee30c592
How they compute similarity between the representations?
[ "similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations" ]
[ [ "In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations." ] ]
af7a9b56596f90c84f962098f7e836309161badf
How big is vSTS training data?
[ "1338 pairs for training" ]
[ [ "The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered.", "We split the vSTS dataset into training, validation and test partitions sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the rest of the 670 pairs were used for the final testing. Similar to the STS task, we use the Pearson correlation coefficient ($\\rho $) as the evaluation metric of the task." ] ]
ba61ed892b4f7930430389e80a0c8e3b701c8e5d
Which evaluation metrics do they use for language modelling?
[ " functional dissimilarity score, nearest neighbours experiment" ]
[ [ "The functional dissimilarity score was computed using sentences from the test set in CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL2017 data.", "We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general purpose language models do capture syntax information, which varies greatly across languages and models." ] ]
6a566095e25cbb56330456d7a1f3471693817712
Do they do quantitative quality analysis of learned embeddings?
[ "Yes" ]
[ [ "The high nearest neighbours accuracy indicates that syntax information was successfully captured by the embeddings. Table TABREF22 also shows that the syntactic information of multiple languages was captured by a single embedding model." ] ]
56c6ff65c64ca85951fdea54d6b096f28393c128
Do they evaluate on downstream tasks?
[ "Yes" ]
[ [ "Many NLP tasks utilize POS as features, but human annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentences-level syntactic embeddings for low-sources languages without treebanks.", "We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top)." ] ]
356e462f7966e30665a387ed7a9ad2e830479da6
Which corpus do they use?
[ "The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16." ]
[ [ "To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families.", "The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages." ] ]
572458399a45fd392c3a4e07ce26dcff2ad5a07d
How much more accurate is the model than the baseline?
[ "For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%. " ]
[ [ "NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements as well as their combinations closely match the questions. Thus, conclusions and supplements in the answers are consistent with each other and avoid confusion made by several different conclusion-supplement answers assigned to a single non-factoid questions. Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. Thus, NAGM generates more fluent sentences by assessing the context from conclusion to supplement sentences in addition to the closeness of the question and sentences as well as that of the question and sentence combinations.", "The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. Note that our evaluation followed the DUC-style strategy. Here, we mean “grammar” to cover grammaticality, non-redundancy, and referential clarity in the DUC strategy, whereas we mean the “content matched the questions” to refer to “focus” and “structure and coherence” in the DUC strategy. The evaluators were given more than a week to carefully evaluate the generated answers, so we consider that their judgments are reliable. Each expert evaluated 50 questions. We combined the scores of the experts by summing them. They did not know the identity of the system in the evaluation and reached their decisions independently.", "These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM. This is because, as can be seen in Table 7, NAGM generated longer and better question-related sentences than CLSTM did. NAGM generated grammatically good answers whose conclusion and supplement statements are well matched with the question and the supplement statement naturally follows the conclusion statement." ] ]
cb4727cd5643dabc3f5c95e851d5313f5d979bdc
How big is improvement over the old state-of-the-art performance on CoNLL-2009 dataset?
[ "our Open model achieves more than 3 points of f1-score than the state-of-the-art result" ]
[ [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task." ] ]
33d864153822bd378a98a732ace720e2c06a6bc6
What is new state-of-the-art performance on CoNLL-2009 dataset?
[ "In closed setting 84.22 F1 and in open 87.35 F1." ]
[ [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task." ] ]
b13cf4205f3952c3066b9fb81bd5c4277e2bc7f5
How big is CoNLL-2009 dataset?
[ "Unanswerable" ]
[ [] ]
86f24ecc89e743bb1534ac160d08859493afafe9
What different approaches of encoding syntactic information authors present?
[ "dependency head and dependency relation label, denoted as Dep and Rel for short, Tree-based Position Feature (TPF) as Dependency Path (DepPath), Shortest Dependency Path (SDP) as Relation Path (RelPath)" ]
[ [ "The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short.", "In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath)." ] ]
bab8c69e183bae6e30fc362009db9b46e720225e
What are two strong baseline methods authors refer to?
[ "Marcheggiani and Titov (2017) and Cai et al. (2018)" ]
[ [] ]
ead5dc1f3994b2031a1852ecc4f97ac5760ea977
How many category tags are considered?
[ "14 categories" ]
[ [] ]
86cd1228374721db67c0653f2052b1ada6009641
What domain does the dataset fall into?
[ "YouTube videos" ]
[ [ "We perform our experiments using ActivityNet Captions dataset BIBREF2 that is considered as the standard benchmark for dense video captioning task. The dataset contains approximately 20k videos from YouTube and split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions, around 13.65 words each, and two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set)." ] ]
7011b26ffc54769897e4859e4932aeddfab82c9f
What ASR system do they use?
[ "YouTube ASR system " ]
[ [ "The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. Authors provide pre-computed C3D features and frames at 5fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos which is, roughly, 91 % of the dataset. Out of these 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system which can be though as subtitles." ] ]
3a6559dc6eba7f5abddf3ac27376ba0b9643a908
What is the state of the art?
[ "Unanswerable" ]
[ [] ]
5cd5864077e4074bed01e3a611b747a2180088a0
How big are datasets used in experiments?
[ "2000 images" ]
[ [ "There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, more data we have, better result are generated by supervised deep learning methods. Data augmentation helps to increase the number of training data, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might over-fitted or under-fitted on training data. Using our annotation software, we automatically extracted landmarks of 2000 images from the UOttawa database BIBREF14 had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\\%$ training, 5$\\%$ validation, and 5$\\%$ testing datasets. For testing, we also applied the TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed an online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combination of these transformations, and annotation point locations are also was transformed, correspondingly. From our extensive random search hyper-parameter tuning, learning rate, the number of iterations, mini-batch sizes, the number of epochs was determined as 0.0005, 1000, 30, 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with Core i7, 4.2 GHz speed using one NVIDIA 1080 GPU unit, and 32 GB of RAM. Adam optimization with fixed momentum values of 0.9, was utilized for training." ] ]
d664054c8d1f8e84169d4ab790f2754274353685
What previously annotated databases are available?
[ "the UBC database BIBREF14" ]
[ [ "There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, more data we have, better result are generated by supervised deep learning methods. Data augmentation helps to increase the number of training data, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might over-fitted or under-fitted on training data. Using our annotation software, we automatically extracted landmarks of 2000 images from the UOttawa database BIBREF14 had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\\%$ training, 5$\\%$ validation, and 5$\\%$ testing datasets. For testing, we also applied the TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed an online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combination of these transformations, and annotation point locations are also was transformed, correspondingly. From our extensive random search hyper-parameter tuning, learning rate, the number of iterations, mini-batch sizes, the number of epochs was determined as 0.0005, 1000, 30, 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with Core i7, 4.2 GHz speed using one NVIDIA 1080 GPU unit, and 32 GB of RAM. Adam optimization with fixed momentum values of 0.9, was utilized for training." ] ]
03fb4b31742820df58504575c562bee672e016be
Do they address abstract meanings and concepts separately?
[ "Unanswerable" ]
[ [] ]
691cba5713c76a6870e35bc248ce1d29c0550bc7
Do they argue that all words can be derived from other (elementary) words?
[ "No" ]
[ [ "In order to understand what INLINEFORM0 is, we need to understand the mathematics of grammar. The study of the mathematical structure of grammar has indicated that the fundamental things making up sentences are not the words, but some atomic grammatical types, such as the noun-type and the sentence-type BIBREF23 , BIBREF24 , BIBREF25 . The transitive verb-type is not an atomic grammatical type, but a composite made up of two noun-types and one sentence-type. Hence, particularly interesting here is that atomic doesn't really mean smallest..." ] ]
4542a4e7eabb8006fb7bcff2ca6347cfb3fbc56b
Do they break down word meanings into elementary particles as in the standard model of quantum theory?
[ "No" ]
[ [ "In order to understand what INLINEFORM0 is, we need to understand the mathematics of grammar. The study of the mathematical structure of grammar has indicated that the fundamental things making up sentences are not the words, but some atomic grammatical types, such as the noun-type and the sentence-type BIBREF23 , BIBREF24 , BIBREF25 . The transitive verb-type is not an atomic grammatical type, but a composite made up of two noun-types and one sentence-type. Hence, particularly interesting here is that atomic doesn't really mean smallest...", "On the other hand, just like in particle physics where we have particles and anti-particles, the atomic types include types as well as anti-types. But unlike in particle physics, there are two kinds of anti-types, namely left ones and right ones. This makes language even more non-commutative than quantum theory!" ] ]
af8d3ee6a282aaa885e9126aa4bcb08ac68837e0
How big is the dataset used?
[ "over 41,250 videos and 825,000 captions in both English and Chinese., over 206,000 English-Chinese parallel translation pairs" ]
[ [ "We utilize the VATEX dataset for video captioning, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, there are over 206,000 English-Chinese parallel translation pairs. It covers 600 human activities and a variety of video content. Each video is paired with 10 English and 10 Chinese diverse captions. We follow the official split with 25,991 videos for training, 3,000 videos for validation and 6,000 public test videos for final testing." ] ]
0e510d918456f3d2b390b501a145d92c4f125835
How do they prove that multi-head self-attention is at least as powerful as a convolution layer?
[ "constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer" ]
[ [ "The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\\Delta \\!\\!\\!\\!\\Delta _K = \\lbrace -\\lfloor K/2 \\rfloor , \\dots , \\lfloor K/2 \\rfloor \\rbrace ^2$ of all pixel shifts in a $K\\times K$ kernel. The exact condition can be found in the statement of Lemma UNKREF15." ] ]
7ac0cec79c8c2b1909b0a1cc0d4646fce09884ee
Is there a way of converting existing convolution layers into self-attention to perform the very same convolution?
[ "Unanswerable" ]
[ [] ]
6fd07f4dc037a82c8fa0ed80469eb4171dcebf12
What do the authors mean by a sufficient number of heads?
[ "Unanswerable" ]
[ [] ]
2caa8726222237af482e170c51c88099cefef6fc
Is there any non-numerical experiment that also supports the authors' claim, such as an analysis of attention layers in publicly available networks?
[ "No" ]
[ [] ]
5367f8979488aaa420d8a69fec656851095ecacb
What numerical experiments do they perform?
[ "attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis, validate that our model learns a meaningful classifier we compare it to the standard ResNet18" ]
[ [ "The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis.", "We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier." ] ]
3cca26a9474d3b0d278e4dd57e24b227e7c2cd41
What dataset is used?
[ "Brent corpus, PTB , Beijing University Corpus, Penn Chinese Treebank" ]
[ [ "We evaluate our model on both English and Chinese segmentation. For both languages we used standard datasets for word segmentation and language modeling. For all datasets, we used train, validation and test splits. Since our model assumes a closed character set, we removed validation and test samples which contain characters that do not appear in the training set. In the English corpora, whitespace characters are removed. In Chinese, they are not present to begin with. Refer to Appendix SECREF9 for dataset statistics.", "The Brent corpus is a standard corpus used in statistical modeling of child language acquisition BIBREF15 , BIBREF16 . The corpus contains transcriptions of utterances directed at 13- to 23-month-old children. The corpus has two variants: an orthographic one (BR-text) and a phonemic one (BR-phono), where each character corresponds to a single English phoneme. As the Brent corpus does not have a standard train and test split, and we want to tune the parameters by measuring the fit to held-out data, we used the first 80% of the utterances for training and the next 10% for validation and the rest for test.", "We use the commonly used version of the PTB prepared by BIBREF17 . However, since we removed space symbols from the corpus, our cross entropy results cannot be compared to those usually reported on this dataset.", "Since Chinese orthography does not mark spaces between words, there have been a number of efforts to annotate word boundaries. We evaluate against two corpora that have been manually segmented according different segmentation standards.", "The Beijing University Corpus was one of the corpora used for the International Chinese Word Segmentation Bakeoff BIBREF18 .", "We use the Penn Chinese Treebank Version 5.1 BIBREF19 . It generally has a coarser segmentation than PKU (e.g., in CTB a full name, consisting of a given name and family name, is a single token), and it is a larger corpus." ] ]
8f8f2b0046e1a78bd34c0c3d6b6cb24463a8ed7f
What language do they look at?
[ "English, Chinese" ]
[ [ "We evaluate our model on both English and Chinese segmentation. For both languages we used standard datasets for word segmentation and language modeling. For all datasets, we used train, validation and test splits. Since our model assumes a closed character set, we removed validation and test samples which contain characters that do not appear in the training set. In the English corpora, whitespace characters are removed. In Chinese, they are not present to begin with. Refer to Appendix SECREF9 for dataset statistics." ] ]
e37c32fce68759b2272adc1e44ea91c1a7c47059
What diverse domains and languages are present in the new datasets?
[ "movies , restaurants, English , Korean" ]
[ [ "We provide three pairs of short/long datasets from different domains (movies and restaurants) and from different languages (English and Korean) suitable for the task: Mov_en, Res_en, and Mov_ko. Most of the datasets are from previous literature and are gathered differently The Mov_en datasets are gathered from different websites; the short dataset consists of hand-picked sentences by BIBREF19 from document-level reviews from the Rotten Tomatoes website, while the long dataset consists of reviews from the IMDB website obtained by BIBREF20. The Res_en dataset consists of reviews from Yelp, where the short dataset consists of reviews with character lengths less than 140 from BIBREF21, while reviews in the long dataset are gathered from BIBREF20. We also share new short/long datasets Mov_ko, which are gathered from two different channels, as shown in Figure FIGREF4, available in Naver Movies. Unlike previous datasets BIBREF9, BIBREF22 where they used polarity/binary (e.g., positive or negative) labels as classes, we also provide fine-grained classes, with five classes of different sentiment intensities (e.g., 1 is strong negative, 5 is strong positive), for Res_en and Mov_ko. Following the Cross Domain Transfer setting BIBREF9, BIBREF23, BIBREF24, we limit the size of the dataset to be small-scale to focus on the main task at hand. This ensures that models focus on the transfer task, and decrease the influence of other factors that can be found when using larger datasets. Finally, following BIBREF22, we provide additional unlabeled data for those models that need them BIBREF9, BIBREF23, except for the long dataset of Mov_ko, where the labeled reviews are very limited. We show the dataset statistics in Table TABREF9, and share the datasets here: https://github.com/rktamplayo/LeTraNets." ] ]
280f863cfd63b711980ca6c7f1409c0306473de7
Are their corpus and software public?
[ "Yes" ]
[ [ "Our system, including software and corpus, is available as an open source project for free research purpose and we believe that it is a good baseline for the development and comparison of future Vietnamese SRL systems. We plan to integrate this tool to Vitk, an open-source toolkit for processing Vietnamese text, which contains fundamental processing tools and are readily scalable for processing very large text data." ] ]
b5a2b03cfc5a64ad4542773d38372fffc6d3eac7
How are EAC evaluated?
[ "Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement." ]
[ [ "We characterize the evaluation of Emotionally-Aware Chatbot into two different parts, qualitative and quantitative assessment. Qualitative assessment will focus on assessing the functionality of the software, while quantitative more focus on measure the chatbots' performance with a number.", "Based on our investigation of several previous studies, we found that most of the works utilized ISO 9241 to assess chatbots' quality by focusing on the usability aspect. This aspect can be grouped into three focuses, including efficiency, effectiveness, and satisfaction, concerning systems' performance to achieve the specified goals. Here we will explain every focus based on several categories and quality attributes.", "In automatic evaluation, some studies focus on evaluating the system at emotion level BIBREF15 , BIBREF28 . Therefore, some common metrics such as precision, recall, and accuracy are used to measure system performance, compared to the gold label. This evaluation is similar to emotion classification tasks such as previous SemEval 2018 BIBREF32 and SemEval 2019 . Other studies also proposed to use perplexity to evaluate the model at the content level (to determine whether the content is relevant and grammatical) BIBREF14 , BIBREF39 , BIBREF28 . This evaluation metric is widely used to evaluate dialogue-based systems which rely on probabilistic approach BIBREF61 . Another work by BIBREF14 used BLEU to evaluate the machine response and compare against the gold response (the actual response), although using BLEU to measure conversation generation task is not recommended by BIBREF62 due to its low correlation with human judgment.", "This evaluation involves human judgement to measure the chatbots' performance, based on several criteria. BIBREF15 used three annotators to rate chatbots' response in two criteria, content (scale 0,1,2) and emotion (scale 0,1). Content is focused on measuring whether the response is natural acceptable and could plausible produced by a human. This metric measurement is already adopted and recommended by researchers and conversation challenging tasks, as proposed in BIBREF38 . Meanwhile, emotion is defined as whether the emotion expression contained in the response agrees with the given gold emotion category. Similarly, BIBREF28 used four annotators to score the response based on consistency, logic and emotion. Consistency measures the fluency and grammatical aspect of the response. Logic measures the degree whether the post and response logically match. Emotion measures the response, whether it contains the appropriate emotion. All of these aspects were measured by three scales 0, 1, and 2. Meanwhile, BIBREF39 proposed naturalness and emotion impact as criteria to evaluate the chatbots' response. Naturalness evaluates whether the response is intelligible, logically follows the context of the conversation, and acceptable as a human response, while emotion impact measures whether the response elicits a positive emotional or triggers an emotionally-positive dialogue, since their study focus only on positive emotion. Another study by BIBREF14 uses crowdsourcing to gather human judgement based on three aspects of performance including empathy/sympathy - did the responses show understanding of the feelings of the person talking about their experience?; relevance - did the responses seem appropriate to the conversation? Were they on-topic?; and fluency - could you understand the responses? Did the language seem accurate?. 
All of these aspects were recorded with three different responses, i.e., (1: not at all, 3: somewhat, 5: very much), from around 100 different annotators. After gathering all of the human judgements for the different criteria, some of these studies used a t-test to establish statistical significance BIBREF28 , BIBREF39 , while some others used inter-annotator agreement measures such as Fleiss Kappa BIBREF15 , BIBREF14 . Based on these evaluations, they can compare their system performance with baselines or other state-of-the-art systems." ] ]
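Among the quantitative measures listed above, inter-annotator agreement via Fleiss' kappa is the least self-explanatory, so a small reference implementation may help. This is a standard textbook formulation shown only for illustration; the count matrix and category labels in the example are made up.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa from an (items x categories) count matrix.

    ratings[i, j] = number of annotators who assigned item i to category j;
    every item must be rated by the same number of annotators.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]                     # annotators per item
    p_j = ratings.sum(axis=0) / ratings.sum()      # overall category proportions
    P_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three annotators scoring four responses on a 0/1/2 "emotion" scale (toy data).
print(fleiss_kappa([[3, 0, 0], [1, 2, 0], [0, 1, 2], [0, 0, 3]]))
```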
b093b440ae3cd03555237791550f3224d159d85b
What are the currently available datasets for EAC?
[ "EMPATHETICDIALOGUES dataset, a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries, SEMAINE corpus BIBREF30" ]
[ [ "Nowadays, most of chatbots technologies were built by using neural-based approach. Emotional Chatting Machine (ECM) BIBREF15 was the first works which exploiting deep learning approach in building a large-scale emotionally-aware conversational bot. Then several studies were proposed to deal with this research area by introducing emotion embedding representation BIBREF24 , BIBREF25 , BIBREF26 or modeling as reinforcement learning problem BIBREF27 , BIBREF28 . Most of these studies used encoder-decoder architecture, specifically sequence to sequence (seq2seq) learning. Some works also tried to introduce a new dataset in order to have a better gold standard and improve system performance. BIBREF14 introduce EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations include emotional contexts information to facilitate training and evaluating the textual conversational system. Then, work from BIBREF2 produce a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries. This dataset was used to build tone-aware customer care chatbot. Finally, BIBREF29 tried to enhance SEMAINE corpus BIBREF30 by using crowdsourcing scenario to obtain a human judgement for deciding which response that elicits positive emotion. Their dataset was used to develop a chatbot which captures human emotional states and elicits positive emotion during the conversation." ] ]
ad16c8261c3a0b88c685907387e1a6904eb15066
What are the research questions posed in the paper regarding EAC studies?
[ "how to incorporate affective information into chatbots, what are resources that available and can be used to build EAC, and how to evaluate EAC performance" ]
[ [ "In this work, a systematic review of emotionally-aware chatbots is proposed. We focus on three main issues, including, how to incorporate affective information into chatbots, what are resources that available and can be used to build EAC, and how to evaluate EAC performance. The rise of EAC was started by Parry, which uses a simple rule-based approach. Now, most of EAC are built by using a neural-based approach, by exploiting emotion classifier to detect emotion contained in the text. In the modern era, the development of EAC gains more attention since Emotion Generation Challenge shared task on NLPCC 2017. In this era, most EAC is developed by adopting encoder-decoder architecture with sequence-to-sequence learning. Some variant of the recurrent neural network is used in the learning process, including long-short-term memory (LSTM) and gated recurrent unit (GRU). There are also some datasets available for developing EAC now. However, the datasets are only available in English and Chinese. These datasets are gathered from various sources, including social media, online website and manual construction by crowdsourcing. Overall, the difference between these datasets and the common datasets for building chatbot is the presence of an emotion label. In addition, we also investigate the available affective resources which usually use in the emotion classification task. In this part, we only focus on English resources and found several resources from the old one such as LIWC and Emolex to the new one, including DepecheMood and EmoWordNet. In the final part, we gather information about how to evaluate the performance of EAC, and we can classify the approach into two techniques, including qualitative and quantitative assessment. For qualitative assessment, most studies used ISO 9241, which covers several aspects such as efficiency, effectiveness, and satisfaction. While in quantitative analysis, two techniques can be used, including automatic evaluation (by using perplexity) and manual evaluation (involving human judgement). Overall, we can see that effort to humanize chatbots by incorporation affective aspect is becoming the hot topic now. We also predict that this development will continue by going into multilingual perspective since up to now every chatbot only focusing on one language. Also, we think that in the future the studies of humanizing chatbot are not only utilized emotion information but will also focus on a contextual-aware chatbot." ] ]
d3014683dff9976b7c56b72203df99f0e27e9989
What evaluation metrics did they use?
[ "we report P@1, which is equivalent to accuracy, we also provide results with P@5 and P@10 in the Appendix" ]
[ [ "Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling BIBREF12 as the most common and best performing in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at $k$ (P@$k$, evaluating how often the correct translation is within the $k$ retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix." ] ]
ed522090941f61e97ec3a39f52d7599b573492dd
What is triangulation?
[ "Answer with content missing: (Chapter 3) The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt–En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words respectively. By utilizing the transitive property (which translation should exhibit) we can identify the set of 7 possible Cs translations for the Pt word trabalho." ]
[ [ "Conclusion" ] ]
5d164651a4aed7cf24d53ba9685b4bee8c965933
What languages are explored in this paper?
[ "Unanswerable" ]
[ [] ]
4670e1be9d6a260140d055c7685bce365781d82b
Did they experiment with the dataset on some tasks?
[ "Yes" ]
[ [ "In this work, we publish TWNERTC dataset in which named entities and categories of sentences have been automatically annotated. We use Turkish Wikipedia dumps as the text source and Freebase to construct a large-scale gazetteers to map fine-grained types to entities. To overcome noisy and ambiguous data, we leverage domain information which is given by Freebase and develop domain-independent and domain-dependent methodologies. All versions of datasets can be downloaded from our project web-page. Our main contributions are (1) the publication of Turkish corpus for coarse-grained and fine-grained NER, and TC research, (2) six different versions of corpus according to noise reduction methodology and entity types, (3) an analysis of the corpus and (4) benchmark comparisons for NER and TC tasks against human annotators. To the best of our knowledge, these datasets are the largest datasets available for Turkish NER ad TC tasks." ] ]
c2da598346b74541c78ecff5c9586b3857dd01b5
How much better does the hybrid tiled CNN model perform than its counterparts?
[ "Unanswerable" ]
[ [] ]
013a8525dbf7a9e1e69acc1cff18bb7b8261cbad
Do they use pretrained word embeddings?
[ "Yes" ]
[ [ "In this study, we tackle a task of describing (defining) a phrase when given its local context as BIBREF2 , while allowing access to other usage examples via word embeddings trained from massive text (global contexts) BIBREF0 , BIBREF1 . We present LOG-Cad, a neural network-based description generator (Figure FIGREF1 ) to directly solve this task. Given a word with its context, our generator takes advantage of the target word's embedding, pre-trained from massive text (global contexts), while also encoding the given local context, combining both to generate a natural language description. The local and global contexts complement one another and are both essential; global contexts are crucial when local contexts are short and vague, while the local context is crucial when the target phrase is polysemous, rare, or unseen." ] ]
5efed109940bf74ed0a9d4a5e97a535502b23d27
Do they use skipgram version of word2vec?
[ "Yes" ]
[ [ "In this paper we focus on the skipgram approach with random negative examples proposed in BIBREF0 . This has been found to yield the best results among the proposed variants on a variety of semantic tests of the resulting vectors BIBREF7 , BIBREF0 . Given a corpus consisting of a sequence of sentences INLINEFORM0 each comprising a sequence of words INLINEFORM1 , the objective is to maximize the log likelihood: DISPLAYFORM0" ] ]
b8137eb0fa0b41f871c899a54154f640f0e9aca1
What domains are considered that have such large vocabularies?
[ "relational entities, general text-based attributes, descriptive text of images, nodes in graph structure of networks, queries" ]
[ [ "More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few." ] ]
38b527783330468bf6c4829f7d998e6f17c615f0
Do they perform any morphological tokenization?
[ "No" ]
[ [ "This step entails counting occurrences of all words in the training corpus and sorting them in order of decreasing occurrence. As mentioned, the vocabulary is taken to be the INLINEFORM0 most frequently occurring words, that occur at least some number INLINEFORM1 times. It is implemented in Spark as a straight-forward map-reduce job." ] ]
6b2fbc1c083491a774233f9edf8f76bd879418df
How many nodes does the cluster have?
[ "Unanswerable" ]
[ [] ]
fb56743e942883d7e74a73c70bd11016acddc348
What data do they train the language models on?
[ " BABEL speech corpus " ]
[ [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation." ] ]
093dd1e403eac146bcd19b51a2ace316b36c6264
Do they report BLEU scores?
[ "No" ]
[ [] ]
1adbdb5f08d67d8b05328ccc86d297ac01bf076c
What languages do they use?
[ "Train languages are: Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin and Georgian, while Assamese, Tagalog, Swahili, Lao are used as target languages." ]
[ [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation." ] ]
da82b6dad2edd4911db1dc59e4ccd7f66c5fd79c
What architectures are explored to improve the seq2seq model?
[ "VGG-BLSTM, character-level RNNLM" ]
[ [ "Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below.", "We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes." ] ]
cf15c4652e23829d8fb4cf2a25e64408c18734c1
Why is this work different from text-only UNMT?
[ "the image can play the role of a pivot “language\" to bridge the two languages without paralleled corpus" ]
[ [ "Our idea is originally inspired by the text-only unsupervised MT (UMT) BIBREF8 , BIBREF9 , BIBREF0 , investigating whether it is possible to train a general MT system without any form of supervision. As BIBREF0 discussed, the text-only UMT is fundamentally an ill-posed problem, since there are potentially many ways to associate target with source sentences. Intuitively, since the visual content and language are closely related, the image can play the role of a pivot “language\" to bridge the two languages without paralleled corpus, making the problem “more well-defined\" by reducing the problem to supervised learning. However, unlike the text translation involving word generation (usually a discrete distribution), the task to generate a dense image from a sentence description itself is a challenging problem BIBREF10 . High quality image generation usually depends on a complicated or large scale neural network architecture BIBREF11 , BIBREF12 , BIBREF13 . Thus, it is not recommended to utilize the image dataset as a pivot “language\" BIBREF14 . Motivated by the cycle-consistency BIBREF15 , we tackle the unsupervised translation with a multi-modal framework which includes two sequence-to-sequence encoder-decoder models and one shared image feature extractor. We don't introduce the adversarial learning via a discriminator because of the non-differentiable $\\arg \\max $ operation during word generation. With five modules in our framework, there are multiple data streaming paths in the computation graph, inducing the auto-encoding loss and cycle-consistency loss, in order to achieve the unsupervised translation." ] ]
439af1232a012fc4d94ef2ffe305dd405bee3888
What is the baseline used?
[ "Base , Base+Noise, Cleaning , Dynamic-CM , Global-CM, Global-ID-CM, Brown-CM , K-Means-CM" ]
[ [ "We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence problems for increasing cluster numbers. The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training.", "The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\\lambda \\in \\lbrace 0.3, 0.5, 0.7\\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set.", "We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss." ] ]
b6a6bdca6dee70f8fe6dd1cfe3bb2c5ff03b1605
Did they evaluate against a baseline?
[ "Yes" ]
[ [ "Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines." ] ]
8951fde01b1643fcb4b91e51f84e074ce3b69743
How do they evaluate their approach?
[ "They evaluate newly proposed models in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise" ]
[ [ "Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines." ] ]
38c74ab8292a94fc5a82999400ee9c06be19f791
How large is the corpus?
[ "It contains 106,350 documents" ]
[ [] ]
ff307b10e56f75de6a32e68e25a69899478a13e4
Which document classifiers do they experiment with?
[ "logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37" ]
[ [ "Demographic variations root in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment four different classification models including logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present the baseline results of both performance and fairness evaluations across the multilingual corpus." ] ]
16af38f7c4774637cf8e04d4b239d6d72f0b0a3a
How large is the dataset?
[ "over 104k documents" ]
[ [] ]
e9209ebf38c4ae4d93884f68c7b5b3444e0604f3
What evaluation metrics are used to measure diversity?
[ "Unanswerable" ]
[ [] ]
4319a13a6c4a9494ccb465509c9d4265f63dc9b5
How is some information lost in the RNN-based generation models?
[ "the generated sentences often did not include all desired attributes." ]
[ [ "Natural language generation (NLG) is an essential component of an SDS. Given a semantic representation (SR) consisting of a dialogue act and a set of slot-value pairs, the generator should produce natural language containing the desired information. Traditionally NLG was based on templates BIBREF3 , which produce grammatically-correct sentences that contain all desired information. However, the lack of variation of these sentences made these systems seem tedious and monotonic. Trainable generators BIBREF4 , BIBREF5 can generate several sentences for the same SR, but the dependence on pre-defined operations limits their potential. Corpus-based approaches BIBREF6 , BIBREF7 learn to generate natural language directly from data without pre-defined rules. However, they usually require alignment between the sentence and the SR. Recently, Wen et al. wensclstm15 proposed an RNN-based approach, which outperformed previous methods on several metrics. However, the generated sentences often did not include all desired attributes." ] ]
5be62428f973a08c303c66018b081ad140c559c8
What is the model accuracy?
[ "Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10." ]
[ [ "We adopt the leave-one-out evaluation protocol to evaluate the performance of our model and baselines. The leave-one-out evaluation protocol has been widely used in top-K recommendation tasks. In particular, we held the latest interaction of each user as the test set and used the remaining interactions for training. Each testing instance was paired with 99 randomly sampled negative instances. Each recommendation model ranks the 100 instances according to its predicted results. The ranked list is judged by Hit Ratio (HR) BIBREF49 and Normalized Discount Cumulative Gain (NDCG) BIBREF50 at the position 10. HR@10 is a recall-based metric, measuring the percentage of the testing item being correctly recommended in the top-10 position. NDCG@10 is a ranked evaluation metric which considers the position of the correct hit in the ranked result. Since both modules in our framework introduce randomness, we repeat each experiment 5 times with different weight initialization and randomly selecting neighbors. We report the average score of the best performance in each training process for both metrics to ensure the robustness of our framework.", "Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10. It improves HR@10 by 5.3% and NDCG@10 by 3% over the best baseline (i.e., $NAIS_{concat}$)." ] ]
8b11bc3a23932afe7d52c19deffd9dec4830f2e9
How do the authors define fake news?
[ "Unanswerable" ]
[ [] ]