title | author | year | abstract | pages | queryID | query | paperID | include |
---|---|---|---|---|---|---|---|---|
Improving Semantic Parsing via Answer Type Inference | Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng | 2016 | nan | 149--159 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | f3594f9d60c98cac88f9033c69c2b666713ed6d6 | 1 |
{NLP} Infrastructure for the {L}ithuanian Language | Vitkut{\.e}-Ad{\v{z}}gauskien{\.e}, Daiva and Utka, Andrius and Amilevi{\v{c}}ius, Darius and Krilavi{\v{c}}ius, Tomas | 2016 | The Information System for Syntactic and Semantic Analysis of the Lithuanian language (lith. Lietuvi{\k{u}} kalbos sintaksin{\.e}s ir semantin{\.e}s analiz{\.e}s informacin{\.e} sistema, LKSSAIS) is the first infrastructure for the Lithuanian language combining Lithuanian language tools and resources for diverse linguistic research and applications tasks. It provides access to the basic as well as advanced natural language processing tools and resources, including tools for corpus creation and management, text preprocessing and annotation, ontology building, named entity recognition, morphosyntactic and semantic analysis, sentiment analysis, etc. It is an important platform for researchers and developers in the field of natural language technology. | 2539--2542 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 3a1d0127f51e144c1c280c353e6b316681da7d4b | 0 |
Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing | Xiong, Wenhan and Wu, Jiawei and Lei, Deren and Yu, Mo and Chang, Shiyu and Guo, Xiaoxiao and Wang, William Yang | 2019 | Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance. Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types. To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities. On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module is able to achieve a much higher recall score while maintaining a high-level precision. Specifically, it achieves a 15.3{\%} relative F1 improvement and also less inconsistency in the outputs. We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types. | 773--784 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | a0713d945b2e5c2bdeeba68399c8ac6ea84e0ca6 | 1 |
A free/open-source rule-based machine translation system for {C}rimean {T}atar to {T}urkish | G{\"o}k{\i}rmak, Memduh and Tyers, Francis and Washington, Jonathan | 2019 | nan | 24--31 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | a3c842456422ed20b8a78520710e414c9d51f756 | 0 |
Prompt-learning for Fine-grained Entity Typing | Ding, Ning and Chen, Yulin and Han, Xu and Xu, Guangwei and Wang, Xiaobin and Xie, Pengjun and Zheng, Haitao and Liu, Zhiyuan and Li, Juanzi and Kim, Hong-Gee | 2022 | As an effective approach to adapting pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot, and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on four fine-grained entity typing benchmarks under fully supervised, few-shot, and zero-shot settings show the effectiveness of the prompt-learning paradigm and further make a powerful alternative to vanilla fine-tuning. | 6888--6901 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | bf722dc893ddaad5045fca5646212ec3badf3c5a | 1 |
Rethinking Positional Encoding in Tree Transformer for Code Representation | Peng, Han and Li, Ge and Zhao, Yunfei and Jin, Zhi | 2022 | Transformers are now widely used in code representation, and several recent works further develop tree Transformers to capture the syntactic structure in source code. Specifically, novel tree positional encodings have been proposed to incorporate inductive bias into Transformer. In this work, we propose a novel tree Transformer encoding node positions based on our new description method for tree structures. Technically, local and global soft bias shown in previous works is both introduced as positional encodings of our Transformer model. Our model finally outperforms strong baselines on code summarization and completion tasks across two languages, demonstrating our model{'}s effectiveness. Besides, extensive experiments and ablation study shows that combining both local and global paradigms is still helpful in improving model performance. We release our code at \url{https://github.com/AwdHanPeng/TreeTransformer}. | 3204--3214 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 20333c34f892c8e0c2f4e6c37295a8b43ef35c02 | 0 |
Fine-Grained Entity Typing via Hierarchical Multi Graph Convolutional Networks | Jin, Hailong and Hou, Lei and Li, Juanzi and Dong, Tiansi | 2019 | This paper addresses the problem of inferring the fine-grained type of an entity from a knowledge base. We convert this problem into the task of graph-based semi-supervised classification, and propose Hierarchical Multi Graph Convolutional Network (HMGCN), a novel Deep Learning architecture to tackle this problem. We construct three kinds of connectivity matrices to capture different kinds of semantic correlations between entities. A recursive regularization is proposed to model the subClassOf relations between types in given type hierarchy. Extensive experiments with two large-scale public datasets show that our proposed method significantly outperforms four state-of-the-art methods. | 4969--4978 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 074e3497b03366caf2e17acd59fb1c52ccf8be55 | 1 |
{GAL}s: 基於對抗式學習之整列式摘要法 ({GAL}s: A {GAN}-based Listwise Summarizer) | Kuo, Chia-Chih and Chen, Kuan-Yu | 2019 | nan | 15--24 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | e20fa739fd0b4f000966f37801ddb3a685ce0c5c | 0 |
Type-Aware Distantly Supervised Relation Extraction with Linked Arguments | Koch, Mitchell and Gilmer, John and Soderland, Stephen and Weld, Daniel S. | 2014 | nan | 1891--1901 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | bd6b372f343ca16cbf97981967bf896bf2e351fd | 1 |
Dive deeper: Deep Semantics for Sentiment Analysis | Jadhav, Nikhilkumar and Bhattacharyya, Pushpak | 2014 | nan | 113--118 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 830746182677f9c1dc27c87222e01a0bb0ca4dab | 0 |
Can {NLI} Models Verify {QA} Systems{'} Predictions? | Chen, Jifan and Choi, Eunsol and Durrett, Greg | 2021 | To build robust question answering systems, we need the ability to verify whether answers to questions are truly correct, not just {``}good enough{''} in the context of imperfect QA datasets. We explore the use of natural language inference (NLI) as a way to achieve this goal, as NLI inherently requires the premise (document context) to contain all necessary information to support the hypothesis (proposed answer to the question). We leverage large pre-trained models and recent prior datasets to construct powerful question conversion and decontextualization modules, which can reformulate QA instances as premise-hypothesis pairs with very high reliability. Then, by combining standard NLI datasets with NLI examples automatically derived from QA training data, we can train NLI models to evaluate QA models{'} proposed answers. We show that our approach improves the confidence estimation of a QA model across different domains, evaluated in a selective QA setting. Careful manual analysis over the predictions of our NLI model shows that it can further identify cases where the QA model produces the right answer for the wrong reason, i.e., when the answer sentence cannot address all aspects of the question. | 3841--3854 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | e3bba08dc07c5f1372b78450990ba0ef305a834c | 1 |
Adaptation de ressources en langue anglaise pour interroger des donn{\'e}es tabulaires en fran{\c{c}}ais (Adaptation of resources in {E}nglish to query {F}rench tabular data) | Blandin, Alexis | 2021 | Recent developments in deep neural learning approaches have enabled very significant advances in natural-language querying of information systems. For French, however, the available resources only cover queries over data stored as text, whereas today the majority of the data used in companies is stored in tabular form. It is therefore worth evaluating whether the corresponding English-language resources (tabular datasets and models) can be adapted to French while maintaining good results. | 47--54 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 82e361c8f77753391e1f5245f346d13547e13309 | 0 |
Design Challenges for Entity Linking | Ling, Xiao and Singh, Sameer and Weld, Daniel S. | 2015 | Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called Vinculum, for entity linking. We conduct an extensive evaluation on nine data sets, comparing Vinculum with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence. | 315--328 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | c6b53dd64d79a59f49f261baac8d2581a29ca06a | 1 |
Paraphrase Identification and Semantic Similarity in {T}witter with Simple Features | Vo, Ngoc Phuoc An and Magnolini, Simone and Popescu, Octavian | 2015 | nan | 10--19 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | db11cb39e978d1423bdc5bbdeb29706d41368cca | 0 |
Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation | Poliak, Adam and Haldar, Aparajita and Rudinger, Rachel and Hu, J. Edward and Pavlick, Ellie and White, Aaron Steven and Van Durme, Benjamin | 2018 | We present a large-scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning. The collection results from recasting 13 existing datasets from 7 semantic phenomena into a common NLI structure, resulting in over half a million labeled context-hypothesis pairs in total. We refer to our collection as the DNC: Diverse Natural Language Inference Collection. The DNC is available online at \url{https://www.decomp.net}, and will grow over time as additional resources are recast and added from novel sources. | 67--81 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | a609671e92b03a421236f2873b571159d7c2515c | 1 |
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback | Lawrence, Carolin and Riezler, Stefan | 2018 | Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data. | 1820--1830 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 669e6be7cd92ba6bda39d9e3a030e72fde07a418 | 0 |
Improving Fine-grained Entity Typing with Entity Linking | Dai, Hongliang and Du, Donghong and Li, Xin and Song, Yangqiu | 2019 | Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require to understand the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrates the effectiveness of our approach. On both datasets, it achieves more than 5{\%} absolute strict accuracy improvement over the state of the art. | 6210--6215 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | b74b272c7fe881614f3eb8c2504b037439571eec | 1 |
Hitachi at {MRP} 2019: Unified Encoder-to-Biaffine Network for Cross-Framework Meaning Representation Parsing | Koreeda, Yuta and Morio, Gaku and Morishita, Terufumi and Ozaki, Hiroaki and Yanai, Kohsuke | 2019 | This paper describes the proposed system of the Hitachi team for the Cross-Framework Meaning Representation Parsing (MRP 2019) shared task. In this shared task, the participating systems were asked to predict nodes, edges and their attributes for five frameworks, each with different order of {``}abstraction{''} from input tokens. We proposed a unified encoder-to-biaffine network for all five frameworks, which effectively incorporates a shared encoder to extract rich input features, decoder networks to generate anchorless nodes in UCCA and AMR, and biaffine networks to predict edges. Our system was ranked fifth with the macro-averaged MRP F1 score of 0.7604, and outperformed the baseline unified transition-based MRP. Furthermore, post-evaluation experiments showed that we can boost the performance of the proposed system by incorporating multi-task learning, whereas the baseline could not. These imply efficacy of incorporating the biaffine network to the shared architecture for MRP and that learning heterogeneous meaning representations at once can boost the system performance. | 114--126 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 06835b411a20e424869fd3a6ce5c35a7082d5732 | 0 |
Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss | Xu, Peng and Barbosa, Denilson | 2018 | The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of cross-entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task. | 16--25 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 008405f7ee96677ac23cc38be360832af2d9f437 | 1 |
{UKP}-Athene: Multi-Sentence Textual Entailment for Claim Verification | Hanselowski, Andreas and Zhang, Hao and Li, Zile and Sorokin, Daniil and Schiller, Benjamin and Schulz, Claudia and Gurevych, Iryna | 2018 | The Fact Extraction and VERification (FEVER) shared task was launched to support the development of systems able to verify claims by extracting supporting or refuting facts from raw text. The shared task organizers provide a large-scale dataset for the consecutive steps involved in claim verification, in particular, document retrieval, fact extraction, and claim classification. In this paper, we present our claim verification pipeline approach, which, according to the preliminary results, scored third in the shared task, out of 23 competing systems. For the document retrieval, we implemented a new entity linking approach. In order to be able to rank candidate facts and classify a claim on the basis of several selected facts, we introduce two extensions to the Enhanced LSTM (ESIM). | 103--108 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 0c2790d4894940a5cf9084b09788a6c65617c209 | 0 |
Embedding Methods for Fine Grained Entity Type Classification | Yogatama, Dani and Gillick, Daniel and Lazic, Nevena | 2015 | nan | 291--296 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | cd51e6faf377104269ba1e905ce430650677155c | 1 |
A Pilot Experiment on Exploiting Translations for Literary Studies on Kafka{'}s {``}Verwandlung{''} | Cap, Fabienne and R{\"o}siger, Ina and Kuhn, Jonas | 2015 | nan | 48--57 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 1bf9dc4e79fb66afc9b9037ffaa71d2847191477 | 0 |
{T}axo{C}lass: Hierarchical Multi-Label Text Classification Using Only Class Names | Shen, Jiaming and Qiu, Wenda and Meng, Yu and Shang, Jingbo and Ren, Xiang and Han, Jiawei | 2021 | Hierarchical multi-label text classification (HMTC) aims to tag each document with a set of classes from a taxonomic class hierarchy. Most existing HMTC methods train classifiers using massive human-labeled documents, which are often too costly to obtain in real-world applications. In this paper, we explore to conduct HMTC based on only class surface names as supervision signals. We observe that to perform HMTC, human experts typically first pinpoint a few most essential classes for the document as its {``}core classes{''}, and then check core classes{'} ancestor classes to ensure the coverage. To mimic human experts, we propose a novel HMTC framework, named TaxoClass. Specifically, TaxoClass (1) calculates document-class similarities using a textual entailment model, (2) identifies a document{'}s core classes and utilizes confident core classes to train a taxonomy-enhanced classifier, and (3) generalizes the classifier via multi-label self-training. Our experiments on two challenging datasets show TaxoClass can achieve around 0.71 Example-F1 using only class names, outperforming the best previous method by 25{\%}. | 4239--4249 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 15e100120f080b9ef4230b4cbb8e107b76e2b839 | 1 |
Self-supervised Regularization for Text Classification | Zhou, Meng and Li, Zechen and Xie, Pengtao | 2021 | Text classification is a widely studied problem and has broad applications. In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL). SSL (Devlin et al., 2019a) is an unsupervised learning approach that defines auxiliary tasks on input data without using any human-provided labels and learns data representations by solving these auxiliary tasks. In SSL-Reg, a supervised classification task and an unsupervised SSL task are performed simultaneously. The SSL task is unsupervised, which is defined purely on input texts without using any human-provided labels. Training a model using an SSL task can prevent the model from being overfitted to a limited number of class labels in the classification task. Experiments on 17 text classification datasets demonstrate the effectiveness of our proposed method. Code is available at \url{https://github.com/UCSD-AI4H/SSReg}. | 641--656 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 7da6d909368bbbaa355a129b0c2272ebdbd16a4c | 0 |
Grounding {`}Grounding{'} in {NLP} | Chandu, Khyathi Raghavi and Bisk, Yonatan and Black, Alan W | 2021 | nan | 4283--4305 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 2550fafc0cbd8bbf7aadd864ac569596d33db038 | 1 |
Multi-Turn Target-Guided Topic Prediction with {M}onte {C}arlo Tree Search | Yang, Jingxuan and Li, Si and Guo, Jun | 2021 | This paper concerns the problem of topic prediction in target-guided conversation, which requires the system to proactively and naturally guide the topic thread of the conversation, ending up with achieving a designated target subject. Existing studies usually resolve the task with a sequence of single-turn topic prediction. Greedy decision is made at each turn since it is impossible to explore the topics in future turns under the single-turn topic prediction mechanism. As a result, these methods often suffer from generating sub-optimal topic threads. In this paper, we formulate the target-guided conversation as a problem of multi-turn topic prediction and model it under the framework of Markov decision process (MDP). To alleviate the problem of generating sub-optimal topic thread, Monte Carlo tree search (MCTS) is employed to improve the topic prediction by conducting long-term planning. At online topic prediction, given a target and a start utterance, our proposed MM-TP (MCTS-enhanced MDP for Topic Prediction) firstly performs MCTS to enhance the policy for predicting the topic for each turn. Then, two retrieval models are respectively used to generate the responses of the agent and the user. Quantitative evaluation and qualitative study showed that MM-TP significantly improved the state-of-the-art baselines. | 324--334 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 8143632592e45f57d59257ac0e7cb9cb60634907 | 0 |
Fine-grained Entity Typing via Label Reasoning | Liu, Qing and Lin, Hongyu and Xiao, Xinyan and Han, Xianpei and Sun, Le and Wu, Hua | 2021 | Conventional entity typing approaches are based on independent classification paradigms, which make them difficult to recognize inter-dependent, long-tailed and fine-grained entity types. In this paper, we argue that the implicitly entailed extrinsic and intrinsic dependencies between labels can provide critical knowledge to tackle the above challenges. To this end, we propose Label Reasoning Network(LRN), which sequentially reasons fine-grained entity labels by discovering and exploiting label dependencies knowledge entailed in the data. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels, which can effectively model, learn and reason complex label dependencies in a sequence-to-set, end-to-end manner. Experiments show that LRN achieves the state-of-the-art performance on standard ultra fine-grained entity typing benchmarks, and can also resolve the long tail label problem effectively. | 4611--4622 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 7f30821267a11138497107d947ea39726e4b7fbd | 1 |
Learning Disentangled Latent Topics for {T}witter Rumour Veracity Classification | Dougrez-Lewis, John and Liakata, Maria and Kochkina, Elena and He, Yulan | 2021 | nan | 3902--3908 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 6260975a9a50ab68f136ab79f4a912e253aa2680 | 0 |
Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model | Dai, Hongliang and Song, Yangqiu and Wang, Haixun | 2021 | Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping. | 1790--1799 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 70b49a024787d3ad374fb78dc87e3ba2b5e16566 | 1 |
{MRF}-Chat: Improving Dialogue with {M}arkov Random Fields | Grover, Ishaan and Huggins, Matthew and Breazeal, Cynthia and Park, Hae Won | 2021 | Recent state-of-the-art approaches in open-domain dialogue include training end-to-end deep-learning models to learn various conversational features like emotional content of response, symbolic transitions of dialogue contexts in a knowledge graph and persona of the agent and the user, among others. While neural models have shown reasonable results, modelling the cognitive processes that humans use when conversing with each other may improve the agent{'}s quality of responses. A key element of natural conversation is to tailor one{'}s response such that it accounts for concepts that the speaker and listener may or may not know and the contextual relevance of all prior concepts used in conversation. We show that a rich representation and explicit modeling of these psychological processes can improve predictions made by existing neural network models. In this work, we propose a novel probabilistic approach using Markov Random Fields (MRF) to augment existing deep-learning methods for improved next utterance prediction. Using human and automatic evaluations, we show that our augmentation approach significantly improves the performance of existing state-of-the-art retrieval models for open-domain conversational agents. | 4925--4936 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 26a15c0e1becb323f40a616003a15db48ea1581a | 0 |
Modeling Fine-Grained Entity Types with Box Embeddings | Onoe, Yasumasa and Boratko, Michael and McCallum, Andrew and Durrett, Greg | 2021 | Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types{'} complex interdependencies. We study the ability of box embeddings, which embed concepts as d-dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology. Our model represents both types and entity mentions as boxes. Each mention and its context are fed into a BERT-based model to embed that mention in our box space; essentially, this model leverages typological clues present in the surface text to hypothesize a type representation for the mention. Box containment can then be used to derive both the posterior probability of a mention exhibiting a given type and the conditional probability relations between types themselves. We compare our approach with a vector-based typing model and observe state-of-the-art performance on several entity typing benchmarks. In addition to competitive typing performance, our box-based model shows better performance in prediction consistency (predicting a supertype and a subtype together) and confidence (i.e., calibration), demonstrating that the box-based model captures the latent type hierarchies better than the vector-based model does. | 2051--2064 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 176e3cbe3141c8b874df663711dca9b7470b8243 | 1 |
The Low-Dimensional Linear Geometry of Contextualized Word Representations | Hernandez, Evan and Andreas, Jacob | 2021 | Black-box probing models can reliably extract linguistic features like tense, number, and syntactic role from pretrained word representations. However, the manner in which these features are encoded in representations remains poorly understood. We present a systematic study of the linear geometry of contextualized word representations in ELMO and BERT. We show that a variety of linguistic features (including structured dependency relationships) are encoded in low-dimensional subspaces. We then refine this geometric picture, showing that there are hierarchical relations between the subspaces encoding general linguistic categories and more specific ones, and that low-dimensional feature encodings are distributed rather than aligned to individual neurons. Finally, we demonstrate that these linear subspaces are causally related to model behavior, and can be used to perform fine-grained manipulation of BERT{'}s output distribution. | 82--93 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 761f1607b380df54546fd2114b458aa19109cd3d | 0 |
Syntax-Enhanced Pre-trained Model | Xu, Zenan and Guo, Daya and Tang, Duyu and Su, Qinliang and Shou, Linjun and Gong, Ming and Zhong, Wanjun and Quan, Xiaojun and Jiang, Daxin and Duan, Nan | 2021 | We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning stage, so that they suffer from discrepancy between the two stages. Such a problem would lead to the necessity of having human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both pre-training and fine-tuning stages. Our model is based on Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks, including relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains compared to local head relations between contiguous tokens. | 5412--5422 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 634e8fbeba53d45828846dd541ce0a0078c57b68 | 1 |
Enhancing Transformers with Gradient Boosted Decision Trees for {NLI} Fine-Tuning | Minixhofer, Benjamin and Gritta, Milan and Iacobacci, Ignacio | 2021 | nan | 303--313 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 6d2773a788067dcce7ac6a82019649528d341b4e | 0 |
Interpretable Entity Representations through Large-Scale Typing | Onoe, Yasumasa and Durrett, Greg | 2020 | In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. The embeddings produced this way are effective when fed into downstream models, but they require end-task fine-tuning and are fundamentally difficult to interpret. In this paper, we present an approach to creating entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model{'}s decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through a small number of rules to incorporate domain knowledge and improve performance. | 612--624 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 782a50a48ba5d32839631254285d989bfadfd193 | 1 |
基于对话约束的回复生成研究(Research on Response Generation via Dialogue Constraints) | Guan, Mengyu and Wang, Zhongqing and Li, Shoushan and Zhou, Guodong | 2020 | Existing dialogue systems tend to generate meaningless safe replies such as {``}okay{''} and {``}I don{'}t know{''}. In daily conversation, interlocutors usually discuss a specific topic, and every utterance carries a clear sentiment and intent. This paper therefore proposes a response generation model based on dialogue constraints: building on a Seq2Seq model, it incorporates recognition of the dialogue{'}s topic, sentiment, and intent, and constrains the topic, sentiment, and intent of the generated response, so as to produce replies that have reasonable sentiment and intent and are relevant to the dialogue topic. Experiments show that the proposed method effectively improves the quality of generated replies. | 225--235 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | a7a965d5d8e6addd2cef51201fbab44134ca3c3f | 0 |
Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start | Yin, Wenpeng and
Rajani, Nazneen Fatema and
Radev, Dragomir and
Socher, Richard and
Xiong, Caiming | 2,020 | A standard way to address different NLP problems is by first constructing a problem-specific dataset, then building a model to fit this dataset. To build the ultimate artificial intelligence, we desire a single machine that can handle diverse new problems, for which task-specific annotations are limited. We bring up textual entailment as a unified solver for such NLP problems. However, current research of textual entailment has not spilled much ink on the following questions: (i) How well does a pretrained textual entailment system generalize across domains with only a handful of domain-specific examples? and (ii) When is it worth transforming an NLP task into textual entailment? We argue that the transforming is unnecessary if we can obtain rich annotations for this task. Textual entailment really matters particularly when the target NLP task has insufficient annotations. Universal NLP can be probably achieved through different routines. In this work, we introduce Universal Few-shot textual Entailment (UFO-Entail). We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream NLP tasks such as question answering and coreference resolution when the end-task annotations are limited. | 8229--8239 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | e2d38543bd3cf813c63df336b21b003156ed48a8 | 1 |
The {N}iu{T}rans System for {WNGT} 2020 Efficiency Task | Hu, Chi and
Li, Bei and
Li, Yinqiao and
Lin, Ye and
Li, Yanyang and
Wang, Chenglong and
Xiao, Tong and
Zhu, Jingbo | 2,020 | This paper describes the submissions of the NiuTrans Team to the WNGT 2020 Efficiency Shared Task. We focus on the efficient implementation of deep Transformer models (Wang et al., 2019; Li et al., 2019) using NiuTensor, a flexible toolkit for NLP tasks. We explored the combination of deep encoder and shallow decoder in Transformer models via model compression and knowledge distillation. The neural machine translation decoding also benefits from FP16 inference, attention caching, dynamic batching, and batch pruning. Our systems achieve promising results in both translation quality and efficiency, e.g., our fastest system can translate more than 40,000 tokens per second with an RTX 2080 Ti while maintaining 42.9 BLEU on newstest2018. | 204--210 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 18f86146a095a30d57569197c04f807bb10064e8 | 0 |
{FASTMATCH}: Accelerating the Inference of {BERT}-based Text Matching | Pang, Shuai and
Ma, Jianqiang and
Yan, Zeyu and
Zhang, Yang and
Shen, Jianping | 2,020 | Recently, pre-trained language models such as BERT have shown state-of-the-art accuracies in text matching. When being applied to IR (or QA), the BERT-based matching models need to online calculate the representations and interactions for all query-candidate pairs. The high inference cost has prohibited the deployments of BERT-based matching models in many practical applications. To address this issue, we propose a novel BERT-based text matching model, in which the representations and the interactions are decoupled. Then, the representations of the candidates can be calculated and stored offline, and directly retrieved during the online matching phase. To conduct the interactions and generate final matching scores, a lightweight attention network is designed. Experiments based on several large scale text matching datasets show that the proposed model, called FASTMATCH, can achieve up to 100X speed-up to BERT and RoBERTa at the online matching phase, while keeping more up to 98.7{\%} of the performance. | 6459--6469 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | e349b4f2061ba72cd693787d933e8fdb42eea23b | 1 |
A Multi-task Learning Framework for Opinion Triplet Extraction | Zhang, Chen and
Li, Qiuchi and
Song, Dawei and
Wang, Benyou | 2,020 | The state-of-the-art Aspect-based Sentiment Analysis (ABSA) approaches are mainly based on either detecting aspect terms and their corresponding sentiment polarities, or co-extracting aspect and opinion terms. However, the extraction of aspect-sentiment pairs lacks opinion terms as a reference, while co-extraction of aspect and opinion terms would not lead to meaningful pairs without determining their sentiment dependencies. To address the issue, we present a novel view of ABSA as an opinion triplet extraction task, and propose a multi-task learning framework to jointly extract aspect terms and opinion terms, and simultaneously parses sentiment dependencies between them with a biaffine scorer. At inference phase, the extraction of triplets is facilitated by a triplet decoding method based on the above outputs. We evaluate the proposed framework on four SemEval benchmarks for ASBA. The results demonstrate that our approach significantly outperforms a range of strong baselines and state-of-the-art approaches. | 819--828 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 0c708c78e7c4b1d94b5c64f3469a58770995dc4d | 0 |
Hierarchical Entity Typing via Multi-level Learning to Rank | Chen, Tongfei and
Chen, Yunmo and
Van Durme, Benjamin | 2,020 | We propose a novel method for hierarchical entity classification that embraces ontological structure at both training and during prediction. At training, our novel multi-level learning-to-rank loss compares positive types against negative siblings according to the type tree. During prediction, we define a coarse-to-fine decoder that restricts viable candidates at each level of the ontology based on already predicted parent type(s). Our approach significantly outperform prior work on strict accuracy, demonstrating the effectiveness of our method. | 8465--8475 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | d4b4484308fa6efd821ad8084cc9bde9d3f211b0 | 1 |
Automated Assessment of Noisy Crowdsourced Free-text Answers for {H}indi in Low Resource Setting | Agarwal, Dolly and
Gupta, Somya and
Baghel, Nishant | 2,020 | The requirement of performing assessments continually on a larger scale necessitates the implementation of automated systems for evaluation of the learners{'} responses to free-text questions. We target children of age group 8-14 years and use an ASR integrated assessment app to crowdsource learners{'} responses to free text questions in Hindi. The app helped collect 39641 user answers to 35 different questions of Science topics. Since the users are young children from rural India and may not be well-equipped with technology, it brings in various noise types in the answers. We describe these noise types and propose a preprocessing pipeline to denoise user{'}s answers. We showcase the performance of different similarity metrics on the noisy and denoised versions of user and model answers. Our findings have large-scale applications for automated answer assessment for school children in India in low resource settings. | 122--131 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 60bdd5a0b82b3c9b399868b3fc16d192171951f4 | 0 |
Description-Based Zero-shot Fine-Grained Entity Typing | Obeidat, Rasha and
Fern, Xiaoli and
Shahbazi, Hamed and
Tadepalli, Prasad | 2,019 | Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity tying dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data. | 807--814 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 51b958dd76a6aefcd521ec0f503c3e334f711362 | 1 |
Lattice-Based Transformer Encoder for Neural Machine Translation | Xiao, Fengshun and
Li, Jiangtong and
Zhao, Hai and
Wang, Rui and
Chen, Kehai | 2,019 | Neural machine translation (NMT) takes deterministic sequences for source representations. However, either word-level or subword-level segmentations have multiple choices to split a source sequence with different word segmentors or different subword vocabulary sizes. We hypothesize that the diversity in segmentations may affect the NMT performance. To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representation in an automatic way during training. We propose two methods: 1) lattice positional encoding and 2) lattice-aware self-attention. These two methods can be used together and show complementary to each other to further improve translation performance. Experiment results show superiorities of lattice-based encoders in word-level and subword-level representations over conventional Transformer encoder. | 3090--3097 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 0ab0fda8774c303be8f8f8c8f684a890dcf5d455 | 0 |
Learning to Denoise Distantly-Labeled Data for Entity Typing | Onoe, Yasumasa and
Durrett, Greg | 2,019 | Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique. In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training. Our denoising approach consists of two parts. First, a filtering function discards examples from the distantly labeled data that are wholly unusable. Second, a relabeling function repairs noisy labels for the retained examples. Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set. We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance. Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data. | 2407--2417 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | dc138300b87f5bfccec609644d5edc08c4d783e9 | 1 |
Unbounded Stress in Subregular Phonology | Hao, Yiding and
Andersson, Samuel | 2,019 | This paper situates culminative unbounded stress systems within the subregular hierarchy for functions. While Baek (2018) has argued that such systems can be uniformly understood as input tier-based strictly local constraints, we show here that default-to-opposite-side and default-to-same-side stress systems belong to distinct subregular classes when they are viewed as functions that assign primary stress to underlying forms. While the former system can be captured by input tier-based input strictly local functions, a subsequential function class that we define here, the latter system is not subsequential, though it is weakly deterministic according to McCollum et al.{'}s (2018) non-interaction criterion. Our results motivate the extension of recently proposed subregular language classes to subregular functions and argue in favor of McCollum et al{'}s definition of weak determinism over that of Heinz and Lai (2013). | 135--143 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 7bcfa8384622e15a40f25e1b388485f8c09c1aec | 0 |
Ultra-Fine Entity Typing | Choi, Eunsol and
Levy, Omer and
Choi, Yejin and
Zettlemoyer, Luke | 2,018 | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict ultra-fine types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. | 87--96 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 4157834ed2d2fea6b6f652a72a9d0487edbc9f57 | 1 |
System Description of Supervised and Unsupervised Neural Machine Translation Approaches from {``}{NL} Processing{''} Team at {D}eep{H}ack.{B}abel Task | Gusev, Ilya and
Oboturov, Artem | 2,018 | nan | 45--52 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | f3537ff7aeb8e406369804b4c29b5f05ba5b1473 | 0 |
Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures | Vilnis, Luke and
Li, Xiang and
Murty, Shikhar and
McCallum, Andrew | 2,018 | Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anti-correlation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model. | 263--272 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | c6e0d70a81a83143d1f3b220d0941843ca03ca71 | 1 |
Multi-Task Neural Models for Translating Between Styles Within and Across Languages | Niu, Xing and
Rao, Sudha and
Carpuat, Marine | 2,018 | Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on style-annotated translation examples. | 1008--1021 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | e96b79eeb009ddceff50b4e864b1ee2edaf3ca6c | 0 |
Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework | White, Aaron Steven and
Rastogi, Pushpendre and
Duh, Kevin and
Van Durme, Benjamin | 2,017 | We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE). We present a general strategy to automatically generate one or more sentential hypotheses based on an input sentence and pre-existing manual semantic annotations. The resulting suite of datasets enables us to probe a statistical RTE model{'}s performance on different aspects of semantics. We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model. | 996--1005 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 4546b7207e1a87c205bdf45c70f7b06fb3c38e21 | 1 |
Enhancing Machine Translation of Academic Course Catalogues with Terminological Resources | Scansani, Randy and
Bernardini, Silvia and
Ferraresi, Adriano and
Gaspari, Federico and
Soffritti, Marcello | 2,017 | This paper describes an approach to translating course unit descriptions from Italian and German into English, using a phrase-based machine translation (MT) system. The genre is very prominent among those requiring translation by universities in European countries in which English is a non-native language. For each language combination, an in-domain bilingual corpus including course unit and degree program descriptions is used to train an MT engine, whose output is then compared to a baseline engine trained on the Europarl corpus. In a subsequent experiment, a bilingual terminology database is added to the training sets in both engines and its impact on the output quality is evaluated based on BLEU and post-editing score. Results suggest that the use of domain-specific corpora boosts the engines quality for both language combinations, especially for German-English, whereas adding terminological resources does not seem to bring notable benefits. | 1--10 | ef25f1586cf6630f4a30d41ee5a2848b064dede3 | Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference | 37c67bbb4c3fd0c3b9bd35fb62671e0f5ddfe4a1 | 0 |
Improving Semantic Parsing via Answer Type Inference | Yavuz, Semih and
Gur, Izzeddin and
Su, Yu and
Srivatsa, Mudhakar and
Yan, Xifeng | 2,016 | nan | 149--159 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | f3594f9d60c98cac88f9033c69c2b666713ed6d6 | 1 |
Integrating Optical Character Recognition and Machine Translation of Historical Documents | Afli, Haithem and
Way, Andy | 2,016 | Machine Translation (MT) plays a critical role in expanding capacity in the translation industry. However, many valuable documents, including digital documents, are encoded in formats that are not accessible to machine processing (e.g., historical or legal documents). Such documents must be passed through a process of Optical Character Recognition (OCR) to render the text suitable for MT. No matter how good the OCR is, this process introduces recognition errors, which often render MT ineffective. In this paper, we propose a new OCR-to-MT framework based on adding a new OCR error correction module to enhance the overall quality of translation. Experimentation shows that our new correction system, based on the combination of Language Modeling and Translation methods, outperforms the baseline system by nearly 30{\%} relative improvement. | 109--116 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | 82ccd8e2fa7b3e49d113e3abe194ecd4aa1e88f4 | 0 |
An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge | Hao, Yanchao and
Zhang, Yuanzhe and
Liu, Kang and
He, Shizhu and
Liu, Zhanyi and
Wu, Hua and
Zhao, Jun | 2,017 | With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access this substantial knowledge. Meanwhile, as neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put much emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy makes it difficult to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via a cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviate the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach. | 221--231 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | e9287b896a1c7360567915c3932b8df1ee4a81f7 | 1 |
Learning User Embeddings from Emails | Song, Yan and
Lee, Chia-Jung | 2,017 | Many important email-related tasks, such as email classification or search, rely heavily on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success in representing textual messages, creating quality user representations from emails has been overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on the Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of the embeddings in a real-world application, we carry out an auto-foldering task in which the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present, across multiple settings. | 733--738 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | 4e6bd3eeb15413a22cb611be2770a632b31a1951 | 0 |
Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases | Chen, Yu and
Wu, Lingfei and
Zaki, Mohammed J. | 2,019 | When answering natural language questions over knowledge bases (KBs), different question components and KB aspects play different roles. However, most existing embedding-based methods for knowledge base question answering (KBQA) ignore the subtle inter-relationships between the question and the KB (e.g., entity types, relation paths and context). In this work, we propose to directly model the two-way flow of interactions between the questions and the KB via a novel Bidirectional Attentive Memory Network, called BAMnet. Requiring no external resources and only very few hand-crafted features, on the WebQuestions benchmark, our method significantly outperforms existing information-retrieval based methods, and remains competitive with (hand-crafted) semantic parsing based methods. Also, since we use attention mechanisms, our method offers better interpretability compared to other baselines. | 2913--2923 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | c4cc66e3652a6c3bb4d1737fea2f50bdb3fe3a70 | 1 |
Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings | Artetxe, Mikel and
Schwenk, Holger | 2,019 | Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC mining task and the UN reconstruction task by more than 10 F1 and 30 precision points, respectively. Filtering the English-German ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version. | 3197--3203 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | 30b09a853ab72e53078f1feefe6de5a847a2b169 | 0 |
Data Recombination for Neural Semantic Parsing | Jia, Robin and
Liang, Percy | 2,016 | nan | 12--22 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | b7eac64a8410976759445cce235469163d23ee65 | 1 |
Global Open Resources and Information for Language and Linguistic Analysis ({GORILLA}) | Cavar, Damir and
Cavar, Malgorzata and
Moe, Lwin | 2,016 | The infrastructure Global Open Resources and Information for Language and Linguistic Analysis (GORILLA) was created as a resource that provides a bridge between disciplines such as documentary, theoretical, and corpus linguistics, speech and language technologies, and digital language archiving services. GORILLA is designed as an interface between digital language archive services and language data producers. It addresses various problems of common digital language archive infrastructures. At the same time it serves the speech and language technology communities by providing a platform to create and share speech and language data from low-resourced and endangered languages. It hosts an initial collection of language models for speech and natural language processing (NLP), and technologies or software tools for corpus creation and annotation. GORILLA is designed to address the Transcription Bottleneck in language documentation, and, at the same time to provide solutions to the general Language Resource Bottleneck in speech and language technologies. It does so by facilitating the cooperation between documentary and theoretical linguistics, and speech and language technologies research and development, in particular for low-resourced and endangered languages. | 4484--4491 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | dae080f583ade82375888342fad6af00b4dfaa67 | 0 |
Question Answering on {F}reebase via Relation Extraction and Textual Evidence | Xu, Kun and
Reddy, Siva and
Feng, Yansong and
Huang, Songfang and
Zhao, Dongyan | 2,016 | nan | 2326--2336 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | e3919e94c811fd85f5038926fa354619861674f9 | 1 |
A {H}ungarian Sentiment Corpus Manually Annotated at Aspect Level | Szab{\'o}, Martina Katalin and
Vincze, Veronika and
Simk{\'o}, Katalin Ilona and
Varga, Viktor and
Hangya, Viktor | 2,016 | In this paper we present a Hungarian sentiment corpus manually annotated at the aspect level. Our corpus consists of Hungarian opinion texts written about different types of products. The main aim of creating the corpus was to produce an appropriate database providing possibilities for developing text mining software tools. The corpus is a unique Hungarian database: to the best of our knowledge, no digitized Hungarian sentiment corpus annotated at the level of fragments and targets has been made so far. In addition, many language elements of the corpus that are relevant from the point of view of sentiment analysis received distinct types of tags in the annotation. In this paper, on the one hand, we present the method of annotation and discuss the difficulties concerning the text annotation process. On the other hand, we provide some quantitative and qualitative data on the corpus. We conclude with a description of the applicability of the corpus. | 2873--2878 | 4c75564731f564e78cafc76e18739bbcf4fceeb2 | Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding | e627e852ca665fd2acc843807b61fc9a6a117a68 | 0 |
Improving Semantic Parsing via Answer Type Inference | Yavuz, Semih and
Gur, Izzeddin and
Su, Yu and
Srivatsa, Mudhakar and
Yan, Xifeng | 2,016 | nan | 149--159 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | f3594f9d60c98cac88f9033c69c2b666713ed6d6 | 1 |
Graph-Based Induction of Word Senses in {C}roatian | Bekavac, Marko and
{\v{S}}najder, Jan | 2,016 | Word sense induction (WSI) seeks to induce senses of words from unannotated corpora. In this paper, we address the WSI task for the Croatian language. We adopt the word clustering approach based on co-occurrence graphs, in which senses are taken to correspond to strongly inter-connected components of co-occurring words. We experiment with a number of graph construction techniques and clustering algorithms, and evaluate the sense inventories both as a clustering problem and extrinsically on a word sense disambiguation (WSD) task. In the cluster-based evaluation, Chinese Whispers algorithm outperformed Markov Clustering, yielding a normalized mutual information score of 64.3. In contrast, in WSD evaluation Markov Clustering performed better, yielding an accuracy of about 75{\%}. We are making available two induced sense inventories of 10,000 most frequent Croatian words: one coarse-grained and one fine-grained inventory, both obtained using the Markov Clustering algorithm. | 3014--3018 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | d0de3456691c9b0b311719cbd76d2df9ee060497 | 0 |
Paraphrase-Driven Learning for Open Question Answering | Fader, Anthony and
Zettlemoyer, Luke and
Etzioni, Oren | 2,013 | nan | 1608--1618 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | c0be2ac2f45681f1852fc1d298af5dceb85834f4 | 1 |
Proceedings of the International Conference Recent Advances in Natural Language Processing {RANLP} 2013 | nan | 2,013 | nan | nan | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | f24cc8bb580d451500e57fd1857d6f6907ac3140 | 0 |
Question Answering over {F}reebase with Multi-Column Convolutional Neural Networks | Dong, Li and
Wei, Furu and
Zhou, Ming and
Xu, Ke | 2,015 | nan | 260--269 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 1ef01e7bfab2041bc0c0a56a57906964df9fc985 | 1 |
{LORIA} System for the {WMT}15 Quality Estimation Shared Task | Langlois, David | 2,015 | nan | 323--329 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | cb4ee7bf3069695ea9b8802a2c1cd76b6cc73d0c | 0 |
{CFO}: Conditional Focused Neural Question Answering with Large-scale Knowledge Bases | Dai, Zihang and
Li, Lei and
Xu, Wei | 2,016 | nan | 800--810 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 76d28a1f4c52b2fbb798501e479023c4075b4803 | 1 |
Towards Lexical Encoding of Multi-Word Expressions in {S}panish Dialects | Bogantes, Diana and
Rodr{\'\i}guez, Eric and
Arauco, Alejandro and
Rodr{\'\i}guez, Alejandro and
Savary, Agata | 2,016 | This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities while avoiding redundancy of the data common to all dialects. A dozen linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozen MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications. | 2255--2261 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | c464069a72a9b170eb9e99f31a8d5b6f4906a84d | 0 |
Investigating Entity Knowledge in {BERT} with Simple Neural End-To-End Entity Linking | Broscheit, Samuel | 2,019 | A typical architecture for end-to-end entity linking systems consists of three steps: mention detection, candidate generation and entity disambiguation. In this study we investigate the following questions: (a) Can all those steps be learned jointly with a model for contextualized text-representations, i.e. BERT? (b) How much entity knowledge is already contained in pretrained BERT? (c) Does additional entity knowledge improve BERT{'}s performance in downstream tasks? To this end we propose an extreme simplification of the entity linking setup that works surprisingly well: simply cast it as a per token classification over the entire entity vocabulary (over $700K$ classes in our case). We show on an entity linking benchmark that (i) this model improves the entity representations over plain BERT, (ii) that it outperforms entity linking architectures that optimize the tasks separately and (iii) that it only comes second to the current state-of-the-art that does mention detection and entity disambiguation jointly. Additionally, we investigate the usefulness of entity-aware token-representations in the text-understanding benchmark GLUE, as well as the question answering benchmarks SQUAD{\textasciitilde}V2 and SWAG and also the EN-DE WMT14 machine translation benchmark. To our surprise, we find that most of those benchmarks do not benefit from additional entity knowledge, except for a task with very small training data, the RTE task in GLUE, which improves by 2{\%}. | 677--685 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 399308fa54ade9b1362d56628132323489ce50cd | 1 |
{LINSPECTOR} {WEB}: A Multilingual Probing Suite for Word Representations | Eichler, Max and
{\c{S}}ahin, G{\"o}zde G{\"u}l and
Gurevych, Iryna | 2,019 | We present LINSPECTOR WEB, an open-source multilingual inspector to analyze word representations. Our system provides researchers working in low-resource settings with an easily accessible web-based probing tool to gain quick insights into their word embeddings, especially outside of the English language. To do this we employ 16 simple linguistic probing tasks, such as gender, case marking, and tense, for a diverse set of 28 languages. We support probing of static word embeddings along with pretrained AllenNLP models that are commonly used for NLP downstream tasks such as named entity recognition, natural language inference and dependency parsing. The results are visualized in a polar chart and also provided as a table. LINSPECTOR WEB is available as an offline tool or at \url{https://linspector.ukp.informatik.tu-darmstadt.de}. | 127--132 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 63bc1232a882cecdf2b939be4b563a82fef2f63e | 0 |
Simple Question Answering by Attentive Convolutional Neural Network | Yin, Wenpeng and
Yu, Mo and
Xiang, Bing and
Zhou, Bowen and
Sch{\"u}tze, Hinrich | 2,016 | This work focuses on answering single-relation factoid questions over Freebase. Each question can acquire the answer from a single fact of form (subject, predicate, object) in Freebase. This task, simple question answering (SimpleQA), can be addressed via a two-step pipeline: entity linking and fact selection. In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN). This work makes two main contributions. (i) A simple and effective entity linker over Freebase is proposed. Our entity linker outperforms the state-of-the-art entity linker over SimpleQA task. (ii) A novel attentive maxpooling is stacked over word-CNN, so that the predicate representation can be matched with the predicate-focused question representation more effectively. Experiments show that our system sets new state-of-the-art in this task. | 1746--1756 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 812034099dd66df95a9f4ff741e17df62916ef4c | 1 |
Feature Derivation for Exploitation of Distant Annotation via Pattern Induction against Dependency Parses | Freitag, Dayne and
Niekrasz, John | 2,016 | nan | 36--45 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 1b4bc749341282b67277492034f26c011ee1761a | 0 |
Large-scale Semantic Parsing without Question-Answer Pairs | Reddy, Siva and
Lapata, Mirella and
Steedman, Mark | 2,014 | In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art. | 377--392 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 34b2fb4d05b80cb73d0be3c855f7b236fbc3640c | 1 |
A Vague Sense Classifier for Detecting Vague Definitions in Ontologies | Alexopoulos, Panos and
Pavlopoulos, John | 2,014 | nan | 33--37 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 5abeb622ed1fb738bab8b759038670b7b082f966 | 0 |
{F}reebase {QA}: Information Extraction or Semantic Parsing? | Yao, Xuchen and
Berant, Jonathan and
Van Durme, Benjamin | 2,014 | nan | 82--86 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | b75329489baf067e6f7bbb74f16ffd49fba80dfa | 1 |
Domain Adaptation with Active Learning for Coreference Resolution | Zhao, Shanheng and
Ng, Hwee Tou | 2,014 | nan | 21--29 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 9ee289dbda34e13e0e35df9b343738b107fd7ce6 | 0 |
Semantic Parsing via Paraphrasing | Berant, Jonathan and
Liang, Percy | 2,014 | nan | 1415--1425 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 3d1d42c9435b419ac928ebf7bcf4c86a460d6ef4 | 1 |
A Recursive Recurrent Neural Network for Statistical Machine Translation | Liu, Shujie and
Yang, Nan and
Li, Mu and
Zhou, Ming | 2,014 | nan | 1491--1500 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 5d43224147a5bb8b17b6a6fc77bf86490e86991a | 0 |
No Need to Pay Attention: Simple Recurrent Neural Networks Work! | Ture, Ferhan and
Jojic, Oliver | 2,017 | First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65{\%}{--}76{\%} accuracy on benchmark sets. Our approach formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB. We train a recurrent neural network to solve each problem. On the SimpleQuestions dataset, our approach yields substantial improvements over previously published results {---} even neural networks based on much more complex architectures. The simplicity of our approach also has practical advantages, such as efficiency and modularity, that are valuable especially in an industry setting. In fact, we present a preliminary analysis of the performance of our model on real queries from Comcast{'}s X1 entertainment platform with millions of users every day. | 2866--2872 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | c3efd9334114f82644ed14c4b6083defc6209d85 | 1 |
{W}atset: Automatic Induction of Synsets from a Graph of Synonyms | Ustalov, Dmitry and
Panchenko, Alexander and
Biemann, Chris | 2,017 | This paper presents a new graph-based approach that induces synsets using synonymy dictionaries and word embeddings. First, we build a weighted graph of synonyms extracted from commonly available resources, such as Wiktionary. Second, we apply word sense induction to deal with ambiguous words. Finally, we cluster the disambiguated version of the ambiguous input graph into synsets. Our meta-clustering approach lets us use an efficient hard clustering algorithm to perform a fuzzy clustering of the graph. Despite its simplicity, our approach shows excellent results, outperforming five competitive state-of-the-art methods in terms of F-score on three gold standard datasets for English and Russian derived from large-scale manually constructed lexical resources. | 1579--1590 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | ba00cbd314dc52b299a8b0c34f1887bcd43cdc12 | 0 |
Character-Level Question Answering with Attention | He, Xiaodong and
Golub, David | 2,016 | nan | 1598--1607 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 698d675ba7134ac701de810c9ca4a6de72cb414b | 1 |
Towards a Linguistic Ontology with an Emphasis on Reasoning and Knowledge Reuse | Parvizi, Artemis and
Kohl, Matt and
Gonz{\`a}lez, Meritxell and
Saur{\'\i}, Roser | 2,016 | The Dictionaries division at Oxford University Press (OUP) is aiming to model, integrate, and publish lexical content for 100 languages focussing on digitally under-represented languages. While there are multiple ontologies designed for linguistic resources, none had adequate features for meeting our requirements, chief of which was the capability to losslessly capture diverse features of many different languages in a dictionary format, while supplying a framework for inferring relations like translation, derivation, etc., between the data. Building on valuable features of existing models, and working with OUP monolingual and bilingual dictionary datasets, we have designed and implemented a new linguistic ontology. The ontology has been reviewed by a number of computational linguists, and we are working to move more dictionary data into it. We have also developed APIs to surface the linked data to dictionary websites. | 441--448 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | da294532e499a663d36be2ab92a45111c4e5194e | 0 |
Dual Dynamic Memory Network for End-to-End Multi-turn Task-oriented Dialog Systems | Wang, Jian and
Liu, Junhao and
Bi, Wei and
Liu, Xiaojiang and
He, Kejing and
Xu, Ruifeng and
Yang, Min | 2,020 | Existing end-to-end task-oriented dialog systems struggle to dynamically model long dialog context for interactions and effectively incorporate knowledge base (KB) information into dialog generation. To overcome these limitations, we propose a Dual Dynamic Memory Network (DDMN) for multi-turn dialog generation, which maintains two core components: dialog memory manager and KB memory manager. The dialog memory manager dynamically expands the dialog memory turn by turn and keeps track of dialog history with an updating mechanism, which encourages the model to filter irrelevant dialog history and memorize important incoming information. The KB memory manager shares the structural KB triples throughout the whole conversation, and dynamically extracts KB information with a memory pointer at each turn. Experimental results on three benchmark datasets demonstrate that DDMN significantly outperforms the strong baselines in terms of both automatic evaluation and human evaluation. Our code is available at \url{https://github.com/siat-nlp/DDMN}. | 4100--4110 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | d8d1bba29ee07abcc0586d5cbe056d11f8041077 | 1 |
Benefits of Intermediate Annotations in Reading Comprehension | Dua, Dheeru and
Singh, Sameer and
Gardner, Matt | 2,020 | Complex compositional reading comprehension datasets require performing latent sequential decisions that are learned via supervision from the final answer. A large combinatorial space of possible decision paths that result in the same answer, compounded by the lack of intermediate supervision to help choose the right path, makes the learning particularly hard for this task. In this work, we study the benefits of collecting intermediate reasoning supervision along with the answer during data collection. We find that these intermediate annotations can provide two-fold benefits. First, we observe that for any collection budget, spending a fraction of it on intermediate annotations results in improved model performance, for two complex compositional datasets: DROP and Quoref. Second, these annotations encourage the model to learn the correct latent reasoning steps, helping combat some of the biases introduced during the data collection process. | 5627--5634 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | f4874bd968b785cb9fceeccf26c333567a2b8dca | 0 |
Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents | Chen, Daoyuan and
Li, Yaliang and
Lei, Kai and
Shen, Ying | 2,020 | Distant supervision based methods for entity and relation extraction have gained increasing popularity due to the fact that these methods require light human annotation efforts. In this paper, we consider the problem of shifted label distribution, which is caused by the inconsistency between the noisy-labeled training set, derived from an external knowledge graph, and the human-annotated test set, and is exacerbated by the pipelined entity-then-relation extraction manner with noise propagation. We propose a joint extraction approach to address this problem by re-labeling noisy instances with a group of cooperative multiagents. To handle noisy instances in a fine-grained manner, each agent in the cooperative group evaluates the instance by calculating a continuous confidence score from its own perspective; to leverage the correlations between these two extraction tasks, a confidence consensus module is designed to gather the wisdom of all agents and re-distribute the noisy training set with confidence-scored labels. Further, the confidences are used to adjust the training losses of the extractors. Experimental results on two real-world datasets verify the benefits of re-labeling noisy instances, and show that the proposed model significantly outperforms the state-of-the-art entity and relation extraction methods. | 5940--5950 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | c07d534742b7c8c88a7483fd9c98bdcbf9cbcbc6 | 1 |
Improving Transformer Models by Reordering their Sublayers | Press, Ofir and
Smith, Noah A. and
Levy, Omer | 2,020 | Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly ordered transformers and train them with the language modeling objective. We observe that some of these models are able to achieve better performance than the interleaved baseline, and that those successful variants tend to have more self-attention at the bottom and more feedforward sublayers at the top. We propose a new transformer pattern that adheres to this property, the sandwich transformer, and show that it improves perplexity on multiple word-level and character-level language modeling benchmarks, at no cost in parameters, memory, or training time. However, the sandwich reordering pattern does not guarantee performance gains across every task, as we demonstrate on machine translation models. Instead, we suggest that further exploration of task-specific sublayer reorderings is needed in order to unlock additional gains. | 2996--3005 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 3ff8d265f4351e4b1fdac5b586466bee0b5d6fff | 0 |
Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs | Tu, Ming and
Wang, Guangtao and
Huang, Jing and
Tang, Yun and
He, Xiaodong and
Zhou, Bowen | 2,019 | Multi-hop reading comprehension (RC) across documents poses a new challenge over single-document RC because it requires reasoning over multiple documents to reach the final answer. In this paper, we propose a new model to tackle the multi-hop RC problem. We introduce a heterogeneous graph with different types of nodes and edges, which is named the Heterogeneous Document-Entity (HDE) graph. The advantage of the HDE graph is that it contains different granularity levels of information, including candidates, documents and entities in specific document contexts. Our proposed model can reason over the HDE graph with node representations initialized with co-attention and self-attention based context encoders. We employ Graph Neural Network (GNN) based message passing algorithms to accumulate evidence on the proposed HDE graph. Evaluated on the blind test set of the Qangaroo WikiHop dataset, our HDE graph based single model delivers a competitive result, and the ensemble model achieves the state-of-the-art performance. | 2704--2713 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 3b7d05fc6e1e0a622f9a8772f4557a166f811698 | 1 |
A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence Representations | Chen, Mingda and
Tang, Qingming and
Wiseman, Sam and
Gimpel, Kevin | 2,019 | We propose a generative model for a sentence that uses two latent variables, with one intended to represent the syntax of the sentence and the other to represent its semantics. We show we can achieve better disentanglement between semantic and syntactic representations by training with multiple losses, including losses that exploit aligned paraphrastic sentences and word-order information. We evaluate our models on standard semantic similarity tasks and novel syntactic similarity tasks. Empirically, we find that the model with the best performing syntactic and semantic representations also gives rise to the most disentangled representations. | 2453--2464 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | f716ed53188d5bc09c7b18ba35cb6e21ffb0273a | 0 |
{UH}op: An Unrestricted-Hop Relation Extraction Framework for Knowledge-Based Question Answering | Chen, Zi-Yuan and
Chang, Chih-Hung and
Chen, Yi-Pei and
Nayak, Jijnasa and
Ku, Lun-Wei | 2,019 | In relation extraction for knowledge-based question answering, searching from one entity to another entity via a single relation is called {``}one hop{''}. In related work, an exhaustive search from all one-hop relations, two-hop relations, and so on to the max-hop relations in the knowledge graph is necessary but expensive. Therefore, the number of hops is generally restricted to two or three. In this paper, we propose UHop, an unrestricted-hop framework which relaxes this restriction by use of a transition-based search framework to replace the relation-chain-based search one. We conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions, including datasets such as WebQSP, PathQuestion, and Grid World. Results show that the proposed framework enables the ability to halt, works well with state-of-the-art models, achieves competitive performance without exhaustive searches, and opens the performance gap for long relation paths. | 345--356 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 4e4230418470efbb3a86d407d8eedf329f361f6d | 1 |
Hybrid {RNN} at {S}em{E}val-2019 Task 9: Blending Information Sources for Domain-Independent Suggestion Mining | Ezen-Can, Aysu and
Can, Ethem F. | 2,019 | Social media has an increasing amount of information that both customers and companies can benefit from. These social media posts can include Tweets or be in the form of vocalization of compliments and complaints (e.g., reviews) about a product or service. Researchers have been actively mining this invaluable information source to automatically generate insights. Mining the sentiments of customer reviews is an example that has gained momentum due to its potential to gather information that customers are not happy about. Instead of reading millions of reviews, companies prefer sentiment analysis to obtain feedback and to improve their products or services. In this work, we aim to identify information that companies can act on, or that other customers can utilize for making their own experience better. This is different from identifying whether reviews of a product or service are negative, positive, or neutral. To that end, we classify sentences of a given review as suggestion or non-suggestion so that readers of the reviews do not have to go through thousands of reviews but instead can focus on actionable items and applicable suggestions. To identify suggestions within reviews, we employ a hybrid approach that utilizes a recurrent neural network (RNN) along with rule-based features to build a domain-independent suggestion mining model. In this way, a model trained on electronics reviews is used to extract suggestions from hotel reviews. | 1199--1203 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 6bef349cfcc7b58206b41823fa08ada09e434c8a | 0 |
{CNN} for Text-Based Multiple Choice Question Answering | Chaturvedi, Akshay and
Pandit, Onkar and
Garain, Utpal | 2,018 | The task of Question Answering is at the very core of machine comprehension. In this paper, we propose a Convolutional Neural Network (CNN) model for text-based multiple-choice question answering, where questions are based on a particular article. Given an article and a multiple-choice question, our model assigns a score to each question-option tuple and chooses the final option accordingly. We test our model on the Textbook Question Answering (TQA) and SciQ datasets. Our model outperforms several LSTM-based baseline models on the two datasets. | 272--277 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 1baa640feacca6806b7cb58f17c885b6865f337b | 1 |
Searching for the {X}-Factor: Exploring Corpus Subjectivity for Word Embeddings | Tkachenko, Maksim and
Chia, Chong Cher and
Lauw, Hady | 2,018 | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | 1212--1221 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 0324fdde1d8702da4ecaed7fb61f027f7ef58795 | 0 |
Cooperative Denoising for Distantly Supervised Relation Extraction | Lei, Kai and
Chen, Daoyuan and
Li, Yaliang and
Du, Nan and
Yang, Min and
Fan, Wei and
Shen, Ying | 2,018 | Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from the noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in the knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists of two base networks leveraging a text corpus and a knowledge graph respectively, and a cooperative module involving their mutual learning by adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods. | 426--436 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | c233e45922d3cf06c8a90ed4b28f045ec2e205fb | 1 |
Zero-shot Relation Classification as Textual Entailment | Obamuyide, Abiola and
Vlachos, Andreas | 2,018 | We consider the task of relation classification, and pose this task as one of textual entailment. We show that this formulation leads to several advantages, including the ability to (i) perform zero-shot relation classification by exploiting relation descriptions, (ii) utilize existing textual entailment models, and (iii) leverage readily available textual entailment datasets, to enhance the performance of relation classification systems. Our experiments show that the proposed approach achieves 20.16{\%} and 61.32{\%} in F1 zero-shot classification performance on two datasets, which further improved to 22.80{\%} and 64.78{\%} respectively with the use of conditional encoding. | 72--78 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 312b12dd6aa558b92df3ddd9b1057aa80a0ad718 | 0 |
Improved Neural Relation Detection for Knowledge Base Question Answering | Yu, Mo and
Yin, Wenpeng and
Hasan, Kazi Saidul and
dos Santos, Cicero and
Xiang, Bing and
Zhou, Bowen | 2,017 | Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. | 571--581 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 10b5dc51b61795718f79f3b4c9b5bbba44d252c0 | 1 |
Modelling semantic acquisition in second language learning | Kochmar, Ekaterina and
Shutova, Ekaterina | 2,017 | Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages. Our exploratory study helps identify the most problematic areas for language learners with different backgrounds and at different stages of learning. | 293--302 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | a690dca513cee03a535e2cfa2b6026152cb5c81e | 0 |
Learning Hybrid Representations to Retrieve Semantically Equivalent Questions | dos Santos, C{\'\i}cero and
Barbosa, Luciano and
Bogdanova, Dasha and
Zadrozny, Bianca | 2,015 | nan | 694--699 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | 728aa52045cedce0ffb11975d880c7046abef3f2 | 1 |
Suitability of {P}ar{T}es Test Suite for Parsing Evaluation | Lloberes, Marina and
Castell{\'o}n, Irene and
Padr{\'o}, Llu{\'\i}s | 2,015 | nan | 61--65 | 4f929eb557a990cd3062c86c4be157909742245d | Knowledge-Based Reasoning Network for Relation Detection | c3448d9911e9a169f901617d8b74cb1bc8aa3c23 | 0 |