A Logic for Semantic Interpretation Eugene Charniak and Robert Goldman Department of Computer Science Brown University, Box 1910 Providence RI 02912 Abstract We propose that logic (enhanced to encode probability information) is a good way of characterizing semantic interpretation. In support of this we give a fragment of an axiomatization for word-sense disambiguation, noun-phrase (and verb) reference, and case disambiguation. We describe an inference engine (Frail3) which actually takes this axiomatization and uses it to drive the semantic interpretation process. We claim three benefits from this scheme. First, the interface between semantic interpretation and pragmatics has always been problematic, since all of the above tasks in general require pragmatic inference. Now the interface is trivial, since both semantic interpretation and pragmatics use the same vocabulary and inference engine. The second benefit, related t
1988
11
Interpretation as Abduction Jerry R. Hobbs, Mark Stickel, Paul Martin, and Douglas Edwards Artificial Intelligence Center SRI International Abstract An approach to abductive inference developed in the TACITUS project has resulted in a dramatic simplification of how the problem of interpreting texts is conceptualized. Its use in solving the local pragmatics problems of reference, compound nominals, syntactic ambiguity, and metonymy is described and illustrated. It also suggests an elegant and thorough integration of syntax, semantics, and pragmatics. 1 Introduction Abductive inference is inference to the best explanation. The process of interpreting sentences in discourse can be viewed as the process of providing the best explanation of why the sentences would be true. In the TACITUS Project at SRI, we have developed a scheme for abductive inference that yields a significant simplification in the description of such inte
1988
12
PROJECT APRIL -- A PROGRESS REPORT Robin Haigh, Geoffrey Sampson, Eric Atwell Centre for Computer Analysis of Language and Speech, University of Leeds, Leeds LS2 9JT, UK ABSTRACT Parsing techniques based on rules defining grammaticality are difficult to use with authentic inputs, which are often grammatically messy. Instead, the APRIL system seeks a labelled tree structure which maximizes a numerical measure of conformity to statistical norms derived from a sample of parsed text. No distinction between legal and illegal trees arises: any labelled tree has a value. Because the search space is large and has an irregular geometry, APRIL seeks the best tree using simulated annealing, a stochastic optimization technique. Beginning with an arbitrary tree, many randomly-generated local modifications are considered and adopted or rejected according to their effect on tree-value: acceptance decisions are made probabilistically,
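The acceptance rule the APRIL abstract describes is the standard simulated-annealing schedule: improvements are always kept, and worsening moves are kept with a probability that shrinks as the "temperature" falls. The sketch below is a generic annealer over a toy integer state space, not the APRIL tree search; the state, neighbor function, and scoring function are all invented stand-ins for APRIL's labelled trees and statistical tree-value measure.

```python
import math
import random

def anneal(initial, neighbor, value, steps=20000, t0=2.0, cooling=0.9995):
    """Generic simulated-annealing maximizer (toy stand-in for APRIL's tree search)."""
    random.seed(0)
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = value(cand) - value(current)
        # Always accept improvements; accept worsenings with prob exp(delta / t)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = cand
        if value(current) > value(best):
            best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy "tree value": prefer states near 7 on the integers 0..20
best = anneal(0,
              neighbor=lambda x: max(0, min(20, x + random.choice([-1, 1]))),
              value=lambda x: -(x - 7) ** 2)
print(best)  # converges to 7
```

The key property matching the abstract is that every state has a value and any local modification can in principle be accepted, so the search can escape local maxima early on and settles as the temperature drops.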
1988
13
DISCOURSE DEIXIS: REFERENCE TO DISCOURSE SEGMENTS Bonnie Lynn Webber Department of Computer & Information Science University of Pennsylvania Philadelphia PA 19104-6389 ABSTRACT Computational approaches to discourse understanding have a two-part goal: (1) to identify those aspects of discourse understanding that require process-based accounts, and (2) to characterize the processes and data structures they involve. To date, in the area of reference, process-based accounts have been developed for subsequent reference via anaphoric pronouns and reference via definite descriptors. In this paper, I propose and argue for a process-based account of subsequent reference via deictic expressions. A significant feature of this account is that it attributes distinct mental reality to units of text often called discourse segments, a reality that is distinct from that of the entities described therein. 1. INTRODUCTION There seem to be at lea
1988
14
Cues and control in Expert-Client Dialogues Steve Whittaker & Phil Stenton Hewlett-Packard Laboratories Filton Road, Bristol BS12 6QZ, UK. email: sjw@hplb.csnet April 18, 1988 Abstract We conducted an empirical analysis into the relation between control and discourse structure. We applied control criteria to four dialogues and identified 3 levels of discourse structure. We investigated the mechanism for changing control between these structures and found that utterance type and not cue words predicted shifts of control. Participants used certain types of signals when discourse goals were proceeding successfully but resorted to interruptions when they were not. 1 Introduction A number of researchers have shown that there is organisation in discourse above the level of the individual utterance (5, 8, 9, 10). The current exploratory study uses control as a parameter for identifying these higher level struc-
1988
15
A COMPUTATIONAL THEORY OF PERSPECTIVE AND REFERENCE IN NARRATIVE Janyce M. Wiebe and William J. Rapaport Department of Computer Science State University of New York at Buffalo Buffalo, NY 14260 wiebe@cs.buffalo.edu, rapaport@cs.buffalo.edu ABSTRACT Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: references in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented. 1. INTRODUCTION. A narrative is often told from the perspective of one or more of its characters; it can also contain passages that are not told from the perspective o
1988
16
PARSING JAPANESE HONORIFICS IN UNIFICATION-BASED GRAMMAR Hiroyuki MAEDA, Susumu KATO, Kiyoshi KOGURE and Hitoshi IIDA ATR Interpreting Telephony Research Laboratories Twin 21 Bldg. MID Tower, 2-1-61 Shiromi, Higashi-ku, Osaka 540, Japan Abstract This paper presents a unification-based approach to Japanese honorifics based on a version of HPSG (Head-driven Phrase Structure Grammar) [1][2]. Utterance parsing is based on lexical specifications of each lexical item, including honorifics, and a few general PSG rules using a parser capable of unifying cyclic feature structures. It is shown that the possible word orders of Japanese honorific predicate constituents can be automatically deduced in the proposed framework without independently specifying them. Discourse Information Change Rules (DICRs) that allow resolving a class of anaphors in honorific contexts are also formulated. 1. Introduction Japanese has a rich grammaticalized syste
1988
17
ASPECTS OF CLAUSE POLITENESS IN JAPANESE: AN EXTENDED INQUIRY SEMANTICS TREATMENT John A. Bateman* USC/Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 U.S.A. (e-mail: bateman@vaxa.isi.edu) Abstract The inquiry semantics approach of the Nigel computational systemic grammar of English has proved capable of revealing distinctions within propositional content that the text planning process needs to control in order for adequate text to be generated. An extension to the chooser and inquiry framework motivated by a Japanese clause generator capable of expressing levels of politeness makes this facility available for revealing the distinctions necessary among interpersonal, social meanings also. This paper shows why the previous inquiry framework was incapable of the kind of semantic control Japanese politeness requires and how the implemented extension achieves that contro
1988
18
EXPERIENCES WITH AN ON-LINE TRANSLATING DIALOGUE SYSTEM Seiji MIIKE, Koichi HASEBE, Harold SOMERS, Shin-ya AMANO Research and Development Center Toshiba Corporation 1, Komukai Toshiba-cho, Saiwai-ku Kawasaki-City, Kanagawa, 210 Japan ABSTRACT An English-Japanese bi-directional machine translation system was connected to a keyboard conversation function on a workstation, and tested via a satellite link with users in Japan and Switzerland. The set-up is described, and some informal observations on the nature of the bilingual dialogues reported. INTRODUCTION We have been developing an English-Japanese bi-directional machine translation system implemented on a workstation (Amano 1986, Amano et al. 1987). The system, which is interactive and designed for use by a translator, normally runs in an interactive mode, and includes a number of special bilingual editing functions. We recently realized a real-time
1988
19
SENTENCE FRAGMENTS REGULAR STRUCTURES Marcia C. Linebarger, Deborah A. Dahl, Lynette Hirschman, Rebecca J. Passonneau Paoli Research Center Unisys Corporation P.O. Box 517 Paoli, PA ABSTRACT This paper describes an analysis of telegraphic fragments as regular structures (not errors) handled by minimal extensions to a system designed for processing the standard language. The modular approach which has been implemented in the Unisys natural language processing system PUNDIT is based on a division of labor in which syntax regulates the occurrence and distribution of elided elements, and semantics and pragmatics use the system's standard mechanisms to interpret them. 1. INTRODUCTION In this paper we discuss the syntactic, semantic, and pragmatic analysis of fragmentary sentences in English. Our central claim is that these sentences, which have often been classified in the literature with truly erroneous input s
1988
2
PLANNING COHERENT MULTISENTENTIAL TEXT Eduard H. Hovy USC/Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292-6695, U.S.A. HOVY@VAXA.ISI.EDU Abstract Though most text generators are capable of simply stringing together more than one sentence, they cannot determine which order will ensure a coherent paragraph. A paragraph is coherent when the information in successive sentences follows some pattern of inference or of knowledge with which the hearer is familiar. To signal such inferences, speakers usually use relations that link successive sentences in fixed ways. A set of 20 relations that span most of what people usually say in English is proposed in the Rhetorical Structure Theory of Mann and Thompson. This paper describes the formalization of these relations and their use in a prototype text planner that structures input elements into coherent paragraphs. 1 The Pro
1988
20
A Practical Nonmonotonic Theory for Reasoning about Speech Acts Douglas Appelt, Kurt Konolige Artificial Intelligence Center and Center for the Study of Language and Information SRI International Menlo Park, California Abstract A prerequisite to a theory of the way agents understand speech acts is a theory of how their beliefs and intentions are revised as a consequence of events. This process of attitude revision is an interesting domain for the application of nonmonotonic reasoning because speech acts have a conventional aspect that is readily represented by defaults, but that interacts with an agent's beliefs and intentions in many complex ways that may override the defaults. Perrault has developed a theory of speech acts, based on Reiter's default logic, that captures the conventional aspect; it does not, however, adequately account for certain easily observed facts about attitude revision resulting fro
1988
21
TWO TYPES OF PLANNING IN LANGUAGE GENERATION Eduard H. Hovy USC/Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292-6695, U.S.A. hovy@vaxa.isi.edu Abstract As our understanding of natural language generation has increased, a number of tasks have been separated from realization and put together under the heading 'text planning'. So far, however, no-one has enumerated the kinds of tasks a text planner should be able to do. This paper describes the principal lesson learned in combining a number of planning tasks in a planner-realiser: planning and realization should be interleaved, in a limited-commitment planning paradigm, to perform two types of planning: prescriptive and restrictive. Limited-commitment planning consists of both prescriptive (hierarchical expansion) planning and of restrictive planning (selecting from options with reference to the status of active
1988
22
Assigning Intonational Features in Synthesized Spoken Directions James Raymond Davis The Media Laboratory MIT E15-325 Cambridge MA 02139 Julia Hirschberg AT&T Bell Laboratories 2D-450 600 Mountain Avenue Murray Hill NJ 07974 Abstract Speakers convey much of the information hearers use to interpret discourse by varying prosodic features such as PHRASING, PITCH ACCENT placement, TUNE, and PITCH RANGE. The ability to emulate such variation is crucial to effective (synthetic) speech generation. While text-to-speech synthesis must rely primarily upon structural information to determine appropriate intonational features, speech synthesized from an abstract representation of the message to be conveyed may employ much richer sources. The implementation of an intonation assignment component for Direction Assistance, a program which generates spoken directions, provides a first approximation of how rec
1988
23
ATOMIZATION IN GRAMMAR SHARING Megumi Kameyama, Microelectronics and Computer Technology Corporation (MCC) 3500 West Balcones Center Drive, Austin, Texas 78759 megumi@mcc.com ABSTRACT We describe a prototype SHARED GRAMMAR for the syntax of simple nominal expressions in Arabic, English, French, German, and Japanese implemented at MCC. In this grammar, a complex inheritance lattice of shared grammatical templates provides parts that each language can put together to form language-specific grammatical templates. We conclude that grammar sharing is not only possible but also desirable. It forces us to reveal cross-linguistically invariant grammatical primitives that may otherwise remain conflated with other primitives if we deal only with a single language or language type. We call this the process of GRAMMATICAL ATOMIZATION. It gives us new insights with which to account for certain linguistic The specific implementation reported here uses categorial
1988
24
SYNTACTIC APPROACHES TO AUTOMATIC BOOK INDEXING Gerard Salton Department of Computer Science Cornell University Ithaca, NY 14853 ABSTRACT Automatic book indexing systems are based on the generation of phrase structures capable of reflecting text content. Some approaches are given for the automatic construction of back-of-book indexes using a syntactic analysis of the available texts, followed by the identification of nominal constructions, the assignment of importance weights to the term phrases, and the choice of phrases as indexing units. INTRODUCTION Book indexing is of wide practical interest to authors, publishers, and readers of printed materials. For present purposes, a standard entry in a book index may be assumed to be a nominal construction listed in normal phrase order, or appearing in some permuted form with the principal term as phrase head. Cross-refer
1988
25
Lexicon and grammar in probabilistic tagging of written English. Andrew David Beale Unit for Computer Research on the English Language University of Lancaster Bailrigg, Lancaster England LA1 4YT mb0250~..az.~c~vaxl Abstract The paper describes the development of software for automatic grammatical analysis of unrestricted, unedited English text at the Unit for Computer Research on the English Language (UCREL) at the University of Lancaster. The work is currently funded by IBM and carried out in collaboration with colleagues at IBM UK (Winchester) and IBM Yorktown Heights. The paper will focus on the lexicon component of the word tagging system, the UCREL grammar, the databanks of parsed sentences, and the tools that have been written to support development of these components. The work has applications to speech technology, spelling correction, and other areas of natural language processing. Ultimately, our goal is to provide a language model using
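Probabilistic word tagging of the kind UCREL pioneered is usually formulated as finding the most likely tag sequence under tag-transition and word-emission probabilities. The sketch below is a minimal Viterbi decoder over a bigram tag model; it is not the UCREL system, and the tiny tag set and toy probability tables are invented for illustration.

```python
def viterbi(words, tags, p_trans, p_emit, p_init):
    """Most probable tag sequence under a bigram (tag -> tag) model."""
    # v[t] = probability of the best tag path for words[0..i] ending in tag t
    v = {t: p_init.get(t, 0) * p_emit.get((t, words[0]), 0) for t in tags}
    back = []  # backpointers, one dict per position after the first
    for w in words[1:]:
        nv, bp = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda s: v[s] * p_trans.get((s, t), 0))
            nv[t] = v[best_prev] * p_trans.get((best_prev, t), 0) * p_emit.get((t, w), 0)
            bp[t] = best_prev
        back.append(bp)
        v = nv
    # Trace back from the best final tag
    t = max(tags, key=lambda s: v[s])
    seq = [t]
    for bp in reversed(back):
        t = bp[t]
        seq.append(t)
    return list(reversed(seq))

tags = ["DET", "NOUN", "VERB"]
p_init = {"DET": 0.8, "NOUN": 0.1, "VERB": 0.1}
p_trans = {("DET", "NOUN"): 0.9, ("NOUN", "VERB"): 0.8, ("VERB", "DET"): 0.5,
           ("NOUN", "NOUN"): 0.2}
p_emit = {("DET", "the"): 1.0, ("NOUN", "dog"): 0.9, ("VERB", "barks"): 0.7,
          ("NOUN", "barks"): 0.1}
print(viterbi(["the", "dog", "barks"], tags, p_trans, p_emit, p_init))
# → ['DET', 'NOUN', 'VERB']
```

A real tagger would estimate these tables from a tagged corpus and work in log space to avoid underflow; the decoding step is otherwise the same.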
1988
26
PARSING VS. TEXT PROCESSING IN THE ANALYSIS OF DICTIONARY DEFINITIONS Thomas Ahlswede and Martha Evens Computer Science Dept. Illinois Institute of Technology Chicago, IL 60616 312-567-5153 ABSTRACT We have analyzed definitions from Webster's Seventh New Collegiate Dictionary using Sager's Linguistic String Parser and again using basic UNIX text processing utilities such as grep and awk. This paper evaluates both procedures, compares their results, and discusses possible future lines of research exploiting and combining their respective strengths. Introduction As natural language systems grow more sophisticated, they need larger and more detailed lexicons. Efforts to automate the process of generating lexicons have been going on for years, and have often been combined with the analysis of machine-readable dictionaries. Since 1979, a group at IIT under the leadership of Martha Evens has been using the machine-readab
1988
27
Polynomial Learnability and Locality of Formal Grammars Naoki Abe* Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104. ABSTRACT We apply a complexity theoretic notion of feasible learnability called "polynomial learnability" to the evaluation of grammatical formalisms for linguistic description. We show that a novel, nontrivial constraint on the degree of "locality" of grammars allows not only context free languages but also a rich class of mildly context sensitive languages to be polynomially learnable. We discuss possible implications of this result to the theory of natural language acquisition. 1 Introduction Much of the formal modeling of natural language acquisition has been within the classic paradigm of "identification in the limit from positive examples" proposed by Gold [7]. A relatively restricted class of formal languages has been shown to be unlearnable in th
1988
28
Conditional Descriptions in Functional Unification Grammar Robert T. Kasper USC/Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 U.S.A. Abstract A grammatical description often applies to a linguistic object only when that object has certain features. Such conditional descriptions can be indirectly modeled in Kay's Functional Unification Grammar (FUG) using functional descriptions that are embedded within disjunctive alternatives. An extension to FUG is proposed that allows for a direct representation of conditional descriptions. This extension has been used to model the input conditions on the systems of systemic grammar. Conditional descriptions are formally defined in terms of logical implication and negation. This formal definition enables the use of conditional descriptions as a general notational extension to any of the unification-based grammar representation systems currently
1988
29
MULTI-LEVEL PLURALS AND DISTRIBUTIVITY Remko Scha and David Stallard BBN Laboratories Inc. 10 Moulton St. Cambridge, MA 02238 U.S.A. ABSTRACT We present a computational treatment of the semantics of plural Noun Phrases which extends an earlier approach presented by Scha [7] to be able to deal with multiple-level plurals ("the boys and the girls", "the juries and the committees", etc.) We argue that the arbitrary depth to which such plural structures can be nested creates a correspondingly arbitrary ambiguity in the possibilities for the distribution of verbs over such NPs. We present a recursive translation rule scheme which accounts for this ambiguity, and in particular show how it allows for the option of "partial distributivity" that collective verbs have when applied to such plural Noun Phrases. 1 INTRODUCTION Syntactically parallel utterances which contain plural noun phrases often require entirely diff
1988
3
DEDUCTIVE PARSING WITH MULTIPLE LEVELS OF REPRESENTATION.* Mark Johnson, Brain and Cognitive Sciences, M.I.T. ABSTRACT This paper discusses a sequence of deductive parsers, called PAD1 - PAD5, that utilize an axiomatization of the principles and parameters of GB theory, including a restricted transformational component (Move-α). PAD2 uses an inference control strategy based on the "freeze" predicate of Prolog-II, while PAD3 - 5 utilize the Unfold-Fold transformation to transform the original axiomatization into a form that functions as a recursive descent Prolog parser for the fragment. INTRODUCTION This paper reports on several deductive parsers for a fragment of Chomsky's Government and Binding theory (Chomsky 1981, 1986; Van Riemsdijk and Williams 1984). These parsers were constructed to illustrate the 'Parsing as Deduction' approach, which views a parser as a specialized theorem-prover which uses knowl
1988
30
Graph-structured Stack and Natural Language Parsing Masaru Tomita Center for Machine Translation and Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 Abstract A general device for handling nondeterminism in stack operations is described. The device, called a Graph-structured Stack, can eliminate duplication of operations throughout the nondeterministic processes. This paper then applies the graph-structured stack to various natural language parsing methods, including ATN, LR parsing, categorial grammar and principle-based parsing. The relationship between the graph-structured stack and a chart in chart parsing is also discussed. 1. Introduction A stack plays an important role in natural language parsing. It is the stack which gives a parser context-free (rather than regular) power by permitting recursions. Most parsing systems make explicit use of the stack. Augmented Transition Networ
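The core idea of the graph-structured stack is that many nondeterministic stacks share structure: a push onto several stack tops creates one node with several parents, and a pop exposes all parents at once. The sketch below shows only that sharing mechanism, not Tomita's full GLR machinery; the class and function names are invented for illustration.

```python
class GSSNode:
    """A node in a graph-structured stack: a symbol plus edges to every
    possible 'rest of stack' (multiple parents = merged stacks)."""
    def __init__(self, symbol, parents=()):
        self.symbol = symbol
        self.parents = list(parents)

def push(tops, symbol):
    """Push symbol onto all current stack tops at once, sharing one node."""
    return [GSSNode(symbol, parents=tops)]

def pop(tops):
    """Pop: the new tops are the union of all parents of the current tops."""
    seen, result = set(), []
    for node in tops:
        for p in node.parents:
            if id(p) not in seen:
                seen.add(id(p))
                result.append(p)
    return result

# Two nondeterministic stacks [A, B] and [A, C] share their bottom A:
bottom = GSSNode("A")
tops = [GSSNode("B", [bottom]), GSSNode("C", [bottom])]
tops = push(tops, "D")           # one D node now covers both stacks
assert len(tops) == 1 and tops[0].symbol == "D"
tops = pop(pop(tops))            # popping D, then B/C, exposes the shared A once
print([n.symbol for n in tops])  # → ['A']
```

Because the push created a single D node covering both stacks, any operation on D is performed once rather than once per stack, which is exactly the duplication the abstract says the device eliminates.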
1988
31
AN EARLEY-TYPE PARSING ALGORITHM FOR TREE ADJOINING GRAMMARS* Yves Schabes and Aravind K. Joshi Department of Computer and Information Science University of Pennsylvania Philadelphia PA 19104-6389 USA schabes@linc.cis.upenn.edu joshi@cis.upenn.edu ABSTRACT We will describe an Earley-type parser for Tree Adjoining Grammars (TAGs). Although a CKY-type parser for TAGs has been developed earlier (Vijay-Shanker and Joshi, 1985), this is the first practical parser for TAGs because, as is well known for CFGs, the average behavior of Earley-type parsers is superior to that of CKY-type parsers. The core of the algorithm is described. Then we discuss modifications of the parsing algorithm that can parse extensions of TAGs such as constraints on adjunction, substitution, and feature structures for TAGs. We show how with the use of substitution in TAGs the system is able to parse directly CFGs and TAGs. The system p
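For readers unfamiliar with the Earley scheme the abstract builds on, the sketch below is a plain Earley recognizer for CFGs only: the predict, scan, and complete steps over dotted items. Schabes and Joshi's contribution, extending these items to handle TAG adjunction, is not shown; the grammar and function names here are invented toy examples.

```python
def earley(words, grammar, start="S"):
    """Earley recognizer for a CFG. grammar maps each nonterminal to a
    list of right-hand sides (tuples of nonterminals and terminal words)."""
    # An item is (lhs, rhs, dot, origin): lhs -> rhs with the dot at
    # position `dot`, started at input position `origin`.
    chart = [set() for _ in range(len(words) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(words) + 1):
        added = True
        while added:  # iterate predict/complete to a fixed point
            added = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:        # predict
                    for prod in grammar[rhs[dot]]:
                        item = (rhs[dot], prod, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item); added = True
                elif dot == len(rhs):                              # complete
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            item = (l2, r2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item); added = True
        if i < len(words):                                         # scan
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == words[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(l == start and d == len(r) and o == 0
               for l, r, d, o in chart[len(words)])

g = {"S": [("NP", "VP")], "NP": [("john",)], "VP": [("sleeps",)]}
print(earley(["john", "sleeps"], g))  # → True
```

The "average behavior" advantage the abstract mentions comes from prediction: unlike CKY, Earley only builds items that are reachable from the start symbol at the current position.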
1988
32
A DEFINITE CLAUSE VERSION OF CATEGORIAL GRAMMAR Remo Pareschi, Department of Computer and Information Science, University of Pennsylvania, 200 S. 33rd St., Philadelphia, PA 19104, and Department of Artificial Intelligence and Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, Scotland remo@linc.cis.upenn.edu ABSTRACT We introduce a first-order version of Categorial Grammar, based on the idea of encoding syntactic types as definite clauses. Thus, we drop all explicit requirements of adjacency between combinable constituents, and we capture word-order constraints simply by allowing subformulae of complex types to share variables ranging over string positions. We are in this way able to account for constructions involving discontinuous constituents. Such constructions are difficult to handle in the more traditional version of Categorial Grammar, which
1988
33
COMBINATORY CATEGORIAL GRAMMARS: GENERATIVE POWER AND RELATIONSHIP TO LINEAR CONTEXT-FREE REWRITING SYSTEMS David J. Weir Aravind K. Joshi Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104-6389 Abstract Recent results have established that there is a family of languages that is exactly the class of languages generated by three independently developed grammar formalisms: Tree Adjoining Grammars, Head Grammars, and Linear Indexed Grammars. In this paper we show that Combinatory Categorial Grammars also generate the same class of languages. We discuss the structural descriptions produced by Combinatory Categorial Grammars and compare them to those of grammar formalisms in the class of Linear Context-Free Rewriting Systems. We also discuss certain extensions of Combinatory Categorial Grammars and their effect on the weak generative capacity. 1 Introduct
1988
34
Unification of Disjunctive Feature Descriptions Andreas Eisele, Jochen Dörre Institut für Maschinelle Sprachverarbeitung Universität Stuttgart Keplerstr. 17, 7000 Stuttgart 1, West Germany Netmail: [email protected] Abstract The paper describes a new implementation of feature structures containing disjunctive values, which can be characterized by the following main points: Local representation of embedded disjunctions, avoidance of expansion to disjunctive normal form and of repeated test-unifications for checking consistence. The method is based on a modification of Kasper and Rounds' calculus of feature descriptions and its correctness therefore is easy to see. It can handle cyclic structures and has been incorporated successfully into an environment for grammar development. 1 Motivation In current research in computational linguistics but also in extralinguistic fields unification has turned o
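As background for the abstract above, the sketch below implements plain (disjunction-free) unification of feature structures represented as nested dicts: structures merge feature by feature, and incompatible atomic values cause failure. Eisele and Dörre's actual contribution, representing embedded disjunctions locally instead of expanding to disjunctive normal form, is not shown; all names and example structures are invented.

```python
def unify(f, g):
    """Unify two feature structures (nested dicts with atomic leaves).
    Returns the merged structure, or None on a feature clash."""
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for key, gval in g.items():
            if key in out:
                sub = unify(out[key], gval)
                if sub is None:
                    return None          # clash somewhere below this feature
                out[key] = sub
            else:
                out[key] = gval          # feature only in g: just add it
        return out
    return f if f == g else None         # atomic values must match exactly

np = {"cat": "NP", "agr": {"num": "sg"}}
det = {"agr": {"num": "sg", "per": 3}}
print(unify(np, det))                    # → {'cat': 'NP', 'agr': {'num': 'sg', 'per': 3}}
print(unify(np, {"agr": {"num": "pl"}}))  # → None (sg vs. pl clash)
```

The cost the paper attacks becomes visible once values may be disjunctions: naive expansion multiplies out all alternatives before unifying, whereas their method keeps each disjunction in place and checks consistency lazily.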
1988
35
THE INTERPRETATION OF RELATIONAL NOUNS Joe de Bruin and Remko Scha BBN Laboratories 10 Moulton Street Cambridge, MA 02238, USA ABSTRACT This paper describes a computational treatment of the semantics of relational nouns. It covers relational nouns such as "sister" and "commander", and focuses especially on a particular subcategory of them, called function nouns ("speed", "distance", "rating"). Relational nouns are usually viewed as either requiring non-compositional semantic interpretation, or causing an undesirable proliferation of syntactic rules. In contrast to this, we present a treatment which is both syntactically uniform and semantically compositional. The core ideas of this treatment are: (1) The recognition of different levels of semantic analysis; in particular, the distinction between an English-oriented and a domain-oriented level of meaning representation. (2) The analysis of relational nouns as denoting
1988
4
QUANTIFIER SCOPING IN THE SRI CORE LANGUAGE ENGINE Douglas B. Moran Artificial Intelligence Center SRI International 333 Ravenswood Avenue Menlo Park, California 94025, USA ABSTRACT An algorithm for generating the possible quantifier scopings for a sentence, in order of preference, is outlined. The scoping assigned to a quantifier is determined by its interactions with other quantifiers, modals, negation, and certain syntactic-constituent boundaries. When a potential scoping is logically equivalent to another, the less preferred one is discarded. The relative scoping preferences of the individual quantifiers are not embedded in the algorithm, but are specified by a set of rules. Many of the rules presented here have appeared in the linguistics literature and have been used in various natural language processing systems. However, the coordination of these rules and the resulting coverage repres
1988
5
A General Computational Treatment of Comparatives for Natural Language Question Answering Bruce W. Ballard AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, N.J. 07974 Abstract We discuss the techniques we have developed and implemented for the cross-categorial treatment of comparatives in TELI, a natural language question-answering system that's transportable among both application domains and types of backend retrieval systems. For purposes of illustration, we shall consider the example sentences "List the cars at least 20 inches more than twice as long as the Century is wide" and "Have any US companies made at least 3 more large cars than Buick?" Issues to be considered include comparative inflections, left recursion and other forms of nesting, extraposition of comparative complements, ellipsis, the wh element "how", and the translation of normalized parse trees into logical form. 1. Introduction We sh
1988
6
PARSING AND INTERPRETING COMPARATIVES Manny Rayner SICS Box 1263, S-164 28 KISTA Sweden Amelie Banks UPMAIL Box 1205, S-750 02 UPPSALA Sweden Tel: +46 8 752 15 00 Tel: +46 18 181051 ABSTRACT 1. INTRODUCTION We describe a fairly comprehensive handling of the syntax and semantics of comparative constructions. The analysis is largely based on the theory developed by Pinkham, but we advance arguments to support a different handling of phrasal comparatives - in particular, we use direct interpretation instead of C-ellipsis. We explain the reasons for dividing comparative sentences into different categories, and for each category we give an example of the corresponding Montague semantics. The ideas have all been implemented within a large-scale grammar for Swedish. This paper is written with two distinct audiences in mind. On the
1988
7
Defining the Semantics of Verbal Modifiers in the Domain of Cooking Tasks Robin F. Karlin Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104-6389 Abstract SEAFACT (Semantic Analysis For the Animation of Cooking Tasks) is a natural language interface to a computer-generated animation system operating in the domain of cooking tasks. SEAFACT allows the user to specify cooking tasks using a small subset of English. The system analyzes English input and produces a representation of the task which can drive motion synthesis procedures. This paper describes the semantic analysis of verbal modifiers on which the SEAFACT implementation is based. Introduction SEAFACT is a natural language interface to a computer-generated animation system (Karlin, 1988). SEAFACT operates in the domain of cooking tasks. The domain is limited to a mini-world consisting of a small set of verbs ch
1988
8
THE INTERPRETATION OF TENSE AND ASPECT IN ENGLISH Mary Dalrymple Artificial Intelligence Center SRI International 333 Ravenswood Avenue Menlo Park, California 94025 USA ABSTRACT An analysis of English tense and aspect is presented that specifies temporal precedence relations within a sentence. The relevant reference points for interpretation are taken to be the initial and terminal points of events in the world, as well as two "hypothetical" times: the perfect time (when a sentence contains perfect aspect) and the progressive or during time. A method for providing temporal interpretation for nontensed elements in the sentence is also described. 1. Introduction The analysis of tense and aspect requires specifying what relations can or cannot hold among times and events, given a sentence describing those events. For example, a specification of the meaning of the past-tense sentence "John ate a cake" involve
1988
9
A TRANSFER MODEL USING A TYPED FEATURE STRUCTURE REWRITING SYSTEM WITH INHERITANCE Rémi Zajac ATR Interpreting Telephony Research Laboratories Sanpeidani Inuidani, Seika-cho, Soraku-gun, Kyoto 619-02, Japan [zajac%[email protected]] ABSTRACT We propose a model for transfer in machine translation which uses a rewriting system for typed feature structures. The grammar definitions describe transfer relations which are applied on the input structure (a typed feature structure) by the interpreter to produce all possible transfer pairs. The formalism is based on the semantics of typed feature structures as described in [Aït-Kaci 84]. INTRODUCTION We propose a new model for transfer in machine translation of dialogues. The goal is twofold: to develop a linguistically-based theory for transfer, and to develop a computer formalism with which we can implement such a theory, and which can be integrated with a unification-bas
1989
1
Word Association Norms, Mutual Information, and Lexicography Kenneth Ward Church Bell Laboratories Murray Hill, N.J. Patrick Hanks Collins Publishers Glasgow, Scotland Abstract The term word association is used in a very particular sense in the psycholinguistic literature. (Generally speaking, subjects respond quicker than normal to the word "nurse" if it follows a highly associated word such as "doctor.") We will extend the term to provide the basis for a statistical description of a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type (content word/content word) to lexico-syntactic co-occurrence constraints between verbs and prepositions (content word/function word). This paper will propose a new objective measure based on the information theoretic notion of mutual information, for estimating word association norms from computer readable cor
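The measure Church and Hanks propose is pointwise mutual information, I(x, y) = log2(P(x, y) / (P(x) P(y))), estimated from co-occurrence counts in a corpus. The sketch below computes it from a toy list of co-occurrence pairs; the pairs and the `pmi_table` name are invented for illustration, and a real application would count pairs within a window over a large corpus.

```python
import math
from collections import Counter

def pmi_table(pairs, min_count=1):
    """Pointwise mutual information I(x, y) = log2(P(x, y) / (P(x) * P(y)))
    estimated from a list of (word, word) co-occurrence pairs."""
    xy = Counter(pairs)                      # joint counts
    x = Counter(a for a, _ in pairs)         # marginal counts, first slot
    y = Counter(b for _, b in pairs)         # marginal counts, second slot
    n = len(pairs)
    return {
        (a, b): math.log2((c / n) / ((x[a] / n) * (y[b] / n)))
        for (a, b), c in xy.items() if c >= min_count
    }

# Toy corpus: "doctor" co-occurs with "nurse" far more than chance predicts
pairs = ([("doctor", "nurse")] * 8 + [("doctor", "car")] * 2 +
         [("strong", "tea")] * 6 + [("strong", "car")] * 4)
scores = pmi_table(pairs)
print(round(scores[("doctor", "nurse")], 2))  # → 1.0 (log2 of a 2x association)
```

A score above 0 means the pair co-occurs more often than independence would predict, which is exactly the doctor/nurse effect the abstract describes; the `min_count` cutoff reflects the paper's caution that the estimate is unreliable for rare pairs.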
1989
10
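The Church & Hanks entry above proposes estimating word association norms with the information-theoretic measure I(x, y) = log2(P(x, y) / (P(x)P(y))). A minimal sketch of that idea on a toy token list (the window size, toy corpus, and simple relative-frequency estimator here are illustrative assumptions, not the paper's exact procedure):

```python
import math
from collections import Counter

def pmi_table(tokens, window=3):
    """Pointwise mutual information I(x, y) = log2(P(x, y) / (P(x) P(y)))
    for ordered word pairs co-occurring within `window` following tokens,
    with probabilities estimated by simple relative frequency."""
    unigrams = Counter(tokens)
    pairs = Counter()
    for i, x in enumerate(tokens):
        for y in tokens[i + 1 : i + 1 + window]:
            pairs[(x, y)] += 1
    n = len(tokens)
    n_pairs = sum(pairs.values())
    scores = {}
    for (x, y), c in pairs.items():
        p_xy = c / n_pairs
        p_x = unigrams[x] / n
        p_y = unigrams[y] / n
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores

# Toy "corpus": the associated pair doctor/nurse should score above
# the uninformative pair the/the.
tokens = "the doctor met the nurse and the doctor paged the nurse".split()
scores = pmi_table(tokens)
print(scores[("doctor", "nurse")] > scores[("the", "the")])  # → True
```

On real data the same computation runs over corpus-scale counts, where (as the abstract notes) the measure separates doctor/nurse-style content-word associations from verb/preposition co-occurrence constraints.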
LEXICAL ACCESS IN CONNECTED SPEECH RECOGNITION Ted Briscoe Computer Laboratory University of Cambridge Cambridge, CB2 3QG, UK. ABSTRACT This paper addresses two issues concerning lexical access in connected speech recognition: 1) the nature of the pre-lexical representation used to initiate lexical look-up 2) the points at which lexical look-up is triggered off this representation. The results of an experiment are reported which was designed to evaluate a number of access strategies proposed in the literature in conjunction with several plausible pre-lexical representations of the speech input. The experiment also extends previous work by utilising a dictionary database containing a realistic rather than illustrative English vocabulary. THEORETICAL BACKGROUND In most recent work on the process of word recognition during comprehension of connected speech (either by human or machine) a distinction is made between lexical acce
1989
11
DICTIONARIES, DICTIONARY GRAMMARS AND DICTIONARY ENTRY PARSING Mary S. Neff IBM T. J. Watson Research Center, P. O. Box 704, Yorktown Heights, New York 10598 Branimir K. Boguraev IBM T. J. Watson Research Center, P. O. Box 704, Yorktown Heights, New York 10598; Computer Laboratory, University of Cambridge, New Museums Site, Cambridge CB2 3QG Computerist: ... But, great Scott, what about structure? You can't just bang that lot into a machine without structure. Half a gigabyte of sequential file ... Lexicographer: Oh, we know all about structure. Take this entry for example. You see here italics as the typical ambiguous structural element marker, being apparently used as an undefined phrase-entry lemma, but in fact being the subordinate entry headword address preceding the small-cap cross-reference headword address which is nested within the gloss to a defined phrase entry, itself nested within a subordinate (bold lower-case letter) sense section in t
1989
12
SOME CHART-BASED TECHNIQUES FOR PARSING ILL-FORMED INPUT Chris S. Mellish Department of Artificial Intelligence, University of Edinburgh, 80 South Bridge, EDINBURGH EH1 1HN, Scotland. ABSTRACT We argue for the usefulness of an active chart as the basis of a system that searches for the globally most plausible explanation of failure to syntactically parse a given input. We suggest semantics-free, grammar-independent techniques for parsing inputs displaying simple kinds of ill-formedness and discuss the search issues involved. THE PROBLEM Although the ultimate solution to the problem of processing ill-formed input must take into account semantic and pragmatic factors, nevertheless it is important to understand the limits of recovery strategies that are based entirely on syntax and which are independent of any particular grammar. The aim of this work is therefore to explore purely syntactic and grammar-independent
1989
13
ON REPRESENTING GOVERNED PREPOSITIONS AND HANDLING "INCORRECT" AND NOVEL PREPOSITIONS Hatte R. Blejer, Sharon Flank, and Andrew Kehler SRA Corporation 2000 15th St. North Arlington, VA 22201, USA ABSTRACT NLP systems, in order to be robust, must handle novel and ill-formed input. One common type of error involves the use of non-standard prepositions to mark arguments. In this paper, we argue that such errors can be handled in a systematic fashion, and that a system designed to handle them offers other advantages. We offer a classification scheme for preposition usage errors. Further, we show how the knowledge representation employed in the SRA NLP system facilitates handling these data. 1.0 INTRODUCTION It is well known that NLP systems, in order to be robust, must handle ill-formed input. One common type of error involves the use of non-standard prepo
1989
14
ACQUIRING DISAMBIGUATION RULES FROM TEXT Donald Hindle AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, NJ 07974-2070 Abstract An effective procedure for automatically acquiring a new set of disambiguation rules for an existing deterministic parser on the basis of tagged text is presented. Performance of the automatically acquired rules is much better than the existing hand-written disambiguation rules. The success of the acquired rules depends on using the linguistic information encoded in the parser; enhancements to various components of the parser improve the acquired rule set. This work suggests a path toward more robust and comprehensive syntactic analyzers. 1 Introduction One of the most serious obstacles to developing parsers to effectively analyze unrestricted English is the difficulty of creating sufficiently comprehensive grammars. While it is possible to develop toy grammars for partic
1989
15
THE EFFECTS OF INTERACTION ON SPOKEN DISCOURSE Sharon L. Oviatt Philip R. Cohen Artificial Intelligence Center SRI International 333 Ravenswood Avenue Menlo Park, California 94025-3493 ABSTRACT Near-term spoken language systems will likely be limited in their interactive capabilities. To design them, we shall need to model how the presence or absence of speaker interaction influences spoken discourse patterns in different types of tasks. In this research, a comprehensive examination is provided of the discourse structure and performance efficiency of both interactive and noninteractive spontaneous speech in a seriated assembly task. More specifically, telephone dialogues and audiotape monologues are compared, which represent opposites in terms of the opportunity for confirmation feedback and clarification subdialogues. Keyboard communication patterns, upon which most natural language heuristics and al
1989
16
How to cover a grammar René Leermakers Philips Research Laboratories, P.O. Box 80.000 5600 JA Eindhoven, The Netherlands ABSTRACT A novel formalism is presented for Earley-like parsers. It accommodates the simulation of non-deterministic pushdown automata. In particular, the theory is applied to non-deterministic LR-parsers for RTN grammars. 1 Introduction A major problem of computational linguistics is the inefficiency of parsing natural language. The most popular parsing method for context-free natural language grammars, is the general context-free parsing method of Earley [1]. It was noted by Lang [2], that Earley-like methods can be used for simulating a class of non-deterministic pushdown automata (NPDA). Recently, Tomita [3] presented an algorithm that simulates non-deterministic LR-parsers, and claimed it to be a fast algorithm for practical natural language processing systems. The purpose of the present pa
1989
17
The Structure of Shared Forests in Ambiguous Parsing Sylvie Billot Bernard Lang INRIA and Université d'Orléans [email protected] [email protected] Abstract The Context-Free backbone of some natural language analyzers produces all possible CF parses as some kind of shared forest, from which a single tree is to be chosen by a disambiguation process that may be based on the finer features of the language. We study the structure of these forests with respect to optimality of sharing, and in relation with the parsing schema used to produce them. In addition to a theoretical and experimental framework for studying these issues, the main results presented are: - sophistication in chart parsing schemata (e.g. use of look-ahead) may reduce time and space efficiency instead of improving it, - there is a shared forest structure with at most cubic size for any CF grammar, - when O(n^3) complexity is required,
1989
18
A Calculus for Semantic Composition and Scoping Fernando C.N. Pereira Artificial Intelligence Center, SRI International 333 Ravenswood Ave., Menlo Park, CA 94025, USA Abstract Certain restrictions on possible scopings of quantified noun phrases in natural language are usually expressed in terms of formal constraints on binding at a level of logical form. Such reliance on the form rather than the content of semantic interpretations goes against the spirit of compositionality. I will show that those scoping restrictions follow from simple and fundamental facts about functional application and abstraction, and can be expressed as constraints on the derivation of possible meanings for sentences rather than constraints of the alleged forms of those meanings. 1 An Obvious Constraint? Treatments of quantifier scope in Montague grammar (Montague, 1973; Dowty et al., 1981; Cooper, 1983), transformational grammar (Reinha
1989
19
A Semantic-Head-Driven Generation Algorithm for Unification-Based Formalisms Stuart M. Shieber,* Gertjan van Noord,† Robert C. Moore,* and Fernando C. N. Pereira* *Artificial Intelligence Center SRI International Menlo Park, CA 94025, USA †Department of Linguistics Rijksuniversiteit Utrecht Utrecht, Netherlands Abstract We present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable. In particular, unlike an Earley deduction generator (Shieber, 1988), it allows use of semantically nonmonotonic grammars, yet unlike top-down methods, it also permits left-recursion. The enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semantic-head-driven fashion. 1 Introduction The problem of generating a well-for
1989
2
A General Computational Treatment Of The Comparative Carol Friedman* Courant Institute of Mathematical Sciences New York University 715 Broadway, Room 709 New York, NY 10005 Abstract We present a general treatment of the comparative that is based on more basic linguistic elements so that the underlying system can be effectively utilized: in the syntactic analysis phase, the comparative is treated the same as similar structures; in the syntactic regularization phase, the comparative is transformed into a standard form so that subsequent processing is basically unaffected by it. The scope of quantifiers under the comparative is also integrated into the system in a general way. 1 Introduction Recently there has been interest in the development of a general computational treatment of the comparative. Last year at the Annual ACL Meeting, two papers were presented on the comparative by Ballard [1] and Rayn
1989
20
THE LEXICAL SEMANTICS OF COMPARATIVE EXPRESSIONS IN A MULTI-LEVEL SEMANTIC PROCESSOR Duane E. Olawsky Computer Science Dept. University of Minnesota 4-192 EE/CSci Building 200 Union Street SE Minneapolis, MN 55455 [olawsky@umn-cs.es.umn.edu] ABSTRACT Comparative expressions (CEs) such as "bigger than" and "more oranges than" are highly ambiguous, and their meaning is context dependent. Thus, they pose problems for the semantic interpretation algorithms typically used in natural language database interfaces. We focus on the comparison attribute ambiguities that occur with CEs. To resolve these ambiguities our natural language interface interacts with the user, finding out which of the possible interpretations was intended. Our multi-level semantic processor facilitates this interaction by recognizing the occurrence of comparison attribute ambiguity and then calculating and presenting a list of can
1989
21
AUTOMATIC ACQUISITION OF THE LEXICAL SEMANTICS OF VERBS FROM SENTENCE FRAMES* Mort Webster and Mitch Marcus Department of Computer and Information Science University of Pennsylvania 200 S. 33rd Street Philadelphia, PA 19104 ABSTRACT This paper presents a computational model of verb acquisition which uses what we will call the principle of structured overcommitment to eliminate the need for negative evidence. The learner escapes from the need to be told that certain possibilities cannot occur (i.e., are "ungrammatical") by one simple expedient: It assumes that all properties it has observed are either obligatory or forbidden until it sees otherwise, at which point it decides that what it thought was either obligatory or forbidden is merely optional. This model is built upon a classification of verbs based upon a simple three-valued set of features which represents key aspects of a verb's syntactic structure, i
1989
22
COMPUTER AIDED INTERPRETATION OF LEXICAL COOCCURRENCES Paola Velardi (*) Maria Teresa Pazienza (**) (*) University of Ancona, Istituto di Informatica, via Brecce Bianche, Ancona (**) University of Roma, Dip. di Informatica e Sistemistica, via Buonarroti 12, Roma ABSTRACT This paper addresses the problem of developing a large semantic lexicon for natural language processing. The increasing availability of machine readable documents offers an opportunity to the field of lexical semantics, by providing experimental evidence of word uses (on-line texts) and word definitions (on-line dictionaries). The system presented hereafter, PETRARCA, detects word cooccurrences from a large sample of press agency releases on finance and economics, and uses these associations to build a case-based semantic lexicon. Syntactically valid cooccurrences including a new word W are detected by a high-coverage morphosyntactic analyzer. Syntacti
1989
23
A HYBRID APPROACH TO REPRESENTATION IN THE JANUS NATURAL LANGUAGE PROCESSOR Ralph M. Weischedel BBN Systems and Technologies Corporation 10 Moulton St. Cambridge, MA 02138 Abstract In BBN's natural language understanding and generation system (Janus), we have used a hybrid approach to representation, employing an intensional logic for the representation of the semantics of utterances and a taxonomic language with formal semantics for specification of descriptive constants and axioms relating them. Remarkably, 99.9% of 7,000 vocabulary items in our natural language applications could be adequately axiomatized in the taxonomic language. 1. Introduction Hybrid representation systems have been explored before [9, 24, 31], but until now only one has been used in an extensive natural language processing system. KL-TWO [31], based on a propositional logic, was at the core of the mapping from formulae to lexical ite
1989
24
PLANNING TEXT FOR ADVISORY DIALOGUES* Johanna D. Moore UCLA Department of Computer Science and USC/Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695, USA Cécile L. Paris USC/Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695, USA ABSTRACT Explanation is an interactive process requiring a dialogue between advice-giver and advice-seeker. In this paper, we argue that in order to participate in a dialogue with its users, a generation system must be capable of reasoning about its own utterances and therefore must maintain a rich representation of the responses it produces. We present a text planner that constructs a detailed text plan, containing the intentional, attentional, and rhetorical structures of the text it generates. INTRODUCTION Providing explanations in an advisory situation is a highly interactive process, requiring a dialogue between
1989
25
Two Constraints on Speech Act Ambiguity Elizabeth A. Hinkelman and James F. Allen Computer Science Department The University of Rochester Rochester, New York 14627 ABSTRACT Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how "Can you pass the salt?" is a typical indirect request while "Are you able to pass the salt?" is not. 1. The Problem Full natural language systems must recognize speakers' intentions in an utterance. T
1989
26
TREATMENT OF LONG DISTANCE DEPENDENCIES IN LFG AND TAG: FUNCTIONAL UNCERTAINTY IN LFG IS A COROLLARY IN TAG* Aravind K. Joshi Dept. of Computer & Information Science University of Pennsylvania Philadelphia, PA 19104 [email protected] K. Vijay-Shanker Dept. of Computer & Information Science University of Delaware Newark, DE 19716 [email protected] ABSTRACT In this paper the functional uncertainty machinery in LFG is compared with the treatment of long distance dependencies in TAG. It is shown that the functional uncertainty machinery is redundant in TAG, i.e., what functional uncertainty accomplishes for LFG follows from the TAG formalism itself and some aspects of the linguistic theory instantiated in TAG. It is also shown that the analyses provided by the functional uncertainty machinery can be obtained without requiring power beyond mildly context-sensitive grammars. Some linguisti
1989
27
TREE UNIFICATION GRAMMAR Fred Popowich School of Computing Science Simon Fraser University Burnaby, B.C. CANADA V5A 1S6 ABSTRACT Tree Unification Grammar is a declarative unification-based linguistic framework. The basic grammar structures of this framework are partial descriptions of trees, and the framework requires only a single grammar rule to combine these partial descriptions. Using this framework, constraints associated with various linguistic phenomena (reflexivisation in particular) can be stated succinctly in the lexicon. INTRODUCTION There is a trend in unification-based grammar formalisms towards using a single grammar structure to contain the phonological, syntactic and semantic information associated with a linguistic expression. Adopting the terminology used by Pollard and Sag (1987), this grammar structure is called a sign. Grammar rules, guided by the syntactic information contained in signs, are used to deriv
1989
28
A GENERALIZATION OF THE OFFLINE PARSABLE GRAMMARS Andrew Haas BBN Systems and Technologies, 10 Moulton St., Cambridge MA. 02138 ABSTRACT The offline parsable grammars apparently have enough formal power to describe human language, yet the parsing problem for these grammars is solvable. Unfortunately they exclude grammars that use x-bar theory - and these grammars have strong linguistic justification. We define a more general class of unification grammars, which admits x-bar grammars while preserving the desirable properties of offline parsable grammars. Consider a unification grammar based on term unification. A typical rule has the form t0 -> t1 ... tn, where t0 is a term of first order logic, and t1 ... tn are either terms or terminal symbols. Those ti which are terms are called the top-level terms of the rule. Suppose that no top-level term is a variable. Then erasing the arguments of the top-level terms gives a
1989
29
A THREE-VALUED INTERPRETATION OF NEGATION IN FEATURE STRUCTURE DESCRIPTIONS Anuj Dawar Dept. of Comp. and Info. Science University of Pennsylvania Philadelphia, PA 19104 K. Vijay-Shanker Dept. of Comp. and Info. Science University of Delaware Newark, DE 19716 April 20, 1989 ABSTRACT Feature structures are informational elements that have been used in several linguistic theories and in computational systems for natural-language processing. A logical calculus has been developed and used as a description language for feature structures. In the present work, a framework in three-valued logic is suggested for defining the semantics of a feature structure description language, allowing for a more complete set of logical operators. In particular, the semantics of the negation and implication operators are examined. Various proposed interpretations of negation and implication are compared within the sugge
1989
3
DISCOURSE ENTITIES IN JANUS Damaris M. Ayuso BBN Systems and Technologies Corporation 10 Moulton Street Cambridge, Massachusetts 02138 [email protected] Abstract This paper addresses issues that arose in applying the model for discourse entity (DE) generation in B. Webber's work (1978, 1983) to an interactive multi-modal interface. Her treatment was extended in 4 areas: (1) the notion of context dependence of DEs was formalized in an intensional logic, (2) the treatment of DEs for indefinite NPs was modified to use skolem functions, (3) the treatment of dependent quantifiers was generalized, and (4) DEs originating from non-linguistic sources, such as pointing actions, were taken into account. The discourse entities are used in intra- and extra-sentential pronoun resolution in BBN Janus. 1 Introduction Discourse entities (DEs) are descriptions of objects, groups of objects, events, etc. from the real world or from
1989
30
EVALUATING DISCOURSE PROCESSING ALGORITHMS Marilyn A. Walker Hewlett Packard Laboratories Filton Rd., Bristol, England BS12 6QZ, U.K. & University of Pennsylvania lyn%lwalker@hplb.hpl.hp.com Abstract In order to take steps towards establishing a methodology for evaluating Natural Language systems, we conducted a case study. We attempt to evaluate two different approaches to anaphoric processing in discourse by comparing the accuracy and coverage of two published algorithms for finding the co-specifiers of pronouns in naturally occurring texts and dialogues. We present the quantitative results of hand-simulating these algorithms, but this analysis naturally gives rise to both a qualitative evaluation and recommendations for performing such evaluations in general. We illustrate the general difficulties encountered with quantitative evaluation. These are problems with: (a) allowing for underlying assumptions, (b)
1989
31
A COMPUTATIONAL MECHANISM FOR PRONOMINAL REFERENCE Robert J. P. Ingria David Stallard BBN Systems and Technologies, Incorporated 10 Moulton Street Mailstop 009 Cambridge, MA 02238 ABSTRACT the syntactically impossible antecedents. This latter This paper describes an implemented mechanism for handling bound anaphora, disjoint reference, and pronominal reference. The algorithm maps over every node in a parse tree in a left-to-right, depth first manner. Forward and backwards coreference, and disjoint reference are assigned during this tree walk. A semantic interpretation procedure is used to deal with multiple antecedents. 1. INTRODUCTION This paper describes an implemented mechanism for assigning antecedents to bound anaphors and personal pronouns, and for establishing disjoint reference between Noun Phrases. This mechanism is part of the BBN Spoken Language System (Boisen, et al. (1989)). The algorithm used is i
1989
32
PARSING AS NATURAL DEDUCTION Esther König Universität Stuttgart Institut für Maschinelle Sprachverarbeitung, Keplerstrasse 17, D-7000 Stuttgart 1, FRG Abstract The logic behind parsers for categorial grammars can be formalized in several different ways. Lambek Calculus (LC) constitutes an example for a natural deduction style parsing method. In natural language processing, the task of a parser usually consists in finding derivations for all different readings of a sentence. The original Lambek Calculus, when it is used as a parser/theorem prover, has the undesirable property of allowing for the derivation of more than one proof for a reading of a sentence, in the general case. In order to overcome this inconvenience and to turn Lambek Calculus into a reasonable parsing method, we show the existence of "relative" normal form proof trees and make use of their properties to constrain the proof procedure in the d
1989
33
EFFICIENT PARSING FOR FRENCH* Claire Gardent University Blaise Pascal - Clermont II and University of Edinburgh, Centre for Cognitive Science, 2 Buccleuch Place, Edinburgh EH8 9LW, SCOTLAND, UK Gabriel G. Bès, Pierre-François Jude and Karine Baschung, Université Blaise Pascal - Clermont II, Formation Doctorale Linguistique et Informatique, 34, Ave. Carnot, 63037 Clermont-Ferrand Cedex, FRANCE ABSTRACT Parsing with categorial grammars often leads to problems such as proliferating lexical ambiguity, spurious parses and overgeneration. This paper presents a parser for French developed on a unification based categorial grammar (FG) which avoids these problems. This parser is a bottom-up chart parser augmented with a heuristic eliminating spurious parses. The unicity and completeness of parsing are proved. INTRODUCTION Our aim is twofold. First to provide a linguistically well motivated categorial grammar for French (hencef
1989
34
LOGICAL FORMS IN THE CORE LANGUAGE ENGINE Hiyan Alshawi & Jan van Eijck SRI International Cambridge Research Centre 23 Millers Yard, Mill Lane, Cambridge CB2 11ZQ, U.K. Keywords: logical form, natural language, semantics ABSTRACT This paper describes a 'Logical Form' target language for representing the literal meaning of English sentences, and an intermediate level of representation ('Quasi Logical Form') which engenders a natural separation between the compositional semantics and the processes of scoping and reference resolution. The approach has been implemented in the SRI Core Language Engine which handles the English constructions discussed in the paper. INTRODUCTION The SRI Core Language Engine (CLE) is a domain independent system for translating English sentences into formal representations of their literal meanings which are capable of supporting reasoning (Alshawi et al. 1988). The CLE has two m
1989
4
Unification-Based Semantic Interpretation Robert C. Moore Artificial Intelligence Center SRI International Menlo Park, CA 94025 Abstract We show how unification can be used to specify the semantic interpretation of natural-language expressions, including problematical constructions involving long-distance dependencies. We also sketch a theoretical foundation for unification-based semantic interpretation, and compare the unification-based approach with more conventional techniques based on the lambda calculus. 1 Introduction Over the past several years, unification-based formalisms (Shieber, 1986) have come to be widely used for specifying the syntax of natural languages, particularly among computational linguists. It is less widely realized by computational linguists that unification can also be a powerful tool for specifying the semantic interpretation of natural languages. While many of the
1989
5
REFERENCE TO LOCATIONS Lewis G. Creary, J. Mark Gawron, and John Nerbonne Hewlett-Packard Laboratories, 3U 1501 Page Mill Road Palo Alto, CA 94304-1126 Abstract 1.1 Sketch of Proposal We propose a semantics for locative expressions such as near Jones or west of Denver, an important subsystem for NLP applications. Locative expressions denote regions of space, and serve as arguments to predicates, locating objects and events spatially. Since simple locatives occupy argument positions, they do NOT participate in scope ambiguities, pace one common view, which sees locatives as logical operators. Our proposal justifies common representational practice in computational linguistics, accounting for how locative expressions function anaphorically, and explaining a wide range of inference involving locatives. We further demonstrate how the argument analysis may accommodate multiple locative arguments in a s
1989
6
GETTING AT DISCOURSE REFERENTS Rebecca J. Passonneau UNISYS, Paoli Research Center P.O. Box 517, Paoli, PA 19301, USA ABSTRACT I examine how discourse anaphoric uses of the definite pronoun it contrast with similar uses of the demonstrative pronoun that. Their distinct contexts of use are characterized in terms of two contextual features--persistence of grammatical subject and persistence of grammatical form--which together demonstrate very clearly the interrelation among lexical choice, grammatical choices and the dimension of time in signalling the dynamic attentional state of a discourse. 1 Introduction Languages vary in the number and kinds of grammatical distinctions encoded in their nominal and pronominal systems. Language specific means for explicitly mentioning and re-mentioning discourse entities constrain what Grosz and Sidner refer to as the linguistic structure of discourse [2]. This in turn co
1989
7
CONVERSATIONALLY RELEVANT DESCRIPTIONS Amichai Kronfeld Natural Language Incorporated 2910 Seventh St. Berkeley, CA 94710 1 Abstract Conversationally relevant descriptions are definite descriptions that are not merely tools for the identification of a referent, but are also crucial to the discourse in other respects. I analyze the uses of such descriptions in assertions as conveying a particular type of conversational implicatures. Such implicatures can be represented within the framework of possible world semantics. The analysis is extended to non-assertive illocutionary acts on the one hand, and to indefinite descriptions on the other. 2 Introduction In an earlier paper [Kronfeld 1986b] I have introduced the distinction between functionally and conversationally relevant descriptions. All uses of definite descriptions for the purpose of referring are functional in the sense that they are supposed to identi
1989
8
COOKING UP REFERRING EXPRESSIONS Robert Dale Centre for Cognitive Science, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, Scotland email: rda~uk, ac. ed. epJ.stemi~nss, c s. ucl. ac. uk ABSTRACT This paper describes the referring expression generation mechanisms used in EPICURE, a computer program which produces natural language descriptions of cookery recipes. Major features of the system include: an underlying ontology which permits the representation of non-singular entities; a notion of discriminatory power, to determine what properties should be used in a description; and a PATR-like unification grammar to produce surface linguistic strings. INTRODUCTION EPICURE (Dale 1989a, 1989b) is a natural language generation system whose principal concern is the generation of referring expressions which pick out complex entities in connected discourse. In particular, the system generates natural lang
1989
9
POLYNOMIAL TIME PARSING OF COMBINATORY CATEGORIAL GRAMMARS* K. Vijay-Shanker Department of CIS University of Delaware Delaware, DE 19716 David J. Weir Department of EECS Northwestern University Evanston, IL 60208 Abstract In this paper we present a polynomial time parsing algorithm for Combinatory Categorial Grammar. The recognition phase extends the CKY algorithm for CFG. The process of generating a representation of the parse trees has two phases. Initially, a shared forest is built that encodes the set of all derivation trees for the input string. This shared forest is then pruned to remove all spurious ambiguity. 1 Introduction Combinatory Categorial Grammar (CCG) [7, 5] is an extension of Classical Categorial Grammar in which both function composition and function application are allowed. In addition, forward and backward slashes are used to place conditions on the relative ordering of adjacent ca
1990
1
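The Vijay-Shanker & Weir entry above builds its recognition phase on a CKY-style chart. As a minimal, hedged illustration of that chart shape (plain categorial application only, forward X/Y Y => X and backward Y X\Y => X, with a toy lexicon; the paper's actual algorithm also handles CCG composition and shared-forest construction, which are omitted here):

```python
from collections import defaultdict

def combine(a, b):
    """Combine two category strings by forward or backward application.
    Categories use a flat left-associative notation, e.g. 'S\\NP/NP'."""
    results = set()
    if a.endswith("/" + b):          # forward application: X/Y Y => X
        results.add(a[: -(len(b) + 1)])
    if b.endswith("\\" + a):         # backward application: Y X\Y => X
        results.add(b[: -(len(a) + 1)])
    return results

def recognize(words, lexicon, goal="S"):
    """CKY-style recognizer: chart[(i, j)] holds the categories
    derivable for the span words[i:j]."""
    n = len(words)
    chart = defaultdict(set)
    for i, w in enumerate(words):
        chart[(i, i + 1)] = set(lexicon[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a in chart[(i, k)]:
                    for b in chart[(k, j)]:
                        chart[(i, j)] |= combine(a, b)
    return goal in chart[(0, n)]

# Toy lexicon (illustrative assumption): a transitive verb is S\NP/NP.
lexicon = {"Mary": ["NP"], "sees": ["S\\NP/NP"], "John": ["NP"]}
print(recognize("Mary sees John".split(), lexicon))  # → True
```

The cubic span/split loop structure is what the abstract's polynomial bound rests on; extending `combine` with composition rules is what introduces the spurious ambiguity that the paper's pruning phase then removes.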
Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation Marilyn Walker University of Pennsylvania* Computer Science Dept. Philadelphia, PA 19104 [email protected] Steve Whittaker Hewlett Packard Laboratories Bristol, England BS12 6QZ HP Stanford Science Center [email protected] Abstract Conversation between two people is usually of MIXED-INITIATIVE, with CONTROL over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphor
1990
10
Performatives in a Rationally Based Speech Act Theory* Philip R. Cohen Artificial Intelligence Center and Center for the Study of Language and Information SRI International 333 Ravenswood Ave. Menlo Park, CA 94025 and Hector J. Levesque Department of Computer Science University of Toronto Abstract 1 Introduction A crucially important adequacy test of any theory of speech acts is its ability to handle performatives. This paper provides a theory of performatives as a test case for our rationally based theory of illocutionary acts. We show why "I request you..." is a request, and "I lie to you that p" is self-defeating. The analysis supports and extends earlier work of theorists such as Bach and Harnish [1] and takes issue with recent claims by Searle [10] that such performative-as-declarative analyses are doomed to failure. *This paper was made possible by a contract from ATR International to SR
1990
11
NORMAL STATE IMPLICATURE Nancy L. Green Department of Computer and Information Sciences University of Delaware Newark, Delaware 19716, USA Abstract In the right situation, a speaker can use an unqualified indefinite description without being misunderstood. This use of language, normal state implicature, is a kind of conversational implicature, i.e. a non-truth-functional context-dependent inference based upon language users' awareness of principles of cooperative conversation. I present a convention for identifying normal state implicatures which is based upon mutual beliefs of the speaker and hearer about certain properties of the speaker's plan. A key property is the precondition that an entity playing a role in the plan must be in a normal state with respect to the plan. 1 Introduction In the right situation, a speaker can use an unqualified indefinite description without being misunderstood. For examp
1990
12
THE COMPUTATIONAL COMPLEXITY OF AVOIDING CONVERSATIONAL IMPLICATURES Ehud Reiter† Aiken Computation Lab Harvard University Cambridge, Mass 02138 ABSTRACT Referring expressions and other object descriptions should be maximal under the Local Brevity, No Unnecessary Components, and Lexical Preference preference rules; otherwise, they may lead hearers to infer unwanted conversational implicatures. These preference rules can be incorporated into a polynomial time generation algorithm, while some alternative formalizations of conversational implicature make the generation task NP-Hard. 1. Introduction Natural language generation (NLG) systems should produce referring expressions and other object descriptions that are free of false implicatures, i.e., that do not cause the user of the system to infer incorrect and unwanted conversational implicatures (Grice 1975). The following utterances illustrate referring
1990
13
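The preference rules above can be illustrated with a toy greedy describer in the spirit of the No Unnecessary Components rule: an attribute enters the description only if it rules out at least one remaining distractor, so no component of the result is redundant. This is not the paper's algorithm, and the domain objects are invented:

```python
# Toy description generator: add an attribute only while it does useful
# work (removes distractors).  Illustrates the "No Unnecessary Components"
# idea; the attribute order and the objects below are invented examples.
def describe(target, distractors, attr_order):
    desc, pool = {}, list(distractors)
    for attr in attr_order:
        if not pool:                       # target already distinguished
            break
        val = target[attr]
        ruled_out = [d for d in pool if d.get(attr) != val]
        if ruled_out:                      # attribute removes distractors
            desc[attr] = val
            pool = [d for d in pool if d.get(attr) == val]
    return desc if not pool else None      # None: not distinguishable

target = {"type": "dog", "size": "small", "color": "black"}
others = [{"type": "cat", "size": "small", "color": "black"},
          {"type": "dog", "size": "large", "color": "black"}]
print(describe(target, others, ["type", "color", "size"]))
# {'type': 'dog', 'size': 'small'}
```

Note that "color" is skipped even though it comes earlier in the preference order, because every remaining distractor is already black: including it would add an unnecessary component.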
Free Indexation: Combinatorial Analysis and A Compositional Algorithm* Sandiway Fong 545 Technology Square, Rm. NE43-810, MIT Artificial Intelligence Laboratory, Cambridge MA 02139 Internet: [email protected] Abstract The principle known as 'free indexation' plays an important role in the determination of the referential properties of noun phrases in the principles-and-parameters language framework. First, by investigating the combinatorics of free indexation, we show that the problem of enumerating all possible indexings requires exponential time. Secondly, we exhibit a provably optimal free indexation algorithm. 1 Introduction In the principles-and-parameters model of language, the principle known as 'free indexation' plays an important part in the process of determining the referential properties of elements such as anaphors and pronominals. This paper addresses two issues. (1) We investiga
1990
14
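The combinatorial claim above can be checked concretely: an indexing of n noun phrases is a partition of them into coreference classes, so the number of distinct indexings is the Bell number B(n), which grows faster than exponentially. A small enumeration sketch (illustrative only, not the paper's optimal algorithm):

```python
# Free indexation assigns referential indices to NPs; two NPs with the
# same index corefer, so an indexing is a set partition of the NPs.
# Enumerating all partitions shows the Bell-number growth behind the
# paper's exponential-time result.  Purely illustrative sketch.
def indexings(nps):
    """Yield every partition of nps into coreference classes."""
    if not nps:
        yield []
        return
    first, rest = nps[0], nps[1:]
    for partition in indexings(rest):
        # put `first` into an existing coreference class ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i+1:]
        # ... or give it a fresh index of its own
        yield [[first]] + partition

counts = [sum(1 for _ in indexings(list(range(n)))) for n in range(1, 6)]
print(counts)  # Bell numbers: [1, 2, 5, 15, 52]
```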
LICENSING AND TREE ADJOINING GRAMMAR IN GOVERNMENT BINDING PARSING Robert Frank* Department of Computer and Information Sciences University of Pennsylvania Philadelphia, PA 19104 email: [email protected] Abstract This paper presents an implemented, psychologically plausible parsing model for Government Binding theory grammars. I make use of two main ideas: (1) a generalization of the licensing relations of [Abney, 1986] allows for the direct encoding of certain principles of grammar (e.g. Theta Criterion, Case Filter) which drive structure building; (2) the working space of the parser is constrained to the domain determined by a Tree Adjoining Grammar elementary tree. All dependencies and constraints are localized within this bounded structure. The resultant parser operates in linear time and allows for incremental semantic interpretation and determination of grammaticality. 1 Introduction This paper aims
1990
15
A SIMPLIFIED THEORY OF TENSE REPRESENTATIONS AND CONSTRAINTS ON THEIR COMPOSITION Michael R. Brent MIT Artificial Intelligence Lab 545 Technology Square Cambridge, MA 02139 [email protected] ABSTRACT This paper proposes a set of representations for tenses and a set of constraints on how they can be com- bined in adjunct clauses. The semantics we propose ex- plains the possible meanings of tenses in a variety of sen- tential contexts. It also supports an elegant constraint on tense combination in adjunct clauses. These semantic representations provide insights into the interpretations of tenses, and the constraints provide a source of syntac- tic disambiguation that has not previously been demon- strated. We demonstrate an implemented disambiguator for a certain class of three-clause sentences based on our theory. 1 Introduction This paper proposes a set of representations for tenses and a set of constraints on how
1990
16
SOLVING THEMATIC DIVERGENCES IN MACHINE TRANSLATION Bonnie Dorr* M.I.T. Artificial Intelligence Laboratory 545 Technology Square, Room 810 Cambridge, MA 02139, USA internet: [email protected] ABSTRACT Though most translation systems have some mechanism for translating certain types of divergent predicate-argument structures, they do not provide a general procedure that takes advantage of the relationship between lexical-semantic structure and syntactic structure. A divergent predicate-argument structure is one in which the predicate (e.g., the main verb) or its arguments (e.g., the subject and object) do not have the same syntactic ordering properties for both the source and target language. To account for such ordering differences, a machine translator must consider language-specific syntactic idiosyncrasies that distinguish a target language from a source language, while making use of lexical-semantic unifor
1990
17
A SYNTACTIC FILTER ON PRONOMINAL ANAPHORA FOR SLOT GRAMMAR Shalom Lappin and Michael McCord IBM T.J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 E-mail: Lappin/[email protected] ABSTRACT We propose a syntactic filter for identifying non-coreferential pronoun-NP pairs within a sentence. The filter applies to the output of a Slot Grammar parser and is formulated in terms of the head-argument structures which the parser generates. It handles control and unbounded dependency constructions without empty categories or binding chains, by virtue of the unificational nature of the parser. The filter provides constraints for a discourse semantics system, reducing the search domain to which the inference rules of the system's anaphora resolution component apply. 1. INTRODUCTION In this paper we present an implemented algorithm which filters intra-sentential relations of referential dependence betwe
1990
18
ACQUIRING CORE MEANINGS OF WORDS, REPRESENTED AS JACKENDOFF-STYLE CONCEPTUAL STRUCTURES, FROM CORRELATED STREAMS OF LINGUISTIC AND NON-LINGUISTIC INPUT Jeffrey Mark Siskind* M. I. T. Artificial Intelligence Laboratory 545 Technology Square, Room NE43-800b Cambridge MA 02139 617/253-5659 internet: Qobi@AI.MIT.EDU Abstract This paper describes an operational system which can acquire the core meanings of words without any prior knowledge of either the category or meaning of any words it encounters. The system is given as input, a description of sequences of scenes along with sentences which describe the [EVENTS] taking place as those scenes unfold, and produces as output, a lexicon consisting of the category and meaning of each word in the input, that allows the sentences to describe the [EVENTS]. It is argued, that each of the three main components of the system, the parser, the linker and the inference component,
1990
19
STRUCTURE AND INTONATION IN SPOKEN LANGUAGE UNDERSTANDING* Mark Steedman Computer and Information Science, University of Pennsylvania 200 South 33rd Street Philadelphia PA 19104-6389 ([email protected]) ABSTRACT The structure imposed upon spoken sentences by intonation seems frequently to be orthogonal to their traditional surface-syntactic structure. However, the notion of "intonational structure" as formulated by Pierrehumbert, Selkirk, and others, can be subsumed under a rather different notion of syntactic surface structure that emerges from a theory of grammar based on a "Combinatory" extension to Categorial Grammar. Interpretations of constituents at this level are in turn directly related to "information structure", or discourse-related notions of "theme", "rheme", "focus" and "presupposition". Some simplifications appear to follow for the problem of integrating syntax and other high-level mod-
1990
2
TYPES IN FUNCTIONAL UNIFICATION GRAMMARS Michael Elhadad Department of Computer Science Columbia University New York, NY 10027 Internet: [email protected] ABSTRACT Functional Unification Grammars (FUGs) are popular for natural language applications because the formalism uses very few primitives and is uniform and expressive. In our work on text generation, we have found that it also has annoying limitations: it is not suited for the expression of simple, yet very common, taxonomic relations and it does not allow the specification of completeness conditions. We have implemented an extension of traditional functional unification. This extension addresses these limitations while preserving the desirable properties of FUGs. It is based on the notions of typed features and typed constituents. We show the advantages of this exten- sion in the context of a grammar used for text genera- tion. 1 INTRODUCTION Unific
1990
20
DEFAULTS IN UNIFICATION GRAMMAR Gosse Bouma Research Institute for Knowledge Systems Postbus 463, 6200 AL Maastricht, The Netherlands e-mail: [email protected] ABSTRACT Incorporation of defaults in grammar formalisms is important for reasons of linguistic adequacy and grammar organization. In this paper we present an algorithm for handling default information in unification grammar. The algorithm specifies a logical operation on feature structures, merging with the non-default structure only those parts of the default feature structure which are not constrained by the non-default structure. We present various linguistic applications of default unification. 1. INTRODUCTION MOTIVATION. There are two, not quite unrelated, reasons for incorporating default mechanisms into a linguistic formalism. First, linguists have often argued that certain phenomena are described most naturally with
1990
21
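The merging operation described above can be sketched over feature structures encoded as nested dictionaries: the default structure contributes only those parts that the non-default structure does not already constrain. This is a simplification of the paper's operation (real feature structures also carry reentrancies and atomic-value clashes), and the example features are invented:

```python
# Sketch of default unification over feature structures as nested dicts:
# the strict (non-default) structure always wins; the default fills in
# only unconstrained paths.  Simplified -- no reentrancy handling.
def default_unify(strict, default):
    if not (isinstance(strict, dict) and isinstance(default, dict)):
        return strict              # a strict value overrides the default
    out = dict(strict)
    for feat, dval in default.items():
        out[feat] = default_unify(strict[feat], dval) if feat in strict else dval
    return out

strict  = {"agr": {"num": "pl"}, "cat": "noun"}
default = {"agr": {"num": "sg", "per": "3"}, "case": "nom"}
print(default_unify(strict, default))
# {'agr': {'num': 'pl', 'per': '3'}, 'cat': 'noun', 'case': 'nom'}
```

The default number "sg" is discarded because the strict structure constrains that path, while the unconstrained person and case features are filled in from the default.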
EXPRESSING DISJUNCTIVE AND NEGATIVE FEATURE CONSTRAINTS WITH CLASSICAL FIRST-ORDER LOGIC. Mark Johnson, Cognitive and Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912. [email protected] ABSTRACT In contrast to the "designer logic" approach, this paper shows how the attribute-value feature structures of unification grammar and constraints on them can be axiomatized in classical first-order logic, which can express disjunctive and negative constraints. Because only quantifier-free formulae are used in the axiomatization, the satisfiability problem is NP- complete. INTRODUCTION. Many modern linguistic theories, such as Lexical-Functional Grammar [1], Functional Unification Grammar [12] Generalized Phrase- Structure Grammar [6], Categorial Unification Grammar [20] and Head-driven Phrase- Structure Grammar [18], replace the atomic categories of a context-free grammar with a "featu
1990
22
LAZY UNIFICATION Kurt Godden Computer Science Department General Motors Research Laboratories Warren, MI 48090-9055, USA CSNet: [email protected] ABSTRACT Unification-based NL parsers that copy argument graphs to prevent their destruction suffer from inefficiency. Copying is the most expensive operation in such parsers, and several methods to reduce copying have been devised with varying degrees of success. Lazy Unification is presented here as a new, conceptually elegant solution that reduces copying by nearly an order of magnitude. Lazy Unification requires no new slots in the structure of nodes, and only nominal revisions to the unification algorithm. PROBLEM STATEMENT degradation in performance. This performance drain is illustrated in Figure 1, where average parsing statistics are given for the original implementation of graph unification
1990
23
ZERO MORPHEMES IN UNIFICATION-BASED COMBINATORY CATEGORIAL GRAMMAR Chinatsu Aone The University of Texas at Austin & MCC 3500 West Balcones Center Dr. Austin, TX 78759 ([email protected]) Kent Wittenburg MCC 3500 West Balcones Center Dr. Austin, TX 78759 ([email protected]) ABSTRACT In this paper, we report on our use of zero morphemes in Unification-Based Combinatory Categorial Grammar. After illus- trating the benefits of this approach with several examples, we describe the algorithm for compil- ing zero morphemes into unary rules, which al- lows us to use zero morphemes more efficiently in natural language processing. 1 Then, we dis- cuss the question of equivalence of a grammar with these unary rules to the original grammar. Lastly, we compare our approach to zero mor- phemes with possible alternatives. 1. Zero Morphemes in Categorial Grammar In English and in other natural la
1990
24
THE LIMITS OF UNIFICATION Robert J. P. Ingria BBN Systems and Technologies Corporation 10 Moulton Street, Mailstop 6/4C Cambridge, MA 02138 Internet: [email protected] ABSTRACT Current complex-feature based grammars use a single procedure--unification--for a multitude of purposes, among them, enforcing formal agreement between purely syntactic features. This paper presents evidence from several natural languages that unification--variable-matching combined with variable substitution--is the wrong mechanism for effecting agreement. The view of grammar developed here is one in which unification is used for semantic interpretation, while purely formal agreement involves only a check for non-distinctness---i.e., variable-matching without variable substitution. 1 Introduction In recent years, a great deal of attention has been devoted to complex-feature based grammar formalisms--i.e. grammar formalisms in whic
1990
25
Asymmetry in Parsing and Generating with Unification Grammars: Case Studies From ELU Graham Russell,* Susan Warwick,* and John Carroll? * ISSCO, 54 rte. des Acacias ? Cambridge University Computer Laboratory 1227 Geneva, Switzerland New Museums Site, Pembroke Street [email protected] Cambridge CB2 3QG Abstract Recent developments in generation algorithms have enabled work in unification-based computational linguistics to approach more closely the ideal of grammars as declarative statements of linguistic facts, neutral between analysis and synthesis. From this perspective, however, the situation is still far from perfect; all known methods of generation impose constraints on the grammars they assume. We briefly consider a number of proposals for generation, outlining their consequences for the form of grammars, and then report on experience arising from the addition of a generator to an existing unification en
1990
26
AUTOMATED INVERSION OF LOGIC GRAMMARS FOR GENERATION Tomek Strzalkowski and Ping Peng Courant Institute of Mathematical Sciences New York University 251 Mercer Street New York, NY 10012 ABSTRACT We describe a system of reversible grammar in which, given a logic-grammar specification of a natural language, two efficient PROLOG programs are derived by an off-line compilation process: a parser and a generator for this language. The centerpiece of the system is the inversion algorithm designed to compute the generator code from the parser's PRO- LOG code, using the collection of minimal sets of essential arguments (MSEA) for predicates. The sys- tem has been implemented to work with Definite Clause Grammars (DCG) and is a part of an English-Japanese machine translation project currently under development at NYU's Courant Insti- tute. INTRODUCTION The results reported in this paper are part of the ongoing resea
1990
27
ALGORITHMS FOR GENERATION IN LAMBEK THEOREM PROVING Erik-Jan van der Linden * Guido Minnen Institute for Language Technology and Artificial Intelligence Tilburg University PO Box 90153, 5000 LE Tilburg, The Netherlands E-mail: vdlinden@kub.nl ABSTRACT We discuss algorithms for generation within the Lambek Theorem Proving Framework. Efficient algorithms for generation in this framework take a semantics-driven strategy. This strategy can be modeled by means of rules in the calculus that are geared to generation, or by means of an algorithm for the Theorem Prover. The latter possibility enables processing of a bidirectional calculus. Therefore Lambek Theorem Proving is a natural candidate for a 'uniform' architecture for natural language parsing and generation. Keywords: generation algorithm; natural language generation; theorem proving; bidirectionality; categorial grammar. 1 INTRODUCTION Algorit
1990
28
Multiple Underlying Systems: Translating User Requests into Programs to Produce Answers Robert J. Bobrow, Philip Resnik, Ralph M. Weischedel BBN Systems and Technologies Corporation 10 Moulton Street Cambridge, MA 02138 ABSTRACT A user may typically need to combine the strengths of more than one system in order to perform a task. In this paper, we describe a component of the Janus natural language interface that translates inten- sional logic expressions representing the meaning of a request into executable code for each application program, chooses which combination of application systems to use, and designs the transfer of data among them in order to provide an answer. The com- plete Janus natural language system has been ported to two large command and control decision support aids. 1. Introduction The norm in the next generation of user en- vironments will be distributed, networked applications. Many problems will be
1990
29
PROSODY, SYNTAX AND PARSING John Bear and Patti Price SRI International 333 Ravenswood Avenue Menlo Park, California 94025 Abstract We describe the modification of a grammar to take advantage of prosodic information provided by a speech recognition system. This initial study is lim- ited to the use of relative duration of phonetic seg- ments in the assignment of syntactic structure, specif- ically in ruling out alternative parses in otherwise ambiguous sentences. Taking advantage of prosodic information in parsing can make a spoken language system more accurate and more efficient, if prosodic- syntactic mismatches, or unlikely matches, can be pruned. We know of no other work that has suc- ceeded in automatically extracting speech informa- tion and using it in a parser to rule out extraneous parses. 1 Introduction Prosodic information can mark lexical stress, iden- tify phrasing breaks, and provide information us
1990
3
Computational structure of generative phonology and its relation to language comprehension. Eric Sven Ristad* MIT Artificial Intelligence Lab 545 Technology Square Cambridge, MA 02139 Abstract We analyse the computational complexity of phonological models as they have developed over the past twenty years. The major results are that generation and recognition are undecidable for segmental models, and that recognition is NP-hard for that portion of segmental phonology subsumed by modern autosegmental models. Formal restrictions are evaluated. 1 Introduction Generative linguistic theory and human language comprehension may both be thought of as computations. The goal of language comprehension is to construct structural descriptions of linguistic sensations, while the goal of generative theory is to enumerate all and only the possible (grammatical) structural descriptions. These computations are only indirect
1990
30
PARSING THE LOB CORPUS Carl G. de Marcken MIT AI Laboratory Room 838 545 Technology Square Cambridge, MA 02142 Internet: [email protected] ABSTRACT This paper 1 presents a rapid and robust pars- ing system currently used to learn from large bodies of unedited text. The system contains a multivalued part-of-speech disambiguator and a novel parser employing bottom-up recogni- tion to find the constituent phrases of larger structures that might be too difficult to ana- lyze. The results of applying the disambiguator and parser to large sections of the Lancaster/ Oslo-Bergen corpus are presented. INTRODUCTION We have implemented and tested a pars- ing system which is rapid and robust enough to apply to large bodies of unedited text. We have used our system to gather data from the Lancaster/Oslo-Bergen (LOB) corpus, generat- ing parses which conform to a version of current Government-Binding theory, and aim to use
1990
31
AUTOMATICALLY EXTRACTING AND REPRESENTING COLLOCATIONS FOR LANGUAGE GENERATION* Frank A. Smadja† and Kathleen R. McKeown Department of Computer Science Columbia University New York, NY 10027 ABSTRACT Collocational knowledge is necessary for language generation. The problem is that collocations come in a large variety of forms. They can involve two, three or more words, these words can be of different syntactic categories and they can be involved in more or less rigid ways. This leads to two main difficulties: collocational knowledge has to be acquired and it must be represented flexibly so that it can be used for language generation. We address both problems in this paper, focusing on the acquisition problem. We describe a program, Xtract, that automatically acquires a range of collocations from large textual corpora and we describe how they can be represented in a flexible lexicon using a unification based fo
1990
32
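A much-reduced sketch of the acquisition idea: count word pairs inside a small window and keep the pairs whose count stands well above the mean. Xtract's actual statistics (z-scores over relative positions, plus syntactic filtering) are considerably richer; the window size, threshold, and mini-corpus here are invented:

```python
# Frequency-based collocation spotting, heavily simplified from Xtract:
# count ordered word pairs within a +/-2 window, then keep pairs whose
# count is at least `z` population standard deviations above the mean.
from collections import Counter
from statistics import mean, pstdev

def collocations(tokens, window=2, z=2.0):
    pairs = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            pairs[(w, tokens[j])] += 1
    counts = list(pairs.values())
    mu, sigma = mean(counts), pstdev(counts) or 1.0
    return {p for p, c in pairs.items() if (c - mu) / sigma >= z}

text = ("strong tea and strong tea with milk and weak coffee "
        "and strong tea again").split()
print(collocations(text))  # {('strong', 'tea')}
```

On a realistic corpus the counts would be gathered per head word rather than globally, and candidate pairs would then be filtered by the syntactic patterns the paper describes.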
DISAMBIGUATING AND INTERPRETING VERB DEFINITIONS Yael Ravin IBM T.J. Watson Research Center Yorktown Heights, New York 10598 e-mail:[email protected] ABSTRACT To achieve our goal of building a comprehensive lexical database out of various on-line resources, it is necessary to interpret and disambiguate the information found in these resources. In this paper we describe a Disambiguation Module which analyzes the content of dictionary definitions, in particular, definitions of the form "to VERB with NP". We discuss the semantic relations holding between the head and the prepositional phrase in such structures, as well as our heuristics for identifying these relations and for disambiguating the senses of the words involved. We present some results obtained by the Disambiguation Module and evaluate its rate of success as compared with results obtained from human judgements. INTRODUCTION The goal of the
1990
33
NOUN CLASSIFICATION FROM PREDICATE-ARGUMENT STRUCTURES Donald Hindle AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, NJ 07974 ABSTRACT A method of determining the similarity of nouns on the basis of a metric derived from the distribution of subject, verb and object in a large text corpus is described. The resulting quasi-semantic classification of nouns demonstrates the plausibility of the distributional hypothesis, and has potential application to a variety of tasks, including automatic indexing, resolving nominal compounds, and determining the scope of modification. 1. INTRODUCTION A variety of linguistic relations apply to sets of semantically similar words. For example, modifiers select semantically similar nouns, selectional restrictions are expressed in terms of the semantic class of objects, and semantic type restricts the possibilities for noun compounding. Therefore, it i
1990
34
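The distributional idea above can be sketched by comparing nouns through the verb contexts they occur in. Cosine similarity over raw co-occurrence counts stands in here for Hindle's mutual-information-based metric, and the mini-corpus counts are invented:

```python
# Distributional noun similarity sketch: each noun is a vector of counts
# over (verb, grammatical-relation) contexts; similar nouns share verbs.
# Cosine similarity is a stand-in for Hindle's information-theoretic metric.
from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# (verb, relation) contexts observed for each noun -- invented counts
contexts = {
    "beer": Counter({("drink", "obj"): 5, ("brew", "obj"): 2}),
    "wine": Counter({("drink", "obj"): 4, ("pour", "obj"): 1}),
    "car":  Counter({("drive", "obj"): 6, ("park", "obj"): 3}),
}
print(round(cosine(contexts["beer"], contexts["wine"]), 2))  # 0.9
print(cosine(contexts["beer"], contexts["car"]))             # 0.0
```

"beer" and "wine" come out similar because they share the drink-object context, while "beer" and "car" share no contexts at all, mirroring the quasi-semantic classes the paper derives from parsed corpus data.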
DETERMINISTIC LEFT TO RIGHT PARSING OF TREE ADJOINING LANGUAGES* Yves Schabes Dept. of Computer & Information Science University of Pennsylvania Philadelphia, PA 19104-6389, USA [email protected] K. Vijay-Shanker Dept. of Computer & Information Science University of Delaware Newark, DE 19716, USA [email protected] Abstract We define a set of deterministic bottom-up left to right parsers which analyze a subset of Tree Adjoining Languages. The LR parsing strategy for Context Free Grammars is extended to Tree Adjoining Grammars (TAGs). We use a machine, called Bottom-up Embedded Push Down Automaton (BEPDA), that recognizes in a bottom-up fashion the set of Tree Adjoining Languages (and exactly this set). Each parser consists of a finite state control that drives the moves of a Bottom-up Embedded Pushdown Automaton. The parsers handle deterministically some context-sensitive Tree Adjoining Languages.
1990
35
AN EFFICIENT PARSING ALGORITHM FOR TREE ADJOINING GRAMMARS Karin Harbusch DFKI - Deutsches Forschungszentrum für Künstliche Intelligenz Stuhlsatzenhausweg 3, D-6600 Saarbrücken 11, F.R.G. harbusch@dfki.uni-sb.de ABSTRACT In the literature, Tree Adjoining Grammars (TAGs) are propagated to be adequate for natural language description -- analysis as well as generation. In this paper we concentrate on the direction of analysis. Especially important for an implementation of that task is how efficiently this can be done, i.e., how readily the word problem can be solved for TAGs. Up to now, a parser with O(n 6) steps in the worst case was known where n is the length of the input string. In this paper, the result is improved to O(n 4 log n) as a new lowest upper bound. The paper demonstrates how local interpretation of TAG trees allows this reduction. 1 INTRODUCTION Compared with the formalism of context-
1990
36
LEXICAL AND SYNTACTIC RULES IN A TREE ADJOINING GRAMMAR Anne Abeillé* LADL and UFRL University of Paris 7-Jussieu [email protected] ABSTRACT according to this definition 2. Each elementary tree is constrained to have at least one terminal at its frontier which serves as 'head' (or 'anchor'). Sentences of a Tag language are derived from the composition of an S-rooted initial tree with other elementary trees by two operations: substitution (the same operation used by context free grammars) or adjunction, which is more powerful. Taking examples from English and French idioms, this paper shows that not only constituent structures rules but also most syntactic rules (such as topicalization, wh-question, pronominalization ...) are subject to lexical constraints (on top of syntactic, and possibly semantic, ones). We show that such puzzling phenomena are naturally handled in a 'lexicalized' formalism such as Tre
1990
37
BOTTOM-UP PARSING EXTENDING CONTEXT-FREENESS IN A PROCESS GRAMMAR PROCESSOR Massimo Marino Department of Linguistics - University of Pisa Via S. Maria 36 1-56100 Pisa - ITALY Bitnet: [email protected] ABSTRACT A new approach to bottom-up parsing that extends Augmented Context-Free Grammar to a Process Grammar is formally presented. A Process Grammar (PG) defines a set of rules suited for bottom-up parsing and conceived as processes that are applied by a PG Processor. The matching phase is a crucial step for process application, and a parsing structure for efficient matching is also presented. The PG Processor is composed of a process scheduler that allows immediate constituent analysis of structures, and behaves in a non-deterministic fashion. On the other side, the PG offers means for implementing specific parsing strategies improving the lack of determinism innate in the processor. 1. INTRODUCTION Bot
1990
38
A HARDWARE ALGORITHM FOR HIGH SPEED MORPHEME EXTRACTION AND ITS IMPLEMENTATION Toshikazu Fukushima, Yutaka Ohyama and Hitoshi Miyai C&C Systems Research Laboratories, NEC Corporation 1-1, Miyazaki 4-chome, Miyamae-ku, Kawasaki City, Kanagawa 213, Japan ([email protected], ohyama@tsl.cl.nec.co.jp, [email protected]) ABSTRACT This paper describes a new hardware algorithm for morpheme extraction and its implementation on a specific machine (MEX-I), as the first step toward achieving natural language parsing accelerators. It also shows the machine's performance, 100-1,000 times faster than a personal computer. This machine can extract morphemes from 10,000 character Japanese text by searching an 80,000 morpheme dictionary in 1 second. It can treat multiple text streams, which are composed of character candidates, as well as one text stream. The algorithm is implemented on the machine in linear time for the number of
1990
39