text | year | No |
---|---|---|
Semantic Interpretation Using KL-ONE 1 Norman K. Sondheimer USC/Information Sciences Institute Marina del Rey, California 90292 USA Ralph M. Weischedel Dept. of Computer & Information Sciences University of Delaware Newark, Delaware 19716 USA Robert J. Bobrow Bolt Beranek and Newman, Inc. Cambridge, Massachusetts 02238 USA Abstract This paper presents extensions to the work of Bobrow and Webber [Bobrow&Webber 80a, Bobrow&Webber 80b] on semantic interpretation using KL-ONE to represent knowledge. The approach is based on an extended case frame formalism applicable to all types of phrases, not just clauses. The frames are used to recognize semantically acceptable phrases, identify their structure, and relate them to their meaning representation through translation rules. Approaches are presented for generating KL-ONE structures as the meaning of a sentence, for capturing semantic generalizations through abstract | 1984 | 24 |
TWO THEORIES FOR COMPUTING THE LOGICAL FORM OF MASS EXPRESSIONS Francis Jeffry Pelletier Lenhart K. Schubert Dept. Computing Science University of Alberta Edmonton, Alberta T6G 2E1 Canada ABSTRACT There are various difficulties in accommodating the traditional mass/count distinction into a grammar for English which has as a goal the production of "logical form" semantic translations of the initial English sentences. The present paper surveys some of these difficulties. One puzzle is whether the distinction is a syntactic one or a semantic one, i.e., whether it is a well-formedness constraint or whether it is a description of the semantic translations produced. Applying the rules of translation is even simpler. In essence, all that is needed is a mechanism for arranging logical expressions into larger expressions in conformity with the semantic rules. (For examples of parsers see Thompson | 1984 | 25 |
SYNTACTIC AND SEMANTIC PARSABILITY Geoffrey K. Pullum Syntax Research Center, Cowell College, UCSC, Santa Cruz, CA 95064 and Center for the Study of Language and Information, Stanford, CA 94305 ABSTRACT This paper surveys some issues that arise in the study of the syntax and semantics of natural languages (NL's) and have potential relevance to the automatic recognition, parsing, and translation of NL's. An attempt is made to take into account the fact that parsing is scarcely ever thought about with reference to syntax alone; semantic ulterior motives always underlie the assignment of a syntactic structure to a sentence. First I consider the state of the art with respect to arguments about the language-theoretic complexity of NL's: whether NL's are regular sets, deterministic CFL's, CFL's, or whatever. While English still a | 1984 | 26 |
The Semantics of Grammar Formalisms Seen as Computer Languages Fernando C. N. Pereira and Stuart M. Shieber Artificial Intelligence Center SRI International and Center for the Study of Language and Information Stanford University Abstract The design, implementation, and use of grammar formalisms for natural language have constituted a major branch of computational linguistics throughout its development. By viewing grammar formalisms as just a special case of computer languages, we can take advantage of the machinery of denotational semantics to provide a precise specification of their meaning. Using Dana Scott's domain theory, we elucidate the nature of the feature systems used in augmented phrase-structure grammar formalisms, in particular those of recent versions of generalized phrase structure grammar, lexical functional grammar and PATR-II, and provide a denotational semantics for a simple grammar form | 1984 | 27 |
THE RESOLUTION OF QUANTIFICATIONAL AMBIGUITY IN THE TENDUM SYSTEM Harry Bunt Computational Linguistics Research Unit Dept. of Language and Literature, Tilburg University P.O. Box 90153, 5000 LE Tilburg The Netherlands ABSTRACT A method is described for handling the ambiguity and vagueness that is often found in quantifications - the semantically complex relations between nominal and verbal constituents. In natural language certain aspects of quantification are often left open; it is argued that the analysis of quantification in a model-theoretic framework should use semantic representations in which this may also be done. This paper shows a form for such a representation and how "ambiguous" representations are used in an elegant and efficient procedure for semantic analysis, incorporated in the TENDUM dialogue system. The quantification ambiguity explosion problem Quantification is a complex phenomenon that occurs w | 1984 | 28 |
Preventing False Inferences 1 Aravind Joshi and Bonnie Webber Department of Computer and Information Science Moore School/D2 University of Pennsylvania Philadelphia PA 19104 Ralph M. Weischedel 2 Department of Computer & Information Sciences University of Delaware Newark DE 19716 ABSTRACT I Introduction In cooperative man-machine interaction, it is taken as necessary that a system truthfully and informatively respond to a user's question. It is not, however, sufficient. In particular, if the system has reason to believe that its planned response might lead the user to draw an inference that it knows to be false, then it must block it by modifying or adding to its response. The problem is that a system neither can nor should explore all conclusions a user might possibly draw: its reasoning must be constrained in some systematic and well-motivated way. Such cooperative behavior was investigated in [5], in which a mo | 1984 | 29 |
TRANSFORMING ENGLISH INTERFACES TO OTHER NATURAL LANGUAGES: AN EXPERIMENT WITH PORTUGUESE GABRIEL PEREIRA LOPES (1) Departamento de Matemática Instituto Superior de Agronomia Tapada da Ajuda - 1399 Lisboa Codex, Portugal ABSTRACT Nowadays the construction of English understanding systems (interfaces) is common, and sooner or later one has to re-use them, adapting and converting them to other natural languages. This is not an easy task and in many cases the problems that arise are quite complex. In this paper an experiment that was accomplished for the Portuguese language is reported and some conclusions are explicitly stated. A knowledge information processing system, known as SSIPA, with natural language comprehension capabilities that interacts with users in Portuguese through a Portuguese interface, LUSO, was built. Logic was used as a mental aid and as a practical tool. I. INTRODUCTION The CHAT-80 program for Engli | 1984 | 3 |
PROBLEM LOCALIZATION STRATEGIES FOR PRAGMATICS PROCESSING IN NATURAL-LANGUAGE FRONT ENDS Lance A. Ramshaw & Ralph M. Weischedel Department of Computer and Information Sciences University of Delaware Newark, Delaware 19716 USA ABSTRACT Problem localization is the identification of the most significant failures in the AND-OR tree resulting from an unsuccessful attempt to achieve a goal, for instance, in planning, backward-chaining inference, or top-down parsing. We examine heuristics and strategies for problem localization in the context of using a planner to check for pragmatic failures in natural language input to computer systems, such as a cooperative natural language interface to Unix. Our heuristics call for selecting the most hopeful branch at ORs, but the most problematic one at ANDs. Surprise scores and special-purpose rules are the main strategies suggested | 1984 | 30 |
A CONNECTIONIST MODEL OF SOME ASPECTS OF ANAPHOR RESOLUTION Ronan G. Reilly Educational Research Centre St Patrick's College, Drumcondra Dublin 9, Ireland ABSTRACT This paper describes some recent developments in language processing involving computational models which more closely resemble the brain in both structure and function. These models employ a large number of interconnected parallel computational units which communicate via weighted levels of excitation and inhibition. A specific model is described which uses this approach to process some fragments of connected discourse. I CONNECTIONIST MODELS The human brain consists of about 100,000 million neuronal units with between 1,000 and 10,000 connections each. The two main classes of cells in the cortex are the stellate and pyramidal cells. The pyramidal cells are generally large | 1984 | 31 |
CONCURRENT PARSING IN PROGRAMMABLE LOGIC ARRAY (PLA-) NETS PROBLEMS AND PROPOSALS Helmut Schnelle Ruhr-Universität Bochum Sprachwissenschaftliches Institut D-4630 Bochum 1 West-Germany ABSTRACT This contribution attempts a conceptual and practical introduction into the principles of wiring or constructing special machines for language processing tasks instead of programming a universal machine. Construction would in principle provide higher descriptive adequacy in computationally based linguistics. After all, our heads do not apply programs on stored symbol arrays but are appropriately wired for understanding or producing language. Introductory Remarks 1. For me, computational linguistics is not primarily a technical discipline implementing performance processes for independently defined formal structures of linguistic competence. Computational linguistics should be a foundational discipline: It should | 1984 | 32 |
A Case Analysis Method Cooperating with ATNG and Its Application to Machine Translation Hitoshi IIDA, Kentaro OGURA and Hirosato NOMURA Musashino Electrical Communication Laboratory, N.T.T. Musashino-shi, Tokyo, 180, Japan Abstract This paper presents a new method for parsing English sentences. The parser called LUTE-EJ parser is combined with case analysis and ATNG-based analysis. LUTE-EJ parser has two interesting mechanical characteristics. One is providing a structured buffer, Structured Constituent Buffer, so as to hold previous fillers for a case structure, instead of case registers before a verb appears in a sentence. The other is an extended HOLD mechanism (in ATN), in whose use an embedded clause, especially a "be-deleted" clause, is recursively analyzed by case analysis. This parser's features are (1) extracting a case filler, basically as a noun phrase, by ATNG-based analysis, including recursive case analysis, an | 1984 | 33 |
A PROPER TREATMENT OF SYNTAX AND SEMANTICS IN MACHINE TRANSLATION Yoshihiko Nitta, Atsushi Okajima, Hiroyuki Kaji, Youichi Hidano, Koichiro Ishihara Systems Development Laboratory, Hitachi, Ltd. 1099 Ohzenji Asao-ku, Kawasaki-shi, 215 JAPAN ABSTRACT A proper treatment of syntax and semantics in machine translation is introduced and discussed from the empirical viewpoint. For English-Japanese machine translation, the syntax directed approach is effective, where the Heuristic Parsing Model (HPM) and the Syntactic Role System play important roles. For Japanese-English translation, the semantics directed approach is powerful, where the Conceptual Dependency Diagram (CDD) and the Augmented Case Marker System (which is a kind of Semantic Role System) play essential roles. Some examples of the difference between Japanese sentence structure and English sentence structure, which is vital to machine translation | 1984 | 34 |
A CONSIDERATION ON THE CONCEPTS STRUCTURE AND LANGUAGE IN RELATION TO SELECTIONS OF TRANSLATION EQUIVALENTS OF VERBS IN MACHINE TRANSLATION SYSTEMS Sho Yoshida Department of Electronics, Kyushu University 36, Fukuoka 812, Japan ABSTRACT To give appropriate translation equivalents for target words is one of the most fundamental problems in machine translation systems. This is especially so when the MT systems handle languages that have completely different structures, like Japanese and the European languages, as source and target languages. In this report, we discuss the data structure that enables appropriate selections of translation equivalents for verbs in the target language. This structure is based on the concepts structure with associated information relating source and target languages. Discussion has been made from the standpoint of realizability of the structure (e.g. from the standpoint of easiness of data collection | 1984 | 35 |
DETECTING PATTERNS IN A LEXICAL DATA BASE Nicoletta Calzolari Dipartimento di Linguistica - Universita' di Pisa Istituto di Linguistica Computazionale del CNR Via della Faggiola 32 50100 Pisa - Italy ABSTRACT In a well-structured Lexical Data Base, a number of relations among lexical entries can be interactively evidenced. The present article examines hyponymy, as an example of a paradigmatic relation, and the "restriction" relation, as a syntagmatic relation. The theoretical results of their implementation are illustrated. I INTRODUCTION In previous papers it has been pointed out that in a well-structured Lexical Data Base it becomes possible to detect automatically, and to evidence through interactive queries, a number of morphological, syntactic, or semantic relationships between lexical entries, such as synonymy, hyponymy, hyperonymy, | 1984 | 36 |
LINGUISTIC PROBLEMS IN MULTILINGUAL MORPHOLOGICAL DECOMPOSITION G. Thurmair Siemens AG ZT ZTI Otto-Hahn-Ring 6 Munich 83 West-Germany ABSTRACT An algorithm for the morphological decomposition of words into morphemes is presented. The application area is information retrieval, and the purpose is to find morphologically related terms to a given search term. First, the parsing framework is presented, then several linguistic decisions are discussed: morpheme selection and segmentation, morpheme classes, morpheme grammar, allomorph handling, etc. Since the system works in several languages, language-specific phenomena are mentioned. I BACKGROUND 1. Application domain In Information Retrieval (document retrieval), the usual way of searching documents is to use key words (descriptors). In most of the existing systems, these descriptors are extracted from the texts auto | 1984 | 37 |
A GENERAL COMPUTATIONAL MODEL FOR WORD-FORM RECOGNITION AND PRODUCTION Kimmo Koskenniemi Department of General Linguistics University of Helsinki Hallituskatu 11-13, Helsinki 10, Finland ABSTRACT A language independent model for recognition and production of word forms is presented. This "two-level model" is based on a new way of describing morphological alternations. All rules describing the morphophonological variations are parallel and relatively independent of each other. Individual rules are implemented as finite state automata, as in an earlier model due to Martin Kay and Ron Kaplan. The two-level model has been implemented as an operational computer program in several places. A number of operational two-level descriptions have been written or are in progress (Finnish, English, Japanese, Rumanian, French, Swedish, Old Church Slavonic, Greek, Lappish, Arabic, Icelandic). The model is bidirectional a | 1984 | 38 |
PANEL NATURAL LANGUAGE AND DATABASES, AGAIN Karen Sparck Jones Computer Laboratory, University of Cambridge Corn Exchange Street, Cambridge CB2 3QG, England INTRODUCTION Natural Language and Databases has been a common panel topic for some years, partly because it has been an active area of work, but more importantly, because it has been widely assumed that database access is a good test environment for language research. I thought the time had come to look again at this assumption, and that it would be useful, for COLING 84, to do this. I therefore invited the members of the Panel to speak to the proposition (developed below) that database query is no longer a good, let alone the best, test environment for language processing research, because it is insufficiently demanding in its linguistic aspects and too idiosyncratically demanding in its non-lin | 1984 | 39 |
A MULTIDIMENSIONAL TOOL FOR DISCOURSE ANALYSIS (UN OUTIL MULTIDIMENSIONNEL DE L'ANALYSE DU DISCOURS) J. CHAUCHE Laboratoire de Traitement de l'Information I.U.T. LE HAVRE Place Robert Schuman - 76610 LE HAVRE FRANCE & C.E.L.T.A. 23, Boulevard Albert Ier - 54000 NANCY FRANCE ABSTRACT: Automatic discourse processing presupposes algorithmic and computational treatment. Several methods make it possible to approach this aspect. Using a general-purpose programming language (for example PL/I) or a more specialized one (for example LISP) represents the first approach. At the opposite end, using specialized software avoids the algorithmic study required in the first case and concentrates that study on the truly specific aspects of this processing. The choices that led to the definition of the SYGMART system are presented here. The multidimensional aspect is analyzed from the conceptual point of view and makes it possible to situate this realization with respect to | 1984 | 4 |
THERE STILL IS GOLD IN THE DATABASE MINE Madeleine Bates BBN Laboratories 10 Moulton Street Cambridge, MA 02238 Let me state clearly at the outset that I disagree with the premise that the problem of interfacing to database systems has outlived its usefulness as a productive environment for NL research. But I can take this stand strongly only by being very liberal in defining both "natural language interface" and "database systems". same as "Are there any vice presidents who are either male or female". This same system, when asked for all the Michigan doctors and Pennsylvania dentists, produced a list of all the people who were either doctors or dentists and who lived in either Michigan or Pennsylvania. This is the state of our art? Instead of assuming that the problem is one of using typed English to access and/or update a file or files in a sing | 1984 | 40 |
Is There Natural Language after Data Bases? Jaime G. Carbonell Computer Science Department Carnegie-Mellon University Pittsburgh, PA 15213 1. Why Not Data Base Query? The undisputed favorite application for natural language interfaces has been data base query. Why? The reasons range from the relative simplicity of the task, including shallow semantic processing, to the potential real-world utility of the resultant system. Because of such reasons, the data base query task was an excellent paradigmatic problem for computational linguistics, and for the very same reasons it is now time for the field to abandon its protective cocoon and progress beyond this rather limiting task. But, one may ask, what task shall then become the new paradigmatic problem? Alas, such a question presupposes that a single, universally acceptable, syntactically and semantically challenging task exists. I will argue that better progress c | 1984 | 41 |
Panel on Natural Language and Databases Daniel P. Flickinger Computer Research Center Hewlett-Packard Company 1501 Page Mill Road Palo Alto, California 94304 USA While I disagree with the proposition that database query has outlived its usefulness as a test environment for natural language processing (for reasons that I give below), I believe there are other reasonable tasks which can also spur new research in NL processing. In particular, I will suggest that the task of providing a natural language inter- face to a rich programming environment offers a convenient yet challenging extension of work already being done with database query. First I recite some of the merits of continuing research on natural language within the confines of constructing an interface for ordinary databases. One advantage is that the speed of processing is not of overwhelming importance in this application, since one who requests information from a | 1984 | 42 |
Natural Language for Expert Systems: Comparisons with Database Systems Kathleen R. McKeown Department of Computer Science Columbia University New York, N.Y. 10027 1 Introduction Do natural language database systems still provide a valuable environment for further work on natural language processing? Are there other systems which provide the same hard environment for testing, but allow us to explore more interesting natural language questions? In order to answer no to the first question and yes to the second (the position taken by our panel's chair), there must be an interesting language problem which is more naturally studied in some other system than in the database system. We are currently working on natural language for expert systems at Columbia and thus, expert systems provide a natural alternative environment to compare against the database system. The relatively | 1984 | 43 |
REPRESENTING KNOWLEDGE ABOUT KNOWLEDGE AND MUTUAL KNOWLEDGE Saïd Soulhi Equipe de Comprehension du Raisonnement Naturel LSI - UPS 118 route de Narbonne 31062 Toulouse - FRANCE ABSTRACT In order to represent speech acts in a multi-agent context, we choose a knowledge representation based on the modal logic of knowledge KT4 which is defined by Sato. Such a formalism allows us to reason about knowledge and represent knowledge about knowledge, the notions of truth value and of definite reference. I INTRODUCTION Speech act representation and language planning require that the system can reason about intensional concepts like knowledge and belief. A problem solver must understand the concept of knowledge and know for example what knowledge it needs to achieve specific goals. Our assumption is that a theory of language is part of a theory of action (Austin [4]). Reasoning about knowledge enc | 1984 | 44 |
UNDERSTANDING PRAGMATICALLY ILL-FORMED INPUT Sandra Carberry Department of Computer Science University of Delaware Newark, Delaware 19711 USA ABSTRACT An utterance may be syntactically and semantically well-formed yet violate the pragmatic rules of the world model. This paper presents a context-based strategy for constructing a cooperative but limited response to pragmatically ill-formed queries. Suggestion heuristics use a context model of the speaker's task inferred from the preceding dialogue to propose revisions to the speaker's ill-formed query. Selection heuristics then evaluate these suggestions based upon semantic and relevance criteria. I INTRODUCTION An utterance may be syntactically and semantically well-formed yet violate the pragmatic rules of the world model. The system will therefore view it as "ill-formed" even if a native speaker finds it perfectly normal. This pheno | 1984 | 45 |
Referring as Requesting Philip R. Cohen Artificial Intelligence Center SRI International and Center for the Study of Language and Information Stanford University 1. Introduction Searle [14] has argued forcefully that referring is a speech act; that people refer, not just expressions. This paper considers what kind of speech act referring might be. I propose a generalization of Searle's "propositional" act of referring that treats it as an illocutionary act, a request, and argue that the propositional act of referring is unnecessary. The essence of the argument is as follows: First, I consider Searle's definition of the propositional act of referring (which I term the PAA, for Propositional Act Account). This definition is found inadequate to deal with various utterances in discourse used for the sole purpose of referring. Although the relevance of such utterances to the propositional act has been defined away by Searle, it | 1984 | 46 |
Entity-Oriented Parsing Philip J. Hayes Computer Science Department, Carnegie-Mellon University Pittsburgh, PA 15213, USA Abstract An entity-oriented approach to restricted-domain parsing is proposed. In this approach, the definitions of the structure and surface representation of domain entities are grouped together. Like semantic grammar, this allows easy exploitation of limited domain semantics. In addition, it facilitates fragmentary recognition and the use of multiple parsing strategies, and so is particularly useful for robust recognition of extragrammatical input. Several advantages from the point of view of language definition are also noted. Representative samples from an entity-oriented language definition are presented, along with a control structure for an entity-oriented parser, some parsing strategies that use the control structure, and worked examples of parses. A parser incorporating the control structure | 1984 | 47 |
Combining Functionality and Object-Orientedness for Natural Language Processing Toyoaki Nishida and Shuji Doshita Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606, JAPAN Abstract This paper proposes a method for organizing linguistic knowledge in both a systematic and flexible fashion. We introduce a purely applicative language (PAL) as an intermediate representation and an object-oriented computation mechanism for its interpretation. PAL enables the establishment of a principled and well-constrained method of interaction among lexicon-oriented linguistic modules. The object-oriented computation mechanism provides a flexible means of abstracting modules and sharing common knowledge. 1. Introduction The goal of this paper is to elaborate a domain-independent way of organizing linguistic knowledge, as a step towards a cognitive processor consis | 1984 | 48 |
USE OF HEURISTIC KNOWLEDGE IN CHINESE LANGUAGE ANALYSIS Yiming Yang, Toyoaki Nishida and Shuji Doshita Department of Information Science, Kyoto University, Sakyo-ku, Kyoto 606, JAPAN ABSTRACT This paper describes an analysis method which uses heuristic knowledge to find local syntactic structures of Chinese sentences. We call it a preprocessing, because we use it before we do global syntactic structure analysis [1] of the input sentence. Our purpose is to guide the global analysis through the search space, to avoid unnecessary computation. To realize this, we use a set of special words that appear in commonly used patterns in Chinese. We call them "characteristic words". They enable us to pick out fragments that might figure in the syntactic structure of the sentence. Knowledge concerning the use of characteristic words enables us to rate | 1984 | 49 |
A STOCHASTIC APPROACH TO SENTENCE PARSING Tetsunosuke Fujisaki Science Institute, IBM Japan, Ltd. No. 36 Kowa Building 5-19 Sanbancho, Chiyoda-ku Tokyo 102, Japan ABSTRACT A description will be given of a procedure to assign the most likely probabilities to each of the rules of a given context-free grammar. The grammar developed by S. Kuno at Harvard University was picked as the basis and was successfully augmented with rule probabilities. A brief exposition of the method with some preliminary results, when used as a device for disambiguating the parsing of English texts picked from a natural corpus, will be given. 1. INTRODUCTION To prepare a grammar which can parse arbitrary sentences taken from a natural corpus is a difficult task. One of the most serious problems is the potentially unbounded number of ambiguities. Pure syntactic analysis with an imprudent grammar will sometimes result in hundreds of parses. W | 1984 | 5 |
THE DESIGN OF THE KERNEL ARCHITECTURE FOR THE EUROTRA* SOFTWARE R.L. Johnson**, U.M.I.S.T., P.O. Box 88, Manchester M60 1QD, U.K. S. Krauwer, Rijksuniversiteit, Trans 14, 3512 JK Utrecht, Holland M.A. Rosner, ISSCO, University of Geneva, 1211 Geneve 4, Switzerland G.B. Varile, Commission of the European Communities, P.O. Box 1907, Luxembourg ABSTRACT Starting from the assumption that machine translation (MT) should be based on theoretically sound grounds, we argue that, given the state of the art, the only viable solution for the designer of software tools for MT is to provide the linguists building the MT system with a generator of highly specialized, problem oriented systems. We propose that such theory sensitive systems be generated automatically by supplying a set of definitions to a kernel software, of which we give an informal description in the paper. We give a formal functional definition of its architecture an | 1984 | 50 |
MACHINE TRANSLATION: WHAT TYPE OF POST-EDITING ON WHAT TYPE OF DOCUMENTS FOR WHAT TYPE OF USERS Anne-Marie LAURIAN Centre National de la Recherche Scientifique Université de la Sorbonne Nouvelle - Paris III 19 rue des Bernardins, 75005 Paris (France) ABSTRACT Various typologies of technical and scientific texts have already been proposed by authors involved in multilingual transfer problems. They were usually aimed at a better knowledge of the criteria for deciding if a document has to be or can be machine translated. Such a typology could also lead to a better knowledge of the typical errors occurring, and so lead to more appropriate post-editing, as well as to improvements in the system. Raw translations being usable, as they quite often are for rapid information needs, it is important to draw the limits between a style adequate for rapid information, and an elegant, h | 1984 | 51 |
Simplifying Deterministic Parsing Alan W. Carter Department of Computer Science University of British Columbia Vancouver, B.C. V6T 1W5 Michael J. Freiling Department of Computer Science Oregon State University Corvallis, OR 97331 ABSTRACT This paper presents a model for deterministic parsing which was designed to simplify the task of writing and understanding a deterministic grammar. While retaining structures and operations similar to those of Marcus's PARSIFAL parser [Marcus 80], the grammar language incorporates the following changes. (1) The use of productions operating in parallel has essentially been eliminated and instead the productions are organized into sequences. Not only does this improve the understandability of the grammar, it is felt that this organization corresponds more closely to the task of performing the sequence of buffer transformations and attachments required to parse the most common constituent | 1984 | 52 |
DEALING WITH CONJUNCTIONS IN A MACHINE TRANSLATION ENVIRONMENT Xiuming HUANG Institute of Linguistics Chinese Academy of Social Sciences Beijing, China* ABSTRACT The paper presents an algorithm, written in PROLOG, for processing English sentences which contain either Gapping, Right Node Raising (RNR) or Reduced Conjunction (RC). The DCG (Definite Clause Grammar) formalism (Pereira & Warren 80) is adopted. The algorithm is highly efficient and capable of processing a full range of coordinate constructions containing any number of coordinate conjunctions ('and', 'or', and 'but'). The algorithm is part of an English-Chinese machine translation system which is in the course of construction. 0 INTRODUCTION Theoretical linguists have made a considerable investigation into coordinate constructions (Ross 67a, Hankamer 73, Schachter 77, Sag 77, Gazdar 81 and Sobin 82, to | 1984 | 53 |
ON PARSING PREFERENCES Lenhart K. Schubert Department of Computing Science University of Alberta, Edmonton Abstract. It is argued that syntactic preference principles such as Right Association and Minimal Attachment are unsatisfactory as usually formulated. Among the difficulties are: (1) dependence on ill-specified or implausible principles of parser operation; (2) dependence on questionable assumptions about syntax; (3) lack of provision, even in principle, for integration with semantic and pragmatic preference principles; and (4) apparent counterexamples, even when discounting (1)-(3). A possible approach to a solution is sketched. 1. Some preference principles The following are some standard kinds of sentences illustrating the role of syntactic preferences. (1) John bought the book which I had selec | 1984 | 54 |
A COMPUTATIONAL THEORY OF THE FUNCTION OF CLUE WORDS IN ARGUMENT UNDERSTANDING Robin Cohen Department of Computer Science University of Toronto Toronto, CANADA M5S 1A4 ABSTRACT This paper examines the use of clue words in argument dialogues. These are special words and phrases directly indicating the structure of the argument to the hearer. Two main conclusions are drawn: 1) clue words can occur in conjunction with coherent transmissions, to reduce processing of the hearer; 2) clue words must occur with more complex forms of transmission, to facilitate recognition of the argument structure. Interpretation rules to process clues are proposed. In addition, a relationship between use of clues and complexity of processing is suggested for the case of exceptional transmission strategies. I Overview In argument dialogues, one often encounters | 1984 | 55 |
CONTROL STRUCTURES AND THEORIES OF INTERACTION IN SPEECH UNDERSTANDING SYSTEMS E.J. Briscoe and B.K. Boguraev University of Cambridge, Computer Laboratory Corn Exchange Street, Cambridge CB2 3QG, England ABSTRACT In this paper, we approach the problem of organisation and control in automatic speech understanding systems firstly, by presenting a theory of the non-serial interactions necessary between two processors in the system; namely, the morphosyntactic and the prosodic, and secondly, by showing how, when generalised, this theory allows one to specify a highly efficient architecture for a speech understanding system with a simple control structure and genuinely independent components. The theory of non-serial interactions we present predicts that speech is temporally organised in a very specific way; that is, the system would not function effectively if the temporal dist | 1984 | 56 |
Analysis Grammar of Japanese in the Mu-Project - A Procedural Approach to Analysis Grammar - Jun-ichi TSUJII, Jun-ichi NAKAMURA and Makoto NAGAO Department of Electrical Engineering Kyoto University Kyoto, JAPAN Abstract Analysis grammar of Japanese in the Mu-project is presented. It is emphasized that rules expressing constraints on single linguistic structures and rules for selecting the most preferable readings are completely different in nature, and that rules for selecting preferable readings should be utilized in analysis grammars of practical MT systems. It is also claimed that procedural control is essential in integrating such rules into a unified grammar. Some sample rules are given to make the points of discussion clear and concrete. 1. Introduction The Mu-Project is a Japanese national project supp | 1984 | 57 |
LEXICON-GRAMMAR AND THE SYNTACTIC ANALYSIS OF FRENCH Maurice Gross Laboratoire d'Automatique Documentaire et Linguistique University of Paris 7 2 place Jussieu 75251 Paris CEDEX 05 France ABSTRACT A lexicon-grammar is constituted of the elementary sentences of a language. Instead of considering words as basic syntactic units to which grammatical information is attached, we use simple sentences (subject-verb-objects) as dictionary entries. Hence, a full dictionary item is a simple sentence with a description of the corresponding distributional and transformational properties. The systematic study of French has led to an organization of its lexicon-grammar based on three main components: - the lexicon-grammar of free sentences, that is, of sentences whose verb imposes selectional restrictions on its subject and complements (e.g. to fall, to eat, to watch), - the lexicon-gramm | 1984 | 58 |
Building a Large Knowledge Base for a Natural Language System Jerry R. Hobbs Artificial Intelligence Center SRI International and Center for the Study of Language and Information Stanford University Abstract A sophisticated natural language system requires a large knowledge base. A methodology is described for constructing one in a principled way. Facts are selected for the knowledge base by determining what facts are linguistically presupposed by a text in the domain of interest. The facts are sorted into clusters, and within each cluster they are organized according to their logical dependencies. Finally, the facts are encoded as predicate calculus axioms. 1. The Problem I It is well-known that the interpretation of natural language discourse can require arbitrarily detailed world knowledge and that a sophisticated natural language system must have a large knowledge base. But heretofore, the knowledge bases in natu | 1984 | 59 |
BOUNDED CONTEXT PARSING AND EASY LEARNABILITY Robert C. Berwick Room 820, MIT Artificial Intelligence Lab Cambridge, MA 02139 ABSTRACT Natural languages are often assumed to be constrained so that they are either easily learnable or parsable, but few studies have investigated the connection between these two "functional" demands. Without a formal model of parsability or learnability, it is difficult to determine which is more "dominant" in fixing the properties of natural languages. In this paper we show that if we adopt one precise model of "easy" parsability, namely, that of bounded context parsability, and a precise model of "easy" learnability, namely, that of degree 2 learnability, then we can show that certain families of grammars that meet the bounded context parsability condition will also be degree 2 learnable. Some implications of this result for learning in other subsystems of linguistic know | 1984 | 6 |
LINGUISTICALLY MOTIVATED DESCRIPTIVE TERM SELECTION K. Sparck Jones and J.I. Tait* Computer Laboratory, University of Cambridge Corn Exchange Street, Cambridge CB2 3QG, U.K. ABSTRACT A linguistically motivated approach to indexing, that is the provision of descriptive terms for texts of any kind, is presented and illustrated. The approach is designed to achieve good, i.e. accurate and flexible, indexing by identifying index term sources in the meaning representations built by a powerful general purpose analyser, and providing a range of text expressions constituting semantic and syntactic variants for each term concept. Indexing is seen as a legitimate form of shallow text processing, but one requiring serious semantically based language processing, particularly to obtain well-founded complex terms, which is the main objective of the project described. The type of indexing strategy described is fu | 1984 | 60 |
INFERENCING ON LINGUISTICALLY BASED SEMANTIC STRUCTURES Eva Hajičová, Milena Hnátková Department of Applied Mathematics Faculty of Mathematics and Physics Charles University Malostranské n. 25 118 00 Praha 1, Czechoslovakia ABSTRACT The paper characterizes natural language inferencing in the TIBAQ method of question-answering, focussing on three aspects: (i) specification of the structures on which the inference rules operate, (ii) classification of the rules that have been formulated and implemented up to now, according to the kind of modification of the input structure the rules invoke, and (iii) discussion of some points in which a properly designed inference procedure may help the search of the answer, and vice versa. I SPECIFICATION OF THE INPUT STRUCTURES FOR INFERENCING A. Outline of the TIBAQ Method When the TIBAQ (Text-and-Inference Based Answering of Questions) project was designed | 1984 | 61 |
SEMANTIC RELEVANCE AND ASPECT DEPENDENCY IN A GIVEN SUBJECT DOMAIN Contents-driven algorithmic processing of fuzzy word meanings to form dynamic stereotype representations Burghard B. Rieger Arbeitsgruppe für mathematisch-empirische Systemforschung (MESY) German Department, Technical University of Aachen, Aachen, West Germany ABSTRACT Cognitive principles underlying the (re-)construction of word meaning and/or world knowledge structures are poorly understood yet. In a rather sharp departure from more orthodox lines of introspective acquisition of structural data on meaning and knowledge representation in cognitive science, an empirical approach is explored that analyses natural language data statistically, represents its numerical findings fuzzy-set theoretically, and interprets its intermediate constructs (stereotype meaning points) topologically as elements of semantic space | 1984 | 62 |
A Plan Recognition Model for Clarification Subdialogues Diane J. Litman and James F. Allen Department of Computer Science University of Rochester, Rochester, NY 14627 Abstract One of the promising approaches to analyzing task-oriented dialogues has involved modeling the plans of the speakers in the task domain. In general, these models work well as long as the topic follows the task structure closely, but they have difficulty in accounting for clarification subdialogues and topic change. We have developed a model based on a hierarchy of plans and metaplans that accounts for the clarification subdialogues while maintaining the advantages of the plan-based approach. 1. Introduction One of the promising approaches to analyzing task-oriented dialogues has involved modeling the plans of the speakers in the task domain. The earliest work in this area involved tracking the topic of a dialogue by tracking the progress | 1984 | 63 |
A COMPUTATIONAL THEORY OF DISPOSITIONS Lotfi A. Zadeh Computer Science Division University of California, Berkeley, California 94720, U.S.A. ABSTRACT Informally, a disposition is a proposition which is preponderantly, but not necessarily always, true. For example, birds can fly is a disposition, as are the propositions Swedes are blond and Spaniards are dark. An idea which underlies the theory described in this paper is that a disposition may be viewed as a proposition with implicit fuzzy quantifiers which are approximations to all and always, e.g., almost all, almost always, most, frequently, etc. For example, birds can fly may be interpreted as the result of suppressing the fuzzy quantifier most in the proposition most birds can fly. Similarly, young men like young women may be read as most young men like mostly young women. The process of transforming a disposition into a proposition is referred to as explicitation or restor | 1984 | 64 |
Using Focus to Generate Complex and Simple Sentences Marcia A. Derr Kathleen R. McKeown AT&T Bell Laboratories Murray Hill, NJ 07974 USA and Department of Computer Science Columbia University Department of Computer Science Columbia University New York, NY 10027 USA Abstract One problem for the generation of natural language text is determining when to use a sequence of simple sentences and when a single complex one is more appropriate. In this paper, we show how focus of attention is one factor that influences this decision and describe its implementation in a system that generates explanations for a student advisor expert system. The implementation uses tests on functional information such as focus of attention within the Prolog definite clause grammar formalism to determine when to use complex sentences, resulting in an efficient generator that has the same benefits as a functional grammar system. 1. Introduction | 1984 | 65 |
A RATIONAL RECONSTRUCTION OF THE PROTEUS SENTENCE PLANNER Graeme Ritchie Department of Artificial Intelligence University of Edinburgh, Hope Park Square Edinburgh EH8 9NW ABSTRACT A revised and more structured version of Davey's discourse generation program has been implemented, which constructs the underlying forms for sentences and clauses by using rules which annotate and segment the initial sequence of events in various ways. 1. The Proteus Program The text generation program designed and implemented by Davey (1974, 1978) achieved a high level of fluency in the generation of small paragraphs of English describing events in a limited domain (games of "tic-tac-toe"/"noughts-and-crosses"). Although that work was completed ten years ago, the performance is still impressive by current standards. The program could play a game of "noughts-and-crosses" with a user, then produce a fluent summary of what had ha | 1984 | 66 |
SOFTWARE TOOLS FOR THE ENVIRONMENT OF A COMPUTER AIDED TRANSLATION SYSTEM I Daniel BACHUT - Nelson VERASTEGUI IFCI GETA INPG, 46, av. Félix-Viallet Université de Grenoble 38031 Grenoble Cédex 38402 Saint-Martin-d'Hères FRANCE FRANCE ABSTRACT In this paper we will present three systems, ATLAS, THAM and VISULEX, which have been designed and implemented at GETA (Study Group for Machine Translation) in collaboration with IFCI (Institut de Formation et de Conseil en Informatique) as tools operating around the ARIANE-78 system. We will describe in turn the basic characteristics of each system, their possibilities, actual use, and performance. I - INTRODUCTION ARIANE-78 is a computer system designed to offer an adequate environment for constructing machine translation programs, for running them, and for (humanly) revising the rough translations produced by the computer. It has been used for a number of applications | 1984 | 67 |
DESIGN OF A MACHINE TRANSLATION SYSTEM FOR A SUBLANGUAGE Beat Buchmann, Susan Warwick, Patrick Shann Dalle Molle Institute for Semantic and Cognitive Studies University of Geneva Switzerland ABSTRACT This paper describes the design of a prototype machine translation system for a sublanguage of job advertisements. The design is based on the hypothesis that specialized linguistic subsystems may require special computational treatment and that therefore a relatively shallow analysis of the text may be sufficient for automatic translation of the sublanguage. This hypothesis and the desire to minimize computation in the transfer phase has led to the adoption of a flat tree representation of the linguistic data. 1. INTRODUCTION The most promising results in computational linguistics and specifically in Machine Translation (MT) have been obtained where applications were limited to languages for special purposes and to res | 1984 | 68 |
Grammar Writing System (GRADE) of Mu-Machine Translation Project and its Characteristics Jun-ichi NAKAMURA, Jun-ichi TSUJII, Makoto NAGAO Department of Electrical Engineering Kyoto University Sakyo, Kyoto, Japan ABSTRACT A powerful grammar writing system has been developed. This grammar writing system is called GRADE (GRAmmar DEscriber). GRADE allows a grammar writer to write grammars including analysis, transfer, and generation using the same expression. GRADE has powerful grammar writing facility. GRADE allows a grammar writer to control the process of a machine translation. GRADE also has a function to use grammatical rules written in a word dictionary. GRADE has been used for more than a year as the software of the machine translation project from Japanese into English, which is supported by the Japanese Government and called Mu-project. 1. Objectives | 1984 | 69 |
THE REPRESENTATION OF CONSTITUENT STRUCTURES FOR FINITE-STATE PARSING D. Terence Langendoen Yedidyah Langsam Departments of English and Computer & Information Science Brooklyn College of the City University of New York Brooklyn, New York 11210 U.S.A. ABSTRACT A mixed prefix-postfix notation for representations of the constituent structures of the expressions of natural languages is proposed, which are of limited degree of center embedding if the original expressions are noncenter-embedding. The method of constructing these representations is applicable to expressions with center embedding, and results in representations which seem to reflect the ways in which people actually parse those expressions. Both the representations and their interpretations can be computed from the expressions from left to right by finite-state devices. The class of acceptable expressions of a natural language L all manifest no more th | 1984 | 7 |
A DISCOVERY PROCEDURE FOR CERTAIN PHONOLOGICAL RULES Mark Johnson Linguistics, UCSD. ABSTRACT Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms. 1. INTRODUCTION For generative grammarians, such as Chomsky (1965), a primary problem of linguistics is to explain how the language learner can acquire the grammar of his or her language on the basis of the limited evidence available to him or her. Chomsky introduced the idealization of instantaneous acquisition, which I adopt here, in order to model the language acquisition device as a function from primary linguistic data to possible grammars, rather than as a process. Assuming that the set of possible human la | 1984 | 70 |
WHAT NOT TO SAY Jan Fornell Department of Linguistics & Phonetics Lund University Helgonabacken 12, Lund, Sweden ABSTRACT A problem with most text production and language generation systems is that they tend to become rather verbose. This may be due to neglect of the pragmatic factors involved in communication. In this paper, a text production system, COMMENTATOR, is described and taken as a starting point for a more general discussion of some problems in Computational Pragmatics. A new line of research is suggested, based on the concept of unification. I COMMENTATOR A. The original model 1. General purpose The original version of Commentator was written in BASIC on a small micro computer. It was intended as a generator of text (rather than just sentences), but has in fact proved quite useful, in a some | 1984 | 71 |
WHEN IS THE NEXT ALPAC REPORT DUE? Margaret KING Dalle Molle Institute for Semantic and Cognitive Studies University of Geneva Switzerland Machine translation has a somewhat chequered history. There were already proposals for automatic translation systems in the 30's, but it was not until after the second world war that real enthusiasm led to heavy funding and unrealistic expectations. Traditionally, the start of intensive work on machine translation is taken as being a memorandum of Warren Weaver, then Director of the Natural Sciences Division of the Rockefeller Foundation, in 1949. In this memorandum, called 'Translation', Weaver took stock of earlier work done by Booth and Richens. He likened the problem of machine translation to the problem of code breaking, for which digital computers had been used with considerable success: "It is very tempting to say that a book written in Chinese is simply a book written in E | 1984 | 72 |
LR Parsers for Natural Languages Masaru Tomita Computer Science Department Carnegie-Mellon University Pittsburgh, PA 15213 Abstract MLR, an extended LR parser, is introduced, and its application to natural language parsing is discussed. An LR parser is a shift-reduce parser which is deterministically guided by a parsing table. A parsing table can be obtained automatically from a context-free phrase structure grammar. LR parsers cannot manage ambiguous grammars such as natural language grammars, because their parsing tables would have multiply-defined entries, which precludes deterministic parsing. MLR, however, can handle multiply-defined entries, using a dynamic programming method. When an input sentence is ambiguous, the MLR parser produces all possible parse trees without parsing any part of the input sentence more than once in the same way, despite the fact that the par | 1984 | 73 |
LFG System in Prolog Hideki Yasukawa The Second Laboratory Institute for New Generation Computer Technology (ICOT) Tokyo, 108, Japan ABSTRACT In order to design and maintain a large scale grammar, the formal system for representing syntactic knowledge should be provided. Lexical Functional Grammar (LFG) [Kaplan, Bresnan 82] is a powerful formalism for that purpose. In this paper, the Prolog implementation of LFG system is described. Prolog provides good tools for the implementation of LFG. LFG can be translated into DCG [Pereira, Warren 80] and functional structures (f-structures) are generated during the parsing process. I INTRODUCTION The fundamental purposes of syntactic analysis are to check the grammaticality and to clarify the mapping between semantic structures and syntactic constituents. DCG provides tools for fulfi | 1984 | 74 |
The Design of a Computer Language for Linguistic Information Stuart M. Shieber Artificial Intelligence Center SRI International and Center for the Study of Language and Information Stanford University Abstract A considerable body of accumulated knowledge about the design of languages for communicating information to computers has been derived from the subfields of programming language design and semantics. It has been the goal of the PATR group at SRI to utilize a relevant portion of this knowledge in implementing tools to facilitate communication of linguistic information to computers. The PATR-II formalism is our current computer language for encoding linguistic information. This paper, a brief overview of that formalism, attempts to explicate our design decisions in terms of a set of properties that effective computer languages should incorporate. I. Introduction I The goal of natural-language processing research c | 1984 | 75 |
Discourse Structures for Text Generation William C. Mann USC/Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695 Abstract Text generation programs need to be designed around a theory of text organization. This paper introduces Rhetorical Structure Theory, a theory of text structure in which each region of text has a central nuclear part and a number of satellites related to it. A natural text is analyzed as an example, the mechanisms of the theory are identified, and their formalization is discussed. In a comparison, Rhetorical Structure Theory is found to be more comprehensive and more informative about text function than the text organization parts of previous text generation systems. 1. The Text Organization Problem Text generation is already established as a research area within computational linguistics. Although so far there have been only a few research computer programs that can generate | 1984 | 76 |
Semantic Rule Based Text Generation Michael L. Mauldin Department of Computer Science Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 USA ABSTRACT This paper presents a semantically oriented, rule based method for single sentence text generation and discusses its implementation in the Kafka generator. This generator is part of the XCALIBUR natural language interface developed at CMU to provide natural language facilities for a wide range of expert systems and data bases. Kafka takes as input the knowledge representation used in XCALIBUR system and incrementally transforms it first into conceptual dependency graphs and then into English. 1. Introduction Transformational text generators have traditionally emphasized syntactic processing. One example is Bates' ILIAD system which is based on Chomsky's theory of transformational generative grammar [1]. Another is Mann's Nigel program, based on the systemic grammar of | 1984 | 77 |
Controlling Lexical Substitution in Computer Text Generation 1 Robert Granville MIT Laboratory for Computer Science 545 Technology Square Cambridge, Massachusetts 02139 Abstract This report describes Paul, a computer text generation system designed to create cohesive text through the use of lexical substitutions. Specifically, this system is designed to deterministically choose between pronominalization, superordinate substitution, and definite noun phrase reiteration. The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements. 1. Introduction This report describes Paul, a computer text generation system designed to create cohesive text through the use of lexical substitutions. Specifically, this system is designed to deterministically choose between pronomina | 1984 | 78 |
UNDERSTANDING OF JAPANESE IN AN INTERACTIVE PROGRAMMING SYSTEM Kenji Sugiyama I, Masayuki Kameda, Kouji Akiyama, Akifumi Makinouchi Software Laboratory Fujitsu Laboratories Ltd. 1015 Kamikodanaka, Nakahara-ku, Kawasaki 211, JAPAN ABSTRACT KIPS is an automatic programming system which generates standardized business application programs through interactive natural language dialogue. KIPS models the program under discussion and the content of the user's statements as organizations of dynamic objects in the object-oriented programming sense. This paper describes the statement-model and the program-model, their use in understanding Japanese program specifications, and how they are shaped by the linguistic singularities of Japanese input sentences. I INTRODUCTION KIPS, an interactive natural language programming system, that generates standardized business application programs through interactive natural language dialogue, | 1984 | 79 |
Features and Values Lauri Karttunen University of Texas at Austin Artificial Intelligence Center SRI International and Center for the Study of Language and Information Stanford University Abstract The paper discusses the linguistic aspects of a new general purpose facility for computing with features. The program was developed in connection with the course I taught at the University of Texas in the fall of 1983. It is a generalized and expanded version of a system that Stuart Shieber originally designed for the PATR-II project at SRI in the spring of 1983 with later modifications by Fernando Pereira and me. Like its predecessors, the new Texas version of the "DG {directed graph}" package is primarily intended for representing morphological and syntactic information but it may turn out to be very useful for semantic representations too. 1. Introduction Most schools of linguistics use some type of feature nota | 1984 | 8 |
TWO-WAY FINITE AUTOMATA AND DEPENDENCY GRAMMAR: A PARSING METHOD FOR INFLECTIONAL FREE WORD ORDER LANGUAGES I Esa Nelimarkka, Harri Jäppinen and Aarno Lehtola Helsinki University of Technology Helsinki, Finland ABSTRACT This paper presents a parser of an inflectional free word order language, namely Finnish. Two-way finite automata are used to specify a functional dependency grammar and to actually parse Finnish sentences. Each automaton gives a functional description of a dependency structure within a constituent. Dynamic local control of the parser is realized by augmenting the automata with simple operations to make the automata, associated with the words of an input sentence, activate one another. I INTRODUCTION This paper introduces a computational model for the description and analysis of an inflectional free word order language, namely Finnish. We argue | 1984 | 80 |
INTERRUPTABLE TRANSITION NETWORKS Sergei Nirenburg Colgate University Chagit Attiya Hebrew University of Jerusalem ABSTRACT A specialized transition network mechanism, the interruptable transition network (ITN) is used to perform the last of three stages in a multiprocessor syntactic parser. This approach can be seen as an exercise in implementing a parsing procedure of the active chart parser family. Most of the ATN parser implementations use the left-to-right top-down chronological backtracking control structure (cf. Bates, 1978 for discussion). The control strategies of the active chart type permit a blend of bottom-up and top-down parsing at the expense of time and space overhead (cf. Kaplan, 1973). The environment in which the interruptable transit | 1984 | 81 |
AUTOMATIC CONSTRUCTION OF DISCOURSE REPRESENTATION STRUCTURES Franz Guenthner Universität Tübingen Wilhelmstr. 50 D-7400 Tübingen, FRG Hubert Lehmann IBM Deutschland GmbH Heidelberg Scientific Center Tiergartenstr. 15 D-6900 Heidelberg, FRG Abstract Kamp's Discourse Representation Theory is a major breakthrough regarding the systematic translation of natural language discourse into logical form. We have therefore chosen to marry the User Specialty Languages System, which was originally designed as a natural language frontend to a relational database system, with this new theory. In the paper we try to show taking - for the sake of simplicity - Kamp's fragment of English how this is achieved. The research reported is going on in the context of the project Linguistics and Logic Based Legal Expert System undertaken jointly by the IBM Heidelberg Scientific Center and the Universität Tübingen. 1 | 1984 | 82 |
TEXTUAL EXPERTISE IN WORD EXPERTS: AN APPROACH TO TEXT PARSING BASED ON TOPIC/COMMENT MONITORING * Udo Hahn Universitaet Konstanz Informationswissenschaft Projekt TOPIC Postfach 5560 D-7750 Konstanz 1, West Germany ABSTRACT In this paper prototype versions of two word experts for text analysis are dealt with which demonstrate that word experts are a feasible tool for parsing texts on the level of text cohesion as well as text coherence. The analysis is based on two major knowledge sources: context information is modelled in terms of a frame knowledge base, while the co-text keeps record of the linear sequencing of text analysis. The result of text parsing consists of a text graph reflecting the thematic organization of topics in a text. 1. Word Experts as a Text Parsing Device This paper outlines | 1984 | 83 |
SOME LINGUISTIC ASPECTS FOR AUTOMATIC TEXT UNDERSTANDING Yutaka Kusanagi Institute of Literature and Linguistics University of Tsukuba Sakura-mura, Ibaraki 305 JAPAN ABSTRACT This paper proposes a system of mapping classes of syntactic structures as instruments for automatic text understanding. The system illustrated in Japanese consists of a set of verb classes and information on mapping them together with noun phrases, tense and aspect. The system, having information on direction of possible inferences between the verb classes with information on tense and aspect, is supposed to be utilized for reasoning in automatic text understanding. I. INTRODUCTION The purpose of this paper is to propose a system of mapping classes of syntactic structures as instruments for automatic text understanding. The system con- | 1984 | 84 |
A SYNTACTIC APPROACH TO DISCOURSE SEMANTICS Livia Polanyi and Remko Scha English Department University of Amsterdam Amsterdam The Netherlands ABSTRACT A correct structural analysis of a discourse is a prerequisite for understanding it. This paper sketches the outline of a discourse grammar which acknowledges several different levels of structure. This grammar, the "Dynamic Discourse Model", uses an Augmented Transition Network parsing mechanism to build a representation of the semantics of a discourse in a stepwise fashion, from left to right, on the basis of the semantic representations of the individual clauses which constitute the discourse. The intermediate states of the parser model the intermediate states of the social situation which generates the discourse. The paper attempts to demonstrate that a discourse may indeed be viewed as constructed by means of sequencing and recursive nesting of discourse con | 1984 | 85 |
DEALING WITH INCOMPLETENESS OF LINGUISTIC KNOWLEDGE IN LANGUAGE TRANSLATION TRANSFER AND GENERATION STAGE OF MU MACHINE TRANSLATION PROJECT Makoto Nagao, Toyoaki Nishida and Jun-ichi Tsujii Department of Electrical Engineering Kyoto University Sakyo-ku, Kyoto 606, JAPAN I. INTRODUCTION Linguistic knowledge usable for machine translation is always imperfect. We cannot be free from the uncertainty of knowledge we have for machine translation. Especially at the transfer stage of machine translation, the selection of target language expression is rather subjective and optional. Therefore the linguistic contents of machine translation system always fluctuate, and make gradual progress. The system should be designed to allow such constant change and improvements. This paper explains the details of the transfer and generation stages of Japanese-to-English system of the machine translation project by the Japanese | 1984 | 86 |
LEXICAL SEMANTICS IN HUMAN-COMPUTER COMMUNICATION Jarrett Rosenberg Xerox Office Systems Division 3333 Coyote Hill Road Palo Alto, CA 94304 USA ABSTRACT Most linguistic studies of human-computer communication have focused on the issues of syntax and discourse structure. However, another interesting and important area is the lexical semantics of command languages. The names that users and system designers give the objects and actions of a computer system can greatly affect its usability, and the lexical issues involved are as complicated as those in natural languages. This paper presents an overview of the various studies of naming in computer systems, examining such issues as suggestiveness, memorability, descriptions of categories, and the use of non-words as names. A simple featural framework for the analysis of these phenomena is presented. 0. Introduction Most research on the l | 1984 | 87 |
A Response to the Need for Summary Responses J.K. Kalita, M.J. Colbourn + and G.I. McCalla Department of Computational Science University of Saskatchewan Saskatoon, Saskatchewan, S7N 0W0 CANADA Abstract In this paper we argue that natural language interfaces to databases should be able to produce summary responses as well as listing actual data. We describe a system (incorporating a number of heuristics and a knowledge base built on top of the database) that has been developed to generate such summary responses. It is largely domain-independent, has been tested on many examples, and handles a wide variety of situations where summary responses would be useful. 1. Introduction For over a decade research has been ongoing into the diverse and complex issues involved in developing smart natural language interfaces to database systems. Pioneering front-end systems such as PLANES [15], REQUEST [12], TORUS [11] and RENDEZ | 1984 | 88 |
Coping with Extragrammaticality Jaime G. Carbonell and Philip J. Hayes Computer Science Department, Carnegie-Mellon University Pittsburgh, PA 15213, USA Abstract 1 Practical natural language interfaces must exhibit robust behaviour in the presence of extragrammatical user input. This paper classifies different types of grammatical deviations and related phenomena at the lexical and sentential levels, discussing recovery strategies tailored to specific phenomena in the classification. Such strategies constitute a tool chest of computationally tractable methods for coping with extragrammaticality in restricted domain natural language. Some of the strategies have been tested and proven viable in existing parsers. 1. Introduction Any robust natural language interface must be capable of processing input utterances that deviate from its grammatical and semantic expectations. Many researchers have made this observati | 1984 | 89 |
APPLICATIONS OF A LEXICOGRAPHICAL DATA BASE FOR GERMAN Wolfgang Teubert Institut für deutsche Sprache Friedrich-Karl-Str. 12 6800 Mannheim 1, West Germany ABSTRACT The Institut für deutsche Sprache recently has begun setting up a LExicographical DAta Base for German (LEDA). This data base is designed to improve efficiency in the collection, analysis, ordering and description of language material by facilitating access to textual samples within corpora and to word articles within machine readable dictionaries, and by providing a frame to store results of lexicographical research for further processing. LEDA thus consists of the three components Text Bank, Dictionary Bank and Result Bank and serves as a tool to support monolingual German dictionary projects at the Institute and elsewhere. | 1984 | 9 |
Correcting Object-Related Misconceptions: How Should The System Respond? † Kathleen F. McCoy Department of Computer & Information Science University of Pennsylvania Philadelphia, PA 19104 Abstract This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class, sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user. 1. Introduction A major area of AI research has been the development o | 1984 | 90 |
AN ALGORITHM FOR IDENTIFYING COGNATES BETWEEN RELATED LANGUAGES Jacques B.M. Guy Linguistics Department (RSPacS) Australian National University GPO Box 4, Canberra 2601 AUSTRALIA ABSTRACT The algorithm takes as only input a list of words, preferably but not necessarily in phonemic transcription, in any two putatively related languages, and sorts it into decreasing order of probable cognation. The processing of a 250-item bilingual list takes about five seconds of CPU time on a DEC KL1091, and requires 56 pages of core memory. The algorithm is given no information whatsoever about the phonemic transcription used, and even though cognate identification is carried out on the basis of a context-free one-for-one matching of individual characters, its cognation decisions are bettered by a trained linguist using more information | 1984 | 91 |
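The context-free one-for-one character matching mentioned in the abstract above can be pictured with a toy scorer. This is a deliberately simplified sketch, not Guy's actual algorithm (his cognation decisions are statistical); the scoring function and the French/Spanish word pairs are illustrative assumptions.

```python
def match_score(a: str, b: str) -> float:
    """Toy cognation score: fraction of aligned positions at which the two
    words share the same character (context-free one-for-one matching)."""
    if not a or not b:
        return 0.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def rank_by_cognation(pairs):
    """Sort a bilingual word list into decreasing order of probable cognation."""
    return sorted(pairs, key=lambda p: match_score(*p), reverse=True)

# Hypothetical French/Spanish pairs, purely for illustration.
pairs = [("chien", "perro"), ("nuit", "noche"), ("main", "mano")]
ranked = rank_by_cognation(pairs)  # ("main", "mano") ranks first
```

A real system would work over phonemic correspondences learned from the list itself rather than raw character identity, which is exactly where the trained linguist's extra information comes in.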
From HOPE en l'ESPERANCE On the Role of Computational Neurolinguistics in Cross-Language Studies 1 Helen M. Gigley Department of Computer Science University of New Hampshire Durham, NH 03824 ABSTRACT Computational neurolinguistics (CN) is an approach to computational linguistics which includes neurally-motivated constraints in the design of models of natural language processing. Furthermore, the knowledge representations included in such models must be supported with documented behavioral evidence, normal and pathological. This paper will discuss the contribution of CN models to the understanding of linguistic "competence" within recent research efforts to adapt HOPE (Gigley 1981; 1982a; 1982b; 1982c; 1983a), an implemented CN model for "understanding" English, to l'ESPERANCE, one which "understands" French. I. INTRODUCTION Computational Neurolinguistics (CN) incorporat | 1984 | 92 |
PANEL SESSION MACHINE-READABLE DICTIONARIES Donald E. Walker Natural-Language and Knowledge-Resource Systems SRI International Menlo Park, California 94025, USA and Artificial Intelligence and Information Science Research Bell Communications Research 445 South Street Morristown, New Jersey 07960, USA Abstract The papers in this panel consider machine-readable dictionaries from several perspectives: research in computational linguistics and computational lexicology, the development of tools for improving accessibility, the design of lexical reference systems for educational purposes, and applications of machine-readable dictionaries in information science contexts. As background and by way of introduction, a description is provided of a workshop on machine-readable dictionaries that was held at SRI International in April 1983. Introduction Dictionaries constitute a unique re | 1984 | 93 |
LEXICAL KNOWLEDGE BASES Robert A. Amsler Natural-Language and Knowledge-Resource Systems SRI International Menlo Park, California 94025, USA A lexical knowledge base is a repository of computational information about concepts intended to be generally useful in many application areas including computational linguistics, artificial intelligence, and information science. It contains information derived from machine-readable dictionaries, the full text of reference books, the results of statistical analyses of text usages, and data manually obtained from human world knowledge. A lexical knowledge base is not intended to serve any one application, but to be a general repository of knowledge about lexical concepts and their relationships. Thus natural-language parsers, generators, or other intelligent processors must be able to interface to the knowledge base and are expected to only extract those portions of its knowledge which the | 1984 | 94 |
MACHINE-READABLE DICTIONARIES, LEXICAL DATA BASES AND THE LEXICAL SYSTEM Nicoletta Calzolari Dipartimento di Linguistica, Università di Pisa, Pisa, ITALY Istituto di Linguistica Computazionale del CNR, Pisa, ITALY I should like to raise some issues concerning the conversion from a traditional Machine-Readable Dictionary (MRD) on tape to a Lexical Data Base (LDB), in order to highlight some important consequences for computational linguistics which can follow from this transition. The enormous potentialities of the information implicitly stored in a standard printed dictionary or a MRD can only be evidenced and made explicit when the same data are given a new logical structure in a data base model, and exploited by appropriate software. A suitable use of DB methodology is a good starting point to discover several kinds of lexical, morphological, syntactic, and semantic relationships between lexical entries which would | 1984 | 95 |
THE DICTIONARY SERVER Martin Kay Intelligent Systems Laboratory Xerox Palo Alto Research Center 3333 Coyote Hill Road Palo Alto, California 94304, USA The term "machine-readable dictionary" can clearly be taken in two ways. In its stronger and better established interpretation, it presumably refers to dictionaries intended for machine consumption and use, as in a language processing system of some sort. In a somewhat weaker sense, it has to do with dictionaries intended for human consumption, but through the intermediary of a machine. Ideally, of course, the two enterprises would be conflated, material from a single basic store of lexical information being furnished to different clients in different forms. Such a conflation would, if it could be accomplished, be beneficial to all parties. Certainly human users could benefit from some of the processes that the machine-oriented information in a mach | 1984 | 96 |
HOW TO MISREAD A DICTIONARY George A. Miller Department of Psychology Princeton University Princeton, NJ 08544 A dictionary is an extremely valuable reference book, but its familiarity tends to blind adults to the high level of intelligence required to read it. This aspect becomes apparent, however, when school children are observed learning dictionary skills. Children do not respect syntactic category and often wander into the wrong lexical entry, apparently in search of something they can understand. They also find it difficult to match the sense of a polysemous word to the context of a particular passage. And they repeatedly assume that some familiar word in a definition can be substituted for the unfamiliar word it defines. The lexical information that children need can be provided better by a computer than by a book, but that remedy will not be realized if automated dictionaries are merely machine-readable versi | 1984 | 97 |
MACHINE-READABLE COMPONENTS IN A VARIETY OF INFORMATION-SYSTEM APPLICATIONS Howard R. Webber Reference Publishing Division Houghton-Mifflin Company 2 Park Street Boston, MA 02108 Components of the machine-readable dictionary can be applied in a number of information systems. The most direct applications of the kind are in wordprocessing or in "writing-support" systems built on a wordprocessing base. However, because a central function of any dictionary is in fact data verification, there are other proposed applications in communications and data storage and retrieval systems. Moreover, the complete interrelational electronic dictionary is in some sense the model of the language; and there are, accordingly, additional implications for language-based information search and retrieval. In regard to wordprocessing, the electronic lexicon can serve as the base for spelling verification ( | 1984 | 98 |
TRANSFER IN A MULTILINGUAL MT SYSTEM Steven Krauwer & Louis des Tombe Institute for General Linguistics Utrecht State University Trans 14, 3512 JK Utrecht, The Netherlands ABSTRACT In the context of transfer-based MT systems, the nature of the intermediate representations, and particularly their 'depth', is an important question. This paper explores the notions of 'independence of languages' and 'simple transfer', and provides some principles that may enable linguists to study this problem in a systematic way. I. Background This paper is relevant for a class of MT systems with the following characteristics: (i) The translation process is broken down into three stages: source text analysis, transfer, and target text synthesis. (ii) The text that serves as unit of translation is at least a sentence. (iii) The system is multilingual, at least in principle. These characteristics are not uncommon; however, | 1984 | 99 |
SEMANTICS OF TEMPORAL QUERIES AND TEMPORAL DATA Carole D. Hafner College of Computer Science Northeastern University Boston, MA 02115 Abstract This paper analyzes the requirements for adding a temporal reasoning component to a natural language database query system, and proposes a computational model that satisfies those requirements. A preliminary implementation in Prolog is used to generate examples of the model's capabilities. I. Introduction A major area of weakness in natural language (NL) interfaces is the lack of ability to understand and answer queries involving time. Although there is growing recognition of the importance of temporal semantics among database theoreticians (see, for example, Codd [6], Anderson [2], Clifford and Warren [4], Snodgrass [15]), existing database management systems offer little or no support for the manipulation of time data. Furthermore (as we will see in the next Section), there | 1985 | 1 |
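A minimal illustration of the kind of temporal reasoning such a query component needs is an interval classifier. This is a sketch in Python rather than the paper's Prolog; the closed-interval encoding, the relation names (loosely in the spirit of interval relations), and the drug/fever example are all assumptions for illustration.

```python
def temporal_relation(i, j):
    """Coarsely classify two closed time intervals (start, end)."""
    (a1, a2), (b1, b2) = i, j
    if a2 < b1:
        return "before"
    if b2 < a1:
        return "after"
    if (a1, a2) == (b1, b2):
        return "equal"
    if b1 <= a1 and a2 <= b2:
        return "during"
    if a1 <= b1 and b2 <= a2:
        return "contains"
    return "overlaps"

# A query like "Was the patient on drug X during the fever episode?"
# reduces to a check such as:
on_drug = (3, 9)
fever = (4, 7)
answer = temporal_relation(fever, on_drug)  # "during"
```

A full treatment distinguishes more relations (meets, starts, finishes, etc.) and handles open-ended intervals, which is where database support for time data becomes essential.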
THE COMPUTATIONAL DIFFICULTY OF ID/LP PARSING G. Edward Barton, Jr. M.I.T. Artificial Intelligence Laboratory 545 Technology Square Cambridge, MA 02139 ABSTRACT Modern linguistic theory attributes surface complexity to interacting subsystems of constraints. For instance, the ID/LP grammar formalism separates constraints on immediate dominance from those on linear order. Shieber's (1983) ID/LP parsing algorithm shows how to use ID and LP constraints directly in language processing, without expanding them into an intermediate "object grammar." However, Shieber's purported O(|G|^2 n^3) runtime bound underestimates the difficulty of ID/LP parsing. ID/LP parsing is actually NP-complete, and the worst-case runtime of Shieber's algorithm is actually exponential in grammar size. The growth of parser data structures causes the difficulty. Some computational and linguistic implications follow: in particula | 1985 | 10 |
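The "object grammar" expansion that Shieber's algorithm avoids is easy to sketch: one ID rule with unordered daughters expands into every ordering consistent with the LP statements. The symbols and rules below are hypothetical, and this sketch only shows the expansion blowup, not the NP-completeness argument itself.

```python
from itertools import permutations

def expand_id_rule(lhs, daughters, lp):
    """Expand one ID rule (unordered daughters) into all ordered
    context-free rules consistent with the LP precedence pairs.
    Assumes the daughter symbols are distinct."""
    rules = []
    for order in permutations(daughters):
        if all(order.index(a) < order.index(b)
               for (a, b) in lp if a in order and b in order):
            rules.append((lhs, order))
    return rules

# With no LP statements, n unordered daughters yield n! ordered rules --
# the combinatorial growth that makes naive expansion impractical.
free = expand_id_rule("S", ("A", "B", "C"), [])                    # 6 rules
constrained = expand_id_rule("S", ("A", "B", "C"), [("A", "B")])   # 3 rules
```

Parsing directly from the ID and LP constraints sidesteps this expansion, but, as the abstract notes, the difficulty then resurfaces in the growth of the parser's own data structures.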
SOME COMPUTATIONAL PROPERTIES OF TREE ADJOINING GRAMMARS* K. Vijay-Shanker and Aravind K. Joshi Department of Computer and Information Science Room 288 Moore School/D2 University of Pennsylvania Philadelphia, PA 19104 ABSTRACT Tree Adjoining Grammar (TAG) is a formalism for natural language grammars. Some of the basic notions of TAG's were introduced in [Joshi, Levy, and Takahashi 1975] and by [Joshi, 1983]. A detailed investigation of the linguistic relevance of TAG's has been carried out in [Kroch and Joshi, 1985]. In this paper, we will describe some new results for TAG's, especially in the following areas: (1) parsing complexity of TAG's, (2) some closure results for TAG's, and (3) the relationship to Head grammars. 1. INTRODUCTION Investigation of constrained grammatical systems from the point of view of their linguistic adequacy and their computational tractability has been a major concern of compu | 1985 | 11 |
TAG's as a Grammatical Formalism for Generation David D. McDonald and James D. Pustejovsky Department of Computer and Information Science University of Massachusetts at Amherst 1. Abstract Tree Adjoining Grammars, or "TAG's" (Joshi, Levy & Takahashi 1975; Joshi 1983; Kroch & Joshi 1985), were developed as an alternative to the standard syntactic formalisms that are used in theoretical studies of language. They are attractive because they may provide just the aspects of context sensitive expressive power that actually appear in human languages while otherwise remaining context free. This paper describes how we have applied the theory of Tree Adjoining Grammars to natural language generation. We have been attracted to TAG's because their central operation--the extension of an "initial" phrase structure tree through the inclusion, at specifically constrained locations, of one or more "auxiliary" | 1985 | 12 |
MODULAR LOGIC GRAMMARS Michael C. McCord IBM Thomas J. Watson Research Center P. O. Box 218 Yorktown Heights, NY 10598 ABSTRACT This report describes a logic grammar formalism, Modular Logic Grammars, exhibiting a high degree of modularity between syntax and semantics. There is a syntax rule compiler (compiling into Prolog) which takes care of the building of analysis structures and the interface to a clearly separated semantic interpretation component dealing with scoping and the construction of logical forms. The whole system can work in either a one-pass mode or a two-pass mode. In the one-pass mode, logical forms are built directly during parsing through interleaved calls to semantics, added automatically by the rule compiler. In the two-pass mode, syntactic analysis trees are built automatically in the first pass, and then gi | 1985 | 13 |
New Approaches to Parsing Conjunctions Using Prolog Sandiway Fong Robert C. Berwick Artificial Intelligence Laboratory M.I.T. 545 Technology Square Cambridge MA 02139, U.S.A. Abstract Conjunctions are particularly difficult to parse in traditional, phrase-based grammars. This paper shows how a different representation, not based on tree structures, markedly improves the parsing problem for conjunctions. It modifies the union of phrase marker model proposed by Goodall [1981], where conjunction is considered as the linearization of a three-dimensional union of a non-tree based phrase marker representation. A PROLOG grammar for conjunctions using this new approach is given. It is far simpler and more transparent than a recent phrase-based extraposition parser for conjunctions by Dahl and McCord [1984]. Unlike the Dahl and McCord or ATN SYSCONJ approach, no special trail machinery is needed for conjunction, b | 1985 | 14 |
Parsing with Discontinuous Constituents Mark Johnson Center for the Study of Language and Information and Department of Linguistics, Stanford University. Abstract By generalizing the notion of location of a constituent to allow discontinuous locations, one can describe the discontinuous constituents of non-configurational languages. These discontinuous constituents can be described by a variant of definite clause grammars, and these grammars can be used in conjunction with a proof procedure to create a parser for non-configurational languages. 1. Introduction In this paper I discuss the problem of describing and computationally processing the discontinuous constituents of non-configurational languages. In these languages the grammatical function that an argument plays in the clause or the sentence is not determined by its position or configuration in the sentence, as it is in configurational languages like | 1985 | 15 |
Structure Sharing with Binary Trees Lauri Karttunen SRI International, CSLI Stanford Martin Kay Xerox PARC, CSLI Stanford Many current interfaces for natural language represent syntactic and semantic information in the form of directed graphs where attributes correspond to vectors and values to nodes. There is a simple correspondence between such graphs and the matrix notation linguists traditionally use for feature sets. [Figure 1: a feature graph and the equivalent matrix notation, e.g. cat: np, agr: [number: sg, person: 3rd]] The standard operation for working with such graphs is unification. The unification operation succeeds only on a pair of compatible graphs, and its result is a graph containing the information in both contributors. When a parser applies a syntactic rule, it unifies selected features of input constituents to check constraints and to build a representation for the output constit | 1985 | 16 |
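The unification operation described above can be sketched over Python dicts standing in for feature matrices. This is a naive, copying version written only to show the operation's behavior; the paper's contribution is precisely a binary-tree representation that shares structure instead of copying. The feature names (`cat`, `agr`, etc.) echo the figure and are otherwise arbitrary.

```python
def unify(f, g):
    """Unify two feature structures: dicts for complex values, atoms at leaves.
    Returns a new combined structure, or None if the two are incompatible.
    Note: this naive version copies; structure sharing avoids that cost."""
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for attr, val in g.items():
            if attr in out:
                sub = unify(out[attr], val)
                if sub is None:
                    return None
                out[attr] = sub
            else:
                out[attr] = val
        return out
    return f if f == g else None

np_fs = {"cat": "np", "agr": {"number": "sg"}}
subj_req = {"agr": {"number": "sg", "person": "3rd"}}
merged = unify(np_fs, subj_req)   # combines the information of both
clash = unify({"agr": {"number": "sg"}}, {"agr": {"number": "pl"}})  # None
```

The result contains everything both contributors asserted, and unification fails exactly when two atomic values disagree, which is how a parser rejects constraint violations.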
A Structure-Sharing Representation for Unification-Based Grammar Formalisms Fernando C. N. Pereira Artificial Intelligence Center, SRI International and Center for the Study of Language and Information Stanford University Abstract This paper describes a structure-sharing method for the representation of complex phrase types in a parser for PATR-II, a unification-based grammar formalism. In parsers for unification-based grammar formalisms, complex phrase types are derived by incremental refinement of the phrase types defined in grammar rules and lexical entries. In a naive implementation, a new phrase type is built by copying older ones and then combining the copies according to the constraints stated in a grammar rule. The structure-sharing method was designed to eliminate most such copying; indeed, practical tests suggest that the use of this technique reduces parsing time by as much as 60%. The pre | 1985 | 17 |
Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms Stuart M. Shieber Artificial Intelligence Center SRI International and Center for the Study of Language and Information Stanford University Abstract 1 Introduction Grammar formalisms based on the encoding of grammatical information in complex-valued feature systems enjoy some currency both in linguistics and natural-language-processing research. Such formalisms can be thought of by analogy to context-free grammars as generalizing the notion of nonterminal symbol from a finite domain of atomic elements to a possibly infinite domain of directed graph structures of a certain sort. Unfortunately, in moving to an infinite nonterminal domain, standard methods of parsing may no longer be applicable to the formalism. Typically, the problem manifests itself as gross inefficiency or even nontermination of the algorithms. In this paper, w | 1985 | 18 |
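The restriction idea can be pictured as projecting a feature structure onto the finitely many paths named by a restrictor, so that operations like top-down prediction range over a finite domain again. This is a simplified sketch; the nested-dict restrictor format and the `cat`/`head` feature names are assumptions, not Shieber's actual notation.

```python
def restrict(fs, restrictor):
    """Project a feature structure onto a restrictor: a nested dict whose
    keys name the attributes to keep (None means keep that subvalue whole).
    Atomic values pass through unchanged."""
    if not isinstance(fs, dict):
        return fs
    out = {}
    for attr, sub in restrictor.items():
        if attr in fs:
            out[attr] = fs[attr] if sub is None else restrict(fs[attr], sub)
    return out

fs = {"cat": "vp",
      "head": {"form": "finite", "trans": {"pred": "eat", "arg1": "x"}}}
small = restrict(fs, {"cat": None, "head": {"form": None}})
# 'small' drops the unboundedly growing 'trans' detail, keeping only a
# finite predictive core -- the information a predictor actually needs.
```

Because the restricted images come from a finite set, a chart parser's prediction step can be guaranteed to terminate even when the full feature structures it summarizes cannot be bounded.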
Semantic Caseframe Parsing and Syntactic Generality Philip J. Hayes, Peggy M. Andersen, and Scott Safier Carnegie Group Incorporated Commerce Court at Station Square Pittsburgh, PA 15219 USA Abstract We have implemented a restricted domain parser called Plume™. Building on previous work at Carnegie-Mellon University, e.g. [4, 5, 8], Plume's approach to parsing is based on semantic caseframe instantiation. This has the advantages of efficiency on grammatical input, and robustness in the face of ungrammatical input. While Plume is well adapted to simple declarative and imperative utterances, it handles passives, relative clauses and interrogatives in an ad hoc manner, leading to patchy syntactic coverage. This paper outlines Plume as it currently exists and describes our | 1985 | 19 |
TEMPORAL INFERENCES IN MEDICAL TEXTS Klaus K. Obermeier Battelle's Columbus Laboratories 505 King Avenue Columbus, Ohio 43201-2693, USA ABSTRACT The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural language [NL] processing system [NLPS] which could be used for different domains. The first objective is to provide a theory for processing temporal information contained in a well-structured, technical text. The second objective is to argue for a knowledge-based approach to NLP in which the parsing procedure is driven by extralinguistic knowledge. The resulting computer program incorporates enough domain-specific and general knowledge so that the parsing procedure can be driven by the knowledge base of the prog | 1985 | 2 |
MOVEMENT IN ACTIVE PRODUCTION NETWORKS Mark A. Jones Alan S. Driscoll AT&T Bell Laboratories Murray Hill, New Jersey 07974 ABSTRACT We describe how movement is handled in a class of computational devices called active production networks (APNs). The APN model is a parallel, activation-based framework that has been applied to other aspects of natural language processing. The model is briefly defined, the notation and mechanism for movement is explained, and then several examples are given which illustrate how various conditions on movement can naturally be explained in terms of limitations of the APN device. I. INTRODUCTION Movement is an important phenomenon in natural languages. Recently, proposals such as Gazdar's derived rules (Gazdar, 1982) and Pereira's extraposition grammars (Pereira, 1983) have attempted to find minimal extensions to the context-free framework that would allow the description of movement. In this p | 1985 | 20 |
PARSING HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR Derek Proudian and Carl Pollard Hewlett-Packard Laboratories 1501 Page Mill Road Palo Alto, CA 94303, USA Abstract The Head-driven Phrase Structure Grammar project (HPSG) is an English language database query system under development at Hewlett-Packard Laboratories. Unlike other product-oriented efforts in the natural language understanding field, the HPSG system was designed and implemented by linguists on the basis of recent theoretical developments. But, unlike other implementations of linguistic theories, this system is not a toy, as it deals with a variety of practical problems not covered in the theoretical literature. We believe that this makes the HPSG system unique in its combination of linguistic theory and practical application. The HPSG system differs from its predecessor GPSG, reported on at the 1982 ACL meeting (Gawron et al. [1982]), in four sign | 1985 | 21 |
A Computational Semantics for Natural Language Lewis G. Creary and Carl J. Pollard Hewlett-Packard Laboratories 1501 Page Mill Road Palo Alto, CA 94304, USA Abstract In the new Head-driven Phrase Structure Grammar (HPSG) language processing system that is currently under development at Hewlett-Packard Laboratories, the Montagovian semantics of the earlier GPSG system (see [Gawron et al. 1982]) is replaced by a radically different approach with a number of distinct advantages. In place of the lambda calculus and standard first-order logic, our medium of conceptual representation is a new logical formalism called NFLT (Neo-Fregean Language of Thought); compositional semantics is effected, not by schematic lambda expressions, but by LISP procedures that operate on NFLT expressions to produce new expressions. NFLT has a number of features that make it well-suited for natural language translations, including predicat | 1985 | 22 |
ANALYSIS OF CONJUNCTIONS IN A RULE-BASED PARSER Leonardo Lesmo and Pietro Torasso Dipartimento di Informatica - Università di Torino Via Valperga Caluso 37 - 10125 Torino (ITALY) ABSTRACT The aim of the present paper is to show how a rule-based parser for the Italian language has been extended to analyze sentences involving conjunctions. The most noticeable fact is the ease with which the required modifications fit in the previous parser structure. In particular, the rules written for analyzing simple sentences (without conjunctions) needed only small changes. On the contrary, more substantial changes were made to the exception-handling rules (called "natural changes") that are used to restructure the tree in case of failure of a syntactic hypothesis. The parser described in the present work constitutes the syntactic component of the FIDO system (a Flexible Interface for Database Operations), an interface allow | 1985 | 23 |
A PRAGMATICS-BASED APPROACH TO UNDERSTANDING INTERSENTENTIAL ELLIPSIS Sandra Carberry Department of Computer and Information Science University of Delaware Newark, Delaware 19716, USA ABSTRACT Intersentential elliptical utterances occur frequently in information-seeking dialogues. This paper presents a pragmatics-based framework for interpreting such utterances, including identification of the speaker's discourse goal in employing the fragment. We claim that the advantage of this approach is its reliance upon pragmatic information, including discourse content and conversational goals, rather than upon precise representations of the preceding utterance alone. INTRODUCTION The fragmentary utterances that are common in communication between humans also occur in man-machine communication. Humans persist in using abbreviated stateme | 1985 | 24 |