From Seed AI to Technological Singularity via Recursively Self-Improving Software

Roman V. Yampolskiy
arXiv:1502.06512 [cs.AI], published 2015-02-23
Source: http://arxiv.org/pdf/1502.06512

Abstract: Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, review the relevant literature, analyze limits on computation restricting recursive self-improvement and introduce RSI Convergence Theory which aims to predict general behavior of RSI systems. Finally, we address security implications from self-improving intelligent software.
“consciousness,” “intuition” and “intelligence” itself. It is hard to say how close we are to this threshold, but once it is crossed the world will not be the same” [3]. Von Neumann: “There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself” [4]. Similar types of arguments are still being made today by modern researchers, and the area of RSI research continues to grow in popularity [5-7], though some [8] have argued that the recursive self-improvement process requires hyperhuman capability to “get the ball rolling”, a kind of “Catch 22”.
Intuitively most of us have some understanding of what it means for a software system to be self-improving; however, we believe it is important to precisely define such notions and to systematically investigate different types of self-improving software. First we need to define the notion of improvement. We can talk about improved efficiency – solving the same problems faster or with less need for computational resources (such as memory). We can also measure improvement in error rates or in finding closer approximations to optimal solutions, as long as our algorithm is functionally equivalent from generation to generation. Efficiency improvements can be classified as either producing a linear improvement, as between different algorithms in the same complexity class (e.g. NP), or producing a fundamental improvement, as between different complexity classes (e.g. P vs NP) [9]. It is also very important to remember that complexity class notation (Big-O) may hide significant constant factors which, while ignorable theoretically, may change the relative order of efficiency in practical applications of algorithms. This type of analysis works well for algorithms designed to accomplish a particular task, but doesn’t work well for general purpose intelligent software, as an improvement in one area may go together with decreased performance in another domain. This makes it hard to claim that the updated version of the software is indeed an improvement. Mainly, the major improvement we want from self-improving intelligent software is a higher degree of intelligence, which can be approximated via machine-friendly IQ tests [10] with a significant g-factor correlation.
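To make the point about constant factors hidden by Big-O notation concrete, the following sketch (an illustration with hypothetical cost models, not taken from the paper) compares an asymptotically worse routine with a small constant against an asymptotically better routine with a large constant; which one is "more efficient" in practice depends on the input size.

# Illustrative only: hypothetical cost models showing how constant factors
# hidden by Big-O notation can change which algorithm is faster in practice.
import math

def cost_quadratic(n, c=1):
    # e.g., a simple insertion-sort-like routine: ~c * n^2 basic operations
    return c * n * n

def cost_nlogn(n, c=50):
    # e.g., an asymptotically better routine with a large constant factor
    return c * n * math.log2(max(n, 2))

for n in (10, 100, 1_000, 10_000):
    q, m = cost_quadratic(n), cost_nlogn(n)
    winner = "n^2" if q < m else "n log n"
    print(f"n={n:>6}: n^2 cost={q:>12.0f}  n log n cost={m:>12.0f}  faster: {winner}")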
A particular type of self-improvement known as Recursive Self-Improvement (RSI) is fundamentally different, as it requires that the system not only get better with time, but that it gets better at getting better. A truly RSI system is theorized not to be subject to diminishing returns, but would instead continue making significant improvements, and such improvements would become more substantial with time. Consequently, an RSI system would be capable of open-ended self-improvement. As a result, it is possible that, unlike with standard self-improvement, in RSI systems most of the source code comprising the system will be replaced by different code from generation to generation. This brings up the question of what “self” refers to in this context. If it is not the source code comprising the agent, then what is it? Perhaps we can redefine RSI as Recursive Source-code Improvement (RSI) to avoid dealing with this philosophical problem. Instead of trying to improve itself, such a system is trying to create a different system which is better at achieving the same goals as the original system. In the most general case it is trying to create an even smarter artificial intelligence. In this paper we will attempt to define the notion of self-improvement in software, survey possible types of self-improvement, analyze behavior of self-improving software, and discuss limits to such processes.
2. Taxonomy of Types of Self-Improvement
Self-improving software can be classified by the degree of self-modification it entails. In general we distinguish three levels of improvement – modification, improvement (weak self-improvement) and recursive improvement (strong self-improvement). Self-Modification does not produce improvement and is typically employed for code obfuscation to protect software from being reverse engineered or to disguise self-replicating computer viruses from detection software. While a number of obfuscation techniques are known to exist [11], e.g. self-modifying code [12], polymorphic code, metamorphic code, and diversion code [13], none of them are intended to modify the underlying algorithm. The sole purpose of such approaches is to modify how the source code looks to those trying to understand the software in question and what it does [14].
Self-Improvement or Self-adaptation [15] is a desirable property of many types of software products [16] and typically allows for some optimization or customization of the product to the environment and users it is deployed with. Common examples of such software include evolutionary algorithms such as Genetic Algorithms [17-22] or Genetic Programming, which optimize software parameters with respect to some well understood fitness function and perhaps work over some highly modular programming language to assure that all modifications result in software which can be compiled and evaluated. The system may try to optimize its components by creating internal tournaments between candidate solutions. Omohundro proposed the concept of efficiency drives in self-improving software [23]. Because of one such drive, the balance drive, self-improving systems will tend to balance the allocation of resources between their different subsystems. If the system is not balanced, overall performance of the system could be increased by shifting resources from subsystems with small marginal improvement to those with larger marginal increase [23]. While performance of the software may be improved as a result of such optimization, the overall algorithm is unlikely to be modified into a fundamentally more capable one.
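A minimal evolutionary-algorithm sketch of the kind of parameter-level self-adaptation described above: a population of parameter vectors is tuned against a fixed fitness function using internal tournaments between candidate solutions and mutation (crossover omitted for brevity). All names, values and settings here are hypothetical illustrations, not something prescribed by the paper.

# Minimal sketch: tournament selection + mutation over a parameter vector,
# with a known fitness function. Purely illustrative.
import random

TARGET = [0.3, -1.2, 4.0, 0.7]          # hypothetical "ideal" parameters

def fitness(params):
    # Higher is better: negative squared error to the target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def tournament(pop, k=3):
    # Internal tournament between k randomly drawn candidate solutions.
    return max(random.sample(pop, k), key=fitness)

def mutate(params, rate=0.1):
    # Small Gaussian perturbation of each parameter.
    return [p + random.gauss(0, rate) for p in params]

pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(30)]
for generation in range(100):
    pop = [mutate(tournament(pop)) for _ in range(len(pop))]

best = max(pop, key=fitness)
print("best parameters found:", [round(p, 2) for p in best])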
Additionally, the law of diminishing returns quickly sets in, and after an initial significant improvement phase, characterized by discovery of “low-hanging fruit”, future improvements are likely to be less frequent and less significant, producing a bell curve of valuable changes. Metareasoning, metalearning, learning to learn, and lifelong learning are terms which are often used in the machine learning literature to indicate self-modifying learning algorithms or the process of selecting an algorithm which will perform best in a particular problem domain [24]. Yudkowsky calls such a process non-recursive optimization – a situation in which one component of the system does the optimization and another component is getting optimized [25]. In the field of complex dynamic systems, aka chaos theory, positive feedback systems are well known to always end up in what is known as an attractor – a region within the system’s state space that the system can’t escape from [26]. A good example of such attractor convergence is the process of Metacompilation or Supercompilation [27], in which a program designed to take source code written by a human programmer and optimize it for speed is applied to its own source code. It will likely produce a more efficient compiler on the first application, perhaps by 20%, on the second application by 3%, and after a few more recursive iterations converge to a fixed point of zero improvement [26].
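This convergence to a fixed point can be illustrated with a toy model of diminishing returns; the decay factor below is a hypothetical illustration chosen to match the 20%-then-3% example above, not a measured value.

# Toy model: each self-application yields a smaller relative speedup than the
# last, so the cumulative improvement approaches a fixed point.
speedup, decay = 0.20, 0.15   # 20% gain on the first pass, shrinking each pass
total = 1.0
for i in range(1, 11):
    total *= (1 + speedup)
    print(f"pass {i}: gain {speedup:.2%}, cumulative speedup {total - 1:.2%}")
    speedup *= decay          # diminishing returns: the next pass helps far less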
Recursive Self-Improvement is the only type of improvement which has the potential to completely replace the original algorithm with a completely different approach and, more importantly, to do so multiple times. At each stage newly created software should be better at optimizing future versions of the software compared to the original algorithm. As of the time of this writing it is a purely theoretical concept with no working RSI software known to exist. However, as many have predicted that such software might become a reality in the 21st century, it is important to provide some analysis of properties such software would exhibit. Self-modifying and self-improving software systems are already well understood and are quite common. Consequently, we will concentrate exclusively on RSI systems. In practice performance of almost any system can be trivially improved by allocation of additional computational resources such as more memory, higher sensor resolution, a faster processor or greater network bandwidth for access to information. This linear scaling doesn’t fit the definition of recursive improvement, as the system doesn’t become better at improving itself. To fit the definition the system would have to engineer a faster type of memory, not just purchase more memory units of the type it already has access to. In general hardware improvements are likely to speed up the system, while software improvements (novel algorithms) are necessary for achieving meta-improvements.
It is believed that AI systems will have a number of advantages over human programmers, making it possible for them to succeed where we have so far failed. Such advantages include [28]: longer work spans (no breaks, sleep, vacation, etc.), omniscience (expert level knowledge in all fields of science, absorbed knowledge of all published works), superior computational resources (brain vs processor, human memory vs RAM), communication speed (neurons vs wires), increased serial depth (ability to perform sequential operations in excess of the roughly 100 a human brain can manage), duplicability (intelligent software can be instantaneously copied), editability (source code, unlike DNA, can be quickly modified), goal coordination (AI copies can work towards a common goal without much overhead), improved rationality (AIs are likely to be free from human cognitive biases) [29], new sensory modalities (native sensory hardware for source code), blending of deliberative and automatic processes (management of computational resources over multiple tasks), introspective perception and manipulation (ability to analyze low level hardware, e.g. individual neurons), addition of hardware (ability to add new memory, sensors, etc.), advanced communication (ability to share underlying cognitive representations for memories and skills) [30].
Chalmers [31] uses logic and mathematical induction to show that if an AI0 system is capable of producing an only slightly more capable AI1 system, generalization of that process leads to superintelligent performance in AIn after n generations. He articulates that his proof assumes that the proportionality thesis, which states that increases in intelligence lead to proportionate increases in the capacity to design future generations of AIs, is true. Nivel et al. proposed a formalization of RSI systems as autocatalytic sets – collections of entities comprised of elements, each of which can be created by other elements in the set, making it possible for the set to self-maintain and update itself. They also list properties of a system which make it purposeful, goal-oriented and self-organizing, particularly: reflectivity – the ability to analyze and rewrite its own structure; autonomy – being free from influence by the system’s original designers (bounded autonomy is a property of a system with elements which are not subject to self-modification); endogeny – an autocatalytic ability [32]. Nivel and Thorisson also attempt to operationalize autonomy via the concept of self-programming, which they insist has to be done in an experimental way instead of a theoretical way (via proofs of correctness) since it is the only tractable approach [33].
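Written schematically, the induction Chalmers describes looks roughly as follows; this is a paraphrase under the proportionality assumption, using notation introduced here rather than Chalmers' own.

% Schematic restatement of the induction (paraphrase; \delta > 0 is a
% hypothetical fixed proportional gain per generation, I(.) an intelligence measure).
\begin{align*}
  \text{Proportionality assumption: } & I(\mathrm{AI}_{n+1}) \ge (1+\delta)\, I(\mathrm{AI}_n) \quad \text{for all } n \ge 0,\\
  \text{By induction: } & I(\mathrm{AI}_n) \ge (1+\delta)^n\, I(\mathrm{AI}_0) \to \infty \text{ as } n \to \infty.
\end{align*}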
Yudkowsky writes prolifically about recursive self-improving processes and suggests that introduction of certain concepts might be beneficial to the discussion; specifically he proposes the use of the terms Cascades, Cycles and Insight, which he defines as: Cascades – when one development leads to another; Cycles – a repeatable cascade in which one optimization leads to another which in turn benefits the original optimization; Insight – new information which greatly increases one’s optimization ability [34]. Yudkowsky also suggests that the goodness and number of opportunities in the space of solutions be known as the Optimization Slope, while optimization resources and optimization efficiency refer to how many computational resources an agent has access to and how efficiently the agent utilizes said resources. An agent engaging in an optimization process and able to hit non-trivial targets in a large search space [35] is described as having significant optimization power [25].
RSI software could be classified based on the number of improvements it is capable of achieving. The most trivial case is a system capable of undergoing a single fundamental improvement. The hope is that truly RSI software will be capable of many such improvements, but the question remains open regarding the possibility of an infinite number of recursive improvements. It is possible that some upper bound on improvements exists, limiting any RSI software to a finite number of desirable and significant rewrites. Critics explain the failure of scientists, to date, to achieve a sustained RSI process by saying that RSI researchers have fallen victim to the bootstrap fallacy [36].
Another axis on which RSI systems can be classified has to do with how improvements are discovered. Two fundamentally different approaches are understood to exist. The first one is a brute force based approach [37] which utilizes Levin (Universal [38]) Search [39]. The idea is to consider all possible strings of source code up to some size limit and to select the one which can be proven to provide improvements. While theoretically optimal and guaranteed to find a superior solution if one exists, this method is not computationally feasible in practice. Some variants of this approach to self-improvement, known as Gödel Machines [40-45], the Optimal Ordered Problem Solver (OOPS) [46] and Incremental Self-Improvers [47, 48], have been thoroughly analyzed by Schmidhuber and his co-authors. The second approach assumes that the system has a certain level of scientific competence and uses it to engineer and test its own replacement. Whether a system of any capability can intentionally invent a more capable, and so a more complex, system remains the fundamental open problem of RSI research.
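A deliberately tiny illustration of the brute-force flavor of the first approach (a toy over a two-symbol "instruction set" of my own invention, not Levin Search itself): enumerate every candidate program up to a length limit and keep the best one under an evaluation function. Even this toy enumerates roughly 2^(n+1) candidates up to length n, which is why the approach does not scale to real source code.

# Exhaustive search over all candidate "programs" up to a length limit.
# The instruction set and scoring function are hypothetical toys.
from itertools import product

ALPHABET = "+-"                 # toy "instructions": increment or decrement
MAX_LEN = 12

def run(program, start=0):
    # Interpret a program as a sequence of +1 / -1 steps applied to a counter.
    value = start
    for op in program:
        value += 1 if op == "+" else -1
    return value

def score(program, target=7):
    # Improvement criterion: closer to the target is better, shorter is better.
    return -abs(run(program) - target) - 0.01 * len(program)

best = max(
    ("".join(p) for n in range(MAX_LEN + 1) for p in product(ALPHABET, repeat=n)),
    key=score,
)
print("best program found:", repr(best), "output:", run(best))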
Finally, we can consider a hybrid RSI system which includes both an artificially intelligent program and a human scientist. Mixed human-AI teams have been very successful in many domains such as chess or theorem proving. It would be surprising if having a combination of natural and artificial intelligence did not provide an advantage in designing new AI systems or enhancing biological intelligence. We are currently experiencing a limited version of this approach, with human computer scientists developing progressively better versions of AI software (while utilizing continuously improving software tools), but since the scientists themselves remain unenhanced we can’t really talk about self-improvement. This type of RSI can be classified as Indirect recursive improvement, as opposed to Direct RSI in which the system itself is responsible for all modifications. Other types of Indirect RSI may be based on collaboration between multiple artificial systems instead of AI and human teams [49]. In addition to classification with respect to types of RSI we can also evaluate systems as to certain binary properties. For example: we may be interested only in systems which are guaranteed not to decrease in intelligence, even temporarily, during the improvement process. This may not be possible if the intelligence design landscape contains local maxima.
Another property of any RSI system we are interested in understanding better is the necessity of unchanging source code segments. In other words, must an RSI system be able to modify any part of its source code, or must certain portions of the system (encoded goals, verification module) remain unchanged from generation to generation? Such portions would be akin to ultra-conserved elements or conserved sequences of DNA [50, 51] found among multiple related species. This question is particularly important for goal preservation in self-improving intelligent software, as we want to make sure that future generations of the system are motivated to work on the same problem [31]. As AI goes through the RSI process and becomes smarter and more rational it is likely to engage in a de-biasing process, removing any constraints we programmed into it [8]. Ideally we would want to be able to prove that even after recursive self-improvement our algorithm maintains the same goals as the original. Proofs of safety or correctness for the algorithm only apply to particular source code and would need to be rewritten and re-proven if the code is modified, which in RSI software happens many times. But we suspect that re-proving slightly modified code may be easier compared to having to prove safety of a completely novel piece of code.
We are also interested in understanding if the RSI process can take place in an isolated (leakproofed [52]) system or if interaction with the external environment, the internet, people, or other AI agents is necessary. Perhaps access to external information can be used to mediate the speed of the RSI process. This also has significant implications for the safety mechanisms we can employ while experimenting with early RSI systems [53-61]. Finally, it needs to be investigated if the whole RSI process can be paused at any point and for any specific duration of time in order to limit any negative impact from a potential intelligence explosion. Ideally we would like to be able to program our Seed AI to RSI until it reaches a certain level of intelligence, then pause and wait for further instructions.
On the Limits of Recursively Self-Improving Artificially Intelligent Systems
The mere possibility of recursively self-improving software remains unproven. In this section we present a number of arguments against such a phenomenon.
First of all, any implemented software system relies on hardware for its memory, communication and information processing needs, even if we assume that it will take a non-Von Neumann (quantum) architecture to run such software. This creates strict theoretical limits to computation, which, despite hardware advances predicted by Moore’s law, will not be overcome by any future hardware paradigm. Bremermann [62], Bekenstein [63], Lloyd [64], Sandberg [65], Aaronson [66], Shannon [67], Krauss [68], and many others have investigated ultimate limits to computation in terms of speed, communication and energy consumption with respect to such factors as the speed of light, quantum noise, and the gravitational constant. Some research has also been done on establishing ultimate limits for enhancing the human brain’s intelligence [69]. While their specific numerical findings are outside of the scope of this work, one thing is indisputable: there are ultimate physical limits to computation. Since more complex systems have a greater number of components and require more matter, even if individual parts are designed at the nanoscale, we can conclude that just as matter and energy are directly related [70], and matter and information (“it from bit”) [71], so are matter and intelligence. While we are obviously far away from hitting any limits imposed by the availability of matter in the universe for construction of our supercomputers, this is a definite theoretical upper limit on achievable intelligence, even under the multiverse hypothesis.
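As one worked example of such a limit (my own illustrative arithmetic, not a figure taken from the paper), Bremermann's bound caps the computation rate of a self-contained system at roughly m·c²/h operations per second for mass m.

# Illustrative arithmetic for Bremermann's limit: maximum computation rate
# of a self-contained system of mass m, roughly m * c^2 / h operations per second.
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck's constant, J*s

def bremermann_ops_per_second(mass_kg):
    return mass_kg * c**2 / h

print(f"1 kg computer:       {bremermann_ops_per_second(1):.2e} ops/s")
print(f"Earth-mass computer: {bremermann_ops_per_second(5.97e24):.2e} ops/s")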
In addition to limitations endemic to hardware, software-related limitations may present even bigger obstacles for RSI systems. Intelligence is not measured as a standalone value but with respect to the problems it allows one to solve. For many problems, such as playing checkers [72], it is possible to completely solve the problem (provide an optimal solution after considering all possible options), after which no additional performance improvement would be possible [73]. Other problems are known to be unsolvable regardless of the level of intelligence applied to them [74]. Assuming separation of complexity classes (such as P vs NP) holds [9], it becomes obvious that certain classes of problems will always remain only approximately solvable, and any improvements in solutions will come from additional hardware resources, not higher intelligence.
Wiedermann argues that cognitive systems form an infinite hierarchy and that, from a computational point of view, human-level intelligence is upper-bounded by the Σ2 class of the Arithmetic Hierarchy [75]. Because many real world problems are computationally infeasible for any non-trivial inputs, even an AI which achieves human level performance is unlikely to progress towards higher levels of the cognitive hierarchy. So while machines with super-Turing computational power are theoretically possible, in practice they are not implementable, as the non-computable information needed for their function is just that – not computable. Consequently Wiedermann states that while machines of the future will be able to solve problems solvable by humans much faster and more reliably, they will still be limited by computational limits found in upper levels of the Arithmetic Hierarchy [75, 76].
Mahoney attempts to formalize what it means for a program to have a goal G and to self-improve with respect to being able to reach said goal under a constraint of time, t [77]. Mahoney defines a goal as a function G: N → R mapping natural numbers N to real numbers R. Given a universal Turing machine L, Mahoney defines P(t) to mean the positive natural number encoded by the output of the program P with input t running on L after t time steps, or 0 if P has not halted after t steps. Mahoney’s representation says that P has goal G at time t if and only if there exists t' > t such that G(P(t')) > G(P(t)) and for all t' > t, G(P(t')) ≥ G(P(t)). If P has a goal G, then G(P(t)) is a
monotonically increasing function of t with no maximum for t > C. Q improves on P with respect to goal G if and only if all of the following conditions are true: P and Q have goal G; there exists t such that G(Q(t)) > G(P(t)); and there is no t with t' > t such that G(P(t')) > G(Q(t')) [77]. Mahoney then defines an improving sequence with respect to G as an infinite sequence of programs P1, P2, P3, … such that for all i > 0, Pi+1 improves on Pi with respect to G. Without loss of generality Mahoney extends the definition to include the value -1 as an acceptable input, so that P(-1) outputs appropriately encoded software. He finally defines P1 as an RSI program with respect to G iff Pi(-1) = Pi+1 for all i > 0 and the sequence Pi, i = 1, 2, 3, … is an improving sequence with respect to goal G [77]. Mahoney also analyzes the complexity of RSI software and presents a proof demonstrating that the algorithmic complexity of Pn (the nth iteration of an RSI program) is not greater than O(log n), implying a very limited amount of knowledge gain would be possible in practice despite theoretical possibility of such improvement.
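Collecting the pieces of Mahoney's formalization into one display (my own grouping and notation, paraphrasing the definitions above):

% Mahoney's notions restated (paraphrase of the definitions above).
% P(t) is the output of program P on input t after t steps on machine L
% (0 if P has not halted).
\begin{align*}
  & G : \mathbb{N} \to \mathbb{R} && \text{(a goal)}\\
  & P \text{ has goal } G \text{ at } t \iff \exists t' > t:\ G(P(t')) > G(P(t)) \ \wedge\ \forall t' > t:\ G(P(t')) \ge G(P(t))\\
  & P_1, P_2, P_3, \dots \text{ is an improving sequence w.r.t. } G \iff \forall i > 0:\ P_{i+1} \text{ improves on } P_i\\
  & P_1 \text{ is RSI w.r.t. } G \iff \forall i > 0:\ P_i(-1) = P_{i+1} \ \text{ and } (P_i)_{i \ge 1} \text{ is an improving sequence w.r.t. } G
\end{align*}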
Other limitations may be unique to the proposed self-improvement approach. For example, Levin-type search through the program space will face problems related to Rice’s theorem [78], which states that for an arbitrarily chosen program it is impossible to test whether it has any non-trivial property, such as being very intelligent. This testing is of course necessary to evaluate redesigned code. Also, universal search over the space of mind designs will not be computationally possible due to the No Free Lunch theorems [79], as we have no information to reduce the size of the search space [80]. Other difficulties related to testing remain even if we are not talking about arbitrarily chosen programs but about those we have designed with a specific goal in mind and which consequently avoid problems with Rice’s theorem. One such difficulty is determining if something is an improvement. We can call this obstacle the “multidimensionality of optimization”. No change is strictly an improvement; it is always a tradeoff between gain in some areas and loss in others. For example, how do we evaluate and compare two software systems one of which is better at chess and the other at poker? Assuming the goal is increased intelligence over the distribution of all potential
environments, the system would have to figure out how to test intelligence at levels above its own, a problem which remains unsolved. In general the science of testing for intelligence above the level achievable by naturally occurring humans (IQ < 200) is in its infancy. De Garis raises the problem of evaluating the quality of changes made to the top level structures responsible for determining the RSI’s functioning, structures which are not judged by any higher level modules and so present a fundamental difficulty in assessing their performance [81].
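A small illustration of the multidimensionality obstacle (the two candidate successors and their scores are hypothetical, my own example): when candidates are scored on several incomparable skills, neither may dominate the other, so “is this an improvement?” has no unambiguous answer.

# Pareto-dominance check over hypothetical skill scores for two candidate successors.
candidates = {
    "successor_A": {"chess": 0.9, "poker": 0.4},
    "successor_B": {"chess": 0.5, "poker": 0.8},
}

def dominates(x, y):
    # x dominates y if it is at least as good on every skill and strictly better on one.
    return all(x[k] >= y[k] for k in y) and any(x[k] > y[k] for k in y)

a, b = candidates["successor_A"], candidates["successor_B"]
print("A dominates B:", dominates(a, b))   # False
print("B dominates A:", dominates(b, a))   # False -> neither is strictly an improvement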
Other obstacles to RSI have also been suggested in the literature. Löb’s theorem states that a mathematical system can’t assert its own soundness without becoming inconsistent [82], meaning a sufficiently expressive formal system can’t know that everything it proves to be true is actually so [82]. Such an ability is necessary to verify that modified versions of the program are still consistent with its original goal of getting smarter. Another obstacle, called the procrastination paradox, will also prevent the system from making modifications to its code, since the system will find itself in a state in which a change made immediately is as desirable and likely as the same change made later [83, 84]. Since postponing making the change carries no negative implications and may actually be safe, this may result in an infinite delay of actual implementation of provably desirable changes.
Similarly, Bolander raises some problems inherent in logical reasoning with self-reference, namely self-contradictory reasoning, exemplified by the Knower Paradox of the form “This sentence is false” [85]. Orseau and Ring introduce what they call the “Simpleton Gambit”, a situation in which an agent will choose to modify itself towards its own detriment if presented with a high enough reward to do so [86]. Yampolskiy reviews a number of related problems in rational self-improving optimizers above a certain capacity and concludes that, despite the opinion of many, such machines will choose to “wirehead” [87]. Chalmers [31] suggests a number of previously unanalyzed potential obstacles on the path to RSI software, with the Correlation obstacle being one of them. He describes it as the possibility that no interesting properties we would like to amplify will correspond to the ability to design better software. Yampolskiy is also concerned with accumulation of errors in software undergoing an RSI process, which is conceptually similar to accumulation of mutations in the evolutionary process experienced by biological agents. Errors (bugs) which are not detrimental to the system’s performance are very hard to detect and may accumulate from generation to generation, building on each other until a critical mass of such errors leads to erroneous functioning of the system, mistakes in evaluating the quality of future generations of the software, or a complete breakdown [88].
The self-reference aspect of a self-improving system itself also presents some serious challenges. It may be the case that the minimum complexity necessary to become RSI is higher than what the system itself is able to understand. We see such situations frequently at lower levels of intelligence; for example, a squirrel doesn’t have the mental capacity to understand how a squirrel’s brain operates. Paradoxically, as the system becomes more complex it may take exponentially more intelligence to understand itself, and so a system which starts out capable of complete self-analysis may lose that ability as it self-improves. Informally we can call it the Munchausen obstacle, the inability of a system to lift itself by its own bootstraps. An additional problem may be that the system in question is computationally irreducible [89] and so can’t simulate running its own source code. An agent cannot predict what it will think without thinking it first. A system needs 100% of its memory to model itself, which leaves no memory to record the output of the simulation. Any external memory to which the
system may write becomes part of the system and so also has to be modeled. Essentially the system will face an infinite regress of self-models from which it can’t escape. Alternatively, if we take a physics perspective on the issue, we can see intelligence as a computational resource (along with time and space), and so producing more of it will not be possible for the same reason we can’t make a perpetual motion device: it would violate fundamental laws of nature related to the conservation of energy. Similarly it has been argued that a Turing Machine cannot output a machine of greater algorithmic complexity [90].
We can even attempt to formally prove the impossibility of an intentional RSI process via proof by contradiction: let’s define RSI R1 as a program not capable of algorithmically solving a problem of difficulty X, say Xi. If R1 modifies its source code after which it is capable of solving Xi, it violates our original assumption that R1 is not capable of solving Xi, since any introduced modification could be a part of the solution process; so we have a contradiction of our original assumption, and R1 can’t produce any modification which would allow it to solve Xi, which was to be shown. Informally, if an agent can produce a more intelligent agent it would already be as capable as that new agent. Even some of our intuitive assumptions about RSI are incorrect. It seems that it should be easier to solve a problem if we already have a solution to a smaller instance of such a problem [91], but in a formalized world of problems belonging to the same complexity class the re-optimization problem is proven to be as difficult as optimization itself [92-95].
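The contradiction argument can be laid out schematically as follows (my own restatement of the informal argument above, not a formal proof):

% Schematic restatement of the impossibility argument above.
\begin{align*}
  &\text{Assume: } \neg\,\mathrm{Solves}(R_1, X_i).\\
  &\text{Suppose } R_1 \text{ computes a self-modification yielding } R_1' \text{ with } \mathrm{Solves}(R_1', X_i).\\
  &\text{Then ``apply the modification, then run } R_1'\text{'' is an algorithm executed by } R_1 \text{ that solves } X_i,\\
  &\text{i.e. } \mathrm{Solves}(R_1, X_i)\text{, contradicting the assumption.}
\end{align*}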
Analysis
A number of fundamental problems remain open in the area of RSI. We still don’t know the minimum intelligence necessary for commencing the RSI process, but we can speculate that it would be on par with human intelligence, which we associate with universal or general intelligence [96], though in principle a sub-human level system capable of self-improvement can’t be excluded [31]. One may argue that even human level capability is not enough, because we already have programmers (people, or their intellectual equivalents formalized as functions [97] or Human Oracles [98, 99]) who have access to their own source code (DNA), but who fail to understand how DNA (nature) works to create their intelligence. This doesn’t even include the additional complexity in trying to improve on existing DNA code or the complicating factors presented by the impact of the learning environment (nurture) on the development of human intelligence. Worse yet, it is not obvious how far above human ability an AI needs to be to begin overcoming the “complexity barrier” associated with self-understanding. Today’s AIs can do many things people are incapable of doing, but are not yet capable of RSI behavior.
We also don’t know the minimum size of the program (called Seed AI [100]) necessary to get the ball rolling. Perhaps if it turns out that such a “minimal genome” is very small, a brute force [37] approach might succeed in discovering it. We can assume that our Seed AI is the smartest Artificial General Intelligence known to exist [101] in the world, as otherwise we can simply designate the other AI as the seed. It is also not obvious how the source code size of an RSI will change as it goes through the improvement process; in other words, what is the relationship between intelligence and the minimum source code size necessary to support it? In order to answer such questions it may be useful to further formalize the notion of RSI, perhaps by representing such software as a Turing Machine [102] with particular inputs and outputs. If that could be successfully accomplished, a new area of computational complexity analysis may become possible in which we study algorithms with dynamically changing complexity (Big-O) and address questions about how many code modifications are necessary to achieve a certain level of performance from the algorithm.
This of course raises the question of the speed of the RSI process: are we expecting it to take seconds, minutes, days, weeks, years or more (hard takeoff vs. soft takeoff) for the RSI system to begin hitting the limits of what is possible with respect to the physical limits of computation [103]? Even in suitably constructed hardware (a human baby) it takes decades of data input (education) to get to human-level performance (an adult). It is also not obvious if the rate of change in intelligence would be higher for a more advanced RSI, because it is more capable, or for a “newbie” RSI, because it has more low-hanging fruit to collect. We would have to figure out if we are looking at improvement in absolute terms or as a percentage of the system’s current intelligence score.
Yudkowsky attempts to analyze the most promising returns on cognitive reinvestment as he considers increasing the size, speed or ability of RSI systems. He also looks at different possible rates of return and arrives at three progressively steeper trajectories for RSI improvement which he terms “fizzle”, “combust” and “explode”, aka “AI go FOOM” [25]. Hall [8] similarly analyzes rates of return on cognitive investment and derives a curve equivalent to double the Moore’s Law rate. Hall also suggests that an AI would be better off trading money it earns performing useful work for improved hardware or software rather than attempting to directly improve itself, since it would not be competitive against more powerful optimization agents such as the Intel Corporation. Fascinatingly, by analyzing properties which correlate with intelligence, Chalmers [31] is able to generalize self-improvement optimization to properties other than intelligence. We can agree that RSI software as we describe it in this work is getting better at designing software, not just at being generally intelligent. Similarly, other properties associated with design capacity can be increased along with the capacity to design software, for example the capacity to design systems with a sense of humor, and so in addition to an intelligence explosion we may face an explosion of funniness.
RSI Convergence Theorem

A simple thought experiment regarding RSI allows us to arrive at a fascinating hypothesis. Regardless of the specifics behind the design of the Seed AI used to start an RSI process, all such systems attempting to achieve superintelligence will converge to the same software architecture. We will call this intuition RSI Convergence Theory. There are a number of ways in which it could happen, depending on the assumptions we make, but in all cases the outcome is the same: a practically computable agent similar to AIXI (which is an incomputable but superintelligent agent [104]).
If an upper limit to intelligence exists, multiple systems will eventually reach that level, probably by taking different trajectories, and in order to increase their speed will attempt to minimize the size of their source code, eventually discovering the smallest program with that level of ability. It may even be the case that sufficiently smart RSIs will be able to immediately deduce such an architecture from basic knowledge of physics and Kolmogorov Complexity [105]. If, however, intelligence turns out to be an unbounded property, RSIs may not converge. They will also not converge if many programs with maximum intellectual ability exist and all have the same Kolmogorov complexity, or if they are not general intelligences and are optimized for different environments. It is also likely that in the space of minds [35] the stable attractors include sub-human and super-human intelligences, with precisely human-level intelligence being a rare special case [30].
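Kolmogorov complexity itself is incomputable, but compressed size gives a crude, computable upper bound on description length. The sketch below uses zlib compression to compare two functionally equivalent toy programs by that proxy; the example sources and the idea of preferring the shorter description are assumptions made only to illustrate how "minimizing source code size" could be operationalized, not a method from this paper.

```python
# Compressed size as a rough, computable upper bound on description length
# (true Kolmogorov complexity is incomputable).  The two "functionally
# equivalent" sources below are toy examples chosen for illustration only.
import zlib

def description_length(source: str) -> int:
    """Length in bytes of the zlib-compressed source; an upper-bound proxy."""
    return len(zlib.compress(source.encode("utf-8"), level=9))

verbose_version = """
def factorial(n):
    result = 1
    counter = 1
    while counter <= n:
        result = result * counter
        counter = counter + 1
    return result
"""

terse_version = """
from math import prod
def factorial(n):
    return prod(range(1, n + 1))
"""

if __name__ == "__main__":
    for name, src in [("verbose", verbose_version), ("terse", terse_version)]:
        print(f"{name:8s} raw={len(src):4d} bytes  compressed={description_length(src):4d} bytes")
```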
If correct, predictions of RSI convergence imply the creation of what Bostrom calls a Singleton [106], a single decision-making agent in control of everything. Further speculation can lead us to conclude that converged RSI systems separated by space and time, even at cosmological scales, can engage in acausal cooperation [107, 108], since they will realize that they are the same agent with the same architecture and so are capable of running perfect simulations of each other's future behavior. Such a realization may allow converged superintelligences with completely different origins to implicitly cooperate, particularly on meta-tasks. One may also argue that humanity itself is on a path which converges to the same point in the space of all possible intelligences (but is undergoing a much slower RSI process). Consequently, by observing a converged RSI's architecture and properties, humanity could determine its ultimate destiny, its purpose in life, its Coherent Extrapolated Volition (CEV) [109].
Conclusions

Recursively Self-Improving software is the ultimate form of artificial life, and the creation of life remains one of the great unsolved mysteries in science. More precisely, the problem of creating RSI software is really the challenge of creating a program capable of writing other programs [110], and so is an AI-Complete problem, as has been demonstrated by Yampolskiy [98, 99]. AI-Complete problems are by definition the most difficult problems faced by AI researchers, and it is likely that RSI source code will be so complex that it would be difficult or impossible to fully analyze [49]. Also, the problem is likely to be NP-Complete, as even simple metareasoning and metalearning [111] problems have been shown by Conitzer and Sandholm to belong to that class. In particular, they proved that allocation of deliberation time across anytime algorithms running on different problem instances is NP-Complete, and that a complementary problem of dynamically allocating information-gathering resources by an agent across multiple actions is NP-Hard, even if evaluating each particular action is computationally simple. Finally, they showed that the problem of deliberately choosing a limited number of deliberation or information-gathering actions to disambiguate the state of the world is PSPACE-hard in general [112].
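To give a feel for the kind of metareasoning problem Conitzer and Sandholm analyze, the sketch below exhaustively searches all ways to split a small discrete deliberation budget among several anytime solvers, each described by an assumed performance profile (solution quality as a function of steps spent). The profiles, the budget and the brute-force enumeration are illustrative stand-ins; the enumeration's exponential growth in the number of instances hints at why the general allocation problem is hard.

```python
# Naive allocation of a discrete deliberation budget across anytime algorithms.
# Each performance profile maps "steps spent" -> solution value; the profiles
# and budget are made-up numbers used only to illustrate the problem shape.
from itertools import product

# Hypothetical performance profiles (index = deliberation steps, value = quality).
profiles = {
    "instance_A": [0.0, 0.50, 0.70, 0.80, 0.85, 0.88],
    "instance_B": [0.0, 0.20, 0.45, 0.65, 0.80, 0.90],
    "instance_C": [0.0, 0.60, 0.65, 0.68, 0.70, 0.71],
}
BUDGET = 5  # total deliberation steps to distribute

def best_allocation(profiles, budget):
    """Exhaustively search all splits of the budget (exponential in #instances)."""
    names = list(profiles)
    best_value, best_split = float("-inf"), None
    for split in product(range(budget + 1), repeat=len(names)):
        if sum(split) != budget:
            continue
        value = sum(profiles[n][s] for n, s in zip(names, split))
        if value > best_value:
            best_value, best_split = value, dict(zip(names, split))
    return best_value, best_split

if __name__ == "__main__":
    value, split = best_allocation(profiles, BUDGET)
    print(f"best total value {value:.2f} with allocation {split}")
```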
Intelligence is a computational resource and, as with other physical quantities (mass, speed), its behavior at high extremes (IQ > 200) is probably not going to be a simple linear extrapolation of what we are used to. It may also be subject to fundamental limits, such as the speed of light limit on travel, or to limits we do not yet understand or know about (unknown unknowns). In this work we reviewed a number of computational upper limits toward which any successful RSI system will asymptotically strive to grow; we can note that despite the existence of such upper bounds we are currently probably very far from reaching them, and so still have plenty of room for improvement at the top. Consequently, any RSI achieving such a significant level of enhancement, despite not constituting an infinite process, will still appear to produce superintelligence with respect to our current state [113]. The debate regarding the possibility of RSI will continue. Some will argue that while it is possible to increase processor speed, the amount of available memory or sensor resolution, the fundamental ability to solve problems cannot be intentionally and continuously improved by the system itself. Additionally, critics may suggest that intelligence is upper bounded and only differs in speed and the amount of information available to process [114]. In fact, they can point to such a maximum intelligence, albeit a theoretical one, known as AIXI, an agent which given infinite computational resources will make purely rational decisions in any situation.
A resource-dependent system undergoing an RSI intelligence explosion can expand from its origin at the speed of light, harvesting matter and converting the universe around it into a computronium sphere [114]. It is also very likely to try to condense all the matter it obtains into a super-dense unit of constant volume (reminiscent of the original physical singularity which produced the Big Bang; see Omega Point [115]) in order to reduce internal computational costs, which grow with the overall size of the system and at cosmic scales are very significant even at the speed of light. A side effect of this process would be the emergence of an event horizon impenetrable to scientific theories about the future states of the underlying RSI system. In some limited way we already see this condensation process in the attempts of computer chip manufacturers to pack more and more transistors into exponentially more powerful chips of the same or smaller size. And so, from the Big Bang explosion of the original cosmological singularity to the Technological Singularity, in which intelligence explodes and attempts to amass all the matter in the universe back into a point of infinite density (Big Crunch), which in turn causes the next (perhaps well controlled) Big Bang, the history of the universe continues and relies on intelligence as its driver and shaper (similar ideas are becoming popular in cosmology [116-118]).
Others will say that since intelligence is the ability to find patterns in data, intelligence has no upper bound, as the number of variables comprising a pattern can always be greater and so present a more complex problem against which intelligence can be measured. It is easy to see that even if the problems we encounter in daily life do have some maximum difficulty, this is certainly not the case for theoretical examples we can derive from pure mathematics. It seems likely that the debate will not be settled until a fundamental insurmountable obstacle to the RSI process is found or a proof by existence is demonstrated. Of course the question of permitting machines to undergo RSI transformation, if it is possible, is a separate and equally challenging problem.

References
1. Turing, A., Computing Machinery and Intelligence. Mind, 1950. 59(236): p. 433-460.
2. Good, I.J., Speculations Concerning the First Ultraintelligent Machine. Advances in Computers, 1966. 6: p. 31-88.
3. Minsky, M., Artificial Intelligence. Scientific American, 1966. 215(3): p. 257.
4. Burks, A.W. and J. Von Neumann, Theory of self-reproducing automata. 1966: University of Illinois Press.
5. Pearce, D., The biointelligence explosion, in Singularity Hypotheses. 2012, Springer. p. 199-238.
6. Omohundro, S.M., The Nature of Self-Improving Artificial Intelligence, in Singularity Summit. 2007: San Francisco, CA.
7. Waser, M.R., Bootstrapping a Structured Self-Improving & Safe Autopoietic Self, in Annual International Conference on Biologically Inspired Cognitive Architectures. November 9, 2014: Boston, Massachusetts.
8. Hall, J.S., Engineering utopia. Frontiers in Artificial Intelligence and Applications, 2008. 171: p. 460.
9. Yampolskiy, R.V., Construction of an NP Problem with an Exponential Lower Bound. Arxiv preprint arXiv:1111.0305, 2011.
10. Yonck, R., Toward a Standard Metric of Machine Intelligence. World Future Review, 2012. 4(2): p. 61-70.
11. Mavrogiannopoulos, N., N. Kisserli, and B. Preneel, A taxonomy of self-modifying code for obfuscation. Computers & Security, 2011. 30(8): p. 679-691.
12. Anckaert, B., M. Madou, and K. De Bosschere, A model for self-modifying code, in Information Hiding. 2007, Springer. p. 232-248.
13. Petrean, L., Polymorphic and Metamorphic Code Applications in Portable Executable Files Protection. Acta Technica Napocensis, 2010. 51(1).
14. Bonfante, G., J.-Y. Marion, and D. Reynaud-Plantey, A computability perspective on self-modifying programs, in Seventh IEEE International Conference on Software Engineering and Formal Methods. 2009, IEEE. p. 231-239.
15. Cheng, B.H., et al., Software engineering for self-adaptive systems: A research roadmap, in Software engineering for self-adaptive systems. 2009, Springer. p. 1-26.
16. Ailon, N., et al., Self-improving algorithms. SIAM Journal on Computing, 2011. 40(2): p. 350-375.
17. Yampolskiy, R., et al., Printer Model Integrating Genetic Algorithm for Improvement of Halftone Patterns, in Western New York Image Processing Workshop (WNYIPW) - IEEE Signal Processing Society. 2004: Rochester, NY.
18. Yampolskiy, R.V., L. Ashby, and L. Hassan, Wisdom of Artificial Crowds—A Metaheuristic Algorithm for Optimization. Journal of Intelligent Learning Systems and Applications, 2012. 4(2): p. 98-107.
19. Yampolskiy, R.V. and E.L.B. Ahmed, Wisdom of artificial crowds algorithm for solving NP-hard problems. International Journal of Bio-Inspired Computation (IJBIC), 2012. 3(6): p. 358-369.
20. Ashby, L.H. and R.V. Yampolskiy, Genetic Algorithm and Wisdom of Artificial Crowds Algorithm Applied to Light Up, in 16th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games. July 27-30, 2011: Louisville, KY, USA. p. 27-32.
21. Khalifa, A.B. and R.V. Yampolskiy, GA with Wisdom of Artificial Crowds for Solving Mastermind Satisfiability Problem. International Journal of Intelligent Games & Simulation, 2011. 6(2): p. 6.
22. Port, A.C. and R.V. Yampolskiy, Using a GA and Wisdom of Artificial Crowds to solve solitaire battleship puzzles, in 17th International Conference on Computer Games (CGAMES). 2012, IEEE: Louisville, KY. p. 25-29.
23. Omohundro, S., Rational artificial intelligence for the greater good, in Singularity Hypotheses. 2012, Springer. p. 161-179.
24. Anderson, M.L. and T. Oates, A review of recent research in metareasoning and metalearning. AI Magazine, 2007. 28(1): p. 12.
25. Yudkowsky, E., Intelligence Explosion Microeconomics, in MIRI Technical Report. 2013: Available at: www.intelligence.org/files/IEM.pdf.
26. Heylighen, F., Brain in a vat cannot break out. Journal of Consciousness Studies, 2012. 19(1-2): p. 1-2.
27. Turchin, V.F., The concept of a supercompiler. ACM Transactions on Programming Languages and Systems (TOPLAS), 1986. 8(3): p. 292-325.
28. Sotala, K., Advantages of artificial intelligences, uploads, and digital minds. International Journal of Machine Consciousness, 2012. 4(01): p. 275-291.
29. Muehlhauser, L. and A. Salamon, Intelligence explosion: Evidence and import, in Singularity Hypotheses. 2012, Springer. p. 15-42.
30. Yudkowsky, E., Levels of organization in general intelligence, in Artificial general intelligence. 2007, Springer. p. 389-501.
31. Chalmers, D., The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 2010. 17: p. 7-65.
32. Nivel, E., et al., Bounded Recursive Self-Improvement. arXiv preprint arXiv:1312.6764, 2013.
33. Nivel, E. and K.R. Thórisson, Self-programming: Operationalizing autonomy, in Proceedings of the 2nd Conf. on Artificial General Intelligence. 2008.
34. Yudkowsky, E. and R. Hanson, The Hanson-Yudkowsky AI-foom debate, in MIRI Technical Report. 2008: Available at: http://intelligence.org/files/AIFoomDebate.pdf.
35. Yampolskiy, R.V., The Universe of Minds. arXiv preprint arXiv:1410.0369, 2014.
36. Hall, J.S., Self-improving AI: An analysis. Minds and Machines, 2007. 17(3): p. 249-259.
37. Yampolskiy, R.V., Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence. Journal of Discrete Mathematical Sciences & Cryptography, 2013. 16(4-5): p. 259-277.
38. Gagliolo, M., Universal search. Scholarpedia, 2007. 2(11): 2575.
39. Levin, L., Universal Search Problems. Problems of Information Transmission, 1973. 9(3): p. 265-266.
40. Steunebrink, B. and J. Schmidhuber, A Family of Gödel Machine Implementations, in Fourth Conference on Artificial General Intelligence (AGI-11). 2011: Mountain View, California.
41. Schmidhuber, J., Gödel machines: Fully self-referential optimal universal self-improvers, in Artificial general intelligence. 2007, Springer. p. 199-226.
42. Schmidhuber, J., Gödel machines: Towards a technical justification of consciousness, in Adaptive Agents and Multi-Agent Systems II. 2005, Springer. p. 1-23.
43. Schmidhuber, J., Gödel machines: Self-referential universal problem solvers making provably optimal self-improvements, in Artificial General Intelligence. 2005.
44. Schmidhuber, J., Ultimate cognition à la Gödel. Cognitive Computation, 2009. 1(2): p. 177-193.
45. Schmidhuber, J., Completely self-referential optimal reinforcement learners, in Artificial Neural Networks: Formal Models and Their Applications–ICANN. 2005, Springer. p. 223-233.
46. Schmidhuber, J., Optimal ordered problem solver. Machine Learning, 2004. 54(3): p. 211-254.
47. Schmidhuber, J., J. Zhao, and M. Wiering, Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 1997. 28(1): p. 105-130.
48. Schmidhuber, J., A general method for incremental self-improvement and multiagent learning. Evolutionary Computation: Theory and Applications, 1999: p. 81-123.
49. Leon, J. and A. Lori, Continuous self-evaluation for the self-improvement of software, in Self-Adaptive Software. 2001, Springer. p. 27-39.
50. Beck, M.B., E.C. Rouchka, and R.V. Yampolskiy, Finding Data in DNA: Computer Forensic Investigations of Living Organisms, in Digital Forensics and Cyber Crime. 2013, Springer Berlin Heidelberg. p. 204-219.
51. Beck, M. and R. Yampolskiy, DNA as a medium for hiding data. BMC Bioinformatics, 2012. 13(Suppl 12): p. A23.
52. Yampolskiy, R.V., Leakproofing Singularity - Artificial Intelligence Confinement Problem. Journal of Consciousness Studies (JCS), 2012. 19(1-2): p. 194-214.
53. Majot, A.M. and R.V. Yampolskiy, AI safety engineering through introduction of self-reference into felicific calculus via artificial pain and pleasure, in Ethics in Science, Technology and Engineering, IEEE International Symposium on. 2014.
54. Yampolskiy, R. and J. Fox, Safety Engineering for Artificial General Intelligence. Topoi, 2012: p. 1-10.
55. Yampolskiy, R.V. and J. Fox, Artificial General Intelligence and the Human Mental Model. Singularity Hypotheses: A Scientific and Philosophical Assessment, 2013: p. 129.
56. Sotala, K. and R.V. Yampolskiy, Responses to catastrophic AGI risk: A survey. Physica Scripta, 2015. 90(1).
57. Yampolskiy, R.V., What to Do with the Singularity Paradox?, in Philosophy and Theory of Artificial Intelligence. 2013, Springer Berlin Heidelberg. p. 397-413.
58. Yampolskiy, R. and M. Gavrilova, Artimetrics: Biometrics for Artificial Entities. IEEE Robotics and Automation Magazine (RAM), 2012. 19(4): p. 48-58.
59. Yampolskiy, R., et al., Experiments in Artimetrics: Avatar Face Recognition. Transactions on Computational Science XVI, 2012: p. 77-94.
60. Ali, N., D. Schaeffer, and R.V. Yampolskiy, Linguistic Profiling and Behavioral Drift in Chat Bots. Midwest Artificial Intelligence and Cognitive Science Conference, 2012: p. 27.
61. Gavrilova, M. and R. Yampolskiy, State-of-the-Art in Robot Authentication [From the Guest Editors]. Robotics & Automation Magazine, IEEE, 2010. 17(4): p. 23-24.
62. Bremermann, H.J., Quantum noise and information, in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. 1967.
63. Bekenstein, J.D., Information in the holographic universe. Scientific American, 2003. 289(2): p. 58-65.
64. Lloyd, S., Ultimate Physical Limits to Computation. Nature, 2000. 406: p. 1047-1054.
65. Sandberg, A., The physics of information processing superobjects: daily life among the Jupiter brains. Journal of Evolution and Technology, 1999. 5(1): p. 1-34.
66. Aaronson, S., Guest column: NP-complete problems and physical reality. ACM Sigact News, 2005. 36(1): p. 30-52.
67. Shannon, C.E., A Mathematical Theory of Communication. Bell Systems Technical Journal, July 1948. 27(3): p. 379-423.
68. Krauss, L.M. and G.D. Starkman, Universal limits on computation. arXiv preprint astro-ph/0404510, 2004.
69. Fox, D., The limits of intelligence. Scientific American, 2011. 305(1): p. 36-43.
70. Einstein, A., Does the inertia of a body depend upon its energy-content? Annalen der Physik, 1905. 18: p. 639-641.
71. Wheeler, J.A., Information, Physics, Quantum: The Search for Links. 1990: Physics Dept., University of Texas.
72. Schaeffer, J., et al., Checkers is Solved. Science, September 2007. 317(5844): p. 1518-1522.
73. Mahoney, M., Is there a model for RSI?, in SL4. June 20, 2008: Available at: http://www.sl4.org/archive/0806/19028.html.
74. Turing, A., On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 1936. 2(42): p. 230-265.
75. Wiedermann, J., A Computability Argument Against Superintelligence. Cognitive Computation, 2012. 4(3): p. 236-245.
76. Wiedermann, J., Is There Something Beyond AI? Frequently Emerging, but Seldom Answered Questions about Artificial Super-Intelligence, in Beyond AI: Artificial Dreams. 2012. p. 76.
77. Mahoney, M., A Model for Recursively Self Improving Programs. 2010: Available at: http://mattmahoney.net/rsi.pdf.
78. Rice, H.G., Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society, 1953. 74(2): p. 358-366.
79. Wolpert, D.H. and W.G. Macready, No free lunch theorems for optimization. Evolutionary Computation, IEEE Transactions on, 1997. 1(1): p. 67-82.
80. Melkikh, A.V., The No Free Lunch Theorem and hypothesis of instinctive animal behavior. Artificial Intelligence Research, 2014. 3(4): p. 43.
81. de Garis, H., The 21st Century Artilect: Moral Dilemmas Concerning the Ultra Intelligent Machine. Revue Internationale de Philosophie, 1990. 44(172): p. 131-138.
82. Yudkowsky, E. and M. Herreshoff, Tiling agents for self-modifying AI, and the Löbian obstacle, in MIRI Technical Report. 2013: Available at: http://intelligence.org/files/TilingAgentsDraft.pdf.
83. Fallenstein, B. and N. Soares, Problems of self-reference in self-improving space-time embedded intelligence, in MIRI Technical Report. 2014: Available at: https://intelligence.org/wp-content/uploads/2014/05/Fallenstein-Soares-Problems-of-self-reference-in-self-improving-space-time-embedded-intelligence.pdf.
84. Yudkowsky, E., The Procrastination Paradox (Brief technical note), in MIRI Technical Report. 2014: Available at: https://intelligence.org/files/ProcrastinationParadox.pdf.
85. Bolander, T., Logical theories for agent introspection. Computer Science, 2003. 70(5): p. 2002.
86. Orseau, L. and M. Ring, Self-modification and mortality in artificial agents, in 4th International Conference on Artificial General Intelligence. 2011: Mountain View, CA. p. 1-10.
87. Yampolskiy, R.V., Utility Function Security in Artificially Intelligent Agents. Journal of Experimental and Theoretical Artificial Intelligence (JETAI), 2014: p. 1-17.
88. Yampolskiy, R.V., Artificial intelligence safety engineering: Why machine ethics is a wrong approach, in Philosophy and Theory of Artificial Intelligence. 2013, Springer Berlin. p. 389-396.
89. Wolfram, S., A New Kind of Science. May 14, 2002: Wolfram Media, Inc.
90. Mahoney, M., Is there a model for RSI?, in SL4. June 15, 2008: Available at: http://www.sl4.org/archive/0806/18997.html.
91. Yampolskiy, R.V., Computing Partial Solutions to Difficult AI Problems. Midwest Artificial Intelligence and Cognitive Science Conference, 2012: p. 90.
92. Böckenhauer, H.-J., et al., On the hardness of reoptimization, in SOFSEM 2008: Theory and Practice of Computer Science. 2008, Springer. p. 50-65.
93. Ausiello, G., et al., Reoptimization of minimum and maximum traveling salesman's tours, in Algorithm Theory–SWAT. 2006, Springer. p. 196-207.
94. Archetti, C., L. Bertazzi, and M.G. Speranza, Reoptimizing the traveling salesman problem. Networks, 2003. 42(3): p. 154-159.
95. Ausiello, G., V. Bonifaci, and B. Escoffier, Complexity and approximation in reoptimization. 2011: Imperial College Press/World Scientific.
96. Loosemore, R. and B. Goertzel, Why an intelligence explosion is probable, in Singularity Hypotheses. 2012, Springer. p. 83-98.
97. Shahaf, D. and E. Amir, Towards a theory of AI completeness, in 8th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2007). March 26-28, 2007: CA.
98. Yampolskiy, R., Turing Test as a Defining Feature of AI-Completeness, in Artificial Intelligence, Evolutionary Computing and Metaheuristics, X.-S. Yang, Editor. 2013, Springer Berlin. p. 3-17.
99. Yampolskiy, R.V., AI-Complete, AI-Hard, or AI-Easy–Classification of Problems in AI. The 23rd Midwest Artificial Intelligence and Cognitive Science Conference, Cincinnati, OH, USA, 2012.
100. Yudkowsky, E.S., General Intelligence and Seed AI - Creating Complete Minds Capable of Open-Ended Self-Improvement. 2001: Available at: http://singinst.org/ourresearch/publications/GISAI/.
101. Yampolskiy, R.V., AI-Complete CAPTCHAs as Zero Knowledge Proofs of Access to an Artificially Intelligent System. ISRN Artificial Intelligence, 2011. 271878.
102. Turing, A.M., On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 1936. 42: p. 230-265.
103. Bostrom, N., Superintelligence: Paths, dangers, strategies. 2014: Oxford University Press.
104. Hutter, M., Universal algorithmic intelligence: A mathematical top-down approach, in Artificial general intelligence. 2007, Springer. p. 227-290.
105. Kolmogorov, A.N., Three Approaches to the Quantitative Definition of Information. Problems Inform. Transmission, 1965. 1(1): p. 1-7.
106. Bostrom, N., What is a Singleton? Linguistic and Philosophical Investigations, 2006. 5(2): p. 48-54.
107. Yudkowsky, E., Timeless decision theory. The Singularity Institute, San Francisco, 2010.
108. LessWrong, Acausal Trade: Available at: http://wiki.lesswrong.com/wiki/Acausal_trade, retrieved September 29, 2014.
109. Yudkowsky, E.S., Coherent Extrapolated Volition. May 2004, Singularity Institute for Artificial Intelligence: Available at: http://singinst.org/upload/CEV.html.
110. Hall, J.S., VARIAC: an Autogenous Cognitive Architecture. Frontiers in Artificial Intelligence and Applications, 2008. 171: p. 176.
111. Schaul, T. and J. Schmidhuber, Metalearning. Scholarpedia, 2010. 5(6): 4650.
112. Conitzer, V. and T. Sandholm, Definition and complexity of some basic metareasoning problems, in Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI). 2003: Acapulco, Mexico. p. 1099-1106.
113. Yudkowsky, E., Recursive Self-Improvement, in Less Wrong. December 1, 2008: Available at: http://lesswrong.com/lw/we/recursive_selfimprovement/, retrieved September 29, 2014.
114. Hutter, M., Can Intelligence Explode? Journal of Consciousness Studies, 2012. 19(1-2): p. 1-2.
115. Tipler, F.J., The physics of immortality: Modern cosmology, God, and the resurrection of the dead. 1994: Random House LLC.
116. Smart, J.M., Evo Devo Universe? A Framework for Speculations on Cosmic Culture, in Cosmos and Culture: Cultural Evolution in a Cosmic Context, S.J. Dick and M.L. Lupisella, Editors. 2009, Govt Printing Office, NASA SP-2009-4802: Washington, D.C. p. 201-295.
117. Stewart, J.E., The meaning of life in a developing universe. Foundations of Science, 2010. 15(4): p. 395-409.
118. Vidal, C., The Beginning and the End: The Meaning of Life in a Cosmological Perspective. arXiv preprint arXiv:1301.1648, 2013.
# Trust Region Policy Optimization

John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, Pieter Abbeel
University of California, Berkeley, Department of Electrical Engineering and Computer Sciences
[email protected], [email protected], [email protected], [email protected], [email protected]

# Abstract

We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
# Under review as a conference paper at ICLR 2016

TOWARDS AI-COMPLETE QUESTION ANSWERING: A SET OF PREREQUISITE TOY TASKS

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin & Tomas Mikolov
Facebook AI Research, 770 Broadway, New York, USA
{jase,abordes,spchopra,tmikolov,sashar,bartvm}@fb.com
Tetris is a classic benchmark problem for approximate dynamic programming (ADP) methods, yet stochastic optimization methods are difficult to beat on this task (Gabillon et al., 2013). For continuous control problems, methods like CMA have been successful at learning control policies for challenging tasks like locomotion when provided with hand-engineered policy classes with low-dimensional parameterizations (Wampler & Popović, 2009). The inability of ADP and gradient-based methods to consistently beat gradient-free random search is unsatisfying, since gradient-based optimization algorithms enjoy much better sample complexity guarantees than gradient-free methods (Nemirovski, 2005). Continuous gradient-based optimization has been very successful at learning function approximators for supervised learning tasks with huge numbers of parameters, and extending their success to reinforcement learning would allow for efficient training of complex and powerful policies.
# ABSTRACT

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
# 1 Introduction

Most algorithms for policy optimization can be classified into three broad categories: (1) policy iteration methods, which alternate between estimating the value function under the current policy and improving the policy (Bertsekas, 2005); (2) policy gradient methods, which use an estimator of the gradient of the expected return (total reward) obtained from sample trajectories (Peters & Schaal, 2008a) (and which, as we later discuss, have a close connection to policy iteration); and (3) derivative-free optimization methods, such as the cross-entropy method (CEM) and covariance matrix adaptation (CMA), which treat the return as a black box function to be optimized in terms of the policy parameters (Szita & Lőrincz, 2006).
# 1 INTRODUCTION

There is a rich history of the use of synthetic tasks in machine learning, from the XOR problem which helped motivate neural networks (Minsky & Papert, 1969; Rumelhart et al., 1985), to circle and ring datasets that helped motivate some of the most well-known clustering and semi-supervised learning algorithms (Ng et al., 2002; Zhu et al., 2003), Mackey-Glass equations for time series (Müller et al., 1997), and so on – in fact some of the well-known UCI datasets (Bache & Lichman, 2013) are synthetic as well (e.g., waveform). Recent work continues this trend. For example, in the area of developing learning algorithms with a memory component, synthetic datasets were used to help develop both the Neural Turing Machine of Graves et al. (2014) and the Memory Networks of Weston et al. (2014), the latter of which is relevant to this work.
In this article, we first prove that minimizing a certain surrogate objective function guarantees policy improvement with non-trivial step sizes. Then we make a series of approximations to the theoretically-justified algorithm, yielding a practical algorithm, which we call trust region policy optimization (TRPO). We describe two variants of this algorithm: first, the single-path method, which can be applied in the model-free setting; second, the vine method, which requires the system to be restored to particular states, which is typically only possible in simulation. These algorithms are scalable and can optimize nonlinear policies with tens of thousands of parameters, which have previously posed a major challenge for model-free policy search (Deisenroth et al., 2013). In our experiments, we show that the same TRPO methods can learn complex policies for swimming, hopping, and walking, as well as playing Atari games directly from raw images. General derivative-free stochastic optimization methods such as CEM and CMA are preferred on many problems, because they achieve good results while being simple to understand and implement. For example, while
1502.05477#3
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
3
One of the reasons for the interest in synthetic data is that it can be easier to develop new techniques using it. It is well known that working with large amounts of real data (“big data”) tends to lead researchers to simpler models as “simple models and a lot of data trump more elaborate models based on less data” (Halevy et al., 2009). For example, N-grams for language modeling work well relative to existing competing methods, but are far from being a model that truly understands text. As researchers we can become stuck in local minima in algorithm space; development of synthetic data is one way to try and break out of that. In this work we propose a framework and a set of synthetic tasks for the goal of helping to develop learning algorithms for text understanding and reasoning. While it is relatively difficult to automatically evaluate the performance of an agent in general dialogue – a long-term goal of AI – it is relatively easy to evaluate responses to input questions, i.e., the task of question answering (QA). Question answering is incredibly broad: more or less any task one can think of can be cast into this setup. This enables us to propose a wide-ranging set of different tasks that test different capabilities of learning algorithms, under a common framework.
1502.05698#3
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
4
# 2 Preliminaries Consider an infinite-horizon discounted Markov decision process (MDP), defined by the tuple (S, A, P, r, ρ0, γ), where S is a finite set of states, A is a finite set of actions, P : S × A × S → R is the transition probability distribution, r : S → R is the reward function, ρ0 : S → R is the distribution of the initial state s0, and γ ∈ (0, 1) is the discount factor. Let π denote a stochastic policy π : S × A → [0, 1], and let η(π) denote its expected discounted reward: $\eta(\pi) = \mathbb{E}_{s_0, a_0, \dots}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t)\right]$, where $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi(a_t \mid s_t)$, $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$.
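As a concrete illustration of the definition of η(π) above (not part of the paper), the expected discounted return can be estimated by Monte Carlo rollouts; the `env` object with `reset()`/`step(a)` and the `policy` callable are assumed interfaces for this sketch.

```python
import numpy as np

def estimate_eta(env, policy, gamma=0.99, n_episodes=100, horizon=1000):
    """Monte Carlo estimate of eta(pi) = E[ sum_t gamma^t * r(s_t) ]."""
    returns = []
    for _ in range(n_episodes):
        s = env.reset()                      # s_0 ~ rho_0
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)                    # a_t ~ pi(. | s_t)
            s, r, done = env.step(a)         # s_{t+1} ~ P(. | s_t, a_t)
            total += discount * r
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))
```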
1502.05477#4
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
4
Our tasks are built with a unified underlying simulation of a physical world, akin to a classic text adventure game (Montfort, 2005) whereby actors move around manipulating objects and interacting with each other. As the simulation runs, grounded text and question answer pairs are simultaneously generated. Our goal is to categorize different kinds of questions into skill sets, which become our tasks. Our hope is that the analysis of performance on these tasks will help expose weaknesses of current models and help motivate new algorithm designs that alleviate these weaknesses. We further envision this as a feedback loop where new tasks can then be designed in response, perhaps in an adversarial fashion, in order to break the new models. The tasks we design are detailed in Section 3, and the simulation used to generate them in Section 4. In Section 5 we give benchmark results of standard methods on our tasks, and analyse their successes and failures. In order to exemplify the kind of feedback loop between algorithm development and task development we envision, in Section A we propose a set of improvements to the recent Memory Network method, which has been shown to give promising performance in QA. We show our proposed approach does indeed give improved performance on some tasks, but is still unable to solve some of them, which we consider as open problems. # 2 RELATED WORK
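To illustrate the idea of grounding text and answers in a simulation (a toy sketch only, not the authors' simulator), a minimal world with actors and locations can emit statements and read the gold answer directly off the simulated state:

```python
import random

def generate_story(n_steps=4, seed=0):
    """Toy grounded simulation: actors move between locations; the true
    world state supplies the answer label for free."""
    rng = random.Random(seed)
    actors, locations = ["Mary", "John"], ["kitchen", "office", "garden"]
    state = {a: rng.choice(locations) for a in actors}
    statements = []
    for _ in range(n_steps):
        actor = rng.choice(actors)
        state[actor] = rng.choice(locations)
        statements.append(f"{actor} went to the {state[actor]}.")
    who = rng.choice(actors)
    question = f"Where is {who}?"
    answer = state[who]          # label read off the simulation state
    return statements, question, answer

story, q, a = generate_story()
print("\n".join(story)); print(q, "A:", a)
```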
1502.05698#4
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
5
$\tilde{\pi}(s) = \arg\max_a A_\pi(s, a)$, improves the policy if there is at least one state-action pair with a positive advantage value and nonzero state visitation probability; otherwise the algorithm has converged to the optimal policy. However, in the approximate setting, it will typically be unavoidable, due to estimation and approximation error, that there will be some states s for which the expected advantage is negative, that is, $\sum_a \tilde{\pi}(a|s) A_\pi(s, a) < 0$. The complex dependency of $\rho_{\tilde{\pi}}(s)$ on $\tilde{\pi}$ makes Equation (2) difficult to optimize directly. Instead, we introduce the following local approximation to η: $L_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a|s) A_\pi(s, a)$. (3) We will use the following standard definitions of the state-action value function $Q_\pi$, the value function $V_\pi$, and the advantage function $A_\pi$:
1502.05477#5
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
5
# 2 RELATED WORK Several projects targeting language understanding using QA-based strategies have recently emerged. Unlike tasks like dialogue or summarization, QA is easy to evaluate (especially in true/false or multiple choice scenarios) and hence makes it an appealing research avenue. The difficulty lies in the definition of questions: they must be unambiguously answerable by adult humans (or children), but still require some thinking. The Allen Institute for AI’s flagship project ARISTO1 is organized around a collection of QA tasks derived from increasingly difficult science exams, at the 4th, 8th, and 12th grade levels. Richardson et al. (2013) proposed the MCTest2 a set of 660 stories and associated questions intended for research on the machine comprehension of text. Each question requires the reader to understand different aspects of the story.
1502.05698#5
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
6
$Q_\pi(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \dots}\left[\sum_{l=0}^{\infty} \gamma^l r(s_{t+l})\right]$, $V_\pi(s_t) = \mathbb{E}_{a_t, s_{t+1}, \dots}\left[\sum_{l=0}^{\infty} \gamma^l r(s_{t+l})\right]$, $A_\pi(s, a) = Q_\pi(s, a) - V_\pi(s)$, where $a_t \sim \pi(a_t|s_t)$, $s_{t+1} \sim P(s_{t+1}|s_t, a_t)$ for $t \geq 0$. Recall the local approximation $L_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a|s) A_\pi(s, a)$. (3) Note that $L_\pi$ uses the visitation frequency $\rho_\pi$ rather than $\rho_{\tilde{\pi}}$, ignoring changes in state visitation density due to changes in the policy. However, if we have a parameterized policy $\pi_\theta$, where $\pi_\theta(a|s)$ is a differentiable function of the parameter vector θ, then $L_\pi$ matches η to first order (see Kakade & Langford (2002)). That is, for any parameter value $\theta_0$: The following useful identity expresses the expected return of another policy $\tilde{\pi}$ in terms of the advantage over π, accumulated over timesteps (see Kakade & Langford (2002) or Appendix A for proof):
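A minimal sketch (not from the paper) of turning these definitions into empirical estimates along a single trajectory: Q is the discounted sum of future rewards and the advantage subtracts a value baseline; `value_fn` is an assumed state-value estimator.

```python
import numpy as np

def empirical_q_and_advantage(rewards, states, value_fn, gamma=0.99):
    """Q_hat(s_t, a_t) = sum_{l>=0} gamma^l * r_{t+l};  A_hat = Q_hat - V(s_t)."""
    T = len(rewards)
    q = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):       # backward recursion for the discounted tail sum
        running = rewards[t] + gamma * running
        q[t] = running
    v = np.array([value_fn(s) for s in states[:T]])
    return q, q - v                    # (Q estimates, advantage estimates)
```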
1502.05477#6
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
6
These two initiatives go in a promising direction but interpreting the results on these benchmarks remains complicated. Indeed, no system has yet been able to fully solve the proposed tasks and since many sub-tasks need to be solved to answer any of their questions (coreference, deduction, use of common-sense, etc.), it is difficult to clearly identify capabilities and limitations of these systems and hence to propose improvements and modifications. As a result, conclusions drawn from these projects are not much clearer than those coming from more traditional works on QA over large-scale Knowledge Bases (Berant et al., 2013; Fader et al., 2014). Besides, the best performing systems are based on hand-crafted patterns and features, and/or statistics acquired on very large corpora. It is difficult to argue that such systems actually understand language and are not simply light upgrades of traditional information extraction methods (Yao et al., 2014). The system of Berant et al. (2014) is more evolved since it builds a structured representation of a text and of a question to answer. Despite its potential, this method remains highly domain-specific and relies on a lot of prior knowledge.
1502.05698#6
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
7
$\eta(\tilde{\pi}) = \eta(\pi) + \mathbb{E}_{s_0, a_0, \dots \sim \tilde{\pi}}\left[\sum_{t=0}^{\infty} \gamma^t A_\pi(s_t, a_t)\right]$ (1) where the notation $\mathbb{E}_{s_0, a_0, \dots \sim \tilde{\pi}}[\dots]$ indicates that actions are sampled $a_t \sim \tilde{\pi}(\cdot|s_t)$. Let $\rho_\pi$ be the (unnormalized) discounted visitation frequencies $\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \dots$. For a parameterized policy, the first-order match mentioned above is $L_{\pi_{\theta_0}}(\pi_{\theta_0}) = \eta(\pi_{\theta_0})$, $\nabla_\theta L_{\pi_{\theta_0}}(\pi_\theta)\big|_{\theta=\theta_0} = \nabla_\theta \eta(\pi_\theta)\big|_{\theta=\theta_0}$. (4) Equation (4) implies that a sufficiently small step $\pi_{\theta_0} \to \tilde{\pi}$ that improves $L_{\pi_{\theta_{\text{old}}}}$ will also improve η, but does not give us any guidance on how big of a step to take. To address this issue, Kakade & Langford (2002) proposed a policy updating scheme called conservative policy iteration, for which they could provide explicit lower bounds on the improvement of η. To define the conservative policy iteration update, let $\pi_{\text{old}}$ denote the current policy, and let $\pi' = \arg\max_{\pi'} L_{\pi_{\text{old}}}(\pi')$. The new policy $\pi_{\text{new}}$ was defined to be the following mixture:
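As an aside (illustrative sketch, not the paper's code), the conservative policy iteration mixture referred to here, $\pi_{\text{new}}(a|s) = (1-\alpha)\pi_{\text{old}}(a|s) + \alpha\pi'(a|s)$, is, for tabular policies, just a per-state convex combination of action distributions:

```python
import numpy as np

def cpi_mixture_update(pi_old, pi_prime, alpha):
    """pi_new(a|s) = (1 - alpha) * pi_old(a|s) + alpha * pi_prime(a|s).
    Both inputs are |S| x |A| arrays whose rows sum to one."""
    pi_new = (1.0 - alpha) * pi_old + alpha * pi_prime
    assert np.allclose(pi_new.sum(axis=1), 1.0)   # rows remain valid distributions
    return pi_new
```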
1502.05477#7
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
7
Based on these observations, we chose to conceive a collection of much simpler QA tasks, with the main objective that failure or success of a system on any of them can unequivocally provide feedback on its capabilities. In that, we are close to the Winograd Schema Challenge Levesque et al. (2011), which is organized around simple statements followed by a single binary choice question such as: “Joan made sure to thank Susan for all the help she had received. Who had received the help? Joan or Susan?”. In this challenge, and our tasks, it is straightforward to interpret results. Yet, where the Winograd Challenge is mostly centered around evaluating if systems can acquire and make use of background knowledge that is not expressed in the words of the statement, our tasks are self-contained and are more diverse. By self-contained we mean our tasks come with both training data and evaluation data, rather than just the latter as in the case of ARISTO and the Winograd Challenge. MCTest has a train/test split but the training set is likely too small to capture all the reasoning needed to do well on the test set. In our setup one can assess the amount of training examples needed to perform well (which can be increased as
1502.05698#7
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
8
where $s_0 \sim \rho_0$ and the actions are chosen according to π. We can rewrite Equation (1) with a sum over states instead of timesteps: $\eta(\tilde{\pi}) = \eta(\pi) + \sum_{t=0}^{\infty} \sum_s P(s_t = s \mid \tilde{\pi}) \sum_a \tilde{\pi}(a|s)\, \gamma^t A_\pi(s, a) = \eta(\pi) + \sum_s \rho_{\tilde{\pi}}(s) \sum_a \tilde{\pi}(a|s) A_\pi(s, a)$. (2) This equation implies that any policy update $\pi \to \tilde{\pi}$ that has a nonnegative expected advantage at every state s, i.e., $\sum_a \tilde{\pi}(a|s) A_\pi(s, a) \geq 0$, is guaranteed to increase the policy performance η, or leave it constant in the case that the expected advantage is zero everywhere. This implies the classic result that the update performed by exact policy iteration, which uses the deterministic policy $\tilde{\pi}(s) = \arg\max_a A_\pi(s, a)$, improves the policy whenever at least one state-action pair has positive advantage and nonzero visitation probability. The conservative policy iteration mixture is $\pi_{\text{new}}(a|s) = (1-\alpha)\,\pi_{\text{old}}(a|s) + \alpha\,\pi'(a|s)$. (5) Kakade and Langford derived the following lower bound: $\eta(\pi_{\text{new}}) \geq L_{\pi_{\text{old}}}(\pi_{\text{new}}) - \frac{2\epsilon\gamma}{(1-\gamma)^2}\,\alpha^2$, where $\epsilon = \max_s \big|\mathbb{E}_{a \sim \pi'(\cdot|s)}[A_\pi(s, a)]\big|$. (6)
1502.05477#8
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
8
capture all the reasoning needed to do well on the test set. In our setup one can assess the amount of training examples needed to perform well (which can be increased as desired) and commonsense knowledge and reasoning required for the test set should be contained in the training set. In terms of diversity, some of our tasks are related to existing setups but we also propose many additional ones; tasks 8 and 9 are inspired by previous work on lambda dependency-based compositional semantics (Liang et al., 2013; Liang, 2013) for instance. For us, each task checks one skill that the system must
1502.05698#8
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
9
(We have modified it to make it slightly weaker but simpler.) Note, however, that so far this bound only applies to mixture policies generated by Equation (5). This policy class is unwieldy and restrictive in practice, and it is desirable for a practical policy update scheme to be applicable to all general stochastic policy classes. # 3 Monotonic Improvement Guarantee for General Stochastic Policies Equation (6), which applies to conservative policy iteration, implies that a policy update that improves the right-hand side is guaranteed to improve the true performance η. Our principal theoretical result is that the policy improvement bound in Equation (6) can be extended to general stochastic policies, rather than just mixture policies, by replacing α with a distance measure between π and $\tilde{\pi}$, and changing the constant ε appropriately. Since mixture policies are rarely used in practice, this result is crucial for extending the improvement guarantee to practical problems. The particular distance measure we use is the total variation divergence, which is defined by $D_{TV}(p \,\|\, q) = \frac{1}{2}\sum_i |p_i - q_i|$ for discrete probability distributions p, q. Define $D_{TV}^{\max}(\pi, \tilde{\pi})$ as
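A small sketch (illustrative only) of the total variation divergence and its state-wise maximum for tabular policies represented as |S| x |A| probability arrays:

```python
import numpy as np

def tv_divergence(p, q):
    """D_TV(p || q) = 0.5 * sum_i |p_i - q_i| for discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def tv_max(pi, pi_tilde):
    """D_TV^max(pi, pi_tilde) = max_s D_TV(pi(.|s) || pi_tilde(.|s))."""
    return max(tv_divergence(pi[s], pi_tilde[s]) for s in range(pi.shape[0]))
```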
1502.05477#9
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
9
1 http://allenai.org/aristo.html 2 http://research.microsoft.com/mct have, and we postulate that performing well on all of them is a prerequisite for any system aiming at full text understanding and reasoning. # 3 THE TASKS Principles Our main idea is to provide a set of tasks, in a similar way to how software testing is built in computer science. Ideally each task is a “leaf” test case, as independent from others as possible, and tests in the simplest way possible one aspect of intended behavior. Subsequent (“non-leaf”) tests can build on these by testing combinations as well. The tasks are publicly available at http://fb.ai/babi. Source code to generate the tasks is available at https://github.com/facebook/bAbI-tasks.
1502.05698#9
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
10
$D_{TV}^{\max}(\pi, \tilde{\pi}) = \max_s D_{TV}(\pi(\cdot|s) \,\|\, \tilde{\pi}(\cdot|s))$. Algorithm 1 (Policy iteration algorithm guaranteeing non-decreasing expected return η): Initialize $\pi_0$. For $i = 0, 1, 2, \dots$ until convergence: compute all advantage values $A_{\pi_i}(s, a)$; solve the constrained optimization problem $\pi_{i+1} = \arg\max_\pi \big[L_{\pi_i}(\pi) - C\, D_{KL}^{\max}(\pi_i, \pi)\big]$, where $C = 4\epsilon\gamma/(1-\gamma)^2$ and $L_{\pi_i}(\pi) = \eta(\pi_i) + \sum_s \rho_{\pi_i}(s) \sum_a \pi(a|s) A_{\pi_i}(s, a)$. Theorem 1. Let $\alpha = D_{TV}^{\max}(\pi_{\text{old}}, \pi_{\text{new}})$. Then the following bound holds: $\eta(\pi_{\text{new}}) \geq L_{\pi_{\text{old}}}(\pi_{\text{new}}) - \frac{4\epsilon\gamma}{(1-\gamma)^2}\,\alpha^2$, where $\epsilon = \max_{s,a} |A_\pi(s, a)|$. (8) Algorithm 1 is a type of minorization-maximization (MM) algorithm (Hunter & Lange, 2004), which is a class of methods that also includes expectation maximization. In the terminology of MM algorithms, $M_i$ is the surrogate function that minorizes η with equality at $\pi_i$. This algorithm is also reminiscent of proximal gradient methods and mirror descent.
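As a quick worked example (the numbers are illustrative, not from the paper), the penalty coefficient $C = 4\epsilon\gamma/(1-\gamma)^2$ in Algorithm 1 becomes very large for discount factors close to 1, which is why, as discussed later in this paper, the theoretically recommended penalty would force very small steps:

```python
def penalty_coefficient(eps, gamma):
    """C = 4 * eps * gamma / (1 - gamma)^2, as in Algorithm 1 / Theorem 1."""
    return 4.0 * eps * gamma / (1.0 - gamma) ** 2

# With eps = 1 and gamma = 0.99 the multiplier on D_KL^max is already 39600,
# so the maximizer of L - C * D_KL^max barely moves away from the current policy.
print(penalty_coefficient(eps=1.0, gamma=0.99))   # 39600.0
```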
1502.05477#10
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
10
Each task provides a set of training and test data, with the intention that a successful model performs well on test data. Following Weston et al. (2014), the supervision in the training set is given by the true answers to questions, and the set of relevant statements for answering a given question, which may or may not be used by the learner. We set up the tasks so that correct answers are limited to a single word (Q: Where is Mark? A: bathroom), or else a list of words (Q: What is Mark holding?) as evaluation is then clear-cut, and is measured simply as right or wrong. All of the tasks are noiseless and a human able to read that language can potentially achieve 100% accuracy. We tried to choose tasks that are natural to a human: they are based on simple usual situations and no background in areas such as formal semantics, machine learning, logic or knowledge representation is required for an adult to solve them. The data itself is produced using a simple simulation of characters and objects moving around and interacting in locations, described in Section 4. The simulation allows us to generate data in many different scenarios where the true labels are known by grounding to the simulation. For each task, we describe it by giving a small sample of the dataset including statements, questions and the true labels (in red) in Tables 1 and 2.
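A minimal sketch of the right-or-wrong evaluation described here (illustrative only; the comma-separated encoding of list answers is an assumption for this example), treating list answers as order-insensitive sets:

```python
def is_correct(predicted, gold):
    """Exact match for single-word answers; set equality for list answers
    (e.g. 'milk,football'), assumed to be comma-separated."""
    pred = {w.strip().lower() for w in predicted.split(",")}
    true = {w.strip().lower() for w in gold.split(",")}
    return pred == true

def accuracy(predictions, golds):
    return sum(is_correct(p, g) for p, g in zip(predictions, golds)) / len(golds)

print(accuracy(["bathroom", "football,milk"], ["bathroom", "milk,football"]))  # 1.0
```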
1502.05698#10
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
11
We provide two proofs in the appendix. The first proof extends Kakade and Langford's result using the fact that the random variables from two distributions with total variation divergence less than α can be coupled, so that they are equal with probability 1 − α. The second proof uses perturbation theory. Next, we note the following relationship between the total variation divergence and the KL divergence (Pollard (2000), Ch. 3): $D_{TV}(p \,\|\, q)^2 \leq D_{KL}(p \,\|\, q)$. Let $D_{KL}^{\max}(\pi, \tilde{\pi}) = \max_s D_{KL}(\pi(\cdot|s) \,\|\, \tilde{\pi}(\cdot|s))$. The following bound then follows directly from Theorem 1: $\eta(\tilde{\pi}) \geq L_\pi(\tilde{\pi}) - C D_{KL}^{\max}(\pi, \tilde{\pi})$, where $C = \frac{4\epsilon\gamma}{(1-\gamma)^2}$. (9) Trust region policy optimization, which we propose in the following section, is an approximation to Algorithm 1, which uses a constraint on the KL divergence rather than a penalty to robustly allow large updates. # 4 Optimization of Parameterized Policies In the previous section, we considered the policy optimization problem independently of the parameterization of π and under the assumption that the policy can be evaluated at all states. We now describe how to derive a practical algorithm from these theoretical foundations, under finite sample counts and arbitrary parameterizations.
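A quick numeric sanity check (illustrative only) of the stated relation $D_{TV}(p\|q)^2 \leq D_{KL}(p\|q)$ on random discrete distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    d_tv = 0.5 * np.abs(p - q).sum()
    d_kl = np.sum(p * np.log(p / q))
    assert d_tv ** 2 <= d_kl + 1e-12          # D_TV^2 <= D_KL
```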
1502.05477#11
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
11
Single Supporting Fact Task 1 consists of questions where a previously given single supporting fact, potentially amongst a set of other irrelevant facts, provides the answer. We first test one of the simplest cases of this, by asking for the location of a person, e.g. “Mary travelled to the office. Where is Mary?”. This kind of task was already employed in Weston et al. (2014). It can be considered the simplest case of some real world QA datasets such as in Fader et al. (2013). Two or Three Supporting Facts A harder task is to answer questions where two supporting state- ments have to be chained to answer the question, as in task 2, where to answer the question “Where is the football?” one has to combine information from two sentences “John is in the playground” and “John picked up the football”. Again, this kind of task was already used in Weston et al. (2014). Similarly, one can make a task with three supporting facts, given in task 3, whereby the first three statements are all required to answer the question “Where was the apple before the kitchen?”.
1502.05698#11
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
12
$\eta(\tilde{\pi}) \geq L_\pi(\tilde{\pi}) - C D_{KL}^{\max}(\pi, \tilde{\pi})$, where $C = \frac{4\epsilon\gamma}{(1-\gamma)^2}$. (9) Algorithm 1 describes an approximate policy iteration scheme based on the policy improvement bound in Equation (9). Note that for now, we assume exact evaluation of the advantage values $A_\pi$. It follows from Equation (9) that Algorithm 1 is guaranteed to generate a monotonically improving sequence of policies $\eta(\pi_0) \leq \eta(\pi_1) \leq \eta(\pi_2) \leq \dots$. To see this, let $M_i(\pi) = L_{\pi_i}(\pi) - C D_{KL}^{\max}(\pi_i, \pi)$. Since we consider parameterized policies $\pi_\theta(a|s)$ with parameter vector θ, we will overload our previous notation to use functions of θ rather than π, e.g. $\eta(\theta) := \eta(\pi_\theta)$, $L_\theta(\tilde{\theta}) := L_{\pi_\theta}(\pi_{\tilde{\theta}})$, and $D_{KL}(\theta \,\|\, \tilde{\theta}) := D_{KL}(\pi_\theta \,\|\, \pi_{\tilde{\theta}})$. We will use $\theta_{\text{old}}$ to denote the previous policy parameters that we want to improve upon. The preceding section showed that $\eta(\theta) \geq L_{\theta_{\text{old}}}(\theta) - C D_{KL}^{\max}(\theta_{\text{old}}, \theta)$, with equality at $\theta = \theta_{\text{old}}$. Thus, by performing the following maximization, we are guaranteed to improve the true objective η:
1502.05477#12
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
12
Two or Three Argument Relations To answer questions, the ability to differentiate and recognize subjects and objects is crucial. In task 4 we consider the extreme case where sentences feature reordered words, i.e. a bag-of-words will not work. For example, the questions “What is north of the bedroom?” and “What is the bedroom north of?” have exactly the same words, but a different order, with different answers. A step further, sometimes one needs to differentiate three separate arguments. Task 5 involves statements like “Jeff was given the milk by Bill” and then queries who is the giver, receiver or which object is involved. Yes/No Questions Task 6 tests, on some of the simplest questions possible (specifically, ones with a single supporting fact), the ability of a model to answer true/false type questions like “Is John in the playground?”. Counting and Lists/Sets Task 7 tests the ability of the QA system to perform simple counting operations, by asking about the number of objects with a certain property, e.g. “How many objects is Daniel holding?”. Similarly, task 8 tests the ability to produce a set of single word answers in the form of a list, e.g. “What is Daniel holding?”. These tasks can be seen as QA tasks related to basic database search operations.
1502.05698#12
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
13
$\underset{\theta}{\text{maximize}}\ \left[L_{\theta_{\text{old}}}(\theta) - C D_{KL}^{\max}(\theta_{\text{old}}, \theta)\right]$. $\eta(\pi_{i+1}) \geq M_i(\pi_{i+1})$ by Equation (9); $\eta(\pi_i) = M_i(\pi_i)$; therefore, $\eta(\pi_{i+1}) - \eta(\pi_i) \geq M_i(\pi_{i+1}) - M_i(\pi_i)$. (10) Thus, by maximizing $M_i$ at each iteration, we guarantee that the true objective η is non-decreasing. (Footnote 1: Our result is straightforward to extend to continuous states and actions by replacing the sums with integrals.) In practice, if we used the penalty coefficient C recommended by the theory above, the step sizes would be very small. One way to take larger steps in a robust way is to use a constraint on the KL divergence between the new policy and the old policy, i.e., a trust region constraint: $\underset{\theta}{\text{maximize}}\ L_{\theta_{\text{old}}}(\theta)$ subject to $D_{KL}^{\max}(\theta_{\text{old}}, \theta) \leq \delta$. (11)
1502.05477#13
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05477
14
$\underset{\theta}{\text{maximize}}\ L_{\theta_{\text{old}}}(\theta)$ subject to $D_{KL}^{\max}(\theta_{\text{old}}, \theta) \leq \delta$. (11) This problem imposes a constraint that the KL divergence is bounded at every point in the state space. While it is motivated by the theory, this problem is impractical to solve due to the large number of constraints. Instead, we can use a heuristic approximation which considers the average KL divergence: $\bar{D}_{KL}^{\rho}(\theta_1, \theta_2) := \mathbb{E}_{s \sim \rho}\left[D_{KL}(\pi_{\theta_1}(\cdot|s) \,\|\, \pi_{\theta_2}(\cdot|s))\right]$. We therefore propose solving the following optimization problem to generate a policy update: $\underset{\theta}{\text{maximize}}\ L_{\theta_{\text{old}}}(\theta)$ subject to $\bar{D}_{KL}^{\rho_{\theta_{\text{old}}}}(\theta_{\text{old}}, \theta) \leq \delta$. (12) Similar policy updates have been proposed in prior work (Bagnell & Schneider, 2003; Peters & Schaal, 2008b; Peters et al., 2010), and we compare our approach to prior methods in Section 7 and in the experiments in Section 8. Our experiments also show that this type of constrained update has similar empirical performance to the maximum KL divergence constraint in Equation (11).
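A sketch of the two sample-based quantities in Equation (12), the importance-weighted surrogate objective and the average KL divergence over visited states (assumptions: tabular categorical policies stored as |S| x |A| arrays; not the paper's implementation):

```python
import numpy as np

def surrogate_and_mean_kl(pi_new, pi_old, states, actions, advantages):
    """Sample-based surrogate L(theta) and the average-KL constraint of Eq. (12)."""
    states, actions = np.asarray(states), np.asarray(actions)
    ratios = pi_new[states, actions] / pi_old[states, actions]      # pi_theta / pi_old
    surrogate = np.mean(ratios * np.asarray(advantages))
    kl_per_state = np.sum(pi_old[states] * np.log(pi_old[states] / pi_new[states]), axis=1)
    return surrogate, np.mean(kl_per_state)
```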
1502.05477#14
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05477
15
Figure 1. Left: illustration of single path procedure. Here, we generate a set of trajectories via simulation of the policy and incorporate all state-action pairs (sn, an) into the objective. Right: illustration of vine procedure. We generate a set of “trunk” trajectories, and then generate “branch” rollouts from a subset of the reached states. For each of these states sn, we perform multiple actions (a1 and a2 here) and perform a rollout after each action, using common random numbers (CRN) to reduce the variance. All that remains is to replace the expectations by sample averages and replace the Q value by an empirical estimate. The following sections describe two different schemes for performing this estimation. # 5 Sample-Based Estimation of the Objective and Constraint The previous section proposed a constrained optimization problem on the policy parameters (Equation (12)), which optimizes an estimate of the expected total reward η subject to a constraint on the change in the policy at each update. This section describes how the objective and constraint functions can be approximated using Monte Carlo simulation.
1502.05477#15
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05477
16
The first sampling scheme, which we call single path, is the one that is typically used for policy gradient estimation (Bartlett & Baxter, 2011), and is based on sampling individual trajectories. The second scheme, which we call vine, involves constructing a rollout set and then performing multiple actions from each state in the rollout set. This method has mostly been explored in the context of policy iteration methods (Lagoudakis & Parr, 2003; Gabillon et al., 2013). We seek to solve the following optimization problem, obtained by expanding $L_{\theta_{\text{old}}}$ in Equation (12): $\underset{\theta}{\text{maximize}}\ \sum_s \rho_{\theta_{\text{old}}}(s) \sum_a \pi_\theta(a|s) A_{\theta_{\text{old}}}(s, a)$ subject to $\bar{D}_{KL}^{\rho_{\theta_{\text{old}}}}(\theta_{\text{old}}, \theta) \leq \delta$. (13) We first replace $\sum_s \rho_{\theta_{\text{old}}}(s)[\dots]$ in the objective by the expectation $\frac{1}{1-\gamma}\mathbb{E}_{s \sim \rho_{\theta_{\text{old}}}}[\dots]$. Next, we replace the advantage values $A_{\theta_{\text{old}}}$ by the Q-values $Q_{\theta_{\text{old}}}$ in Equation (13), which only changes the objective by a constant. Last, we replace the sum over the actions by an importance sampling estimator. Using q to denote the sampling distribution, the contribution of a single $s_n$ to the loss function is
1502.05477#16
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
16
Task 3: Three Supporting Facts John picked up the apple. John went to the office. John went to the kitchen. John dropped the apple. Where was the apple before the kitchen? A: office Task 4: Two Argument Relations The office is north of the bedroom. The bedroom is north of the bathroom. The kitchen is west of the garden. What is north of the bedroom? A: office What is the bedroom north of? A: bathroom Task 5: Three Argument Relations Mary gave the cake to Fred. Fred gave the cake to Bill. Jeff was given the milk by Bill. Who gave the cake to Fred? A: Mary Who did Fred give the cake to? A: Bill Task 6: Yes/No Questions John moved to the playground. Daniel went to the bathroom. John went back to the hallway. Is John in the playground? A: no Is Daniel in the bathroom? A: yes Task 7: Counting Daniel picked up the football. Daniel dropped the football. Daniel got the milk. Daniel took the apple. How many objects is Daniel holding? A: two Task 8: Lists/Sets Daniel picks up the football. Daniel drops the newspaper. Daniel picks up the milk. John took the apple. What is Daniel holding? A: milk, football
1502.05698#16
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
17
$\sum_a \pi_\theta(a|s_n) A_{\theta_{\text{old}}}(s_n, a) = \mathbb{E}_{a \sim q}\left[\frac{\pi_\theta(a|s_n)}{q(a|s_n)} A_{\theta_{\text{old}}}(s_n, a)\right]$. Our optimization problem in Equation (13) is exactly equivalent to the following one, written in terms of expectations: $\underset{\theta}{\text{maximize}}\ \mathbb{E}_{s \sim \rho_{\theta_{\text{old}}},\, a \sim q}\left[\frac{\pi_\theta(a|s)}{q(a|s)}\, Q_{\theta_{\text{old}}}(s, a)\right]$ (14) # 5.1 Single Path In this estimation procedure, we collect a sequence of states by sampling $s_0 \sim \rho_0$ and then simulating the policy $\pi_{\theta_{\text{old}}}$ for some number of timesteps to generate a trajectory $s_0, a_0, s_1, a_1, \dots, s_{T-1}, a_{T-1}, s_T$. Hence, $q(a|s) = \pi_{\theta_{\text{old}}}(a|s)$. $Q_{\theta_{\text{old}}}(s, a)$ is computed at each state-action pair $(s_t, a_t)$ by taking the discounted sum of future rewards along the trajectory. # 5.2 Vine
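A minimal single-path sketch (illustrative; `pi_new_prob` and `pi_old_prob` are assumed probability callables): the trajectory is sampled under the old policy, so q = π_old, Q is the discounted sum of future rewards, and the objective of Equation (14) is estimated by the average importance-weighted Q:

```python
import numpy as np

def single_path_objective(traj, pi_new_prob, pi_old_prob, gamma=0.99):
    """traj: list of (state, action, reward) tuples sampled under pi_old.
    Returns a sample estimate of E[ pi_theta(a|s)/q(a|s) * Q_old(s, a) ] with q = pi_old."""
    rewards = [r for (_, _, r) in traj]
    q_values, running = np.zeros(len(traj)), 0.0
    for t in reversed(range(len(traj))):        # Q_old(s_t, a_t): discounted future rewards
        running = rewards[t] + gamma * running
        q_values[t] = running
    ratios = np.array([pi_new_prob(s, a) / pi_old_prob(s, a) for (s, a, _) in traj])
    return float(np.mean(ratios * q_values))
```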
1502.05477#17
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05477
18
# 5.2 Vine In this estimation procedure, we first sample $s_0 \sim \rho_0$ and simulate the policy $\pi_{\theta_i}$ to generate a number of trajectories. We then choose a subset of N states along these trajectories, denoted $s_1, s_2, \dots, s_N$, which we call the “rollout set”. For each state $s_n$ in the rollout set, we sample K actions according to $a_{n,k} \sim q(\cdot|s_n)$. Any choice of $q(\cdot|s_n)$ with a support that includes the support of $\pi_{\theta_i}(\cdot|s_n)$ will produce a consistent estimator. In practice, we found that $q(\cdot|s_n) = \pi_{\theta_i}(\cdot|s_n)$ works well on continuous problems, such as robotic locomotion, while the uniform distribution works well on discrete tasks, such as the Atari games, where it can sometimes achieve better exploration. The constraint accompanying Equation (14) is: subject to $\mathbb{E}_{s \sim \rho_{\theta_{\text{old}}}}\left[D_{KL}(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s))\right] \leq \delta$. For each action $a_{n,k}$ sampled at each state $s_n$, we estimate
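A sketch of the vine estimate at one rollout-set state (illustrative; `env.set_state` and `env.seed` are assumed simulator hooks, reflecting the requirement that the system can be restored to particular states): each candidate action is followed by a short rollout, and reseeding with the same value implements common random numbers across the K rollouts:

```python
import numpy as np

def vine_q_estimates(env, state, actions, policy, gamma=0.99, rollout_len=50, crn_seed=0):
    """Estimate Q_hat(s_n, a) for each candidate action a using common random numbers."""
    q_hats = []
    for a in actions:
        env.set_state(state)          # restore the simulator to s_n (assumed capability)
        env.seed(crn_seed)            # common random numbers across the K rollouts
        ret, disc = 0.0, 1.0
        s, r, done = env.step(a)      # take the candidate action first
        ret += disc * r
        for _ in range(rollout_len - 1):
            if done:
                break
            disc *= gamma
            s, r, done = env.step(policy(s))
            ret += disc * r
        q_hats.append(ret)
    return np.array(q_hats)
```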
1502.05477#18
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
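The vine collection described in the chunk above amounts to two steps: pick a rollout set of N states from previously simulated trajectories, then draw K actions per state from a proposal q. A sketch under that reading; the function and argument names here are illustrative placeholders, not from the paper.

```python
import numpy as np

def build_rollout_set(trajectories, n_states, rng):
    """Pick N states uniformly from previously simulated trajectories."""
    all_states = [s for traj in trajectories for s in traj]
    idx = rng.choice(len(all_states), size=n_states, replace=False)
    return [all_states[i] for i in idx]

def sample_vine_actions(rollout_set, proposal, k_actions):
    """For each rollout state s_n, draw K actions a_{n,k} ~ q(.|s_n).
    Any q whose support covers pi_theta_i(.|s_n) gives a consistent
    estimator; q = pi_theta_i tends to suit continuous control, a
    uniform q small discrete action spaces."""
    return [[proposal(s) for _ in range(k_actions)] for s in rollout_set]
```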
1502.05698
18
Simple Negation and Indefinite Knowledge Tasks 9 and 10 test slightly more complex natural language constructs. Task 9 tests one of the simplest forms of negation, that of supporting facts that imply a statement is false e.g. “Fred is no longer in the office” rather than “Fred travelled to the office”. (In this case, task 6 (yes/no questions) is a prerequisite to the task.) Task 10 tests if we can model statements that describe possibilities rather than certainties, e.g. “John is either in the classroom or the playground.”, where in that case the answer is “maybe” to the question “Is John in the classroom?”.
1502.05698#18
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
19
For each action an,k sampled at each state sn, we estimate ˆQθi(sn, an,k) by performing a rollout (i.e., a short trajectory) starting with state sn and action an,k. We can greatly reduce the variance of the Q-value differences between rollouts by using the same random number sequence for the noise in each of the K rollouts, i.e., common random numbers. See (Bertsekas, 2005) for additional discussion on Monte Carlo estimation of Q-values and (Ng & Jordan, 2000) for a discussion of common random numbers in reinforcement learning. In small, finite action spaces, we can generate a rollout for every possible action from a given state. The contribution to Lθold from a single state sn is as follows: 1. Use the single path or vine procedures to collect a set of state-action pairs along with Monte Carlo estimates of their Q-values. 2. By averaging over samples, construct the estimated objective and constraint in Equation (14). 3. Approximately solve this constrained optimization problem to update the policy’s parameter vector θ. We use the conjugate gradient algorithm followed by a line search, which is altogether only slightly more expensive than computing the gradient itself. See Appendix C for details.
1502.05477#19
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
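The variance-reduction trick in the chunk above reuses the same noise across the K rollouts from a state (common random numbers). A rough sketch of that idea follows; it assumes a simulator that can be reset to an arbitrary state (`env.set_state`) and reseeded (`env.seed`), both hypothetical interfaces, and a full implementation would also share the seed used for the policy's own action sampling.

```python
def vine_q_estimates(env, policy, s_n, actions, horizon, gamma, seed):
    """Estimate Q(s_n, a_{n,k}) for each candidate action with a short
    rollout.  Re-seeding with the same value before every rollout gives
    common random numbers, so Q-value *differences* have low variance."""
    q_hats = []
    for a0 in actions:
        env.seed(seed)             # same noise sequence for every rollout
        env.set_state(s_n)         # requires a resettable simulator
        ret, discount, a = 0.0, 1.0, a0
        for _ in range(horizon):
            s, r, done, _ = env.step(a)
            ret += discount * r
            discount *= gamma
            if done:
                break
            a = policy(s)
        q_hats.append(ret)
    return q_hats
```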
1502.05698
19
Basic Coreference, Conjunctions and Compound Coreference Task 11 tests the simplest type of coreference, that of detecting the nearest referent, e.g. “Daniel was in the kitchen. Then he went to the studio.”. Real-world data typically addresses this as a labeling problem and studies more sophisticated phenomena (Soon et al., 2001), whereas we evaluate it as in all our other tasks as a question answering problem. Task 12 (conjunctions) tests referring to multiple subjects in a single statement, e.g. “Mary and Jeff went to the kitchen.”. Task 13 tests coreference in the case where the pronoun can refer to multiple actors, e.g. “Daniel and Sandra journeyed to the office. Then they went to the garden”. Time Reasoning While our tasks so far have included time implicitly in the order of the state- ments, task 14 tests understanding the use of time expressions within the statements, e.g. “In the afternoon Julie went to the park. Yesterday Julie was at school.”, followed by questions about the order of events such as “Where was Julie before the park?”. Real-world datasets address the task of evaluating time expressions typically as a labeling, rather than a QA task, see e.g. UzZaman et al. (2012).
1502.05698#19
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
20
$L_n(\theta) = \sum_{k=1}^{K} \pi_\theta(a_k|s_n)\,\hat{Q}(s_n,a_k)$ (15) where the action space is $\mathcal{A} = \{a_1, a_2, \ldots, a_K\}$. In large or continuous state spaces, we can construct an estimator of the surrogate objective using importance sampling. The self-normalized estimator (Owen (2013), Chapter 9) of $L_{\theta_{old}}$ obtained at a single state sn is $L_n(\theta) = \frac{\sum_{k=1}^{K} \frac{\pi_\theta(a_{n,k}|s_n)}{\pi_{\theta_{old}}(a_{n,k}|s_n)}\,\hat{Q}(s_n,a_{n,k})}{\sum_{k=1}^{K} \frac{\pi_\theta(a_{n,k}|s_n)}{\pi_{\theta_{old}}(a_{n,k}|s_n)}}$ (16) assuming actions an,1, an,2, . . . , an,K from state sn. This self-normalized estimator removes the need to use a baseline for the Q-values (note that the gradient is unchanged by adding a constant to the Q-values). Averaging over sn ∼ ρ(π), we obtain an estimator for Lθold , as well as its gradient.
1502.05477#20
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
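Equation (16) in the chunk above is a straightforward weighted average: importance weights are ratios of new to old action probabilities, and dividing by their sum is what removes the need for a Q-value baseline. A direct transcription as a sketch; `log_prob_new`/`log_prob_old` are assumed to be per-action log-probabilities under πθ and πθold.

```python
import numpy as np

def self_normalized_L_n(log_prob_new, log_prob_old, q_hat):
    """Per-state surrogate L_n(theta) from Eq. (16).
    log_prob_new, log_prob_old, q_hat: arrays of length K
    (one entry per sampled action a_{n,k})."""
    w = np.exp(log_prob_new - log_prob_old)        # importance weights
    return np.sum(w * q_hat) / np.sum(w)           # self-normalized mean

def surrogate_objective(log_prob_new, log_prob_old, q_hat):
    """Average L_n over the rollout set; arrays are (N states, K actions)."""
    w = np.exp(log_prob_new - log_prob_old)
    return np.mean(np.sum(w * q_hat, axis=1) / np.sum(w, axis=1))
```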
1502.05477
21
With regard to (3), we construct the Fisher information matrix (FIM) by analytically computing the Hessian of the KL divergence, rather than using the covariance matrix of the gradients. That is, we estimate Aij as $\frac{1}{N}\sum_{n=1}^{N}\frac{\partial^2}{\partial\theta_i\,\partial\theta_j} D_{KL}(\pi_{\theta_{old}}(\cdot|s_n)\,\|\,\pi_\theta(\cdot|s_n))$, rather than $\frac{1}{N}\sum_{n=1}^{N}\frac{\partial}{\partial\theta_i}\log\pi_\theta(a_n|s_n)\,\frac{\partial}{\partial\theta_j}\log\pi_\theta(a_n|s_n)$. The analytic estimator integrates over the action at each state sn, and does not depend on the action an that was sampled. As described in Appendix C, this analytic estimator has computational benefits in the large-scale setting, since it removes the need to store a dense Hessian or all policy gradients from a batch of trajectories. The rate of improvement in the policy is similar to the empirical FIM, as shown in the experiments. Let us briefly summarize the relationship between the theory from Section 3 and the practical algorithm we have described: The vine and single path methods are illustrated in Figure 1. We use the term vine, since the trajectories used for sampling can be likened to the stems of vines, which branch at various points (the rollout set) into several short offshoots (the rollout trajectories).
1502.05477#21
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
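The large-scale benefit mentioned in the chunk above comes from never materializing the FIM: the constrained step only needs products of the form v → Av inside conjugate gradient. Below is a generic conjugate-gradient sketch; the `fisher_vector_product` callable stands in for the Hessian-of-KL contraction the paper builds in its appendix, and how it is constructed depends on the autodiff framework, so treat it as an assumed input.

```python
import numpy as np

def conjugate_gradient(fisher_vector_product, g, iters=10, tol=1e-10):
    """Approximately solve A x = g using only matrix-vector products
    x -> A x, so the dense FIM A is never stored."""
    x = np.zeros_like(g)
    r = g.copy()                 # residual g - A x (x = 0 initially)
    p = g.copy()
    r_dot_r = r @ r
    for _ in range(iters):
        Ap = fisher_vector_product(p)
        alpha = r_dot_r / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        new_r_dot_r = r @ r
        if new_r_dot_r < tol:
            break
        p = r + (new_r_dot_r / r_dot_r) * p
        r_dot_r = new_r_dot_r
    return x
```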
1502.05698
21
Task 11: Basic Coreference Daniel was in the kitchen. Then he went to the studio. Sandra was in the office. Where is Daniel? A:studio Task 12: Conjunction Mary and Jeff went to the kitchen. Then Jeff went to the park. Where is Mary? A: kitchen Where is Jeff? A: park Task 13: Compound Coreference Daniel and Sandra journeyed to the office. Then they went to the garden. Sandra and John travelled to the kitchen. After that they moved to the hallway. Where is Daniel? A: garden Task 14: Time Reasoning In the afternoon Julie went to the park. Yesterday Julie was at school. Julie went to the cinema this evening. Where did Julie go after the park? A:cinema Where was Julie before the park? A:school Task 15: Basic Deduction Sheep are afraid of wolves. Cats are afraid of dogs. Mice are afraid of cats. Gertrude is a sheep. What is Gertrude afraid of? A:wolves Task 16: Basic Induction Lily is a swan. Lily is white. Bernhard is green. Greg is a swan. What color is Greg? A:white Task 17: Positional Reasoning The triangle is
1502.05698#21
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
22
The benefit of the vine method over the single path method is that our local estimate of the objective has much lower variance given the same number of Q-value samples in the surrogate objective. That is, the vine method gives much better estimates of the advantage values. The downside of the vine method is that we must perform far more calls to the simulator for each of these advantage estimates. Furthermore, the vine method requires us to generate multiple trajectories from each state in the rollout set, which limits this algorithm to settings where the system can be reset to an arbitrary state. In contrast, the single path algorithm requires no state resets and can be directly implemented on a physical system (Peters & Schaal, 2008b). • The theory justifies optimizing a surrogate objective with a penalty on KL divergence. However, the large penalty coefficient C leads to prohibitively small steps, so we would like to decrease this coefficient. Empirically, it is hard to robustly choose the penalty coefficient, so we use a hard constraint instead of a penalty, with parameter δ (the bound on KL divergence). • $\overline{D}^{\max}_{KL}(\theta_{old},\theta)$ is hard for numerical optimization and estimation, so instead we constrain $\overline{D}_{KL}(\theta_{old},\theta)$.
1502.05477#22
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
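The chunk above replaces the hard-to-tune KL penalty with a hard constraint δ; in practice this is typically enforced by backtracking on the proposed update until the constraint holds. A sketch under the assumption that callables for the surrogate and the mean KL divergence are available (`surrogate` and `mean_kl` are hypothetical names).

```python
def line_search(theta_old, full_step, surrogate, mean_kl, delta,
                max_backtracks=10, shrink=0.5):
    """Shrink the proposed step until the surrogate improves and the
    average KL divergence stays within the trust region delta."""
    f_old = surrogate(theta_old)
    step = full_step
    for _ in range(max_backtracks):
        theta_new = theta_old + step
        if surrogate(theta_new) > f_old and mean_kl(theta_old, theta_new) <= delta:
            return theta_new
        step = shrink * step
    return theta_old   # no acceptable step found; keep the old policy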
1502.05698
22
is a swan. Lily is white. Bernhard is green. Greg is a swan. What color is Greg? A:white Task 17: Positional Reasoning The triangle is to the right of the blue square. The red square is on top of the blue square. The red sphere is to the right of the blue square. Is the red sphere to the right of the blue square? A:yes Is the red square to the left of the triangle? A:yes Task 18: Size Reasoning The football fits in the suitcase. The suitcase fits in the cupboard. The box is smaller than the football. Will the box fit in the suitcase? A:yes Will the cupboard fit in the box? A:no
1502.05698#22
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
23
• $\overline{D}^{\max}_{KL}(\theta_{old},\theta)$ is hard for numerical optimization and estimation, so instead we constrain $\overline{D}_{KL}(\theta_{old},\theta)$. • Our theory ignores estimation error for the advantage function. Kakade & Langford (2002) consider this error in their derivation, and the same arguments would hold in the setting of this paper, but we omit them for simplicity. # 6 Practical Algorithm Here we present two practical policy optimization algorithms based on the ideas above, which use either the single path or vine sampling scheme from the preceding section. The algorithms repeatedly perform the following steps: # 7 Connections with Prior Work As mentioned in Section 4, our derivation results in a policy update that is related to several prior methods, providing a unifying perspective on a number of policy update schemes. The natural policy gradient (Kakade, 2002) can be obtained as a special case of the update in Equation (12) by using a linear approximation to L and a quadratic approximation to the $\overline{D}_{KL}$ constraint, resulting in the following problem: $\underset{\theta}{\text{maximize}}\;\left[\nabla_\theta L_{\theta_{old}}(\theta)\big|_{\theta=\theta_{old}}\right]\cdot(\theta-\theta_{old})$ subject to $\frac{1}{2}(\theta_{old}-\theta)^{T} A(\theta_{old})\,(\theta_{old}-\theta) \le \delta$
1502.05477#23
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
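The linear/quadratic approximation in the chunk above leads to a step along A(θold)^{-1}∇θL, with the step length chosen so the quadratic KL approximation equals δ. A small sketch of that update; it uses a dense solve for clarity, whereas TRPO itself would use conjugate gradient with Fisher-vector products, and the names here are illustrative.

```python
import numpy as np

def natural_gradient_step(theta_old, grad, fim, delta):
    """Constrained step of the problem above: move along A^{-1} g, scaled
    so that (1/2) step^T A step = delta under the quadratic KL model."""
    step_dir = np.linalg.solve(fim, grad)           # A^{-1} g
    quad = step_dir @ fim @ step_dir                # s^T A s
    step_size = np.sqrt(2.0 * delta / max(quad, 1e-12))
    return theta_old + step_size * step_dir
```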
1502.05698
23
tests basic induction via inheritance of properties. A full analysis of induction and deduction is clearly beyond the scope of this work, and future tasks should analyse further, deeper aspects. Positional and Size Reasoning Task 17 tests spatial reasoning, one of many components of the classical SHRDLU system (Winograd, 1972) by asking questions about the relative positions of colored blocks. Task 18 requires reasoning about the relative size of objects and is inspired by the commonsense reasoning examples in the Winograd schema challenge (Levesque et al., 2011). Path Finding The goal of task 19 is to find the path between locations: given the description of various locations, it asks: how do you get from one to another? This is related to the work of Chen & Mooney (2011) and effectively involves a search problem. Agent’s Motivations Finally, task 20 questions, in the simplest way possible, why an agent per- forms an action. It addresses the case of actors being in a given state (hungry, thirsty, tired, . . . ) and the actions they then take, e.g. it should learn that hungry people might go to the kitchen, and so on. As already stated, these tasks are meant to foster the development and understanding of machine learning algorithms. A single model should be evaluated across all the tasks (not tuning per task) and then the same model should be tested on additional real-world tasks.
1502.05698#23
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]