Dataset columns (type and observed min–max):
- doi: string (length 10–10)
- chunk-id: int64 (0–936)
- chunk: string (length 401–2.02k)
- id: string (length 12–14)
- title: string (length 8–162)
- summary: string (length 228–1.92k)
- source: string (length 31–31)
- authors: string (length 7–6.97k)
- categories: string (length 5–107)
- comment: string (length 4–398)
- journal_ref: string (length 8–194)
- primary_category: string (length 5–17)
- published: string (length 8–8)
- updated: string (length 8–8)
- references: list
1502.05477
24
maximize_θ [∇_θ L_{θ_old}(θ)|_{θ=θ_old} · (θ − θ_old)] subject to ½ (θ_old − θ)^T A(θ_old)(θ_old − θ) ≤ δ, where A(θ_old)_{ij} = ∂/∂θ_i ∂/∂θ_j E_{s∼ρ_π}[D_KL(π(·|s, θ_old) ‖ π(·|s, θ))]|_{θ=θ_old}. Figure 2. 2D robot models used for locomotion experiments. From left to right: swimmer, hopper, walker. The hopper and walker present a particular challenge, due to underactuation and contact discontinuities. [Figure 3 diagram: locomotion policy network with an input layer, a fully connected hidden layer of 30 units, mean parameters and standard deviations, sampling, and control output; Atari policy network with screen input, two convolutional layers of 16 filters each, a hidden layer of 20 units, action probabilities, sampling, and control output.] The update is θ_new = θ_old + (1/λ) A(θ_old)^{-1} ∇_θ L(θ)|_{θ=θ_old}, where the stepsize 1/λ is typically treated as an algorithm parameter. This differs from our approach, which enforces the constraint at each update. Though this difference might seem subtle, our experiments demonstrate that it significantly improves the algorithm’s performance on larger problems.
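The contrast drawn above can be sanity-checked numerically. The sketch below is not the paper's implementation; it uses a small synthetic positive-definite matrix A (standing in for A(θ_old)) and a random surrogate gradient, both invented for illustration, to compare a fixed-stepsize natural-gradient update with a step rescaled so that ½ Δθ^T A Δθ = δ holds at every update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ingredients (illustrative only): a positive-definite "metric" A,
# standing in for the KL Hessian A(theta_old), and a surrogate gradient g.
dim = 5
M = rng.normal(size=(dim, dim))
A = M @ M.T + np.eye(dim)          # positive definite by construction
g = rng.normal(size=dim)           # gradient of the surrogate objective L
theta_old = np.zeros(dim)
delta = 0.01                       # trust-region size

# Natural-gradient step with a fixed stepsize 1/lambda, treated as a tunable parameter.
inv_lambda = 0.1
step_ng = inv_lambda * np.linalg.solve(A, g)

# Trust-region-style step: same direction A^{-1} g, but rescaled so that
# 0.5 * step^T A step == delta at every update.
direction = np.linalg.solve(A, g)
scale = np.sqrt(2.0 * delta / (direction @ A @ direction))
step_tr = scale * direction

print("fixed-stepsize natural gradient:", theta_old + step_ng)
print("constraint value of that step:  ", 0.5 * step_ng @ A @ step_ng)
print("trust-region-scaled step:       ", theta_old + step_tr)
print("constraint value (equals delta):", 0.5 * step_tr @ A @ step_tr)
```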
1502.05477#24
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
24
In our data release, in addition to providing the above 20 tasks in English, we also provide them (i) in Hindi; and (ii) with shuffled English words so they are no longer readable by humans. A good learning algorithm should perform similarly on all three, which would likely not be the case for a method using external resources, a setting intended to mimic a learner being first presented a language and having to learn from scratch. # 4 SIMULATION All our tasks are generated with a simulation which behaves like a classic text adventure game. The idea is that generating text within this simulation allows us to ground the language used into a coherent and controlled (artificial) world. Our simulation follows those of Bordes et al. (2010); Weston et al. (2014) but is somewhat more complex.
1502.05698#24
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
25
We can also obtain the standard policy gradient update by using an ℓ2 constraint or penalty: maximize_θ [∇_θ L_{θ_old}(θ)|_{θ=θ_old} · (θ − θ_old)] (18) subject to ½ ‖θ − θ_old‖² ≤ δ. The policy iteration update can also be obtained by solving the unconstrained problem maximize_π L_{π_old}(π), using L as defined in Equation (3). Figure 3. Neural networks used for the locomotion task (top) and for playing Atari games (bottom). 3. Can TRPO be used to solve challenging large-scale problems? How does TRPO compare with other methods when applied to large-scale problems, with regard to final performance, computation time, and sample complexity?
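As a worked check of this reduction (a sketch of my own, not text from the paper): write g = ∇_θ L_{θ_old}(θ)|_{θ=θ_old}. The linearized problem max_θ g^T (θ − θ_old) subject to ½ ‖θ − θ_old‖² ≤ δ is maximized along the direction of g with the constraint tight, giving θ_new = θ_old + (√(2δ)/‖g‖) g. This is ordinary gradient ascent whose stepsize √(2δ)/‖g‖ is set by the trust-region size δ rather than tuned separately, which is the sense in which the ℓ2-constrained problem recovers the standard policy gradient update.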
1502.05477#25
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
25
The simulated world is composed of entities of various types (locations, objects, persons, etc.) and of various actions that operate on these entities. Entities have internal states: their location, whether they carry objects on top or inside them (e.g., tables and boxes), the mental state of actors (e.g. hungry), as well as properties such as size, color, and edibility. For locations, the nearby places that are connected (e.g. what lies to the east, or above) are encoded. For actors, a set of rules per actor can also be specified to control their behavior, e.g. if they are hungry they may try to find food. Random valid actions can also be executed if no rule is set, e.g. walking around randomly.
1502.05698#25
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
26
Several other methods employ an update similar to Equation (12). Relative entropy policy search (REPS) (Peters et al., 2010) constrains the state-action marginals p(s, a), while TRPO constrains the conditionals p(a|s). Unlike REPS, our approach does not require a costly nonlinear optimization in the inner loop. Levine and Abbeel (2014) also use a KL divergence constraint, but its purpose is to encourage the policy not to stray from regions where the estimated dynamics model is valid, while we do not attempt to estimate the system dynamics explicitly. Pirotta et al. (2013) also build on and generalize Kakade and Langford’s results, and they derive different algorithms from the ones here. To answer (1) and (2), we compare the performance of the single path and vine variants of TRPO, several ablated variants, and a number of prior policy optimization algorithms. With regard to (3), we show that both the single path and vine algorithm can obtain high-quality locomotion controllers from scratch, which is considered to be a hard problem. We also show that these algorithms produce competitive results when learning policies for playing Atari games from images using convolutional neural networks with tens of thousands of parameters. # 8 Experiments We designed our experiments to investigate the following questions:
1502.05477#26
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
26
The actions an actor can execute in the simulation consist of the following: go <location>, get <object>, get <object1> from <object2>, put <object1> in/on <object2>, give <object> to <actor>, drop <object>, set <entity> <state>, look, inventory and examine <object>. A set of universal constraints is imposed on those actions to enforce coherence in the simulation. For example, an actor cannot get something that they or someone else already has, they cannot go to a place that is not connected to the current location, cannot drop something they do not already have, and so on. Together, the underlying actions, the per-actor rules, and their constraints define how actors act. For each task we limit the actions needed for that task, e.g. task 1 only needs go whereas task 2 uses go, get and drop. If we write the commands down this gives us a very simple “story” which is executable by the simulation, e.g., joe go playground; bob go office; joe get football. This example corresponds to task 2. The system can then ask questions about the state of the simulation, e.g., where john?, where football?, and so on. It is easy to calculate the true answers for these questions as we have access to the underlying world.
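A minimal executable sketch of such a simulator (my own illustration: the class, the method names, and the football's starting location are invented, not the authors' code) that runs the example story above and answers where-questions:

```python
# Toy world in the spirit of the simulation described above (illustrative only).
class World:
    def __init__(self):
        self.location = {}   # entity -> location, for actors and loose objects
        self.holder = {}     # object -> actor currently carrying it

    def go(self, actor, place):
        self.location[actor] = place

    def get(self, actor, obj):
        # Constraints from the text: cannot take something already held, and the
        # object must be in the same place as the actor.
        same_place = (self.location.get(obj) is not None
                      and self.location.get(obj) == self.location.get(actor))
        if obj not in self.holder and same_place:
            self.holder[obj] = actor
            del self.location[obj]

    def where(self, entity):
        # An object carried by an actor is wherever that actor is.
        if entity in self.holder:
            return self.location[self.holder[entity]]
        return self.location.get(entity, "unknown")

w = World()
w.location["football"] = "playground"       # assumed starting position
# "joe go playground; bob go office; joe get football"
w.go("joe", "playground")
w.go("bob", "office")
w.get("joe", "football")
print(w.where("joe"), w.where("football"))  # playground playground
```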
1502.05698#26
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
27
# 8 Experiments We designed our experiments to investigate the following questions: 1. What are the performance characteristics of the single path and vine sampling procedures? 2. TRPO is related to prior methods (e.g. natural policy gradient) but makes several changes, most notably by using a fixed KL divergence rather than a fixed penalty coefficient. How does this affect the performance of the algorithm? # 8.1 Simulated Robotic Locomotion We conducted the robotic locomotion experiments using the MuJoCo simulator (Todorov et al., 2012). The three simulated robots are shown in Figure 2. The states of the robots are their generalized positions and velocities, and the controls are joint torques. Underactuation, high dimensionality, and non-smooth dynamics due to contacts make these tasks very challenging. The following models are included in our evaluation: 1. Swimmer. 10-dimensional state space, linear reward for forward progress and a quadratic penalty on joint effort to produce the reward r(x, u) = v_x − 10^{-5} ‖u‖². The swimmer can propel itself forward by making an undulating motion.
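As a small illustration of that reward (my own sketch; the function name and the way the forward velocity is obtained are assumptions, not the paper's code):

```python
import numpy as np

def swimmer_reward(forward_velocity: float, torques: np.ndarray) -> float:
    """Linear reward for forward progress minus a quadratic control penalty:
    r(x, u) = v_x - 1e-5 * ||u||^2, as stated in the chunk above."""
    return float(forward_velocity) - 1e-5 * float(np.dot(torques, torques))

# Example: swimming forward at 0.5 units/s with modest joint torques.
print(swimmer_reward(0.5, np.array([0.3, -0.2])))  # ~0.4999987
```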
1502.05477#27
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
27
To produce more natural looking text with lexical variety from statements and questions we employ a simple automated grammar. Each verb is assigned a set of synonyms, e.g., the simulation command get is replaced with either picked up, got, grabbed or took, and drop is replaced with either dropped, left, discarded or put down. Similarly, each object and actor can have a set of replacement synonyms as well, e.g. replacing Daniel with he in task 11. Adverbs are crucial for some tasks such as the time reasoning task 14. There are a great many aspects of language not yet modeled. For example, all sentences are so far relatively short and contain little nesting. Further, the number of entities and the vocabulary are small (150 words, and typically 4 actors, 6 locations and 3 objects used per task). The hope is that defining a set of well-defined tasks will help evaluate models in a controlled way within the simulated environment, which is hard to do with real data. That is, these tasks are not a substitute for real data, but should complement them, especially when developing and analysing algorithms. # 5 EXPERIMENTS
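Before the experiments, a tiny sketch of the synonym-substitution grammar just described (my own illustration; the get and drop synonym sets come from the text above, while the rendering function and its output format are invented):

```python
import random

# Verb synonym sets: get/drop are taken from the description above.
VERB_SYNONYMS = {
    "get": ["picked up", "got", "grabbed", "took"],
    "drop": ["dropped", "left", "discarded", "put down"],
}

def render(actor: str, verb: str, argument: str, rng: random.Random) -> str:
    """Turn a simulation command like 'joe get football' into varied surface text."""
    surface_verb = rng.choice(VERB_SYNONYMS.get(verb, [verb]))
    return f"{actor.capitalize()} {surface_verb} the {argument}."

rng = random.Random(0)
print(render("joe", "get", "football", rng))   # e.g. "Joe took the football."
print(render("joe", "drop", "football", rng))  # e.g. "Joe discarded the football."
```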
1502.05698#27
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
28
2. Hopper. 12-dimensional state space, same reward as the swimmer, with a bonus of +1 for being in a non-terminal state. We ended the episodes when the hopper fell over, which was defined by thresholds on the torso height and angle. 3. Walker. 18-dimensional state space. For the walker, we added a penalty for strong impacts of the feet against the ground to encourage a smooth walk rather than a hopping gait. We used δ = 0.01 for all experiments. See Table 2 in the Appendix for more details on the experimental setup and parameters used. We used neural networks to represent the policy, with the architecture shown in Figure 3, and further details provided in Appendix D. To establish a standard baseline, we also included the classic cart-pole balancing problem, based on the formulation from Barto et al. (1983), using a linear policy with six parameters that is easy to optimize with derivative-free black-box optimization methods.
1502.05477#28
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
28
# 5 EXPERIMENTS We compared the following methods on our tasks (on the English dataset): (i) an N-gram classifier baseline, (ii) LSTMs (Long Short-Term Memory recurrent neural networks) (Hochreiter & Schmidhuber, 1997), (iii) Memory Networks (MemNNs) (Weston et al., 2014), (iv) some extensions of Memory Networks we will detail; and (v) a structured SVM that incorporates external labeled data from existing NLP tasks. These models belong to three separate tracks. Weakly supervised models are only given question-answer pairs at training time, whereas strong supervision provides the set of supporting facts at training time (but not testing time) as well. Strongly supervised ones give accuracy upper bounds for weakly supervised models, i.e. the performance should be superior given the same model class. Methods in the last external resources track can use labeled data from other sources rather than just the training set provided, e.g. coreference and semantic role labeling tasks, as well as strong supervision. For each task we use 1000 questions for training, and 1000 for testing, and report the test accuracy. We consider a task successfully passed if ≥ 95% accuracy is obtained3. 3The choice of 95% (and 1000 training examples) is arbitrary.
1502.05698#28
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
29
The following algorithms were considered in the comparison: single path TRPO; vine TRPO; cross-entropy method (CEM), a gradient-free method (Szita & Lörincz, 2006); covariance matrix adaptation (CMA), another gradient-free method (Hansen & Ostermeier, 1996); natural gradient, the classic natural policy gradient algorithm (Kakade, 2002), which differs from single path by the use of a fixed penalty coefficient (Lagrange multiplier) instead of the KL divergence constraint; empirical FIM, identical to single path, except that the FIM is estimated using the covariance matrix of the gradients rather than the analytic estimate; max KL, which was only tractable on the cart-pole problem, and uses the maximum KL divergence in Equation (11), rather than the average divergence, allowing us to evaluate the quality of this approximation. The parameters used in the experiments are provided in Appendix E. For the natural gradient method, we swept through the possible values of the stepsize in factors of three, and took the best value according to the final performance.
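To make the empirical-FIM ablation concrete, here is a minimal sketch (my own illustration with synthetic per-sample score gradients rather than real rollouts) of estimating the Fisher information matrix as the average outer product of policy score gradients and using it in place of the analytic estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-sample gradients of log pi(a|s) w.r.t. the policy parameters
# (in a real implementation these come from rollouts of the current policy).
n_samples, dim = 256, 6
score_grads = rng.normal(size=(n_samples, dim))

# Empirical Fisher information: average outer product of the score gradients,
# plus a small damping term to keep the matrix well conditioned.
fim = score_grads.T @ score_grads / n_samples + 1e-3 * np.eye(dim)

# Policy-gradient estimate (random here, for illustration only).
policy_grad = rng.normal(size=dim)

# Natural-gradient direction using the empirical FIM instead of the analytic estimate.
nat_grad = np.linalg.solve(fim, policy_grad)
print(nat_grad)
```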
1502.05477#29
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
29
3The choice of 95% (and 1000 training examples) is arbitrary. Table 3: Test accuracy (%) on our 20 Tasks for various methods (1000 training examples each). Our proposed extensions to MemNNs are in columns 5-9: with adaptive memory (AM), N-grams (NG), nonlinear matching function (NL), and combinations thereof. Bold numbers indicate tasks where our extensions achieve ≥ 95% accuracy but the original MemNN model of Weston et al. (2014) did not. The last two columns (10-11) give extra analysis of the MemNN AM + NG + NL method. Column 10 gives the amount of training data for each task needed to obtain ≥ 95% accuracy, or FAIL if this is not achievable with 1000 training examples. The final column gives the accuracy when training on all data at once, rather than separately.
1502.05698#29
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
30
Learning curves showing the total reward averaged across five runs of each algorithm are shown in Figure 4. Single path and vine TRPO solved all of the problems, yielding the best solutions. Natural gradient performed well on the two easier problems, but was unable to generate hopping and walking gaits that made forward progress. These results provide empirical evidence that constraining the KL divergence is a more robust way to choose step sizes and make fast, consistent progress, compared to using a fixed penalty. Figure 4. Learning curves for locomotion tasks, averaged across five runs of each algorithm with random initializations. Note that for the hopper and walker, a score of 1 is achievable without any forward velocity, indicating a policy that simply learned balanced standing, but not walking. CEM and CMA are derivative-free algorithms, hence their sample complexity scales unfavorably with the number of parameters, and they performed poorly on the larger problems. The max KL method learned somewhat more slowly than our final method, due to the more restrictive form of the constraint, but overall the result suggests that the average KL divergence constraint has a similar effect as the theoretically justified maximum KL divergence. Videos of the policies learned by TRPO may be viewed on the project website: http://sites.google.com/site/trpopaper/.
1502.05477#30
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
30
Weakly Supervised Uses External Resources Strong Supervision (using supporting facts) TASK 1 - Single Supporting Fact 2 - Two Supporting Facts 3 - Three Supporting Facts 4 - Two Arg. Relations 5 - Three Arg. Relations 6 - Yes/No Questions 7 - Counting 8 - Lists/Sets 9 - Simple Negation 10 - Indefinite Knowledge 11 - Basic Coreference 12 - Conjunction 13 - Compound Coref. 14 - Time Reasoning 15 - Basic Deduction 16 - Basic Induction 17 - Positional Reasoning 18 - Size Reasoning 19 - Path Finding 20 - Agent’s Motivations Mean Performance SVM + SRL features No. of ex. req. ≥ 95 MemNN ADAPTIVE MEMORY 100 100 100 69 83 52 78 90 71 57 100 100 100 100 73 100 46 50 9 100 79 MemNN Weston et al. (2014) 100 100 20 71 83 47 68 77 65 59 100 100 100 99 74 27 54 57 0 100 75 NONLINEAR + Structured N-GRAMS + NL + NG + N-gram Classifier LSTM MemNN MemNN MemNN COREF AM 100 100 100 73 86 100 83 94 100 97 100 100 100 100 77 100 57 54 15 100 87
1502.05698#30
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
31
Note that TRPO learned all of the gaits with general-purpose policies and simple reward functions, using minimal prior knowledge. This is in contrast with most prior methods for learning locomotion, which typically rely on hand-architected policy classes that explicitly encode notions of balance and stepping (Tedrake et al., 2004; Geng et al., 2006; Wampler & Popović, 2009). # 8.2 Playing Games from Images To evaluate TRPO on a partially observed task with complex observations, we trained policies for playing Atari games, using raw images as input. The games require learning a variety of behaviors, such as dodging bullets and hitting balls with paddles. Aside from the high dimensionality, challenging elements of these games include delayed rewards (no immediate penalty is incurred when a life is lost in Breakout or Space Invaders); complex sequences of behavior (Q*bert requires a character to hop on 21 different platforms); and non-stationary image statistics (Enduro involves a changing and flickering background). We tested our algorithms on the same seven games reported on in (Mnih et al., 2013) and (Guo et al., 2014), which are
1502.05477#31
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
31
Classifier LSTM MemNN MemNN MemNN COREF AM 100 100 100 73 86 100 83 94 100 97 100 100 100 100 77 100 57 54 15 100 87 AM 100 100 99 100 86 53 86 88 63 54 100 100 100 99 100 100 49 74 3 100 83 AM 100 100 100 100 98 100 85 91 100 98 100 100 100 99 100 100 65 95 36 100 93 99 74 17 98 83 99 69 70 100 99 100 96 99 99 96 24 61 62 49 95 79 36 2 7 50 20 49 52 40 62 45 29 9 26 19 20 43 46 52 0 76 34 250 ex. 500 ex. 500 ex. 500 ex. 1000 ex. 500 ex. FAIL FAIL 500 ex. 1000 ex. 250 ex. 250 ex. 250 ex. 500 ex. 100 ex. 100 ex. FAIL 1000 ex. FAIL 250 ex. 100 50 20 20 61 70 48 49 45 64 44 72 74 94 27 21 23 51 52 8 91 49 MultiTask Training 100 100 98 80 99 100 86 93 100 98 100 100 100 99 100 94 72 93 19 100 92
1502.05698#31
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
32
We tested our algorithms on the same seven games reported on in (Mnih et al., 2013) and (Guo et al., 2014), which are B. Rider, Breakout, Enduro, Pong, Q*bert, Seaquest, and S. Invaders. Random: 354, 1.2, 0, −20.4, 157, 110, 179. Human (Mnih et al., 2013): 7456, 31.0, 368, −3.0, 18900, 28010, 3690. Deep Q Learning (Mnih et al., 2013): 4092, 168.0, 470, 20.0, 1952, 1705, 581. UCC-I (Guo et al., 2014): 5702, 380, 741, 21, 20025, 2995, 692. TRPO - single path: 1425.2, 10.8, 534.6, 20.9, 1973.5, 1908.6, 568.4. TRPO - vine: 859.5, 34.2, 430.8, 20.9, 7732.5, 788.4, 450.2. Table 1. Performance comparison for vision-based RL algorithms on the Atari domain; scores are listed per game in the order given above. Our algorithms (bottom rows) were run once on each task, with the same architecture and parameters. Performance varies substantially from run to run (with different random initializations of the policy), but we could not obtain error statistics due to time constraints.
1502.05477#32
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
32
Methods The N-gram classifier baseline is inspired by the baselines in Richardson et al. (2013) but applied to the case of producing a 1-word answer rather than a multiple choice question: we construct a bag-of-N-grams for all sentences in the story that share at least one word with the question, and then learn a linear classifier to predict the answer using those features4. LSTMs are a popular method for sequence prediction (Sutskever et al., 2014) and outperform standard RNNs (Recurrent Neural Networks) for tasks similar to ours (Weston et al., 2014). They work by reading the story until the point they reach a question and then have to output an answer. Note that they are weakly supervised by answers only, and are hence at a disadvantage compared to strongly supervised methods or methods that use external resources.
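A rough sketch of that baseline (my own illustration: the toy stories, the scikit-learn feature extractor, and the logistic-regression classifier are stand-ins I chose, not the authors' implementation):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def filtered_context(story_sentences, question):
    """Keep only story sentences sharing at least one word with the question."""
    q_words = set(question.lower().split())
    return " ".join(s for s in story_sentences if q_words & set(s.lower().split()))

# Tiny invented training set: (story, question, single-word answer).
data = [
    (["joe went to the playground", "bob went to the office", "joe took the football"],
     "where is the football", "playground"),
    (["joe went to the kitchen", "joe took the apple"],
     "where is the apple", "kitchen"),
]

# Bag of N-grams (unigrams up to trigrams) over the filtered story sentences.
vectorizer = CountVectorizer(ngram_range=(1, 3))
X = vectorizer.fit_transform([filtered_context(story, q) for story, q, _ in data])
y = [answer for _, _, answer in data]

# Linear classifier over those features predicts the answer word.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))  # sanity check on the (tiny) training set
```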
1502.05698#32
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
33
made available through the Arcade Learning Environment (Bellemare et al., 2013). The images were preprocessed following the protocol in Mnih et al. (2013), and the policy was represented by the convolutional neural network shown in Figure 3, with two convolutional layers with 16 channels and stride 2, followed by one fully-connected layer with 20 units, yielding 33,500 parameters. The results of the vine and single path algorithms are summarized in Table 1, which also includes an expert human performance and two recent methods: deep Q-learning (Mnih et al., 2013), and a combination of Monte-Carlo Tree Search with supervised training (Guo et al., 2014), called UCC-I. The 500 iterations of our algorithm took about 30 hours (with slight variation between games) on a 16-core computer. While our method only outperformed the prior methods on some of the games, it consistently achieved reasonable scores. Unlike the prior methods, our approach was not designed specifically for this task. The ability to apply the same policy search method to tasks as diverse as robotic locomotion and image-based game playing demonstrates the generality of TRPO.
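A sketch of a policy network in that spirit (my own illustration in PyTorch: the kernel size, the 84x84 grayscale input resolution, the ReLU nonlinearities, and the six-action output are assumptions; only the two 16-channel stride-2 convolutions and the 20-unit hidden layer come from the description above):

```python
import torch
import torch.nn as nn

class AtariPolicy(nn.Module):
    """Image-to-action-probabilities policy sketch. Kernel size, input resolution,
    nonlinearities, and action count are assumptions for illustration only."""
    def __init__(self, n_actions: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, 84, 84)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(n_flat, 20), nn.ReLU(),
            nn.Linear(20, n_actions), nn.Softmax(dim=-1),
        )

    def forward(self, screens: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(screens))

policy = AtariPolicy()
probs = policy(torch.zeros(2, 1, 84, 84))   # batch of two blank screens
print(probs.shape, probs.sum(dim=-1))       # (2, 6), each row sums to 1
```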
1502.05477#33
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
33
MemNNs (Weston et al., 2014) are a recently proposed class of models that have been shown to perform well at QA. They work by a “controller” neural network performing inference over the stored memories that consist of the previous statements in the story. The original proposed model performs 2 hops of inference: finding the first supporting fact with the maximum match score with the question, and then the second supporting fact with the maximum match score with both the question and the first fact that was found. The matching function consists of mapping the bag-of-words for the question and facts into an embedding space by summing word embeddings. The word embeddings are learnt using strong supervision to optimize the QA task. After finding supporting facts, a final ranking is performed to rank possible responses (answer words) given those facts. We also consider some extensions to this model: • Adaptive memories: performing a variable number of hops rather than 2, the model is trained to predict either a hop or the special “STOP” class. A similar procedure can be applied to output multiple tokens as well. 4Constructing N-grams from all sentences rather than using the filtered set gave worse results.
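A bare-bones sketch of that two-hop matching procedure (my own illustration; the embeddings are random stand-ins for the strongly supervised ones, and the scoring is a plain dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["joe", "bob", "went", "playground", "office", "took", "football",
         "where", "is", "the", "to"]
EMB = {w: rng.normal(size=16) for w in VOCAB}   # stand-ins for learned word embeddings

def embed(text: str) -> np.ndarray:
    """Bag-of-words embedding: sum of word vectors (unknown words ignored)."""
    return sum((EMB[w] for w in text.lower().split() if w in EMB), np.zeros(16))

memories = ["joe went to the playground", "bob went to the office", "joe took the football"]
question = "where is the football"

# Hop 1: best-matching supporting fact for the question alone.
q = embed(question)
fact1 = max(memories, key=lambda m: float(embed(m) @ q))

# Hop 2: best-matching second fact given the question plus the first fact.
q2 = q + embed(fact1)
fact2 = max((m for m in memories if m != fact1), key=lambda m: float(embed(m) @ q2))

print("hop 1:", fact1)
print("hop 2:", fact2)
# A final ranking step over candidate answer words would then use (q, fact1, fact2).
```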
1502.05698#33
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
34
the game-playing domain, we learned convolutional neural network policies that used raw images as inputs. This requires optimizing extremely high-dimensional policies, and only two prior methods report successful results on this task. Since the method we proposed is scalable and has strong theoretical foundations, we hope that it will serve as a jumping-off point for future work on training large, rich function approximators for a range of challenging problems. At the intersection of the two experimental domains we explored, there is the possibility of learning robotic control policies that use vision and raw sensory data as input, providing a unified scheme for training robotic controllers that perform both perception and control. The use of more sophisticated policies, including recurrent policies with hidden state, could further make it possible to roll state estimation and control into the same policy in the partially-observed setting. By combining our method with model learning, it would also be possible to substantially reduce its sample complexity, making it applicable to real-world settings where samples are expensive. # 9 Discussion # Acknowledgements
1502.05477#34
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
34
4Constructing N-grams from all sentences rather than using the filtered set gave worse results. • N-grams We tried using a bag of 3-grams rather than a bag-of-words to represent the text. In both cases the first step of the MemNN is to convert these into vectorial embeddings. • Nonlinearity We apply a classical 2-layer neural network with tanh nonlinearity in the matching function. More details of these variants are given in Sec A of the appendix.
1502.05698#34
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
35
# 9 Discussion # Acknowledgements We proposed and analyzed trust region methods for optimizing stochastic control policies. We proved monotonic improvement for an algorithm that repeatedly optimizes a local approximation to the expected return of the policy with a KL divergence penalty, and we showed that an approximation to this method that incorporates a KL divergence constraint achieves good empirical results on a range of challenging policy learning tasks, outperforming prior methods. Our analysis also provides a perspective that unifies policy gradient and policy iteration methods, and shows them to be special limiting cases of an algorithm that optimizes a certain objective subject to a trust region constraint. We thank Emo Todorov and Yuval Tassa for providing the MuJoCo simulator; Bruno Scherrer, Tom Erez, Greg Wayne, and the anonymous ICML reviewers for insightful comments, and Vitchyr Pong and Shane Gu for pointing out errors in a previous version of the manuscript. This research was funded in part by the Office of Naval Research through a Young Investigator Award and under grant number N00014-11-1-0688, by DARPA through a Young Faculty Award, and by the Army Research Office through the MAST program. # References
1502.05477#35
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
35
• Nonlinearity We apply a classical 2-layer neural network with tanh nonlinearity in the matching function. More details of these variants are given in Sec A of the appendix. Finally, we built a classical cascade NLP system baseline using a structured support vector machine (SVM), which incorporates coreference resolution and semantic role labeling (SRL) preprocessing steps, which are themselves trained on large amounts of costly labeled data. The Stanford coreference system (Raghunathan et al., 2010) and the SENNA semantic role labeling (SRL) system (Collobert et al., 2011) are used to build features for the input to the SVM, trained with strong supervision to find the supporting facts, e.g. features based on words, word pairs, and the SRL verb and verb-argument pairs. After finding the supporting facts, we build a similar structured SVM for the response stage, with features tuned for that goal as well. More details are in Sec. B of the appendix. Learning rates and other hyperparameters for all methods are chosen using the training set. The summary of our experimental results on the tasks is given in Table 3. We give results for each of the 20 tasks separately, as well as mean performance and number of failed tasks in the final two rows.
1502.05698#35
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
36
# References In the domain of robotic locomotion, we successfully learned controllers for swimming, walking and hopping in a physics simulator, using general purpose neural networks and minimally informative rewards. To our knowledge, no prior work has learned controllers from scratch for all of these tasks, using a generic policy search method and non-engineered, general-purpose policy representations. In Bagnell, J. A. and Schneider, J. Covariant policy search. IJCAI, 2003. Bartlett, P. L. and Baxter, J. Infinite-horizon policy-gradient estimation. arXiv preprint arXiv:1106.0665, 2011. Barto, A., Sutton, R., and Anderson, C. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, (5):834–846, 1983. Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, June 2013.
1502.05477#36
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
36
Results Standard MemNNs generally outperform the N-gram and LSTM baselines, which is consistent with the results in Weston et al. (2014). However they still “fail” at a number of tasks; that is, test accuracy is less than 95%. Some of these failures are expected due to insufficient modeling power as described in more detail in Sec. A.1, e.g. k = 2 facts, single word answers and bag-of-words do not succeed on tasks 3, 4, 5, 7, 8 and 18. However, there were also failures on tasks we did not at first expect, for example yes/no questions (6) and indefinite knowledge (10). In hindsight, we realize that the linear scoring function of standard MemNNs cannot model the match between query, supporting fact and a yes/no answer as this requires three-way interactions.
1502.05698#36
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
37
Peters, J. and Schaal, S. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008a.
Bertsekas, D. Dynamic programming and optimal control, volume 1. 2005.
Deisenroth, M., Neumann, G., and Peters, J. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.
Gabillon, Victor, Ghavamzadeh, Mohammad, and Scherrer, Bruno. Approximate dynamic programming finally performs well in the game of Tetris. In Advances in Neural Information Processing Systems, 2013.
Geng, T., Porr, B., and Wörgötter, F. Fast biped walking with a reflexive controller and realtime policy searching. In Advances in Neural Information Processing Systems (NIPS), 2006.
Guo, X., Singh, S., Lee, H., Lewis, R. L., and Wang, X. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, pp. 3338–3346, 2014.
1502.05477#37
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
37
Columns 5-9 of Table 3 give the results for our MemNN extensions: adaptive memories (AM), N-grams (NG) and nonlinearities (NL), plus combinations thereof. The adaptive approach gives a straightforward improvement in tasks 3 and 16 because they both require more than two supporting facts, and also gives (small) improvements in 8 and 19 because they require multi-word outputs (but still remain difficult). We hence use the AM model in combination with all our other extensions in the subsequent experiments. MemNNs with N-gram modeling yield clear improvements when word order matters, e.g. tasks 4 and 15. However, N-grams do not seem to be a substitute for nonlinearities in the embedding function, as the NL model outperforms N-grams on average, especially in the yes/no (6) and indefinite tasks (10), as explained before. On the other hand, the NL method cannot model word order and so fails, e.g., on task 4. The obvious step is thus to combine these complementary approaches: indeed AM+NG+NL (column 9) gives improved results over both, with a total of 9 tasks that have been upgraded from failure to success compared to the original MemNN model.
1502.05698#37
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
38
Hansen, Nikolaus and Ostermeier, Andreas. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Evolutionary Computation, 1996, Proceedings of IEEE International Conference on, pp. 312–317. IEEE, 1996.
Peters, J., Mülling, K., and Altün, Y. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.
Peters, Jan and Schaal, Stefan. Natural actor-critic. Neurocomputing, 71(7):1180–1190, 2008b.
Pirotta, Matteo, Restelli, Marcello, Pecorino, Alessio, and Calandriello, Daniele. Safe policy iteration. In Proceedings of The 30th International Conference on Machine Learning, pp. 307–315, 2013.
Pollard, David. Asymptopia: an exposition of statistical asymptotic theory. 2000. URL http://www.stat.yale.edu/~pollard/Books/Asymptopia.
Szita, István and Lörincz, András. Learning Tetris using the noisy cross-entropy method. Neural Computation, 18(12):2936–2941, 2006.
1502.05477#38
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
38
The structured SVM, despite having access to external resources, does not perform better, still failing at 9 tasks. It does perform better than vanilla MemNNs (without extensions) on tasks 6, 9 and 10, where the hand-built feature conjunctions capture the necessary nonlinearities. However, compared to MemNN (AM+NG+NL) it seems to do significantly worse on tasks requiring three (and sometimes, two) supporting facts (e.g. tasks 3, 16 and 2), presumably as ranking over so many possibilities introduces more mistakes. However, its non-greedy search does seem to help on other tasks, such as path finding (task 19) where search is very important. Since it relies on external resources specifically designed for English, it is unclear whether it would perform as well on other languages, like Hindi, where such external resources might be of worse quality.
1502.05698#38
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
39
Tedrake, R., Zhang, T., and Seung, H. Stochastic policy gradient reinforcement learning on a simple 3D biped. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004.
Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.
Hunter, David R and Lange, Kenneth. A tutorial on MM algorithms. The American Statistician, 58(1):30–37, 2004.
Kakade, Sham. A natural policy gradient. In Advances in Neural Information Processing Systems, pp. 1057–1063. MIT Press, 2002.
Wampler, Kevin and Popović, Zoran. Optimal gait and form for animal locomotion. In ACM Transactions on Graphics (TOG), volume 28, pp. 60. ACM, 2009.
Wright, Stephen J and Nocedal, Jorge. Numerical optimization, volume 2. Springer New York, 1999.
Kakade, Sham and Langford, John. Approximately optimal approximate reinforcement learning. In ICML, volume 2, pp. 267–274, 2002.
1502.05477#39
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
39
The final two columns (10-11) give further analysis of the AM+NG+NL MemNN method. The second-to-last column (10) shows the minimum number of training examples required to achieve ≥ 95% accuracy, or FAIL if this is not achieved with 1000 examples. This is important as it is not only desirable to perform well on a task, but also to do so using the fewest examples (to generalize well, quickly). Most succeeding tasks require 100-500 examples. Task 8 requires 5000 examples and 7 requires 10000, hence they are labeled as FAIL. The latter task can presumably be solved by adding all the times an object is picked up, and subtracting the times it is dropped (a toy sketch of this counting strategy follows below), which seems possible for a MemNN, but it does not do this perfectly. Two tasks, positional reasoning (17) and path finding (19), cannot be solved even with 10000 examples; it seems those (and indeed more advanced forms of induction and deduction, which we plan to build) require a general search algorithm to be built into the inference procedure, which MemNN (and the other approaches tried) are lacking.
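The “add picks, subtract drops” strategy mentioned above can be written down in a few lines. The sketch below is purely illustrative (the story tuples and names are made up, not taken from the dataset), and it is hand-coded rather than learned, which is exactly what these tasks are meant to move beyond.

```python
# Toy illustration (mine, not from the paper) of the hand-designed strategy the
# text describes for the counting task: add one when an object is picked up,
# subtract one when it is dropped.
from collections import defaultdict

story = [
    ("Daniel", "picked up", "football"),
    ("Daniel", "picked up", "milk"),
    ("Daniel", "dropped", "milk"),
]

carried = defaultdict(int)
for person, action, _obj in story:
    carried[person] += 1 if action == "picked up" else -1

print(carried["Daniel"])   # 2 picks - 1 drop = 1 object currently carried
```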
1502.05698#39
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
40
Kakade, Sham and Langford, John. Approximately optimal approximate reinforcement learning. In ICML, volume 2, pp. 267–274, 2002.
Lagoudakis, Michail G and Parr, Ronald. Reinforcement learning as classification: Leveraging modern classifiers. In ICML, volume 3, pp. 424–431, 2003.
Levin, D. A., Peres, Y., and Wilmer, E. L. Markov chains and mixing times. American Mathematical Society, 2009.
Levine, Sergey and Abbeel, Pieter. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071–1079, 2014.
Martens, J. and Sutskever, I. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pp. 479–535. Springer, 2012.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Nemirovski, Arkadi. Efficient methods in convex programming. 2005.
1502.05477#40
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
40
The last column shows the performance of AM+NG+NL MemNNs when training on all the tasks jointly, rather than just on a single one. The performance is generally encouragingly similar, showing such a model can learn many aspects of text understanding and reasoning simultaneously. The main issues are that these models still fail on several of the tasks, and use a far stronger form of supervision (using supporting facts) than is typically realistic. # 6 DISCUSSION A prerequisite set We developed a set of tasks that we believe are a prerequisite to full language understanding and reasoning. While any learner that can solve these tasks is not necessarily close to full reasoning, if a learner fails on any of our tasks then there are likely real-world tasks that it will fail on too (i.e., real-world tasks that require the same kind of reasoning). Even if the situations and the language of the tasks are artificial, we believe that the mechanisms required to learn how to solve them are part of the key towards text understanding and reasoning.
1502.05698#40
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
41
Nemirovski, Arkadi. Efficient methods in convex programming. 2005.
Ng, A. Y. and Jordan, M. PEGASUS: A policy search method for large MDPs and POMDPs. In Uncertainty in Artificial Intelligence (UAI), 2000.
Owen, Art B. Monte Carlo theory, methods and examples. 2013.
Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.

# A Proof of Policy Improvement Bound

This proof (of Theorem 1) uses techniques from the proof of Theorem 4.1 in (Kakade & Langford, 2002), adapting them to the more general setting considered in this paper. An informal overview is as follows. Our proof relies on the notion of coupling, where we jointly define the policies π and ˜π so that they choose the same action with high probability (namely 1 − α). The surrogate loss L_π(˜π) accounts for the advantage of ˜π the first time that it disagrees with π, but not subsequent disagreements. Hence, the error in L_π is due to two or more disagreements between π and ˜π, and we get an O(α²) correction term, where α is the probability of disagreement.
1502.05477#41
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
41
A flexible framework This set of tasks is not a definitive set. The purpose of a simulation-based approach is to provide flexibility and control over the tasks’ construction. We grounded the tasks in language because it is then easier to understand the usefulness of the tasks and to interpret their results. However, our primary goal is to find models able to learn to detect and combine patterns in symbolic sequences. One might even want to decrease the intrinsic difficulty by removing any lexical variability and ambiguity and reason only over bare symbols, stripped of their linguistic meaning. One could also decorrelate the long-term memory from the reasoning capabilities of systems by, for instance, arranging the supporting facts closer to the questions. Taking the opposing view, one could instead want to transform the tasks into more realistic stories using annotators or more complex grammars. The set of 20 tasks presented here is a subset of what can be achieved with a simulation. We chose them because they offer a variety of skills that we would like a text reasoning model to have, but we hope researchers from the community will develop more tasks of varying complexity in order to develop and analyze models that try to solve them. Transfer learning across tasks is also a very important goal, beyond the scope of this paper. We have thus made the simulator and code for the tasks publicly available for those purposes.
1502.05698#41
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
42
We start out with a lemma from Kakade & Langford (2002) that shows that the difference in policy performance η(˜π) − η(π) can be decomposed as a sum of per-timestep advantages.

Lemma 1. Given two policies π, ˜π,

η(˜π) = η(π) + E_{τ∼˜π} [ Σ_{t=0}^∞ γ^t A_π(s_t, a_t) ]    (19)

This expectation is taken over trajectories τ := (s_0, a_0, s_1, a_1, . . .), and the notation E_{τ∼˜π} [. . .] indicates that actions are sampled from ˜π to generate τ.

Proof. First note that A_π(s, a) = E_{s'∼P(s'|s,a)} [r(s) + γ V_π(s') − V_π(s)]. Therefore,

E_{τ∼˜π} [ Σ_{t=0}^∞ γ^t A_π(s_t, a_t) ]    (20)
= E_{τ∼˜π} [ Σ_{t=0}^∞ γ^t (r(s_t) + γ V_π(s_{t+1}) − V_π(s_t)) ]    (21)
= E_{τ∼˜π} [ −V_π(s_0) + Σ_{t=0}^∞ γ^t r(s_t) ]    (22)
= −E_{s_0} [V_π(s_0)] + E_{τ∼˜π} [ Σ_{t=0}^∞ γ^t r(s_t) ]    (23)
= −η(π) + η(˜π)    (24)

Rearranging, the result follows.
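Lemma 1 can be sanity-checked numerically on a small finite MDP, where η, V_π and A_π all have closed forms. The sketch below is illustrative only (the state/action counts, the state-only reward, and all variable names are assumptions of mine, not from the paper); it verifies that η(˜π) − η(π) equals the discounted advantage of ˜π accumulated under its own state distribution.

```python
# Numerical sanity check of Lemma 1 (Equation 19) on a small random MDP.
# Illustrative sketch, not code from the paper; sizes and names are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9

P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)   # transition kernel P(s'|s,a)
r = rng.random(nS)                                                # state-dependent reward r(s)
rho0 = np.full(nS, 1.0 / nS)                                      # start-state distribution

def random_policy():
    pi = rng.random((nS, nA))
    return pi / pi.sum(axis=1, keepdims=True)

pi, pi_new = random_policy(), random_policy()                     # pi plays the role of π, pi_new of ˜π

def state_kernel(policy):                                         # P_policy(s'|s) = Σ_a policy(a|s) P(s'|s,a)
    return np.einsum('sa,sat->st', policy, P)

def value(policy):                                                # V = (I - γ P_policy)^{-1} r
    return np.linalg.solve(np.eye(nS) - gamma * state_kernel(policy), r)

def eta(policy):                                                  # η(policy) = ρ0ᵀ V_policy
    return rho0 @ value(policy)

V = value(pi)
Q = r[:, None] + gamma * np.einsum('sat,t->sa', P, V)             # Q_π(s,a)
A = Q - V[:, None]                                                # A_π(s,a)

# Discounted (unnormalized) state visitation under ˜π: ρ_˜π = ρ0ᵀ (I - γ P_˜π)^{-1}
rho_new = rho0 @ np.linalg.inv(np.eye(nS) - gamma * state_kernel(pi_new))

lhs = eta(pi_new) - eta(pi)
rhs = np.sum(rho_new[:, None] * pi_new * A)                       # E_˜π[Σ_t γ^t A_π(s_t, a_t)]
assert np.isclose(lhs, rhs), (lhs, rhs)
print(lhs, rhs)
```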
1502.05477#42
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
42
Testing learning methods Our tasks are designed as a test-bed for learning methods: we provide training and test sets because we intend to evaluate the capability of models to discover how to reason from patterns hidden within them. It could be tempting to hand-code solutions for them or to use existing large-scale QA systems like Cyc (Curtis et al., 2005). They might succeed at solving them, even if our structured SVM results (a cascaded NLP system with hand-built features) show that this is not straightforward; however this is not the tasks’ purpose since those approaches would not be learning to solve them. Our experiments show that some existing machine learning methods are successful on some of the tasks, in particular Memory Networks, for which we introduced some useful extensions (in Sec. A). However, those models still fail on several of the tasks, and use a far stronger form of supervision (using supporting facts) than is typically realistic.
1502.05698#42
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
43
= −η(π) + η(˜π)    (24)

Rearranging, the result follows.

Define ¯A(s) to be the expected advantage of ˜π over π at state s:

¯A(s) = E_{a∼˜π(·|s)} [A_π(s, a)].    (25)

Now Lemma 1 can be written as follows:

η(˜π) = η(π) + E_{τ∼˜π} [ Σ_{t=0}^∞ γ^t ¯A(s_t) ]    (26)

Note that L_π can be written as

L_π(˜π) = η(π) + E_{τ∼π} [ Σ_{t=0}^∞ γ^t ¯A(s_t) ]    (27)

The difference in these equations is whether the states are sampled using π or ˜π. To bound the difference between η(˜π) and L_π(˜π), we will bound the difference arising from each timestep. To do this, we first need to introduce a measure of how much π and ˜π agree. Specifically, we’ll couple the policies, so that they define a joint distribution over pairs of actions.

Definition 1. (π, ˜π) is an α-coupled policy pair if it defines a joint distribution (a, ˜a) | s, such that P(a ≠ ˜a | s) ≤ α for all s. π and ˜π will denote the marginal distributions of a and ˜a, respectively.
1502.05477#43
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
43
These datasets are not yet solved. Future research should aim to minimize the amount of required supervision, as well as the number of training examples needed to solve a new task, to move closer to the task transfer capabilities of humans. That is, in the weakly supervised case with only 1000 training examples or less there is no known general (i.e. non-hand engineered) method that solves the tasks. Further, importantly, our hope is that a feedback loop of developing more challenging tasks, and then algorithms that can solve them, leads us to fruitful research directions. Note that these tasks are not a substitute for real data, but should complement them, especially when developing and analysing algorithms. There are many complementary real-world datasets, see for example Hermann et al. (2015); Bordes et al. (2015); Hill et al. (2015). That is, even if a method works well on our 20 tasks, it should be shown to be useful on real data as well.
1502.05698#43
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
44
Computationally, α-coupling means that if we randomly choose a seed for our random number generator, and then we sample from each of π and ˜π after setting that seed, the results will agree for at least fraction 1 − α of seeds.

Lemma 2. Given that π, ˜π are α-coupled policies, for all s,

|¯A(s)| ≤ 2α max_{s,a} |A_π(s, a)|    (28)

Proof.

¯A(s) = E_{˜a∼˜π} [A_π(s, ˜a)] = E_{(a,˜a)∼(π,˜π)} [A_π(s, ˜a) − A_π(s, a)]   since E_{a∼π} [A_π(s, a)] = 0    (29)
= P(a ≠ ˜a | s) E_{(a,˜a)∼(π,˜π) | a≠˜a} [A_π(s, ˜a) − A_π(s, a)]    (30)
|¯A(s)| ≤ α · 2 max_{s,a} |A_π(s, a)|    (31)

Lemma 3. Let (π, ˜π) be an α-coupled policy pair. Then

| E_{s_t∼˜π} [¯A(s_t)] − E_{s_t∼π} [¯A(s_t)] | ≤ 2(1 − (1 − α)^t) max_s |¯A(s)| ≤ 4α(1 − (1 − α)^t) max_{s,a} |A_π(s, a)|    (32)
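The coupling used in Lemmas 2 and 3 can be made concrete with the standard maximal-coupling construction, under which the two action distributions disagree with probability exactly equal to their total variation distance. The sketch below is a stand-alone illustration with made-up toy distributions p and q for a single state; it is not code from the paper.

```python
# Illustrative maximal coupling of two categorical action distributions:
# the sampled actions agree except with probability D_TV(p, q).
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])      # stands in for π(·|s) at some fixed state s (assumed values)
q = np.array([0.4, 0.3, 0.3])      # stands in for ˜π(·|s) at the same state (assumed values)

tv = 0.5 * np.abs(p - q).sum()     # total variation distance

overlap = np.minimum(p, q)         # probability mass on which the two policies can agree
agree_prob = overlap.sum()         # = 1 - tv
res_p = (p - overlap) / max(1.0 - agree_prob, 1e-12)   # residual of p (zero where q >= p)
res_q = (q - overlap) / max(1.0 - agree_prob, 1e-12)   # residual of q (disjoint support from res_p)

def coupled_sample():
    if rng.random() < agree_prob:                      # agree: draw one action from the overlap
        a = rng.choice(len(p), p=overlap / agree_prob)
        return a, a
    # disagree: draw independently from the residuals, whose supports are disjoint
    return rng.choice(len(p), p=res_p), rng.choice(len(q), p=res_q)

samples = [coupled_sample() for _ in range(100_000)]
disagree = np.mean([a != b for a, b in samples])
print(f"TV = {tv:.3f}, empirical P(a != a~) = {disagree:.3f}")    # approximately equal
```

By construction the marginals of the coupled pair are exactly p and q, so this realizes an α-coupled pair with α = D_TV, which is the correspondence invoked again at the end of the proof.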
1502.05477#44
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
44
Impact Since being online, the bAbI tasks have already directly influenced the development of several promising new algorithms, including weakly supervised end-to-end Memory Networks (MemN2N) of Sukhbaatar et al. (2015), Dynamic Memory Networks of Kumar et al. (2015), and the Neural Reasoner (Peng et al., 2015). MemN2N has since been shown to perform well on some real-world tasks (Hill et al., 2015).

# REFERENCES

Bache, K. and Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
Berant, Jonathan, Chou, Andrew, Frostig, Roy, and Liang, Percy. Semantic parsing on Freebase from question-answer pairs. In EMNLP, pp. 1533–1544, 2013.
Berant, Jonathan, Srikumar, Vivek, Chen, Pei-Chun, Huang, Brad, Manning, Christopher D, Vander Linden, Abby, Harding, Brittany, and Clark, Peter. Modeling biological processes for reading comprehension. In Proc. EMNLP, 2014.
1502.05698#44
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
45
Proof. Given the coupled policy pair (π, ˜π), we can also obtain a coupling over the trajectory distributions produced by π and ˜π, respectively. Namely, we have pairs of trajectories τ, ˜τ, where τ is obtained by taking actions from π, and ˜τ is obtained by taking actions from ˜π, where the same random seed is used to generate both trajectories. We will consider the advantage of ˜π over π at timestep t, and decompose this expectation based on whether π agrees with ˜π at all timesteps i < t.

Let n_t denote the number of times that a_i ≠ ˜a_i for i < t, i.e., the number of times that π and ˜π disagree before timestep t.

E_{s_t∼˜π} [¯A(s_t)] = P(n_t = 0) E_{s_t∼˜π | n_t=0} [¯A(s_t)] + P(n_t > 0) E_{s_t∼˜π | n_t>0} [¯A(s_t)]    (33)

The expectation decomposes similarly when actions are sampled using π:

E_{s_t∼π} [¯A(s_t)] = P(n_t = 0) E_{s_t∼π | n_t=0} [¯A(s_t)] + P(n_t > 0) E_{s_t∼π | n_t>0} [¯A(s_t)]    (34)

Note that the n_t = 0 terms are equal:
1502.05477#45
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
45
Bordes, Antoine, Usunier, Nicolas, Collobert, Ronan, and Weston, Jason. Towards understanding situated natural language. In AISTATS, 2010. Bordes, Antoine, Usunier, Nicolas, Chopra, Sumit, and Weston, Jason. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015. Chen, David L and Mooney, Raymond J. Learning to interpret natural language navigation instruc- tions from observations. San Francisco, CA, pp. 859–865, 2011. Collobert, Ronan, Weston, Jason, Bottou, L´eon, Karlen, Michael, Kavukcuoglu, Koray, and Kuksa, Pavel. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537, 2011. Curtis, Jon, Matthews, Gavin, and Baxter, David. On the effective use of cyc in a question answering system. In IJCAI Workshop on Knowledge and Reasoning for Answering Questions, pp. 61–70, 2005. Fader, Anthony, Zettlemoyer, Luke, and Etzioni, Oren. Paraphrase-driven learning for open question answering. In ACL, pp. 1608–1618, 2013.
1502.05698#45
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
46
Note that the n_t = 0 terms are equal:

E_{s_t∼˜π | n_t=0} [¯A(s_t)] = E_{s_t∼π | n_t=0} [¯A(s_t)],    (35)

because n_t = 0 indicates that π and ˜π agreed on all timesteps less than t. Subtracting Equations (33) and (34), we get

E_{s_t∼˜π} [¯A(s_t)] − E_{s_t∼π} [¯A(s_t)] = P(n_t > 0) ( E_{s_t∼˜π | n_t>0} [¯A(s_t)] − E_{s_t∼π | n_t>0} [¯A(s_t)] )    (36)

By definition of α, P(π, ˜π agree at timestep i) ≥ 1 − α, so P(n_t = 0) ≥ (1 − α)^t, and

P(n_t > 0) ≤ 1 − (1 − α)^t    (37)

Next, note that

| E_{s_t∼˜π | n_t>0} [¯A(s_t)] − E_{s_t∼π | n_t>0} [¯A(s_t)] | ≤ | E_{s_t∼˜π | n_t>0} [¯A(s_t)] | + | E_{s_t∼π | n_t>0} [¯A(s_t)] |    (38)
≤ 4α max_{s,a} |A_π(s, a)|    (39)

Where the second inequality follows from Lemma 2.
1502.05477#46
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
46
Fader, Anthony, Zettlemoyer, Luke, and Etzioni, Oren. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1156–1165. ACM, 2014.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Halevy, Alon, Norvig, Peter, and Pereira, Fernando. The unreasonable effectiveness of data. Intelligent Systems, IEEE, 24(2):8–12, 2009.
Hermann, Karl Moritz, Kočiský, Tomáš, Grefenstette, Edward, Espeholt, Lasse, Kay, Will, Suleyman, Mustafa, and Blunsom, Phil. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), 2015. URL http://arxiv.org/abs/1506.03340.
1502.05698#46
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
47
Where the second inequality follows from Lemma 2. Plugging Equation (37) and Equation (39) into Equation (36), we get

| E_{s_t∼˜π} [¯A(s_t)] − E_{s_t∼π} [¯A(s_t)] | ≤ 4α(1 − (1 − α)^t) max_{s,a} |A_π(s, a)|    (40)

The preceding lemma bounds the difference in expected advantage at each timestep t. We can sum over time to bound the difference between η(˜π) and L_π(˜π). Subtracting Equation (26) and Equation (27), and defining ε = max_{s,a} |A_π(s, a)|,

|η(˜π) − L_π(˜π)| ≤ Σ_{t=0}^∞ γ^t | E_{τ∼˜π} [¯A(s_t)] − E_{τ∼π} [¯A(s_t)] |    (41)
≤ Σ_{t=0}^∞ γ^t · 4εα(1 − (1 − α)^t)    (42)
= 4εα ( 1/(1 − γ) − 1/(1 − γ(1 − α)) )    (43)
= 4α²γε / ((1 − γ)(1 − γ(1 − α)))    (44)
≤ 4α²γε / (1 − γ)²    (45)

Last, to replace α by the total variation divergence, we need to use the correspondence between TV divergence and coupled random variables:
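The algebra in Equations (42)–(45) reduces to two geometric series; a quick numerical check, with arbitrary illustrative values of γ, α and ε (chosen here, not taken from the paper), is sketched below.

```python
# Numerical check of the geometric-series steps (42)-(45).
# gamma, alpha, eps are arbitrary illustrative values.
import numpy as np

gamma, alpha, eps = 0.95, 0.05, 1.7
T = 5000                                    # truncation horizon for the partial sum
t = np.arange(T)

series = np.sum(gamma**t * 4 * eps * alpha * (1 - (1 - alpha)**t))           # Σ_t γ^t 4εα(1-(1-α)^t)
closed_43 = 4 * eps * alpha * (1 / (1 - gamma) - 1 / (1 - gamma * (1 - alpha)))
closed_44 = 4 * alpha**2 * gamma * eps / ((1 - gamma) * (1 - gamma * (1 - alpha)))
bound_45 = 4 * alpha**2 * gamma * eps / (1 - gamma)**2

assert np.isclose(series, closed_43) and np.isclose(closed_43, closed_44)
assert closed_44 <= bound_45
print(series, closed_43, closed_44, bound_45)
```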
1502.05477#47
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
47
Hill, Felix, Bordes, Antoine, Chopra, Sumit, and Weston, Jason. The Goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Kumar, Ankit, Irsoy, Ozan, Su, Jonathan, Bradbury, James, English, Robert, Pierce, Brian, Ondruska, Peter, Gulrajani, Ishaan, and Socher, Richard. Ask me anything: Dynamic memory networks for natural language processing. http://arxiv.org/abs/1506.07285, 2015.
Levesque, Hector J, Davis, Ernest, and Morgenstern, Leora. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 2011.
Liang, Percy. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408, 2013.
1502.05698#47
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
48
Last, to replace α by the total variation divergence, we need to use the correspondence between TV divergence and coupled random variables:

Suppose p_X and p_Y are distributions with D_TV(p_X ∥ p_Y) = α. Then there exists a joint distribution (X, Y) whose marginals are p_X, p_Y, for which X = Y with probability 1 − α. See (Levin et al., 2009), Proposition 4.7.

It follows that if we have two policies π and ˜π such that max_s D_TV(π(·|s) ∥ ˜π(·|s)) ≤ α, then we can define an α-coupled policy pair (π, ˜π) with appropriate marginals. Taking α = max_s D_TV(π(·|s) ∥ ˜π(·|s)) in Equation (45), Theorem 1 follows.

# B Perturbation Theory Proof of Policy Improvement Bound

We also provide an alternative proof of Theorem 1 using perturbation theory.
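Putting the pieces together, Theorem 1 with α = max_s D_TV(π(·|s) ∥ ˜π(·|s)) can itself be checked numerically on a small random MDP, since η, L_π, A_π and the TV distances all have closed forms there. The stand-alone sketch below is illustrative only: the MDP size, the state-only reward, and the way ˜π is perturbed from π are assumptions of mine, not details from the paper.

```python
# Numerical check of the policy improvement bound with alpha = max_s D_TV(π(·|s), ˜π(·|s)).
# Illustrative sketch only; sizes, reward structure and policies are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
nS, nA, gamma = 4, 3, 0.8
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)
r = rng.random(nS)
rho0 = np.full(nS, 1.0 / nS)

def normalize(x): return x / x.sum(axis=1, keepdims=True)
pi = normalize(rng.random((nS, nA)))
pi_new = normalize(pi + 0.3 * rng.random((nS, nA)))          # a nearby policy playing the role of ˜π

def state_kernel(p): return np.einsum('sa,sat->st', p, P)
def value(p): return np.linalg.solve(np.eye(nS) - gamma * state_kernel(p), r)
def eta(p): return rho0 @ value(p)

V = value(pi)
A = r[:, None] + gamma * np.einsum('sat,t->sa', P, V) - V[:, None]     # A_π(s,a)
rho_pi = rho0 @ np.linalg.inv(np.eye(nS) - gamma * state_kernel(pi))   # discounted visitation under π

L = eta(pi) + np.sum(rho_pi[:, None] * pi_new * A)           # surrogate L_π(˜π)
alpha = 0.5 * np.abs(pi_new - pi).sum(axis=1).max()          # max_s D_TV
eps = np.abs(A).max()                                        # ε = max_{s,a} |A_π(s,a)|
penalty = 4 * eps * gamma * alpha**2 / (1 - gamma)**2

assert eta(pi_new) >= L - penalty                            # the improvement bound holds
print(eta(pi_new), L, penalty)
```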
1502.05477#48
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
48
Liang, Percy. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408, 2013.
Liang, Percy, Jordan, Michael I, and Klein, Dan. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446, 2013.
Minsky, Marvin and Papert, Seymour. Perceptron: an introduction to computational geometry. The MIT Press, Cambridge, expanded edition, 19:88, 1969.
Montfort, Nick. Twisty Little Passages: an approach to interactive fiction. MIT Press, 2005.
Müller, K-R, Smola, Alex J, Rätsch, Gunnar, Schölkopf, Bernhard, Kohlmorgen, Jens, and Vapnik, Vladimir. Predicting time series with support vector machines. In Artificial Neural Networks – ICANN’97, pp. 999–1004. Springer, 1997.
Ng, Andrew Y, Jordan, Michael I, Weiss, Yair, et al. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
1502.05698#48
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
49
# B Perturbation Theory Proof of Policy Improvement Bound

We also provide an alternative proof of Theorem 1 using perturbation theory.

Proof. Let G = (1 + γP_π + (γP_π)² + . . .) = (1 − γP_π)^{−1}, and similarly let ˜G = (1 + γP_˜π + (γP_˜π)² + . . .) = (1 − γP_˜π)^{−1}. We will use the convention that ρ (a density on state space) is a vector and r (a reward function on state space) is a dual vector (i.e., a linear functional on vectors), thus rρ is a scalar meaning the expected reward under density ρ. Note that η(π) = rGρ0, and η(˜π) = r ˜Gρ0. Let ∆ = P_˜π − P_π. We want to bound η(˜π) − η(π) = r( ˜G − G)ρ0. We start with some standard perturbation theory manipulations.

G^{−1} − ˜G^{−1} = (1 − γP_π) − (1 − γP_˜π) = γ∆.    (46)

Left multiply by G and right multiply by ˜G.
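The resolvent identities used in this proof are easy to verify numerically. The sketch below is illustrative: the chain size is arbitrary, and the row-stochastic matrix convention (with a transpose so that G acts on densities, Gρ0 = ρ_π) is my own choice, not the paper's notation. It checks that G matches the Neumann series 1 + γP_π + (γP_π)² + … and that rGρ0 recovers η(π).

```python
# Illustrative check of G = (1 - γ P_π)^{-1} = Σ_k (γ P_π)^k and η(π) = r G ρ0
# on a tiny random Markov chain (not code from the paper).
import numpy as np

rng = np.random.default_rng(3)
nS, gamma = 4, 0.9
P_pi = rng.random((nS, nS)); P_pi /= P_pi.sum(axis=1, keepdims=True)  # rows give P_π(s'|s)
r = rng.random(nS)                                                    # reward as a dual (row) vector
rho0 = np.full(nS, 1.0 / nS)                                          # start-state density

# With a row-stochastic P_pi, the operator acting on densities is its transpose.
G = np.linalg.inv(np.eye(nS) - gamma * P_pi.T)                        # so that G @ rho0 = ρ_π
G_series = sum(np.linalg.matrix_power(gamma * P_pi.T, k) for k in range(500))
assert np.allclose(G, G_series, atol=1e-6)                            # Neumann series 1 + γP + (γP)^2 + ...

eta_resolvent = r @ G @ rho0                                          # η(π) = r G ρ0
eta_value = rho0 @ np.linalg.solve(np.eye(nS) - gamma * P_pi, r)      # ρ0ᵀ V_π, the usual definition
assert np.isclose(eta_resolvent, eta_value)
print(eta_resolvent, eta_value)
```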
1502.05477#49
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
49
Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.
Raghunathan, Karthik, Lee, Heeyoung, Rangarajan, Sudarshan, Chambers, Nathanael, Surdeanu, Mihai, Jurafsky, Dan, and Manning, Christopher. A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 492–501. Association for Computational Linguistics, 2010.
Richardson, Matthew, Burges, Christopher JC, and Renshaw, Erin. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, pp. 193–203, 2013.
Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
Soon, Wee Meng, Ng, Hwee Tou, and Lim, Daniel Chung Yong. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544, 2001.
1502.05698#49
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
50
Left multiply by G and right multiply by ˜G.

˜G − G = γG∆ ˜G
˜G = G + γG∆ ˜G    (47)

Substituting the right-hand side into ˜G gives

˜G = G + γG∆G + γ²G∆G∆ ˜G    (48)

So we have

η(˜π) − η(π) = r( ˜G − G)ρ0 = γrG∆Gρ0 + γ²rG∆G∆ ˜Gρ0    (49)

Let us first consider the leading term γrG∆Gρ0. Note that rG = v, i.e., the infinite-horizon state-value function. Also note that Gρ0 = ρ_π. Thus we can write γrG∆Gρ0 = γv∆ρ_π. We will show that this expression equals the expected advantage L_π(˜π) − L_π(π).
1502.05477#50
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
50
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. Proceedings of NIPS, 2015.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc VV. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
UzZaman, Naushad, Llorens, Hector, Allen, James, Derczynski, Leon, Verhagen, Marc, and Pustejovsky, James. TempEval-3: Evaluating events, time expressions, and temporal relations. arXiv preprint arXiv:1206.5333, 2012.
Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. CoRR, abs/1410.3916, 2014.
Winograd, Terry. Understanding natural language. Cognitive Psychology, 3(1):1–191, 1972.
Yao, Xuchen, Berant, Jonathan, and Van Durme, Benjamin. Freebase QA: Information extraction or semantic parsing? ACL 2014, pp. 82, 2014.
1502.05698#50
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
51
advantage Lπ(˜π) − Lπ(π):
Lπ(˜π) − Lπ(π) = Σ_s ρπ(s) Σ_a (˜π(a|s) − π(a|s)) Aπ(s, a)
= Σ_s ρπ(s) Σ_a (˜π(a|s) − π(a|s)) [r(s) + γ Σ_{s′} p(s′|s, a) v(s′) − v(s)]
= Σ_s ρπ(s) Σ_{s′,a} (˜π(a|s) − π(a|s)) p(s′|s, a) γv(s′)
= Σ_s ρπ(s) Σ_{s′} (p˜π(s′|s) − pπ(s′|s)) γv(s′)
= γv∆ρπ (50)
Next let us bound the O(∆²) term γ²rG∆G∆ ˜Gρ. First we consider the product γrG∆ = γv∆. Consider the component s of this dual vector.
|(γv∆)_s| = |Σ_a (˜π(a|s) − π(a|s)) Qπ(s, a)|
= |Σ_a (˜π(a|s) − π(a|s)) Aπ(s, a)|
≤ Σ_a |˜π(a|s) − π(a|s)| · max_a |Aπ(s, a)|
≤ 2αε (51)
1502.05477#51
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
51
Yao, Xuchen, Berant, Jonathan, and Van Durme, Benjamin. Freebase qa: Information extraction or semantic parsing? ACL 2014, pp. 82, 2014. Yu, Mo, Gormley, Matthew R, and Dredze, Mark. Factor-based compositional embedding models. NIPS 2014 workshop on Learning Semantics, 2014. Zhu, Xiaojin, Ghahramani, Zoubin, Lafferty, John, et al. Semi-supervised learning using gaussian fields and harmonic functions. In ICML, volume 3, pp. 912–919, 2003. # A EXTENSIONS TO MEMORY NETWORKS Memory Networks Weston et al. (2014) are a promising class of models, shown to perform well at QA, that we can apply to our tasks. They consist of a memory m (an array of objects indexed by mi) and four potentially learnable components I, G, O and R that are executed given an input: I: (input feature map) – convert input sentence x to an internal feature representation I(x). G: (generalization) – update the current memory state m given the new input: mi = G(mi, I(x), m), ∀i.
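A minimal, untrained sketch of the I/G/O/R pipeline described above, using bag-of-words features and a dot-product scorer in place of the learned components; the class and method names are illustrative, not the authors' implementation.

```python
import numpy as np

class TinyMemNN:
    """Toy illustration of the I, G, O and R components (untrained, bag of words)."""

    def __init__(self, vocab, dim=20, seed=0):
        rng = np.random.default_rng(seed)
        self.vocab = {w: i for i, w in enumerate(vocab)}
        self.U = rng.normal(size=(dim, len(vocab)))   # embedding matrix
        self.memory = []                              # m: list of stored sentence features

    def I(self, sentence):                  # input feature map: bag-of-words counts
        x = np.zeros(len(self.vocab))
        for w in sentence.lower().split():
            if w in self.vocab:
                x[self.vocab[w]] += 1.0
        return x

    def G(self, features):                  # generalization: write into the next free slot
        self.memory.append(features)

    def score(self, a, b):                  # s(a, b) = (U a) . (U b)
        return float((self.U @ a) @ (self.U @ b))

    def O(self, x):                         # output: best supporting memory (k = 1)
        return max(range(len(self.memory)),
                   key=lambda i: self.score(x, self.memory[i]))

    def R(self, x, o, candidate_words):     # response: rank single-word answers
        return max(candidate_words,
                   key=lambda w: self.score(x + self.memory[o], self.I(w)))
```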
1502.05698#51
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
52
where the last line used the definition of the total-variation divergence, and the definition of ε = max_{s,a} |Aπ(s, a)|. We bound the other portion G∆ ˜Gρ using the ℓ1 operator norm
‖A‖1 = sup_ρ ‖Aρ‖1 / ‖ρ‖1 (52)
where we have that ‖G‖1 = ‖ ˜G‖1 = 1/(1 − γ) and ‖∆‖1 = 2α. That gives
‖G∆ ˜Gρ‖1 ≤ ‖G‖1 ‖∆‖1 ‖ ˜G‖1 ‖ρ‖1 = (1/(1 − γ)) · 2α · (1/(1 − γ)) · 1 (53)
So we have that
γ²|rG∆G∆ ˜Gρ| ≤ γ ‖γrG∆‖∞ ‖G∆ ˜Gρ‖1 = γ ‖γv∆‖∞ ‖G∆ ˜Gρ‖1 ≤ γ · 2αε · 2α/(1 − γ)² = 4α²γε/(1 − γ)² (54)
# C Efficiently Solving the Trust-Region Constrained Optimization Problem
This section describes how to efficiently approximately solve the following constrained optimization problem, which we must solve at each iteration of TRPO: maximize_θ L(θ) subject to DKL(θold, θ) ≤ δ. (55)
1502.05477#52
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
52
G(mi, I(x), m), ∀i. O: (output feature map) – compute output o given the new input and the memory: o = O(I(x), m). R: (response) – finally, decode output features o to give the final textual response to the user: r = R(o). Potentially, component I can make use of standard pre-processing, e.g., parsing and entity resolution, but the simplest form is to do no processing at all. The simplest form of G is to store the new incoming example in an empty memory slot, and leave the rest of the memory untouched. Thus, in Weston et al. (2014) the actual implementation used is exactly this simple form, where the bulk of the work is in the O and R components. The former is responsible for reading from memory and performing inference, e.g., calculating what are the relevant memories to answer a question, and the latter for producing the actual wording of the answer given O. The O module produces output features by finding k supporting memories given x. They use k = 2. For k = 1 the highest scoring supporting memory is retrieved with: o1 = O1(x, m) = arg max_{i=1,...,N} sO(x, mi) (1)
1502.05698#52
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05698
53
o1 = O1(x, m) = arg max_{i=1,...,N} sO(x, mi) (1) where sO is a function that scores the match between the pair of sentences x and mi. For the case k = 2 they then find a second supporting memory given the first found in the previous iteration: o2 = O2(x, m) = arg max_{i=1,...,N} sO([x, mo1], mi) (2) where the candidate supporting memory mi is now scored with respect to both the original input and the first supporting memory, where square brackets denote a list. The final output o is [x, mo1, mo2], which is input to the module R. Finally, R needs to produce a textual response r. While the authors also consider Recurrent Neural Networks (RNNs), their standard setup limits responses to be a single word (out of all the words seen by the model) by ranking them: r = R(q, w) = argmaxw∈W sR([x, mo1, mo2], w) (3) where W is the set of all words in the dictionary, and sR is a function that scores the match. The scoring functions sO and sR have the same form, that of an embedding model:
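A sketch of the greedy k = 2 retrieval in eqs. (1)-(2): the first hop scores memories against the question alone, the second against the question plus the first retrieved fact. The scorer here is a plain dot product standing in for sO; all names are illustrative.

```python
import numpy as np

def two_hop_retrieve(x, memories, s_o):
    """Greedy k = 2 supporting-fact selection, following eqs. (1)-(2)."""
    # First hop: memory best matching the question x alone.
    o1 = max(range(len(memories)), key=lambda i: s_o([x], memories[i]))
    # Second hop: memory best matching the question plus the first fact.
    o2 = max(range(len(memories)), key=lambda i: s_o([x, memories[o1]], memories[i]))
    return o1, o2

def dot_scorer(query_parts, m):
    """Stand-in scorer: sum of dot products between each query part and the memory."""
    return sum(float(np.dot(p, m)) for p in query_parts)

mems = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(two_hop_retrieve(np.array([1.0, 0.0]), mems, dot_scorer))
```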
1502.05698#53
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
54
The search direction is computed by approximately solving the equation Ax = g, where A is the Fisher information matrix, i.e., the quadratic approximation to the KL divergence constraint: DKL(θold, θ) ≈ (1/2)(θ − θold)ᵀA(θ − θold), where Aij = (∂²/∂θi∂θj) DKL(θold, θ). In large-scale problems, it is prohibitively costly (with respect to computation and memory) to form the full matrix A (or A−1). However, the conjugate gradient algorithm allows us to approximately solve the equation Ax = b without forming this full matrix, when we merely have access to a function that computes matrix-vector products y → Ay. Appendix C.1 describes the most efficient way to compute matrix-vector products with the Fisher information matrix. For additional exposition on the use of Hessian-vector products for optimizing neural network objectives, see (Martens & Sutskever, 2012) and (Pascanu & Bengio, 2013). Having computed the search direction s ≈ A−1g, we next need to compute the maximal step length β such that θ +
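A sketch of the conjugate gradient loop used to approximate s ≈ A⁻¹g while touching A only through matrix-vector products; fvp stands for any function computing y ↦ Ay (the explicit matrix at the bottom is just for the example).

```python
import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve A x = g given only the product y -> A y."""
    x = np.zeros_like(g)
    r = g.copy()          # residual g - A x (x = 0 initially)
    p = g.copy()          # search direction
    r_dot = r @ r
    for _ in range(iters):
        Ap = fvp(p)
        alpha = r_dot / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        r_dot_new = r @ r
        if r_dot_new < tol:
            break
        p = r + (r_dot_new / r_dot) * p
        r_dot = r_dot_new
    return x

# Example with a small explicit SPD matrix standing in for the Fisher matrix:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, 1.0])
s = conjugate_gradient(lambda y: A @ y, g)
```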
1502.05477#54
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
54
The scoring functions sO and sR have the same form, that of an embedding model: s(x, y) = Φx(x)⊤U ⊤U Φy(y). (4) where U is a n × D matrix where D is the number of features and n is the embedding dimension. The role of Φx and Φy is to map the original text to the D-dimensional feature space. They choose a bag of words representation, and D = 3|W | for sO, i.e., every word in the dictionary has three different representations: one for Φy(.) and two for Φx(.) depending on whether the words of the input arguments are from the actual input x or from the supporting memories so that they can be modeled differently.
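A sketch of the scoring function in eq. (4) with a simple bag-of-words Φ; the 3|W| feature split described above is omitted for brevity, and the vocabulary and matrix sizes are illustrative.

```python
import numpy as np

def bow(sentence, vocab):
    """Bag-of-words feature map Phi: D-dimensional count vector."""
    v = np.zeros(len(vocab))
    for w in sentence.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def embed_score(x, y, U, vocab):
    """s(x, y) = Phi_x(x)^T U^T U Phi_y(y)."""
    return float(bow(x, vocab) @ U.T @ U @ bow(y, vocab))

vocab = {w: i for i, w in enumerate("john mary went to the kitchen garden".split())}
U = np.random.default_rng(0).normal(size=(8, len(vocab)))   # n x D embedding matrix
print(embed_score("where is john", "john went to the kitchen", U, vocab))
```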
1502.05698#54
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
55
& Bengio, 2013). Having computed the search direction s ≈ A−1g, we next need to compute the maximal step length β such that θ + βs will satisfy the KL divergence constraint. To do this, let δ = DKL ≈ (1/2)β²sᵀAs. From this, we obtain β = √(2δ/(sᵀAs)), where δ is the desired KL divergence. The term sᵀAs can be computed through a single Hessian-vector product, and it is also an intermediate result produced by the conjugate gradient algorithm.
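The maximal step length follows directly from the quadratic KL model; a short sketch (sᵀAs would typically be reused from the conjugate gradient run rather than recomputed):

```python
import numpy as np

def max_step_length(s, fvp, delta):
    """beta = sqrt(2 delta / s^T A s), so that 0.5 * beta^2 * s^T A s = delta."""
    sAs = s @ fvp(s)
    return np.sqrt(2.0 * delta / sAs)
```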
1502.05477#55
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
55
They consider various extensions of their model, in particular modeling write time and modeling unseen words. Here we only discuss the former, which we also use. In order for the model to work on QA tasks over stories it needs to know in which order the sentences were uttered, which is not available in the model directly. They thus add extra write-time features to sO which take on the value 0 or 1 indicating which sentence is older than another being compared, and compare triples of pairs of sentences and the question itself. Training is carried out by stochastic gradient descent using supervision from both the question answer pairs and the supporting memories (to select o1 and o2). See Weston et al. (2014) for more details. A.1 SHORTCOMINGS OF THE EXISTING MEMNNS The Memory Networks models defined in (Weston et al., 2014) are one possible technique to try on our tasks; however there are several tasks which they are likely to fail on: • They model sentences with a bag of words so are likely to fail on tasks such as the 2-argument (task 4) and 3-argument (task 5) relation problems. • They perform only two max operations (k = 2) so they cannot handle questions involving more than two supporting facts such as tasks 3 and 7.
1502.05698#55
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
56
Last, we use a line search to ensure improvement of the surrogate objective and satisfaction of the KL divergence constraint, both of which are nonlinear in the parameter vector θ (and thus depart from the linear and quadratic approximations used to compute the step). We perform the line search on the objective Lθold(θ) − X[DKL(θold, θ) ≤ δ], where X[. . . ] equals zero when its argument is true and +∞ when it is false. Starting with the maximal value of the step length β computed in the previous paragraph, we shrink β exponentially until the objective improves. Without this line search, the algorithm occasionally computes large steps that cause a catastrophic degradation of performance. # C.1 Computing the Fisher-Vector Product Here we will describe how to compute the matrix-vector product between the averaged Fisher information matrix and arbitrary vectors. This matrix-vector product enables us to perform the conjugate gradient algorithm. Suppose that the parameterized policy maps from the input x to “distribution parameter” vector µθ(x), which parameterizes the distribution π(u|x). Now the KL divergence for a given input x can be written as follows: DKL(πθold(·|x) ∥ πθ(·|x)) = kl(µθ(x), µold) (56)
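A sketch of the backtracking line search described above: accept the first exponentially shrunken step that both improves the surrogate and satisfies the KL constraint; the surrogate and kl arguments are placeholders for the actual evaluations.

```python
def line_search(theta_old, full_step, surrogate, kl, delta,
                shrink=0.5, max_backtracks=10):
    """Shrink the step exponentially until L improves and KL(theta_old, theta) <= delta."""
    L_old = surrogate(theta_old)
    for i in range(max_backtracks):
        theta = theta_old + (shrink ** i) * full_step
        if surrogate(theta) > L_old and kl(theta_old, theta) <= delta:
            return theta
    return theta_old   # no acceptable step found; keep the old parameters
```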
1502.05477#56
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
56
• They perform only two max operations (k = 2) so they cannot handle questions involving more than two supporting facts such as tasks 3 and 7. • Unless an RNN is employed in the R module, they are unable to provide multiple answers in the standard setting using eq. (3). This is required for the list (8) and path finding (19) tasks. We therefore propose improvements to their model in the following section. A.2 IMPROVING MEMORY NETWORKS A.2.1 ADAPTIVE MEMORIES (AND RESPONSES) We consider a variable number of supporting facts that is automatically adapted dependent on the question being asked. To do this we consider scoring a special fact m∅. Computation of supporting memories then becomes: i = 1; oi = O(x, m); while oi ≠ m∅ do: i ← i + 1; oi = O([x, mo1 , . . . , moi−1 ], m); end while. That is, we keep predicting supporting facts i, conditioning at each step on the previously found facts, until m∅ is predicted, at which point we stop. m∅ has its own unique embedding vector, which is also learned. In practice we still impose a hard maximum number of loops in our experiments to avoid fail cases where the computation never stops (in our experiments we use a limit of 10).
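A sketch of the adaptive-memory loop above: keep selecting supporting facts until the special stop fact m∅ wins the scoring, with a hard cap of 10 hops; the scorer and the stop embedding are placeholders.

```python
def adaptive_hops(x, memories, score, stop_embedding, max_hops=10):
    """Select a variable number of supporting facts, stopping on the special fact."""
    selected = []
    candidates = list(memories) + [stop_embedding]   # the stop fact gets its own slot
    for _ in range(max_hops):
        query = [x] + [memories[i] for i in selected]
        best = max(range(len(candidates)), key=lambda i: score(query, candidates[i]))
        if best == len(memories):        # the stop fact was predicted
            break
        selected.append(best)
    return selected
```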
1502.05698#56
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05698
57
Multiple Answers We use a similar trick for the response module as well in order to output multiple words. That is, we add a special word w∅ to the dictionary and predict word wi on each iteration i conditional on the previous words, i.e., wi = R([x, mo1 , . . . , m|o|, w1, . . . , wi−1], w), until we predict w∅. A.2.2 NONLINEAR SENTENCE MODELING There are several ways of modeling sentences that go beyond a bag-of-words, and we explore three variants here. The simplest is a bag-of-N-grams; we consider N = 1, 2 and 3 in the bag. The main disadvantage of such a method is that the dictionary grows rapidly with N. We therefore consider an alternative neural network approach, which we call a multilinear map. Each word in a sentence is binned into one of Psz positions with p(i, l) = ⌈(iPsz)/l⌉ where i is the position of the word in a sentence of length l, and for each position we employ an n × n matrix Pp(i,l). We then model the matching score with:
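A sketch of the position-binned multilinear map described above; the word embedding is taken here as a column of U, and all shapes are illustrative rather than the authors' exact parameterization.

```python
import numpy as np

def position_bin(i, length, P_sz):
    """p(i, l) = ceil(i * P_sz / l), mapping 1-based word position i to a bin."""
    return int(np.ceil(i * P_sz / length))

def multilinear_embed(word_ids, U, P_mats):
    """E(x) = tanh(sum_i P_{p(i,l)} (U phi(x_i))) with one n x n matrix per bin."""
    l, P_sz = len(word_ids), len(P_mats)
    total = np.zeros(U.shape[0])
    for i, w in enumerate(word_ids, start=1):
        total += P_mats[position_bin(i, l, P_sz) - 1] @ U[:, w]
    return np.tanh(total)

rng = np.random.default_rng(0)
n, D, P_sz = 8, 50, 4
U = rng.normal(size=(n, D))
P_mats = [rng.normal(size=(n, n)) for _ in range(P_sz)]
e = multilinear_embed([3, 17, 42], U, P_mats)      # embed a 3-word sentence
```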
1502.05698#57
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
58
where the primes (’) indicate differentiation with respect to the first argument, and there is an implied summation over indices a,b. The second term vanishes, leaving just the first term. Let J := Peat) (the Jacobian), then the Fisher information matrix can be written in matrix form as J’ MJ, where M = kl!!,(j19(x), Hoa) is the Fisher information matrix of the distribution in terms of the mean parameter ju (as opposed to the parameter @). This has a simple form for most parameterized distributions of interest. The Fisher-vector product can now be written as a function y + J7M.Jy. Multiplication by J7 and J can be performed by most automatic differentiation and neural network packages (multiplication by J7 is the well-known backprop operation), and the operation for multiplication by MM can be derived for the distribution of interest. Note that this Fisher-vector product is straightforward to average over a set of datapoints, i.e., inputs x to pu.
1502.05477#58
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
58
s(q, d) = E(q) · E(d); E(x) = tanh( Σ_{i=1,...,l} Pp(i,l) Φx(xi)⊤U ) (5) whereby we apply a linear map for each word dependent on its position, followed by a tanh nonlinearity on the sum of mappings. Note that this is related to the model of (Yu et al., 2014) who consider tags rather than positions. While the results of this method are not shown in the main paper due to space restrictions, it performs similarly well to N-grams and may be useful in real-world cases where N-grams cause the dictionary to be too large. Comparing to Table 3, MemNN with adaptive memories (AM) + multilinear obtains a mean performance of 93, the same as MemNNs with AM+NG+NL (i.e., using N-grams instead). Finally, to assess the performance of nonlinear maps that do not model word position at all we also consider the following nonlinear embedding:
1502.05698#58
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
59
One could alternatively use a generic method for calculating Hessian-vector products using reverse mode automatic differentiation ((Wright & Nocedal, 1999), chapter 8), computing the Hessian of DKL with respect to θ. This method would be slightly less efficient as it does not exploit the fact that the second derivatives of µ(x) (i.e., the second term in Equation (57)) can be ignored, but may be substantially easier to implement. We have described a procedure for computing the Fisher-vector product y → Ay, where the Fisher information matrix is averaged over a set of inputs to the function µ. Computing the Fisher-vector product is typically about as expensive as computing the gradient of an objective that depends on µ(x) (Wright & Nocedal, 1999). Furthermore, we need to compute
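The paragraph above refers to reverse-mode automatic differentiation; as a dependency-free stand-in, a central finite difference of the gradient approximates the same Hessian-vector product without forming the Hessian (this is a different, cruder technique than the one described, shown only to illustrate the y ↦ Hy interface).

```python
import numpy as np

def hvp_finite_difference(grad_f, theta, v, eps=1e-5):
    """Approximate H v = d/d eps [ grad_f(theta + eps v) ] at eps = 0."""
    return (grad_f(theta + eps * v) - grad_f(theta - eps * v)) / (2.0 * eps)

# Example: f(theta) = 0.5 theta^T Q theta, so grad_f = Q theta and H v = Q v.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
grad_f = lambda th: Q @ th
v = np.array([1.0, -1.0])
print(hvp_finite_difference(grad_f, np.zeros(2), v))   # approximately Q v
```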
1502.05477#59
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
59
Finally, to assess the performance of nonlinear maps that do not model word position at all we also consider the following nonlinear embedding: E(x) = tanh(W tanh(Φx(x)⊤U )). (6) where W is an n × n matrix. This is similar to a classical two-layer neural network, but applied to both sides q and d of s(q, d). We also consider the straightforward combination of bag-of-N-grams followed by this nonlinearity. # B BASELINE USING EXTERNAL RESOURCES We also built a classical cascade NLP system baseline using a structured SVM, which incorporates coreference resolution and semantic role labeling preprocessing steps, which are themselves trained on large amounts of costly labeled data. We first run the Stanford coreference system (Raghunathan et al., 2010) on the stories and each mention is then replaced with the first mention of its entity class. Second, the SENNA semantic role labeling system (SRL) (Collobert et al., 2011) is run, and we collect the set of arguments for each verb. We then define a ranking task for finding the supporting facts (trained using strong supervision):
1502.05698#59
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
60
k of these Fisher-vector products per gradient, where k is the number of iterations of the conjugate gradient algorithm we perform. We found k = 10 to be quite effective, and using higher k did not result in faster policy improvement. Hence, a naïve implementation would spend more than 90% of the computational effort on these Fisher-vector products. However, we can greatly reduce this burden by subsampling the data for the computation of the Fisher-vector product. Since the Fisher information matrix merely acts as a metric, it can be computed on a subset of the data without severely degrading the quality of the final step. Hence, we can compute it on 10% of the data, and the total cost of Hessian-vector products will be about the same as computing the gradient. With this optimization, the computation of a natural gradient step A−1g does not incur a significant extra computational cost beyond computing the gradient g. # D Approximating Factored Policies with Neural Networks
1502.05477#60
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05477
61
# D Approximating Factored Policies with Neural Networks The policy, which is a conditional probability distribution πθ(a|s), can be parameterized with a neural network. This neural network maps (deterministically) from the state vector s to a vector µ, which specifies a distribution over the action space. Then we can compute the likelihood p(a|µ) and sample a ∼ p(a|µ). For our experiments with continuous state and action spaces, we used a Gaussian distribution, where the covariance matrix was diagonal and independent of the state. A neural network with several fully-connected (dense) layers maps from the input features to the mean of a Gaussian distribution. A separate set of parameters specifies the log standard deviation of each element. More concretely, the parameters include a set of weights and biases for the neural network computing the mean, {Wi, bi}_{i=1}^{L}, and a vector r (log standard deviation) with the same dimension as a. Then, the policy is defined by the normal distribution N
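A minimal numpy sketch of the Gaussian policy parameterization described above: a small fully connected network produces the mean, and a separate state-independent log-std vector r sets the diagonal covariance; layer sizes and the tanh nonlinearity are illustrative.

```python
import numpy as np

class DiagGaussianPolicy:
    """Mean from a small MLP; state-independent log standard deviations r."""

    def __init__(self, obs_dim, act_dim, hidden=30, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(hidden, obs_dim)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(act_dim, hidden)); self.b2 = np.zeros(act_dim)
        self.r = np.zeros(act_dim)          # log std, one entry per action dimension

    def mean(self, s):
        h = np.tanh(self.W1 @ s + self.b1)
        return self.W2 @ h + self.b2

    def sample(self, s, rng):
        mu, std = self.mean(s), np.exp(self.r)
        return mu + std * rng.normal(size=mu.shape)

    def log_prob(self, s, a):
        mu, std = self.mean(s), np.exp(self.r)
        return float(np.sum(-0.5 * ((a - mu) / std) ** 2 - self.r - 0.5 * np.log(2 * np.pi)))
```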
1502.05477#61
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
61
where given the question x we find at most three supporting facts with indices oi from the set of facts f in the story (we also consider selecting an “empty fact” for the case of less than three), and SO is a linear scoring function with parameters Θ. Computing the argmax requires doing exhaustive search, unlike e.g. the MemNN method which is greedy. For scalability, we thus prune the set of possible matches by requiring that facts share one common non-determiner word with each other match or with x. SO is constructed as a set of indicator features. For simplicity each of the features only looks at pairs of sentences, i.e. SO(x, fo1, fo2, fo3; Θ) = Θ ∗ (g(x, fo1 ), g(x, fo2), g(x, fo3 ), g(fo1, fo2), g(fo2, fo3), g(fo1, fo3)). The feature function g is made up of the following feature types, shown here for g(fo1, fo2): (1) Word pairs: One indicator variable for each pair of words in fo1 and fo2. (2) Pair distance: Indicator for the distance between the sentence, i.e.
1502.05698#61
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
62
For the experiments with discrete actions (Atari), we use a factored discrete action space, where each factor is parameterized as a categorical distribution. That is, the action consists of a tuple (a1, a2, . . . , aK) of integers ak ∈ {1, 2, . . . , Nk}, and each of these components is assumed to have a categorical distribution, which is specified by a vector µk = [p1, p2, . . . , pNk]. Hence, µ is defined to be the concatenation of the factors' parameters: µ = [µ1, µ2, . . . , µK] and has dimension dim µ = Σ_{k=1}^{K} Nk. The components of µ are computed by applying a neural network to the input s and then applying the softmax operator to each slice, yielding normalized probabilities for each factor. # E Experiment Parameters
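A sketch of the factored categorical policy: the network output is split into K slices, each slice is passed through a softmax, and one component is sampled per factor; the linear map standing in for the neural network and all sizes are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def factored_policy(s, W, b, factor_sizes, rng):
    """Split the network output into slices, softmax each, and sample one index per factor."""
    logits = W @ s + b                       # stand-in for the neural network output
    mu, action, start = [], [], 0
    for N_k in factor_sizes:
        p = softmax(logits[start:start + N_k])
        mu.append(p)
        action.append(rng.choice(N_k, p=p))
        start += N_k
    return np.concatenate(mu), tuple(a + 1 for a in action)   # components in {1, ..., N_k}

factor_sizes = [3, 2, 4]
rng = np.random.default_rng(0)
W = rng.normal(size=(sum(factor_sizes), 5)); b = np.zeros(sum(factor_sizes))
mu, a = factored_policy(rng.normal(size=5), W, b, factor_sizes, rng)
```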
1502.05477#62
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
62
pairs: One indicator variable for each pair of words in fo1 and fo2. (2) Pair distance: Indicator for the distance between the sentence, i.e. o1 − o2. (3) Pair order: Indicator for the order of the sentence, i.e. o1 > o2. (4) SRL Verb Pair: Indicator variables for each pair of SRL verbs in fo1 and fo2. (5) SRL Verb-Arg Pair: Indicator variables for each pair of SRL arguments in fo1, fo2 and their corresponding verbs. After finding the supporting facts, we build a similar structured SVM for the response stage, also with features tuned for that goal: Words – indicator for each word in x, Word Pairs – indicator for each pair of words in x and supporting facts, and similar SRL Verb and SRL Verb-Arg Pair features as before.
1502.05698#62
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
63
# E Experiment Parameters

Table 2. Parameters for continuous control tasks, vine and single path (SP) algorithms.

Parameter                            Swimmer   Hopper   Walker
State space dim.                     10        12       20
Control space dim.                   2         3        6
Total num. policy params             364       4806     8206
Sim. steps per iter.                 50K       1M       1M
Policy iter.                         200       200      200
Stepsize (DKL)                       0.01      0.01     0.01
Hidden layer size                    30        50       50
Discount (γ)                         0.99      0.99     0.99
Vine: rollout length                 50        100      100
Vine: rollouts per state             4         4        4
Vine: Q-values per batch             500       2500     2500
Vine: num. rollouts for sampling     16        16       16
Vine: len. rollouts for sampling     1000      1000     1000
Vine: computation time (minutes)     2         14       40
SP: num. path                        50        1000     10000
SP: path len.                        1000      1000     1000
SP: computation time                 5         35       100

Table 3. Parameters used for Atari domain.

Parameter                    All games
Total num. policy params     33500
Vine: Sim. steps per iter.   400K
SP: Sim. steps per iter.     100K
Policy iter.                 500
Stepsize (DKL)               0.01
Discount (γ)                 0.99
Vine: rollouts per state     ≈ 4
Vine: computation time       ≈ 30 hrs
SP: computation time         ≈ 30 hrs

# F Learning Curves for the Atari Domain
1502.05477#63
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05698
63
Results are given in Table 3. The structured SVM, despite having access to external resources, does not perform better than MemNNs overall, still failing at 9 tasks. It does perform well on tasks 6, 9 and 10, where the hand-built feature conjunctions capture the necessary nonlinearities that the original MemNNs do not. However, it seems to do significantly worse on tasks requiring three (and sometimes, two) supporting facts (e.g. tasks 3, 16 and 2), presumably as ranking over so many possibilities introduces more mistakes. However, its non-greedy search does seem to help on other tasks, such as path finding (task 19) where search is very important.
1502.05698#63
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
http://arxiv.org/pdf/1502.05698
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20150219
20151231
[ { "id": "1511.02301" }, { "id": "1508.05508" }, { "id": "1506.02075" } ]
1502.05477
64
[Figure: learning curves for the Atari domain, plotting cost versus number of policy iterations for Beam Rider, Pong, Breakout and Q*bert, comparing the single path and vine sampling schemes.]
1502.05477#64
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.05477
65
[Figure, continued: learning curves of cost versus number of policy iterations for Space Invaders, Enduro and Seaquest, comparing the single path and vine sampling schemes.]
1502.05477#65
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
http://arxiv.org/pdf/1502.05477
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
cs.LG
16 pages, ICML 2015
null
cs.LG
20150219
20170420
[]
1502.04623
0
# DRAW: A Recurrent Neural Network For Image Generation Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra (Google DeepMind) # Abstract This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye. # 1. Introduction
1502.04623#0
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
1
# 1. Introduction A person asked to draw, paint or otherwise recreate a visual scene will naturally do so in a sequential, iterative fashion, reassessing their handiwork after each modification. Rough outlines are gradually replaced by precise forms, lines are sharpened, darkened or erased, shapes are altered, and the final picture emerges. Most approaches to automatic image generation, however, aim to generate entire scenes at once. In the context of generative neural networks, this typically means that all the pixels are conditioned on a single latent distribution (Dayan et al., 1995; Hinton & Salakhutdinov, 2006; Larochelle & Murray, 2011). As well as precluding the possibility of iterative self-correction, the “one shot” approach is fundamentally difficult to scale to large images. The Deep Recurrent Attentive Writer (DRAW) architecture represents a shift towards a more natural form of image construction, in which parts of a scene are created independently from others, and approximate sketches are successively refined.
1502.04623#1
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
3
Figure 1. A trained DRAW network generating MNIST digits. Each row shows successive stages in the generation of a single digit. Note how the lines composing the digits appear to be “drawn” by the network. The red rectangle delimits the area attended to by the network at each time-step, with the focal precision indicated by the width of the rectangle border. The core of the DRAW architecture is a pair of recurrent neural networks: an encoder network that compresses the real images presented during training, and a decoder that reconstitutes images after receiving codes. The combined system is trained end-to-end with stochastic gradient descent, where the loss function is a variational upper bound on the log-likelihood of the data. It therefore belongs to the family of variational auto-encoders, a recently emerged hybrid of deep learning and variational inference that has led to significant advances in generative modelling (Gregor et al., 2014; Kingma & Welling, 2014; Rezende et al., 2014; Mnih & Gregor, 2014; Salimans et al., 2014). Where DRAW differs from its siblings is that, rather than generat-
1502.04623#3
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
4
ing images in a single pass, it iteratively constructs scenes through an accumulation of modifications emitted by the decoder, each of which is observed by the encoder. An obvious correlate of generating images step by step is the ability to selectively attend to parts of the scene while ignoring others. A wealth of results in the past few years suggest that visual structure can be better captured by a sequence of partial glimpses, or foveations, than by a single sweep through the entire image (Larochelle & Hinton, 2010; Denil et al., 2012; Tang et al., 2013; Ranzato, 2014; Zheng et al., 2014; Mnih et al., 2014; Ba et al., 2014; Sermanet et al., 2014). The main challenge faced by sequential attention models is learning where to look, which can be addressed with reinforcement learning techniques such as policy gradients (Mnih et al., 2014). The attention model in DRAW, however, is fully differentiable, making it possible to train with standard backpropagation. In this sense it resembles the selective read and write operations developed for the Neural Turing Machine (Graves et al., 2014).
1502.04623#4
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
5
The following section defines the DRAW architecture, along with the loss function used for training and the procedure for image generation. Section 3 presents the selective attention model and shows how it is applied to reading and modifying images. Section 4 provides experimental results on the MNIST, Street View House Numbers and CIFAR-10 datasets, with examples of generated images; and concluding remarks are given in Section 5. Lastly, we would like to direct the reader to the video accompanying this paper (https://www.youtube.com/watch?v=Zt-7MI9eKEo) which contains examples of DRAW networks reading and generating images.

[Figure 2 schematic: a conventional variational auto-encoder (left) and the DRAW network (right); see the caption below for details.]
1502.04623#5
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
6
Figure 2. Left: Conventional Variational Auto-Encoder. During generation, a sample z is drawn from a prior P(z) and passed through the feedforward decoder network to compute the probability of the input P(x|z) given the sample. During inference the input x is passed to the encoder network, producing an approximate posterior Q(z|x) over latent variables. During training, z is sampled from Q(z|x) and then used to compute the total description length KL(Q(Z|x) || P(Z)) − log(P(x|z)), which is minimised with stochastic gradient descent. Right: DRAW Network. At each time-step a sample z_t from the prior P(Z_t) is passed to the recurrent decoder network, which then modifies part of the canvas matrix. The final canvas matrix c_T is used to compute P(x|z_{1:T}). During inference the input is read at every time-step and the result is passed to the encoder RNN. The RNNs at the previous time-step specify where to read. The output of the encoder RNN is used to compute the approximate posterior over the latent variables at that time-step.
1502.04623#6
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
7
# 2. The DRAW Network

The basic structure of a DRAW network is similar to that of other variational auto-encoders: an encoder network determines a distribution over latent codes that capture salient information about the input data; a decoder network receives samples from the code distribution and uses them to condition its own distribution over images. However there are three key differences. Firstly, both the encoder and decoder are recurrent networks in DRAW, so that a sequence of code samples is exchanged between them; moreover the encoder is privy to the decoder’s previous outputs, allowing it to tailor the codes it sends according to the decoder’s behaviour so far. Secondly, the decoder’s outputs are successively added to the distribution that will ultimately generate the data, as opposed to emitting this distribution in a single step. And thirdly, a dynamically updated attention mechanism is used to restrict both the input region observed by the encoder, and the output region modified by the decoder. In simple terms, the network decides at each time-step “where to read” and “where to write” as well as “what to write”. The architecture is sketched in Fig. 2, alongside a feedforward variational auto-encoder.

# 2.1. Network Architecture
1502.04623#7
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
8
Let RNN^enc be the function enacted by the encoder network at a single time-step. The output of RNN^enc at time t is the encoder hidden vector h^enc_t. Similarly the output of the decoder RNN^dec at t is the hidden vector h^dec_t. In general the encoder and decoder may be implemented by any recurrent neural network. In our experiments we use the Long Short-Term Memory architecture (LSTM; Hochreiter & Schmidhuber (1997)) for both, in the extended form with forget gates (Gers et al., 2000). We favour LSTM due to its proven track record for handling long-range dependencies in real sequential data (Graves, 2013; Sutskever et al., 2014). Throughout the paper, we use the notation b = W(a) to denote a linear weight matrix with bias from the vector a to the vector b.

At each time-step t, the encoder receives input from both the image x and from the previous decoder hidden vector h^dec_{t−1}. The precise form of the encoder input depends on a read operation, which will be defined in the next section. The output h^enc_t of the encoder is used to parameterise a distribution Q(Z_t | h^enc_t) over the latent vector z_t. In our
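As an illustration only, here is a minimal numpy sketch of one LSTM step with forget gates, of the kind used here for both RNN^enc and RNN^dec; the stacked weight matrices Wx, Wh and bias b are hypothetical parameters, and the gate ordering is a convention chosen for this sketch rather than something specified in the paper.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    # Pre-activations for the input (i), forget (f), output (o) and
    # candidate (g) gates, stacked along the first axis.
    H = h_prev.shape[0]
    pre = Wx @ x + Wh @ h_prev + b
    i = sigmoid(pre[0:H])
    f = sigmoid(pre[H:2 * H])        # forget gate (Gers et al., 2000)
    o = sigmoid(pre[2 * H:3 * H])
    g = np.tanh(pre[3 * H:4 * H])
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state, e.g. h^enc_t or h^dec_t
    return h, c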
1502.04623#8
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
9
experiments the latent distribution is a diagonal Gaussian N(Z_t | µ_t, σ_t):

µ_t = W(h^enc_t)   (1)
σ_t = exp(W(h^enc_t))   (2)

Bernoulli distributions are more common than Gaussians for latent variables in auto-encoders (Dayan et al., 1995; Gregor et al., 2014); however a great advantage of Gaussian latents is that the gradient of a function of the samples with respect to the distribution parameters can be easily obtained using the so-called reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). This makes it straightforward to back-propagate unbiased, low variance stochastic gradients of the loss function through the latent distribution.

At each time-step a sample z_t ∼ Q(Z_t | h^enc_t) drawn from the latent distribution is passed as input to the decoder. The output h^dec_t of the decoder is added (via a write operation, defined in the sequel) to a cumulative canvas matrix c_t, which is ultimately used to reconstruct the image. The total number of time-steps T consumed by the network before performing the reconstruction is a free parameter that must be specified in advance.
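A rough numpy sketch of Eqs. 1-2 and the reparameterization trick; the weight matrices and biases here are hypothetical stand-ins for the W(·) maps above.

import numpy as np

def sample_latent(h_enc, W_mu, b_mu, W_sigma, b_sigma, rng):
    mu = W_mu @ h_enc + b_mu                    # Eq. (1)
    sigma = np.exp(W_sigma @ h_enc + b_sigma)   # Eq. (2)
    # Reparameterization: z = mu + sigma * eps with eps ~ N(0, I), so the
    # sample is a deterministic, differentiable function of mu and sigma.
    eps = rng.standard_normal(mu.shape)
    z = mu + sigma * eps
    return z, mu, sigma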
1502.04623#9
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
10
negative log probability of x under D:

L^x = −log D(x | c_T)   (9)

The latent loss L^z for a sequence of latent distributions Q(Z_t | h^enc_t) is defined as the summed Kullback-Leibler divergence of some latent prior P(Z_t) from Q(Z_t | h^enc_t):

L^z = Σ_{t=1}^{T} KL( Q(Z_t | h^enc_t) || P(Z_t) )   (10)

Note that this loss depends upon the latent samples z_t drawn from Q(Z_t | h^enc_t), which depend in turn on the input x. If the latent distribution is a diagonal Gaussian with µ_t, σ_t as defined in Eqs. 1 and 2, a simple choice for P(Z_t) is a standard Gaussian with mean zero and standard deviation one, in which case Eq. 10 becomes

L^z = (1/2) Σ_{t=1}^{T} ( µ_t² + σ_t² − log σ_t² ) − T/2   (11)

The total loss L for the network is the expectation of the sum of the reconstruction and latent losses:

L = ⟨ L^x + L^z ⟩_{z∼Q}   (12)
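A small numpy sketch of the two loss terms, assuming mus and sigmas are the per-time-step vectors from Eqs. 1-2 and c_T is the final canvas; the KL term follows Eq. 11 as written, and the reconstruction term is the Bernoulli negative log-likelihood of Eq. 9.

import numpy as np

def latent_loss(mus, sigmas):
    # Eq. (11): summed KL from a standard-normal prior.
    T = len(mus)
    kl = sum(0.5 * np.sum(mu ** 2 + sigma ** 2 - np.log(sigma ** 2))
             for mu, sigma in zip(mus, sigmas))
    return kl - T / 2.0

def reconstruction_loss(x, c_T, eps=1e-7):
    # Eq. (9) with Bernoulli means sigmoid(c_T): binary cross-entropy.
    p = np.clip(1.0 / (1.0 + np.exp(-c_T)), eps, 1.0 - eps)
    return -np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p))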
1502.04623#10
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
11
The total loss L for the network is the expectation of the sum of the reconstruction and latent losses:

L = ⟨ L^x + L^z ⟩_{z∼Q}   (12)

For each image x presented to the network, c_0, h^enc_0, h^dec_0 are initialised to learned biases, and the DRAW network iteratively computes the following equations for t = 1, . . . , T:

x̂_t = x − σ(c_{t−1})   (3)
r_t = read(x_t, x̂_t, h^dec_{t−1})   (4)
h^enc_t = RNN^enc(h^enc_{t−1}, [r_t, h^dec_{t−1}])   (5)
z_t ∼ Q(Z_t | h^enc_t)   (6)
h^dec_t = RNN^dec(h^dec_{t−1}, z_t)   (7)
c_t = c_{t−1} + write(h^dec_t)   (8)
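The loop in Eqs. 3-8 can be sketched as follows (numpy, schematic only): rnn_enc, rnn_dec, read, write and sample_latent are hypothetical callables standing in for the components defined elsewhere in the paper, and only the data flow is shown.

import numpy as np

def sigmoid(c):
    return 1.0 / (1.0 + np.exp(-c))

def draw_forward(x, T, c0, h_enc0, h_dec0, rnn_enc, rnn_dec, read, write, sample_latent):
    c, h_enc, h_dec = c0, h_enc0, h_dec0
    mus, sigmas = [], []
    for t in range(T):
        x_hat = x - sigmoid(c)                              # Eq. (3): error image
        r = read(x, x_hat, h_dec)                           # Eq. (4)
        h_enc = rnn_enc(h_enc, np.concatenate([r, h_dec]))  # Eq. (5)
        z, mu, sigma = sample_latent(h_enc)                 # Eq. (6), via Eqs. (1)-(2)
        h_dec = rnn_dec(h_dec, z)                           # Eq. (7)
        c = c + write(h_dec)                                # Eq. (8): accumulate on canvas
        mus.append(mu); sigmas.append(sigma)
    return c, mus, sigmas   # c is c_T, used for the reconstruction loss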
1502.04623#11
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
12
where x̂_t is the error image, [v, w] is the concatenation of vectors v and w into a single vector, and σ denotes the logistic sigmoid function: σ(x) = 1 / (1 + exp(−x)). Note that h^enc_t, and hence Q(Z_t | h^enc_t), depends on both x and the history z_{1:t−1} of previous latent samples. We will sometimes make this dependency explicit by writing Q(Z_t | x, z_{1:t−1}), as shown in Fig. 2. h^enc_t can also be passed as input to the read operation; however we did not find that this helped performance and therefore omitted it.

The total loss (Eq. 12) is optimised using a single sample of z for each stochastic gradient descent step. L^z can be interpreted as the number of nats required to transmit the latent sample sequence z_{1:T} to the decoder from the prior, and (if x is discrete) L^x is the number of nats required for the decoder to reconstruct x given z_{1:T}. The total loss is therefore equivalent to the expected compression of the data by the decoder and prior.

# 2.3. Stochastic Data Generation
1502.04623#12
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
13
# 2.3. Stochastic Data Generation

An image x̃ can be generated by a DRAW network by iteratively picking latent samples z̃_t from the prior P, then running the decoder to update the canvas matrix c̃_t. After T repetitions of this process the generated image is a sample from D(X | c̃_T):

z̃_t ∼ P(Z_t)   (13)
h̃^dec_t = RNN^dec(h̃^dec_{t−1}, z̃_t)   (14)
c̃_t = c̃_{t−1} + write(h̃^dec_t)   (15)
x̃ ∼ D(X | c̃_T)   (16)

Note that the encoder is not involved in image generation.

# 2.2. Loss Function

The final canvas matrix c_T is used to parameterise a model D(X | c_T) of the input data. If the input is binary, the natural choice for D is a Bernoulli distribution with means given by σ(c_T). The reconstruction loss L^x is defined as the

# 3. Read and Write Operations

The DRAW network described in the previous section is not complete until the read and write operations in Eqs. 4 and 8 have been defined. This section describes two ways to do so, one with selective attention and one without.
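A minimal numpy sketch of the generation procedure in Eqs. 13-16, with hypothetical rnn_dec and write callables, a standard-normal prior, and the Bernoulli output model described in Section 2.2.

import numpy as np

def draw_generate(T, c0, h_dec0, latent_dim, rnn_dec, write, rng):
    c, h_dec = c0, h_dec0
    for t in range(T):
        z = rng.standard_normal(latent_dim)   # Eq. (13): z_t ~ P(Z_t)
        h_dec = rnn_dec(h_dec, z)             # Eq. (14)
        c = c + write(h_dec)                  # Eq. (15)
    p = 1.0 / (1.0 + np.exp(-c))              # Bernoulli means sigmoid(c_T)
    return (rng.random(p.shape) < p).astype(float)   # Eq. (16): sample the image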
1502.04623#13
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
14
# 3.1. Reading and Writing Without Attention

In the simplest instantiation of DRAW the entire input image is passed to the encoder at every time-step, and the decoder modifies the entire canvas matrix at every time-step. In this case the read and write operations reduce to

read(x, x̂_t, h^dec_{t−1}) = [x, x̂_t]   (17)
write(h^dec_t) = W(h^dec_t)   (18)

However this approach does not allow the encoder to focus on only part of the input when creating the latent distribution; nor does it allow the decoder to modify only a part of the canvas vector. In other words it does not provide the network with an explicit selective attention mechanism, which we believe to be crucial to large scale image generation. We refer to the above configuration as “DRAW without attention”.

# 3.2. Selective Attention Model
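In code, Eqs. 17-18 amount to a concatenation and a single linear map; a numpy sketch with images flattened to vectors, where W and b are a hypothetical weight matrix and bias.

import numpy as np

def read_no_attention(x, x_hat, h_dec_prev):
    # Eq. (17): concatenate the full image and the full error image.
    return np.concatenate([x.ravel(), x_hat.ravel()])

def write_no_attention(h_dec, W, b, canvas_shape):
    # Eq. (18): a linear map from the decoder state to a whole-canvas update.
    return (W @ h_dec + b).reshape(canvas_shape)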
1502.04623#14
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
15
# 3.2. Selective Attention Model

Figure 3. Left: A 3 × 3 grid of filters superimposed on an image. The stride (δ) and centre location (g_X, g_Y) are indicated. Right: Three N × N patches extracted from the image (N = 12). The green rectangles on the left indicate the boundary and precision (σ) of the patches, while the patches themselves are shown to the right. The top patch has a small δ and high σ, giving a zoomed-in but blurry view of the centre of the digit; the middle patch has large δ and low σ, effectively downsampling the whole image; and the bottom patch has high δ and σ.
1502.04623#15
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
16
To endow the network with selective attention without sacrificing the benefits of gradient descent training, we take inspiration from the differentiable attention mechanisms recently used in handwriting synthesis (Graves, 2013) and Neural Turing Machines (Graves et al., 2014). Unlike the aforementioned works, we consider an explicitly two-dimensional form of attention, where an array of 2D Gaussian filters is applied to the image, yielding an image ‘patch’ of smoothly varying location and zoom. This configuration, which we refer to simply as “DRAW”, somewhat resembles the affine transformations used in computer graphics-based autoencoders (Tieleman, 2014).

via a linear transformation of the decoder output h^dec:

(g̃_X, g̃_Y, log σ², log δ̃, log γ) = W(h^dec)   (21)
g_X = ((A + 1)/2) (g̃_X + 1)   (22)
g_Y = ((B + 1)/2) (g̃_Y + 1)   (23)
δ = ((max(A, B) − 1)/(N − 1)) δ̃   (24)
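A numpy sketch of Eqs. 21-24, where W_att and b_att are a hypothetical weight matrix and bias producing the five raw attention parameters, A and B are the image width and height, and N is the patch size.

import numpy as np

def attention_params(h_dec, W_att, b_att, A, B, N):
    # Eq. (21): five parameters from a linear map of the decoder output.
    g_x_t, g_y_t, log_sigma2, log_delta_t, log_gamma = W_att @ h_dec + b_att
    g_x = (A + 1) / 2.0 * (g_x_t + 1)                           # Eq. (22)
    g_y = (B + 1) / 2.0 * (g_y_t + 1)                           # Eq. (23)
    delta = (max(A, B) - 1) / (N - 1) * np.exp(log_delta_t)     # Eq. (24)
    # Variance and intensity are emitted in log-scale to ensure positivity.
    return g_x, g_y, np.exp(log_sigma2), delta, np.exp(log_gamma)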
1502.04623#16
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
17
g_X = ((A + 1)/2) (g̃_X + 1)   (22)
g_Y = ((B + 1)/2) (g̃_Y + 1)   (23)
δ = ((max(A, B) − 1)/(N − 1)) δ̃   (24)

As illustrated in Fig. 3, the N × N grid of Gaussian filters is positioned on the image by specifying the co-ordinates of the grid centre and the stride distance between adjacent filters. The stride controls the ‘zoom’ of the patch; that is, the larger the stride, the larger an area of the original image will be visible in the attention patch, but the lower the effective resolution of the patch will be. The grid centre (g_X, g_Y) and stride δ (both of which are real-valued) determine the mean location µ^i_X, µ^j_Y of the filter at row i, column j in the patch as follows:

where the variance, stride and intensity are emitted in the log-scale to ensure positivity. The scaling of g_X, g_Y and δ is chosen to ensure that the initial patch (with a randomly initialised network) roughly covers the whole input image. Given the attention parameters emitted by the decoder, the horizontal and vertical filterbank matrices F_X and F_Y (dimensions N × A and N × B respectively) are defined as follows:
1502.04623#17
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
18
µ^i_X = g_X + (i − N/2 − 0.5) δ   (19)
µ^j_Y = g_Y + (j − N/2 − 0.5) δ   (20)

Two more parameters are required to fully specify the attention model: the isotropic variance σ² of the Gaussian filters, and a scalar intensity γ that multiplies the filter response. Given an A × B input image x, all five attention parameters are dynamically determined at each time step

F_X[i, a] = (1/Z_X) exp( −(a − µ^i_X)² / (2σ²) )   (25)
F_Y[j, b] = (1/Z_Y) exp( −(b − µ^j_Y)² / (2σ²) )   (26)

where (i, j) is a point in the attention patch, (a, b) is a point in the input image, and Z_X, Z_Y are normalisation constants that ensure that Σ_a F_X[i, a] = 1 and Σ_b F_Y[j, b] = 1.
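Eqs. 19-20 and 25-26 can be sketched in numpy as follows for the horizontal filterbank F_X (F_Y is built the same way from g_Y, B and the row indices j); this is a sketch under the indexing convention above, not the authors' implementation.

import numpy as np

def filterbank_x(g_x, delta, sigma2, N, A):
    i = np.arange(1, N + 1)
    mu_x = g_x + (i - N / 2.0 - 0.5) * delta                            # Eq. (19)
    a = np.arange(A)
    F_x = np.exp(-(a[None, :] - mu_x[:, None]) ** 2 / (2.0 * sigma2))   # Eq. (25)
    # Normalise each row so that sum_a F_X[i, a] = 1.
    return F_x / np.maximum(F_x.sum(axis=1, keepdims=True), 1e-8)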
1502.04623#18
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
19
Figure 4. Zooming. Top Left: The original 100 × 75 image. Top Middle: A 12 × 12 patch extracted with 144 2D Gaussian filters. Top Right: The reconstructed image when applying transposed filters on the patch. Bottom: Only two 2D Gaussian filters are displayed. The first one is used to produce the top-left patch feature. The last filter is used to produce the bottom-right patch feature. By using different filter weights, the attention can be moved to a different location.

generated by the network are always novel (not simply copies of training examples), and are virtually indistinguishable from real data for MNIST and SVHN; the generated CIFAR images are somewhat blurry, but still contain recognisable structure from natural scenes. The binarized MNIST results substantially improve on the state of the art. As a preliminary exercise, we also evaluate the 2D attention module of the DRAW network on cluttered MNIST classification.
1502.04623#19
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
20
For all experiments, the model D(X | c_T) of the input data was a Bernoulli distribution with means given by σ(c_T). For the MNIST experiments, the reconstruction loss from Eq. 9 was the usual binary cross-entropy term. For the SVHN and CIFAR-10 experiments, the red, green and blue pixel intensities were represented as numbers between 0 and 1, which were then interpreted as independent colour emission probabilities. The reconstruction loss was therefore the cross-entropy between the pixel intensities and the model probabilities. Although this approach worked well in practice, it means that the training loss did not correspond to the true compression cost of RGB images.

Network hyper-parameters for all the experiments are presented in Table 3. The Adam optimisation algorithm (Kingma & Ba, 2014) was used throughout. Examples of generation sequences for MNIST and SVHN are provided in the accompanying video (https://www.youtube.com/watch?v=Zt-7MI9eKEo).

# 3.3. Reading and Writing With Attention

Given F_X, F_Y and intensity γ determined by h^dec_{t−1}, along with an input image x and error image x̂_t, the read operation returns the concatenation of two N × N patches from the image and error image:
1502.04623#20
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]
1502.04623
21
read(x, x̂_t, h^dec_{t−1}) = γ [F_Y x F_X^T, F_Y x̂_t F_X^T]   (27)

Note that the same filterbanks are used for both the image and error image. For the write operation, a distinct set of attention parameters γ̂, F̂_X and F̂_Y are extracted from h^dec_t, the order of transposition is reversed, and the intensity is inverted:

w_t = W(h^dec_t)   (28)
write(h^dec_t) = (1/γ̂) F̂_Y^T w_t F̂_X   (29)

# 4.1. Cluttered MNIST Classification

To test the classification efficacy of the DRAW attention mechanism (as opposed to its ability to aid in image generation), we evaluate its performance on the 100 × 100 cluttered translated MNIST task (Mnih et al., 2014). Each image in cluttered MNIST contains many digit-like fragments of visual clutter that the network must distinguish from the true digit to be classified. As illustrated in Fig. 5, having an iterative attention model allows the network to progressively zoom in on the relevant region of the image, and ignore the clutter outside it.
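A numpy sketch of the attentive read and write in Eqs. 27-29; x, x_hat and the write patch are 2D arrays, F_x and F_y are the filterbanks from Eqs. 25-26, and W_write, b_write are a hypothetical weight matrix and bias implementing the W(·) map in Eq. 28.

import numpy as np

def read_attention(x, x_hat, F_x, F_y, gamma):
    # Eq. (27): extract N x N patches from the image and the error image.
    return gamma * np.concatenate([(F_y @ x @ F_x.T).ravel(),
                                   (F_y @ x_hat @ F_x.T).ravel()])

def write_attention(h_dec, W_write, b_write, F_x_hat, F_y_hat, gamma_hat, N):
    w = (W_write @ h_dec + b_write).reshape(N, N)        # Eq. (28)
    # Eq. (29): transposition order reversed, intensity inverted.
    return (1.0 / gamma_hat) * (F_y_hat.T @ w @ F_x_hat)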
1502.04623#21
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
http://arxiv.org/pdf/1502.04623
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20150216
20150520
[]