doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1504.03592 | 29 |
# 8 Conclusion
In this paper we have constructed an executable model of the ethical consequence engine described in [23] and then verified that this model embodies the ethical principles we expect: namely, that it proactively selects actions which will keep humans out of harm's way, if it can do so. In the course of developing this model we have laid the foundation for a declarative language for expressing ethical consequence engines. This language is executable and exists within a framework that can interface with a number of external robotic systems while allowing elements within the framework to be verified by model checking. | 1504.03592#29 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional 'governor' that assesses
options the system has, and prunes them to select the most ethical choices, is
well understood. Recent work has produced such a governor consisting of a
'consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
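To make the governor's control flow concrete, here is a minimal sketch of the select-act loop such a consequence engine implements, inferred from the description above; all names (`select_action`, `simulate_outcome`, `ethical_rank`) are illustrative assumptions, not the authors' implementation or API.

```python
# Minimal sketch of the consequence-engine loop described above: simulate each
# candidate action with an internal model, then apply a safety/ethical ordering
# to pick the least-bad option. All names are illustrative assumptions.

def select_action(candidate_actions, world_state, simulate_outcome, ethical_rank):
    """Return the candidate action whose simulated outcome ranks as most ethical.

    simulate_outcome(state, action) -> predicted outcome (supplied by an
    external simulator, on which the paper notes the engine depends).
    ethical_rank(outcome) -> comparable value; lower means more ethical.
    """
    ranked = []
    for action in candidate_actions:
        outcome = simulate_outcome(world_state, action)   # internal model
        ranked.append((ethical_rank(outcome), action))    # safety/ethical logic
    ranked.sort(key=lambda pair: pair[0])
    return ranked[0][1]  # most ethical available action
```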
1504.03410 | 30 | Table 2 and Figures 2–4 show the comparison results of search accuracies on all three datasets. Two observations can be made from these results:
(1) On all three datasets, the proposed method achieves substantially better search accuracies (w.r.t. MAP, precision within Hamming distance 2, precision-recall, and precision with varying numbers of top returned samples) than the baseline methods using traditional hand-crafted visual features. For example, compared to the best competitor KSH, the MAP results of the proposed method indicate a relative increase of 58.8%~90.6% / 61.3%~82.2% / 21.2%~22.7% on SVHN / CIFAR-10 / NUS-WIDE, respectively.
We implement the proposed method based on the open-source Caffe [6] framework. In all experiments, our networks are trained by stochastic gradient descent with 0.9 momentum [22]. We initialize ε in the piece-wise threshold function to 0.5 and decrease it by 20% after every 20,000 iterations. The mini-batch size of images is 64. The weight decay parameter is 0.0005. | 1504.03410#30 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineered visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
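A small sketch of the training schedule just described: ε starts at 0.5 and shrinks by 20% every 20,000 iterations. The exact shape of the piece-wise threshold function below is an assumption (a common choice that clips a sigmoid output s in [0, 1] towards binary values); only the ε schedule is taken from the text.

```python
import numpy as np

def epsilon_at(iteration, eps0=0.5, decay=0.8, step=20_000):
    # eps starts at 0.5 and is reduced by 20% after every 20,000 iterations
    return eps0 * (decay ** (iteration // step))

def piecewise_threshold(s, eps):
    # Assumed form: 0 below 0.5 - eps, 1 above 0.5 + eps, identity in between.
    return np.where(s < 0.5 - eps, 0.0,
           np.where(s > 0.5 + eps, 1.0, s))

s = np.array([0.1, 0.45, 0.5, 0.55, 0.9])
print(piecewise_threshold(s, epsilon_at(0)))       # eps = 0.5: mostly identity
print(piecewise_threshold(s, epsilon_at(60_000)))  # eps = 0.256: more binary
```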
1504.03592 | 30 | At present the language is very simple, relying on prioritisation first over individuals and then over outcomes. It cannot, for instance, express that while, in general, outcomes for individuals of some type (e.g., humans) are more important than those for another (e.g., the robot), there may be some particularly bad outcomes for the robot that should be prioritised over less severe outcomes for the humans (for instance, it may be acceptable for a robot to move 'too close' to a human if that prevents the robot's own destruction). Nor, at present, does the language have any ability to distinguish between different contexts, and so an outcome is judged equally bad no matter what the circumstances. This will be too simple for many situations, especially those involving the competing requirements of privacy and reporting that arise in many scenarios involving robots in the home. The language is also tied to the existence of an engine that is capable of simulating the outcomes of events, and so the performance of a system involving such a consequence engine is necessarily limited by the capabilities of such a simulator. This simulation is tied to a single robot action and so, again, the system has no capability for reasoning that some action may lead it into a situation where the only available subsequent actions are unethical. Lastly, the language presumes that suitable ethical priorities have already been externally decided and has no capability for determining ethical actions by reasoning from first principles. | 1504.03592#30 | Towards Verifiably Ethical Robot Behaviour |
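The following illustrative snippet (not the paper's actual language) shows what "prioritisation first over individuals and then over outcomes" means as a lexicographic ordering, and why the robot-destruction exception above cannot be expressed in it: a fixed individual ordering is consulted before severity, so no robot outcome can ever outrank a human one.

```python
# Illustrative sketch of the lexicographic ordering described above; all
# names and values are assumptions, not the paper's declarative language.

INDIVIDUAL_PRIORITY = {"human": 0, "robot": 1}  # lower = more important

def badness_key(outcome):
    """outcome: (affected_individual, severity); higher severity = worse."""
    individual, severity = outcome
    # compare by who is affected first, and only then by how bad it is
    return (INDIVIDUAL_PRIORITY[individual], -severity)

outcomes = [("robot", 10), ("human", 2)]  # robot destroyed vs. human crowded
worst_first = sorted(outcomes, key=badness_key)
print(worst_first)  # [('human', 2), ('robot', 10)]: the human outcome always dominates
```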
1504.03410 | 31 | The results of BRE, ITQ, ITQ-CCA, KSH, MLH and SH are obtained by the implementations provided by their authors, respectively. The results of LSH are obtained from our implementation. Since the network configurations of CNNH in [27] are different from those of the proposed method, for a fair comparison, we carefully implement CNNH (referred to as CNNH*) based on Caffe, where we use the code provided by the authors of [27] to implement the first stage. In the second stage of CNNH*, we use the same stack of convolution-pooling layers as in Table 1, except for modifying the size of the last convolution to bits × 1 × 1 and using an average pooling layer of size bits × 1 × 1 as the output layer.
⁸These bag-of-words features are available in the NUS-WIDE dataset. | 1504.03410#31 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
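A shape-level sketch of the CNNH* output head just described: the last convolution emits one feature map per hash bit, and average pooling collapses each map to a single value. The spatial size here is an assumed placeholder.

```python
import numpy as np

# Sketch of the output head described above. The last convolution is sized to
# emit `bits` feature maps; average pooling then yields one value per bit.
rng = np.random.default_rng(0)
bits, h, w = 48, 3, 3                          # h, w are illustrative
feature_maps = rng.normal(size=(bits, h, w))   # output of the last convolution
codes = feature_maps.mean(axis=(1, 2))         # average pooling -> one value per bit
print(codes.shape)                             # (48,)
```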
Nevertheless we believe that the work reported here opens the path to a system for implementing verifiable ethical consequence engines which may be interfaced to arbitrary robotic systems.
# 9 Software Archiving
The system described in this paper is available as a recomputable virtual machine on request from the first author and will be archived at recomputation.org in due course. It can also be found on branch ethical governor of the git repository at mcapl.sourceforge.net.
# 10 Acknowledgements
Work funded by EPSRC Grants EP/L024845/1 and EP/L024861/1 ('Verifiable Autonomy').
# References
[1] M. Anderson and S. L. Anderson. EthEl: Toward a principled ethical eldercare robot. In Proc. AAAI Fall Symposium on AI in Eldercare: New Solutions to Old Problems, 2008.
[2] S. Anderson and M. Anderson. A Prima Facie Duty Approach to Machine Ethics and Its Application to Elder Care. In Human-Robot Interaction in Elder Care, 2011.
[3] R. Arkin, P. Ulam, and A. Wagner. Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception. Proceedings of the IEEE, 100(3):571–589, 2012. | 1504.03592#31 | Towards Verifiably Ethical Robot Behaviour |
1504.03410 | 32 | (2) In most metrics on all three datasets, the proposed method shows superior performance gains against the most related competitors CNNH and CNNH*, which are deep-networks-based two-stage methods. For example, with respect to MAP, compared to the corresponding second-best competitor, the proposed method shows a relative increase of 9.6%~14.0% / 3.9%~9.2% on CIFAR-10 / NUS-WIDE, respectively. These results verify that simultaneously learning useful representations of images and hash codes that preserve similarities can benefit each other. | 1504.03410#32 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03592 | 32 | [4] R. S. Boyer and J. S. Moore, editors. The Correctness Problem in Computer Science. Academic Press, London, 1981.
[5] E. M. Clarke, O. Grumberg, and D. Peled. Model Checking. MIT Press, 1999.
[6] E. Dedu. A Bresenham-based super-cover Line Algorithm. http://lifc.univ-fcomte.fr/home/~ededu/projects/bresenham, 2001. [Online; accessed 29-Sept-2014].
[7] R. A. DeMillo, R. J. Lipton, and A. J. Perlis. Social Processes and Proofs of Theorems of Programs. ACM Communications, 22(5):271–280, 1979.
[8] L. Dennis, M. Fisher, M. Slavkovik, and M. Webster. Ethical Choice in Unforeseen Circumstances. In Proc. 14th Towards Autonomous Robotic Systems Conference (TAROS 2013), volume 8069 of Lecture Notes in Artificial Intelligence, pages 433–445. Springer, 2014.
[9] L. A. Dennis. ROS-AIL Integration. Technical report, University of Liverpool, Department of Computer Science, 2014. http://www.csc.liv.ac.uk/research/. | 1504.03592#32 | Towards Verifiably Ethical Robot Behaviour |
1504.03410 | 33 | # 4.3. Comparison Results of the Divide-and-Encode Module against Its Alternative
A natural alternative to the divide-and-encode module is a simple fully-connected layer followed by a sigmoid layer restricting the output values' range to [0, 1] (see Figure 2(b)).
⁹Note that, on CIFAR-10, some MAP results of CNNH* are inferior to those of CNNH [27]. This is mainly due to different network configurations and optimization frameworks between these two implementations. CNNH* is implemented based on Caffe [6], but the core of the original implementation in CNNH [27] is based on Cuda-Convnet [7].
Figure 5. The comparison results on SVHN. (a) Precision curves within Hamming radius 2; (b) precision-recall curves of Hamming ranking with 48 bits; (c) precision curves with 48 bits w.r.t. different numbers of top returned samples.
| 1504.03410#33 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03592 | 33 | [10] L. A. Dennis, B. Farwer, R. H. Bordini, M. Fisher, and M. Wooldridge. A Common Semantic Basis for BDI Languages. In Proc. 7th Int. Workshop on Programming Multiagent Systems (ProMAS), volume 4908 of LNAI, pages 124–139. Springer, 2008.
[11] L. A. Dennis, M. Fisher, J. M. Aitken, S. M. Veres, Y. Gao, A. Shaukat, and G. Burroughes. Reconfigurable Autonomy. KI - Künstliche Intelligenz, 28(3):199–207, 2014.
[12] L. A. Dennis, M. Fisher, N. K. Lincoln, A. Lisitsa, and S. M. Veres. Practical Verification of Decision-making in Agent-based Autonomous Systems. Automated Software Engineering, 2014. To appear.
[13] L. A. Dennis, M. Fisher, M. Webster, and R. H. Bordini. Model Checking Agent Programming Languages. Automated Software Engineering, 19(1):5–63, 2012. | 1504.03592#33 | Towards Verifiably Ethical Robot Behaviour |
1504.03410 | 34 |
Figure 6. The comparison results on CIFAR-10. (a) Precision curves within Hamming radius 2; (b) precision-recall curves of Hamming ranking with 48 bits; (c) precision curves with 48 bits w.r.t. different numbers of top returned samples.
Figure 7. The comparison results on NUS-WIDE. (a) Precision curves within Hamming radius 2; (b) precision-recall curves of Hamming ranking with 48 bits; (c) precision curves with 48 bits w.r.t. different numbers of top returned samples.
To investigate the effectiveness of the divide-and-encode module (DEM), we implement and evaluate a deep architecture derived from the proposed one in Figure 1, by replacing the divide-and-encode module with its alternative in Figure 2(b) and keeping the other layers unchanged. We refer to it as 'FC'.
As can be seen from Table 3 and Figure 8, the results of the architecture with the divide-and-encode module outperform those of the fully-connected alternative. For example, the architecture with DEM achieves 0.581 MAP with 48 bits on CIFAR-10, which indicates an improvement of 19.7% over the FC alternative. | 1504.03410#34 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
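To clarify what is being compared, here is a minimal NumPy sketch of the two output heads; layer sizes, the per-bit projections, and the sigmoid placement are simplifying assumptions based on the description and Figure 2, not the paper's exact layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Divide-and-encode head (sketch): split the intermediate feature into q
# slices and map each slice to one hash bit with its own small projection,
# so each bit depends on a disjoint sub-vector of the feature.
def divide_and_encode(feature, q):
    slice_dim = feature.shape[0] // q
    weights = [rng.normal(size=slice_dim) for _ in range(q)]  # one per bit
    bits = []
    for i in range(q):
        piece = feature[i * slice_dim:(i + 1) * slice_dim]
        bits.append(sigmoid(weights[i] @ piece))
    return np.array(bits)  # q values in (0, 1), one per hash bit

# Fully-connected alternative (sketch): every output bit sees the whole
# feature, which tends to make the bits more redundant with one another.
def fc_encode(feature, q):
    W = rng.normal(size=(q, feature.shape[0]))
    return sigmoid(W @ feature)

feature = rng.normal(size=512)
print(divide_and_encode(feature, q=48).shape)  # (48,)
print(fc_encode(feature, q=48).shape)          # (48,)
```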
1504.03592 | 34 | [14] L. A. Dennis, M. Fisher, and M. P. Webster. Using Agent JPF to Build Models for Other Model Checkers. In Proc. 14th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA XIV), volume 8143 of Lecture Notes in Computer Science, pages 273–289. Springer, 2013.
[15] J. H. Fetzer. Program Verification: The Very Idea. ACM Communications, 31(9):1048–1063, 1988.
[16] N. Lincoln, S. Veres, L. Dennis, M. Fisher, and A. Lisitsa. Autonomous Asteroid Exploration by Rational Agents. IEEE Computational Intelligence Magazine, 8:25–38, 2013.
[17] F. Mondada, M. Bonani, X. Raemy, J. Pugh, C. Cianci, A. Klaptocz, S. Magnenat, J. C. Zufferey, D. Floreano, and A. Martinoli. The e-puck, a Robot Designed for Education in Engineering. In Proc. 9th Conference on Autonomous Robot Systems and Competitions, pages 59–65, 2009.
[18] T. Powers. Prospects for a Kantian Machine. IEEE Intelligent Systems, 21(4):46–51, 2006. | 1504.03592#34 | Towards Verifiably Ethical Robot Behaviour |
1504.03410 | 35 | The underlying reason for the improvement may be that, compared to the FC alternative, the output hash codes from the divide-and-encode modules are less redundant with each other.
# 4.4. Comparison Results of a Shared Sub-Network against Two Independent Sub-Networks
In the proposed deep architecture, we use a shared sub-network to capture a unified image representation for the three images in an input triplet. A possible alternative to this shared sub-network is that, for a triplet (I, I+, I−), the query I has an independent sub-network P, while I+ and I− share a sub-network Q, where P/Q maps I/(I+, I−) into the corresponding image feature vector(s) (i.e., x, x+ and x−, respectively).
Table 3. Comparison results (MAP) of the divide-and-encode module and its fully-connected alternative on three datasets.

Method | SVHN 12 bits | SVHN 24 bits | SVHN 32 bits | SVHN 48 bits | CIFAR-10 12 bits | CIFAR-10 24 bits | CIFAR-10 32 bits | CIFAR-10 48 bits | NUS-WIDE 12 bits | NUS-WIDE 24 bits | NUS-WIDE 32 bits | NUS-WIDE 48 bits
---|---|---|---|---|---|---|---|---|---|---|---|---
Ours (DEM) | 0.899 | 0.914 | 0.925 | 0.923 | 0.552 | 0.558 | 0.566 | 0.581 | 0.674 | 0.697 | 0.713 | 0.715
Ours (FC) | 0.887 | 0.896 | 0.909 | 0.912 | 0.465 | 0.489 | 0.497 | 0.485 | 0.623 | 0.673 | 0.682 | 0.691

| 1504.03410#35 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03592 | 35 |
[19] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng. ROS: an Open-source Robot Operating System. In ICRA Workshop on Open Source Software, 2009.
[20] G. Verfaillie and M. Charmeau. A Generic Modular Architecture for the Control of an Autonomous Spacecraft. In Proc. 5th International Workshop on Planning and Scheduling for Space (IWPSS), 2006.
[21] W. Visser, K. Havelund, G. P. Brat, S. Park, and F. Lerda. Model Checking Programs. Automated Software Engineering, 10(2):203–232, 2003.
[22] V. Wiegel and J. van den Berg. Combining Moral Theory, Modal Logic and MAS to Create Well-Behaving Artificial Agents. International Journal of Social Robotics, 1(3):233–242, 2009. | 1504.03592#35 | Towards Verifiably Ethical Robot Behaviour |
1504.03410 | 36 |
Figure 8. The precision curves of the divide-and-encode module versus its fully-connected alternative with 48 bits w.r.t. different numbers of top returned samples.
We implement and compare the search accuracies of the proposed architecture with a shared sub-network to its alternative with two independent sub-networks. As can be seen in Tables 4 and 5, the results of the proposed architecture outperform those of the alternative with two independent sub-networks. Generally speaking, although larger networks can capture more information, they also need more training data. The underlying reason why the architecture with a shared sub-network performs better than the one with two independent sub-networks may be that the training samples are not enough for networks with too many parameters (e.g., 500 training images per class on CIFAR-10 and NUS-WIDE).
| 1504.03410#36 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
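The following sketch contrasts the two triplet configurations compared above, with each sub-network stubbed as a single linear map (in the paper it is a stack of convolution layers). The margin of 1 and the squared Euclidean distance in the triplet ranking loss are also assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, out_dim):
    # stub for a sub-network: one linear map standing in for conv layers
    W = rng.normal(scale=0.1, size=(out_dim, in_dim))
    return lambda x: W @ x

shared = make_encoder(512, 48)

def forward_shared(i, i_pos, i_neg):
    # 1-sub-network: one set of weights encodes all three images
    return shared(i), shared(i_pos), shared(i_neg)

enc_query = make_encoder(512, 48)
enc_refs = make_encoder(512, 48)

def forward_independent(i, i_pos, i_neg):
    # 2-sub-networks: the query has its own weights; I+ and I- share another set
    return enc_query(i), enc_refs(i_pos), enc_refs(i_neg)

def triplet_ranking_loss(x, x_pos, x_neg, margin=1.0):
    # hinge on the gap between the negative and positive distances
    d_pos = np.sum((x - x_pos) ** 2)
    d_neg = np.sum((x - x_neg) ** 2)
    return max(0.0, margin - (d_neg - d_pos))

i, i_pos, i_neg = (rng.normal(size=512) for _ in range(3))
print(triplet_ranking_loss(*forward_shared(i, i_pos, i_neg)))
```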
1504.03592 | 36 | [23] A. F. T. Winfield, C. Blum, and W. Liu. Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. In M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish, editors, Advances in Autonomous Robotics Systems, volume 8717 of Lecture Notes in Computer Science, pages 85–96. Springer, 2014.
[24] R. Woodman, A. F. T. Winfield, C. Harper, and M. Fraser. Building Safer Robots: Safety Driven Control. International Journal of Robotics Research, 31(13):1603–1626, 2012.
| 1504.03592#36 | Towards Verifiably Ethical Robot Behaviour |
1504.03410 | 37 | Table 4. Comparison results of a shared sub-network against two independent sub-networks on CIFAR-10.

Method | 12 bits | 24 bits | 32 bits | 48 bits
---|---|---|---|---
MAP: 1-sub-network | 0.552 | 0.558 | 0.566 | 0.581
MAP: 2-sub-networks | 0.467 | 0.477 | 0.494 | 0.515
Precision within Hamming radius 2: 1-sub-network | 0.527 | 0.602 | 0.615 | 0.625
Precision within Hamming radius 2: 2-sub-networks | 0.450 | 0.549 | 0.564 | 0.588

Table 5. Comparison results of a shared sub-network against two independent sub-networks on NUS-WIDE.

Method | 12 bits | 24 bits | 32 bits | 48 bits
---|---|---|---|---
MAP: 1-sub-network | 0.674 | 0.697 | 0.713 | 0.715
MAP: 2-sub-networks | 0.640 | 0.686 | 0.688 | 0.697
Precision within Hamming radius 2: 1-sub-network | 0.623 | 0.686 | 0.710 | 0.714
Precision within Hamming radius 2: 2-sub-networks | 0.579 | 0.664 | 0.696 | 0.704

| 1504.03410#37 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 38 |
# 5. Conclusion
In this paper, we developed a "one-stage" supervised hashing method for image retrieval, which generates bitwise hash codes for images via a carefully designed deep architecture. The proposed deep architecture uses a triplet ranking loss designed to preserve relative similarities. Throughout the proposed deep architecture, input images are converted into unified image representations via a shared sub-network of stacked convolution layers. Then, these intermediate image representations are encoded into hash codes by divide-and-encode modules. Empirical evaluations in image retrieval show that the proposed method has superior performance gains over state-of-the-art methods.
# Acknowledgment
This work was partially supported by Adobe Gift Funding. It was also supported by the National Natural Science Foundation of China under Grants 61370021, U1401256, 61472453, and the Natural Science Foundation of Guangdong Province under Grant S2013010011905.
# References
[1] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 886–893, 2005. | 1504.03410#38 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 39 | [2] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the International Conference on Very Large Data Bases, pages 518–529, 1999.
[3] Y. Gong, S. Kumar, H. A. Rowley, and S. Lazebnik. Learning binary codes for high-dimensional data using bilinear projections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 484–491, 2013.
[4] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 817–824, 2011.
[5] J. Ji, S. Yan, J. Li, G. Gao, Q. Tian, and B. Zhang. Batch-orthogonal locality-sensitive hashing for angular similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(10):1963–1974, 2014. | 1504.03410#39 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 40 | [6] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[7] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of Advances in Neural Information Processing Systems, pages 1106–1114, 2012.
[8] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In Proceedings of the Advances in Neural Information Processing Systems, pages 1042–1050, 2009.
[9] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In Proceedings of the IEEE International Conference on Computer Vision, pages 2130–2137, 2009. | 1504.03410#40 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 41 | [10] X. Li, G. Lin, C. Shen, A. v. d. Hengel, and A. Dick. Learning hash functions using column generation. In Proceedings of the International Conference on Machine Learning, pages 142–150, 2013.
[11] M. Lin, Q. Chen, and S. Yan. Network in network. In Proceedings of the International Conference on Learning Representations, 2014.
[12] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2074–2081, 2012.
[13] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. In Proceedings of the International Conference on Machine Learning, pages 1–8, 2011.
[14] X. Liu, J. He, B. Lang, and S.-F. Chang. Hash bit selection: a unified solution for selection problems in hashing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1570–1577, 2013. | 1504.03410#41 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 42 | [15] B. Neyshabur, N. Srebro, R. Salakhutdinov, Y. Makarychev, and P. Yadollahpour. The power of asymmetry in binary hashing. In Advances in Neural Information Processing Systems, pages 2823–2831, 2013.
[16] M. Norouzi and D. M. Blei. Minimal loss hashing for compact binary codes. In Proceedings of the International Conference on Machine Learning, pages 353–360, 2011.
[17] M. Norouzi, D. J. Fleet, and R. Salakhutdinov. Hamming distance metric learning. In Proceedings of the Advances in Neural Information Processing Systems, pages 1–9, 2012.
[18] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145–175, 2001.
[19] R. Salakhutdinov and G. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 412–419, 2007. | 1504.03410#42 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 43 | [20] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
[21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[22] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, pages 1139–1147, 2013.
[23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. | 1504.03410#43 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.03410 | 44 | [24] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1708, 2014.
[25] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for large-scale search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(12):2393–2406, 2012.
[26] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Proceedings of the Advances in Neural Information Processing Systems, pages 1753–1760, 2008.
[27] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2156–2162, 2014. | 1504.03410#44 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks |
1504.00941 | 1 | # Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton (Google)
# Abstract
Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that uses recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.
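A minimal sketch of the initialization the abstract describes, for a plain RNN with rectified linear hidden units; the layer sizes and the small input-weight scale are illustrative choices, not the paper's settings.

```python
import numpy as np

# Sketch of the proposed initialization: the recurrent weight matrix starts
# as the identity (or a scaled identity), so at initialization the hidden
# state is simply copied forward, modified only by the inputs.
hidden, inputs = 100, 32
W_hh = np.eye(hidden)                               # identity recurrent init
W_xh = np.random.default_rng(0).normal(scale=0.001, size=(hidden, inputs))
b = np.zeros(hidden)

def step(h, x):
    return np.maximum(0.0, W_hh @ h + W_xh @ x + b)  # ReLU recurrence

h = np.zeros(hidden)
for x in np.random.default_rng(1).normal(size=(50, inputs)):  # 50 time steps
    h = step(h, x)
print(h.shape)  # (100,)
```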
| 1504.00941#1 | A Simple Way to Initialize Recurrent Networks of Rectified Linear Units | http://arxiv.org/pdf/1504.00941 | Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton | cs.NE, cs.LG | cs.NE | 20150403 | 20150407 |
# 1 Introduction
Recurrent neural networks (RNNs) are very powerful dynamical systems and they are the natural way of using neural networks to map an input sequence to an output sequence, as in speech recognition and machine translation, or to predict the next term in a sequence, as in language modeling. However, training RNNs by using back-propagation through time [30] to compute error-derivatives can be difficult. Early attempts suffered from vanishing and exploding gradients [15] and this meant that they had great difficulty learning long-term dependencies. Many different methods have been proposed for overcoming this difficulty.
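To see why this is hard, it helps to look at the product of per-step Jacobians that backpropagation through time multiplies together. The following sketch is an editorial illustration, not the authors' code; the network size, random weight scale and sequence length are arbitrary assumptions.

```python
import numpy as np

# Backpropagation through time scales the error signal at step 0 by the
# product of T Jacobians dh_t/dh_{t-1}. For a tanh RNN each Jacobian is
# diag(1 - h_t^2) @ W, so the product typically decays or grows
# exponentially with T: the vanishing/exploding gradient problem.
rng = np.random.default_rng(0)
n, T = 100, 150
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # illustrative scaling

h = rng.normal(size=n)
jac_prod = np.eye(n)
for _ in range(T):
    h = np.tanh(W @ h)
    J = (1.0 - h ** 2)[:, None] * W  # Jacobian of tanh(W h) w.r.t. previous h
    jac_prod = J @ jac_prod

# After 150 steps this norm is usually many orders of magnitude away from 1.
print(np.linalg.norm(jac_prod))
```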
A method that has produced some impressive results [23, 24] is to abandon stochastic gradient descent in favor of a much more sophisticated Hessian-Free (HF) optimization method. HF operates on large mini-batches and is able to detect promising directions in the weight-space that have very small gradients but even smaller curvature. Subsequent work, however, suggested that similar results could be achieved by using stochastic gradient descent with momentum provided the weights were initialized carefully [34] and large gradients were clipped [28]. Further developments of the HF approach look promising [35, 25] but are much harder to implement than popular simple methods such as stochastic gradient descent with momentum [34] or adaptive learning rates for each weight that depend on the history of its gradients [5, 14].
The most successful technique to date is the Long Short Term Memory (LSTM) Recurrent Neural Network, which uses stochastic gradient descent but changes the hidden units in such a way that the backpropagated gradients are much better behaved [16]. LSTM replaces logistic or tanh hidden units with "memory cells" that can store an analog value. Each memory cell has its own input and output gates that control when inputs are allowed to add to the stored analog value and when this value is allowed to influence the output. These gates are logistic units with their own learned weights on connections coming from the input and also the memory cells at the previous time-step. There is also a forget gate with learned weights that controls the rate at which the analog value stored in the memory cell decays. For periods when the input and output gates are off and the forget gate is not causing decay, a memory cell simply holds its value over time, so the gradient of the error w.r.t. its stored value stays constant when backpropagated over those periods.
The first major success of LSTMs was for the task of unconstrained handwriting recognition [12]. Since then, they have achieved impressive results on many other tasks including speech recognition [13, 10], handwriting generation [8], sequence to sequence mapping [36], machine translation [22, 1], image captioning [38, 18], parsing [37] and predicting the outputs of simple computer programs [39].
The impressive results achieved using LSTMs make it important to discover which aspects of the rather complicated architecture are crucial for its success and which are mere passengers. It seems unlikely that Hochreiter and Schmidhuber's [16] initial design combined with the subsequent introduction of forget gates [6, 7] is the optimal design: at the time, the important issue was to find any scheme that could learn long-range dependencies rather than to find the minimal or optimal scheme. One aim of this paper is to cast light on what aspects of the design are responsible for the success of LSTMs.
Recent research on deep feedforward networks has also produced some impressive results [19, 3] and there is now a consensus that for deep networks, rectified linear units (ReLUs) are easier to train than the logistic or tanh units that were used for many years [27, 40]. At first sight, ReLUs seem inappropriate for RNNs because they can have very large outputs, so they might be expected to be far more likely to explode than units that have bounded values. A second aim of this paper is to explore whether ReLUs can be made to work well in RNNs and whether the ease of optimizing them in feedforward nets transfers to RNNs.
# 2 The initialization trick
In this paper, we demonstrate that, with the right initialization of the weights, RNNs composed of rectified linear units are relatively easy to train and are good at modeling long-range dependencies. The RNNs are trained by using backpropagation through time to get error-derivatives for the weights and by updating the weights after each small mini-batch of sequences. Their performance on test data is comparable with LSTMs, both for toy problems involving very long-range temporal structures and for real tasks like predicting the next word in a very large corpus of text.
We initialize the recurrent weight matrix to be the identity matrix and biases to be zero. This means that each new hidden state vector is obtained by simply copying the previous hidden vector then adding on the effect of the current inputs and replacing all negative states by zero. In the absence of input, an RNN that is composed of ReLUs and initialized with the identity matrix (which we call an IRNN) just stays in the same state indefinitely. The identity initialization has the very desirable property that when the error derivatives for the hidden units are backpropagated through time they remain constant provided no extra error-derivatives are added. This is the same behavior as LSTMs when their forget gates are set so that there is no decay and it makes it easy to learn very long-range temporal dependencies.
We also find that for tasks that exhibit fewer long-range dependencies, scaling the identity matrix by a small scalar is an effective mechanism to forget long-range effects. This is the same behavior as LSTMs when their forget gates are set so that the memory decays fast.
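As a concrete illustration of the copying behaviour described above, here is a minimal sketch of one IRNN step. It is an editorial illustration rather than the authors' code; the input dimension and the demo values are arbitrary assumptions.

```python
import numpy as np

n = 100
W_hh = np.eye(n)           # identity initialization; use 0.01 * np.eye(n)
                           # for tasks that should forget long-range effects
W_xh = np.zeros((n, 2))    # zero input weights, only for this demo
b = np.zeros(n)

def irnn_step(h, x):
    # copy the previous state, add the input effect, zero out negatives
    return np.maximum(0.0, W_hh @ h + W_xh @ x + b)

h0 = np.abs(np.random.randn(n))   # any non-negative starting state
h = h0
for _ in range(1000):             # with no input, the state is held exactly
    h = irnn_step(h, np.zeros(2))
assert np.allclose(h, h0)
```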
Our initialization scheme bears some resemblance to the idea of Mikolov et al. [26], where a part of the weight matrix is fixed to identity or approximate identity. The main difference between their work and ours is that our network uses rectified linear units and the identity matrix is only used for initialization. The scaled identity initialization was also proposed in Socher et al. [32] in the context of tree-structured networks but without the use of ReLUs. Our work is also related to the work of Saxe et al. [31], who study the use of orthogonal matrices as initialization in deep networks.
# 3 Overview of the experiments
Consider a recurrent net with two input units. At each time step, the first input unit has a real value and the second input unit has a value of 0 or 1 as shown in figure 1. The task is to report the sum of the two real values that are marked by having a 1 as the second input [16, 15, 24]. IRNNs can learn to handle sequences with a length of 300, which is a challenging regime for other algorithms.
Another challenging toy problem is to learn to classify the MNIST digits when the 784 pixels are presented sequentially to the recurrent net. Again, the IRNN was better than the LSTM, having been able to achieve 3% test set error compared to 34% for LSTM.
While it is possible that a better tuned LSTM (with a different architecture or hidden state size) would outperform the IRNN for the above two tasks, the fact that the IRNN performs as well as it does, with so little tuning, is very encouraging, especially given how much simpler the model is compared to the LSTM.
We also compared IRNNs with LSTMs on a large language modeling task. Each memory cell of an LSTM is considerably more complicated than a rectified linear unit and has many more parameters, so it is not entirely obvious what to compare. We tried to balance for both the number of parameters and the complexity of the architecture by comparing an LSTM with N memory cells with an IRNN with four layers of N hidden units, and an IRNN with one layer and 2N hidden units. Here we find that the IRNN gives results comparable to the equivalent LSTM.
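The rough parameter arithmetic behind this balancing can be checked directly. The formulas below are standard approximations and an editorial sketch rather than the paper's own accounting: embedding and softmax layers are ignored, and the LSTM is assumed to have no peephole connections.

```python
def lstm_params(n, d):
    # input, forget and output gates plus the cell candidate:
    # four blocks of n x (n + d) weights and n biases each
    return 4 * n * (n + d + 1)

def stacked_irnn_params(n, d, layers):
    total, fan_in = 0, d
    for _ in range(layers):
        total += n * (fan_in + n + 1)  # input weights, recurrent weights, bias
        fan_in = n
    return total

N = d = 512
print(lstm_params(N, d))             # 2,099,200
print(stacked_irnn_params(N, d, 4))  # 2,099,200 -- matches the LSTM
print(stacked_irnn_params(2 * N, d, 1))
```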
Finally, we benchmarked IRNNs and LSTMs on an acoustic modeling task on TIMIT. As the tasks only require a short term memory of the inputs, we used the identity matrix scaled by 0.01 as initialization for the recurrent matrix. Results show that our method is also comparable to LSTMs, despite being a lot simpler to implement.
# 4 Experiments
In the following experiments, we compare IRNNs against LSTMs, RNNs that use tanh units and RNNs that use ReLUs with random Gaussian initialization.
For IRNNs, in addition to the recurrent weights being initialized at identity, the non-recurrent weights are initialized with a random matrix, whose entries are sampled from a Gaussian distribution with mean of zero and standard deviation of 0.001.
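Putting the two initialization rules together gives the following helper (a hypothetical sketch; the function name and seeding are editorial, the two rules are from the text):

```python
import numpy as np

def init_irnn(n_in, n_hid, scale=1.0, seed=0):
    """Recurrent weights at (scaled) identity, non-recurrent weights drawn
    from N(0, 0.001^2), biases at zero, as described above."""
    rng = np.random.default_rng(seed)
    return {
        "W_hh": scale * np.eye(n_hid),
        "W_xh": rng.normal(0.0, 0.001, size=(n_hid, n_in)),
        "b": np.zeros(n_hid),
    }

params = init_irnn(n_in=2, n_hid=100)          # IRNN
irnn_speech = init_irnn(120, 500, scale=0.01)  # the iRNN variant used later
```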
Our implementation of the LSTMs is rather standard and includes the forget gate. It is observed that setting a higher initial forget gate bias for LSTMs can give better results for long term dependency problems. We therefore also performed a grid search for the initial forget gate bias in LSTMs from the set {1.0, 4.0, 10.0, 20.0}. Other than that we did not tune the LSTMs much and it is possible that the results of LSTMs in the experiments can be improved.
In addition to LSTMs, two other candidates for comparison are RNNs that use the tanh activation function and RNNs that use ReLUs with standard random Gaussian initialization. We experimented with several values of standard deviation for the random initialization Gaussian matrix and found that values suggested in [33] work well.
To train these models, we use stochastic gradient descent with a fixed learning rate and gradient clipping. To ensure that good hyperparameters are used, we performed a grid search over several learning rates α = {10^-9, 10^-8, ..., 10^-1} and gradient clipping values gc = {1, 10, 100, 1000} [9, 36]. The reported result is the best result over the grid search. We also use the same batch size of 16 examples for all methods. The experiments are carried out using the DistBelief infrastructure, where each experiment only uses one replica [20, 4].
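A sketch of that outer loop is shown below. The paper does not say which clipping variant was used, so clipping by the global gradient norm is assumed here as one common choice; the helper names are editorial.

```python
import itertools
import numpy as np

def sgd_step(params, grads, lr, gc):
    # rescale the whole gradient if its norm exceeds the clipping value gc
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads.values()))
    scale = min(1.0, gc / (norm + 1e-12))
    for name in params:
        params[name] -= lr * scale * grads[name]

# grid searched exactly as in the text: every (learning rate, clipping) pair
grid = list(itertools.product([10.0 ** -e for e in range(1, 10)],
                              [1, 10, 100, 1000]))
```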
# 4.1 The Adding Problem
The adding problem is a toy task, designed to examine the power of recurrent models in learning long-term dependencies [16, 15]. This is a sequence regression problem where the target is a sum of two numbers selected in a sequence of random signals, which are sampled from a uniform distribution in [0,1]. At every time step, the input consists of a random signal and a mask signal. The mask signal has a value of zero at all time steps except for two steps when it has values of 1 to indicate which two numbers should be added. An example of the adding problem is shown in figure 1 below.
A basic baseline is to always predict the sum to have a value of 1 regardless of the inputs. This gives a Mean Squared Error (MSE) of around 0.1767. The goal is to train a model that achieves an MSE well below 0.1767.
[Figure 1 graphic: the upper row shows the random signals 0.5, 0.7, 0.3, 0.1, 0.2, 0.6, 0.5, 0.9, 0.8, 0.1 and the lower row shows the mask 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, with target 1.2.]

Figure 1: An example of the "adding" problem, where the target is 1.2, which is the sum of the 2nd and the 7th numbers in the first sequence [24].
The problem gets harder as the length of the sequence T increases because the dependency between the output and the relevant inputs becomes more remote. To solve this problem, the recurrent net must remember the first number or the sum of the two numbers accurately whilst ignoring all of the irrelevant numbers.
We generated a training set of 100,000 examples and a test set of 10,000 examples as we varied T. We fixed the hidden states to have 100 units for all of our networks (LSTMs, RNNs and IRNNs). This means the LSTMs had more parameters by a factor of about 4 and also took about 4 times as much computation per timestep.
As we varied T, we noticed that both LSTMs and RNNs started to struggle when T is around 150. We therefore focused on investigating the behaviors of all models from this point onwards. The results of the experiments with T = 150, T = 200, T = 300, T = 400 are reported in figure 2 below (best hyperparameters found during grid search are listed in table 1).
[Figure 2 graphic: four panels, "Adding two numbers in a sequence of 150 / 200 / 300 / 400 numbers", each plotting test MSE against training steps (0 to 9×10^6) for LSTM, RNN + Tanh, RNN + ReLUs and IRNN.]
Figure 2: The results of recurrent methods on the "adding" problem for the case of T = 150 (top left), T = 200 (top right), T = 300 (bottom left) and T = 400 (bottom right). The objective function is the Mean Squared Error, reported on the test set of 10,000 examples. Note that always predicting the sum to be 1 should give an MSE of 0.1767.
The results show that the convergence of IRNNs is as good as that of LSTMs, even though each LSTM step is more expensive than an IRNN step (at least 4x more expensive). Adding two numbers in a sequence of 400 numbers is somewhat challenging for both algorithms.
| T   | IRNN                | RNN + Tanh          | LSTM                           |
|-----|---------------------|---------------------|--------------------------------|
| 150 | lr = 0.01, gc = 100 | lr = 0.01, gc = 100 | lr = 0.01, gc = 10, fb = 1.0   |
| 200 | lr = 0.01, gc = 1   | N/A                 | lr = 0.001, gc = 100, fb = 4.0 |
| 300 | lr = 0.01, gc = 10  | N/A                 | lr = 0.01, gc = 1, fb = 4.0    |
| 400 | lr = 0.01, gc = 1   | N/A                 | lr = 0.01, gc = 100, fb = 10.0 |

Table 1: Best hyperparameters found for adding problems after grid search. lr is the learning rate, gc is gradient clipping, and fb is the forget gate bias. N/A means no hyperparameter combination gave a good result.
# 4.2 MNIST Classification from a Sequence of Pixels
Another challenging toy problem is to learn to classify the MNIST digits [21] when the 784 pixels are presented sequentially to the recurrent net. In our experiments, the networks read one pixel at a time in scanline order (i.e. starting at the top left corner of the image, and ending at the bottom right corner). The networks are asked to predict the category of the MNIST image only after seeing all 784 pixels. This is therefore a huge long range dependency problem because each recurrent network has 784 time steps.
To make the task even harder, we also used a fixed random permutation of the pixels of the MNIST digits and repeated the experiments.
All networks have 100 recurrent hidden units. We stop the optimization after it converges or when it reaches 1,000,000 iterations and report the results in figure 3 (best hyperparameters are listed in table 2).
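The input pipeline implied by this setup is tiny. The sketch below is editorial (variable names are assumptions), but it follows the scanline-order and fixed-permutation description exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
perm = rng.permutation(784)  # one fixed permutation shared by all images

def pixels_as_sequence(image, permuted=False):
    # scanline order: top-left pixel first, bottom-right pixel last
    seq = image.reshape(784, 1)
    return seq[perm] if permuted else seq
```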
[Figure 3 graphic: two panels, "Pixel-by-pixel MNIST" and "Pixel-by-pixel permuted MNIST", each plotting test accuracy against training steps (0 to 10^6) for LSTM, RNN + Tanh, RNN + ReLUs and IRNN.]
Figure 3: The results of recurrent methods on the "pixel-by-pixel MNIST" problem. We report the test set accuracy for all methods. Left: normal MNIST. Right: permuted MNIST.
| Problem        | IRNN               | RNN + ReLUs         | RNN + Tanh          | LSTM                         |
|----------------|--------------------|---------------------|---------------------|------------------------------|
| MNIST          | lr = 10^-8, gc = 1 | lr = 10^-8, gc = 10 | lr = 10^-8, gc = 10 | lr = 0.01, gc = 1, fb = 1.0  |
| permuted MNIST | lr = 10^-9, gc = 1 | lr = 10^-6, gc = 10 | lr = 10^-8, gc = 1  | lr = 0.01, gc = 1, fb = 1.0  |

Table 2: Best hyperparameters found for pixel-by-pixel MNIST problems after grid search. lr is the learning rate, gc is gradient clipping, and fb is the forget gate bias.
The results using the standard scanline ordering of the pixels show that this problem is so difficult that standard RNNs fail to work, even with ReLUs, whereas the IRNN achieves a 3% test error rate, which is better than most off-the-shelf linear classifiers [21]. We were surprised that the LSTM did not work as well as the IRNN given the various initialization schemes that we tried. While it is still possible that a better tuned LSTM would do better, the fact that the IRNN performs well is encouraging.
Applying a fixed random permutation to the pixels makes the problem even harder, but IRNNs on the permuted pixels are still better than LSTMs on the non-permuted pixels.
The low error rates of the IRNN suggest that the model can discover long range correlations in the data while making weak assumptions about the inputs. This could be important for problems where the input data come in the form of variable-sized vectors (e.g. the repeated field of a protobuffer¹).
# 4.3 Language Modeling
We benchmarked RNNs, IRNNs and LSTMs on the one billion word language modelling dataset [2], perhaps the largest public benchmark in language modeling. We chose an output vocabulary of 1,000,000 words.
As the dataset is large, we observed that the performance of recurrent methods depends on the size of the hidden states: they perform better as the size of the hidden states gets larger (cf. [2]). We however focused on a set of simple controlled experiments to understand how different recurrent methods behave when they have a similar number of parameters. We first ran an experiment where the number of hidden units (or memory cells) in the LSTM is chosen to be 512. The LSTM is trained for 60 hours using 32 replicas. Our goal is then to check how well IRNNs perform given the same experimental environment and settings. As LSTMs have more parameters per time step, we compared them with an IRNN that had 4 layers and the same number of hidden units per layer (which gives approximately the same number of parameters).
We also experimented with shallow RNNs and IRNNs with 1024 units. Since the output vocabulary is large, we projected the 1024 hidden units to a linear layer with 512 units before the softmax. This avoids greatly increasing the number of parameters.
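The saving from this projection is easy to quantify (a back-of-the-envelope editorial sketch, ignoring biases):

```python
vocab, n_hid, n_proj = 1_000_000, 1024, 512

without_projection = n_hid * vocab                 # 1024 x 1e6 weights
with_projection = n_hid * n_proj + n_proj * vocab  # roughly half as many
print(without_projection, with_projection)
```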
The results are reported in table 3, which shows that the performance of IRNNs is closer to the performance of LSTMs for this large-scale task than it is to the performance of RNNs.
# 4.4 Speech Recognition

We performed phoneme recognition experiments on TIMIT with IRNNs and Bidirectional IRNNs and compared them to RNNs, LSTMs, Bidirectional LSTMs and Bidirectional RNNs. Bidirectional LSTMs have been applied previously to TIMIT in [11]. In these experiments we generated phoneme alignments from Kaldi [29] using the recipe reported in [17] and trained all RNNs with two and five hidden layers. Each model was given log Mel filter bank spectra with their deltas and accelerations, where each frame was 120 (= 40*3) dimensional, and was trained to predict the phone state (1 of 180). Frame error rates (FER) from this task are reported in table 4.
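The 120-dimensional frames can be assembled as follows. This is an editorial sketch: the exact delta filter is not specified in the text, so plain numerical differences stand in for it.

```python
import numpy as np

def stack_deltas(fbank):
    """fbank: (num_frames, 40) log Mel filter bank values."""
    delta = np.gradient(fbank, axis=0)  # first-order differences
    accel = np.gradient(delta, axis=0)  # "accelerations"
    return np.concatenate([fbank, delta, accel], axis=1)  # (num_frames, 120)
```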
In this task, instead of the identity initialization for the IRNN matrices we used 0.01I, so we refer to them as iRNNs. Initializing with the full identity led to slow convergence, worse results, and sometimes led to the model diverging during training. We hypothesize that this was because in the speech task similar inputs are provided to the neural net in neighboring frames. The normal IRNN keeps integrating this past input, instead of paying attention mainly to the current input, because it has a difficult time forgetting the past. So for the speech task, we are not only showing that iRNNs work much better than RNNs composed of tanh units, but we are also showing that initialization with the full identity is suboptimal when long range effects are not needed. Multiplying the identity by a small scalar seems to be a good remedy in such cases.
¹ https://code.google.com/p/protobuf/
| Methods                                    | Frame error rates (dev / test) |
|--------------------------------------------|--------------------------------|
| RNN (500 neurons, 2 layers)                | 35.0 / 36.2 |
| LSTM (250 cells, 2 layers)                 | 34.5 / 35.4 |
| iRNN (500 neurons, 2 layers)               | 34.3 / 35.5 |
| RNN (500 neurons, 5 layers)                | 35.6 / 37.0 |
| LSTM (250 cells, 5 layers)                 | 35.0 / 36.2 |
| iRNN (500 neurons, 5 layers)               | 33.0 / 33.8 |
| Bidirectional RNN (500 neurons, 2 layers)  | 31.5 / 32.4 |
| Bidirectional LSTM (250 cells, 2 layers)   | 29.6 / 30.6 |
| Bidirectional iRNN (500 neurons, 2 layers) | 31.9 / 33.2 |
| Bidirectional RNN (500 neurons, 5 layers)  | 33.9 / 34.8 |
| Bidirectional LSTM (250 cells, 5 layers)   | 28.5 / 29.1 |
| Bidirectional iRNN (500 neurons, 5 layers) | 28.9 / 29.7 |

Table 4: Frame error rates of recurrent methods on the TIMIT phone recognition task.
In general in the speech recognition task, the iRNN easily outperforms the RNN that uses tanh units and is comparable to the LSTM, although we don't rule out the possibility that with very careful tuning of hyperparameters, the relative performance of LSTMs or the iRNNs might change. A five layer Bidirectional LSTM outperforms all the other models on this task, followed closely by a five layer Bidirectional iRNN.
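For reference, the bidirectional wrapper used in these comparisons can be sketched as two independent passes over the utterance. This is an editorial illustration; `step_fwd` and `step_bwd` stand for any of the recurrent cells above, and the layer size follows Table 4.

```python
import numpy as np

def run_rnn(step, xs, n_hid=500):
    h, states = np.zeros(n_hid), []
    for x in xs:
        h = step(h, x)
        states.append(h)
    return np.stack(states)                    # (T, n_hid)

def bidirectional(step_fwd, step_bwd, xs, n_hid=500):
    fwd = run_rnn(step_fwd, xs, n_hid)
    bwd = run_rnn(step_bwd, xs[::-1], n_hid)[::-1]
    return np.concatenate([fwd, bwd], axis=1)  # fed to the phone classifier
```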
# 4.5 Acknowledgements
We thank Jeff Dean, Matthieu Devin, Rajat Monga, David Sussillo, Ilya Sutskever and Oriol Vinyals for their help with the project.
# References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, and P. Koehn. One billion word benchmark for measuring progress in statistical language modeling. CoRR, abs/1312.3005, 2013.
[3] G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing - Special Issue on Deep Learning for Speech and Language Processing, 2012.
[4] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. A. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In NIPS, 2012.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121â2159, 2011.
[6] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 2000.
[7] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber. Learning precise timing with LSTM recurrent networks. The Journal of Machine Learning Research, 2003.
[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[9] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint, 2013.
[10] A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[11] A. Graves, N. Jaitly, and A-R. Mohamed. Hybrid speech recognition with deep bidirectional LSTM. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013.
[12] A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
[13] A. Graves, A-R. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
[14] G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
[15] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural Networks, 2001.
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[17] N. Jaitly. Exploring Deep Learning Methods for discovering features in speech signals. PhD thesis, University of Toronto, 2014.
[18] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[20] Q. V. Le, M. A. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[22] T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
[23] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[24] J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-Free optimization. In ICML, 2011.
[25] J. Martens and I. Sutskever. Training deep and recurrent neural networks with Hessian-Free optimization. Neural Networks: Tricks of the Trade, 2012.
[26] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. A. Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.
[27] V. Nair and G. Hinton. Rectified Linear Units improve Restricted Boltzmann Machines. In International Conference on Machine Learning, 2010.
[28] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.
[29] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
[30] D. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[31] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
[32] R. Socher, J. Bauer, C. D. Manning, and A. Y. Ng. Parsing with compositional vector grammars. In ACL, 2013.
[33] D. Sussillo and L. F. Abbott. Random walk initialization for training very deep networks. arXiv preprint arXiv:1412.6558, 2015.
[34] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[35] I. Sutskever, J. Martens, and G. E. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, pages 1017–1024, 2011.
[36] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[37] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. arXiv preprint arXiv:1412.7449, 2014.
[38] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
[39] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
[40] M. Zeiler, M. Ranzato, R. Monga, M. Mao, K. Yang, Q. V. Le, P. Nguyen, A. Senior, V. Vanhoucke, and J. Dean. On rectified linear units for speech processing. In IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
1504.00702 | 0 | arXiv:1504.00702v5 [cs.LG] 19 Apr 2016
Journal of Machine Learning Research 17 (2016) 1-40
Submitted 10/15; Published 4/16
# End-to-End Training of Deep Visuomotor Policies
Sergey Levine* ([email protected]), Chelsea Finn* ([email protected]), Trevor Darrell ([email protected]), Pieter Abbeel ([email protected])
Division of Computer Science, University of California, Berkeley, CA 94720-1776, USA. *These authors contributed equally.
Editor: Jan Peters
# Abstract | 1504.00702#0 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 1 | Editor: Jan Peters
# Abstract
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods. Keywords: Reinforcement Learning, Optimal Control, Vision, Neural Networks
# 1. Introduction | 1504.00702#1 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 2 | # 1. Introduction
Robots can perform impressive tasks under human control, including surgery (Lanfranco et al., 2004) and household chores (Wyrobek et al., 2008). However, designing the perception and control software for autonomous operation remains a major challenge, even for basic tasks. Policy search methods hold the promise of allowing robots to automatically learn new behaviors through experience (Kober et al., 2010b; Deisenroth et al., 2011; Kalakrishnan et al., 2011; Deisenroth et al., 2013). However, policies learned using such methods often rely on a number of hand-engineered components for perception and control, so as to present the policy with a more manageable and low-dimensional representation of observations and actions. The vision system in particular can be complex and prone to errors, and it is typically not improved during policy training, nor adapted to the goal of the task.
In this article, we aim to answer the following question: can we acquire more effective policies for sensorimotor control if the perception system is trained jointly with the control policy, rather than separately? In order to represent a policy that performs both
(Figure 1 panel labels: hanger, cube, hammer, bottle) | 1504.00702#2 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 3 | Figure 1: Our method learns visuomotor policies that directly use camera image observations (left) to set motor torques on a PR2 robot (right).
perception and control, we use deep neural networks. Deep neural network representations have recently seen widespread success in a variety of domains, such as computer vision and speech recognition, and even playing video games. However, using deep neural networks for real-world sensorimotor policies, such as robotic controllers that map image pixels and joint angles to motor torques, presents a number of unique challenges. Successful applications of deep neural networks typically rely on large amounts of data and direct supervision of the output, neither of which is available in robotic control. Real-world robot interaction data is scarce, and task completion is defined at a high level by means of a cost function, which means that the learning algorithm must determine on its own which action to take at each point. From the control perspective, a further complication is that observations from the robot's sensors do not provide us with the full state of the system. Instead, important state information, such as the positions of task-relevant objects, must be inferred from inputs such as camera images. | 1504.00702#3 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 4 | We address these challenges by developing a guided policy search algorithm for sensorimotor deep learning, as well as a novel CNN architecture designed for robotic control. Guided policy search converts policy search into supervised learning, by iteratively constructing the training data using an efficient model-free trajectory optimization procedure. We show that this can be formalized as an instance of Bregman ADMM (BADMM) (Wang and Banerjee, 2014), which can be used to show that the algorithm converges to a locally optimal solution. In our method, the full state of the system is observable at training time, but not at test time. For most tasks, providing the full state simply requires positioning objects in one of several known positions for each trial during training. At test time, the learned CNN policy can handle novel, unknown configurations, and no longer requires full state information. Since the policy is optimized with supervised learning, we can use standard methods like stochastic gradient descent for training. Our CNNs have 92,000 parameters and 7 layers, including a novel spatial feature point transformation that provides accurate spatial reasoning and reduces overfitting. This allows us to train | 1504.00702#4 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 5 | our policies with relatively modest amounts of data and only tens of minutes of real-world interaction time. We evaluate our method by learning policies for inserting a block into a shape sorting cube, screwing a cap onto a bottle, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a rack with a PR2 robot (see Figure 1). These tasks require localization, visual tracking, and handling complex contact dynamics. Our results demonstrate improvements in consistency and generalization from training visuomotor policies end-to-end, when compared to training the vision and control components separately. We also present simulated comparisons that show that guided policy search outperforms a | 1504.00702#5 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 7 | # 2. Related Work
Reinforcement learning and policy search methods (Gullapalli, 1990; Williams, 1992) have been applied in robotics for playing games such as table tennis (Kober et al., 2010b), object manipulation (Gullapalli, 1995; Peters and Schaal, 2008; Kober et al., 2010a; Deisenroth et al., 2011; Kalakrishnan et al., 2011), locomotion (Benbrahim and Franklin, 1997; Kohl and Stone, 2004; Tedrake et al., 2004; Geng et al., 2006; Endo et al., 2008), and flight (Ng et al., 2004). Several recent papers provide surveys of policy search in robotics (Deisenroth et al., 2013; Kober et al., 2013). Such methods are typically applied to one component of the robot control pipeline, which often sits on top of a hand-designed controller, such as a PD controller, and accepts processed input, for example from an existing vision pipeline (Kalakrishnan et al., 2011). Our method learns policies that map visual input and joint encoder readings directly to the torques at the robot's joints. By learning the entire mapping from perception to control, the perception layers can be adapted to optimize task performance, and the motor control layers can be adapted to imperfect perception. | 1504.00702#6 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 7 | We represent our policies with convolutional neural networks (CNNs). CNNs have a long history in computer vision and deep learning (Fukushima, 1980; LeCun et al., 1989; Schmidhuber, 2015), and have recently gained prominence due to excellent results on a number of vision benchmarks (Ciresan et al., 2011; Krizhevsky et al., 2012; Ciresan et al., 2012; Girshick et al., 2014a; Tompson et al., 2014; LeCun et al., 2015; He et al., 2015). Most applications of CNNs focus on classification, where locational information is discarded by means of successive pooling layers to provide for invariance (Lee et al., 2009). Applications to localization typically either use a sliding window (Girshick et al., 2014a) or object proposals (Endres and Hoiem, 2010; Uijlings et al., 2013; Girshick et al., 2014b) to localize the object, reducing the task to classification, perform regression to a heatmap of manually labeled keypoints (Tompson et al., 2014), requiring precise knowledge of the object position in the image and camera calibration, or use 3D | 1504.00702#7 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 8 | of manually labeled keypoints (Tompson et al., 2014), requiring precise knowledge of the object position in the image and camera calibration, or use 3D models to localize previously scanned objects (Pepik et al., 2012; Savarese and Fei-Fei, 2007). Many prior robotic applications of CNNs do not directly consider control, but employ CNNs for the perception component of a larger robotic system (Hadsell et al., 2009; Sung et al., 2015; Lenz et al., 2015b; Pinto and Gupta, 2015). We use a novel CNN architecture for our policies that automatically learns feature points that capture spatial information about the scene, without any supervision beyond the information from the robot's encoders and camera. | 1504.00702#8 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 9 | Applications of deep learning in robotic control have been less prevalent in recent years than in visual recognition. Backpropagation through the dynamics and the image formation process is typically impractical, since they are often non-differentiable, and such long-range backpropagation can lead to extreme numerical instability, since the linearization of a suboptimal policy is likely to be unstable. This issue has also been observed in the related context of recurrent neural networks (Hochreiter et al., 2001; Pascanu and Bengio, 2012). The high dimensionality of the network also makes reinforcement learning difficult (Deisenroth et al., 2013). Pioneering early work on neural network control used | 1504.00702#9 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 10 | small, simple networks (Pomerleau, 1989; Hunt et al., 1992; Bekey and Goldberg, 1992; Lewis et al., 1998; Bakker et al., 2003; Mayer et al., 2006), and has largely been supplanted by methods that use carefully designed policies that can be learned efficiently with reinforcement learning (Kober et al., 2013). More recent work on sensorimotor deep learning has tackled simple task-space motions (Lenz et al., 2015a; Lampe and Riedmiller, 2013) and used unsupervised learning to obtain low-dimensional state spaces from images (Lange et al., 2012). Such methods have been demonstrated on tasks with a low-dimensional underlying structure: Lenz et al. (2015a) controls the end-effector in 2D space, while Lange et al. (2012) controls a 2-dimensional slot car with 1-dimensional actions. Our experiments include full torque control of 7-DoF robotic arms interacting with objects, with 30-40 state dimensions. In simple synthetic environments, control from images has been addressed with image features (Jodogne and Piater, 2007), nonparametric methods (van Hoof et al., 2015), | 1504.00702#10 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 11 | and unsupervised state-space learning (Böhmer et al., 2013; Jonschkowski and Brock, 2014). CNNs have also been trained to play video games with Q-learning, Monte Carlo tree search, and stochastic search (Mnih et al., 2013; Koutník et al., 2013; Guo et al., 2014), and have been applied to simple simulated control tasks (Watter et al., 2015; Lillicrap et al., 2015). However, such methods have only been demonstrated on synthetic domains that lack the visual complexity of the real world, and require an impractical number of samples for real-world robotic learning. Our method is sample efficient, requiring only minutes of interaction time. To the best of our knowledge, this is the first method that can train deep visuomotor policies for complex, high-dimensional manipulation skills with direct torque control. | 1504.00702#11 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 12 | Learning visuomotor policies on a real robot requires handling complex observations and high dimensional policy representations. We tackle these challenges using guided policy search. In guided policy search, the policy is optimized using supervised learning, which scales gracefully with the dimensionality of the policy. The training set for supervised learning can be constructed using trajectory optimization under known dynamics (Levine and Koltun, 2013a,b, 2014; Mordatch and Todorov, 2014) and trajectory-centric reinforcement learning methods that operate under unknown dynamics (Levine and Abbeel, 2014; Levine et al., 2015), which is the approach taken in this work. In both cases, the supervision is adapted to the policy, to ensure that the final policy can reproduce the training data. The use of supervised learning in the inner loop of iterative policy search has also been proposed in the context of imitation learning (Ross et al., 2011, 2013). However, such methods typically do not address the question of how the supervision should be adapted to the policy. The goal of our approach is also similar to visual servoing, which performs feedback control on feature points in a camera image (Espiau et al., 1992; Mohta et | 1504.00702#12 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 13 | The goal of our approach is also similar to visual servoing, which performs feedback control on feature points in a camera image (Espiau et al., 1992; Mohta et al., 2014; Wilson et al., 1996). However, our visuomotor policies are entirely learned from real-world data, and do not require feature points or feedback controllers to be specified by hand. This allows our method much more flexibility in choosing how to use the visual signal. Our approach also does not require any sort of camera calibration, in contrast to many visual servoing methods (though not all; see e.g. Jägersand et al. (1997); Yoshimi and Allen (1994)). | 1504.00702#13 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 15 | # 3. Background and Overview
In this section, we define the visuomotor policy learning problem and present an overview of our approach. The core component of our approach is a guided policy search algorithm
that separates the problem of learning visuomotor policies into separate supervised learning and trajectory learning phases, each of which is easier than optimizing the policy directly. We also discuss a policy architecture suitable for end-to-end learning of vision and control, and a training setup that allows our method to be applied to real robotic platforms.
# 3.1 Definitions and Problem Formulation | 1504.00702#14 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 15 | In policy search, the goal is to learn a policy π_θ(u_t|o_t) that allows an agent to choose actions u_t in response to observations o_t to control a dynamical system, such as a robot. The policy comes from some parametric class parameterized by θ, which could be, for example, the weights of a neural network. The system is defined by states x_t, actions u_t, and observations o_t. For example, x_t might include the joint angles of the robot, the positions of objects in the world, and their time derivatives, u_t might consist of motor torque commands, and o_t might include an image from the robot's onboard camera. In this paper, we address finite horizon episodic tasks with t ∈ [1, ..., T]. The states evolve in time according to the system dynamics p(x_{t+1}|x_t, u_t), and the observations are, in general, a stochastic consequence of the states, according to p(o_t|x_t). Neither the dynamics nor the observation distribution are assumed to be known in general. For notational convenience, we will use π_θ(u_t|x_t) to denote the distribution over actions under the policy conditioned on the state. However, since the policy | 1504.00702#15 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 19 | # 3.2 Approach Summary
Our method consists of two main components, which are illustrated in Figure 3. The first is a supervised learning algorithm that trains policies of the form π_θ(u_t|o_t) = N(μ^π(o_t), Σ^π(o_t)), where both μ^π(o_t) and Σ^π(o_t) are general nonlinear functions. In our implementation, μ^π(o_t) is a deep convolutional neural network, while Σ^π(o_t) is an observation-independent learned covariance, though other representations are possible. The second component is a trajectory-centric reinforcement learning (RL) algorithm that generates guiding distributions p_i(u_t|x_t) that provide the supervision used to train the policy. These two components form a policy search algorithm that can be used to learn complex robotic tasks using only a high-level cost function ℓ(x_t, u_t). During training, only samples from the guiding distributions p_i(u_t|x_t) are generated by running rollouts on the physical system, which avoids the need to execute partially trained neural network policies on physical hardware.
Supervised learning will not, in general, produce a policy with good long-horizon performance, since a small mistake on the part of the policy will place the system into states that are outside the distribution in the training data, causing compounding errors. To | 1504.00702#16 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
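To make the policy class of Section 3.2 above concrete, the following sketch samples from a conditional Gaussian π_θ(u_t|o_t) = N(μ^π(o_t), Σ^π) with a learned, observation-independent covariance, as described in the text. All names here (GaussianPolicy, mean_fn, chol_cov) are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

class GaussianPolicy:
    """Conditional Gaussian policy pi_theta(u|o) = N(mu(o), Sigma), a sketch.

    mean_fn stands in for the CNN mu^pi(o); the covariance Sigma is a learned,
    observation-independent matrix, passed here via its Cholesky factor.
    """
    def __init__(self, mean_fn, chol_cov):
        self.mean_fn = mean_fn      # callable: observation -> mean action, shape (dU,)
        self.chol_cov = chol_cov    # Cholesky factor of Sigma, shape (dU, dU)

    def act(self, obs, rng=np.random):
        mu = self.mean_fn(obs)
        eps = rng.standard_normal(mu.shape)
        return mu + self.chol_cov @ eps   # sample u ~ N(mu(o), Sigma)
```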
1504.00702 | 17 | Notation (symbol: definition; example/details):
- x_t: Markovian system state at time step t ∈ [1, T]; joint angles, end-effector pose, object positions, and their velocities; dimensionality: 14 to 32
- u_t: control or action at time step t ∈ [1, T]; joint motor torque commands; dimensionality: 7 (for the PR2 robot)
- o_t: observation at time step t ∈ [1, T]; RGB camera image, joint encoder readings and velocities, end-effector pose; dimensionality: around 200,000
- τ: trajectory, notational shorthand for a sequence of states and actions, τ = {x_1, u_1, x_2, u_2, ..., x_T, u_T}
- ℓ(x_t, u_t): cost function that defines the goal of the task; e.g. distance between an object in the gripper and the target
- p(x_{t+1}|x_t, u_t): unknown system dynamics; the physics that govern the robot and any objects it interacts with
- p(o_t|x_t): unknown observation distribution; the stochastic process that produces camera images from the system state
- π_θ(u_t|o_t): learned nonlinear global policy parameterized by weights θ; a convolutional neural network, such as the one in Figure 2 | 1504.00702#17 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 18 | Notation, continued (symbol: definition; example/details):
- p(o_t|x_t): unknown observation distribution; the stochastic process that produces camera images from the system state
- π_θ(u_t|o_t): learned nonlinear global policy parameterized by weights θ; a convolutional neural network, such as the one in Figure 2
- π_θ(u_t|x_t): ∫ π_θ(u_t|o_t) p(o_t|x_t) do_t; notational shorthand for the observation-based policy conditioned on state
- p_i(u_t|x_t): learned local time-varying linear-Gaussian controller for initial state x_1^i; has the form N(K_t x_t + k_t, C_t)
- π_θ(τ): trajectory distribution for π_θ(u_t|x_t): p(x_1) ∏_{t=1}^T π_θ(u_t|x_t) p(x_{t+1}|x_t, u_t); notational shorthand for the trajectory distribution induced by a policy | 1504.00702#18 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 23 | Table 1: Summary of the notation frequently used in this article.
avoid this issue, the training data must come from the policy's own state distribution (Ross et al., 2011). We achieve this by alternating between trajectory-centric RL and supervised learning. The RL stage adapts to the current policy π_θ(u_t|o_t), providing supervision at states that are iteratively brought closer to the states visited by the policy. This is formalized as a variant of the BADMM algorithm (Wang and Banerjee, 2014) for constrained optimization, which can be used to show that, at convergence, the policy π_θ(u_t|o_t) and the guiding distributions p_i(u_t|x_t) will exhibit the same behavior. This algorithm is derived in Section 4. The guiding distributions are substantially easier to optimize than learning the policy parameters directly (e.g., using model-free reinforcement learning), because they use the full state of the system x_t, while the policy π_θ(u_t|o_t) only uses the observations. This means that the method requires the full state to be known during training, but not at test time. This makes it possible to efficiently learn complex visuomotor policies, but imposes additional assumptions on the observability of x_t during training that we discuss in Section 4. | 1504.00702#19 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
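In the notation of Table 1 above, the goal of policy search can be stated compactly. The following is the standard finite-horizon objective, consistent with the π_θ(τ) entry in the table:

```latex
\min_{\theta} \; \mathbb{E}_{\pi_\theta(\tau)}\!\left[ \sum_{t=1}^{T} \ell(\mathbf{x}_t, \mathbf{u}_t) \right],
\qquad
\pi_\theta(\tau) = p(\mathbf{x}_1) \prod_{t=1}^{T} \pi_\theta(\mathbf{u}_t \mid \mathbf{x}_t)\, p(\mathbf{x}_{t+1} \mid \mathbf{x}_t, \mathbf{u}_t).
```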
1504.00702 | 20 | When learning visuomotor tasks, the policy π_θ(u_t|o_t) is represented by a novel convolutional neural network (CNN) architecture, which we describe in Section 5.2. CNNs have enjoyed considerable success in computer vision (LeCun et al., 2015), but the most popular
(Figure 2 diagram: RGB image (240×240) → 7×7 conv, stride 2, ReLU (conv1) → conv2, ReLU → conv3, ReLU (109×109 response maps) → spatial softmax → expected 2D position (feature points) → concatenation with robot configuration (39) → fully connected, ReLU (40) → fully connected, ReLU (40) → linear → motor torques (7))
Figure 2: Visuomotor policy architecture. The network contains three convolutional layers, followed by a spatial softmax and an expected position layer that converts pixel-wise features to feature points, which are better suited for spatial computations. The points are concatenated with the robot configuration, then passed through three fully connected layers to produce the torques. | 1504.00702#20 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
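A minimal NumPy sketch of the spatial softmax and expected-position transformation of Figure 2 follows. It assumes response maps of shape (C, H, W); the [-1, 1] coordinate normalization is an implementation assumption, not a detail taken from the paper:

```python
import numpy as np

def spatial_softmax_feature_points(response_maps):
    """Convert conv response maps (C, H, W) into C feature points (x, y).

    Each channel is passed through a softmax over pixel locations, and the
    expected image position under that distribution is returned.
    """
    C, H, W = response_maps.shape
    flat = response_maps.reshape(C, H * W)
    flat = flat - flat.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(flat)
    probs /= probs.sum(axis=1, keepdims=True)          # per-channel softmax
    ys, xs = np.mgrid[0:H, 0:W]                        # pixel coordinate grids
    xs = 2.0 * xs.ravel() / (W - 1) - 1.0              # normalize to [-1, 1]
    ys = 2.0 * ys.ravel() / (H - 1) - 1.0
    fx = probs @ xs                                    # expected x per channel
    fy = probs @ ys                                    # expected y per channel
    return np.stack([fx, fy], axis=1)                  # shape (C, 2)
```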
1504.00702 | 21 | architectures rely on large datasets and focus on semantic tasks such as classification, often intentionally discarding spatial information. Our architecture, illustrated in Figure 2, uses a fixed transformation from the last convolutional layer to a set of spatial feature points, which form a concise representation of the visual scene suitable for feedback control. Our network has 7 layers and around 92,000 parameters, which presents a major challenge for standard policy search methods (Deisenroth et al., 2013). To reduce the amount of experience needed to train visuomotor policies, we also introduce a pretraining scheme that allows us to train effective policies with a relatively small number of iterations. The pretraining steps are illustrated in Figure 3. The intuition behind our pretraining is that, although we ultimately seek to obtain sensorimotor policies that combine both vision and control, low-level aspects of vision can be initialized independently. To that end, we pretrain the convolutional layers of our network by predicting | 1504.00702#21 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 | [
{
"id": "1509.06113"
},
{
"id": "1509.02971"
},
{
"id": "1512.03385"
}
] |
1504.00702 | 22 | elements of x_t that are not provided in the observation o_t, such as the positions of objects in the scene. We also initially train the guiding trajectory distributions p_i(u_t|x_t) independently of the convolutional network until the trajectories achieve a basic level of competence at the task, and then switch to full guided policy search with end-to-end training of π_θ(u_t|o_t). In our implementation, we also initialize the first layer filters from the model of Szegedy et al. (2014), which is trained on ImageNet (Deng et al., 2009) classification. The initialization and pretraining scheme is described in Section 5.2. | 1504.00702#22 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
# 4. Guided Policy Search with BADMM
Guided policy search transforms policy search into a supervised learning problem, where the training set is generated by a simple trajectory-centric RL algorithm. This algorithm optimizes linear-Gaussian controllers pi(ut|xt), and is described in Section 4.2. We refer to the trajectory distribution induced by pi(ut|xt) as pi(τ). Each pi(ut|xt) succeeds from different initial states. For example, in the task of placing a cap on a bottle, these initial states correspond to different positions of the bottle. By training on trajectories for multiple bottle positions, the final CNN policy can succeed from all initial states, and can generalize to other states from the same distribution.
The final policy πθ(ut|ot) learned with guided policy search is only provided with observations ot of the full state xt, and the dynamics are assumed to be unknown. A diagram of this method, which corresponds to an expanded version of the guided policy search box in Figure 3, is summarized below. In the outer loop, we draw sample trajectories {τ_i^j} for each initial state on the physical system by running the corresponding controller pi(ut|xt). The samples are used to fit the dynamics pi(xt+1|xt, ut) that are used to improve pi(ut|xt), and serve as training data for the policy. The inner loop alternates between optimizing each pi(τ) and optimizing the policy to match these trajectory distributions. The policy is trained to predict the actions along each trajectory from the observations ot, rather than the full state xt. This allows the policy to directly use raw observations at test time. This alternating optimization can be framed as an instance of the BADMM algorithm (Wang and Banerjee, 2014), which converges to a solution where the trajectory distributions and the policy have the same state distribution. This allows greedy supervised training of the policy to produce a policy with good long-horizon performance.

[Diagram: the guided policy search loop. Outer loop: run each pi(ut|xt) on the robot to collect samples {τ_i^j}, then fit the dynamics. Inner loop: alternately optimize πθ w.r.t. Lθ and optimize each pi(τ) w.r.t. Lp.]
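As a structural illustration of this loop, the sketch below (ours, not the paper's code) runs the outer loop on a simulated linear system, with the two inner-loop updates reduced to placeholders: the KL-constrained controller update of Section 4.2 is omitted, and the policy is fit by plain least squares on (ot, ut) pairs so that the data flow is runnable end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dx, du = 20, 4, 2
A, B = 0.95 * np.eye(dx), 0.1 * rng.standard_normal((dx, du))  # toy "robot"
K, k = np.zeros((T, du, dx)), np.zeros((T, du))                # controller p(u|x)

for outer_iter in range(5):
    # Outer loop: run the current controller to collect a sample trajectory.
    xs, us, obs = [rng.standard_normal(dx)], [], []
    for t in range(T):
        u = K[t] @ xs[-1] + k[t] + 0.01 * rng.standard_normal(du)
        us.append(u)
        obs.append(xs[-1] + 0.01 * rng.standard_normal(dx))    # o_t ~ p(o_t|x_t)
        xs.append(A @ xs[-1] + B @ u + 0.01 * rng.standard_normal(dx))
    xs, us, obs = np.array(xs), np.array(us), np.array(obs)

    # Fit linear dynamics x_{t+1} ~ fx x + fu u + fc from the samples.
    Z = np.hstack([xs[:-1], us, np.ones((T, 1))])
    F, *_ = np.linalg.lstsq(Z, xs[1:], rcond=None)

    # Inner loop placeholders: improve p(u|x) under the fitted dynamics
    # (in the method: the dual-gradient LQR update), then train
    # pi_theta(u|o) on (o_t, u_t) pairs (in the method: SGD on L_theta).
    W_pi, *_ = np.linalg.lstsq(np.hstack([obs, np.ones((T, 1))]), us, rcond=None)
```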
# 4.1 Algorithm Derivation
Policy search methods minimize the expected cost E_πθ[ℓ(τ)], where τ = {x1, u1, ..., xT, uT} is a trajectory, and ℓ(τ) = Σ_{t=1}^T ℓ(xt, ut) is the cost of an episode. In the fully observed case, the expectation is taken under πθ(τ) = p(x1) ∏_{t=1}^T πθ(ut|xt) p(xt+1|xt, ut). The final policy πθ(ut|ot) is conditioned on the observations ot, but πθ(ut|xt) can be recovered as πθ(ut|xt) = ∫ πθ(ut|ot) p(ot|xt) dot. We will present the derivation in this section for πθ(ut|xt), but we do not require knowledge of p(ot|xt) in the final algorithm. As discussed in Section 4.3, the integral will be evaluated with samples from the real system, which include both xt and ot. We begin by rewriting the expected cost minimization as a constrained problem:
min_{p,πθ} E_p[ℓ(τ)]  s.t.  p(ut|xt) = πθ(ut|xt)  ∀ xt, ut, t,    (1)
where we will refer to p(τ) as a guiding distribution. This formulation is equivalent to the original problem, since the constraint forces the two distributions to be identical. However, if we approximate the initial state distribution p(x1) with samples x_1^i, we can choose p(τ) to be a class of distributions that is much easier to optimize than πθ, as we will show later. This will allow us to use simple local learning methods for p(τ), without needing to train the complex neural network policy πθ(ut|ot) directly with reinforcement learning, which would require a prohibitive amount of experience on real physical systems.
The constrained problem can be solved by a dual descent method, which alternates between minimizing the Lagrangian with respect to the primal variables, and incrementing the Lagrange multipliers by their subgradient. Minimization of the Lagrangian with respect to p(τ) and θ is done in alternating fashion: minimizing with respect to θ corresponds to supervised learning (making πθ match p(τ)), and minimizing with respect to p(τ) consists of one or more trajectory optimization problems. The dual descent method we use is based on BADMM (Wang and Banerjee, 2014), a variant of ADMM (Boyd et al., 2011) that augments the Lagrangian with a Bregman divergence between the constrained variables. We use the KL-divergence as the Bregman constraint, which is particularly convenient for working with probability distributions. We will also modify the constraint p(ut|xt) = πθ(ut|xt) by multiplying both sides by p(xt), to get p(ut|xt)p(xt) = πθ(ut|xt)p(xt). This constraint is equivalent, but has the convenient property that we can express the Lagrangian in terms of expectations. The BADMM augmented Lagrangians for θ and p are therefore given by
Lθ(θ, p) = Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut)] + E_{p(xt)πθ(ut|xt)}[λxt,ut] − E_{p(xt,ut)}[λxt,ut] + νt φt^θ(θ, p)
Lp(p, θ) = Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut)] + E_{p(xt)πθ(ut|xt)}[λxt,ut] − E_{p(xt,ut)}[λxt,ut] + νt φt^p(p, θ),
where λxt,ut is the Lagrange multiplier for state xt and action ut at time t, and φt^θ(θ, p) and φt^p(p, θ) are expectations of the KL-divergences:

φt^p(p, θ) = E_{p(xt)}[DKL(p(ut|xt) ‖ πθ(ut|xt))]    φt^θ(θ, p) = E_{p(xt)}[DKL(πθ(ut|xt) ‖ p(ut|xt))].

Dual descent with alternating primal minimization is then described by the following steps:
θ ← arg min_θ Σ_{t=1}^T E_{p(xt)πθ(ut|xt)}[λxt,ut] + νt φt^θ(θ, p)
p ← arg min_p Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut) − λxt,ut] + νt φt^p(p, θ)
λxt,ut ← λxt,ut + ανt (πθ(ut|xt)p(xt) − p(ut|xt)p(xt))
This procedure is an instance of BADMM, and therefore inherits its convergence guarantees. Note that we drop terms that are independent of the optimization variables on each line. The parameter α is a step size. As with most augmented Lagrangian methods, the weight νt is set heuristically, as described in Appendix A.1.
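Both p(ut|xt) and πθ(ut|xt) are conditionally Gaussian in this work, so the KL-divergence terms above have the standard closed form. A small illustrative helper (ours, not from the paper):

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form D_KL(N(mu0, S0) || N(mu1, S1)) between two Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```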
The dynamics only affect the optimization with respect to p(τ). In order to make this optimization efficient, we choose p(τ) to be a mixture of N Gaussians pi(τ), one for each initial state sample x_1^i. This makes the action conditionals pi(ut|xt) and the dynamics pi(xt+1|xt, ut) linear-Gaussian, as discussed in Section 4.2. This is a reasonable choice when the system is deterministic, or the noise is Gaussian or small, and we found that this approach is sufficiently tolerant to noise for use on real physical systems. Our choice of p also assumes that the policy πθ(ut|ot) is conditionally Gaussian. This is also reasonable, since the mean and covariance of πθ(ut|ot) can be any nonlinear function of the observations ot, which themselves are a function of the unobserved state xt. In Section 4.2, we show how these assumptions enable each pi(τ) to be optimized very efficiently. We will refer to pi(τ) as guiding distributions, since they serve to focus the policy on good, low-cost behaviors.
Aside from learning pi(τ), we must choose a tractable way to represent the infinite set of constraints p(ut|xt)p(xt) = πθ(ut|xt)p(xt). One approximate approach proposed in prior work is to replace the exact constraints with expectations of features (Peters et al., 2010). When the features consist of linear, quadratic, or higher order monomial functions of the random variable, this can be viewed as a constraint on the moments of the distributions. If we only use the first moment, we get a constraint on the expected action: E_{p(ut|xt)p(xt)}[ut] = E_{πθ(ut|xt)p(xt)}[ut]. If the stochasticity in the dynamics is low, as we assumed previously, the optimal solution for each pi(τ) will have low entropy, making this first moment constraint a reasonable approximation. The KL-divergence terms in the augmented Lagrangians will still serve to softly enforce agreement between the higher moments. While this simplification is quite drastic, we found that it was more stable in practice than including higher moments, likely because these higher moments are harder to estimate accurately with a limited number of samples. The alternating optimization is now given by
θ ← arg min_θ Σ_{t=1}^T E_{p(xt)πθ(ut|xt)}[ut^T λμt] + νt φt^θ(θ, p)    (2)
p ← arg min_p Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut) − ut^T λμt] + νt φt^p(p, θ)    (3)
λμt ← λμt + ανt (E_{πθ(ut|xt)p(xt)}[ut] − E_{p(ut|xt)p(xt)}[ut]),
where λµt is the Lagrange multiplier on the expected action at time t. In the rest of the paper, we will use Lθ(θ, p) and Lp(p, θ) to denote the two augmented Lagrangians in Equations (2) and (3), respectively. In the next two sections, we will describe how Lp(p, θ) can be optimized with respect to p under unknown dynamics, and how Lθ(θ, p) can be optimized for complex, high-dimensional policies. Implementation details of the BADMM optimization are presented in Appendix A.1.
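In a sample-based implementation, the two expected actions are estimated from rollouts, and the multiplier update is a single vector operation (illustrative sketch; shapes and step sizes are assumed):

```python
import numpy as np

def update_lambda_mu(lam_mu, u_pi, u_p, alpha, nu_t):
    """lam_mu_t <- lam_mu_t + alpha * nu_t * (E_pi[u_t] - E_p[u_t]), with the
    expectations replaced by sample means over rollouts at time t.
    u_pi, u_p: arrays of shape (n_samples, du)."""
    return lam_mu + alpha * nu_t * (u_pi.mean(axis=0) - u_p.mean(axis=0))
```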
# 4.2 Trajectory Optimization under Unknown Dynamics
Since the Lagrangian Lp(p, θ) in the previous section factorizes over the mixture elements in p(τ) = Σ_i pi(τ), we describe the trajectory optimization method for a single Gaussian p(τ). When there are multiple mixture elements, this procedure is applied in parallel to each pi(τ). Since p(τ) is Gaussian, the conditionals p(xt+1|xt, ut) and p(ut|xt), which correspond to the dynamics and the controller, are time-varying linear-Gaussian, and given by
p(ut|xt) = N(Kt xt + kt, Ct)    p(xt+1|xt, ut) = N(fxt xt + fut ut + fct, Ft).
This type of controller can be learned efficiently with a small number of real-world samples, making it a good choice for optimizing the guiding distributions. Since a different set of time-varying linear-Gaussian dynamics is fitted for each initial state, this dynamics representation can model any continuous deterministic system that can be locally linearized. Stochastic dynamics can violate the local linearity assumption in principle, but we found that in practice this representation was well suited for a wide variety of noisy real-world tasks.
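Fitting such a model at each time step is ordinary linear regression: stack [xt; ut; 1], regress xt+1 on it, and take the residual covariance as Ft. A minimal sketch under assumed shapes (the full procedure, which adds a GMM prior to reduce sample complexity, is in Appendix A.3):

```python
import numpy as np

def fit_lg_dynamics(X, U, Xn):
    """Fit x_{t+1} ~ N(fx x_t + fu u_t + fc, F) from samples at one time step.
    X: (n, dx) states, U: (n, du) actions, Xn: (n, dx) next states."""
    Z = np.hstack([X, U, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Z, Xn, rcond=None)        # shape (dx + du + 1, dx)
    resid = Xn - Z @ W
    F = resid.T @ resid / max(len(X) - 1, 1)          # noise covariance F_t
    dx, du = X.shape[1], U.shape[1]
    return W[:dx].T, W[dx:dx + du].T, W[-1], F        # fx, fu, fc, F
```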
The dynamics are determined by the environment. If they are known, p(ut|xt) can be optimized with a variant of the iterative linear-quadratic-Gaussian regulator (iLQG) (Li and Todorov, 2004; Levine and Koltun, 2013a), which is a variant of DDP (Jacobson and Mayne, 1970). In the case of unknown dynamics, we can fit p(xt+1|xt, ut) to sample trajectories sampled from the trajectory distribution at the previous iteration, denoted p̂(τ). If p(τ) is too different from p̂(τ), these samples will not give a good estimate of p(xt+1|xt, ut), and the optimization will diverge. To avoid this, we can bound the change from p̂(τ) to p(τ) in terms of their KL-divergence by a step size ε, producing the following constrained problem:
min_p Lp(p, θ)  s.t.  DKL(p(τ) ‖ p̂(τ)) ≤ ε.
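When p(τ) is Gaussian, this problem is solved by dual gradient descent: for a fixed multiplier η, the penalized primal has an efficient solution (an LQR backward pass in our case), and η is then stepped by the constraint violation. The structure can be seen on a toy one-dimensional problem where the primal solve is available in closed form (our sketch; the quadratic stand-ins replace the trajectory cost and the KL term):

```python
import numpy as np

# Toy instance of min_x f(x) s.t. c(x) <= eps, with f(x) = 0.5 * (x - 3)^2
# and c(x) = x^2 standing in for L_p and the KL-divergence constraint.
eps, eta = 1.0, 1.0
for _ in range(100):
    x = 3.0 / (1.0 + 2.0 * eta)                  # argmin_x f(x) + eta * c(x)
    eta = max(1e-6, eta + 0.5 * (x ** 2 - eps))  # dual (sub)gradient step on eta
# With the constraint active, this converges to x = 1, where c(x) = eps.
```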
This type of policy update has previously been proposed by several authors in the context of policy search (Bagnell and Schneider, 2003; Peters and Schaal, 2008; Peters et al., 2010; Levine and Abbeel, 2014). In the case when p(τ) is Gaussian, this problem can be solved efficiently using dual gradient descent, while the dynamics p(xt+1|xt, ut) are fitted to samples gathered by running the previous controller p̂(ut|xt) on the robot. Fitting a global Gaussian mixture model to tuples (xt, ut, xt+1) and using it as a prior for fitting the dynamics p(xt+1|xt, ut) serves to greatly reduce the sample complexity. We describe the dynamics fitting procedure in detail in Appendix A.3.
Note that the trajectory optimization cost function Lp(p, θ) also depends on the policy πθ(ut|xt), while we only have access to πθ(ut|ot). In order to compute a local quadratic expansion of the KL-divergence term DKL(p(ut|xt) ‖ πθ(ut|xt)) inside Lp(p, θ) for iLQG, we also estimate a linearization of the mean of the conditionally Gaussian policy πθ(ut|ot) with respect to the state xt, using the same procedure that we use to linearize the dynamics. The data for this estimation consists of tuples {x_t^i, E_{πθ(ut|o_t^i)}[ut]}, which we can obtain because both the states x_t^i and the observations o_t^i are available for all of the samples evaluated on the real physical system.
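This linearization is itself a regression problem: from tuples (x_t^i, ū_t^i), with ū_t^i the policy's mean action on observation o_t^i, fit ū ≈ At x + bt by least squares (illustrative sketch, assumed shapes):

```python
import numpy as np

def linearize_policy_mean(X, U_bar):
    """Fit E[u_t] ~ A_t x_t + b_t from states X (n, dx) and policy mean
    actions U_bar (n, du) recorded at the corresponding observations."""
    Z = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Z, U_bar, rcond=None)
    return W[:-1].T, W[-1]                         # A_t, b_t
```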
This constrained optimization is performed in the "inner loop" of the optimization described in the previous section, and the KL-divergence constraint DKL(p(τ) ‖ p̂(τ)) ≤ ε imposes a step size on the trajectory update. The overall algorithm then becomes an instance of generalized BADMM (Wang and Banerjee, 2014). Note that the augmented Lagrangian Lp(p, θ) consists of an expectation under p(τ) of a quantity that is independent of p. We can locally approximate this quantity with a quadratic by using a quadratic expansion of ℓ(xt, ut), and fitting a linear-Gaussian to πθ(ut|xt) with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. This is significantly simpler and much faster than the forward-backward dynamic programming procedure employed in previous work (Levine and Abbeel, 2014; Levine and Koltun, 2014). This improvement is enabled by the use of BADMM, which allows us to always formulate the KL-divergence term in the Lagrangian with the distribution being optimized as the first argument. Since the KL-divergence is convex in its first argument, this makes the corresponding optimization significantly easier. The details of this LQR-based dual gradient descent algorithm are derived in Appendix A.4.
We can further improve the efficiency of the method by allowing samples from multiple trajectories pi(τ) to be used to fit a shared dynamics p(xt+1|xt, ut), while the controllers pi(ut|xt) are allowed to vary. This makes sense when the initial states of these trajectories are similar, and they therefore visit similar regions. This allows us to draw just a single sample from each pi(τ) at each iteration, allowing us to handle many more initial states.
# 4.3 Supervised Policy Optimization
Since the policy parameters θ participate only in the constraints of the optimization problem in Equation (1), optimizing the policy corresponds to minimizing the KL-divergence between the policy and trajectory distribution, as well as the expectation of λμt^T ut. For a conditional Gaussian policy of the form πθ(ut|ot) = N(μ^π(ot), Σ^π(ot)), the objective is
Lθ(θ, p) = (1/2N) Σ_{i=1}^N Σ_{t=1}^T E_{pi(xt,ot)}[ tr[C_ti^{-1} Σ^π(ot)] − log |Σ^π(ot)| + (μ^π(ot) − μ^p_ti(xt))^T C_ti^{-1} (μ^π(ot) − μ^p_ti(xt)) + 2λμt^T μ^π(ot) ],
where μ^p_ti(xt) is the mean of pi(ut|xt) and C_ti is the covariance, and the expectation is evaluated using samples from each pi(τ) with corresponding observations ot. The observations are sampled from p(ot|xt) by recording camera images on the real system. Since the input to μ^π(ot) and Σ^π(ot) is not the state xt, but only an observation ot, we can train the policy to directly use raw observations. Note that Lθ(θ, p) is simply a weighted quadratic loss on the difference between the policy mean and the mean action of the trajectory distribution, offset by the Lagrange multiplier. The weighting is the precision matrix of the conditional in the trajectory distribution, which is equal to the curvature of its cost-to-go function (Levine and Koltun, 2013a). This has an intuitive interpretation: Lθ(θ, p) penalizes deviation from the trajectory distribution, with a penalty that is locally proportional to its cost-to-go. At convergence, when the policy πθ(ut|ot) takes the same actions as pi(ut|xt), their Q-functions are equal, and the supervised policy objective becomes equivalent to the policy iteration objective (Levine and Koltun, 2014).
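Written out for a single sample, the per-timestep term of this objective is a few lines of linear algebra (illustrative sketch matching the expression above up to the constant 1/2N factor):

```python
import numpy as np

def policy_loss_term(mu_pi, Sigma_pi, mu_p, C_ti, lam_mu):
    """One sample's contribution to L_theta(theta, p): a C_ti^{-1}-weighted
    quadratic on the mean difference, plus covariance and multiplier terms."""
    C_inv = np.linalg.inv(C_ti)
    diff = mu_pi - mu_p
    return (np.trace(C_inv @ Sigma_pi) - np.log(np.linalg.det(Sigma_pi))
            + diff @ C_inv @ diff + 2.0 * lam_mu @ mu_pi)
```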
In this work, we optimize Lθ(θ, p) with respect to θ using stochastic gradient descent (SGD), a standard method for neural network training. The covariance of the Gaussian policy does not depend on the observation in our implementation, though adding this dependence would be straightforward. Since training complex neural networks requires a substantial number of samples, we found it beneficial to include sampled observations from previous iterations into the policy optimization, evaluating the action μ^p_ti(xt) at their corresponding states using the current trajectory distributions. Since these samples come from the wrong state distribution, we use importance sampling and weight them according to the ratio of their probability under the current distribution p(xt) and the one they were sampled from, which is straightforward to evaluate under the estimated linear-Gaussian dynamics (Levine and Koltun, 2013b).
# 4.4 Comparison with Prior Guided Policy Search Methods
We presented a guided policy search method where the policy is trained on observations, while the trajectories are trained on the full state. The BADMM formulation of guided policy search is new to this work, though several prior guided policy search methods based on constrained optimization have been proposed. Levine and Koltun (2014) proposed a formulation similar to Equation (1), but with a constraint on the KL-divergence between p(τ) and πθ. This results in a more complex, non-convex forward-backward trajectory optimization phase. Since the BADMM formulation solves a convex problem during the trajectory optimization phase, it is substantially faster and easier to implement and use, especially when the number of trajectories pi(τ) is large.
The use of ADMM for guided policy search was also proposed by Mordatch and Todorov (2014) for deterministic policies under known dynamics. This approach requires known, deterministic dynamics and trains deterministic policies. Furthermore, because this approach uses a simple quadratic augmented Lagrangian term, it further requires penalty terms on the gradient of the policy to account for local feedback. Our approach enforces this feedback behavior due to the higher moments included in the KL-divergence term, but does not require computing the second derivative of the policy.
# 5. End-to-End Visuomotor Policies
Guided policy search allows us to optimize complex, high-dimensional policies with raw observations, such as when the input to the policy consists of images from a robot's onboard camera. However, leveraging this capability to directly learn policies for visuomotor control requires designing a policy representation that is both data-efficient and capable of learning complex control strategies directly from raw visual inputs. In this section, we describe a deep convolutional neural network (CNN) model that is uniquely suited to this task. Our approach combines a novel spatial soft-argmax layer with a pretraining procedure that provides for flexibility and data-efficiency.
# 5.1 Visuomotor Policy Architecture
Our visuomotor policy runs at 20 Hz on the robot, mapping monocular RGB images and the robot configurations to joint torques on a 7 DoF arm. The configuration includes the angles of the joints and the pose of the end-effector (defined by 3 points in the space of the end-effector), as well as their velocities, but does not include the position of the target object or goal, which must be determined from the image. CNNs often use pooling to discard the locational information that is necessary to determine positions, since it is an irrelevant distractor for tasks such as object classification (Lee et al., 2009). Because locational information is important for control, our policy does not use pooling. Additionally, CNNs built for spatial tasks such as human pose estimation often also rely on the availability of location labels in image-space, such as hand-labeled keypoints (Tompson et al., 2014). We propose a novel CNN architecture capable of estimating spatial information from an image without direct supervision in image space. Our pose estimation experiments, discussed in Section 5.2, show that this network can learn useful visual features using only 3D position information.
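The fixed transformation mentioned earlier, from the last convolutional response maps to spatial feature points, can be sketched as a spatial softmax followed by an expected-coordinate computation. An illustrative NumPy version (ours; the temperature parameter and the normalized coordinate range are assumptions):

```python
import numpy as np

def spatial_soft_argmax(fmaps, temperature=1.0):
    """Map conv response maps (C, H, W) to expected image-space feature
    points (C, 2): a softmax over each map, then the expected (x, y)."""
    C, H, W = fmaps.shape
    flat = fmaps.reshape(C, -1) / temperature
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, H),
                         np.linspace(-1.0, 1.0, W), indexing="ij")
    return np.stack([probs @ xs.ravel(), probs @ ys.ravel()], axis=1)
```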