# Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (arXiv:1502.03044)

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio. cs.LG, cs.CV. Published 2015-02-10, updated 2016-04-19. http://arxiv.org/pdf/1502.03044

Abstract: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

# References

Elliott, Desmond and Keller, Frank. Image description using visual dependency representations. In EMNLP, 2013.
Mao, Junhua, Xu, Wei, Yang, Yi, Wang, Jiang, and Yuille, Alan. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv:1412.6632, December 2014.
Fang, Hao, Gupta, Saurabh, Iandola, Forrest, Srivastava, Rupesh, Deng, Li, Dollár, Piotr, Gao, Jianfeng, He, Xiaodong, Mitchell, Margaret, Platt, John, et al. From captions to visual concepts and back. arXiv:1411.4952, November 2014.
Mitchell, Margaret, Han, Xufeng, Dodge, Jesse, Mensch, Alyssa, Goyal, Amit, Berg, Alex, Yamaguchi, Kota, Berg, Tamara, Stratos, Karl, and Daumé III, Hal. Midge: Generating image descriptions from computer vision detections. In European Chapter of the Association for Computational Linguistics, pp. 747–756. ACL, 2012.
Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Hodosh, Micah, Young, Peter, and Hockenmaier, Julia. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, pp. 853–899, 2013.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, and Kavukcuoglu, Koray. Recurrent models of visual attention. In NIPS, 2014.
Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, and Bengio, Yoshua. How to construct deep recurrent neural networks. In ICLR, 2014.
Karpathy, Andrej and Li, Fei-Fei. Deep visual-semantic alignments for generating image descriptions. arXiv:1412.2306, December 2014.
Rensink, Ronald A. The dynamic representation of scenes. Visual Cognition, 7(1-3):17–42, 2000.
Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv:1412.6980, December 2014.
Kiros, Ryan, Salakhutdinov, Ruslan, and Zemel, Richard. Multimodal neural language models. In International Conference on Machine Learning, pp. 595–603, 2014a.
Kiros, Ryan, Salakhutdinov, Ruslan, and Zemel, Richard. Unifying visual-semantic embeddings with multimodal neural language models. arXiv:1411.2539, November 2014b.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge, 2014.
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In NIPS, pp. 2951–2959, 2012.
Snoek, Jasper, Swersky, Kevin, Zemel, Richard S., and Adams, Ryan P. Input warping for Bayesian optimization of non-stationary functions. arXiv preprint arXiv:1402.0929, 2014.
Kulkarni, Girish, Premraj, Visruth, Ordonez, Vicente, Dhar, Sagnik, Li, Siming, Choi, Yejin, Berg, Alexander C, and Berg, Tamara L. Babytalk: Understanding and generating simple image descriptions. PAMI, IEEE Transactions on, 35(12):2891–2903, 2013.
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15, 2014.
Kuznetsova, Polina, Ordonez, Vicente, Berg, Alexander C, Berg, Tamara L, and Choi, Yejin. Collective generation of natural image descriptions. In Association for Computational Linguistics. ACL, 2012.
Kuznetsova, Polina, Ordonez, Vicente, Berg, Tamara L, and Choi, Yejin. Treetalk: Composition and compression of trees for image descriptions. TACL, 2(10):351–362, 2014.
Larochelle, Hugo and Hinton, Geoffrey E. Learning to combine foveal glimpses with a third-order Boltzmann machine. In NIPS, pp. 1243–1251, 2010.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In NIPS, pp. 3104–3112, 2014.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Tang, Yichuan, Srivastava, Nitish, and Salakhutdinov, Ruslan R. Learning generative models with visual attention. In NIPS, pp. 1808–1816, 2014.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5 - rmsprop. Technical report, 2012.
Li, Siming, Kulkarni, Girish, Berg, Tamara L, Berg, Alexander C, and Choi, Yejin. Composing simple image descriptions using web-scale n-grams. In Computational Natural Language Learning. ACL, 2011.
Vinyals, Oriol, Toshev, Alexander, Bengio, Samy, and Erhan, Dumitru. Show and tell: A neural image caption generator. arXiv:1411.4555, November 2014.
Weaver, Lex and Tao, Nigel. The optimal reward baseline for gradient-based reinforcement learning. In Proc. UAI 2001, pp. 538–545, 2001.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Yang, Yezhou, Teo, Ching Lik, Daumé III, Hal, and Aloimonos, Yiannis. Corpus-guided sentence generation of natural images. In EMNLP, pp. 444–454. ACL, 2011.
Young, Peter, Lai, Alice, Hodosh, Micah, and Hockenmaier, Julia. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67–78, 2014.
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, September 2014.
# A. Appendix
Visualizations from our "hard" (a) and "soft" (b) attention models (Figures 6-14). White indicates the regions to which the model roughly attends (see Section 5.4).
Figure 6. (a) A man and a woman playing frisbee in a field. (b) A woman is throwing a frisbee in a park.
Figure 7. (a) A giraffe standing in the field with trees. (b) A large white bird standing in a forest. [Word attention weights: large (0.49), white (0.40), bird (0.35), standing (0.29), forest (0.54).]
Figure 8. (a) A dog is laying on a bed with a book. (b) A dog is standing on a hardwood floor. [Word attention weights, partially legible: A (0.99), standing (0.38), hardwood (0.58).]
Figure 9. (a) A woman is holding a donut in his hand. (b) A woman holding a clock in her hand.
Figure 10. (a) A stop sign with a stop sign on it. (b) A stop sign is on a road with a mountain in the background.
Figure 11. (a) A man in a suit and a hat holding a remote control. (b) A man wearing a hat and a hat on a skateboard.
(a) A little girl sitting on a couch with a teddy bear. (b) A little girl sitting on a bed with a teddy bear.
(a) A man is standing on a beach with a surfboard. (b) A person is standing on a beach with a surfboard.
Figure 12. (a) A man and a woman riding a boat in the water. (b) A group of people sitting on a boat in the water.
Figure 13. (a) A man is standing in a market with a large amount of food. (b) A woman is sitting at a table with a large pizza.
Figure 14. (a) A giraffe standing in a field with trees. (b) A giraffe standing in a forest with trees in the background.
(a) A group of people standing next to each other.
arXiv:1502.02251v3 [stat.ML] 18 Jun 2015. 9 pages. http://arxiv.org/pdf/1502.02251
# From Pixels to Torques: Policy Learning with Deep Dynamical Models
Niklas Wahlström, Division of Automatic Control, Linköping University, Linköping, Sweden ([email protected])
Thomas B. Schön, Department of Information Technology, Uppsala University, Sweden ([email protected])
Marc Peter Deisenroth, Department of Computing, Imperial College London, United Kingdom ([email protected])
# Abstract
Data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep auto-encoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for. This is crucial for long-term predictions, which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art reinforcement learning methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces and is an important step toward fully autonomous learning from pixels to torques.
# 1. Introduction

The vision of fully autonomous and intelligent systems that learn by themselves has influenced AI and robotics research for many decades. To devise fully autonomous systems, it is necessary to (1) process perceptual data (e.g., images) to summarize knowledge about the surrounding environment and the system's behavior in this environment, (2) make decisions based on uncertain and incomplete information, (3) take new information into account for learning and adaptation. Effectively, any fully autonomous system has to close this perception-action-learning loop without relying on specific human expert knowledge. The pixels to torques problem (Brock, 2011) identifies key aspects of an autonomous system: autonomous thinking and decision making using sensor measurements only, intelligent exploration and learning from mistakes.

We consider the problem of learning closed-loop policies ("torques") from pixel information end-to-end. A possible scenario is a scene in which a robot is moving about. The only available sensor information is provided by a camera, i.e., no direct information about the robot's joint configuration is available. The objective is to learn a continuous-valued policy that allows the robotic agent to solve a task in this continuous environment in a data-efficient way, i.e., we want to keep the number of trials small. To date, there is no fully autonomous system that convincingly closes the perception-action-learning loop and solves the pixels to torques problem in continuous state-action spaces, the natural domains in robotics.

A promising approach toward solving the pixels to torques problem is Reinforcement Learning (RL) (Sutton & Barto, 1998), a principled mathematical framework that deals with fully autonomous learning from trial and error. However, one practical shortcoming of many existing RL algorithms is that they require many trials to learn good policies, which is prohibitive when working with real-world mechanical plants or robots.

One way of using data efficiently (and therefore keeping the number of experiments small) is to learn forward models of the underlying dynamical system, which are then used for internal simulations and policy learning. These ideas have been successfully applied to RL, control and robotics in (Schmidhuber, 1990; Atkeson & Schaal, 1997; Bagnell & Schneider, 2001; Contardo et al., 2013; Pan & Theodorou, 2014; Deisenroth et al., 2015; van Hoof et al., 2015; Levine et al., 2015), for instance. However, these methods use heuristic or engineered low-dimensional features, and they do not easily scale to data-efficient RL using pixel information only because even "small" images possess thousands of dimensions.

Figure 1. Illustration of our idea of combining deep learning architectures for feature learning and prediction models in feature space. A camera observes a robot approaching an object. A good low-dimensional feature representation of an image is important for learning a predictive model if the camera is the only sensor available.
A common way of dealing with high-dimensional data is to learn low-dimensional feature representations. Deep learning architectures, such as deep neural networks (Hinton & Salakhutdinov, 2006), stacked auto-encoders (Bengio et al., 2007; Vincent et al., 2008), or convolutional neural networks (LeCun et al., 1998), are the current state of the art in learning parsimonious representations of high-dimensional data. Deep learning has been successfully applied to image, text and speech data in commercial products, e.g., by Google, Amazon and Facebook.
Deep learning has been used to produce first promising results in the context of model-free RL on images: for instance, (Mnih et al., 2015) present an approach based on deep Q-learning, in which human-level game strategies are learned autonomously, purely based on pixel information. Moreover, (Lange et al., 2012) presented an approach that learns good discrete actions to control a slot car based on raw images, employing deep architectures for finding compact low-dimensional representations. Other examples of deep learning in the context of RL on image data include (Cuccu et al., 2011; Koutnik et al., 2013). These approaches have in common that they try to estimate the value function from which the policy is derived. However, neither of these algorithms learns a predictive model, and they are therefore prone to data inefficiency, either requiring data collection from millions of experiments or relying on discretization and very low-dimensional feature spaces, limiting their applicability to mechanical systems.
To increase data efficiency, we therefore introduce a model-based approach to learning from pixels to torques. In particular, we exploit results from (Wahlström et al., 2015) and jointly learn a lower-dimensional embedding of images and a transition function in this lower-dimensional space that we can use for internal simulation of the dynamical system. For this purpose, we employ deep auto-encoders for the lower-dimensional embedding and a multi-layer feed-forward neural network for the transition function. We use this deep dynamical model to predict trajectories and apply an adaptive model-predictive-control (MPC) algorithm (Mayne, 2014) for online closed-loop control, which is practically based on pixel information only.

MPC has been well explored in the control community. However, adaptive MPC has so far not received much attention in the literature (Mayne, 2014). An exception is (Sha, 2008), where the authors advocate a neural network approach similar to ours. However, they do not consider high-dimensional data but assume direct access to low-dimensional measurements.

Our approach benefits from the application of model-based optimal control principles within a machine learning framework. Along these lines, (Deisenroth et al., 2009; Abramova et al., 2012; Boedecker et al., 2014; Pan & Theodorou, 2014; Levine et al., 2015) suggested to first learn a transition model and then use optimal control methods to solve RL problems. Unlike these methods, our approach does not need to estimate value functions and scales to high-dimensional problems.
Similar to our approach, (Boots et al., 2014; Levine et al., 2015; van Hoof et al., 2015) recently proposed model-based RL methods that learn policies directly from visual information. Unlike these methods, we exploit a low-dimensional feature representation that allows for fast predictions and online control learning via MPC.

# Problem Set-up and Objective

We consider a classical $N$-step finite-horizon RL setting in which an agent attempts to solve a particular task by trial and error. In particular, our objective is to find a closed-loop policy $\pi^*$ that minimizes the long-term cost $V^\pi = \sum_{t=0}^{N} f_0(x_t, u_t)$, where $f_0$ denotes an immediate cost, $x_t \in \mathbb{R}^D$ is the continuous-valued system state, and $u_t \in \mathbb{R}^F$ are continuous control inputs.
The learning agent faces the following additional challenges: (a) the agent has no access to the true state, but perceives the environment only through high-dimensional pixel information (images); (b) a good control policy is required in only a few trials. This setting is practically relevant, e.g., when the agent is a robot that is monitored by a video camera based on which the robot has to learn to solve tasks fully autonomously. Therefore, this setting is an instance of the pixels to torques problem.
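To make the objective concrete, the following minimal Python sketch evaluates the long-term cost of a fixed policy along one rollout. It is an illustration only, not code from the paper; `policy`, `env_step` and `immediate_cost` are hypothetical placeholders for $\pi$, the (unknown) system dynamics and $f_0$:

```python
def long_term_cost(x0, policy, env_step, immediate_cost, N):
    """Evaluate V^pi = sum_t f_0(x_t, u_t) along one N-step rollout."""
    x, V = x0, 0.0
    for _ in range(N):
        u = policy(x)               # continuous control input u_t = pi(x_t)
        V += immediate_cost(x, u)   # accumulate the immediate cost f_0(x_t, u_t)
        x = env_step(x, u)          # the system moves on to the next state
    return V
```

RL seeks the policy minimizing this quantity; in the pixels to torques setting, the state `x` is never observed directly and must be replaced by information extracted from images.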
# 2. Deep Dynamical Model

Our approach to solving the pixels-to-torques problem is based on a deep dynamical model (DDM), which jointly (i) embeds high-dimensional images in a low-dimensional feature space via deep auto-encoders and (ii) learns a predictive forward model in this feature space (Wahlström et al., 2015). In particular, we consider a DDM with control inputs $u$ and high-dimensional observations $y$. We assume that the relevant properties of $y$ can be compactly represented by a feature variable $z$. The two components of the DDM, i.e., the low-dimensional embedding and the prediction model, which predicts future observations $y_{t+1}$ based on past observations and control inputs, are detailed in the following. Throughout this paper, $y_t$ denotes the high-dimensional measurements, $z_t$ the corresponding low-dimensional encoded features, and $\hat{y}_t$ the reconstructed high-dimensional measurement. Further, $\hat{z}_{t+1}$ and $\hat{y}_{t+1}$ denote a predicted feature and measurement at time $t+1$, respectively.

Figure 2. Auto-encoder consisting of an encoder $g^{-1}$ and a decoder $g$. The encoder maps the original image $y_t \in \mathbb{R}^M$ onto its low-dimensional representation $z_t = g^{-1}(y_t) \in \mathbb{R}^m$, where $m \ll M$; the decoder maps this feature back to a high-dimensional representation $\hat{y}_t = g(z_t)$. The gray color represents high-dimensional observations.

Figure 3. Prediction model: each feature $z_t$ is computed from high-dimensional data $y_t$ via the encoder $g^{-1}$. The transition model predicts the feature $\hat{z}_{t+1|h_t}$ at the next time step based on the $n$-step history of past features $z_{t-n+1}, \ldots, z_t$ and control inputs $u_{t-n+1}, \ldots, u_t$. The predicted feature $\hat{z}_{t+1|h_t}$ can be mapped to a high-dimensional prediction $\hat{y}_{t+1}$ via the decoder $g$. The gray color represents high-dimensional observations.

# 2.1. Deep Auto-Encoder

We use a deep auto-encoder for embedding images in a low-dimensional feature space, where both the encoder $g^{-1}$ and the decoder $g$ are modeled with deep neural networks. Each layer $k$ of the encoder neural network $g^{-1}$ computes $y_t^{(k+1)} = \sigma(A_k y_t^{(k)} + b_k)$, where $\sigma$ is a sigmoidal activation function (we used $\arctan$) and $A_k$ and $b_k$ are free parameters. The input to the first layer is the image, i.e., $y_t^{(1)} = y_t$. The last layer is the low-dimensional feature representation of the image $z_t(\theta_E) = g^{-1}(y_t; \theta_E)$, where $\theta_E = [\ldots, A_k, b_k, \ldots]$ are the parameters of all neural network layers. The decoder $g$ consists of the same number of layers in reverse order, see Fig. 2, and approximately inverts the encoder, such that $\hat{y}_t(\theta_E, \theta_D) = g(g^{-1}(y_t; \theta_E); \theta_D) \approx y_t$ is the reconstructed version of $y_t$, with an associated reconstruction error

$\varepsilon^R_t(\theta_E, \theta_D) = y_t - \hat{y}_t(\theta_E, \theta_D).$ (1)

The main purpose of the deep auto-encoder is to keep this reconstruction error and the associated compression loss negligible, such that the features $z_t$ are a compact representation of the images $y_t$.
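As a concrete illustration of Section 2.1, here is a minimal PyTorch sketch of such an auto-encoder. This is not the authors' implementation: the layer sizes are made up for the example, and `torch.atan` stands in for the sigmoidal activation $\sigma = \arctan$ mentioned above.

```python
import torch
import torch.nn as nn

class ArctanMLP(nn.Module):
    """Stack of affine maps A_k x + b_k, each followed by an arctan activation."""
    def __init__(self, sizes):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(m, n) for m, n in zip(sizes[:-1], sizes[1:]))

    def forward(self, x):
        for layer in self.layers:
            x = torch.atan(layer(x))
        return x

class DeepAutoEncoder(nn.Module):
    """Encoder g^{-1}: R^M -> R^m and mirrored decoder g: R^m -> R^M, m << M."""
    def __init__(self, sizes=(1024, 256, 64, 2)):  # illustrative layer sizes
        super().__init__()
        self.encoder = ArctanMLP(list(sizes))            # g^{-1}, parameters theta_E
        self.decoder = ArctanMLP(list(reversed(sizes)))  # g, parameters theta_D

    def forward(self, y):
        z = self.encoder(y)       # low-dimensional feature z_t
        y_hat = self.decoder(z)   # reconstructed image
        return z, y_hat

ae = DeepAutoEncoder()
y = torch.rand(8, 1024)           # a batch of flattened toy images
z, y_hat = ae(y)
recon_err = y - y_hat             # eps^R_t of Eq. (1)
```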
# 2.2. Prediction Model

We now turn the static auto-encoder into a dynamical model that can predict future features $\hat{z}_{t+1}$ and images $\hat{y}_{t+1}$. The encoder $g^{-1}$ allows us to map high-dimensional observations $y_t$ onto low-dimensional features $z_t$. For predicting, we assume that future features $\hat{z}_{t+1|h_t}$ depend on an $n$-step history $h_t$ of past features and control inputs, i.e.,

$\hat{z}_{t+1|h_t}(\theta_P) = f(z_t, u_t, \ldots, z_{t-n+1}, u_{t-n+1}; \theta_P),$ (2)

where $f$ is a nonlinear transition function, in our case a feed-forward neural network, and $\theta_P$ are the corresponding model parameters. This is a nonlinear autoregressive exogenous model (NARX) (Ljung, 1999). The predictive performance of the model will be important for model predictive control (see Section 3) and for model learning based on the prediction error (Ljung, 1999).

To predict future observations $\hat{y}_{t+1|h_t}$, we exploit the decoder, such that $\hat{y}_{t+1|h_t} = g(\hat{z}_{t+1|h_t}; \theta_D)$. The deep decoder $g$ maps features $z$ to high-dimensional observations $y$, parameterized by $\theta_D$.
Now we are ready to put the pieces together: with the feature prediction model (2) and the deep auto-encoder, the DDM predicts future features and images according to

$z_t(\theta_E) = g^{-1}(y_t; \theta_E),$ (3a)

$\hat{z}_{t+1|h_t}(\theta_E, \theta_P) = f(z_t, u_t, \ldots, z_{t-n+1}, u_{t-n+1}; \theta_P),$ (3b)

$\hat{y}_{t+1|h_t}(\theta_E, \theta_D, \theta_P) = g(\hat{z}_{t+1|h_t}; \theta_D),$ (3c)

which is illustrated in Fig. 3. With this prediction model we define the prediction error

$\varepsilon^P_{t+1}(\theta_E, \theta_D, \theta_P) = y_{t+1} - \hat{y}_{t+1|h_t}(\theta_E, \theta_D, \theta_P),$ (4)

where $y_{t+1}$ is the observed image at time $t + 1$.
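Continuing the sketch from Section 2.1 (and reusing `DeepAutoEncoder` from there), the NARX transition $f$ and the composed predictor (3a)-(3c) can be written as follows. The history length, layer sizes and the tanh hidden layer are illustrative assumptions, not the paper's exact architecture:

```python
class NARXTransition(nn.Module):
    """f(z_t, u_t, ..., z_{t-n+1}, u_{t-n+1}; theta_P): a feed-forward net
    mapping an n-step history of features and controls to the next feature."""
    def __init__(self, feat_dim=2, ctrl_dim=1, n=2, hidden=100):
        super().__init__()
        in_dim = n * (feat_dim + ctrl_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, feat_dim))

    def forward(self, z_hist, u_hist):
        # z_hist: (batch, n, feat_dim), u_hist: (batch, n, ctrl_dim)
        h = torch.cat([z_hist.flatten(1), u_hist.flatten(1)], dim=1)
        return self.net(h)                    # predicted feature z_{t+1|h_t}

def predict_next_image(ae, f, y_hist, u_hist):
    """Compose Eqs. (3a)-(3c): encode the image history, predict the next
    feature, and decode it back to a high-dimensional image prediction."""
    z_hist = ae.encoder(y_hist)               # (3a), applied frame-wise
    z_next = f(z_hist, u_hist)                # (3b)
    return ae.decoder(z_next)                 # (3c)
```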
# 2.3. Training

The DDM is parameterized by the encoder parameters $\theta_E$, the decoder parameters $\theta_D$ and the prediction model parameters $\theta_P$. In the DDM, we train both the prediction model and the deep auto-encoder jointly by finding parameters $(\hat{\theta}_E, \hat{\theta}_D, \hat{\theta}_P)$ such that
1502.02251 | 16 | such that Op) =arg min 8.00 N
(θ̂_E, θ̂_D, θ̂_P) = arg min_{θ_E, θ_D, θ_P} V_R(θ_E, θ_D) + V_P(θ_E, θ_D, θ_P),   (5a)

V_P(θ_E, θ_D, θ_P) = Σ_{t=1}^{N} ||ε^P_t(θ_E, θ_D, θ_P)||²,   (5b)

V_R(θ_E, θ_D) = Σ_{t=1}^{N} ||ε^R_t(θ_E, θ_D)||²,   (5c)
which minimizes the sums of squared reconstruction (1) and prediction (4) errors.
We learn all model parameters θ_E, θ_D, θ_P jointly by solving (5a).¹ The required gradients with respect to the parameters are computed efficiently by back-propagation, and the cost function is minimized with BFGS (Nocedal & Wright, 2006).

Initialization. With a linear activation function the auto-encoder and PCA are identical (Bourlard & Kamp, 1988), which we exploit to initialize the parameters of the auto-encoder: The auto-encoder network is unfolded, each pair of layers in the encoder and the decoder is combined, and the corresponding PCA solution is computed for each of these pairs. We start with high-dimensional image data at the top layer and use the principal components from that pair of layers as input to the next pair of layers. Thereby, we recursively compute a good initialization for all parameters of the auto-encoder. Similar pre-training routines are found in (Hinton & Salakhutdinov, 2006), in which a restricted Boltzmann machine is used instead of PCA.
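To illustrate how the two error terms of (5a) interact during joint training, consider the following minimal NumPy sketch. The single tanh layers are illustrative stand-ins for the deep networks, and all function and variable names are our own assumptions, not taken from the paper:

```python
import numpy as np

def encode(y, W_E):              # z = g^-1(y; theta_E), toy stand-in
    return np.tanh(y @ W_E)

def decode(z, W_D):              # y_hat = g(z; theta_D)
    return z @ W_D

def predict(z, u, W_P):          # one-step feature prediction f(.)
    return np.tanh(np.concatenate([z, u]) @ W_P)

def joint_loss(Y, U, W_E, W_D, W_P):
    """V_R + V_P: squared reconstruction errors plus squared errors of the
    decoded one-step-ahead predictions, cf. (4) and (5a)-(5c)."""
    Z = encode(Y, W_E)
    V_R = np.sum((Y - decode(Z, W_D)) ** 2)            # reconstruction, (5c)
    V_P = sum(
        np.sum((Y[t + 1] - decode(predict(Z[t], U[t], W_P), W_D)) ** 2)
        for t in range(len(Y) - 1)                     # prediction, (5b)
    )
    return V_R + V_P                                   # joint objective, (5a)
```

Minimizing V_R alone would recover purely static features; the V_P term is what forces the learned embedding to be predictable over time.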
In this section, we have presented a DDM that facilitates fast predictions of high-dimensional observations via a low-dimensional embedded time series. The property of fast predictions will be exploited by the online feedback control strategy presented in the following. More details on the proposed model are given in (Wahlström et al., 2015).
# 3. Learning Closed-Loop Policies from Images
We use the DDM for learning a closed-loop policy by means of nonlinear model predictive control (MPC). We start off with an introduction to classical MPC, before moving on to MPC on images in Section 3.1. MPC finds an optimal sequence of control signals that minimizes a K-step loss function, where K is typically smaller than the full horizon. In general, MPC relies on (a) a reference trajectory x_ref = x*_1, . . . , x*_K (which can be a constant reference signal) and (b) a dynamics model
¹Normally, when features are used for learning dynamical models, they are first extracted from the data in a pre-processing step by minimizing (5c) with respect to the auto-encoder parameters θ_E, θ_D. In a second step, the prediction model parameters θ_P are estimated based on these features by minimizing (5b) conditioned on the estimated θ_E and θ_D. In our experience, a problem with this approach is that the learned features might have a small reconstruction error, but this representation will not be ideal for learning a transition model. The supplementary material discusses this in more detail.
x_{t+1} = f(x_t, u_t),   (6)
which, assuming that the current state is denoted by x_0, can be used to compute/predict a state trajectory x̂_1, . . . , x̂_K for a given sequence u_0, . . . , u_{K−1} of control signals. Using the dynamics model, MPC determines an optimal (open-loop) control sequence u*_0, . . . , u*_{K−1}, such that the predicted trajectory x̂_1, . . . , x̂_K gets as close to the reference
trajectory x_ref as possible, such that
u*_0, . . . , u*_{K−1} ∈ arg min_{u_{0:K−1}} Σ_{t=0}^{K−1} ||x̂_t − x*_t||² + λ||u_t||²,   (7)
where ||x̂_t − x*_t||² is a cost associated with the deviation of the predicted state trajectory x̂_{0:K} from the reference trajectory x_ref, and ||u_t||² penalizes the amplitude of the control signals. Note that the predicted x̂_t depends on all previous u_{0:t−1}. When the control sequence u*_0, . . . , u*_{K−1} is determined, the first control u*_0 is applied to the system. After observing the next state, MPC repeats the entire optimization and turns the overall policy into a closed-loop (feedback) control strategy.
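To make the receding-horizon principle concrete, the following minimal sketch optimizes a K-step control sequence against a given dynamics model f and a constant reference. The function and argument names are our own, and scipy's BFGS stands in for a generic gradient-based optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(f, x0, x_ref, K, lam, u_dim):
    """One receding-horizon step: optimize u_0, ..., u_{K-1} as in (7),
    then return only the first control u_0 for application."""
    def cost(u_flat):
        u = u_flat.reshape(K, u_dim)
        x, c = x0, 0.0
        for t in range(K):
            x = f(x, u[t])                                 # model rollout (6)
            c += np.sum((x - x_ref) ** 2) + lam * np.sum(u[t] ** 2)
        return c
    res = minimize(cost, np.zeros(K * u_dim), method="BFGS")
    return res.x.reshape(K, u_dim)[0]                      # apply u_0 only
```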
# 3.1. MPC on Images
At the core of our MPC formulation lies the DDM, which is used to predict future states (8) from a sequence of control inputs. The quality of the MPC controller is inherently bound to the prediction quality of the dynamical model, which is typical in model-based RL (Schneider, 1997; Schaal, 1997; Deisenroth et al., 2015).
To learn models and controllers from scratch, we apply a control scheme that allows us to update the DDM as new data arrives. In particular, we use the MPC controller in an adaptive fashion to gradually improve the model with data collected in the feedback loop, without any specific prior knowledge of the system at hand. Data collection is performed in closed loop (online MPC), and it is divided into multiple sequential trials. After each trial, we add the data of the most recent trajectory to the data set, and the model is re-trained using all data that has been collected so far.
We now turn the classical MPC procedure into MPC on images by exploiting some convenient properties of the DDM. The DDM allows us to predict features ẑ_1, . . . , ẑ_K based on a sequence of controls u_0, . . . , u_{K−1}. By comparing (6) with (2), we define the state x_0 as the present and past n − 1 features and the past n − 1 control inputs, such that
x_0 = [z_0, . . . , z_{−n+1}, u_{−1}, . . . , u_{−n+1}].   (8)
The DDM computes the present and past features with the encoder z_t = g^{−1}(y_t, θ_E), such that x_0 is known at the current time, which matches the MPC requirement. Our objective is to control the system towards a desired reference image frame y_ref. This reference frame y_ref can also be encoded to a corresponding reference feature z_ref = g^{−1}(y_ref, θ_E), which results in the MPC objective
u*_0, . . . , u*_{K−1} ∈ arg min_{u_{0:K−1}} Σ_{t=0}^{K−1} ||ẑ_t − z_ref||² + λ||u_t||²,   (9)
where x_0, defined in (8), is the current state. The gradients of the cost function (9) with respect to the control signals u_0, . . . , u_{K−1} are computed in closed form, and we use BFGS to find the optimal sequence of control signals. Note that the objective function depends on u_0, . . . , u_{K−1} not only via the control penalty λ||u_t||² but also via the feature predictions ẑ_{1:K−1} of the DDM via (2). Overall, we now have an online MPC algorithm that, given a trained DDM, works indirectly on images by exploiting their feature representation. In the following, we will turn this into an iterative algorithm that learns predictive models from images and good controllers from scratch.
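A corresponding sketch of the feature-space MPC step (9) might look as follows, assuming a trained encoder and a feature-space prediction model of dynamics order n = 2; all names here are our own illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def image_mpc_step(encode, f, past_z, past_u, y_ref, K, lam, u_dim):
    """Optimize (9): track the encoded reference image in feature space.
    past_z holds [z_{-1}, z_0] and past_u holds [u_{-1}], i.e. the state (8)."""
    z_ref = encode(y_ref)                    # z_ref = g^-1(y_ref, theta_E)
    def cost(u_flat):
        u = u_flat.reshape(K, u_dim)
        zs, us = list(past_z), list(past_u)
        c = 0.0
        for t in range(K):
            z_next = f(zs[-1], u[t], zs[-2], us[-1])   # n = 2 dynamics
            c += np.sum((z_next - z_ref) ** 2) + lam * np.sum(u[t] ** 2)
            zs.append(z_next)
            us.append(u[t])
        return c
    res = minimize(cost, np.zeros(K * u_dim), method="BFGS")
    return res.x.reshape(K, u_dim)[0]        # apply only the first control
```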
Algorithm 1 Adaptive MPC in feature space
  Follow a random control strategy and record data
  loop
    Update DDM with all data collected so far
    for t = 0 to N − 1 do
      Get state x_t via the auto-encoder
      u_t ← ε-greedy MPC policy using the DDM prediction
      Apply u_t and record data
    end for
  end loop
Simply applying the MPC controller based on a randomly initialized model would make the closed-loop system very likely to converge to a point far away from the desired reference value, due to the poor model that cannot extrapolate well to unseen states. This would in turn imply that no data is collected in unexplored regions, including the region that we actually are interested in. There are two solutions to this problem: Either we use a probabilistic dynamics model as suggested in (Schneider, 1997; Deisenroth et al., 2015) to explicitly account for model uncertainty and the implied natural exploration, or we follow an explicit exploration strategy to ensure proper excitation of the system. In this paper, we follow the latter approach. In particular, we choose an ε-greedy exploration strategy where the optimal feedback u*_t at each time step is selected with probability 1 − ε, and a random action is selected with probability ε.
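As a concrete illustration, the ε-greedy action choice can be sketched in a few lines; the control bounds and function names below are our own assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(u_mpc, eps, u_low, u_high):
    """Return the MPC action with probability 1 - eps,
    otherwise a uniformly random action of the same shape."""
    if rng.random() < eps:
        return rng.uniform(u_low, u_high, size=u_mpc.shape)
    return u_mpc
```

Each such action is applied and recorded, so that exploration steps also contribute training data for the next DDM update.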
# 3.2. Adaptive MPC for Learning from Scratch
We will now describe how (adaptive) MPC can be used together with our DDM to address the pixels-to-torques problem and to learn from scratch.
Algorithm 1 summarizes our adaptive online MPC scheme. We initialize the DDM with a random trial. We use the learned DDM to find an ε-greedy policy using predicted features within MPC. This happens online. The collected data is added to the data set, and the DDM is updated after each trial.
Figure 4. Long-term (up to eight steps) predictive performance of the DDM: True (upper plot) and predicted (lower plot) video frames on test data.
# 4. Experimental Results
In the following, we empirically assess the components of our proposed methodology for autonomous learning from high-dimensional synthetic image data: (a) the quality of the learned DDM and (b) the overall learning framework.
In both cases, we consider a sequence of images (51 × 51 = 2601 pixels) and a control input associated with these images. Each pixel y_t^{(i)} is a component of the measurement y_t ∈ R^{2601} and assumes a continuous gray value in the interval [0, 1]. No access to the underlying dynamics or the state (angle φ and angular velocity φ̇) was available, i.e., we are dealing with a high-dimensional continuous state space. The challenge was to learn (a) a good dynamics model and (b) a good controller from pixel information only. We used a sampling time of 0.2 s and a time horizon of 25 s, which corresponds to 100 frames per trial.
The input dimension has been reduced to dim(y_t) = 50 prior to model learning using PCA. With these 50-dimensional inputs, a four-layer auto-encoder network was used with dimensions 50-25-12-6-2, such that the features were of dimension dim(z_t) = 2, which is optimal to model the periodic angle of the pendulum. The order of the dynamics was selected to be n = 2 (i.e., we consider two consecutive image frames) to capture velocity information, such that z_{t+1} = f(z_t, u_t, z_{t−1}, u_{t−1}). For the prediction model f we used a feedforward neural network with a 6-4-2 architecture. Note that the dimension of the first layer is given by n(dim(z_t) + dim(u_t)) = 2(2 + 1) = 6.
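The stated dimensions can be summarized as a small sanity check (plain Python; the variable names are ours):

```python
# Encoder layer widths after PCA preprocessing; the decoder mirrors them.
enc_dims = [50, 25, 12, 6, 2]        # dim(y_t) = 50 -> dim(z_t) = 2
dec_dims = enc_dims[::-1]

n, dim_z, dim_u = 2, 2, 1            # dynamics order, feature dim, control dim
pred_input = n * (dim_z + dim_u)     # 2 * (2 + 1) = 6
pred_dims = [pred_input, 4, 2]       # the 6-4-2 prediction network
assert pred_input == 6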
Figure 5. Feature space for both joint (a) and sequential (b) training of the auto-encoder and prediction model. The feature space is divided into grid points. For each grid point the decoded high-dimensional image is displayed, and the feature values for the training data (red) and validation data (yellow) are overlaid. For the joint training, the feature values reside on a two-dimensional manifold that corresponds to the two-dimensional position of the tile. For the separate training, the feature values are scattered without structure.
# 4.1. Learning Predictive Models from Pixels
To assess the predictive performance of the DDM, we took 601 screenshots of a moving tile, see Fig. 4. The control inputs are the (random) increments in position in horizontal and vertical directions.
We evaluate the performance of the learned DDM in terms of long-term predictions, which play a central role in MPC for autonomous learning. Long-term predictions are obtained by concatenating multiple 1-step ahead predictions.
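A long-term prediction of this kind can be sketched as a simple rollout loop in feature space (our own minimal illustration, again for dynamics order n = 2):

```python
def rollout(encode, decode, f, y_hist, u_hist, u_future):
    """Chain 1-step feature predictions, decoding each step back to an image.
    y_hist holds the n most recent frames, u_hist the most recent control."""
    zs = [encode(y) for y in y_hist]
    us = list(u_hist)
    frames = []
    for u in u_future:
        z_next = f(zs[-1], u, zs[-2], us[-1])
        zs.append(z_next)
        us.append(u)
        frames.append(decode(z_next))   # predicted frame y_hat_{t+k|t}
    return frames
```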
The performance of the DDM is illustrated in Fig. 4 on a
test data set. The top row shows the ground-truth images and the bottom row shows the DDM's long-term predictions. The model predicts future frames of the tile with high accuracy, both one step ahead and multiple steps ahead.
In Fig. 5(a), the feature representation of the data is displayed. The features reside on a two-dimensional manifold that encodes the two-dimensional position of the moving
tile. By inspecting the decoded images we can see that each corner of the manifold corresponds to a corner position of the tile. Due to this structure, a relatively simple prediction model is sufficient to describe the dynamics. Had the auto-encoder and the prediction model been learned sequentially (first training the auto-encoder, and then training the prediction model based on these feature values), such a structure would not have been enforced. In Fig. 5(b) the corresponding feature representation is displayed, where only the auto-encoder has been trained. Clearly, these features do not exhibit such a structure.
Figure 7. Control performance after 1st to 15th trial evaluated with ε = 0 for 16 different experiments. The objective was to reach an angle of ±Ï.
Figure 6. The feature space z â [â1, 1] Ã [â1, 1] is divided into 9 Ã 9 grid points for illustration purposes. For each grid point the decoded high-dimensional image is displayed. Green: Feature values that correspond to collected experience in previous trials. Cyan: Feature value that corresponds to the current time step. Red: Desired reference value. Yellow: 15-steps-ahead prediction after optimizing for the optimal control inputs.
tile. By inspecting the decoded images we can see that each corner of the manifold corresponds to a corner po- sition of the tile. Due to this structure a relatively simple prediction model is sufï¬cient to describe the dynamics. In case the auto-encoder and the prediction model would have been learned sequentially (ï¬rst training the auto-encoder, and then based on these features values train the predic- tion model) such a structure would not have been enforced. In Fig. 5(b) the corresponding feature representation is displayed where only the auto-encoder has been trained. Clearly, these features does not exhibit such a structure. | 1502.02251#28 | From Pixels to Torques: Policy Learning with Deep Dynamical Models | Data-efficient learning in continuous state-action spaces using very
# 4.2. Closed-Loop Policy Learning from Pixels
After each trial, we retrained the DDM using all collected data so far, where we also include the reference image while learning the auto-encoder.
Fig. 6 displays the decoded images corresponding to learned latent representations in [−1, 1]². The learned feature values of the training data (green) line up in a circular shape, such that a relatively simple prediction model is sufficient to describe the dynamics. If we had not optimized for both the prediction error and the reconstruction error, such an advantageous structure of the feature values would not have been obtained. The DDM extracts features that can also model the dynamic behavior compactly. The figure also shows the predictions produced by the MPC controller (yellow), starting from the current time step (cyan) and targeting the reference feature (red) where the pendulum is in the target position.
To assess the controller performance after each trial, we applied a greedy policy (ε = 0). In Fig. 7, angle trajectories for 15 of the 50 experiments at different learning stages are displayed. In the first trial, the controller managed only in a few cases to drive the pendulum toward the reference value ±π. The control performance increased gradually with the number of trials, and after the 15th trial, it manages in most cases to reach the upright position.
In this section, we report results on learning a policy that moves a pendulum (1-link robot arm with length 1 m, weight 1 kg and friction coefficient 1 Nsm/rad) from a start position φ = 0 to a target position φ = ±π. The reference signal was the screenshot of the pendulum in the target position. For the MPC controller, we used a planning horizon of P = 15 steps and a control penalty λ = 0.01. For the ε-greedy exploration strategy we used ε = 0.2. We conducted 50 independent experiments with different random initializations. The learning algorithm was run for 15 trials (plus an initial random trial).
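For reference, the experimental settings just listed can be collected in one place (the dict itself is our own packaging of the stated values, not an artifact from the paper):

```python
experiment_config = {
    "planning_horizon_P": 15,        # MPC planning horizon (steps)
    "control_penalty_lambda": 0.01,  # lambda in the MPC objective
    "epsilon_greedy": 0.2,           # exploration probability
    "n_experiments": 50,             # independent random initializations
    "n_trials": 15,                  # learning trials, plus 1 random trial
    "frames_per_trial": 100,         # 25 s horizon at 0.2 s sampling
}
```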
To assess the data efficiency of our approach, we compared it with the PILCO RL framework (Deisenroth et al., 2015) for learning closed-loop control policies for the pendulum task above. PILCO is a current state-of-the-art model-based RL algorithm for data-efficient learning of control policies in continuous state-control spaces. Using collected data, PILCO learns a probabilistic model of the system dynamics, implemented as a Gaussian process (GP) (Rasmussen & Williams, 2006). Subsequently, this model is used to compute a distribution over trajectories and the corresponding
separately. The auto-encoder finds good features that minimize the reconstruction error. However, these features are not good for modeling the dynamic behavior of the system,³ and lead to bad long-term predictions.
Computation times of PILCO and our method are vastly different: While PILCO spends most time optimizing policy parameters, our model spends most of the time on learning the DDM. Computing the optimal nonparametric MPC policy happens online and does not require significant computational overhead. To put this into context, PILCO required a few days of learning time for 10 trials (in a 20D feature space). In a 2D feature space, running PILCO for 10 trials and 1000 data points requires about 10 hours.
Figure 8. Average learning success against the number of frames (100 per trial), with standard errors. Blue: PILCO ground-truth RL baseline using the true state (φ, φ̇). Red: PILCO with learned auto-encoder features from image pixels. Cyan: PILCO on 20D features determined by PCA. Black: Our proposed MPC solution using the DDM.
expected cost, which is used for gradient-based optimization of the controller parameters.
Although PILCO uses data very efficiently, its computational demand makes its direct application impractical for high-dimensional (≥ 20D) problems and many data points, such that we had to make suitable adjustments to apply PILCO to the pixels-to-torques problem. In particular, we performed the following experiments: (1) PILCO applied to 20D PCA features; (2) PILCO applied to 2D features learned by deep auto-encoders; (3) an optimal baseline where we applied PILCO to the standard RL setting with access to the "true" state (φ, φ̇) (Deisenroth et al., 2015).
Overall, our DDM+MPC approach to learning closed-loop policies from high-dimensional observations exploits the learned deep dynamical model to learn good policies fairly data-efficiently.
# 5. Conclusion
We have proposed a data-efficient model-based RL algorithm that learns closed-loop policies in continuous state and action spaces directly from pixel information. The key components of our solution are (1) a deep dynamical model (DDM) that is used for long-term predictions in a compact feature space and (2) an MPC controller that uses the predictions of the DDM to determine optimal actions on the fly without the need for value function estimation. For the success of this RL algorithm it is crucial that the DDM learns the feature mapping and the predictive model in feature space jointly to capture dynamic behavior for high-quality long-term predictions. Compared to state-of-the-art RL, our algorithm learns fairly quickly, scales to high-dimensional state spaces and facilitates learning from pixels to torques.
# Acknowledgments
Fig. 8 displays the average success rate of PILCO (including standard error) and our proposed method using deep dynamical models together with a tailored MPC (DDM+MPC). We define "success" if the pendulum's angle is stabilized within 10° around the target state.² The baseline (PILCO trained on the ground-truth 2D state (φ, φ̇)) is shown in blue and solves the task very quickly. The graph shows that our proposed algorithm (black), which learns torques directly from pixels, is not too far behind the ground-truth RL solution, achieving an almost 90% success rate after 15 trials (1500 image frames). However, PILCO trained on the 2D auto-encoder features (red) and 20D PCA features fails consistently in all experiments. We explain PILCO's failure by the fact that we trained the auto-encoder and the transition dynamics in feature space
²Since we consider a continuous setting, we have to define a target region.
This work was supported by the Swedish Foundation for Strategic Research under the project Cooperative Localization and the Swedish Research Council under the project Probabilistic Modeling of Dynamical Systems (contract number: 621-2013-5524). MPD was supported by an Imperial College Junior Research Fellowship.
# References
Abramova, Ekatarina, Dickens, Luke, Kuhn, Daniel, and Faisal, A. Aldo. Hierarchical, heterogeneous control using reinforcement learning. In EWRL, 2012.
³When we inspected the latent-space embedding of the auto-encoder, the pendulum angles do not nicely line up along an "easy" manifold as in Fig. 6. See the supplementary material for more details.
Atkeson, Christopher G. and Schaal, S. Learning tasks from a single demonstration. In ICRA, 1997.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2324, 1998.

Bagnell, James A. and Schneider, Jeff G. Autonomous helicopter control using reinforcement learning policy search methods. In ICRA, 2001.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.

Bengio, Yoshua, Lamblin, Pascal, Popovici, Dan, and Larochelle, Hugo. Greedy layer-wise training of deep networks. In NIPS, 2007.

Ljung, L. System Identification: Theory for the User. Prentice Hall, 1999.

Boedecker, Joschka, Springenberg, Jost Tobias, Wülfing, Jan, and Riedmiller, Martin. Approximate real-time optimal control based on sparse Gaussian process models. In ADPRL, 2014.

Boots, Byron, Byravan, Arunkumar, and Fox, Dieter. Learning predictive models of a depth camera & manipulator from raw execution traces. In ICRA, 2014.

Mayne, David Q. Model predictive control: Recent developments and future promise. Automatica, 50(12):2967–2986, 2014.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Bourlard, Hervé and Kamp, Yves. Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59(4-5):291–294, 1988.

Nocedal, J. and Wright, S. J. Numerical Optimization. Springer, 2006.

Brock, Oliver. Berlin Summit on Robotics: Conference Report, chapter Is Robotics in Need of a Paradigm Shift?, pp. 1–10. 2011.

Contardo, Gabriella, Denoyer, Ludovic, Artieres, Thierry, and Gallinari, Patrick. Learning states representations in POMDP. arXiv preprint arXiv:1312.6042, 2013.
Cuccu, Giuseppe, Luciw, Matthew, Schmidhuber, Jürgen, and Gomez, Faustino. Intrinsically motivated neuroevolution for vision-based reinforcement learning. In ICDL, 2011.
Pan, Yunpeng and Theodorou, Evangelos. Probabilistic differential dynamic programming. In NIPS, 2014.
Rasmussen, Carl E. and Williams, Christopher K. I. Gaussian Processes for Machine Learning. The MIT Press, 2006.
Schaal, Stefan. Learning from demonstration. In NIPS, 1997.
Schmidhuber, Jürgen. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In IJCNN, 1990.
Deisenroth, Marc P., Rasmussen, Carl E., and Peters, Jan. Gaussian process dynamic programming. Neurocomputing, 72(7–9):1508–1524, 2009.
Deisenroth, Marc P., Fox, Dieter, and Rasmussen, Carl E. Gaussian processes for data-efficient learning in robotics and control. IEEE-TPAMI, 37(2):408–423, 2015.
Hinton, G and Salakhutdinov, R. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006. | 1502.02251#39 | From Pixels to Torques: Policy Learning with Deep Dynamical Models | Data-efficient learning in continuous state-action spaces using very
high-dimensional observations remains a key challenge in developing fully
autonomous systems. In this paper, we consider one instance of this challenge,
the pixels to torques problem, where an agent must learn a closed-loop control
policy from pixel information only. We introduce a data-efficient, model-based
reinforcement learning algorithm that learns such a closed-loop policy directly
from pixel information. The key ingredient is a deep dynamical model that uses
deep auto-encoders to learn a low-dimensional embedding of images jointly with
a predictive model in this low-dimensional feature space. Joint learning
ensures that not only static but also dynamic properties of the data are
accounted for. This is crucial for long-term predictions, which lie at the core
of the adaptive model predictive control strategy that we use for closed-loop
control. Compared to state-of-the-art reinforcement learning methods for
continuous states and actions, our approach learns quickly, scales to
high-dimensional state spaces and is an important step toward fully autonomous
learning from pixels to torques. | http://arxiv.org/pdf/1502.02251 | Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth | stat.ML, cs.LG, cs.RO, cs.SY | 9 pages | null | stat.ML | 20150208 | 20150618 | [
{
"id": "1504.00702"
}
] |
1502.02251 | 40 | Hinton, G and Salakhutdinov, R. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.
Koutnik, Jan, Cuccu, Giuseppe, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving large-scale neural networks for vision-based reinforcement learning. In GECCO, 2013.
Schneider, Jeff G. Exploiting model uncertainty estimates for safe dynamic control learning. In NIPS, 1997.
Sha, Daohang. A new neural networks based adaptive model predictive control for unknown multiple variable non-linear systems. IJAMS, 1(2):146–155, 2008.
Sutton, Richard S. and Barto, Andrew G. Reinforcement Learning: An Introduction. The MIT Press, 1998.
van Hoof, Herke, Peters, Jan, and Neumann, Gerhard. Learning of non-parametric control policies with high-dimensional state features. In AISTATS, 2015.
Vincent, P, Larochelle, H, Bengio, Y, and Manzagol, Pierre-Antoine. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. | 1502.02251#40 | From Pixels to Torques: Policy Learning with Deep Dynamical Models | Data-efficient learning in continuous state-action spaces using very
high-dimensional observations remains a key challenge in developing fully
autonomous systems. In this paper, we consider one instance of this challenge,
the pixels to torques problem, where an agent must learn a closed-loop control
policy from pixel information only. We introduce a data-efficient, model-based
reinforcement learning algorithm that learns such a closed-loop policy directly
from pixel information. The key ingredient is a deep dynamical model that uses
deep auto-encoders to learn a low-dimensional embedding of images jointly with
a predictive model in this low-dimensional feature space. Joint learning
ensures that not only static but also dynamic properties of the data are
accounted for. This is crucial for long-term predictions, which lie at the core
of the adaptive model predictive control strategy that we use for closed-loop
control. Compared to state-of-the-art reinforcement learning methods for
continuous states and actions, our approach learns quickly, scales to
high-dimensional state spaces and is an important step toward fully autonomous
learning from pixels to torques. | http://arxiv.org/pdf/1502.02251 | Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth | stat.ML, cs.LG, cs.RO, cs.SY | 9 pages | null | stat.ML | 20150208 | 20150618 | [
{
"id": "1504.00702"
}
] |
1502.02072 | 0 | arXiv:1502.02072v1 [stat.ML] 6 Feb 2015
# Massively Multitask Networks for Drug Discovery
Bharath Ramsundar*,†,◇ Steven Kearnes*,† Patrick Riley◇ Dale Webster◇ David Konerding◇ Vijay Pande† (*Equal contribution, †Stanford University, ◇Google Inc.)
[email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
# Abstract | 1502.02072#0 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 1 | # Abstract
Massively multitask neural architectures provide a learning framework for drug discovery that synthesizes information from many distinct biological sources. To train these architectures at scale, we gather large amounts of data from public sources to create a dataset of nearly 40 million measurements across more than 200 biological targets. We investigate several aspects of the multitask framework by performing a series of empirical studies and obtain some interesting results: (1) massively multitask networks obtain predictive accuracies significantly better than single-task methods, (2) the predictive power of multitask networks improves as additional tasks and data are added, (3) the total amount of data and the total number of tasks both contribute significantly to multitask improvement, and (4) multitask networks afford limited transferability to tasks not in the training set. Our results underscore the need for greater data sharing and further algorithmic innovation to accelerate the drug discovery process. | 1502.02072#1 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 2 | After a suitable target has been identified, the first step in the drug discovery process is "hit finding." Given some druggable target, pharmaceutical companies will screen millions of drug-like compounds in an effort to find a few attractive molecules for further optimization. These screens are often automated via robots, but are expensive to perform. Virtual screening attempts to replace or augment the high-throughput screening process by the use of computational methods (Shoichet, 2004). Machine learning methods have frequently been applied to virtual screening by training supervised classifiers to predict interactions between targets and small molecules.
There are a variety of challenges that must be overcome to achieve effective virtual screening. Low hit rates in experimental screens (often only 1–2% of screened compounds are active against a given target) result in imbalanced datasets that require special handling for effective learning. For instance, care must be taken to guard against unrealistic divisions between active and inactive compounds ("artificial enrichment") and against information leakage due to strong similarity between active compounds ("analog bias") (Rohrer & Baumann, 2009). Furthermore, the paucity of experimental data means that overfitting is a perennial thorn. | 1502.02072#2 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 3 | # 1. Introduction
Discovering new treatments for human diseases is an immensely complicated challenge. Prospective drugs must attack the source of an illness, but must do so while satisfying restrictive metabolic and toxicity constraints. Traditionally, drug discovery is an extended process that takes years to move from start to finish, with high rates of failure along the way.
The overall complexity of the virtual screening problem has limited the impact of machine learning in drug discovery. To achieve greater predictive power, learning algorithms must combine disparate sources of experimental data across multiple targets. Deep learning provides a flexible paradigm for synthesizing large amounts of data into predictive models. In particular, multitask networks facilitate information sharing across different experiments and compensate for the limited data associated with any particular experiment.
In this work, we investigate several aspects of the multitask learning paradigm as applied to virtual screening. We gather a large collection of datasets containing nearly 40
| 1502.02072#3 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 4 | In this work, we investigate several aspects of the multitask learning paradigm as applied to virtual screening. We gather a large collection of datasets containing nearly 40
million experimental measurements for over 200 targets. We demonstrate that multitask networks trained on this collection achieve significant improvements over baseline machine learning methods. We show that adding more tasks and more data yields better performance. This effect diminishes as more data and tasks are added, but does not appear to plateau within our collection. Interestingly, we find that the total amount of data and the total number of tasks both have significant roles in this improvement. Furthermore, the features extracted by the multitask networks demonstrate some transferability to tasks not contained in the training set. Finally, we find that the presence of shared active compounds is moderately correlated with multitask improvement, but the biological class of the target is not. | 1502.02072#4 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 5 | While we were preparing this work, a workshop paper was released that also used massively multitask networks for virtual screening (Unterthiner et al.). That work curated a dataset of 1,280 biological targets with 2 million associated data points and trained a multitask network. Their network has more tasks than ours (1,280 vs. 259) but far fewer data points (2 million vs. nearly 40 million). The emphasis of our work is considerably different; while their report highlights the performance gains due to multitask networks, ours is focused on disentangling the underlying causes of these improvements. Another closely related work proposed the use of collaborative filtering for virtual screening and employed both multitask networks and kernel-based methods (Erhan et al., 2006). Their multitask networks, however, did not consistently outperform single-task models.
# 2. Related Works | 1502.02072#5 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 6 | # 2. Related Works
Machine learning has a rich history in drug discovery. Early work combined creative featurizations of molecules with off-the-shelf learning algorithms to predict drug activity (Varnek & Baskin, 2012). The state of the art has moved to more refined models, such as the influence relevance voting method that combines low-complexity neural networks and k-nearest neighbors (Swamidass et al., 2009), and Bayesian belief networks that repurpose textual information retrieval methods for virtual screening (Abdo et al., 2010). Other related work uses deep recursive neural networks to predict aqueous solubility by extracting features from the connectivity graphs of small molecules (Lusci et al., 2013).
Within the greater context of deep learning, we draw upon various strands of recent thought. Prior work has used multitask deep networks in the contexts of language understanding (Collobert & Weston, 2008) and multi-language speech recognition (Deng et al., 2013). Our best-performing networks draw upon design patterns introduced by GoogLeNet (Szegedy et al., 2014), the winner of ILSVRC 2014.
# 3. Methods
# 3.1. Dataset Construction and Design | 1502.02072#6 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 7 | # 3. Methods
# 3.1. Dataset Construction and Design
Deep learning has made inroads into drug discovery in recent years, most notably in 2012 with the Merck Kaggle competition (Dahl, November 1, 2012). Teams were given pre-computed molecular descriptors for compounds with experimentally measured activity against 15 targets and were asked to predict the activity of molecules in a held-out test set. The winning team used ensemble models including multitask deep neural networks, Gaussian process regression, and dropout to improve the baseline test set R2 by nearly 17%. The winners of this contest later released a technical report that discusses the use of multitask networks for virtual screening (Dahl et al., 2014). Additional work at Merck analyzed the choice of hyperparameters when training single- and multitask networks and showed improvement over random forest models (Ma et al., 2015). The Merck Kaggle result has been received with skepticism by some in the cheminformatics and drug discovery communities (Lowe, December 11, 2012, and associated comments). Two major concerns raised were that the sample size was too small (a good result across 15 systems may well have occurred by chance) and that any gains in predictive accuracy were too small to justify the increase in complexity. | 1502.02072#7 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 8 | Models were trained on 259 datasets gathered from publicly available data. These datasets were divided into four groups: PCBA, MUV, DUD-E, and Tox21. The PCBA group contained 128 experiments in the PubChem BioAssay database (Wang et al., 2012). The MUV group contained 17 challenging datasets specifically designed to avoid common pitfalls in virtual screening (Rohrer & Baumann, 2009). The DUD-E group contained 102 datasets that were designed for the evaluation of methods to predict interactions between proteins and small molecules (Mysinger et al., 2012). The Tox21 datasets were used in the recent Tox21 Data Challenge (https://tripod.nih.gov/tox21/challenge/) and contained experimental data for 12 targets relevant to drug toxicity prediction. We used only the training data from this challenge because the test set had not been released when we constructed our collection. In total, our 259 datasets contained 37.8M experimental data points for 1.6M compounds. Details for the dataset groups are given in Table 1. See the Appendix for details on individual datasets and their biological target categorization. | 1502.02072#8 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 9 | It should be noted that we did not perform any preprocessing of our datasets, such as removing potential experimental artifacts. Such artifacts may be due to com-
Table 1. Details for dataset groups. Values for the number of data points per dataset and the percentage of active compounds are reported as means, with standard deviations in parentheses.
Group   Datasets   Data Points / ea.   % Active
PCBA    128        282K (122K)         1.8 (3.8)
DUD-E   102        14K (11K)           1.6 (0.2)
MUV     17         15K (1)             0.2 (0)
Tox21   12         6K (500)            7.8 (4.7) | 1502.02072#9 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 10 | Following recommendations from the cheminformatics community (Jain & Nicholls, 2008), we used metrics derived from the receiver operating characteristic (ROC) curve to evaluate model performance. Recall that the ROC curve for a binary classifier is the plot of true positive rate (TPR) vs. false positive rate (FPR) as the discrimination threshold is varied. For individual datasets, we are interested in the area under the ROC curve (AUC), which is a global measure of classification performance (note that AUC must lie in the range [0, 1]). More generally, for a collection of N datasets, we consider the mean and median K-fold-average AUC:
pounds whose physical properties cause interference with experimental measurements or allow for promiscuous interactions with many targets. A notable exception is the MUV group, which has been processed with consideration of these pathologies (Rohrer & Baumann, 2009).
# 3.2. Small Molecule Featurization | 1502.02072#10 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 11 | # 3.2. Small Molecule Featurization
We used extended connectivity fingerprints (ECFP4) (Rogers & Hahn, 2010) generated by RDKit (Landrum) to featurize each molecule. The molecule is decomposed into a set of fragments—each centered at a non-hydrogen atom—where each fragment extends radially along bonds to neighboring atoms. Each fragment is assigned a unique identifier, and the collection of identifiers for a molecule is hashed into a fixed-length bit vector to construct the molecular "fingerprint". ECFP4 and other fingerprints are commonly used in cheminformatics applications, especially to measure similarity between compounds (Willett et al., 1998). A number of molecules (especially in the Tox21 group) failed the featurization process and were not used in training our networks. See the Appendix for details.
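A minimal sketch of this featurization step with RDKit; the 1024-bit length and the example SMILES are illustrative choices (Morgan radius 2 corresponds to ECFP4).

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def ecfp4(smiles, n_bits=1024):
    """Hash radius-2 circular fragments into a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # featurization can fail, as noted above
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros(n_bits)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

x = ecfp4("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, for illustration
```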
Mean / Median over n = 1, ..., N of { (1/K) Σ_{k=1}^{K} AUC_k(D_n) } | 1502.02072#11 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 12 | Mean / Median over n = 1, ..., N of { (1/K) Σ_{k=1}^{K} AUC_k(D_n) }
where AUC_k(D_n) is defined as the AUC of a classifier trained on folds {1, . . . , K} \ k of dataset D_n and tested on fold k. For completeness, we include in the Appendix an alternative metric called "enrichment" that is widely used in the cheminformatics literature (Jain & Nicholls, 2008). We note that many other performance metrics exist in the literature; the lack of standard metrics makes it difficult to do direct comparisons with previous work.
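In code, the metric reads as below; the nested-list layout of per-fold labels and scores is an assumed bookkeeping convention, not the paper's own implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def k_fold_average_auc(fold_labels, fold_scores):
    """(1/K) * sum over k of AUC_k(D_n) for one dataset D_n."""
    return np.mean([roc_auc_score(y, s)
                    for y, s in zip(fold_labels, fold_scores)])

def collection_mean_median(avg_aucs):
    """Mean and median of the K-fold-average AUCs over N datasets."""
    a = np.asarray(avg_aucs)
    return a.mean(), np.median(a)
```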
# 3.4. Multitask Networks
A neural network is a nonlinear classifier that performs repeated linear and nonlinear transformations on its input. Let x_i represent the input to the i-th layer of the network (where x_0 is simply the feature vector). The transformation performed is
x_{i+1} = σ(W_i x_i + b_i)
# 3.3. Validation Scheme and Metrics | 1502.02072#12 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 13 | x_{i+1} = σ(W_i x_i + b_i)
# 3.3. Validation Scheme and Metrics
The traditional approach for model evaluation is to have fixed training, validation, and test sets. However, the imbalance present in our datasets means that performance varies widely depending on the particular training/test split. To compensate for this variability, we used stratified K-fold cross-validation; that is, each fold maintains the active/inactive proportion present in the unsplit data. For the remainder of the paper, we use K = 5.
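A sketch of such a split using scikit-learn's current interface (the paper's own splitting code is not shown); the toy fingerprint matrix and the ~2% active rate are illustrative.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(1000, 1024))  # toy fingerprint matrix
y = (rng.rand(1000) < 0.02).astype(int)   # ~2% actives, as in many screens

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # Each test fold keeps roughly the active/inactive ratio of the full set.
    print(y[test_idx].mean())
```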
Note that we did not choose an explicit validation set. Several datasets in our collection have very few actives (~30 each for the MUV group), and we feared that selecting a specific validation set would skew our results. As a consequence, we suspect that our choice of hyperparameters may be affected by information leakage across folds. However, our networks do not appear to be highly sensitive to hyperparameter choice (see Section 4.1), so we do not consider leakage to be a serious issue. | 1502.02072#13 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 14 | where W_i and b_i are respectively the weight matrix and bias for the i-th layer, and σ is a nonlinearity (in our work, the rectified linear unit (Nair & Hinton, 2010)). After L such transformations, the final layer of the network x_L is then fed to a simple linear classifier, such as the softmax, which predicts the probability that the input x_0 has label j:
P(y = j | x_0) = exp(w_j^T x_L) / Σ_{m=1}^{M} exp(w_m^T x_L)
where M is the number of possible labels (here M = 2) and w_1, · · · , w_M are weight vectors. W_i, b_i, and w_m are learned during training by the backpropagation algorithm (Rumelhart et al., 1988). A multitask network attaches N softmax classifiers, one for each task, to the final layer x_L. (A "task" corresponds to the classifier associated with a particular dataset in our collection, although we often use "task" and "dataset" interchangeably. See Figure 1.)
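A minimal PyTorch sketch of that structure: a shared trunk of hidden layers feeding one softmax classifier per task. The 1200-unit layer, two-class heads, and 259 tasks follow the text; everything else is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    """Shared hidden layers feeding N per-task softmax classifiers."""
    def __init__(self, n_features=1024, hidden=(1200,), n_tasks=259, n_labels=2):
        super().__init__()
        layers, d = [], n_features
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]  # rectified linear units
            d = h
        self.trunk = nn.Sequential(*layers)
        self.heads = nn.ModuleList(nn.Linear(d, n_labels) for _ in range(n_tasks))

    def forward(self, x0, task):
        xL = self.trunk(x0)  # shared representation x_L
        return torch.softmax(self.heads[task](xL), dim=-1)
```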
# 4. Experimental Section
In this section, we seek to answer a number of questions about the performance, capabilities, and limitations of massively multitask neural networks: | 1502.02072#14 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 15 | In this section, we seek to answer a number of questions about the performance, capabilities, and limitations of massively multitask neural networks:
Figure 1. Multitask neural network. Softmax output nodes, one per dataset; 1–4 hidden layers with 50–3000 rectified linear nodes each, fully connected to the layer below.
1. Do massively multitask networks provide a performance boost over simple machine learning methods? If so, what is the optimal architecture for massively multitask networks?
2. How does the performance of a multitask network depend on the number of tasks? How does the performance depend on the total amount of data?
with the networks overfitting the data. As discussed in Section 3.1, our datasets had a very small fraction of positive examples. For the single hidden layer multitask network in Table 2, each dataset had 1200 associated parameters. With a total number of positives in the tens or hundreds, overfitting this number of parameters is a major issue in the absence of strong regularization. | 1502.02072#15 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 16 | Reducing the number of parameters specific to each dataset is the motivation for the pyramidal architecture. In our pyramidal networks, the first hidden layer is very wide (2000 nodes) with a second narrow hidden layer (100 nodes). This dimensionality reduction is similar in motivation and implementation to the 1x1 convolutions in the GoogLeNet architecture (Szegedy et al., 2014). The wide lower layer allows for complex, expressive features to be learned while the narrow layer limits the parameters specific to each task. Adding dropout of 0.25 to our pyramidal networks improved performance. We also trained single-task versions of our best pyramidal network to understand whether this design pattern works well with less data. Table 2 indicates that these models outperform vanilla single-task networks but do not substitute for multitask training. Results for a variety of alternate models are presented in the Appendix.
3. Do massively multitask networks extract generalizable information about chemical space?
4. When do datasets benefit from multitask training?
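Under the pyramidal (2000, 100) description above, the shared trunk might be sketched as follows; the input width and the placement of dropout after both layers are assumptions (the text specifies only the layer sizes and the 0.25 rate).

```python
import torch.nn as nn

def pyramidal_trunk(n_features=1024, wide=2000, narrow=100, p_drop=0.25):
    """Wide layer learns expressive features; the narrow layer caps the
    number of parameters each task-specific softmax head sees."""
    return nn.Sequential(
        nn.Linear(n_features, wide), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(wide, narrow), nn.ReLU(), nn.Dropout(p_drop),
    )
```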
The following subsections detail a series of experiments that seek to answer these questions. | 1502.02072#16 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 17 | 4. When do datasets benefit from multitask training?
The following subsections detail a series of experiments that seek to answer these questions.
We investigated the sensitivity of our results to the sizes of the pyramidal layers by running networks with all combinations of hidden layer sizes: (1000, 2000, 3000) and (50, 100, 150). Across the architectures, means and medians shifted by ≤ .01 AUC, with only MUV showing larger changes, with a range of .038. We note that performance is sensitive to the choice of learning rate and the number of training steps. See the Appendix for details and data.
# 4.1. Experimental Exploration of Massively Multitask Networks
We investigate the performance of multitask networks with various hyperparameters and compare to several standard machine learning approaches. Table 2 shows some of the highlights of our experiments. Our best multitask architecture (pyramidal multitask networks) significantly outperformed simpler models, including a hypothetical model whose performance on each dataset matches that of the best single-task model (Max{LR, RF, STNN, PSTNN}).
# 4.2. Relationship between performance and number of tasks | 1502.02072#17 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 18 | # 4.2. Relationship between performance and number of tasks
The previous section demonstrated that massively multitask networks improve performance over single-task models. In this section, we seek to understand how multitask performance is affected by increasing the number of tasks. A priori, there are three reasonable "growth curves" (visually represented in Figure 2):
Every model we trained performed extremely well on the DUD-E datasets (all models in Table 2 had median 5-fold-average AUCs ≥ 0.99), making comparisons between models on DUD-E uninformative. For that reason, we exclude DUD-E from our subsequent statistical analysis. However, we did not remove DUD-E from the training altogether because doing so adversely affected performance on the other datasets (data not shown); we theorize that DUD-E helped to regularize the classifier and avoid overfitting.
During our first explorations, we had consistent problems
Over the hill: performance initially improves, hits a maximum, then falls.
Plateau: performance initially improves, then plateaus.
Still climbing: performance improves throughout, but with a diminishing rate of return. | 1502.02072#18 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 19 | Plateau: performance initially improves, then plateaus.
Still climbing: performance improves throughout, but with a diminishing rate of return.
We constructed and trained a series of multitask networks on datasets containing 10, 20, 40, 80, 160, and 249 tasks. These datasets all contain a fixed set of ten "held-in" tasks, which consists of a randomly sampled collection of five
Table 2. Median 5-fold-average AUCs for various models. For each model, the sign test in the last column estimates the fraction of datasets (excluding the DUD-E group, for reasons discussed in the text) for which that model is superior to the PMTNN (bottom row). We use the Wilson score interval to derive a 95% confidence interval for this fraction. Non-neural network methods were trained using scikit-learn (Pedregosa et al., 2011) implementations and basic hyperparameter optimization. We also include results for a hypothetical "best" single-task model (Max{LR, RF, STNN, PSTNN}) to provide a stronger baseline. Details for our cross-validation and training procedures are given in the Appendix. | 1502.02072#19 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 20 | Model                                               PCBA (n = 128)   MUV (n = 17)   Tox21 (n = 12)   Sign Test CI
Logistic Regression (LR)                                              .801             .752           .738             [.04, .13]
Random Forest (RF)                                                    .800             .774           .790             [.06, .16]
Single-Task Neural Net (STNN)                                         .795             .732           .714             [.04, .12]
Pyramidal (2000, 100) STNN (PSTNN)                                    .809             .745           .740             [.06, .16]
Max{LR, RF, STNN, PSTNN}                                              .824             .781           .790             [.12, .24]
1-Hidden (1200) Layer Multitask Neural Net (MTNN)                     .842             .797           .785             [.08, .18]
Pyramidal (2000, 100) Multitask Neural Net (PMTNN)                    .873             .841           .818             -
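The Wilson score interval in the last column can be computed directly, as in this sketch; the example win count is hypothetical (n = 157 is the number of non-DUD-E datasets: 128 + 17 + 12).

```python
import math

def wilson_interval(wins, n, z=1.96):
    """95% Wilson score interval for a binomial proportion wins/n."""
    p = wins / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

lo, hi = wilson_interval(10, 157)  # hypothetical: a model beats the PMTNN 10 times
```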
Figure 2. Potential multitask growth curves ("over the hill", "plateau", "still climbing"), sketched as multitask improvement vs. number of tasks. | 1502.02072#20 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
1502.02072 | 21 | Figure 2. Potential multitask growth curves ("over the hill", "plateau", "still climbing"), sketched as multitask improvement vs. number of tasks.
[Figure 3 plot: change in AUC relative to a single-task NN (roughly -0.08 to 0.12) vs. number of tasks (10 to 249), with per-dataset curves labeled, e.g., Tox21 NR-Aromatase, MUV 852, PCBA 485297.]
PCBA, three MUV, and two Tox21 datasets. These datasets correspond to unique targets that do not have any obvious analogs in the remaining collection. (We also excluded a similarly chosen set of ten "held-out" tasks for use in Section 4.4). Each training collection is a superset of the preceding collection, with tasks added randomly. For each network in the series, we computed the mean 5-fold-average-AUC for the tasks in the held-in collection. We repeated this experiment ten times with different choices of random seed. | 1502.02072#21 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
Figure 3. Held-in growth curves. The y-axis shows the change in AUC compared to a single-task neural network with the same architecture (PSTNN). Each colored curve is the multitask improvement for a given held-in dataset. Black dots represent means across the 10 held-in datasets for each experimental run, where additional tasks were randomly selected. The shaded curve is the mean across the 100 combinations of datasets and experimental runs.
Figure 3 plots the results of our experiments. The shaded region emphasizes the average growth curve, while black dots indicate average results for different experimental runs. The figure also displays lines associated with each held-in dataset. Note that several datasets show initial dips in performance. However, all datasets show subsequent improvement, and all but one achieves performance superior to the single-task baseline. Within the limits of our current dataset collection, the distribution in Figure 3 agrees with either the plateau or the still-climbing pattern from Figure 2. The mean performance on the held-in set is still increasing at 249 tasks, so we hypothesize that performance is still climbing. It is possible that our collection is too small and that an alternate pattern may eventually emerge.
# 4.3. More tasks or more data?
In the previous section we studied the effect of adding more tasks; here we investigate the relative importance of the total amount of data vs. the total number of tasks. Namely, is it better to have many tasks with a small amount of associated data, or a small number of tasks with a large amount of associated data?
We constructed a series of multitask networks with 10, 15, 20, 30, 50 and 82 tasks. As in the previous section, the tasks are randomly associated with the networks in a cumulative manner (i.e., the 82-task network contained all tasks present in the 50-task network, and so on). All networks contained the ten held-in tasks described in the previous section. The 82 tasks chosen were associated with the largest datasets in our collection, each containing 300K-500K data points. Note that all of these tasks belonged to the PCBA group.
We then trained this series of networks multiple times with 1.6M, 3.3M, 6.5M, 13M, and 23M data points sampled from the non-held-in tasks. We performed the sampling such that for a given task, all data points present in the first stage (1.6M) appeared in the second (3.3M), all data points present in the second stage appeared in the third (6.5M), and so on. We decided to use larger datasets so we could sample meaningfully across this entire range. Some combinations of tasks and data points were not realized; for instance, we did not have enough data to train a 20-task network with 23M additional data points. We repeated this experiment ten times using different random seeds.
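The nesting property of the sampling stages can be obtained by fixing one random order per task and taking prefixes, as in the following sketch; the proportional per-task allocation is our assumption, since the text describes only a stratified scheme:

```python
import numpy as np

def nested_subsamples(task_indices,
                      budgets=(1_600_000, 3_300_000, 6_500_000, 13_000_000, 23_000_000),
                      seed=0):
    """task_indices: dict task -> np.ndarray of example row indices.
    Every point drawn for a smaller budget also appears in all larger
    budgets, so successive stages differ only by added data."""
    rng = np.random.default_rng(seed)
    # One fixed random order per task; prefixes of that order are nested.
    order = {t: rng.permutation(len(idx)) for t, idx in task_indices.items()}
    total = sum(len(idx) for idx in task_indices.values())
    stages = []
    for budget in budgets:
        frac = min(1.0, budget / total)
        stages.append({t: idx[order[t][: int(frac * len(idx))]]
                       for t, idx in task_indices.items()})
    return stages
```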
Figure 4 shows the results of our experiments. The x-axis tracks the number of additional tasks, while the y-axis displays the improvement in performance for the held-in set relative to a multitask network trained only on the held-in data. When the total amount of data is fixed, having more tasks consistently yields improvement. Similarly, when the number of tasks is fixed, adding additional data consistently improves performance. Our results suggest that the total amount of data and the total number of tasks both contribute significantly to the multitask effect.
Figure 4. Multitask benefit from increasing tasks and data independently. As in Figure 2, we added randomly selected tasks (x-axis) to a fixed held-in set. A stratified random sampling scheme was applied to the additional tasks in order to achieve fixed total numbers of additional input examples (color, line type). White points indicate the mean over 10 experimental runs of Δ mean-AUC over the initial network trained on the 10 held-in datasets. Color-filled areas and error bars describe the smoothed 95% confidence intervals.
# 4.4. Do massively multitask networks extract generalizable features?
The features extracted by the top layer of the network represent information useful to many tasks. Consequently, we sought to determine the transferability of these features to tasks not in the training set. We held out ten datasets from the growth curves calculated in Section 4.2 and used the learned weights from points along the growth curves to initialize single-task networks for the held-out datasets, which we then fine-tuned.
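A sketch of the initialization step, representing network parameters as a plain dictionary for illustration; the layer naming and weight container are hypothetical:

```python
import numpy as np

def init_single_task_from_multitask(multitask_weights, n_hidden_out, rng=None):
    """Initialize a single-task network for a held-out dataset from the
    shared layers of a trained multitask network. The task-specific softmax
    head is re-initialized, since the held-out task had no output head
    during multitask training; all layers are then fine-tuned."""
    rng = rng or np.random.default_rng(0)
    weights = {k: v.copy() for k, v in multitask_weights.items()
               if not k.startswith("head/")}      # keep shared hidden layers
    weights["head/W"] = 0.01 * rng.standard_normal((n_hidden_out, 2))
    weights["head/b"] = np.zeros(2)
    return weights
```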
The results of training these networks (with 5-fold stratified cross-validation) are shown in Figure 5. First, note that many of the datasets performed worse than the baseline when initialized from the 10-held-in-task networks. Further, some datasets never exhibited any positive effect due to multitask initialization. Transfer learning can be negative.
Second, note that the transfer learning effect became stronger as multitask networks were trained on more data. Large multitask networks exhibited better transferability, but the average effect even with 249 datasets was only ∼.01 AUC. We hypothesize that the extent of this generalizability is determined by the presence or absence of relevant data in the multitask training set.
# 4.5. When do datasets benefit from multitask training?

The results in Sections 4.2 and 4.4 indicate that some datasets benefit more from multitask training than others. In an effort to explain these differences, we consider three specific questions:

1. Do shared active compounds explain multitask improvement?

2. Do some biological target classes realize greater multitask improvement than others?
3. Do tasks associated with duplicated targets have artificially high multitask performance?
# 4.5.1. SHARED ACTIVE COMPOUNDS
The biological context of our datasets implies that active compounds contain more information than inactive compounds; while an inactive compound may be inactive for many reasons, active compounds often rely on similar physical mechanisms. Hence, shared active compounds should be a good measure of dataset similarity.
Figure 5. Held-out growth curves. The y-axis shows the change in AUC compared to a single-task neural network with the same architecture (PSTNN). Each colored curve is the result of initializing a single-task neural network from the weights of the networks from Section 4.2 and computing the mean across the 10 experimental runs. These datasets were not included in the training of the original networks. The shaded curve is the mean across the 100 combinations of datasets and experimental runs, and black dots represent means across the 10 held-out datasets for each experimental run, where additional tasks were randomly selected.
Figure 6 plots multitask improvement against a measure of dataset similarity we call "active occurrence rate" (AOR). For each active compound α in dataset $D_i$, $\mathrm{AOR}_{i,\alpha}$ is defined as the number of additional datasets in which this compound is also active:

$$\mathrm{AOR}_{i,\alpha} = \sum_{d \neq i} \mathbb{1}\left(\alpha \in \mathrm{Actives}(D_d)\right).$$

Each point in Figure 6 corresponds to a single dataset $D_i$. The x-coordinate is

$$\mathrm{AOR}_i = \mathop{\mathrm{Mean}}_{\alpha \in \mathrm{Actives}(D_i)} \left(\mathrm{AOR}_{i,\alpha}\right),$$

and the y-coordinate (Δ log-odds-mean-AUC) is

$$\operatorname{logit}\left(\frac{1}{K}\sum_{k=1}^{K}\mathrm{AUC}_k^{(M)}(D_i)\right) - \operatorname{logit}\left(\frac{1}{K}\sum_{k=1}^{K}\mathrm{AUC}_k^{(S)}(D_i)\right),$$

where $\mathrm{AUC}_k^{(M)}(D_i)$ and $\mathrm{AUC}_k^{(S)}(D_i)$ are respectively the AUC values for the k-th fold of dataset i in the multitask and single-task models, and $\operatorname{logit}(p) = \log\left(p/(1-p)\right)$. The use of log-odds reduces the effect of outliers and emphasizes changes in AUC when the baseline is high. Note that for reasons discussed in Section 4.1, DUD-E was excluded from this analysis.
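Computed over explicit sets of active compound IDs, the two AOR quantities above reduce to a few lines; this sketch assumes each dataset's actives are available as a Python set:

```python
def active_occurrence_rates(actives):
    """actives: dict mapping dataset name -> set of active compound IDs.
    Returns AOR_{i,alpha} for every active compound in every dataset, plus
    the per-dataset mean used as the x-coordinate in Figure 6."""
    aor = {}
    for i, acts_i in actives.items():
        others = [a for j, a in actives.items() if j != i]
        aor[i] = {alpha: sum(alpha in a for a in others) for alpha in acts_i}
    mean_aor = {i: sum(pc.values()) / len(pc) for i, pc in aor.items() if pc}
    return aor, mean_aor
```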
There is a moderate correlation between AOR and Δ log-odds-mean-AUC (r² = .33); we note that this correlation is not present when we use Δ mean-AUC as the y-coordinate (r² = .09). We hypothesize that some portion of the multitask effect is determined by shared active compounds. That is, a dataset is most likely to benefit from multitask training when it shares many active compounds with other datasets in the collection.
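The y-coordinate and the reported correlation can be computed as follows; a sketch, with array names that are illustrative:

```python
import numpy as np

def delta_log_odds_mean_auc(auc_multi, auc_single):
    """Difference in log-odds-mean-AUC between multitask and single-task
    models for one dataset; inputs are the K per-fold AUCs (K = 5 here)."""
    logit = lambda p: np.log(p / (1.0 - p))
    return logit(np.mean(auc_multi)) - logit(np.mean(auc_single))

# r^2 between mean AOR and improvement, as in Figure 6 (hypothetical arrays):
# r = np.corrcoef(mean_aor_values, improvements)[0, 1]
# r2 = r ** 2
```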
Figure 6. Multitask improvement compared to active occurrence rate (AOR). Each point in the figure represents a particular dataset $D_i$. The x-coordinate is the mean AOR across all active compounds in $D_i$, and the y-coordinate is the difference in log-odds-mean-AUC between multitask and single-task models. The gray bars indicate standard deviations around the AOR means. There is a moderate correlation (r² = .33). For reasons discussed in Section 4.1, we excluded DUD-E from this analysis. (Including DUD-E results in a similar correlation, r² = .22.)
# 4.5.2. TARGET CLASSES
Figure 7 shows the relationship between multitask improvement and target classes. As before, we report multitask improvement in terms of log-odds and exclude the DUD-E datasets. Qualitatively, no target class benefited more than any other from multitask training. Nearly every target class realized gains, suggesting that the multitask framework is applicable to experimental data from multiple target classes.
# 4.5.3. DUPLICATE TARGETS
As mentioned in Section 3.1, there are many cases of tasks with identical targets. We compared the multitask improvement of duplicate vs. unique tasks. The distributions have substantial overlap (see the Appendix), but the average log-odds improvement was slightly higher for duplicated tasks (.531 vs. .372; a one-sided t-test between the duplicate and unique distributions gave p = .016). Since duplicated targets are likely to share many active compounds, this improvement is consistent with the correlation seen in Section 4.5.1.
Figure 7. Multitask improvement across target classes. The x-coordinate lists a series of biological target classes represented in our dataset collection, and the y-coordinate is the difference in log-odds-mean-AUC between multitask and single-task models. Note that the DUD-E datasets are excluded. Classes are ordered by total number of targets (in parentheses), and target classes with fewer than five members are merged into "miscellaneous."
However, sign tests for single-task vs. multitask models for duplicate and unique targets gave significant and highly overlapping confidence intervals ([0.04, 0.24] and [0.06, 0.17], respectively; recall that the meaning of these intervals is given in the caption for Table 2). Together, these results suggest that there is not significant information leakage within multitask networks. Consequently, the results of our analysis are unlikely to be significantly affected by the presence of duplicate targets in our dataset collection.
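The one-sided t-test can be reproduced with standard tools; the sketch below folds SciPy's two-sided p-value into a one-sided one, assuming the per-dataset log-odds improvements are available as arrays:

```python
from scipy import stats

# dup, uniq: per-dataset log-odds improvements for duplicated vs. unique
# targets (hypothetical arrays).
def one_sided_t_test(dup, uniq):
    """Test whether duplicated targets improve more than unique ones."""
    t, p_two_sided = stats.ttest_ind(dup, uniq)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_sided
```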
# 5. Discussion and Conclusion
In this work, we investigated the use of massively multitask networks for virtual screening. We gathered a large collection of publicly available experimental data that we used to train massively multitask neural networks. These networks achieved significant improvement over simple machine learning algorithms.

We explored several aspects of the multitask framework. First, we demonstrated that multitask performance improved with the addition of more tasks; our performance was still climbing at 259 tasks. Next, we considered the relative importance of introducing more data vs. more tasks. We found that additional data and additional tasks both contributed significantly to the multitask effect. We next discovered that multitask learning afforded limited transferability to tasks not contained in the training set. This effect was not universal, and required large amounts of data even when it did apply.

We observed that the multitask effect was stronger for some datasets than others. Consequently, we investigated possible explanations for this discrepancy and found that the presence of shared active compounds was moderately correlated with multitask improvement, but the biological class of the target was not. It is also possible that multitask improvement results from accurately modeling experimental artifacts rather than specific interactions between targets and small molecules. We do not believe this to be the case, as we demonstrated strong improvement on the thoroughly-cleaned MUV datasets.
The efficacy of multitask learning is directly related to the availability of relevant data. Hence, obtaining greater amounts of data is of critical importance for improving the state of the art. Major pharmaceutical companies possess vast private stores of experimental measurements; our work provides a strong argument that increased data sharing could result in benefits for all.
More data will maximize the benefits achievable using current architectures, but in order for algorithmic progress to occur, it must be possible to judge the performance of proposed models against previous work. It is disappointing to note that all published applications of deep learning to virtual screening (that we are aware of) use distinct datasets that are not directly comparable. It remains to future research to establish standard datasets and performance metrics for this field.
Another direction for future work is the further study of small molecule featurization. In this work, we use only one possible featurization (ECFP4), but there exist many others. Additional performance may also be realized by considering targets as well as small molecules in the featurization. Yet another line of research could improve performance by using unsupervised learning to explore much larger segments of chemical space.
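ECFP4 corresponds to a Morgan fingerprint of radius 2 in RDKit (Landrum); a sketch follows, with the bit length as an assumption since the text does not specify it:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def ecfp4(smiles, n_bits=2048):
    """ECFP4 fingerprint (Morgan fingerprint, radius 2) for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # unparseable compound, as with some Tox21 entries
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
```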
Although deep learning offers interesting possibilities for virtual screening, the full drug discovery process remains immensely complicated. Can deep learning, coupled with large amounts of experimental data, trigger a revolution in this field? Considering the transformational effect that these methods have had on other fields, we are optimistic about the future.
# Acknowledgments
B.R. was supported by the Fannie and John Hertz Foundation. S.K. was supported by a Smith Stanford Graduate Fellowship. We also acknowledge support from NIH and NSF, in particular NIH U54 GM072970 and NSF 0960306. The latter award was funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
# References
Abdo, Ammar, Chen, Beining, Mueller, Christoph, Salim, Naomie, and Willett, Peter. Ligand-based virtual screening using Bayesian networks. Journal of Chemical Information and Modeling, 50(6):1012–1020, 2010.
Collobert, Ronan and Weston, Jason. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pp. 160–167. ACM, 2008.
Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
Pedregosa, Fabian, Varoquaux, Gaël, Gramfort, Alexandre, Michel, Vincent, Thirion, Bertrand, Grisel, Olivier, Blondel, Mathieu, Prettenhofer, Peter, Weiss, Ron, Dubourg, Vincent, et al. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825–2830, 2011.
Dahl, George. Deep Learning How I Did It: Merck 1st place interview. No Free Hunch, November 1, 2012.
Rogers, David and Hahn, Mathew. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50(5):742–754, 2010.
Dahl, George E, Jaitly, Navdeep, and Salakhutdinov, Ruslan. Multi-task neural networks for QSAR predictions. arXiv preprint arXiv:1406.1231, 2014.
Deng, Li, Hinton, Geoffrey, and Kingsbury, Brian. New types of deep neural network learning for speech recognition and related applications: An overview. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8599–8603. IEEE, 2013.
Rohrer, Sebastian G and Baumann, Knut. Maximum unbiased validation (MUV) data sets for virtual screening based on PubChem bioactivity data. Journal of Chemical Information and Modeling, 49(2):169–184, 2009.
Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive Modeling, 1988.
Erhan, Dumitru, L'Heureux, Pierre-Jean, Yue, Shi Yi, and Bengio, Yoshua. Collaborative filtering on a family of biological targets. Journal of Chemical Information and Modeling, 46(2):626–635, 2006.
Jain, Ajay N and Nicholls, Anthony. Recommendations for evaluation of computational methods. Journal of Computer-Aided Molecular Design, 22(3-4):133–139, 2008.
Shoichet, Brian K. Virtual screening of chemical libraries. Nature, 432(7019):862â865, 2004.
Swamidass, S Joshua, Azencott, Chloé-Agathe, Lin, Ting-Wan, Gramajo, Hugo, Tsai, Shiou-Chuan, and Baldi, Pierre. Influence relevance voting: an accurate and interpretable virtual high throughput screening method. Journal of Chemical Information and Modeling, 49(4):756–766, 2009.
Landrum, Greg. RDKit: Open-source cheminformatics. URL http://www.rdkit.org.
Lowe, Derek. Did Kaggle Predict Drug Candidate Activi- ties? Or Not? In the Pipeline, December 11, 2012.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Lusci, Alessandro, Pollastri, Gianluca, and Baldi, Pierre. Deep architectures and deep learning in chemoinformatics: the prediction of aqueous solubility for drug-like molecules. Journal of Chemical Information and Modeling, 53(7):1563–1575, 2013.
Unterthiner, Thomas, Mayr, Andreas, Klambauer, Günter, Steijaert, Marvin, Wenger, Jörg, Ceulemans, Hugo, and Hochreiter, Sepp. Deep learning as an opportunity in virtual screening. In Deep Learning and Representation Learning Workshop (NIPS 2014), 2014.
Ma, Junshui, Sheridan, Robert P, Liaw, Andy, Dahl, George, and Svetnik, Vladimir. Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling, 2015.
Varnek, Alexandre and Baskin, Igor. Machine learning methods for property prediction in chemoinformatics: quo vadis? Journal of Chemical Information and Modeling, 52(6):1413–1437, 2012.
Mysinger, Michael M, Carchia, Michael, Irwin, John J, and Shoichet, Brian K. Directory of useful decoys, enhanced (DUD-E): better ligands and decoys for better benchmarking. Journal of Medicinal Chemistry, 55(14):6582–6594, 2012.
Wang, Yanli, Xiao, Jewen, Suzek, Tugba O, Zhang, Jian, Wang, Jiyao, Zhou, Zhigang, Han, Lianyi, Karapetyan, Karen, Dracheva, Svetlana, Shoemaker, Benjamin A, et al. PubChem's BioAssay database. Nucleic Acids Research, 40(D1):D400–D412, 2012.
Willett, Peter, Barnard, John M, and Downs, Geoffrey M. Chemical similarity searching. Journal of Chemical Information and Computer Sciences, 38(6):983–996, 1998.
# Massively Multitask Networks for Drug Discovery: Appendix
# A. Dataset Construction and Design
The PCBA datasets are dose-response assays performed by the NCATS Chemical Genomics Center (NCGC) and downloaded from PubChem BioAssay using the following search limits: TotalSidCount from 10000, ActiveSidCount from 30, Chemical, Confirmatory, Dose-Response, Target: Single, NCGC. These limits correspond to the search query: (10000[TotalSidCount] : 1000000000[TotalSidCount]) AND (30[ActiveSidCount] : 1000000000[ActiveSidCount]) AND "small molecule"[filt] AND "doseresponse"[filt] AND 1[TargetCount] AND "NCGC"[SourceName]. We note that the DUD-E datasets are especially susceptible to "artificial enrichment" (unrealistic divisions between active and inactive compounds) as an artifact of the dataset construction procedure. Each data point in our collection was associated with a binary label classifying it as either active or inactive.
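The query can be assembled programmatically; a sketch that simply mirrors the string above (the function name and defaults are ours):

```python
def pubchem_bioassay_query(min_sid=10_000, min_active_sid=30):
    """Assemble the PubChem BioAssay search query from the limits described
    above; the large constant is PubChem's open upper bound."""
    upper = 1_000_000_000
    return (
        f"({min_sid}[TotalSidCount] : {upper}[TotalSidCount]) AND "
        f"({min_active_sid}[ActiveSidCount] : {upper}[ActiveSidCount]) AND "
        '"small molecule"[filt] AND "doseresponse"[filt] AND '
        '1[TargetCount] AND "NCGC"[SourceName]'
    )
```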
A description of each of our 259 datasets is given in Table A1. These datasets cover a wide range of target classes and assay types, including both cell-based and in vitro experiments. Datasets with duplicated targets are marked with an asterisk (note that only the non-DUD-E duplicate target datasets were used in the analysis described in the text). For the PCBA datasets, compounds not labeled "Active" were considered inactive (including compounds marked "Inconclusive"). Due to missing data in PubChem BioAssay and/or featurization errors, some data points and compounds were not used for evaluation of our models; failure rates for each dataset group are shown in Table A.2. The Tox21 group suffered especially high failure rates, likely due to the relatively large number of metallic or otherwise abnormal compounds that are not supported by the RDKit package. The counts given in Table A1 do not include these missing data. A graphical breakdown of the datasets by target class is shown in Figure A.1. The datasets used for the held-in and held-out analyses are repeated in Table A.3 and Table A.4, respectively.
As an extension of our treatment of task similarity in the text, we generated the heatmap in Figure A.2 to show the pairwise intersection between all datasets in our collection. A few characteristics of our datasets are immediately apparent:
- The datasets in the DUD-E group have very little intersection with any other datasets.
- The PCBA and Tox21 datasets have substantial self-overlap. In contrast, the MUV datasets have relatively little self-overlap.
- The MUV datasets have substantial overlap with the datasets in the PCBA group.
- The Tox21 datasets have very small intersections with datasets in other groups.
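The intersection matrix underlying such a heatmap can be computed directly from per-dataset compound sets, as in this sketch (names are illustrative):

```python
import numpy as np

def pairwise_intersections(compound_sets):
    """compound_sets: dict dataset name -> set of compound IDs.
    Returns the matrix of pairwise intersection sizes behind a heatmap
    like Figure A.2."""
    names = sorted(compound_sets)
    n = len(names)
    mat = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(a, n):
            size = len(compound_sets[names[a]] & compound_sets[names[b]])
            mat[a, b] = mat[b, a] = size
    return names, mat
```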
Figure A.3 shows the Δ log-odds-mean-AUC for datasets with duplicate and unique targets.
Table A1. Dataset descriptions. Datasets with duplicated targets are marked with an asterisk.

| Dataset | Actives | Inactives | Target Class | Target |
|---|---|---|---|---|
| pcba-aid411* | 1562 | 69,734 | other enzyme | luciferase |
| pcba-aid875 | 32 | 73,870 | protein-protein interaction | brca1-bach1 |
| pcba-aid881 | 589 | 106,656 | other enzyme | 15hLO-2 |
| pcba-aid883 | 1214 | 8170 | other enzyme | CYP2C9 |
| pcba-aid884 | 3391 | 9676 | other enzyme | CYP3A4 |
| pcba-aid885 | 163 | 12,904 | other enzyme | CYP3A4 |
| pcba-aid887 | 1024 | 72,140 | other enzyme | 15hLO-1 |
| pcba-aid891 | 1548 | 7836 | other enzyme | CYP2D6 |
| pcba-aid899 | 1809 | 7575 | other enzyme | CYP2C19 |
| pcba-aid902* | 1872 | 123,512 | viability | H1299-p53A138V |
| pcba-aid903* | 338 | 54,175 | viability | H1299-neo |
| pcba-aid904* | 528 | 53,981 | viability | H1299-neo |
| pcba-aid912 | 445 | 68,506 | miscellaneous | anthrax LF-PA internalization |
| pcba-aid914 | 218 | 10,619 | transcription factor | HIF-1 |
| pcba-aid915 | 436 | 10,401 | transcription factor | HIF-1 |
| pcba-aid924* | 1146 | 122,867 | viability | H1299-p53A138V |
| pcba-aid925 | 39 | 64,358 | miscellaneous | EGFP-654 |
| pcba-aid926 | 350 | 71,666 | GPCR | TSHR |
| pcba-aid927* | 61 | 59,108 | protease | USP2a |
| pcba-aid938 | 1775 | 70,241 | ion channel | CNG |
| pcba-aid995* | 699 | 70,189 | signalling pathway | ERK1/2 cascade |
| pcba-aid1030 | 15,963 | 200,920 | other enzyme | ALDH1A1 |
| pcba-aid1379* | 562 | 198,500 | other enzyme | luciferase |
| pcba-aid1452 | 177 | 151,634 | other enzyme | 12hLO |
| pcba-aid1454* | 536 | 130,788 | signalling pathway | ERK1/2 cascade |
| pcba-aid1457 | 722 | 204,859 | other enzyme | IMPase |
| pcba-aid1458 | 5805 | 202,680 | miscellaneous | SMN2 |
| pcba-aid1460* | 5662 | 261,757 | protein-protein interaction | K18 |
| pcba-aid1461 | 2305 | 218,561 | GPCR | NPSR |
| pcba-aid1468* | 1039 | 270,371 | protein-protein interaction | K18 |
| pcba-aid1469 | 169 | 276,098 | protein-protein interaction | TRb-SRC2 |
| pcba-aid1471 | 288 | 223,321 | protein-protein interaction | huntingtin |
| pcba-aid1479 | 788 | 275,479 | miscellaneous | TRb-SRC2 |
| pcba-aid1631 | 892 | 262,774 | other enzyme | hPK-M2 |
| pcba-aid1634 | 154 | 263,512 | other enzyme | hPK-M2 |
| pcba-aid1688 | 2374 | 218,200 | protein-protein interaction | HTTQ103 |
| pcba-aid1721 | 1087 | 291,649 | other enzyme | LmPK |
| pcba-aid2100* | 1159 | 301,145 | other enzyme | alpha-glucosidase |
| pcba-aid2101* | 285 | 321,268 | other enzyme | glucocerebrosidase |
| pcba-aid2147 | 3477 | 223,441 | other enzyme | JMJD2E |
| pcba-aid2242* | 715 | 198,459 | other enzyme | alpha-glucosidase |
| pcba-aid2326 | 1069 | 268,500 | miscellaneous | influenza A NS1 |
| pcba-aid2451 | 2008 | 285,737 | other enzyme | FBPA |
| pcba-aid2517 | 1136 | 344,762 | other enzyme | APE1 |
| pcba-aid2528 | 660 | 347,283 | other enzyme | BLM |
| pcba-aid2546 | 10,550 | 293,509 | transcription factor | VP16 |
| pcba-aid2549 | 1210 | 233,706 | other enzyme | RECQ1 |
1502.02072 | 46 | other enzyme other enzyme other enzyme viability viability viability miscellaneous transcription fac- tor transcription fac- tor viability miscellaneous GPCR protease ion channel signalling way other enzyme other enzyme other enzyme signalling way other enzyme miscellaneous protein-protein interaction GPCR protein-protein interaction protein-protein interaction protein-protein interaction miscellaneous other enzyme other enzyme protein-protein interaction other enzyme other enzyme other enzyme other enzyme other enzyme miscellaneous other enzyme other enzyme other enzyme transcription fac- tor other enzyme
1087 1159 285 3477 715 1069 2008 1136 660 10 550
291 649 301 145 321 268 223 441 198 459 268 500 285 737 344 762 347 283 293 509
# pcba-aid1721 pcba-aid2100* pcba-aid2101* pcba-aid2147 pcba-aid2242* pcba-aid2326 pcba-aid2451 pcba-aid2517 pcba-aid2528 pcba-aid2546
# LmPK alpha-glucosidase glucocerebrosidase JMJD2E alpha-glucosidase inï¬uenza A NS1 FBPA APE1 BLM VP16
# pcba-aid2549
1210
233 706
# RECQ1
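A pattern worth noting in the table above is the severe class imbalance: actives are typically well under one percent of the measured compounds. A quick sketch makes the active rates explicit, using counts copied from four rows above:

```python
# Active fraction for a few of the assays listed above: (dataset, actives, inactives).
rows = [
    ("pcba-aid887",  1024,  72140),
    ("pcba-aid1030", 15963, 200920),
    ("pcba-aid1471", 288,   223321),
    ("pcba-aid2546", 10550, 293509),
]
for name, actives, inactives in rows:
    rate = actives / (actives + inactives)
    print(f"{name}: {rate:.4%} active")  # e.g. pcba-aid1471: 0.1288% active
```

Imbalance at this scale is one reason per-task metrics such as ROC AUC, rather than raw accuracy, are the natural yardstick for these datasets.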
Table. PCBA datasets used in this work (continued).

Dataset | Actives | Inactives | Target | Class
pcba-aid2551 | 16,666 | 288,772 | ROR gamma | transcription factor
pcba-aid2662 | 110 | 293,953 | MLL-HOX-A | miscellaneous
pcba-aid2675 | 99 | 279,333 | MBNL1-CUG | miscellaneous
pcba-aid2676 | 1081 | 361,124 | RXFP1 | GPCR
pcba-aid463254* | 41 | 330,640 | USP2a | protease
pcba-aid485281 | 254 | 341,253 | apoferritin | miscellaneous
pcba-aid485290 | 942 | 343,503 | TDP1 | other enzyme
pcba-aid485294* | 148 | 362,056 | AmpC | other enzyme
pcba-aid485297 | 9126 | 311,481 | Rab9 | promoter
pcba-aid485313 | 7567 | 313,119 | NPC1 | promoter
pcba-aid485314 | 4491 | 329,974 | DNA polymerase beta | other enzyme
pcba-aid485341* | 1729 | 328,952 | AmpC | other enzyme
pcba-aid485349 | 618 | 321,745 | ATM | protein kinase
pcba-aid485353 | 603 | 328,042 | PLP | protease
pcba-aid485360 | 1485 | 223,830 | L3MBTL1 | protein-protein interaction
pcba-aid485364 | 10,700 | 345,950 | TGR | other enzyme
pcba-aid485367 | 557 | 330,124 | PFK | other enzyme
pcba-aid492947 | 80 | 330,601 | beta2-AR | GPCR
pcba-aid493208 | 342 | 43,647 | mTOR | protein kinase
pcba-aid504327 | 759 | 380,820 | GCN5L2 | other enzyme
pcba-aid504332 | 30,586 | 317,753 | G9a | other enzyme
pcba-aid504333 | 15,670 | 341,165 | BAZ2B | protein-protein interaction
pcba-aid504339 | 16,857 | 367,661 | JMJD2A | protein-protein interaction
pcba-aid504444 | 7390 | 353,475 | Nrf2 | transcription factor
pcba-aid504466 | 4169 | 325,944 | HEK293T-ELG1-luc | viability
pcba-aid504467 | 7647 | 322,464 | ELG1 | promoter
pcba-aid504706 | 201 | 321,230 | p53 | miscellaneous
pcba-aid504842 | 101 | 329,517 | Mm-CPN | other enzyme
pcba-aid504845 | 104 | 385,400 | RGS4 | miscellaneous
pcba-aid504847 | 3515 | 390,525 | VDR | transcription factor
pcba-aid504891 | 34 | 383,652 | Pin1 | other enzyme
pcba-aid540276* | 4494 | 279,673 | Marburg virus | miscellaneous
pcba-aid540317 | 2126 | 381,226 | HP1-beta | protein-protein interaction
pcba-aid588342* | 25,034 | 335,826 | luciferase | other enzyme
pcba-aid588453* | 3921 | 382,731 | TrxR1 | other enzyme
pcba-aid588456* | 51 | 386,206 | TrxR1 | other enzyme
pcba-aid588579 | 1987 | 393,298 | DNA polymerase kappa | other enzyme
pcba-aid588590 | 3936 | 382,117 | DNA polymerase iota | other enzyme
pcba-aid588591 | 4715 | 383,994 | DNA polymerase eta | other enzyme
pcba-aid588795 | 1308 | 384,951 | FEN1 | other enzyme
pcba-aid588855 | 4894 | 398,438 | Smad3 | transcription factor
pcba-aid602179 | 364 | 387,230 | IDH1 | other enzyme
pcba-aid602233 | 165 | 380,904 | PGK | other enzyme
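Because each assay in these tables measures a different, partially overlapping set of compounds, the natural training structure for a multitask network is a sparse compound-by-task label matrix with a parallel missingness mask. A minimal sketch follows, with hypothetical per-assay result dictionaries standing in for parsed PubChem records; the compound identifiers are illustrative.

```python
import numpy as np

# Hypothetical parsed records: assay id -> {compound id: 1 (active) or 0 (inactive)}.
assays = {
    "pcba-aid602179": {"CID1": 1, "CID2": 0, "CID3": 0},
    "pcba-aid602233": {"CID2": 1, "CID4": 0},
}
compounds = sorted({cid for results in assays.values() for cid in results})
tasks = sorted(assays)

labels = np.zeros((len(compounds), len(tasks)))   # active/inactive where measured
mask = np.zeros_like(labels, dtype=bool)          # True where a measurement exists
for j, task in enumerate(tasks):
    for cid, y in assays[task].items():
        i = compounds.index(cid)
        labels[i, j] = y
        mask[i, j] = True

print(labels)
print(mask.mean())  # fraction of (compound, task) pairs actually measured
```

The same (labels, mask) pair is what a masked multitask loss, like the one sketched earlier, consumes during training.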