# End-to-End Training of Deep Visuomotor Policies

Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
arXiv:1504.00702 (cs.LG, cs.CV, cs.RO), http://arxiv.org/pdf/1504.00702

Abstract: Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.

…an image without direct supervision in image space. Our pose estimation experiments, discussed in Section 5.2, show that this network can learn useful visual features using only 3D position information provided by the robot, and no camera calibration. Further training the network with guided policy search to directly output motor torques causes it to acquire task-specific visual features. Our experiments in Section 6.4 show that this improves performance beyond the level achieved with features trained only for pose estimation.
…the form $a_{cij} = \max(0, z_{cij})$ for each channel $c$ and each pixel coordinate $(i,j)$. The third convolutional layer contains 32 response maps with resolution 109 x 109. These response maps are passed through a spatial softmax function of the form $s_{cij} = e^{a_{cij}} / \sum_{i'j'} e^{a_{ci'j'}}$. Each output channel of the softmax is a probability distribution over the location of a feature in the image. To convert from this distribution to a coordinate representation $(f_{cx}, f_{cy})$, the network calculates the expected image position of each feature, yielding a 2D coordinate for each channel: $f_{cx} = \sum_{ij} s_{cij} x_{ij}$ and $f_{cy} = \sum_{ij} s_{cij} y_{ij}$, where $(x_{ij}, y_{ij})$ is the image-space position of the point $(i,j)$ in the response map. Since this is a linear operation, it corresponds to a fixed, sparse fully connected layer whose weights are given by the image-space coordinates $x_{ij}$ and $y_{ij}$. The combination of the spatial softmax and expectation operator implements a kind of soft-argmax. The spatial feature points $(f_{cx}, f_{cy})$ are concatenated with the robot's configuration and fed into two fully connected layers, each with 40 rectified units, followed by linear connections to the torques. The full network contains about 92,000 parameters, of which 86,000 are in the convolutional layers.
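For concreteness, the spatial softmax and expectation (soft-argmax) can be written in a few lines of NumPy. This is an illustrative sketch rather than the paper's implementation: the pixel grid normalized to [-1, 1] and the example shapes are assumptions, and any fixed linear map of the pixel indices is equivalent since the expectation is linear.

```python
import numpy as np

def spatial_soft_argmax(activations):
    """Map conv activations of shape (C, H, W) to one expected 2D point per channel.

    Implements s_cij = exp(a_cij) / sum_{i'j'} exp(a_ci'j') followed by the
    expected image-space coordinates f_cx, f_cy under that distribution.
    """
    C, H, W = activations.shape
    flat = activations.reshape(C, -1)
    flat = flat - flat.max(axis=1, keepdims=True)            # numerical stability
    probs = (np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)).reshape(C, H, W)
    # Pixel coordinates, normalized to [-1, 1] (an assumption; any fixed linear
    # map of the indices (i, j) gives an equivalent soft-argmax).
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    fx = (probs * xs).sum(axis=(1, 2))                       # f_cx = sum_ij s_cij * x_ij
    fy = (probs * ys).sum(axis=(1, 2))                       # f_cy = sum_ij s_cij * y_ij
    return np.stack([fx, fy], axis=1)                        # shape (C, 2)

# Example with the paper's dimensions: 32 response maps of size 109x109.
points = spatial_soft_argmax(np.random.randn(32, 109, 109))
print(points.shape)  # (32, 2)
```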
The spatial softmax and the expected position computation serve to convert pixel-wise representations in the convolutional layers to spatial coordinate representations, which can be manipulated by the fully connected layers into 3D positions or motor torques. The softmax also provides lateral inhibition, which suppresses low, erroneous activations, only keeping strong activations that are more likely to be accurate. This makes our policy more robust to distractors, providing generalization to novel visual variation. We compare our architecture with more standard alternatives in Section 6.3 and evaluate robustness to visual distractors in Section 6.4. However, the proposed architecture is also in some sense more specialized for visuomotor control, in contrast to more general standard convolutional networks. For example, not all perception tasks require information that can be coherently summarized by a set of spatial locations.
# 5.2 Visuomotor Policy Training
The guided policy search trajectory optimization phase uses the full state of the system, though the final policy only uses the observations. This type of instrumented training is a natural choice for many robotics tasks, where the robot is trained under controlled conditions, but must then act intelligently in uncontrolled, real-world situations. In our tasks, the unobserved variables are the pose of a target object (e.g. the bottle on which a cap must be placed). During training, this target object is typically held in the robot's left gripper, while the robot's right arm performs the task, as shown to the right. This allows the robot to move the target through a range of known positions. The final visuomotor policy does not receive this position as input, but must instead use the camera images. Due to the modest amount of training data, distractors that are correlated with task-relevant variables can hamper generalization. For this reason, the left arm is covered with cloth to prevent the policy from associating its appearance with the object's position.
While we can train the visuomotor policy entirely from scratch, the algorithm would spend a large number of iterations learning basic visual features and arm motions that can more efficiently be learned by themselves, before being incorporated into the policy. To speed up learning, we initialize both the vision layers in the policy and the trajectory distributions for guided policy search by leveraging the fully observed training setup. To initialize the vision layers, the robot moves the target object through a range of random positions, recording camera images and the object's pose, which is computed automatically from the pose of the gripper. This dataset is used to train a pose regression CNN, which consists of the same vision layers as the policy, followed by a fully connected layer that outputs the 3D points that define the target. Since the training set is still small (we use 1000 images collected from random arm motions), we initialize the filters in the first layer with weights from the model of Szegedy et al. (2014), which is trained on ImageNet (Deng et al., 2009) classification. After training on pose regression, the weights in the convolutional layers are transferred to the policy CNN. This enables the robot to learn the appearance of the objects prior to learning the behavior.
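The pose-regression pretraining step can be summarized with a short sketch. The code below is a hypothetical PyTorch outline rather than the authors' implementation: `vision_layers`, `num_feature_points`, the data loader, and the number of 3D target points are placeholder assumptions.

```python
import torch
import torch.nn as nn

class PoseRegressionCNN(nn.Module):
    """Vision layers shared with the policy, plus a linear head that predicts the
    3D points defining the target object. `num_points=3` is an assumption."""
    def __init__(self, vision_layers, num_feature_points=32, num_points=3):
        super().__init__()
        self.vision_layers = vision_layers                    # conv layers + spatial soft-argmax
        self.head = nn.Linear(2 * num_feature_points, 3 * num_points)

    def forward(self, image):
        feature_points = self.vision_layers(image)            # (N, 2 * num_feature_points)
        return self.head(feature_points)                      # predicted 3D target points

def pretrain_vision_layers(model, loader, epochs=50):
    """Fit the pose-regression CNN on the ~1000 images whose pose labels are
    computed from the gripper pose."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for image, target_points in loader:
            loss = loss_fn(model(image), target_points)
            opt.zero_grad()
            loss.backward()
            opt.step()

# After pretraining, copy the convolutional weights into the policy network, e.g.
#   policy.vision_layers.load_state_dict(pose_cnn.vision_layers.state_dict())
```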
To initialize the linear-Gaussian controllers for each of the initial states, we take 15 iterations of guided policy search without optimizing the visuomotor policy. This allows for much faster training in the early iterations, when the trajectories are not yet successful, and optimizing the full visuomotor policy is unnecessarily time consuming. Since we still want the trajectories to arrive at compatible strategies for each target position, we replace the visuomotor policy during these iterations with a small network that receives the full state, which consisted of two layers with 40 rectified linear hidden units in our experiments. This network serves only to constrain the trajectories and avoid divergent behaviors from emerging for similar initial states, which would make subsequent policy learning difficult.
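As a point of reference, this stand-in network is just a small fully connected network on the true state; a minimal PyTorch sketch follows, with the state and action dimensions left as placeholders.

```python
import torch.nn as nn

def make_pretraining_policy(state_dim, action_dim, hidden_dim=40):
    """Small full-state network that stands in for the visuomotor policy during the
    first 15 iterations: two hidden layers of 40 rectified linear units, with a
    linear output to the torques. `state_dim` and `action_dim` are placeholders."""
    return nn.Sequential(
        nn.Linear(state_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, action_dim),
    )
```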
After initialization, we train the full visuomotor policy with guided policy search. During the supervised policy optimization phase, the fully connected motor control layers are first optimized by themselves, since they are not initialized with pretraining. This can be done very quickly because these layers are small. Then, the entire network is further optimized end-to-end. We found that first training the upper layers before end-to-end optimization prevented the convolutional layers from forgetting useful features learned during pretraining, when the error signal due to the untrained upper layers is very large. The entire pretraining scheme is summarized in the diagram on the right. Note that the trajectories can be pretrained in parallel with the vision layer pretraining, which does not require access to the physical system. Furthermore, the entire initialization procedure does not use any additional information that is not already available from the robot.
[Diagram: pretraining overview. Collecting visual pose data and pretraining the trajectories require the robot; the pose data are used to train the pose CNN, yielding initial visual features, while trajectory pretraining yields initial trajectories; both feed into end-to-end training of the final policy.]
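The staged supervised optimization described above (motor layers first, then the whole network end-to-end) can be sketched as follows. This is a schematic outline, not the authors' code: `policy.vision_layers`, `policy.motor_layers`, the iteration counts, learning rates, and the `supervised_step` callback standing in for one step of the guided policy search supervised objective are all assumptions.

```python
import torch

def staged_policy_optimization(policy, supervised_step, fc_iters=5000, full_iters=20000):
    """Two-stage supervised policy optimization.

    Assumes `policy.vision_layers` holds the pretrained convolutional layers,
    `policy.motor_layers` the fully connected control layers, and that
    `supervised_step(optimizer)` performs one step on the supervised objective
    (omitted here). Iteration counts and learning rates are placeholders.
    """
    # Stage 1: train only the small fully connected motor layers, keeping the
    # pretrained vision layers frozen so their features survive the initially
    # large error signal from the untrained upper layers.
    for p in policy.vision_layers.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(policy.motor_layers.parameters(), lr=1e-3)
    for _ in range(fc_iters):
        supervised_step(optimizer)

    # Stage 2: unfreeze everything and fine-tune the whole network end-to-end.
    for p in policy.vision_layers.parameters():
        p.requires_grad = True
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
    for _ in range(full_iters):
        supervised_step(optimizer)
```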
# 6. Experimental Evaluation
In this section, we present a series of experiments aimed at evaluating our approach and answering the following questions:
1. How does the guided policy search algorithm compare to other policy search methods for training complex, high-dimensional policies, such as neural networks?
2. Does our trajectory optimization algorithm work on a real robotic platform with unknown dynamics, for a range of different tasks?
3. How does our spatial softmax architecture compare to other, more standard convolutional neural network architectures?
4. Does training the perception and control systems in a visuomotor policy jointly end-to-end provide better performance than training each component separately?
Evaluating a wide range of policy search algorithms on a real robot would be extremely time consuming, particularly for methods that require a large number of samples. We therefore answer question (1) by using a physical simulator and simpler policies that do not use vision. This also allows us to test the generality of guided policy search on tasks that include manipulation, walking, and swimming. To answer question (2), we present a wide range of experiments on a PR2 robot. These experiments allow us to evaluate the sample efficiency of our trajectory optimization algorithm. To address question (3), we compare a range of different policy architectures on the task of localizing a target object (the cube in the shape sorting cube task). Since localizing the target object is a prerequisite for completing the shape sorting cube task, this serves as a good proxy for evaluating different architectures. Finally, we answer the last and most important question (4) by training visuomotor policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, fitting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. These tasks are illustrated in Figure 8.
# 6.1 Simulated Comparisons to Prior Policy Search Methods
In this section, we compare our method against prior policy search techniques on a range of simulated robotic control tasks. These results previously appeared in our conference paper that introduced the trajectory optimization procedure with local linear models (Levine and Abbeel, 2014). In these tasks, the state $x_t$ consists of the joint angles and velocities of each robot, and the actions $u_t$ consist of the torques at each joint. The neural network policies used one hidden layer and soft rectifier nonlinearities of the form $a = \log(1 + \exp(z))$. Since these policies use the state as input, they only have a few hundred parameters, far fewer than our visuomotor policies. However, even this number of parameters can pose a major challenge for prior policy search methods (Deisenroth et al., 2013).
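A policy of this form is small enough to write out directly. The sketch below uses the stated soft rectifier $a = \log(1 + \exp(z))$; the layer sizes are illustrative (chosen so the total comes to a few hundred parameters), not the exact dimensions used in the experiments.

```python
import numpy as np

def soft_rectifier(z):
    # a = log(1 + exp(z)), computed stably via logaddexp.
    return np.logaddexp(0.0, z)

class SmallStatePolicy:
    """One-hidden-layer policy mapping joint angles and velocities to joint torques.
    With these illustrative sizes the network has roughly 450 parameters, on the
    order of the few hundred used for the simulated tasks."""
    def __init__(self, state_dim=14, hidden_dim=20, action_dim=7, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = 0.1 * rng.standard_normal((hidden_dim, state_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = 0.1 * rng.standard_normal((action_dim, hidden_dim))
        self.b2 = np.zeros(action_dim)

    def __call__(self, x):
        h = soft_rectifier(self.W1 @ x + self.b1)
        return self.W2 @ h + self.b2          # mean torque command

policy = SmallStatePolicy()
torques = policy(np.zeros(14))
```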
Experimental tasks. We simulated 2D and 3D peg insertion, octopus arm control, and planar swimming and walking. The difficulty in the peg insertion tasks stems from the need to align the peg with the slot and the complex contacts between the peg and the walls, which result in discontinuous dynamics. Octopus arm control involves moving the tip of a flexible arm to a goal position (Engel et al., 2005). The challenge in this task stems from its high dimensionality: the arm has 25 degrees of freedom, corresponding to 50 state dimensions. The swimming task requires controlling a three-link snake, and the walking task requires a seven-link biped to maintain a target velocity. The challenge in these tasks comes from underactuation. Details of the simulation and cost for each task are in Appendix B.1.
Figure 4: Results for learning linear-Gaussian controllers for 2D and 3D insertion, octopus arm, and swimming. Our approach uses fewer samples and finds better solutions than prior methods, and the GMM further reduces the required sample count. Images in the lower-right show the last time step for each system at several iterations of our method, with red lines indicating end effector trajectories.
Prior methods. We compare to REPS (Peters et al., 2010), reward-weighted regression (RWR) (Peters and Schaal, 2007; Kober and Peters, 2009), the cross-entropy method (CEM) (Rubinstein and Kroese, 2004), and PILCO (Deisenroth and Rasmussen, 2011). We also use iLQG (Li and Todorov, 2004) with a known model as a baseline, shown as a black horizontal line in all plots. REPS is a model-free method that, like our approach, enforces a KL-divergence constraint between the new and old policy. We compare to a variant of REPS that also fits linear dynamics to generate 500 pseudo-samples (Lioutikov et al., 2014), which we label "REPS (20 + 500)." RWR is an EM algorithm that fits the policy to previous samples weighted by the exponential of their reward, and CEM fits the policy to the best samples in each batch. With Gaussian trajectories, CEM and RWR only differ in the weights. These methods represent a class of RL algorithms that fit the policy to weighted
samples, including PoWER and PI2 (Kober and Peters, 2009; Theodorou et al., 2010; Stulp and Sigaud, 2012). PILCO is a model-based method that uses a Gaussian process to learn a global dynamics model that is used to optimize the policy. We used the open-source implementation of PILCO provided by the authors. Both REPS and PILCO require solving large nonlinear optimizations at each iteration, while our method does not. Our method used 5 rollouts with the Gaussian mixture model prior, and 20 without. Due to its computational cost, PILCO was provided with 5 rollouts per iteration, while other prior methods used 20 and 100. For all prior methods with free hyperparameters (such as the fraction of elites for CEM), we performed hyperparameter sweeps and chose the most successful settings for the comparison.
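The distinction between RWR and CEM weighting is easy to make concrete: both fit a Gaussian to weighted samples and differ only in how the weights are computed. The snippet below is a generic illustration with made-up temperature and elite-fraction values, not the specific implementations compared in the paper.

```python
import numpy as np

def rwr_weights(rewards, temperature=1.0):
    # Reward-weighted regression: weights proportional to exp(reward / temperature).
    w = np.exp((rewards - rewards.max()) / temperature)
    return w / w.sum()

def cem_weights(rewards, elite_frac=0.1):
    # Cross-entropy method: uniform weight on the best elite_frac samples, zero elsewhere.
    n_elite = max(1, int(elite_frac * len(rewards)))
    w = np.zeros_like(rewards)
    w[np.argsort(rewards)[-n_elite:]] = 1.0
    return w / w.sum()

def fit_gaussian(samples, weights):
    # Weighted maximum-likelihood Gaussian fit; with Gaussian policies, RWR and CEM
    # differ only in how `weights` are computed.
    mean = weights @ samples
    centered = samples - mean
    cov = (weights[:, None] * centered).T @ centered
    return mean, cov

samples = np.random.randn(100, 5)        # e.g. sampled controller parameters
rewards = -np.sum(samples ** 2, axis=1)  # toy reward
mean_rwr, _ = fit_gaussian(samples, rwr_weights(rewards))
mean_cem, _ = fit_gaussian(samples, cem_weights(rewards))
```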
Gaussian trajectory distributions. In the first set of comparisons, we evaluate only the trajectory optimization procedure for training linear-Gaussian controllers under unknown dynamics to determine its sample-efficiency and applicability to complex, high-dimensional problems.
Figure 5: Comparison on neural network policies. For insertion, the policy was trained to search for an unknown slot position on four slot positions (shown above). Generalization to new positions is graphed with dashed lines. Note how the end effector (red) follows the surface to find the slot, and how the swimming gait is smoother due to the stationary policy.
The results of this comparison for the peg insertion, octopus arm, and swimming tasks appear in Figure 4. The horizontal axis shows the total number of samples, and the vertical axis shows the minimum distance between the end of the peg and the bottom of the slot, the distance to the target for the octopus arm, or the total distance travelled by the swimmer. Since the peg is 0.5 units long, distances above this amount correspond to controllers that cannot perform an insertion. Our method learned much more effective controllers with fewer samples, especially when using the Gaussian mixture model prior. On 3D insertion, it outperformed the iLQG baseline, which used a known model. Contact discontinuities cause problems for derivative-based methods like iLQG, as well as methods like PILCO that learn a smooth global dynamics model. We use a time-varying local model, which preserves more detail, and fitting the model to samples has a smoothing effect that mitigates discontinuity issues. Prior policy search methods could servo to the hole, but were unable to insert the peg. On the octopus arm, our method succeeded despite the high dimensionality of the state and action spaces.1 Our method also successfully learned a swimming gait,
while prior model-free methods could not initiate forward motion. PILCO also learned an effective gait due to the smooth dynamics of this task, but its GP-based optimization required orders of magnitude more computation time than our method, taking about 50 minutes per iteration. In the case of prior model-free methods, the high dimensionality of the time-varying linear-Gaussian controllers likely caused considerable difficulty (Deisenroth et al., 2013), while our approach exploits the structure of linear-Gaussian controllers for efficient learning.
1. The high dimensionality of the octopus arm made it difficult to run PILCO, though in principle, such methods should perform well on this task given the arm's smooth dynamics.
Neural network policies. In the second set of comparisons, shown in Figure 5, we compare guided policy search to RWR and CEM2 on the challenging task of training high-dimensional neural network policies for the peg insertion and locomotion tasks. The variant of guided policy search used in this comparison differs somewhat from the version described in Section 4, in that it used a simpler dual gradient descent formulation, rather than BADMM. In practice, we found the performance of these methods to be very similar, though the BADMM variant was substantially faster and easier to implement.
On swimming, our method achieved similar performance to the linear-Gaussian case, but since the neural network policy was stationary, the resulting gait was much smoother. Previous methods could only solve this task with 100 samples per iteration, with RWR eventually obtaining a distance of 0.5m after 4000 samples, and CEM reaching 2.1m after 3000. Our method was able to reach such distances with many fewer samples. Following prior work (Levine and Koltun, 2013a), the walker trajectory was initialized from a demonstration, which was stabilized with simple linear feedback. The RWR and CEM policies were initialized with samples from this controller to provide a fair comparison. The graph shows the average distance travelled on rollouts that did not fall, and shows that only our method was able to learn walking policies that succeeded consistently.
On peg insertion, the neural network was trained to insert the peg without precise knowledge of the position of the hole, resulting in a partially observed problem. The holes were placed in a region of radius 0.2 units in 2D and 0.1 units in 3D. The policies were trained on four different hole positions, and then tested on four new hole positions to evaluate generalization. The hole position was not provided to the neural network, and the policies therefore had to search for the hole, with only joint angles and velocities as input. Only our method could acquire a successful strategy to locate both the training and test holes, although RWR was eventually able to insert the peg into one of the four holes in 2D. These comparisons show that training even medium-sized neural network policies for continuous control tasks with a limited number of samples is very difficult for many prior policy search algorithms. Indeed, it is generally known that model-free policy search methods struggle with policies that have over 100 parameters (Deisenroth et al., 2013). In subsequent sections, we will evaluate our method on real robotic tasks, showing that it can scale from these simulated tasks all the way up to end-to-end learning of visuomotor control.
# 6.2 Learning Linear-Gaussian Controllers on a PR2 Robot
In this section, we demonstrate the range of manipulation tasks that can be learned using our trajectory optimization algorithm on a real PR2 robot. These experiments previously appeared in our conference paper on guided policy search (Levine et al., 2015). Since performing trajectory optimization is a prerequisite for guided policy search to learn effective visuomotor policies, it is important to evaluate that our trajectory optimization can learn a wide variety of robotic manipulation tasks under unknown dynamics. The tasks in these experiments are shown in Figure 6, while Figure 7 shows the learning curves for each task.
2. PILCO cannot optimize neural network policies, and we could not obtain reasonable results with REPS. Prior applications of REPS generally focus on simpler, lower-dimensional policy classes (Peters et al., 2010; Lioutikov et al., 2014).
Figure 6: Tasks for linear-Gaussian controller evaluation: (a) stacking lego blocks on a fixed base, (b) onto a free-standing block, (c) held in both grippers; (d) threading wooden rings onto a peg; (e) attaching the wheels to a toy airplane; (f) inserting a shoe tree into a shoe; (g,h) screwing caps onto pill bottles and (i) onto a water bottle.
For all robotic experiments in this article, the tasks were learned entirely from scratch, with the initialization of the controllers $p(u_t|x_t)$ described in Appendix B.2. The number of samples required to learn each controller is around 20-25, substantially lower than many prior policy search methods in the literature (Peters and Schaal, 2008; Kober et al., 2010b; Theodorou et al., 2010; Deisenroth et al., 2013). Total learning time was about ten minutes for each task, of which only 3-4 minutes involved system interaction. The rest of the time was spent resetting the robot to the initial state and on computation.
[Plot: linear-Gaussian controller learning curves, showing distance to target versus number of samples for the lego block (fixed, free, and held in hand), ring on peg, toy airplane, shoe tree, pill bottle, and water bottle tasks.]
Figure 7: Distance to target point during training of linear-Gaussian controllers. The actual target may differ due to perturbations. Error bars indicate one standard deviation.
The linear-Gaussian controllers are optimized for a specific condition, e.g. a specific position of the target lego block. To evaluate their robustness to errors in the specified target position, we conducted experiments on the lego block and ring tasks where the target object (the lower block and the peg) was perturbed at each trial during training, and then tested with various perturbations. For each task, controllers were trained with Gaussian perturbations with standard deviations of 0, 1, and 2 cm in the position of the target object, and each controller was tested with perturbations of radius 0, 1, 2, and 3 cm. Note that with a radius of 2 cm, the peg would be placed about one ring-width away from the expected position. The results are shown in Table 2. All controllers were robust to perturbations of 1 cm, and would often succeed at 2 cm. Robustness increased slightly when more noise was injected during training, but even controllers trained without noise exhibited considerable robustness, since the linear-Gaussian controllers themselves
We also evaluated a kinematic baseline for each perturbation level, which planned a straight path from a point 5 cm above the target to the expected (unperturbed) target location. This baseline was only able to place the lego block in the absence of perturbations. The rounded top of the peg provided an easier condition for the baseline, with occasional successes at higher perturbation levels. Our controllers outperformed the baseline by a wide margin.
All of the robotic experiments discussed in this section may be viewed in the corresponding supplementary video, available online: http://rll.berkeley.edu/icra2015gps. A video illustration of the visuomotor policies, discussed in the following sections, is also available: http://sites.google.com/site/visuomotorpolicy.
| training perturbation | lego block: 0 cm | 1 cm | 2 cm | 3 cm | ring on peg: 0 cm | 1 cm | 2 cm | 3 cm |
|---|---|---|---|---|---|---|---|---|
| 0 cm | 5/5 | 5/5 | 3/5 | 2/5 | 5/5 | 5/5 | 0/5 | 0/5 |
| 1 cm | 5/5 | 5/5 | 3/5 | 2/5 | 5/5 | 5/5 | 3/5 | 0/5 |
| 2 cm | 5/5 | 5/5 | 5/5 | 3/5 | 5/5 | 5/5 | 3/5 | 0/5 |
| kinematic baseline | 5/5 | 0/5 | 0/5 | 0/5 | 5/5 | 3/5 | 0/5 | 0/5 |
Table 2: Success rates of linear-Gaussian controllers under target object perturbation.

# 6.3 Spatial Softmax CNN Architecture Evaluation
In this section, we evaluate the neural network architecture that we propose in Section 5.1 in comparison to more standard convolutional networks. To isolate the architectures from other confounding factors, we measure their accuracy on the pose estimation pretraining task described in Section 5.2. This is a reasonable proxy for evaluating how well the network can overcome two major challenges in visuomotor learning: the ability to handle relatively small datasets without overfitting, and the capability to learn tasks that are inherently spatial. We compare to a network where the expectation operator after the softmax is replaced with a learned fully connected layer, as is standard in the literature, a network where both the softmax and the expectation operators are replaced with a fully connected layer, and a version of this network that also uses 3 × 3 max pooling with stride 2 at the first two layers. These alternative architectures have many more parameters, since the fully connected layer takes the entire bank of response maps from the third convolutional layer as input. Pooling helps to reduce the number of parameters, but not to the same degree as the spatial softmax and expectation operators in our architecture.
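To make the role of these operators concrete, the following minimal NumPy sketch (not the implementation used in our experiments; the function name and shapes are illustrative assumptions) shows how a spatial softmax followed by an expectation over pixel coordinates reduces each response map to a single 2D feature point.

```python
import numpy as np

def spatial_feature_points(response_maps):
    """Convert conv response maps (C, H, W) into C soft-argmax feature points.

    Each channel is passed through a spatial softmax, and the expected
    image-space (x, y) coordinate under that distribution is returned.
    """
    C, H, W = response_maps.shape
    flat = response_maps.reshape(C, -1)
    # Spatial softmax per channel (subtract the max for numerical stability).
    flat = flat - flat.max(axis=1, keepdims=True)
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    probs = probs.reshape(C, H, W)
    # Expected pixel coordinates: a fixed linear map, i.e. a soft-argmax.
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    fx = (probs * xs).sum(axis=(1, 2))
    fy = (probs * ys).sum(axis=(1, 2))
    return np.stack([fx, fy], axis=1)  # (C, 2) feature points

# Example: 32 channels of 109x109 response maps -> 32 (x, y) feature points.
points = spatial_feature_points(np.random.randn(32, 109, 109))
print(points.shape)  # (32, 2)
```

Because the expectation is a fixed linear map over the softmax output, the layers above see only two numbers per channel rather than the full response maps, which is what keeps the parameter count, and hence overfitting, low relative to the fully connected baselines.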
The results in Table 3 indicate that using the softmax and expectation operators improves pose estimation accuracy substantially. Our network is able to outperform the more standard architectures because it is forced by the softmax and expectation operators to learn feature points, which provide a concise representation suitable for spatial inference. Since most of the parameters in this architecture are in the convolutional layers, which benefit from extensive weight sharing, overfitting is also greatly reduced. By removing pooling, our network also maintains higher resolution in the convolutional layers, improving spatial accuracy. Although we did attempt to regularize the larger standard architectures with higher weight decay and dropout, we did not observe a significant improvement on this dataset. We also did not extensively optimize the parameters of this network, such as filter size and number of channels, and investigating these design decisions further would be valuable future work.
| network architecture | test error (cm) |
|---|---|
| softmax + feature points (ours) | 1.30 ± 0.73 |
| softmax + fully connected layer | 2.59 ± 1.19 |
| fully connected layer | 4.75 ± 2.29 |
| max-pooling + fully connected | 3.71 ± 1.73 |
Figure 8: Illustration of the tasks in our visuomotor policy experiments, showing the variation in the position of the target for the hanger, cube, and bottle tasks, as well as two of the three grasps for the hammer, which also included variation in position (not shown).
# 6.4 Deep Visuomotor Policy Evaluation
In this section, we present an evaluation of our full visuomotor policy training algorithm on a PR2 robot. The aim of this evaluation is to answer the following question: does training the perception and control systems in a visuomotor policy jointly end-to-end provide better performance than training each component separately?
Experimental tasks. We trained policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, fitting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. The cost function for these tasks encourages low distance between three points on the end-effector and corresponding target points, low torques, and, for the bottle task, spinning the wrist. The equations for these cost functions and the details of each task are presented in Appendix B.2. The tasks are illustrated in Figure 8. Each task involved variation of 10-20 cm in each direction in the position of the target object (the rack, shape sorting cube, nail, and bottle). In addition, the coat hanger and hammer tasks were trained with two and three grasps, respectively. The current angle of the grasp was not provided to the policy, but had to be inferred from observing the robot's gripper in the camera images. All tasks used the same policy architecture and model parameters.
Experimental conditions. We evaluated the visuomotor policies in three conditions: (1) the training target positions and grasps, (2) new target positions not seen during training and, for the hammer, new grasps (spatial test), and (3) training positions with visual distractors (visual test). A selection of these experiments is shown in the supplementary video.3 For the visual test, the shape sorting cube was placed on a table rather than held in the gripper, the coat hanger was placed on a rack with clothes, and the bottle and hammer tasks were done in the presence of clutter. Illustrations of this test are shown in Figure 9.

3. The video can be viewed at http://sites.google.com/site/visuomotorpolicy
Comparison. The success rates for each test are shown in Figure 9. We compared to two baselines, both of which train the vision layers in advance for pose prediction, instead of training the entire policy end-to-end. The features baseline discards the last layer of the pose predictor and uses the feature points, resulting in the same architecture as our policy, while the prediction baseline feeds the predicted pose into the control layers. The pose prediction baseline is analogous to a standard modular approach to policy learning, where the vision system is first trained to localize the target, and the policy is trained on top of it. This variant achieves poor performance. As discussed in Section 6.3, the pose estimate is accurate to about 1 cm. However, unlike the tasks in Section 6.2, where robust controllers could succeed even with inaccurate perception, many of these tasks have tolerances of just a few millimeters. In fact, the pose prediction baseline is only successful on the coat hanger, which requires comparatively little accuracy.
Millimeter accuracy is difficult to achieve even with calibrated cameras and checkerboards. Indeed, prior work has reported that the PR2 can maintain a camera-to-end-effector accuracy of about 2 cm during open loop motion (Meeussen et al., 2010). This suggests that the failure of this baseline is not atypical, and that our visuomotor policies are learning visual features and control strategies that improve the robot's accuracy. When provided with pose estimation features, the policy has more freedom in how it uses the visual information, and achieves somewhat higher success rates. However, full end-to-end training performs significantly better, achieving high accuracy even on the challenging bottle task, and successfully adapting to the variety of grasps on the hammer task. This suggests that, although the vision layer pretraining is clearly beneficial for reducing computation time, it is not sufficient by itself for discovering good features for visuomotor policies.
Visual distractors. The policies exhibit moderate tolerance to distractors that are visually separated from the target object. This is enabled in part by the spatial softmax, which has a lateral inhibition effect that suppresses non-maximal activations. Since distractors are unlikely to activate each feature as much as the true object, their activations are therefore suppressed. However, as expected, the learned policies tend to perform poorly under drastic changes to the backdrop, or when the distractors are adjacent to or occluding the manipulated objects, as shown in the supplementary video. A standard solution to this issue is to expose the policy to a greater variety of visual situations during training. This issue could also be mitigated by artificially augmenting the image samples with synthetic transformations, as discussed in prior work in computer vision (Simard et al., 2003), or even incorporating ideas from transfer and semi-supervised learning.
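As a rough illustration of the augmentation idea mentioned above, the sketch below applies simple synthetic transformations, random translations and brightness jitter, to training images; the particular transformations, parameters, and function names are our own assumptions rather than a procedure used in these experiments, and any position labels would need to be shifted consistently with the images.

```python
import numpy as np

def augment_image(image, rng, max_shift=10, max_brightness=0.1):
    """Apply a random translation and brightness jitter to an (H, W, 3) image in [0, 1]."""
    H, W, _ = image.shape
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.zeros_like(image)
    # Copy the image shifted by (dy, dx), leaving the exposed border at zero.
    src_y = slice(max(0, -dy), min(H, H - dy))
    src_x = slice(max(0, -dx), min(W, W - dx))
    dst_y = slice(max(0, dy), min(H, H + dy))
    dst_x = slice(max(0, dx), min(W, W + dx))
    shifted[dst_y, dst_x] = image[src_y, src_x]
    # Global brightness jitter.
    jitter = rng.uniform(-max_brightness, max_brightness)
    return np.clip(shifted + jitter, 0.0, 1.0)

rng = np.random.default_rng(0)
augmented = augment_image(np.random.rand(240, 240, 3), rng)
```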
[Figure 9, right: success-rate table for the end-to-end, pose features, and pose prediction policies on the training, spatial test, and visual test conditions (18, 24, and 18 trials for the coat hanger and shape cube; 45, 60, and 60 for the toy hammer and bottle cap). Recoverable entries include 100%, 88.9%, and 55.6% for the coat hanger, 91.1%, 62.2%, and 8.9% for the toy hammer, and 0% for the pose prediction baseline on the shape cube.]

Success rates on training positions, on novel test positions, and in the presence of visual distractors. The number of trials per test is shown in parentheses.

Figure 9: Training and visual test scenes as seen by the policy (left), and experimental results (right). The hammer and bottle images were cropped for visualization only.

# 6.5 Features Learned with End-to-End Training

The visual processing layers of our architecture automatically learn feature points using the spatial softmax and expectation operators. These feature points encapsulate all of the visual information received by the motor layers of the policy. In Figure 10, we show the feature points discovered by our visuomotor policy through guided policy search. Each policy learns features on the target object and the robot manipulator, both clearly relevant to task execution. The policy tends to pick out robust, distinctive features on the objects, such as the left pole of the clothes rack, the left corners of the shape-sorting cube, and the bottom-left corner of the toy tool bench. In the bottle task, the end-to-end trained policy outputs points on both sides of the bottle, including one on the cap, while the pose prediction network only finds points on the right edge of the bottle.
In Figure 11, we compare the feature points learned through guided policy search to those learned by a CNN trained for pose prediction. After end-to-end training, the policy acquired a distinctly different set of feature points compared to the pose prediction CNN used for initialization. The end-to-end trained model finds more feature points on task-relevant objects and fewer points on background objects. This suggests that the policy improves its performance by acquiring goal-driven visual features that differ from those learned for object localization.
The feature point representation is very simple, since it assumes that the learned features are present at all times, and only one instance of each feature is ever present in the image. While this is a drastic simplification, both the pose predictor and the policy still achieve good results. A more flexible architecture that still learns a concise feature point representation could further improve policy performance. We hope to explore this in future work.
# 6.6 Computational Performance and Sample Efficiency
We used the Caffe deep learning library (Jia et al., 2014) for CNN training. Each visuomotor policy required a total of 3-4 hours of training time: 20-30 minutes for the pose prediction data collection on the robot, 40-60 minutes for the fully observed trajectory pretraining on
Figure 10: Feature points tracked by the policy during task execution for each of the four tasks. Each feature point is displayed in a different random color, with consistent coloring across images. The policy finds features on the target object and the robot gripper and arm. In the bottle cap task, note that the policy correctly ignores the distractor bottle in the background, even though it was not present during training.
(a) hanger (b) cube (c) hammer (d) bottle
Figure 11: Feature points learned for each task. For each input image, the feature points produced by the policy are shown in blue, while the feature points of the pose prediction network are shown in red. The end-to-end trained policy tends to discover more feature points on the target object and the robot arm than the pose prediction network.
the robot and offline pose pretraining (which can be done in parallel), and between 1.5 and 2.5 hours for end-to-end training with guided policy search. The coat hanger task required two iterations of guided policy search, the shape sorting cube and the hammer required three, and the bottle task required four. Only about 15 minutes of the training time consisted of executing trials on the robot. Since training was dominated by computation, we expect significant speedup from a more efficient implementation. The number of samples for training each policy is shown in Table 4. Each trial was five seconds in length, and the numbers do not include the time needed to collect about 1000 images for pretraining the visual processing layers of the policy.
| task | trajectory pretraining (trials) | end-to-end training (trials) | total |
|---|---|---|---|
| coat hanger | 120 | 36 | 156 |
| shape cube | 90 | 81 | 171 |
| toy hammer | 150 | 90 | 240 |
| bottle cap | 180 | 108 | 288 |
Table 4: Total number of trials used for learning each visuomotor policy.
# 7. Discussion and Future Work
In this paper, we presented a method for learning robotic control policies that use raw input from a monocular camera. These policies are represented by a novel convolutional neural network architecture, and can be trained end-to-end using our guided policy search algorithm, which decomposes the policy search problem into a trajectory optimization phase that uses full state information and a supervised learning phase that only uses the observations. This decomposition allows us to leverage state-of-the-art tools from supervised learning, making it straightforward to optimize extremely high-dimensional policies. Our experimental results show that our method can execute complex manipulation skills, and that end-to-end training produces significant improvements in policy performance compared to using fixed vision layers trained for pose prediction.
Although we demonstrate moderate generalization over variations in the scene, our current method does not generalize to dramatically different settings, especially when visual distractors occlude the manipulated object or break up its silhouette in ways that differ from the training. The success of CNNs on exceedingly challenging vision tasks suggests that this class of models is capable of learning invariance to irrelevant distractor features (LeCun et al., 2015), and in principle this issue can be addressed by training the policy in a variety of environments, though this poses certain logistical challenges. More practical alternatives that could be explored in future work include simultaneously training the policy on multiple robots, each of which is located in a different environment, developing more sophisticated regularization and pretraining techniques to avoid overfitting, and introducing artificial data augmentation to encourage the policy to be invariant to irrelevant clutter. However, even without these improvements, our method has numerous applications in, for example, an industrial setting where the robot must repeatedly and efficiently perform a task that requires visual feedback under moderate variation in background and clutter conditions.
Our method takes advantage of a known, fully observed state space during training. This is both a weakness and a strength. It allows us to train linear-Gaussian controllers
for guided policy search using a very small number of samples, far more efficiently than standard policy search methods. However, the requirement to observe the full state during training limits the tasks to which the method can be applied. In many cases, this limitation is minor, and the only "instrumentation" required at training is to position the objects in the scene at consistent positions. However, tasks that require, for example, manipulating freely moving objects require more extensive instrumentation, such as motion capture. A promising direction for addressing this limitation is to combine our method with unsupervised state-space learning, as proposed in several recent works, including our own (Lange et al., 2012; Watter et al., 2015; Finn et al., 2015).
In future work, we hope to explore more complex policy architectures, such as recurrent policies that can deal with extensive occlusions by keeping a memory of past observations. We also hope to extend our method to a wider range of tasks that can benefit from visual input, as well as a variety of other rich sensory modalities, including haptic input from pressure sensors and auditory input. With a wider range of sensory modalities, end-to-end training of sensorimotor policies will become increasingly important: while it is often straightforward to imagine how vision might help to localize the position of an object in the scene, it is much less apparent how sound can be integrated into robotic control. A learned sensorimotor policy would be able to naturally integrate a wide range of modalities and utilize them to directly aid in control.
# Acknowledgements
This research was funded in part by DARPA through a Young Faculty Award, the Army Research Office through the MAST program, NSF awards IIS-1427425 and IIS-1212798, the Berkeley Vision and Learning Center, and a Berkeley EECS Department Fellowship.
# Appendix A. Guided Policy Search Algorithm Details
In this appendix, we describe a number of implementation details of our BADMM-based guided policy search algorithm and our linear-Gaussian controller optimization method.
# A.1 BADMM Dual Variables and Weight Adjustment
Recall that the inner loop alternating optimization is given by
$$\theta \leftarrow \arg\min_\theta \; \sum_{t=1}^T E_{p(x_t)\pi_\theta(u_t|x_t)}\!\left[u_t^\top \lambda_{\mu t}\right] + \nu_t \phi_t^\theta(\theta, p)$$

$$p \leftarrow \arg\min_p \; \sum_{t=1}^T E_{p(x_t, u_t)}\!\left[\ell(x_t, u_t) - u_t^\top \lambda_{\mu t}\right] + \nu_t \phi_t^p(p, \theta)$$

$$\lambda_{\mu t} \leftarrow \lambda_{\mu t} + \alpha \nu_t \left(E_{\pi_\theta(u_t|x_t)p(x_t)}[u_t] - E_{p(u_t|x_t)p(x_t)}[u_t]\right)$$

We use a step size of α = 0.1 in all of our experiments, which we found to be more stable than α = 1.0. The weights νt are initialized to 0.01 and incremented based on the following schedule: at every iteration, we compute the average KL-divergence between p(ut|xt) and πθ(ut|xt) at each time step, as well as its standard deviation over time steps.
The weights νt corresponding to time steps where the KL-divergence is higher than the average are increased by a factor of 2, and the weights corresponding to time steps where the KL-divergence is two standard deviations or more below the average are decreased by a factor of 2. The rationale behind this schedule is to adjust the KL-divergence penalty to keep the policy and trajectory in agreement by roughly the same amount at all time steps. Increasing νt too quickly can lead to the policy and trajectory becoming "locked" together, which makes it difficult for the trajectory to decrease its cost, while leaving it too low requires more iterations for convergence. We found this schedule to work well across all tasks, both during trajectory pretraining and while training the visuomotor policy.
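A minimal sketch of this schedule is given below, assuming the per-time-step KL-divergence estimates are available as an array; the function name and array layout are illustrative.

```python
import numpy as np

def adjust_nu(nu, kl_per_step):
    """Per-time-step schedule for the KL penalty weights nu_t.

    Increase nu_t by a factor of 2 where the policy/trajectory KL is above
    average, and halve it where the KL is two standard deviations or more
    below average.
    """
    kl_per_step = np.asarray(kl_per_step)
    mean_kl = kl_per_step.mean()
    std_kl = kl_per_step.std()
    nu = nu.copy()
    nu[kl_per_step > mean_kl] *= 2.0
    nu[kl_per_step <= mean_kl - 2.0 * std_kl] *= 0.5
    return nu

nu = np.full(100, 0.01)             # nu_t initialized to 0.01 for T = 100 steps
kl = np.abs(np.random.randn(100))   # placeholder KL estimates
nu = adjust_nu(nu, kl)
```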
To update the dual variables λµt, we evaluate the expectations over p(xt) by using the latest batch of sampled trajectories. For each state x^i_t along these sampled trajectories, we evaluate the expectations over ut under πθ(ut|xt) and p(ut|xt), which correspond simply to the means of these conditional Gaussian distributions, in closed form.
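Under these assumptions, the dual update reduces to averaging the policy and controller mean actions over the sampled states at each time step; the sketch below is one possible transcription, with the array layout and the ανt step size taken from the update and step size discussed above.

```python
import numpy as np

def update_lambda(lam, policy_mean_u, controller_mean_u, nu, alpha=0.1):
    """BADMM dual update for lambda_{mu t}.

    policy_mean_u, controller_mean_u: arrays of shape (T, N, dU) holding the
    mean action of pi_theta(u_t|x_t^i) and p(u_t|x_t^i) at each sampled state.
    """
    # Monte Carlo estimate of the expected action gap at each time step.
    diff = policy_mean_u.mean(axis=1) - controller_mean_u.mean(axis=1)  # (T, dU)
    return lam + alpha * nu[:, None] * diff

T, N, dU = 100, 5, 7
lam = np.zeros((T, dU))
nu = np.full(T, 0.01)
lam = update_lambda(lam, np.random.randn(T, N, dU), np.random.randn(T, N, dU), nu)
```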
# A.2 Policy Variance Optimization
As discussed in Section 4, the variance of the Gaussian policy πθ(ut|ot) does not depend on the observation, though this dependence would be straightforward to add. Analyzing the objective Lθ(θ, p), we can write out only the terms that depend on Σπ:
$$L_\theta(\theta, p) = \frac{1}{2N} \sum_{i=1}^N \sum_{t=1}^T E_{p_i(x_t, o_t)}\!\left[\operatorname{tr}\!\left(C_{ti}^{-1}\Sigma^\pi\right) - \log\left|\Sigma^\pi\right|\right].$$
Differentiating and setting the derivative to zero, we obtain the following equation for Σπ:

$$\Sigma^\pi = \left(\frac{1}{NT} \sum_{i=1}^N \sum_{t=1}^T C_{ti}^{-1}\right)^{-1},$$
where the expectation under pi(xt) is omitted, since Cti does not depend on xt.
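In code, this update is simply the inverse of the averaged controller precision matrices; the sketch below assumes the Cti are stored as an array of covariance matrices, which is our own choice of data layout.

```python
import numpy as np

def optimize_policy_variance(C):
    """Closed-form policy covariance given controller covariances C of shape (N, T, dU, dU)."""
    precisions = np.linalg.inv(C)                  # C_{ti}^{-1} for every i, t
    avg_precision = precisions.mean(axis=(0, 1))   # (1 / NT) * sum_{i,t} C_{ti}^{-1}
    return np.linalg.inv(avg_precision)            # Sigma^pi

N, T, dU = 4, 100, 7
A = np.random.randn(N, T, dU, dU)
C = A @ np.swapaxes(A, -1, -2) + 0.1 * np.eye(dU)  # random SPD covariances
Sigma_pi = optimize_policy_variance(C)
```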
# A.3 Dynamics Fitting
Optimizing the linear-Gaussian controllers pi(ut|xt) that induce the trajectory distributions pi(τ) requires fitting the system dynamics pi(xt+1|xt, ut) at each iteration to samples generated on the physical system from the previous controller p̂i(ut|xt). In this section, we describe how these dynamics are fitted. As in Section 4, we drop the subscript i, since the dynamics are fitted the same way for all of the trajectory distributions.
The linear-Gaussian dynamics are defined as p(xt+1|xt, ut) = N(fxt xt + fut ut + fct, Ft), and the data that we obtain from the robot can be viewed as tuples {x^i_t, u^i_t, x^i_{t+1}}. A simple way to fit these linear-Gaussian dynamics is to use linear regression to determine fxt, fut, and fct, and fit Ft based on the errors. However, the sample complexity of linear regression scales with the dimensionality of xt. For a high-dimensional robotic system, we might need an impractically large number of samples at each iteration to obtain a good fit. However, we can observe that the dynamics at nearby time steps are strongly correlated, and we can dramatically reduce the sample complexity of the dynamics fitting by bringing in information from other time steps, and even prior iterations. We will bring in this
information by fitting a global model to all of the transitions $\{x_t^i, u_t^i, x_{t+1}^i\}$ for all $t$ and all tuples from several prior iterations (we use three prior iterations in our implementation), and then use this model as a prior for fitting the dynamics at each time step. Note that this global model does not itself need to be a good forward dynamics model; it just needs to serve as a good prior to reduce the sample complexity of linear regression.
To make it more convenient to incorporate a data-driven prior, we will first reformulate this linear regression fit and view it as fitting a Gaussian model to the dataset $\{x_t^i, u_t^i, x_{t+1}^i\}$ at each time step $t$, and then conditioning this Gaussian to obtain $p(x_{t+1}|x_t, u_t)$. While this is equivalent to linear regression, it allows us to easily incorporate a normal-inverse-Wishart prior on this Gaussian in order to bring in prior information. Let $\hat{\Sigma}$ be the empirical covariance of our dataset, and let $\hat{\mu}$ be the empirical mean. The normal-inverse-Wishart prior is defined by prior parameters $\Phi$, $\mu_0$, $m$, and $n_0$. Under this prior, the maximum a posteriori estimates for the covariance $\Sigma$ and mean $\mu$ are given by
$$\Sigma = \frac{\Phi + N\hat{\Sigma} + \frac{Nm}{N+m}(\hat{\mu} - \mu_0)(\hat{\mu} - \mu_0)^T}{N + n_0}, \qquad \mu = \frac{m\mu_0 + n_0\hat{\mu}}{m + n_0}.$$
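To make the update concrete, here is a minimal NumPy sketch of these MAP estimates. The function name and argument layout are illustrative assumptions, not code from any released implementation:

```python
import numpy as np

def niw_map_estimate(points, Phi, mu0, m, n0):
    """MAP mean/covariance of a Gaussian over stacked [x_t; u_t; x_{t+1}] samples
    under a normal-inverse-Wishart prior, following the update above.

    points: (N, d) array of samples at one time step.
    Phi, mu0, m, n0: prior parameters, e.g. built from a global model.
    """
    N = points.shape[0]
    mu_hat = points.mean(axis=0)                    # empirical mean
    diff = points - mu_hat
    Sigma_hat = diff.T.dot(diff) / N                # empirical covariance
    dev = (mu_hat - mu0)[:, None]
    Sigma = (Phi + N * Sigma_hat
             + (N * m) / (N + m) * dev.dot(dev.T)) / (N + n0)
    mu = (m * mu0 + n0 * mu_hat) / (m + n0)
    return mu, Sigma
```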
Having obtained $\Sigma$ and $\mu$, we can obtain an estimate of the dynamics $p(x_{t+1}|x_t, u_t)$ by conditioning the distribution $\mathcal{N}(\mu, \Sigma)$ on $[x_t; u_t]$, which produces linear-Gaussian dynamics $p(x_{t+1}|x_t, u_t) = \mathcal{N}(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t)$. The parameters of the normal-inverse-Wishart prior are obtained from the global model of the dynamics which, as described previously, is fitted to all available tuples $\{x_t^i, u_t^i, x_{t+1}^i\}$.
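The conditioning step uses only standard Gaussian identities. A small sketch, assuming NumPy and illustrative variable names, might look as follows:

```python
import numpy as np

def condition_dynamics(mu, Sigma, dX, dU, reg=1e-6):
    """Condition a joint Gaussian over [x_t; u_t; x_{t+1}] on [x_t; u_t] to obtain
    linear-Gaussian dynamics p(x_{t+1} | x_t, u_t) = N(f_x x_t + f_u u_t + f_c, F_t).
    """
    it = slice(0, dX + dU)                 # indices of [x_t; u_t]
    ip = slice(dX + dU, None)              # indices of x_{t+1}
    Sig_ii = Sigma[it, it] + reg * np.eye(dX + dU)
    Sig_pi = Sigma[ip, it]
    F = np.linalg.solve(Sig_ii, Sig_pi.T).T          # [f_x  f_u]
    f_c = mu[ip] - F.dot(mu[it])                     # constant term
    F_t = Sigma[ip, ip] - F.dot(Sigma[it, ip])       # conditional covariance
    return F[:, :dX], F[:, dX:], f_c, F_t
```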
The simplest prior can be obtained by fitting a Gaussian distribution to vectors $[x; u; x']$. If the mean and covariance of this data are given by $\bar{\mu}$ and $\bar{\Sigma}$, the prior is given by $\Phi = n_0\bar{\Sigma}$ and $\mu_0 = \bar{\mu}$, while $n_0$ and $m$ should be set to the number of data points in the dataset. In practice, setting $n_0$ and $m$ to 1 tends to produce better results, since the prior is fitted to many more samples than are available for linear regression at each time step. While this prior is simple, we can obtain a better prior by employing a nonlinear model.
The particular global model we use in this work is a Gaussian mixture model over vectors $[x; u; x']$. Systems of articulated rigid bodies undergoing contact dynamics, such as robots interacting with their environment, can be coarsely modeled as having piecewise linear dynamics. The Gaussian mixture model provides a good approximation for such piecewise linear systems, with each mixture element corresponding to a different linear mode (Khansari-Zadeh and Billard, 2010). Under this model, the state transition tuple is assumed to come from a distribution that depends on some hidden state $h$, which corresponds to the mixture element identity. In practice, this hidden state might correspond to the type of contact profile experienced by a robotic arm at step $t$. The prior for the dynamics fit at time step $t$ is then obtained by inferring the hidden state distribution for the transition dataset $\{x_t^i, u_t^i, x_{t+1}^i\}$, and using the mean and covariance of the corresponding mixture elements (weighted by their probabilities) to obtain $\bar{\mu}$ and $\bar{\Sigma}$. The prior parameters can then be obtained as described above.
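A rough sketch of this construction, using scikit-learn's `GaussianMixture` as a stand-in for the global model, is shown below. The responsibility-weighted moment matching is an assumption about the weighting scheme for illustration; the paper's implementation may differ in detail:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_prior(all_transitions, step_transitions, n_components=20):
    """Fit a GMM over stacked [x_t; u_t; x_{t+1}] vectors pooled across time steps
    and prior iterations, then form a prior mean/covariance for one time step by
    weighting mixture moments with the inferred responsibilities.
    """
    gmm = GaussianMixture(n_components=n_components, covariance_type='full')
    gmm.fit(all_transitions)                                  # global model
    resp = gmm.predict_proba(step_transitions).mean(axis=0)   # avg responsibilities
    mu_bar = resp.dot(gmm.means_)
    # second moment of the mixture under the responsibility weights
    second = sum(w * (C + np.outer(m, m))
                 for w, m, C in zip(resp, gmm.means_, gmm.covariances_))
    Sigma_bar = second - np.outer(mu_bar, mu_bar)
    return mu_bar, Sigma_bar
```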
In our experiments, we set the number of mixture elements for the Gaussian mixture model prior such that there were at least 40 samples per mixture element, or 20 total mixture elements, whichever was lower. In general, we did not find the performance of the method to be sensitive to this parameter, though overfitting did tend to occur in the early iterations, when the number of samples is low, if the number of mixtures was too high.
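This heuristic is simple enough to state directly in code; the following one-liner is a sketch of the rule as described, with the function name chosen for illustration:

```python
def num_mixture_elements(num_samples, samples_per_element=40, max_elements=20):
    """At least 40 samples per mixture element, capped at 20 elements total."""
    return max(1, min(max_elements, num_samples // samples_per_element))
```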
# A.4 Trajectory Optimization
In this section, we show how the LQR backward pass can be used to optimize the constrained objective in Section 4.2. The constrained trajectory optimization problem is given by
$$\min_{p(\tau) \in \mathcal{N}(\tau)} \; \mathcal{L}_p(p, \theta) \quad \text{s.t.} \quad D_{\mathrm{KL}}(p(\tau) \,\|\, \hat{p}(\tau)) \le \epsilon.$$
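Before the derivation, it may help to see the overall shape of the dual gradient descent treatment that the following paragraphs develop: the Lagrangian is minimized over $p$ for a fixed multiplier $\eta$ (with the LQR backward pass described below), and $\eta$ is then adjusted according to the KL-divergence constraint violation. The sketch below is schematic; the primal solver and KL computation are supplied by the caller, and the dual step size and update rule are illustrative assumptions rather than the exact schedule used in the paper:

```python
def dual_gradient_descent(solve_primal, kl_divergence, epsilon,
                          eta0=1.0, step=10.0, iters=10):
    """Schematic outer loop for the constrained trajectory optimization above.

    solve_primal(eta) -> p : minimizes the Lagrangian over p for fixed eta.
    kl_divergence(p) -> float : D_KL(p || p_hat) against the previous distribution.
    """
    eta = eta0
    p = None
    for _ in range(iters):
        p = solve_primal(eta)
        violation = kl_divergence(p) - epsilon
        # standard dual ascent step, projected to keep eta positive
        eta = max(1e-6, eta + step * violation)
    return p, eta
```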
The augmented Lagrangian $\mathcal{L}_p(p, \theta)$ consists of an entropy term and an expectation under $p(\tau)$ of a quantity that is independent of $p$. We can locally approximate this quantity with a quadratic by using a quadratic expansion of $\ell(x_t, u_t)$, and fitting a linear-Gaussian to $\pi_\theta(u_t|x_t)$ with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. As discussed in Section 4, $\mathcal{L}_p(p, \theta)$ can be written as the expectation of some function $c(\tau)$ that is independent of $p$, such that $\mathcal{L}_p(p, \theta) = E_{p(\tau)}[c(\tau)] - \nu_t \mathcal{H}(p(\tau))$. Specifically,
$$c(x_t, u_t) = \ell(x_t, u_t) - u_t^T \lambda_{\mu t} - \nu_t \log \pi_\theta(u_t | x_t).$$
Writing the Lagrangian of the constrained optimization, we have
$$\mathcal{L}(p) = E_{p(\tau)}\big[c(\tau) - \eta \log \hat{p}(\tau)\big] - (\eta + \nu_t)\,\mathcal{H}(p(\tau)) - \eta\epsilon,$$
where $\eta$ is the Lagrange multiplier. Note that $\mathcal{L}(p)$ is the Lagrangian of the constrained trajectory optimization, which is not related to the augmented Lagrangian $\mathcal{L}_p(p, \theta)$. Grouping the terms in the expectation and omitting constants, we can rewrite the minimization of the Lagrangian with respect to the primal variables as
$$\min_{p(\tau) \in \mathcal{N}(\tau)} \; E_{p(\tau)}\!\left[\frac{1}{\eta + \nu_t}\, c(\tau) - \frac{\eta}{\eta + \nu_t} \log \hat{p}(\tau)\right] - \mathcal{H}(p(\tau)). \tag{4}$$
Let $\hat{c}(\tau) = \frac{1}{\eta + \nu_t}\, c(\tau) - \frac{\eta}{\eta + \nu_t} \log \hat{p}(\tau)$. The above optimization corresponds to minimizing $E_{p(\tau)}[\hat{c}(\tau)] - \mathcal{H}(p(\tau))$. This type of maximum entropy problem can be solved using the LQR algorithm, and the solution is given by
$$p(u_t|x_t) = \mathcal{N}(K_t x_t + k_t;\; Q_{u,ut}^{-1}),$$
where $K_t$ and $k_t$ are the feedback and open-loop terms of the optimal linear feedback controller corresponding to the cost $\hat{c}(x_t, u_t)$ and the dynamics $p(x_{t+1}|x_t, u_t)$, and $Q_{u,ut}$ is the quadratic term in the Q-function at time step $t$. All of these terms can be obtained from a standard LQR backward pass (Li and Todorov, 2004), which we summarize below. Recall that the estimated linear-Gaussian dynamics have the form $p(x_{t+1}|x_t, u_t) = \mathcal{N}(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t)$. The quadratic cost approximation has the form
$$\hat{c}(x_t, u_t) \approx \frac{1}{2}[x_t; u_t]^T \hat{c}_{xu,xut} [x_t; u_t] + [x_t; u_t]^T \hat{c}_{xut} + \text{const},$$
where subscripts denote derivatives, e.g. $\hat{c}_{xut}$ is the gradient of $\hat{c}$ with respect to $[x_t; u_t]$, while $\hat{c}_{xu,xut}$ is the Hessian (we assume that all Taylor expansions here are recentered around zero; otherwise, the point around which the derivatives are computed must be subtracted from $x_t$ and $u_t$ in all of these equations). Under this model of the dynamics and cost function, the
optimal controller can be computed by recursively computing the quadratic Q-function and value function, starting with the last time step. These functions are given by
$$V(x_t) = \frac{1}{2} x_t^T V_{x,xt}\, x_t + x_t^T V_{xt} + \text{const}$$
$$Q(x_t, u_t) = \frac{1}{2} [x_t; u_t]^T Q_{xu,xut} [x_t; u_t] + [x_t; u_t]^T Q_{xut} + \text{const}$$
We can express them with the following recurrence, which is computed starting at the last time step t = T and moving backward through time:
$$\begin{aligned}
Q_{xu,xut} &= \hat{c}_{xu,xut} + f_{xut}^T V_{x,xt+1} f_{xut} \\
Q_{xut} &= \hat{c}_{xut} + f_{xut}^T V_{xt+1} + f_{xut}^T V_{x,xt+1} f_{ct} \\
V_{x,xt} &= Q_{x,xt} - Q_{u,xt}^T Q_{u,ut}^{-1} Q_{u,xt} \\
V_{xt} &= Q_{xt} - Q_{u,xt}^T Q_{u,ut}^{-1} Q_{ut},
\end{aligned}$$
and the optimal control law is then given by $g(x_t) = K_t x_t + k_t$, where $K_t = -Q_{u,ut}^{-1} Q_{u,xt}$ and $k_t = -Q_{u,ut}^{-1} Q_{ut}$. If, instead of simply minimizing the expected cost, we wish to optimize the maximum entropy objective in Equation (4), the optimal controller is instead linear-Gaussian, with the solution given by $p(u_t|x_t) = \mathcal{N}(K_t x_t + k_t;\; Q_{u,ut}^{-1})$, as shown in prior work (Levine and Koltun, 2013a).
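A compact NumPy sketch of this backward pass is given below, assuming the quadratic cost expansion and linearized dynamics have already been formed; array shapes and names are illustrative rather than taken from a released implementation:

```python
import numpy as np

def lqr_backward_pass(f_xu, f_c, c_xuxu, c_xu, dX, dU):
    """Sketch of the LQR backward pass implementing the recurrence above.

    f_xu[t]   : (dX, dX+dU) linearized dynamics [f_xt, f_ut]
    f_c[t]    : (dX,)       dynamics constant term
    c_xuxu[t] : (dX+dU, dX+dU) quadratic cost term (Hessian of c-hat)
    c_xu[t]   : (dX+dU,)       linear cost term (gradient of c-hat)
    Returns feedback gains K, open-loop terms k, and inv(Q_uu) covariances for
    the maximum-entropy linear-Gaussian controller.
    """
    T = len(f_xu)
    K, k, cov = [None] * T, [None] * T, [None] * T
    Vxx, Vx = np.zeros((dX, dX)), np.zeros(dX)
    ix, iu = slice(0, dX), slice(dX, dX + dU)
    for t in reversed(range(T)):
        Q_xuxu = c_xuxu[t] + f_xu[t].T.dot(Vxx).dot(f_xu[t])
        Q_xu = c_xu[t] + f_xu[t].T.dot(Vx) + f_xu[t].T.dot(Vxx).dot(f_c[t])
        Quu, Qux, Qu = Q_xuxu[iu, iu], Q_xuxu[iu, ix], Q_xu[iu]
        Quu_inv = np.linalg.inv(Quu)
        K[t], k[t], cov[t] = -Quu_inv.dot(Qux), -Quu_inv.dot(Qu), Quu_inv
        # value function for the preceding time step
        Vxx = Q_xuxu[ix, ix] - Qux.T.dot(Quu_inv).dot(Qux)
        Vx = Q_xu[ix] - Qux.T.dot(Quu_inv).dot(Qu)
        Vxx = 0.5 * (Vxx + Vxx.T)        # keep symmetric for numerical stability
    return K, k, cov
```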
# Appendix B. Experimental Setup Details
In this appendix, we present a detailed summary of the experimental setup for our simulated and real-world experiments.
# B.1 Simulated Experiment Details
All of the simulated experiments used the MuJoCo simulation package (Todorov et al., 2012), with simulated frictional contacts and torque motors at the joints used for actuation. Although no control or state noise was added during simulation, noise was injected naturally by the linear-Gaussian controllers. The linear-Gaussian controllers $p_i(u_t|x_t)$ were initialized to stay near the initial state $x_1$ using linear feedback based on a proportional-derivative control law for all tasks, except for the octopus arm, where $p_i(u_t|x_t)$ was initialized to be zero mean with a fixed spherical covariance, and the walker, which was initialized to track a demonstration trajectory with proportional-derivative feedback. The walker was the only task that used a demonstration, as described previously. We describe the details of each system below.
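Before the per-task details, the proportional-derivative initialization mentioned above can be sketched as follows. The gains, noise magnitude, and state layout are illustrative assumptions, not the values used in the experiments:

```python
import numpy as np

def init_pd_controller(q1, kp=10.0, kd=2.0, noise=1.0, T=100):
    """Initial linear-Gaussian controller p(u_t|x_t) = N(K x_t + k, S) that holds
    the arm near its initial joint configuration q1 with PD feedback.
    The state is assumed to be x_t = [q_t; qdot_t].
    """
    dq = q1.shape[0]
    K = np.hstack([-kp * np.eye(dq), -kd * np.eye(dq)])  # feedback on [q; qdot]
    k = kp * q1                                          # pulls q_t toward q1
    S = noise * np.eye(dq)                               # exploration covariance
    return [(K, k, S) for _ in range(T)]
```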
Peg insertion: The 2D peg insertion task has 6 state dimensions (joint angles and angular velocities) and 2 action dimensions. The 3D version of the task has 12 state dimensions, since the arm has 3 degrees of freedom at the shoulder, 1 at the elbow, and 2 at the wrist. Trials were 8 seconds in length and simulated at 100 Hz, resulting in 800 time steps per rollout. The cost function is given by
$$\ell(x_t, u_t) = \frac{1}{2} w_u \|u_t\|^2 + w_p\, \ell_{12}(p_{x_t} - p^*),$$
where $p_{x_t}$ is the position of the end effector for state $x_t$, $p^*$ is the desired end effector position at the bottom of the slot, and the norm $\ell_{12}(z)$ is given by $\frac{1}{2}\|z\|^2 + \sqrt{\alpha + \|z\|^2}$, which corresponds to the sum of an $\ell_2$ and soft $\ell_1$ norm. We use this norm to encourage the peg to precisely reach the target position at the bottom of the hole, but to also receive a larger penalty when far away. The task also works well in 2D with a simple $\ell_2$ penalty, though we found that the 3D version of the task takes longer to insert the peg all the way into the hole without the $\ell_1$-like square root term. The weights were set to $w_u = 10^{-6}$ and $w_p = 1$.
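A minimal sketch of this per-time-step cost is shown below; the smoothing constant `alpha` inside the soft-$\ell_1$ term is an assumption for illustration, as its exact value is not restated here:

```python
import numpy as np

def peg_insertion_cost(u, ee_pos, target, wu=1e-6, wp=1.0, alpha=1e-5):
    """Peg insertion cost: quadratic torque penalty plus the l2 + soft-l1 norm
    on the end-effector displacement described above."""
    d = ee_pos - target
    l12 = 0.5 * d.dot(d) + np.sqrt(alpha + d.dot(d))   # l2 plus soft l1
    return 0.5 * wu * u.dot(u) + wp * l12
```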
Octopus arm: The octopus arm consists of six four-sided chambers. Each edge of each chamber is a simulated muscle, and actions correspond to contracting or relaxing the muscle. The state space consists of the positions and velocities of the chamber vertices. The midpoint of one edge of the first chamber is fixed, resulting in a total of 25 degrees of freedom: the 2D positions of the 12 unconstrained points, and the orientation of the first edge. Including velocities, the total dimensionality of the state space is 50. The cost function depends on the activation of the muscles and distance between the tip of the arm and the target point, in the same way as for peg insertion. The weights are set to $w_u = 10^{-3}$ and $w_p = 1$.
Swimmer: The swimmer consists of 3 links and 5 degrees of freedom, including the global position and orientation which, together with the velocities, produces a 10-dimensional state space. The swimmer has 2 action dimensions corresponding to the torques between joints. The simulation applied drag on each link of the swimmer to roughly simulate a fluid, allowing it to propel itself. The rollouts were 20 seconds in length at 20 Hz, resulting in 400 time steps per rollout. The cost function for the swimmer is given by
$$\ell(x_t, u_t) = \frac{1}{2} w_u \|u_t\|^2 + \frac{1}{2} w_v (v_{x_t} - v_x^*)^2,$$
where $v_{x_t}$ is the horizontal velocity, $v_x^* = 2.0$ m/s, and the weights were $w_u = 2 \cdot 10^{-5}$ and $w_v = 1$.
Walker: The bipedal walker consists of a torso and two legs, each with three links, for a total of 9 degrees of freedom (18 state dimensions including velocities) and 6 action dimensions. The simulation ran for 5 seconds at 100 Hz, for a total of 500 time steps. The cost function is given by
$$\ell(x_t, u_t) = \frac{1}{2} w_u \|u_t\|^2 + \frac{1}{2} w_v (v_{x_t} - v_x^*)^2 + \frac{1}{2} w_p \|p_{y,x_t} - p_y^*\|^2,$$
where $v_{x_t}$ is again the horizontal velocity, $p_{y,x_t}$ is the vertical position of the root, $v_x^* = 2.1$ m/s, $p_y^* = 1.1$ m, and the weights were set to $w_u = 10^{-4}$, $w_v = 1$, and $w_p = 1$.
# B.2 Robotic Experiment Details
All of the robotic experiments were conducted on a PR2 robot. The robot was controlled at 20 Hz via direct effort control (see footnote 5), and camera images were recorded using the RGB camera on a PrimeSense Carmine sensor. The images were downsampled to $240 \times 240 \times 3$. The learned policies controlled one 7 DoF arm of the robot, while the other arm was used to move objects in the scene to automatically vary the initial conditions. The camera was kept fixed in each experiment. Each episode was 5 seconds in length. For each task, the cost function required placing the object held in the gripper at a particular location (which might require, for example, inserting a shape into a shape sorting cube). The cost was given by the following equation:
$$\ell(x_t, u_t) = w_{\ell_2}\, d_t^2 + w_{\log} \log(d_t^2 + \alpha) + w_u \|u_t\|^2,$$
where $d_t$ is the distance between three points in the space of the end-effector and their target positions (see footnote 6), and the weights are set to $w_{\ell_2} = 10^{-3}$, $w_{\log} = 1.0$, and $w_u = 10^{-2}$. The quadratic term encourages moving the end-effector toward the target when it is far, while the logarithm term encourages placing it precisely at the target location, as discussed in prior work (Levine et al., 2015). The bottle cap task used an additional cost term consisting of a quadratic penalty on the difference between the wrist angular velocity and a target velocity. For all of the tasks, we initialized all of the linear-Gaussian controllers $p_i(u_t|x_t)$ to stay near the initial state $x_1$, with a diagonal noise covariance. The covariance of the noise was chosen to be proportional to a diagonal approximation of the inverse effective mass at each joint, as provided by the manufacturer of the PR2 robot, and the feedback controller was constructed using LQR, with an approximate linear model obtained from the same diagonal inverse mass matrix. The role of this initial controller was primarily to avoid dangerous actions during the first iteration. We discuss the particular setup for each experiment below:
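For reference, a sketch of this manipulation cost is shown below; the offset `alpha` inside the logarithm is assumed small here for illustration, since its exact value is not restated in this passage:

```python
import numpy as np

def manipulation_cost(u, pts, target_pts, wl2=1e-3, wlog=1.0, wu=1e-2, alpha=1e-5):
    """Per-time-step PR2 manipulation cost: quadratic distance term, log term for
    precise placement, and a torque penalty, as described above.
    pts / target_pts hold the end-effector points and their targets."""
    d2 = np.sum((pts - target_pts) ** 2)               # squared distance d_t^2
    return wl2 * d2 + wlog * np.log(d2 + alpha) + wu * u.dot(u)
```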
Coat hanger: The coat hanger task required the robot to hang a coat hanger on a clothes rack. The coat hanger was grasped at one of two angles, about 35° apart, and the rack was positioned at three different distances from the robot during training, with differences of about 10 cm between each position. The rack was moved manually between these positions during training. A trial was considered successful if, when the coat hanger was released, it remained hanging on the rack rather than dropping to the ground.
Shape sorting cube: The shape sorting cube task required the robot to insert a red trapezoid into a trapezoidal hole on a shape sorting cube. During training, the cube was positioned at nine different positions, situated at the corners, edges, and middle of a rectangular region 16 cm × 10 cm in size. During training, the shape sorting cube was moved through the training positions by using the left arm. A trial was considered successful if the bottom face of the trapezoid was completely inside the shape sorting cube, such that if the robot were to release the trapezoid, it would fall inside the cube.
5. The PR2 robot does not provide for closed-loop torque control, but instead supports an effort control interface that directly sets feedforward motor voltages. In practice, these voltages are roughly proportional to feedforward torques, but are also affected by friction and damping.
6. Three points fully define the pose of the end-effector. For the bottle cap task, which is radially symmetric, we use only two points.
Toy hammer: The hammer task required the robot to insert the claw of a toy hammer underneath a toy plastic nail, placing the claw around the base of the nail. The hammer was grasped at one of three angles, each 22.5° apart, for a total variation of 45°, and the nail was positioned at five positions, at the corners and center of a rectangular region 10 cm × 7 cm in size. During training, the toy tool bench containing the nail was moved using the left arm. A trial was considered successful if the tip of the claw of the hammer was at least under the centerline of the nail.
Bottle cap: The bottle cap task required the robot to screw a cap onto a bottle at various positions. The bottle was located at nine different positions, situated at the corners, edges, and middle of a rectangular region 16 cm × 10 cm in size, and the left arm was used to move the bottle through the training positions. A trial was considered successful if, after completion, the cap could not be removed from the bottle simply by pulling vertically.
# References
J. A. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
B. Bakker, V. Zhumatiy, G. Gruener, and J. Schmidhuber. A robot that reinforcement-learns to identify and memorize important previous observations. In International Conference on Intelligent Robots and Systems (IROS), 2003.
G. Bekey and K. Goldberg. Neural Networks in Robotics. Springer US, 1992.
H. Benbrahim and J. A. Franklin. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems, 22:283–302, 1997.
W. Böhmer, S. Grünewälder, Y. Shen, M. Musial, and K. Obermayer. Construction of approximation spaces for reinforcement learning. Journal of Machine Learning Research, 14(1):2067–2118, January 2013.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
D. Ciresan, U. Meier, J. Masci, L. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In International Joint Conference on Artificial Intelligence (IJCAI), 2011.
D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012.
M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), 2011.
M. Deisenroth, C. Rasmussen, and D. Fox. Learning to control a low-cost manipulator using data-efficient reinforcement learning. In Robotics: Science and Systems (RSS), 2011.
M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.
J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009.
G. Endo, J. Morimoto, T. Matsubara, J. Nakanishi, and G. Cheng. Learning CPG-based biped locomotion with a policy gradient method: Application to a humanoid robot. International Journal of Robotic Research, 27(2):213–228, 2008.
I. Endres and D. Hoiem. Category independent object proposals. In European Conference on Computer Vision (ECCV), 2010.
Y. Engel, P. Szabó, and D. Volkinshtein. Learning to control an octopus arm with Gaussian process temporal difference methods. In Advances in Neural Information Processing Systems (NIPS), 2005.
B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3), 1992.
C. Finn, X. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.
K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193–202, 1980.
T. Geng, B. Porr, and F. Wörgötter. Fast biped walking with a reflexive controller and realtime policy searching. In Advances in Neural Information Processing Systems (NIPS), 2006.
R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014a.
R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014b.
V. Gullapalli. A stochastic reinforcement learning algorithm for learning real-valued functions. Neural Networks, 3(6):671–692, 1990.
V. Gullapalli. Skillful control under uncertainty via direct reinforcement learning. Reinforcement Learning and Robotics, 15(4):237–246, 1995.
X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (NIPS), 2014.
1504.00702 | 123 | 35
Levine, Finn, Darrell, and Abbeel
R. Hadsell, P. Sermanet, J. B. A. Erkan, and M. Scoffier. Learning long-range vision for autonomous off-road driving. Journal of Field Robotics, pages 120-144, 2009.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In A Field Guide to Dynamic Recurrent Neural Networks. IEEE Press, 2001.
K. J. Hunt, D. Sbarbaro, R. Żbikowski, and P. J. Gawthrop. Neural networks for control systems: A survey. Automatica, 28(6):1083-1112, November 1992.
D. Jacobson and D. Mayne. Differential Dynamic Programming. Elsevier, 1970.
M. Jägersand, O. Fuentes, and R. C. Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In International Conference on Robotics and Automation (ICRA), 1997.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
S. Jodogne and J. H. Piater. Closed-loop learning of visual control policies. Journal of Artificial Intelligence Research, 28:349-391, 2007.
R. Jonschkowski and O. Brock. State representation learning in robotics: Using prior knowledge about physical interaction. In Proceedings of Robotics: Science and Systems, 2014.
M. Kalakrishnan, L. Righetti, P. Pastor, and S. Schaal. Learning force control policies for compliant manipulation. In International Conference on Intelligent Robots and Systems (IROS), 2011.
S. M. Khansari-Zadeh and A. Billard. BM: An iterative algorithm to learn stable nonlinear dynamical systems with Gaussian mixture models. In International Conference on Robotics and Automation (ICRA), 2010.
J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on Robotics and Automation (ICRA), 2009.
J. Kober, K. Muelling, O. Kroemer, C.H. Lampert, B. Schoelkopf, and J. Peters. Movement templates for learning of hitting and batting. In International Conference on Robotics and Automation (ICRA), 2010a.
J. Kober, E. Oztop, and J. Peters. Reinforcement learning to adjust robot movements to new situations. In Robotics: Science and Systems (RSS), 2010b.
J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotic Research, 32(11):1238-1274, 2013.
N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In International Conference on Robotics and Automation (IROS), 2004.
J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez. Evolving large-scale neural networks for vision-based reinforcement learning. In Conference on Genetic and Evolutionary Computation, GECCO '13, 2013.
A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS). 2012.
T. Lampe and M. Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In International Joint Conference on Neural Networks (IJCNN), 2013.
A. Lanfranco, A. Castellanos, J. Desai, and W. Meyers. Robotic surgery: a current perspective. Annals of surgery, 239(1):14, 2004.
S. Lange, M. Riedmiller, and A. Voigtlaender. Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks, 2012.
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems (NIPS), 1989.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436-444, May 2015.
H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning (ICML), 2009.
Ian Lenz, Ross Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model predictive control. In RSS, 2015a.
Ian Lenz, Honglak Lee, and Ashutosh Saxena. Deep learning for detecting robotic grasps. IJRR, 2015b.
S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), 2014.
S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013a.
S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013b.
S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014.
S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In International Conference on Robotics and Automation (ICRA), 2015.
F. L. Lewis, A. Yesildirak, and S. Jagannathan. Neural Network Control of Robot Manipulators and Nonlinear Systems. Taylor & Francis, Inc., 1998.
W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222-229, 2004.
T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
R. Lioutikov, A. Paraschos, G. Neumann, and J. Peters. Sample-based information-theoretic stochastic optimal control. In International Conference on Robotics and Automation, 2014.
H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. In International Conference on Intelligent Robots and Systems (IROS), 2006.
W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, Victor Eruhimov, T. Foote, J. Hsu, R.B. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger. Autonomous door opening and plugging in with a personal robot. In International Conference on Robotics and Automation (ICRA), 2010.
V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. NIPS '13 Workshop on Deep Learning, 2013.
K. Mohta, V. Kumar, and K. Daniilidis. Vision based control of a quadrotor for perching on planes and lines. In International Conference on Robotics and Automation (ICRA), 2014.
I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS), 2014.
A. Y. Ng, H. J. Kim, M. I. Jordan, and S. Sastry. Inverted autonomous helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004.
R. Pascanu and Y. Bengio. On the difficulty of training recurrent neural networks. Technical Report arXiv:1211.5063, Universite de Montreal, 2012.
B. Pepik, M. Stark, P. Gehler, and B. Schiele. Teaching 3D geometry to deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2012.
J. Peters and S. Schaal. Applying the episodic natural actor-critic architecture to motor primitive learning. In European Symposium on Artificial Neural Networks (ESANN), 2007.
J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682-697, 2008.
J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.
Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. CoRR, abs/1509.06825, 2015.
D. Pomerleau. ALVINN: an autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems (NIPS), 1989.
S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627-635, 2011.
S. Ross, N. Melik-Barkhudarov, K. Shaurya Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert. Learning monocular reactive UAV control in cluttered natural environments. In International Conference on Robotics and Automation (ICRA), 2013.
R. Rubinstein and D. Kroese. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer, 2004.
S. Savarese and L. Fei-Fei. 3D generic object categorization, localization and pose estimation. In International Conference on Computer Vision (ICCV), 2007.
J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.
P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, 2003.
F. Stulp and O. Sigaud. Path integral policy improvement with covariance matrix adaptation. In International Conference on Machine Learning (ICML), 2012.
Jaeyong Sung, Seok Hyun Jin, and Ashutosh Saxena. Robobarista: Object part based transfer of manipulation trajectories from crowd-sourcing in 3d pointclouds. CoRR, abs/1504.03071, 2015.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
R. Tedrake, T. Zhang, and H. Seung. Stochastic policy gradient reinforcement learning on a simple 3d biped. In International Conference on Intelligent Robots and Systems (IROS), 2004.
E. Theodorou, J. Buchli, and S. Schaal. Reinforcement learning of motor skills in high dimensions. In International Conference on Robotics and Automation (ICRA), 2010.
E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems (NIPS), 2014.
J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 2013.
H. van Hoof, J. Peters, and G. Neumann. Learning of non-parametric control policies with high-dimensional state features. In International Conference on Artificial Intelligence and Statistics, 2015.
H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems (NIPS). 2014.
M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), 2015.
R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, May 1992.
W. J. Wilson, C. W. Williams Hulls, and G. S. Bell. Relative end-effector control using cartesian position based visual servoing. IEEE Transactions on Robotics and Automation, 12(5), 1996.
# Microsoft COCO Captions: Data Collection and Evaluation Server
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, C. Lawrence Zitnick
Abstract. In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.

# 1 INTRODUCTION

The automatic generation of captions for images is a long standing and challenging problem in artificial intelligence [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. Research in this area spans numerous domains, such as computer vision, natural language processing, and machine learning. Recently there has been a surprising resurgence of interest in this area [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], due to the renewed interest in neural network learning techniques [31], [32] and increasingly large datasets [33], [34], [35], [7], [36], [37], [38].
In this paper, we describe our process of collecting captions for the Microsoft COCO Caption dataset, and the evaluation server we have set up to evaluate performance of different algorithms. The MS COCO caption dataset contains human generated captions for images contained in the Microsoft Common Objects in COntext (COCO) dataset [38]. Similar to previous datasets [7], [36], we collect our captions using Amazon's Mechanical Turk (AMT). Upon completion of the dataset it will contain over a million captions.
Fig. 1: Example images and captions from the Microsoft COCO Caption dataset. The example captions shown are: "A large bus sitting next to a very tall building."; "The man at bat readies to swing at the pitch while the umpire looks on."; "Bunk bed with a narrow shelf sitting underneath it."; "A horse carrying a large load of hay and two people sitting on it."
When evaluating image caption generation algorithms, it is essential that a consistent evaluation protocol is used. Comparing results from different approaches can be difficult since numerous evaluation metrics exist [39], [40], [41], [42]. To further complicate matters the implementations of these metrics often differ. To help alleviate these issues, we have built an evaluation server to enable consistency in evaluation of different caption generation approaches. Using the testing data, our evaluation server evaluates captions output by different approaches using numerous automatic metrics: BLEU [39], METEOR [41], ROUGE [40] and CIDEr [42]. We hope to augment these results with human evaluations on an annual basis.
This paper is organized as follows: First we describe the data collection process. Next, we describe the caption evaluation server and the various metrics used. Human performance using these metrics are provided. Finally the annotation format and instructions for using the evaluation server are described for those who wish to submit results. We conclude by discussing future directions and known issues.

• Xinlei Chen is with Carnegie Mellon University.
• Hao Fang is with the University of Washington.
• T.Y. Lin is with Cornell NYC Tech.
• Ramakrishna Vedantam is with Virginia Tech.
• Saurabh Gupta is with the University of California, Berkeley.
• P. Dollár is with Facebook AI Research.
• C. L. Zitnick is with Microsoft Research, Redmond.
# 2 DATA COLLECTION
In this section we describe how the data is gathered for the MS COCO captions dataset. For images, we use the dataset collected by Microsoft COCO [38]. These images are split into training, validation and testing sets.
The images were gathered by searching for pairs of 80 object categories and various scene types on Flickr. The goal of the MS COCO image collection process was to gather images containing multiple objects in their natural context. Given the visual complexity of most images in the dataset, they pose an interesting and difficult challenge for image captioning.

For generating a dataset of image captions, the same training, validation and testing sets were used as in the original MS COCO dataset. Two datasets were collected. The first dataset MS COCO c5 contains five reference captions for every image in the MS COCO training, validation and testing datasets. The second dataset MS COCO c40 contains 40 reference sentences for a randomly chosen 5,000 images from the MS COCO testing dataset. MS COCO c40 was created since many automatic evaluation metrics achieve higher correlation with human judgement when given more reference sentences [42]. MS COCO c40 may be expanded to include the MS COCO validation dataset in the future.
Our process for gathering captions received significant inspiration from the work of Young et al. [36] and Hodosh et al. [7] that collected captions on Flickr images using Amazon's Mechanical Turk (AMT). Each of our captions is also generated using human subjects on AMT. Each subject was shown the user interface in Figure 2. The subjects were instructed to:

• Describe all the important parts of the scene.
• Do not start the sentences with "There is".
• Do not describe unimportant details.
• Do not describe things that might have happened in the future or past.
• Do not describe what a person might say.
• Do not give people proper names.
• The sentences should contain at least 8 words.
The number of captions gathered is 413,915 captions for 82,783 images in training, 202,520 captions for 40,504 images in validation and 379,249 captions for 40,775 images in testing including 179,189 for MS COCO c5 and 200,060 for MS COCO c40. For each testing image, we collected one additional caption to compute the scores of human performance for comparing scores of machine generated captions. The total number of collected captions is 1,026,459. We plan to collect captions for the MS COCO 2015 dataset when it is released, which should approximately double the size of the caption dataset. The AMT interface may be obtained from the MS COCO website.
# 3 CAPTION EVALUATION

In this section we describe the MS COCO caption evaluation server. Instructions for using the evaluation server are provided in Section 5. As input the evaluation server receives candidate captions for both the validation and testing datasets in the format specified in Section 5. The validation and test images are provided to the submitter.

Fig. 2: Example user interface for the caption gathering task.
However, the human generated reference sentences are only provided for the validation set. The reference sentences for the testing set are kept private to reduce the risk of overfitting.

Numerous evaluation metrics are computed on both MS COCO c5 and MS COCO c40. These include BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR and CIDEr-D. The details of these metrics are described next.
# 3.1 Tokenization and preprocessing
Both the candidate captions and the reference captions are pre-processed by the evaluation server. To tokenize the captions, we use Stanford PTBTokenizer in Stanford CoreNLP tools (version 3.4.1) [43] which mimics Penn Treebank 3 tokenization. In addition, punctuations¹ are removed from the tokenized captions.
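As a rough illustration of these two steps (and not the server's actual code, which relies on the Java-based Stanford PTBTokenizer), the sketch below tokenizes with a plain whitespace split and then drops punctuation tokens; the `preprocess` name and the punctuation set (adapted from footnote 1 to PTB-style tokens) are illustrative assumptions.

```python
# Punctuation tokens to strip after tokenization (adapted from footnote 1).
PUNCTUATIONS = {"``", "''", "`", "'", "-LRB-", "-RRB-", "-LCB-", "-RCB-",
                ".", "?", "!", ",", ":", "-", "--", "...", ";"}

def preprocess(caption):
    """Tokenize a caption and drop punctuation tokens.

    A whitespace split stands in for the Stanford PTBTokenizer here,
    so this is only an approximation of the server-side preprocessing."""
    return [tok for tok in caption.split() if tok not in PUNCTUATIONS]

print(preprocess("A man riding a wave on top of a surfboard ."))
```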
# 3.2 Evaluation metrics

Our goal is to automatically evaluate for an image $I_i$ the quality of a candidate caption $c_i$ given a set of reference captions $S_i = \{s_{i1}, \ldots, s_{im}\} \in S$. The caption sentences are represented using sets of n-grams, where an n-gram $\omega_k \in \Omega$ is a set of one or more ordered words. In this paper we explore n-grams with one to four words. No stemming is performed on the words. The number of times an n-gram $\omega_k$ occurs in a sentence $s_{ij}$ is denoted $h_k(s_{ij})$ or $h_k(c_i)$ for the candidate sentence $c_i \in C$.
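To make the notation concrete, a minimal sketch of the n-gram counts $h_k(\cdot)$ for a tokenized sentence is shown below; the helper name `ngram_counts` and the toy caption are ours, not part of the evaluation toolkit.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """h(s): how many times each n-gram (a tuple of words) occurs in a tokenized sentence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Toy example: bigram counts for a short candidate caption.
print(ngram_counts("a man riding a wave".split(), 2))
```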
# 3.3 BLEU
BLEU [39] is a popular machine translation metric that analyzes the co-occurrences of n-grams between the candidate and reference sentences. It computes a corpus-level clipped n-gram precision between sentences as follows:

$$CP_n(C, S) = \frac{\sum_i \sum_k \min\left(h_k(c_i), \max_j h_k(s_{ij})\right)}{\sum_i \sum_k h_k(c_i)} \qquad (1)$$
1. The full list of punctuations: {“, ”, ‘, ’, -LRB-, -RRB-, -LCB-, -RCB-, ., ?, !, ,, :, -, --, ..., ;}.

where $k$ indexes the set of possible n-grams of length $n$. The clipped precision metric limits the number of times an n-gram may be counted to the maximum number of times it is observed in a single reference sentence. Note that $CP_n$ is a precision score and it favors short sentences. So a brevity penalty is also used:
$$b(C, S) = \begin{cases} 1 & \text{if } l_C > l_S \\ e^{1 - l_S / l_C} & \text{if } l_C \le l_S \end{cases} \qquad (2)$$

where $l_C$ is the total length of the candidate sentences $c_i$ and $l_S$ is the corpus-level effective reference length. When there are multiple references for a candidate sentence, we choose to use the closest reference length for the brevity penalty.
The overall BLEU score is computed using a weighted geometric mean of the individual n-gram precision:
$$BLEU_N(C, S) = b(C, S) \exp\left( \sum_{n=1}^{N} w_n \log CP_n(C, S) \right) \qquad (3)$$
where $N = 1, 2, 3, 4$ and $w_n$ is typically held constant for all $n$.

BLEU has shown good performance for corpus-level comparisons over which a high number of n-gram matches exist. However, at a sentence level the n-gram matches for higher $n$ rarely occur. As a result, BLEU performs poorly when comparing individual sentences.
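A minimal sketch of Eqs. (1)-(3) is given below, assuming the captions have already been tokenized into word lists. It is not the official evaluation code; the `corpus_bleu` name and the tie-breaking used when picking the closest reference length are illustrative choices.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams (tuples of words) in a tokenized sentence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references, N=4):
    """candidates: list of token lists; references: parallel list of lists of token lists.
    Clipped n-gram precision (Eq. 1), brevity penalty (Eq. 2), geometric mean (Eq. 3)."""
    precisions = []
    for n in range(1, N + 1):
        clipped, total = 0, 0
        for cand, refs in zip(candidates, references):
            cand_counts = ngrams(cand, n)
            max_ref = Counter()          # max count of each n-gram in any single reference
            for ref in refs:
                for gram, cnt in ngrams(ref, n).items():
                    max_ref[gram] = max(max_ref[gram], cnt)
            clipped += sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
            total += sum(cand_counts.values())
        if clipped == 0:
            return 0.0                   # no matches at this order: the geometric mean collapses
        precisions.append(clipped / total)
    l_c = sum(len(c) for c in candidates)                                  # total candidate length
    l_s = sum(min((abs(len(r) - len(c)), len(r)) for r in refs)[1]         # closest reference length
              for c, refs in zip(candidates, references))
    bp = 1.0 if l_c > l_s else math.exp(1.0 - l_s / l_c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / N)         # uniform weights w_n = 1/N

# Toy usage: one candidate with two references.
cand = ["a", "man", "riding", "a", "wave"]
refs = [["a", "man", "is", "riding", "a", "wave"], ["a", "surfer", "rides", "a", "wave"]]
print(corpus_bleu([cand], [refs], N=2))
```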
# 3.4 ROUGE
ROUGE [40] is a set of evaluation metrics designed to evaluate text summarization algorithms.
1) ROUGE$_N$: The first ROUGE metric computes a simple n-gram recall over all reference summaries given a candidate sentence:

$$ROUGE_N(c_i, S_i) = \frac{\sum_j \sum_k \min(h_k(c_i), h_k(s_{ij}))}{\sum_j \sum_k h_k(s_{ij})} \qquad (4)$$

2) ROUGE$_L$: ROUGE$_L$ uses a measure based on the Longest Common Subsequence (LCS). An LCS is a set of words shared by two sentences which occur in the same order. However, unlike n-grams there may be words in between the words that create the LCS. Given the length $l(c_i, s_{ij})$ of the LCS between a pair of sentences, ROUGE$_L$ is found by computing an F-measure:

$$R_l = \max_j \frac{l(c_i, s_{ij})}{|s_{ij}|} \qquad (5)$$

$$P_l = \max_j \frac{l(c_i, s_{ij})}{|c_i|} \qquad (6)$$

$$ROUGE_L(c_i, S_i) = \frac{(1 + \beta^2) R_l P_l}{R_l + \beta^2 P_l} \qquad (7)$$
$R_l$ and $P_l$ are recall and precision of LCS. $\beta$ is usually set to favor recall ($\beta = 1.2$). Since n-grams are implicit in this measure due to the use of the LCS, they need not be specified.
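A small sketch of Eqs. (5)-(7) under the same tokenized-input assumption is shown below; `lcs_length` uses a standard dynamic-programming LCS and is not the reference ROUGE implementation.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists (dynamic programming)."""
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]

def rouge_l(candidate, refs, beta=1.2):
    """ROUGE_L F-measure (Eqs. 5-7): LCS recall and precision, maximized over references."""
    r_l = max(lcs_length(candidate, ref) / len(ref) for ref in refs)
    p_l = max(lcs_length(candidate, ref) / len(candidate) for ref in refs)
    if r_l == 0 or p_l == 0:
        return 0.0
    return (1 + beta ** 2) * r_l * p_l / (r_l + beta ** 2 * p_l)

print(rouge_l(["a", "man", "riding", "a", "wave"],
              [["a", "man", "is", "riding", "a", "wave"]]))
```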
3) ROUGE$_S$: The final ROUGE metric uses skip bi-grams instead of the LCS or n-grams. Skip bi-grams are pairs of ordered words in a sentence. However, similar to the LCS, words may be skipped between pairs of words. Thus, a sentence with 4 words would have $C_2^4 = 6$ skip bi-grams. Precision and recall are again incorporated to compute an F-measure score. If $f_k(s_{ij})$ is the skip bi-gram count for sentence $s_{ij}$, ROUGE$_S$ is computed as:
$$R_s = \max_j \frac{\sum_k \min(f_k(c_i), f_k(s_{ij}))}{\sum_k f_k(s_{ij})} \qquad (8)$$

$$P_s = \max_j \frac{\sum_k \min(f_k(c_i), f_k(s_{ij}))}{\sum_k f_k(c_i)} \qquad (9)$$

$$ROUGE_S(c_i, S_i) = \frac{(1 + \beta^2) R_s P_s}{R_s + \beta^2 P_s} \qquad (10)$$
Skip bi-grams are capable of capturing long range sentence structure. In practice, skip bi-grams are computed so that the component words occur at a distance of at most 4 from each other.
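To make the skip bi-gram definition concrete, the sketch below enumerates ordered pairs at distance at most 4 and plugs them into Eqs. (8)-(10); the `max_gap` parameter and function names are ours, degenerate one-word sentences are not handled, and this is not the reference implementation.

```python
from collections import Counter

def skip_bigrams(tokens, max_gap=4):
    """Counts of ordered word pairs whose positions are at most `max_gap` apart."""
    pairs = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_gap + 1, len(tokens))):
            pairs[(tokens[i], tokens[j])] += 1
    return pairs

def rouge_s(candidate, refs, beta=1.2, max_gap=4):
    """ROUGE_S F-measure (Eqs. 8-10): skip bi-gram recall/precision, maximized over references."""
    cand = skip_bigrams(candidate, max_gap)
    recalls, precisions = [], []
    for ref in refs:
        ref_counts = skip_bigrams(ref, max_gap)
        overlap = sum(min(cnt, ref_counts[g]) for g, cnt in cand.items())
        recalls.append(overlap / sum(ref_counts.values()))
        precisions.append(overlap / sum(cand.values()))
    r_s, p_s = max(recalls), max(precisions)
    if r_s == 0 or p_s == 0:
        return 0.0
    return (1 + beta ** 2) * r_s * p_s / (r_s + beta ** 2 * p_s)
```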
# 3.5 METEOR
METEOR [41] is calculated by generating an alignment between the words in the candidate and reference sentences, with an aim of 1:1 correspondence. This alignment is computed while minimizing the number of chunks, $ch$, of contiguous and identically ordered tokens in the sentence pair. The alignment is based on exact token matching, followed by WordNet synonyms [44], stemmed tokens and then paraphrases. Given a set of alignments, $m$, the METEOR score is the harmonic mean of precision $P_m$ and recall $R_m$ between the best scoring reference and candidate:
$$Pen = \gamma \left( \frac{ch}{m} \right)^{\theta} \qquad (11)$$

$$F_{mean} = \frac{P_m R_m}{\alpha P_m + (1 - \alpha) R_m} \qquad (12)$$

$$P_m = \frac{|m|}{\sum_k h_k(c_i)} \qquad (13)$$

$$R_m = \frac{|m|}{\sum_k h_k(s_{ij})} \qquad (14)$$

$$METEOR = (1 - Pen) F_{mean} \qquad (15)$$
Thus, the final METEOR score includes a penalty $Pen$ based on chunkiness of resolved matches and a harmonic mean term that gives the quality of the resolved matches. The default parameters $\alpha$, $\gamma$ and $\theta$ are used for this evaluation. Note that similar to BLEU, statistics of precision and recall are first aggregated over the entire corpus, which are then combined to give the corpus-level METEOR score.
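The alignment itself (exact, synonym, stem and paraphrase matching) is the involved part of METEOR and is not reproduced here; the sketch below only combines an already-computed alignment according to Eqs. (11)-(15). The numeric defaults for alpha, gamma and theta are placeholders of ours, not the values used by the evaluation server.

```python
def meteor_score(num_matches, num_chunks, cand_len, ref_len,
                 alpha=0.9, gamma=0.5, theta=3.0):
    """Combine an alignment into the METEOR score (Eqs. 11-15).

    num_matches: |m|, the number of aligned unigrams;
    num_chunks:  ch, the number of contiguous, identically ordered chunks;
    cand_len / ref_len: unigram counts of the candidate and the chosen reference."""
    if num_matches == 0:
        return 0.0
    p_m = num_matches / cand_len                               # Eq. 13
    r_m = num_matches / ref_len                                # Eq. 14
    f_mean = p_m * r_m / (alpha * p_m + (1 - alpha) * r_m)     # Eq. 12
    penalty = gamma * (num_chunks / num_matches) ** theta      # Eq. 11
    return (1 - penalty) * f_mean                              # Eq. 15

print(meteor_score(num_matches=4, num_chunks=2, cand_len=5, ref_len=6))
```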
# 3.6 CIDEr
The CIDEr metric [42] measures consensus in image captions by performing a Term Frequency Inverse Document Frequency (TF-IDF) weighting for each n-gram. The number of times an n-gram $\omega_k$ occurs in a reference sentence $s_{ij}$ is denoted by $h_k(s_{ij})$ or $h_k(c_i)$ for the candidate sentence $c_i$. CIDEr computes the TF-IDF weighting $g_k(s_{ij})$ for each n-gram $\omega_k$ using:
$$g_k(s_{ij}) = \frac{h_k(s_{ij})}{\sum_{\omega_l \in \Omega} h_l(s_{ij})} \log \left( \frac{|I|}{\sum_{I_p \in I} \min\left(1, \sum_q h_k(s_{pq})\right)} \right) \qquad (16)$$
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided. | http://arxiv.org/pdf/1504.00325 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick | cs.CV, cs.CL | arXiv admin note: text overlap with arXiv:1411.4952 | null | cs.CV | 20150401 | 20150403 | [
{
"id": "1502.03671"
},
{
"id": "1501.02598"
}
] |
1504.00325 | 14 | g_k(s_{ij}) = \frac{h_k(s_{ij})}{\sum_{\omega_l \in \Omega} h_l(s_{ij})} \log \left( \frac{|I|}{\sum_{I_p \in I} \min(1, \sum_q h_k(s_{pq}))} \right) \quad (16)
where Ω is the vocabulary of all n-grams and I is the set of all images in the dataset. The first term measures the TF of each n-gram ω_k, and the second term measures the rarity of ω_k using its IDF. Intuitively, TF places higher weight on n-grams that frequently occur in the reference sentences describing an image, while IDF reduces the weight of n-grams that commonly occur across all descriptions. That is, the IDF provides a measure of word saliency by discounting popular words that are likely to be less visually informative. The IDF is computed using the logarithm of the number of images in the dataset, |I|, divided by the number of images for which ω_k occurs in any of its reference sentences.
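As a rough illustration of eqn. 16, the sketch below computes the TF-IDF weight g_k for every n-gram of one reference sentence. The helper name `tfidf_vector`, its arguments, and the data layout (sentences pre-tokenized into n-gram tuples and grouped per image) are assumptions made for this example, not the evaluation server's actual code.

```python
import math
from collections import Counter

def tfidf_vector(sentence_ngrams, all_images_ngrams):
    """Sketch of g_k(s_ij) from eqn. 16 for one reference sentence.

    sentence_ngrams:   list of n-grams (tuples of tokens) for the sentence s_ij.
    all_images_ngrams: list over the dataset I; each entry is the list of that
                       image's reference sentences, each itself a list of n-grams.
    Returns a dict mapping n-gram -> TF-IDF weight.
    """
    counts = Counter(sentence_ngrams)            # h_k(s_ij)
    total = sum(counts.values())                 # sum_l h_l(s_ij)
    num_images = len(all_images_ngrams)          # |I|
    g = {}
    for ngram, h_k in counts.items():
        # document frequency: images where the n-gram occurs in any reference,
        # i.e. sum_p min(1, sum_q h_k(s_pq))
        df = sum(1 for refs in all_images_ngrams
                 if any(ngram in ref for ref in refs))
        tf = h_k / total
        idf = math.log(num_images / df)          # df >= 1 when s_ij's image is included
        g[ngram] = tf * idf
    return g
```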
The CIDEr_n score for n-grams of length n is computed using the average cosine similarity between the candidate sentence and the reference sentences, which accounts for both precision and recall:
CIDEr_n(c_i, S_i) = \frac{1}{m} \sum_j \frac{g^n(c_i) \cdot g^n(s_{ij})}{\|g^n(c_i)\| \, \|g^n(s_{ij})\|} \quad (17) | 1504.00325#14 | Microsoft COCO Captions: Data Collection and Evaluation Server | In this paper we describe the Microsoft COCO Caption dataset and evaluation
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided. | http://arxiv.org/pdf/1504.00325 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick | cs.CV, cs.CL | arXiv admin note: text overlap with arXiv:1411.4952 | null | cs.CV | 20150401 | 20150403 | [
{
"id": "1502.03671"
},
{
"id": "1501.02598"
}
] |
1504.00325 | 15 | CIDEr_n(c_i, S_i) = \frac{1}{m} \sum_j \frac{g^n(c_i) \cdot g^n(s_{ij})}{\|g^n(c_i)\| \, \|g^n(s_{ij})\|} \quad (17)
where g^n(c_i) is the vector formed by the g_k(c_i) corresponding to all n-grams of length n, and \|g^n(c_i)\| is the magnitude of the vector g^n(c_i). Similarly for g^n(s_{ij}).
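The sketch below evaluates eqn. 17 for a single n, assuming each g^n vector is represented as a dict from n-gram to TF-IDF weight (for instance, as produced by a helper like the `tfidf_vector` sketch above); `cider_n` and its arguments are illustrative names, not part of the paper.

```python
import math

def cider_n(cand_vec, ref_vecs):
    """Sketch of CIDEr_n (eqn. 17): mean cosine similarity between the
    candidate's TF-IDF vector and the m reference vectors."""
    def dot(u, v):
        return sum(w * v[k] for k, w in u.items() if k in v)

    def norm(u):
        return math.sqrt(sum(w * w for w in u.values()))

    total = 0.0
    for ref_vec in ref_vecs:
        denom = norm(cand_vec) * norm(ref_vec)
        if denom > 0:
            total += dot(cand_vec, ref_vec) / denom
    return total / len(ref_vecs)
```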
Higher-order (longer) n-grams are used to capture grammatical properties as well as richer semantics. Scores from n-grams of varying lengths are combined as follows:
CIDEr(c_i, S_i) = \sum_{n=1}^{N} w_n CIDEr_n(c_i, S_i) \quad (18)
Uniform weights w_n = 1/N are used, with N = 4.
CIDEr-D is a modification to CIDEr that makes it more robust to gaming. Gaming refers to the phenomenon where a sentence that is poorly judged by humans tends to score highly with an automated metric. To defend the CIDEr metric against gaming effects, [42] add clipping and a length-based Gaussian penalty to the CIDEr metric described above. This results in the following equations for CIDEr-D: | 1504.00325#15 | Microsoft COCO Captions: Data Collection and Evaluation Server | In this paper we describe the Microsoft COCO Caption dataset and evaluation
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided. | http://arxiv.org/pdf/1504.00325 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick | cs.CV, cs.CL | arXiv admin note: text overlap with arXiv:1411.4952 | null | cs.CV | 20150401 | 20150403 | [
{
"id": "1502.03671"
},
{
"id": "1501.02598"
}
] |
1504.00325 | 16 | TABLE 1: Human Agreement for Image Captioning: Various metrics when benchmarking a human generated caption against ground truth captions.
| Metric Name | MS COCO c5 | MS COCO c40 |
|---|---|---|
| BLEU 1 | 0.663 | 0.880 |
| BLEU 2 | 0.469 | 0.744 |
| BLEU 3 | 0.321 | 0.603 |
| BLEU 4 | 0.217 | 0.471 |
| METEOR | 0.252 | 0.335 |
| ROUGE_L | 0.484 | 0.626 |
| CIDEr-D | 0.854 | 0.910 |
CIDEr-D_n(c_i, S_i) = \frac{10}{m} \sum_j e^{\frac{-(l(c_i) - l(s_{ij}))^2}{2\sigma^2}} \frac{\min(g^n(c_i), g^n(s_{ij})) \cdot g^n(s_{ij})}{\|g^n(c_i)\| \, \|g^n(s_{ij})\|} \quad (19)
where l(c_i) and l(s_{ij}) denote the lengths of the candidate and reference sentences respectively, and σ = 6 is used. A factor of 10 is used in the numerator to make the CIDEr-D scores numerically similar to the other metrics.
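The following sketch puts eqn. 19 into code, together with the uniform combination over n-gram lengths (analogous to eqn. 18 and corresponding to eqn. 20 below). Each g^n vector is again assumed to be a dict from n-gram to TF-IDF weight; σ = 6 and the factor of 10 follow the text above, while the function names and data layout are assumptions for the example.

```python
import math

def cider_d_n(cand_vec, ref_vecs, cand_len, ref_lens, sigma=6.0):
    """Sketch of CIDEr-D_n (eqn. 19) for one n-gram length."""
    def norm(u):
        return math.sqrt(sum(w * w for w in u.values()))

    total = 0.0
    for ref_vec, ref_len in zip(ref_vecs, ref_lens):
        # Gaussian penalty on the length difference between candidate and reference
        penalty = math.exp(-((cand_len - ref_len) ** 2) / (2 * sigma ** 2))
        # clipped numerator min(g^n(c_i), g^n(s_ij)) . g^n(s_ij): repeating
        # high-scoring words in the candidate cannot inflate the score
        num = sum(min(w, ref_vec.get(k, 0.0)) * ref_vec.get(k, 0.0)
                  for k, w in cand_vec.items())
        denom = norm(cand_vec) * norm(ref_vec)
        if denom > 0:
            total += penalty * num / denom
    return 10.0 * total / len(ref_vecs)

def cider_d(cand_vecs_by_n, ref_vecs_by_n, cand_len, ref_lens, N=4):
    """Uniform combination over n = 1..N with w_n = 1/N."""
    return sum(cider_d_n(cand_vecs_by_n[n], ref_vecs_by_n[n], cand_len, ref_lens)
               for n in range(1, N + 1)) / N
```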
The final CIDEr-D metric is computed in a similar manner to CIDEr (analogous to eqn. 18): | 1504.00325#16 | Microsoft COCO Captions: Data Collection and Evaluation Server | In this paper we describe the Microsoft COCO Caption dataset and evaluation
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided. | http://arxiv.org/pdf/1504.00325 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick | cs.CV, cs.CL | arXiv admin note: text overlap with arXiv:1411.4952 | null | cs.CV | 20150401 | 20150403 | [
{
"id": "1502.03671"
},
{
"id": "1501.02598"
}
] |
1504.00325 | 17 | The final CIDEr-D metric is computed in a similar manner to CIDEr (analogous to eqn. 18):
CIDEr-D(c_i, S_i) = \sum_{n=1}^{N} w_n CIDEr-D_n(c_i, S_i) \quad (20)
Note that, just like the BLEU and ROUGE metrics, CIDEr-D does not use stemming. We adopt the CIDEr-D metric for the evaluation server.
# 4 HUMAN PERFORMANCE
In this section, we study agreement among humans on this task. We start by analyzing inter-human agreement for image captioning (Section 4.1) and then analyze human agreement for the word prediction sub-task, providing a simple model that explains human agreement for this sub-task (Section 4.2).
# 4.1 Human Agreement for Image Captioning
When examining human agreement on captions, it becomes clear that there are many equivalent ways to say essentially the same thing. We quantify this by conducting the following experiment: we collect one additional human caption for each image in the test set and treat this caption as the prediction. Using the MS COCO caption evaluation server, we compute the various metrics. The results are tabulated in Table 1.
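A minimal sketch of this protocol is shown below, assuming captions are already grouped per image and that `corpus_score_fn` stands in for whichever corpus-level metric implementation is used (BLEU, METEOR, ROUGE_L or CIDEr-D); both names and the data layout are hypothetical, not part of the evaluation server's API.

```python
def human_agreement(captions_by_image, corpus_score_fn):
    """Hold out one human caption per image as the 'prediction' and score it
    against the remaining human captions for that image."""
    candidates, references = [], []
    for captions in captions_by_image.values():
        *refs, held_out = captions      # the extra caption plays the role of a system output
        candidates.append(held_out)
        references.append(refs)
    # corpus_score_fn aggregates statistics over the whole corpus, as the
    # evaluation server does, and returns a single score
    return corpus_score_fn(candidates, references)
```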
# 4.2 Human Agreement for Word Prediction | 1504.00325#17 | Microsoft COCO Captions: Data Collection and Evaluation Server | In this paper we describe the Microsoft COCO Caption dataset and evaluation
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided. | http://arxiv.org/pdf/1504.00325 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick | cs.CV, cs.CL | arXiv admin note: text overlap with arXiv:1411.4952 | null | cs.CV | 20150401 | 20150403 | [
{
"id": "1502.03671"
},
{
"id": "1501.02598"
}
] |