Dataset columns (name: type, min–max):
- doi: string, length 10–10
- chunk-id: int64, 0–936
- chunk: string, length 401–2.02k
- id: string, length 12–14
- title: string, length 8–162
- summary: string, length 228–1.92k
- source: string, length 31–31
- authors: string, length 7–6.97k
- categories: string, length 5–107
- comment: string, length 4–398
- journal_ref: string, length 8–194
- primary_category: string, length 5–17
- published: string, length 8–8
- updated: string, length 8–8
- references: list
1506.02626
18
Table 4: For AlexNet, pruning reduces the number of weights by 9× and computation by 3×.

| Layer | Weights | FLOP | Act% | Weights% | FLOP% |
|-------|---------|------|------|----------|-------|
| conv1 | 35K | 211M | 88% | 84% | 84% |
| conv2 | 307K | 448M | 52% | 38% | 33% |
| conv3 | 885K | 299M | 37% | 35% | 18% |
| conv4 | 663K | 224M | 40% | 37% | 14% |
| conv5 | 442K | 150M | 34% | 37% | 14% |
| fc1 | 38M | 75M | 36% | 9% | 3% |
| fc2 | 17M | 34M | 40% | 9% | 3% |
| fc3 | 4M | 8M | 100% | 25% | 10% |
| Total | 61M | 1.5B | 54% | 11% | 30% |

Table 5: For VGG-16, pruning reduces the number of weights by 12× and computation by 5×.

# 4.3 VGG-16 on ImageNet
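As a quick arithmetic check on Table 4 above, a short Python sketch (values transcribed from the table) reproduces the headline 9× weight and roughly 3× FLOP reductions:

```python
# Per-layer numbers transcribed from Table 4: weight counts, the fraction
# of weights kept after pruning (Weights%), FLOPs, and FLOPs kept (FLOP%).
weights = {"conv1": 35e3, "conv2": 307e3, "conv3": 885e3, "conv4": 663e3,
           "conv5": 442e3, "fc1": 38e6, "fc2": 17e6, "fc3": 4e6}
w_kept = {"conv1": .84, "conv2": .38, "conv3": .35, "conv4": .37,
          "conv5": .37, "fc1": .09, "fc2": .09, "fc3": .25}
flops = {"conv1": 211e6, "conv2": 448e6, "conv3": 299e6, "conv4": 224e6,
         "conv5": 150e6, "fc1": 75e6, "fc2": 34e6, "fc3": 8e6}
f_kept = {"conv1": .84, "conv2": .33, "conv3": .18, "conv4": .14,
          "conv5": .14, "fc1": .03, "fc2": .03, "fc3": .10}

w_ratio = sum(weights.values()) / sum(weights[k] * w_kept[k] for k in weights)
f_ratio = sum(flops.values()) / sum(flops[k] * f_kept[k] for k in flops)
print(f"weights: {w_ratio:.1f}x  flops: {f_ratio:.1f}x")  # ~9.0x and ~3.3x
```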
1506.02626#18
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
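The prune step in the three-step method is magnitude-based: connections whose weights fall below a threshold are removed. A minimal NumPy sketch of one prune step; the quality-parameter-times-standard-deviation threshold is one reading of the authors' approach, and the toy weights stand in for a trained layer:

```python
import numpy as np

def prune_step(w, quality=1.0):
    """One magnitude-based prune step: drop connections whose absolute
    weight falls below a threshold derived from the layer's statistics
    (here, a quality parameter times the weight standard deviation)."""
    mask = np.abs(w) > quality * w.std()
    return w * mask, mask

w = np.random.randn(512, 512)            # toy stand-in for a trained layer
w_pruned, mask = prune_step(w, quality=1.0)
print(f"kept {mask.mean():.1%} of connections")

# The retrain step then fine-tunes only the surviving weights, e.g.
#   w_pruned -= lr * grad * mask
# so pruned connections stay at zero.
```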
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
19
where equality holds when λ = 1.

# 4 INTERPRETATION AS REWARD SHAPING

In this section, we discuss how one can interpret λ as an extra discount factor applied after performing a reward shaping transformation on the MDP. We also introduce the notion of a response function to help understand the bias introduced by γ and λ.

Reward shaping (Ng et al., 1999) refers to the following transformation of the reward function of an MDP: let Φ : S → R be an arbitrary scalar-valued function on state space, and define the transformed reward function r̃ by

r̃(s, a, s′) = r(s, a, s′) + γΦ(s′) − Φ(s),   (20)

which in turn defines a transformed MDP. This transformation leaves the discounted advantage function Aπ,γ unchanged for any policy π. To see this, consider the discounted sum of rewards of a trajectory starting with state s_t:

Σ_{l=0}^∞ γ^l r̃(s_{t+l}, a_{t+l}, s_{t+l+1}) = Σ_{l=0}^∞ γ^l r(s_{t+l}, a_{t+l}, s_{t+l+1}) − Φ(s_t).   (21)
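Equation (21) follows from a telescoping argument; spelled out (assuming γ^l Φ(s_{t+l}) → 0 as l → ∞):

```latex
\begin{aligned}
\sum_{l=0}^{\infty} \gamma^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1})
&= \sum_{l=0}^{\infty} \gamma^l \bigl( r(s_{t+l}, a_{t+l}, s_{t+l+1})
     + \gamma \Phi(s_{t+l+1}) - \Phi(s_{t+l}) \bigr) \\
&= \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}, s_{t+l+1})
     + \sum_{l=0}^{\infty} \bigl( \gamma^{l+1} \Phi(s_{t+l+1}) - \gamma^{l} \Phi(s_{t+l}) \bigr) \\
&= \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}, s_{t+l+1}) - \Phi(s_t).
\end{aligned}
```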
1506.02438#19
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
19
# 4.3 VGG-16 on ImageNet

With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 [27], on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively pruned both convolutional and fully-connected layers to realize a significant reduction in the number of weights, shown in Table 5. We used five iterations of pruning and retraining. The VGG-16 results are, like those for AlexNet, very promising. The network as a whole has been reduced to 7.5% of its original size (13× smaller). In particular, note that the two largest fully-connected layers can each be pruned to less than 4% of their original size. This reduction is critical for real-time image processing, where there is little reuse of fully connected layers across images (unlike batch processing during training).

# 5 Discussion
1506.02626#19
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
20
Letting Q̃π,γ, Ṽπ,γ, Ãπ,γ be the value and advantage functions of the transformed MDP, one obtains from the definitions of these quantities that

Q̃π,γ(s, a) = Qπ,γ(s, a) − Φ(s)   (22)
Ṽπ,γ(s) = Vπ,γ(s) − Φ(s)   (23)
Ãπ,γ(s, a) = (Qπ,γ(s, a) − Φ(s)) − (Vπ,γ(s) − Φ(s)) = Aπ,γ(s, a).   (24)

Note that if Φ happens to be the state-value function Vπ,γ from the original MDP, then the transformed MDP has the interesting property that Ṽπ,γ(s) is zero at every state. Note that (Ng et al., 1999) showed that the reward shaping transformation leaves the policy gradient and optimal policy unchanged when our objective is to maximize the discounted sum of rewards Σ_{t=0}^∞ γ^t r(s_t, a_t, s_{t+1}). In contrast, this paper is concerned with maximizing the undiscounted sum of rewards, where the discount γ is used as a variance-reduction parameter.
1506.02438#20
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
20
# 5 Discussion

The trade-off curve between accuracy and number of parameters is shown in Figure 5. The more parameters are pruned away, the more accuracy degrades. We experimented with L1 and L2 regularization, with and without retraining, together with iterative pruning, to give five trade-off lines. Comparing solid and dashed lines, the importance of retraining is clear: without retraining, accuracy begins dropping much sooner, with 1/3 of the original connections rather than with 1/10 of the original connections. It is interesting to see that we have the "free lunch" of reducing the connections by 2× without losing accuracy even without retraining; with retraining we are able to reduce connections by 9×.

(Figure 5: accuracy loss versus percentage of parameters pruned away, for L1 and L2 regularization with and without retraining, and for L2 regularization with iterative prune and retrain.)
1506.02626#20
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
21
Having reviewed the idea of reward shaping, let us consider how we could use it to get a policy gradient estimate. The most natural approach is to construct policy gradient estimators that use discounted sums of shaped rewards r̃. However, Equation (21) shows that we obtain the discounted sum of the original MDP's rewards r minus a baseline term. Next, let's consider using a "steeper" discount γλ, where 0 ≤ λ ≤ 1. It's easy to see that the shaped reward r̃ equals the Bellman residual term δV, introduced in Section 3, where we set Φ = V. Letting Φ = V, we see that

Σ_{l=0}^∞ (γλ)^l r̃(s_{t+l}, a_{t+l}, s_{t+l+1}) = Σ_{l=0}^∞ (γλ)^l δ^V_{t+l} = Â_t^{GAE(γ,λ)}.   (25)

Hence, by considering the γλ-discounted sum of shaped rewards, we exactly obtain the generalized advantage estimators from Section 3. As shown previously, λ = 1 gives an unbiased estimate of gγ, whereas λ < 1 gives a biased estimate. To further analyze the effect of this shaping transformation and parameters γ and λ, it will be useful to introduce the notion of a response function χ, which we define as follows:
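A minimal NumPy sketch of the estimator in Equation (25) for a finite trajectory, using the standard backward recurrence Â_t = δ_t^V + γλ Â_{t+1} (an equivalent O(T) form; the bootstrap value for the final state is an implementation convention):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Truncated form of Equation (25): A_t = sum_l (gamma*lam)^l delta_{t+l},
    with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t).

    `values` holds V(s_0), ..., V(s_T): one more entry than `rewards`,
    so the final value bootstraps the tail of the trajectory."""
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.zeros_like(deltas)
    acc = 0.0
    for t in reversed(range(len(deltas))):   # backward recurrence
        acc = deltas[t] + gamma * lam * acc
        adv[t] = acc
    return adv

rewards = np.random.randn(5)
values = np.random.randn(6)
print(gae_advantages(rewards, values))
```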
1506.02438#21
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
21
Figure 5: Trade-off curve for parameter reduction and loss in top-5 accuracy. L1 regularization performs better than L2 at learning the connections without retraining, while L2 regularization performs better than L1 at retraining. Iterative pruning gives the best result.

(Figure 6 panels: accuracy loss versus percentage of parameters pruned, per layer; conv1–conv5 in the left panel, fc layers in the right panel.)

Figure 6: Pruning sensitivity for CONV layer (left) and FC layer (right) of AlexNet.
1506.02626#21
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
22
χ(l; s_t, a_t) = E[r_{t+l} | s_t, a_t] − E[r_{t+l} | s_t].   (26)

Note that Aπ,γ(s, a) = Σ_{l=0}^∞ γ^l χ(l; s, a), hence the response function decomposes the advantage function across timesteps. The response function lets us quantify the temporal credit assignment problem: long-range dependencies between actions and rewards correspond to nonzero values of the response function for l ≫ 0. Next, let us revisit the discount factor γ and the approximation we are making by using Aπ,γ rather than Aπ,1. The discounted policy gradient estimator from Equation (6) has a sum of terms of the form

∇θ log πθ(a_t | s_t) Aπ,γ(s_t, a_t) = ∇θ log πθ(a_t | s_t) Σ_{l=0}^∞ γ^l χ(l; s_t, a_t).   (27)
1506.02438#22
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
22
Figure 6: Pruning sensitivity for CONV layer (left) and FC layer (right) of AlexNet. L1 regularization gives better accuracy than L2 directly after pruning (dotted blue and purple lines) since it pushes more parameters closer to zero. However, comparing the yellow and green lines shows that L2 outperforms L1 after retraining, since there is no benefit to further pushing values towards zero. One extension is to use L1 regularization for pruning and then L2 for retraining, but this did not beat simply using L2 for both phases. Parameters from one mode do not adapt well to the other. The biggest gain comes from iterative pruning (solid red line with solid circles). Here we take the pruned and retrained network (solid green line with circles) and prune and retrain it again. The leftmost dot on this curve corresponds to the point on the green line at 80% (5× pruning) pruned to 8×. There’s no accuracy loss at 9×. Not until 10× does the accuracy begin to drop sharply. Two green points achieve slightly better accuracy than the original model. We believe this accuracy improvement is due to pruning finding the right capacity of the network and hence reducing overfitting.
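A self-contained sketch of the iterative prune-and-retrain loop described above; the rising sparsity schedule and the toy layer are illustrative stand-ins (the paper runs this over AlexNet layers with real retraining between rounds):

```python
import numpy as np

def prune_mask(w, sparsity):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    return np.abs(w) > np.quantile(np.abs(w), sparsity)

w = np.random.randn(256, 256)                 # toy trained layer
for sparsity in (0.5, 0.7, 0.8, 0.875, 0.9):  # prune a bit more each round
    mask = prune_mask(w, sparsity)
    w = w * mask                              # remove pruned connections
    # "Retrain" would go here: gradient steps masked so pruned weights
    # stay at zero, e.g. w -= lr * grad * mask.
    print(f"round at sparsity {sparsity}: kept {mask.mean():.3f}")
```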
1506.02626#22
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
23
Using a discount γ < 1 corresponds to dropping the terms with l ≫ 1/(1 − γ). Thus, the error introduced by this approximation will be small if χ rapidly decays as l increases, i.e., if the effect of an action on rewards is "forgotten" after ≈ 1/(1 − γ) timesteps. If the reward function r̃ were obtained using Φ = Vπ,γ, we would have E[r̃_{t+l} | s_t, a_t] = E[r̃_{t+l} | s_t] = 0 for l > 0, i.e., the response function would only be nonzero at l = 0. Therefore, this shaping transformation would turn a temporally extended response into an immediate response. Given that Vπ,γ completely reduces the temporal spread of the response function, we can hope that a good approximation V ≈ Vπ,γ partially reduces it. This observation suggests an interpretation of Equation (16): reshape the rewards using V to shrink the temporal extent of the response function, and then introduce a "steeper" discount γλ to cut off the noise arising from long delays, i.e., ignore terms ∇θ log πθ(a_t | s_t) δ^V_{t+l} where l ≫ 1/(1 − γλ).

# 5 VALUE FUNCTION ESTIMATION
1506.02438#23
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
23
Both CONV and FC layers can be pruned, but with different sensitivity. Figure 6 shows the sensitivity of each layer to network pruning. The figure shows how accuracy drops as parameters are pruned on a layer-by-layer basis. The CONV layers (on the left) are more sensitive to pruning than the fully connected layers (on the right). The first convolutional layer, which interacts with the input image directly, is most sensitive to pruning. We suspect this sensitivity is due to the input layer having only 3 channels and thus less redundancy than the other convolutional layers. We used the sensitivity results to find each layer's threshold: for example, the smallest threshold was applied to the most sensitive layer, which is the first convolutional layer.

Storing the pruned layers as sparse matrices has a storage overhead of only 15.6%. Storing relative rather than absolute indices reduces the space taken by the FC layer indices to 5 bits. Similarly, CONV layer indices can be represented with only 8 bits.
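A sketch of the relative-index idea just described; the overflow handling via padding zero entries is an assumption borrowed from common practice rather than something spelled out here:

```python
import numpy as np

def encode_relative(flat_w, index_bits=5):
    """Store nonzero weights with the gap to the previous nonzero instead
    of an absolute position, so each index fits in `index_bits` bits.
    When a gap overflows the representable range, a padding zero entry is
    emitted and counting resumes (assumed convention)."""
    max_gap = (1 << index_bits) - 1        # 31 for 5-bit indices
    gaps, vals, prev = [], [], -1
    for i in np.flatnonzero(flat_w):
        gap = i - prev
        while gap > max_gap:               # emit filler zeros for long runs
            gaps.append(max_gap)
            vals.append(0.0)
            gap -= max_gap
        gaps.append(gap)
        vals.append(float(flat_w[i]))
        prev = i
    return np.array(gaps, dtype=np.uint8), np.array(vals)

w = np.random.randn(1000) * (np.random.rand(1000) > 0.91)  # ~91% sparse layer
gaps, vals = encode_relative(w, index_bits=5)
print(gaps.max() <= 31, len(vals))
```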
1506.02626#23
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
24
# 5 VALUE FUNCTION ESTIMATION

A variety of different methods can be used to estimate the value function (see, e.g., Bertsekas (2012)). When using a nonlinear function approximator to represent the value function, the simplest approach is to solve a nonlinear regression problem:

minimize_φ Σ_{n=1}^N ‖Vφ(s_n) − V̂_n‖²,   (28)

where V̂_n = Σ_{l=0}^∞ γ^l r_{n+l} is the discounted sum of rewards, and n indexes over all timesteps in a batch of trajectories. This is sometimes called the Monte Carlo or TD(1) approach for estimating the value function (Sutton & Barto, 1998). For the experiments in this work, we used a trust region method to optimize the value function in each iteration of a batch optimization procedure. The trust region helps us to avoid overfitting to the most recent batch of data. To formulate the trust region problem, we first compute σ² = (1/N) Σ_{n=1}^N ‖V_{φ_old}(s_n) − V̂_n‖², where φ_old is the parameter vector before optimization. Then we solve the following constrained optimization problem:

minimize_φ Σ_{n=1}^N ‖Vφ(s_n) − V̂_n‖²
subject to (1/N) Σ_{n=1}^N ‖Vφ(s_n) − V_{φ_old}(s_n)‖² / (2σ²) ≤ ε.   (29)
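A small Python sketch of the regression targets and the unconstrained objective of Equation (28); the zero predictor at the end is only a placeholder for an actual value network:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo / TD(1) regression targets: V_hat_n = sum_l gamma^l r_{n+l}."""
    out = np.zeros(len(rewards))
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = rewards[t] + gamma * acc
        out[t] = acc
    return out

def value_objective(v_pred, targets):
    """Objective of Equation (28); the trust region of Equation (29) would
    additionally bound the mean squared change in predictions, normalized
    by 2*sigma^2 from the previous iterate."""
    return np.sum((v_pred - targets) ** 2)

rewards = np.random.randn(100)
targets = discounted_returns(rewards)
print(value_objective(np.zeros(100), targets))  # zero predictor as placeholder
```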
1506.02438#24
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
24
Table 6: Comparison with other model reduction methods on AlexNet. Data-free pruning [28] saved only 1.5× in parameters, with a substantial loss of accuracy. Deep Fried Convnets [29] worked on fully connected layers only and reduced the parameters by less than 4×. [30] reduced the parameters by 4× with inferior accuracy. Naively cutting the layer size saves parameters but suffers from a 4% loss of accuracy. [12] exploited the linear structure of convnets and compressed each layer individually, where model compression on a single layer incurred a 0.9% accuracy penalty with biclustering + SVD.
1506.02626#24
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
25
This constraint is equivalent to constraining the average KL divergence between the previous value function and the new value function to be smaller than ε, where the value function is taken to parameterize a conditional Gaussian distribution with mean Vφ(s) and variance σ². We compute an approximate solution to the trust region problem using the conjugate gradient algorithm (Wright & Nocedal, 1999). Specifically, we are solving the quadratic program

minimize_φ g^T (φ − φ_old)
subject to (1/2) (φ − φ_old)^T H (φ − φ_old) ≤ ε.   (30)
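To make the procedure concrete, here is a self-contained Python sketch of the conjugate gradient solve and step rescaling (elaborated in the next chunk); the Gauss-Newton matrix H = (1/N) Σ_n j_n j_nᵀ is applied implicitly through matrix-vector products, and all sizes and data are illustrative:

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=10, tol=1e-10):
    """Solve H x = b using only matrix-vector products with H."""
    x = np.zeros_like(b)
    r = b.copy()
    p = b.copy()
    rr = r @ r
    for _ in range(iters):
        Hp = matvec(p)
        alpha = rr / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if rr_new < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

N, d, eps = 64, 8, 0.01
J = np.random.randn(N, d)                   # rows are per-sample gradients j_n
g = np.random.randn(d)                      # gradient of the objective
matvec = lambda v: J.T @ (J @ v) / N        # v -> H v, with H = (1/N) J^T J
s = conjugate_gradient(matvec, -g)          # step direction s ~ -H^{-1} g
alpha = np.sqrt(2 * eps / (s @ matvec(s)))  # rescale so (1/2) s^T H s = eps
step = alpha * s                            # phi = phi_old + step
print(step)
```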
1506.02438#25
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
25
| Network | Top-1 Error | Top-5 Error | Parameters | Compression Rate |
|---------|-------------|-------------|------------|------------------|
| Baseline Caffemodel [26] | 42.78% | 19.73% | 61.0M | 1× |
| Data-free pruning [28] | 44.40% | – | 39.6M | 1.5× |
| Fastfood-32-AD [29] | 41.93% | – | 32.8M | 2× |
| Fastfood-16-AD [29] | 42.90% | – | 16.4M | 3.7× |
| Collins & Kohli [30] | 44.40% | – | 15.2M | 4× |
| Naive Cut | 47.18% | 23.23% | 13.8M | 4.4× |
| SVD [12] | 44.02% | 20.56% | 11.9M | 5× |
| Network Pruning | 42.77% | 19.67% | 6.7M | 9× |

Figure 7: Weight distribution before and after parameter pruning. The right figure has 10× smaller scale.
1506.02626#25
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
26
minimize_φ g^T (φ − φ_old)
subject to (1/2) (φ − φ_old)^T H (φ − φ_old) ≤ ε,   (30)

where g is the gradient of the objective, and H = (1/N) Σ_{n=1}^N j_n j_n^T, where j_n = ∇_φ Vφ(s_n). Note that H is the "Gauss-Newton" approximation of the Hessian of the objective, and it is (up to a σ² factor) the Fisher information matrix when interpreting the value function as a conditional probability distribution. Using matrix-vector products v → Hv to implement the conjugate gradient algorithm, we compute a step direction s ≈ −H⁻¹g. Then we rescale s → αs such that (1/2)(αs)^T H(αs) = ε and take φ = φ_old + αs. This procedure is analogous to the procedure we use for updating the policy, which is described further in Section 6 and based on Schulman et al. (2015).

# 6 EXPERIMENTS

We designed a set of experiments to investigate the following questions:

1. What is the empirical effect of varying λ ∈ [0, 1] and γ ∈ [0, 1] when optimizing episodic total reward using generalized advantage estimation?
1506.02438#26
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
26
Figure 7: Weight distribution before and after parameter pruning. The right figure has 10× smaller scale.

After pruning, the storage requirements of AlexNet and VGGNet are small enough that all weights can be stored on chip, instead of in off-chip DRAM, which takes orders of magnitude more energy to access (Table 1). We are targeting our pruning method at fixed-function hardware specialized for sparse DNNs, given the limitations of general-purpose hardware on sparse computation. Figure 7 shows histograms of the weight distribution before (left) and after (right) pruning. The weights are from the first fully connected layer of AlexNet. The two panels have different y-axis scales. The original distribution of weights is centered on zero with tails dropping off quickly. Almost all parameters are between [−0.015, 0.015]. After pruning, the large center region is removed. The network parameters adjust themselves during the retraining phase. The result is that the parameters form a bimodal distribution and become more spread across the x-axis, between [−0.025, 0.025].

# 6 Conclusion
1506.02626#26
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
27
1. What is the empirical effect of varying λ ∈ [0, 1] and γ ∈ [0, 1] when optimizing episodic total reward using generalized advantage estimation?

2. Can generalized advantage estimation, along with trust region algorithms for policy and value function optimization, be used to optimize large neural network policies for challenging control problems?

²Another natural choice is to compute target values with an estimator based on the TD(λ) backup (Bertsekas, 2012): V̂_t = V_{φ_old}(s_n) + Σ_{l=0}^∞ (γλ)^l δ_{t+l}. While we experimented with this choice, we did not notice a difference in performance from

# 6.1 POLICY OPTIMIZATION ALGORITHM

While generalized advantage estimation can be used along with a variety of different policy gradient methods, for these experiments, we performed the policy updates using trust region policy optimization (TRPO) (Schulman et al., 2015). TRPO updates the policy by approximately solving the following constrained optimization problem each iteration:
1506.02438#27
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
27
# 6 Conclusion

We have presented a method to improve the energy efficiency and storage of neural networks without affecting accuracy by finding the right connections. Our method, motivated in part by how learning works in the mammalian brain, operates by learning which connections are important, pruning the unimportant connections, and then retraining the remaining sparse network. We highlight our experiments on AlexNet and VGGNet on ImageNet, showing that both fully connected and convolutional layers can be pruned, reducing the number of connections by 9× to 13× without loss of accuracy. This leads to smaller memory capacity and bandwidth requirements for real-time image processing, making deployment on mobile systems easier.

# References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
[2] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
1506.02626#27
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
28
minimize_θ L_{θ_old}(θ)
subject to D̄_KL^{θ_old}(π_{θ_old}, π_θ) ≤ ε,

where L_{θ_old}(θ) = (1/N) Σ_{n=1}^N [π_θ(a_n | s_n) / π_{θ_old}(a_n | s_n)] Â_n
and D̄_KL^{θ_old}(π_{θ_old}, π_θ) = (1/N) Σ_{n=1}^N D_KL(π_{θ_old}(· | s_n) ‖ π_θ(· | s_n)).   (31)

As described in (Schulman et al., 2015), we approximately solve this problem by linearizing the objective and quadraticizing the constraint, which yields a step in the direction θ − θ_old ∝ −F⁻¹g, where F is the average Fisher information matrix, and g is a policy gradient estimate. This policy update yields the same step direction as the natural policy gradient (Kakade, 2001a) and natural actor-critic (Peters & Schaal, 2008); however, it uses a different stepsize determination scheme and numerical procedure for computing the step. Since prior work (Schulman et al., 2015) compared TRPO to a variety of different policy optimization algorithms, we will not repeat these comparisons; rather, we will focus on varying the γ, λ parameters of the policy gradient estimator while keeping the underlying algorithm fixed. For completeness, the whole algorithm for iteratively updating the policy and value function is given below:
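Before the full listing (which follows in the next chunk), a minimal Python sketch of the two sides of Equation (31), assuming a diagonal-Gaussian policy as used for the continuous-control tasks; all array shapes are illustrative:

```python
import numpy as np

def surrogate_loss(logp_new, logp_old, adv):
    """L_{theta_old}(theta) from Equation (31): the mean importance-weighted
    advantage, pi_theta(a_n|s_n) / pi_theta_old(a_n|s_n) * A_hat_n."""
    return np.mean(np.exp(logp_new - logp_old) * adv)

def mean_kl_diag_gauss(mu_old, std_old, mu_new, std_new):
    """Constraint side of Equation (31): average KL over sampled states,
    here for diagonal-Gaussian policies."""
    kl = (np.log(std_new / std_old)
          + (std_old ** 2 + (mu_old - mu_new) ** 2) / (2.0 * std_new ** 2)
          - 0.5)
    return np.mean(np.sum(kl, axis=-1))

mu_old = np.zeros((5, 3))                 # 5 sampled states, 3 action dims
mu_new = 0.05 * np.random.randn(5, 3)
std = np.ones((5, 3))
print(mean_kl_diag_gauss(mu_old, std, mu_new, std))
```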
1506.02438#28
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
28
[3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
[4] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[5] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, pages 1701–1708. IEEE, 2014.
[6] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In 30th ICML, pages 1337–1345, 2013.
[7] Mark Horowitz. Energy table for 45nm process, Stanford VLSI wiki.
[8] JP Rauschecker. Neuronal mechanisms of developmental plasticity in the cat's visual system. Human neurobiology, 3(2):109–114, 1983.
1506.02626#28
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
29
For completeness, the whole algorithm for iteratively updating the policy and value function is given below:

Initialize policy parameter θ₀ and value function parameter φ₀.
for i = 0, 1, 2, … do
  Simulate current policy π_{θ_i} until N timesteps are obtained.
  Compute δ_t^V at all timesteps t ∈ {1, 2, …, N}, using V = V_{φ_i}.
  Compute Â_t = Σ_{l=0}^∞ (γλ)^l δ_{t+l}^V at all timesteps.
  Compute θ_{i+1} with TRPO update, Equation (31).
  Compute φ_{i+1} with Equation (30).
end for

Note that the policy update θ_i → θ_{i+1} is performed using the value function V_{φ_i} for advantage estimation, not V_{φ_{i+1}}. Additional bias would have been introduced if we updated the value function first. To see this, consider the extreme case where we overfit the value function, and the Bellman residual r_t + γV(s_{t+1}) − V(s_t) becomes zero at all timesteps; the policy gradient estimate would then be zero.

# 6.2 EXPERIMENTAL SETUP
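A compact Python rendering of the loop above; `simulate`, `trpo_update`, and `value_update` are hypothetical callables standing in for the rollout, Equation (31), and Equation (30), and `gae_advantages` is the helper sketched earlier:

```python
def run_gae_trpo(policy, value_fn, simulate, trpo_update, value_update,
                 gamma=0.99, lam=0.95, iterations=100):
    """Sketch of the iteration above. Note the ordering: advantages for the
    policy update use the value function from *before* its own update, as
    the surrounding text explains."""
    for i in range(iterations):
        rewards, values, batch = simulate(policy, value_fn)  # rollout
        adv = gae_advantages(rewards, values, gamma, lam)    # Equation (25)
        policy = trpo_update(policy, batch, adv)             # Equation (31)
        value_fn = value_update(value_fn, batch, rewards)    # Equation (30)
    return policy, value_fn
```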
1506.02438#29
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
29
Human neurobiology, 3(2):109–114, 1983.
[9] Christopher A Walsh. Peter Huttenlocher (1931–2013). Nature, 502(7470):172–172, 2013.
[10] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[11] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
[12] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, pages 1269–1277, 2014.
[13] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
1506.02626#29
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
30
6.2 EXPERIMENTAL SETUP We evaluated our approach on the classic cart-pole balancing problem, as well as several challenging 3D locomotion tasks: (1) bipedal locomotion; (2) quadrupedal locomotion; (3) dynamically standing up, for the biped, which starts off lying on its back. The models are shown in Figure 1. 6.2.1 ARCHITECTURE We used the same neural network architecture for all of the 3D robot tasks: a feedforward network with three hidden layers of 100, 50, and 25 tanh units, respectively, and a final output layer with linear activation. The policy and the value function used this same architecture, except that the value function estimator had a single scalar output. For the simpler cart-pole task, we used a linear policy, and a neural network with one 20-unit hidden layer as the value function.
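As a concrete illustration, here is a minimal NumPy sketch of the network family described above. The layer sizes come from the text; the function names and the initialization scheme are our own assumptions, not details from the paper.

```python
import numpy as np

def init_mlp(sizes, rng):
    # sizes, e.g. [obs_dim, 100, 50, 25, act_dim] for the policy,
    # or [obs_dim, 100, 50, 25, 1] for the scalar value function.
    weights = [rng.standard_normal((m, n)) / np.sqrt(m)
               for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]
    return weights, biases

def mlp_forward(x, weights, biases):
    # tanh hidden layers, linear output layer, as described above.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]
```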
1506.02438#30
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
30
[14] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. [15] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. [16] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. [17] Stephen José Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in neural information processing systems, pages 177–185, 1989. [18] Yann Le Cun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, pages 598–605. Morgan Kaufmann, 1990.
1506.02626#30
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
31
Figure 1: Top figures: robot models used for 3D locomotion. Bottom figures: a sequence of frames from the learned gaits. Videos are available at https://sites.google.com/site/gaepapersupp. 6.2.2 TASK DETAILS For the cart-pole balancing task, we collected 20 trajectories per batch, with a maximum length of 1000 timesteps, using the physical parameters from Barto et al. (1983). The simulated robot tasks were simulated using the MuJoCo physics engine (Todorov et al., 2012). The humanoid model has 33 state dimensions and 10 actuated degrees of freedom, while the quadruped model has 29 state dimensions and 8 actuated degrees of freedom. The initial state for these tasks consisted of a uniform distribution centered on a reference configuration. We used 50000 timesteps per batch for bipedal locomotion, and 200000 timesteps per batch for quadrupedal locomotion and bipedal standing. Each episode was terminated after 2000 timesteps if the robot had not reached a terminal state beforehand. The timestep was 0.01 seconds. The reward functions are provided in the table below.
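As an aside, the rollout settings just described can be gathered in one place. The dictionary below is purely illustrative (the key names are hypothetical); every number comes from the task description above.

```python
# Hypothetical structure; all values are taken from the task description above.
TASK_CONFIG = {
    "cart_pole": {"trajectories_per_batch": 20, "max_path_length": 1000},
    "biped_locomotion": {"timesteps_per_batch": 50_000, "max_path_length": 2000,
                         "dt": 0.01, "state_dim": 33, "action_dim": 10},
    "quadruped_locomotion": {"timesteps_per_batch": 200_000, "max_path_length": 2000,
                             "dt": 0.01, "state_dim": 29, "action_dim": 8},
    "biped_standup": {"timesteps_per_batch": 200_000, "max_path_length": 2000,
                      "dt": 0.01},
}
```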
1506.02438#31
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
31
[19] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, pages 164–164, 1993. [20] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015. [21] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and SVN Vishwanathan. Hash kernels for structured data. The Journal of Machine Learning Research, 10:2615–2637, 2009. [22] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, pages 1113–1120. ACM, 2009. [23] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15:1929–1958, 2014.
1506.02626#31
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
32
The reward functions are provided in the table below.

Task | Reward
3D biped locomotion | vfwd − 10⁻⁵‖u‖² − 10⁻⁵‖fimpact‖² + 0.2
Quadruped locomotion | vfwd − 10⁻⁶‖u‖² − 10⁻³‖fimpact‖² + 0.05
Biped getting up | −(hhead − 1.5)² − 10⁻⁵‖u‖²

Here, vfwd := forward velocity, u := vector of joint torques, fimpact := impact forces, hhead := height of the head. In the locomotion tasks, the episode is terminated if the center of mass of the actor falls below a predefined height: 0.8 m for the biped, and 0.2 m for the quadruped. The constant offset in the reward function encourages longer episodes; otherwise the quadratic reward terms might lead to a policy that ends the episodes as quickly as possible. 6.3 EXPERIMENTAL RESULTS
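Returning to the reward table above, the terms translate directly into code. The sketch below is our own reading of the table; the function names and argument layout are hypothetical.

```python
import numpy as np

def locomotion_reward(v_fwd, u, f_impact, torque_coef, impact_coef, alive_bonus):
    # Shared template for the two locomotion rewards in the table.
    return (v_fwd
            - torque_coef * np.sum(np.square(u))
            - impact_coef * np.sum(np.square(f_impact))
            + alive_bonus)

# Per the table: biped uses (1e-5, 1e-5, 0.2); quadruped uses (1e-6, 1e-3, 0.05).

def standup_reward(h_head, u):
    # Biped getting up: penalize head height away from 1.5 m and torque use.
    return -(h_head - 1.5) ** 2 - 1e-5 * np.sum(np.square(u))
```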
1506.02438#32
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02626
32
[24] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014. [25] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994. [26] Yangqing Jia, et al. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. [27] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. [28] Suraj Srinivas and R Venkatesh Babu. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149, 2015. [29] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014.
1506.02626#32
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
http://arxiv.org/pdf/1506.02626
Song Han, Jeff Pool, John Tran, William J. Dally
cs.NE, cs.CV, cs.LG
Published as a conference paper at NIPS 2015
null
cs.NE
20150608
20151030
[ { "id": "1507.06149" }, { "id": "1504.04788" }, { "id": "1510.00149" } ]
1506.02438
33
6.3 EXPERIMENTAL RESULTS All results are presented in terms of the cost, which is defined as negative reward and is minimized. Videos of the learned policies are available at https://sites.google.com/site/gaepapersupp. In plots, “No VF” means that we used a time-dependent baseline that did not depend on the state, rather than an estimate of the state value function. The time-dependent baseline was computed by averaging the return at each timestep over the trajectories in the batch. # 6.3.1 CART-POLE The results are averaged across 21 experiments with different random seeds. Results are shown in Figure 2, and indicate that the best results are obtained at intermediate values of the parameters: γ ∈ [0.96, 0.99] and λ ∈ [0.92, 0.99]. [Figure 2 plots: cart-pole learning curves at γ = 0.99 (cost vs. number of policy iterations) and performance after 20 iterations over (γ, λ).]
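The “No VF” time-dependent baseline described above is simple to reproduce. The sketch below (with hypothetical names) averages the empirical return at each timestep across the trajectories in the batch, handling trajectories of different lengths.

```python
import numpy as np

def time_dependent_baseline(returns_by_traj):
    # returns_by_traj: list of 1-D arrays, one per trajectory in the batch,
    # where returns_by_traj[i][t] is the return from timestep t of trajectory i.
    max_len = max(len(r) for r in returns_by_traj)
    baseline = np.zeros(max_len)
    for t in range(max_len):
        vals = [r[t] for r in returns_by_traj if len(r) > t]
        baseline[t] = np.mean(vals)
    return baseline
```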
1506.02438#33
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
34
Figure 2: Left: learning curves for cart-pole task, using generalized advantage estimation with varying values of λ at γ = 0.99. The fastest policy improvement is obtained by intermediate values of λ in the range [0.92, 0.98]. Right: performance after 20 iterations of policy optimization, as γ and λ are varied. White means higher reward. The best results are obtained at intermediate values of both. Figure 3: Left: Learning curves for 3D bipedal locomotion, averaged across nine runs of the algorithm. Right: learning curves for 3D quadrupedal locomotion, averaged across five runs. 3D BIPEDAL LOCOMOTION
1506.02438#34
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
35
3D BIPEDAL LOCOMOTION Each trial took about 2 hours to run on a 16-core machine, where the simulation rollouts were parallelized, as were the function, gradient, and matrix-vector-product evaluations used when optimizing the policy and value function. Here, the results are averaged across 9 trials with different random seeds. The best performance is again obtained using intermediate values of γ ∈ [0.99, 0.995], λ ∈ [0.96, 0.99]. The result after 1000 iterations is a fast, smooth gait that is effectively completely stable. We can compute how much “real time” was used for this learning process: 0.01 s/timestep × 50,000 timesteps/batch × 1,000 batches ÷ (3600 · 24 s/day) ≈ 5.8 days. Hence, it is plausible that this algorithm could be run on a real robot, or multiple real robots learning in parallel, if there were a way to reset the state of the robot and ensure that it doesn’t damage itself. # 6.3.3 OTHER 3D ROBOT TASKS
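Returning to the real-time estimate above, the arithmetic is a one-line computation:

```python
seconds = 0.01 * 50_000 * 1_000        # dt * timesteps/batch * batches
print(seconds / (3600 * 24))           # ≈ 5.79, i.e. roughly 5.8 days of experience
```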
1506.02438#35
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
36
# 6.3.3 OTHER 3D ROBOT TASKS The other two motor behaviors considered are quadrupedal locomotion and getting up off the ground for the 3D biped. Again, we performed 5 trials per experimental condition, with different random seeds (and initializations). The experiments took about 4 hours per trial on a 32-core machine. We performed a more limited comparison on these domains (due to the substantial computational resources required to run these experiments), fixing γ = 0.995 but varying λ = {0, 0.96}, as well as an experimental condition with no value function. For quadrupedal locomotion, the best results are obtained using a value function with λ = 0.96 (Section 6.3.2). For 3D standing, the value function always helped, but the results are roughly the same for λ = 0.96 and λ = 1. Figure 4: (a) Learning curve from quadrupedal walking, (b) learning curve for 3D standing up, (c) clips from 3D standing up. # 7 DISCUSSION
1506.02438#36
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
37
# 7 DISCUSSION Policy gradient methods provide a way to reduce reinforcement learning to stochastic gradient descent, by providing unbiased gradient estimates. However, so far their success at solving difficult control problems has been limited, largely due to their high sample complexity. We have argued that the key to variance reduction is to obtain good estimates of the advantage function. We have provided an intuitive but informal analysis of the problem of advantage function estimation, and justified the generalized advantage estimator, which has two parameters γ, λ which adjust the bias-variance tradeoff. We described how to combine this idea with trust region policy optimization and a trust region algorithm that optimizes a value function, both represented by neural networks. Combining these techniques, we are able to learn to solve difficult control tasks that have previously been out of reach for generic reinforcement learning methods. Our main experimental validation of generalized advantage estimation is in the domain of simulated robotic locomotion. As shown in our experiments, choosing an appropriate intermediate value of λ in the range [0.9, 0.99] usually results in the best performance. A possible topic for future work is how to adjust the estimator parameters γ, λ in an adaptive or automatic way.
1506.02438#37
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
38
One question that merits future investigation is the relationship between value function estimation error and policy gradient estimation error. If this relationship were known, we could choose an error metric for value function fitting that is well-matched to the quantity of interest, which is typically the accuracy of the policy gradient estimation. Some candidates for such an error metric might include the Bellman error or projected Bellman error, as described in Bhatnagar et al. (2009). Another enticing possibility is to use a shared function approximation architecture for the policy and the value function, while optimizing the policy using generalized advantage estimation. While formulating this problem in a way that is suitable for numerical optimization and provides convergence guarantees remains an open question, such an approach could allow the value function and policy representations to share useful features of the input, resulting in even faster learning. In concurrent work, researchers have been developing policy gradient methods that involve differentiation with respect to the continuous-valued action (Lillicrap et al., 2015; Heess et al., 2015). While we found empirically that the one-step return (λ = 0) leads to excessive bias and poor performance, these papers show that such methods can work when tuned appropriately. However, note that those papers consider control problems with substantially lower-dimensional state and action spaces than the ones considered here. A comparison between both classes of approach would be useful for future work. # ACKNOWLEDGEMENTS
1506.02438#38
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
39
# ACKNOWLEDGEMENTS We thank Emo Todorov for providing the simulator as well as insightful discussions, and we thank Greg Wayne, Yuval Tassa, Dave Silver, Carlos Florensa Campo, and Greg Brockman for insightful discussions. This research was funded in part by the Office of Naval Research through a Young Investigator Award and under grant number N00014-11-1-0688, by DARPA through a Young Faculty Award, and by the Army Research Office through the MAST program. A FREQUENTLY ASKED QUESTIONS A.1 WHAT’S THE RELATIONSHIP WITH COMPATIBLE FEATURES?
1506.02438#39
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
40
A FREQUENTLY ASKED QUESTIONS A.1 WHAT’S THE RELATIONSHIP WITH COMPATIBLE FEATURES? Compatible features are often mentioned in relation to policy gradient algorithms that make use of a value function, and the idea was proposed in the paper On Actor-Critic Methods by Konda & Tsitsiklis (2003). These authors pointed out that due to the limited representation power of the policy, the policy gradient only depends on a certain subspace of the space of advantage functions. This subspace is spanned by the compatible features ∇θi log πθ(at|st), where i ∈ {1, 2, . . . , dim θ}. This theory of compatible features provides no guidance on how to exploit the temporal structure of the problem to obtain better estimates of the advantage function, making it mostly orthogonal to the ideas in this paper. The idea of compatible features motivates an elegant method for computing the natural policy gradient (Kakade, 2001a; Peters & Schaal, 2008). Given an empirical estimate of the advantage function Ât at each timestep, we can project it onto the subspace of compatible features by solving the following least squares problem: minimize_r Σ_t ‖r · ∇θ log πθ(at | st) − Ât‖².  (32)
1506.02438#40
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
41
minimize_r Σ_t ‖r · ∇θ log πθ(at | st) − Ât‖².  (32)

If Â is γ-just, the least squares solution is the natural policy gradient (Kakade, 2001a). Note that any estimator of the advantage function can be substituted into this formula, including the ones we derive in this paper. For our experiments, we also compute natural policy gradient steps, but we use the more computationally efficient numerical procedure from Schulman et al. (2015), as discussed in Section 6. A.2 WHY DON’T YOU JUST USE A Q-FUNCTION?
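Returning to Eq. (32): a minimal sketch of the projection, assuming the per-timestep score vectors and advantage estimates have already been collected. The damping term is our own addition for numerical stability, not part of Eq. (32).

```python
import numpy as np

def natural_gradient_direction(score_vectors, advantages, damping=1e-3):
    # score_vectors: (T, dim_theta) array whose rows are grad_theta log pi(a_t|s_t);
    # advantages: (T,) array of advantage estimates A_t.
    # Solves min_r sum_t (score_t . r - A_t)^2; the least-squares solution
    # is the natural policy gradient direction (up to the damping term).
    G = np.asarray(score_vectors)
    A = np.asarray(advantages)
    H = G.T @ G + damping * np.eye(G.shape[1])   # normal-equations matrix
    return np.linalg.solve(H, G.T @ A)
```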
1506.02438#41
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
42
Previous actor-critic methods, e.g. in Konda & Tsitsiklis (2003), use a Q-function to obtain potentially low-variance policy gradient estimates. Recent papers, including Heess et al. (2015); Lillicrap et al. (2015), have shown that a neural network Q-function approximator can be used effectively in a policy gradient method. However, there are several advantages to using a state-value function in the manner of this paper. First, the state-value function has a lower-dimensional input and is thus easier to learn than a state-action value function. Second, the method of this paper allows us to smoothly interpolate between the high-bias estimator (λ = 0) and the low-bias estimator (λ = 1). On the other hand, using a parameterized Q-function only allows us to use a high-bias estimator. We have found that the bias is prohibitively large when using a one-step estimate of the returns, i.e., the λ = 0 estimator, Ât = δVt = rt + γV(st+1) − V(st). We expect that similar difficulty would be encountered when using an advantage
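To make the λ-interpolation above concrete, here is a minimal sketch of the λ-parameterized estimator, computed by the usual backward recursion over the one-step residuals δVt; λ = 0 reduces to the one-step estimator Ât = δVt, and λ = 1 to the empirical return minus the baseline. The function name and array layout are our own.

```python
import numpy as np

def lambda_advantages(rewards, values, gamma, lam):
    # values has length T+1 (includes a bootstrap value for the final state).
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    deltas = rewards + gamma * values[1:] - values[:-1]  # one-step TD residuals
    adv = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running      # A_t = sum_l (gamma*lam)^l delta_{t+l}
        adv[t] = running
    return adv
```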
1506.02438#42
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
44
# B PROOFS Proof of Proposition 1: First we can split the expectation into terms involving Q and b, Es0:∞,a0:∞ [∇θ log πθ(at | st)(Qt(s0:∞, a0:∞) − bt(s0:t, a0:t−1))] = Es0:∞,a0:∞ [∇θ log πθ(at | st)(Qt(s0:∞, a0:∞))] − Es0:∞,a0:∞ [∇θ log πθ(at | st)(bt(s0:t, a0:t−1))] (33) We’ll consider the terms with Q and b in turn.
1506.02438#44
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
46
Next,
Es0:∞,a0:∞ [∇θ log πθ(at | st) bt(s0:t, a0:t−1)]
= Es0:t,a0:t−1 [Est+1:∞,at:∞ [∇θ log πθ(at | st) bt(s0:t, a0:t−1)]]
= Es0:t,a0:t−1 [Est+1:∞,at:∞ [∇θ log πθ(at | st)] bt(s0:t, a0:t−1)]
= Es0:t,a0:t−1 [0 · bt(s0:t, a0:t−1)]
= 0.
# REFERENCES Barto, Andrew G, Sutton, Richard S, and Anderson, Charles W. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834–846, 1983. Baxter, Jonathan and Bartlett, Peter L. Reinforcement learning in POMDPs via direct gradient ascent. In ICML, pp. 41–48, 2000. Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 2. Athena Scientific, 2012.
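The key step in the derivation above is that the inner expectation of the score function is zero: Eat [∇θ log πθ(at | st)] = 0. This is easy to verify numerically; the toy check below uses a two-action softmax policy, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.3, -1.2])            # logits of a 2-action softmax policy
p = np.exp(theta) / np.exp(theta).sum()

def grad_log_pi(a):
    # Gradient w.r.t. the logits of log softmax(theta)[a]: onehot(a) - p.
    return np.eye(2)[a] - p

actions = rng.choice(2, size=100_000, p=p)
grads = np.array([grad_log_pi(a) for a in actions])
print(grads.mean(axis=0))                # ≈ [0, 0], as the proof requires
```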
1506.02438#46
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
47
Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 2. Athena Scientific, 2012. Bhatnagar, Shalabh, Precup, Doina, Silver, David, Sutton, Richard S, Maei, Hamid R, and Szepesvári, Csaba. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pp. 1204–1212, 2009. Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471–1530, 2004. Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine learning, 84 (1-2):137–169, 2011. Heess, Nicolas, Wayne, Greg, Silver, David, Lillicrap, Timothy, Tassa, Yuval, and Erez, Tom. Learning continuous control policies by stochastic value gradients. arXiv preprint arXiv:1510.09142, 2015. Hull, Clark. Principles of behavior. 1943. Kakade, Sham. A natural policy gradient. In NIPS, volume 14, pp. 1531–1538, 2001a.
1506.02438#47
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
48
Hull, Clark. Principles of behavior. 1943. Kakade, Sham. A natural policy gradient. In NIPS, volume 14, pp. 1531–1538, 2001a. Kakade, Sham. Optimizing average reward using discounted rewards. In Computational Learning Theory, pp. 605–615. Springer, 2001b. Kimura, Hajime and Kobayashi, Shigenobu. An analysis of actor/critic algorithms using eligibility traces: Reinforcement learning with imperfect value function. In ICML, pp. 278–286, 1998. Konda, Vijay R and Tsitsiklis, John N. On actor-critic algorithms. SIAM journal on Control and Optimization, 42(4):1143–1166, 2003. Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Marbach, Peter and Tsitsiklis, John N. Approximate gradient methods in policy-space optimization of markov reward processes. Discrete Event Dynamic Systems, 13(1-2):111–148, 2003.
1506.02438#48
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
49
Minsky, Marvin. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30, 1961. Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278–287, 1999. Peters, Jan and Schaal, Stefan. Natural actor-critic. Neurocomputing, 71(7):1180–1190, 2008. Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015. Sutton, Richard S and Barto, Andrew G. Introduction to reinforcement learning. MIT Press, 1998. Sutton, Richard S, McAllester, David A, Singh, Satinder P, and Mansour, Yishay. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063. Citeseer, 1999. Thomas, Philip. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441–448, 2014.
1506.02438#49
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.02438
50
Thomas, Philip. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441–448, 2014. Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012. Wawrzyński, Paweł. Real-time reinforcement learning by sequential actor–critics and experience replay. Neural Networks, 22(10):1484–1497, 2009. Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. Wright, Stephen J and Nocedal, Jorge. Numerical optimization. Springer New York, 1999.
1506.02438#50
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
http://arxiv.org/pdf/1506.02438
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel
cs.LG, cs.RO, cs.SY
null
null
cs.LG
20150608
20181020
[ { "id": "1502.05477" }, { "id": "1509.02971" }, { "id": "1510.09142" } ]
1506.01186
1
# Abstract It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate “reasonable bounds” – linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks. [Figure 1. Classification accuracy while training CIFAR-10. The red curve shows the result of training with one of the new learning rate policies. Curves: exponential policy vs. CLR (our approach); x-axis: iteration ×10^5.]
1506.01186#1
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
2
Figure 1. Classification accuracy while training CIFAR-10. The red curve shows the result of training with one of the new learning rate policies. ing training. This paper demonstrates the surprising phenomenon that a varying learning rate during training is beneficial overall and thus proposes to let the global learning rate vary cyclically within a band of values instead of setting it to a fixed value. In addition, this cyclical learning rate (CLR) method practically eliminates the need to tune the learning rate yet achieves near optimal classification accuracy. Furthermore, unlike adaptive learning rates, the CLR methods require essentially no additional computation. # 1. Introduction Deep neural networks are the basis of state-of-the-art results for image recognition [17, 23, 25], object detection [7], face recognition [26], speech recognition [8], machine translation [24], image caption generation [28], and driverless car technology [14]. However, training a deep neural network is a difficult global optimization problem.
1506.01186#2
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
3
The potential benefits of CLR can be seen in Figure 1, which shows the test data classification accuracy of the CIFAR-10 dataset during training1. The baseline (blue curve) reaches a final accuracy of 81.4% after 70,000 iterations. In contrast, it is possible to fully train the network using the CLR method instead of tuning (red curve) within 25,000 iterations and attain the same accuracy. A deep neural network is typically updated by stochastic gradient descent and the parameters θ (weights) are updated by θ_t = θ_{t−1} − ε_t (∂L/∂θ), where L is a loss function and ε_t is the learning rate. It is well known that too small a learning rate will make a training algorithm converge slowly while too large a learning rate will make the training algorithm diverge [2]. Hence, one must experiment with a variety of learning rates and schedules. Conventional wisdom dictates that the learning rate should be a single value that monotonically decreases during training. The contributions of this paper are: 1. A methodology for setting the global learning rates for training neural networks that eliminates the need to perform numerous experiments to find the best values and schedule with essentially no additional computation. 2. A surprising phenomenon is demonstrated - allowing
1 Hyper-parameters and architecture were obtained in April 2015 from caffe.berkeleyvision.org/gathered/examples/cifar10.html
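To make the update rule above concrete, here is a minimal Lua sketch (ours, not code from the paper) of a few SGD steps on a toy quadratic loss; the loss L(θ) = (θ − 3)^2 and every name in it are illustrative assumptions.
-- Toy sketch of the SGD update theta_t = theta_{t-1} - eps_t * dL/dtheta.
-- The quadratic loss and all names here are illustrative, not from the paper.
local theta = 0.0   -- initial weight
local eps = 0.1     -- learning rate: the hyper-parameter that CLR schedules
for t = 1, 50 do
  local grad = 2 * (theta - 3)  -- dL/dtheta for L(theta) = (theta - 3)^2
  theta = theta - eps * grad    -- one SGD step
end
print(string.format("theta after 50 steps: %.4f (optimum is 3)", theta))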
1506.01186#3
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
4
the learning rate to rise and fall is beneficial overall even though it might temporarily harm the network’s performance. 3. Cyclical learning rates are demonstrated with ResNets, Stochastic Depth networks, and DenseNets on the CIFAR-10 and CIFAR-100 datasets, and on ImageNet with two well-known architectures: AlexNet [17] and GoogLeNet [25]. # 2. Related work The book “Neural Networks: Tricks of the Trade” is a terrific source of practical advice. In particular, Yoshua Bengio [2] discusses reasonable ranges for learning rates and stresses the importance of tuning the learning rate. A technical report by Breuel [3] provides guidance on a variety of hyper-parameters. There are also numerous websites giving practical suggestions for setting the learning rates. Adaptive learning rates: Adaptive learning rates can be considered a competitor to cyclical learning rates because one can rely on local adaptive learning rates in place of global learning rate experimentation, but there is a significant computational cost in doing so. CLR does not incur this computational cost, so it can be used freely.
1506.01186#4
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
5
A review of the early work on adaptive learning rates can be found in George and Powell [6]. Duchi, et al. [5] proposed AdaGrad, which is one of the early adaptive methods that estimates the learning rates from the gradients. RMSProp is discussed in the slides by Geoffrey Hinton2 [27]. RMSProp is described there as “Divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.” RMSProp is a fundamental adaptive learning rate method that others have built on. Schaul et al. [22] discuss an adaptive learning rate based on a diagonal estimation of the Hessian of the gradients. One of the features of their method is that they allow their automatic method to decrease or increase the learning rate. However, their paper seems to limit the idea of increasing the learning rate to non-stationary problems. On the other hand, this paper demonstrates that a schedule of increasing the learning rate is more universally valuable. Zeiler [29] describes his AdaDelta method, which improves on AdaGrad based on two ideas: limiting the sum of squared gradients over all time to a limited window, and making the parameter update rule consistent with a units evaluation on the relationship between the update and the Hessian.
1506.01186#5
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
6
More recently, several papers have appeared on adaptive learning rates. Gulcehre and Bengio [9] propose an adaptive learning rate algorithm, called AdaSecant, that utilizes the root mean square statistics and variance of the gradients. Dauphin et al. [4] show that RMSProp provides a biased estimate and go on to describe another estimator, named ESGD, that is unbiased. Kingma and Lei-Ba [16] introduce Adam that is designed to combine the advantages from AdaGrad and RMSProp. Bache, et al. [1] propose exploiting solutions to a multi-armed bandit problem for learning rate selection. A summary and tutorial of adaptive learning rates can be found in a recent paper by Ruder [20]. Adaptive learning rates are fundamentally different from CLR policies, and CLR can be combined with adaptive learning rates, as shown in Section 4.1. In addition, CLR policies are computationally simpler than adaptive learning rates. CLR is likely most similar to the SGDR method [18] that appeared recently. 2 www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf # 3. Optimal Learning Rates # 3.1. Cyclical Learning Rates
1506.01186#6
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
7
# 3. Optimal Learning Rates # 3.1. Cyclical Learning Rates The essence of this learning rate policy comes from the observation that increasing the learning rate might have a short term negative effect and yet achieve a longer term beneficial effect. This observation leads to the idea of letting the learning rate vary within a range of values rather than adopting a stepwise fixed or exponentially decreasing value. That is, one sets minimum and maximum boundaries and the learning rate cyclically varies between these bounds. Experiments with numerous functional forms, such as a triangular window (linear), a Welch window (parabolic) and a Hann window (sinusoidal), all produced equivalent results. This led to adopting a triangular window (linearly increasing then linearly decreasing), which is illustrated in Figure 2, because it is the simplest function that incorporates this idea. The rest of this paper refers to this as the triangular learning rate policy. [Figure 2. Triangular learning rate policy. The blue lines represent learning rate values changing between bounds. The input parameter stepsize is the number of iterations in half a cycle. Labels: maximum bound (max_lr), minimum bound (base_lr), stepsize.]
1506.01186#7
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
8
Figure 2. Triangular learning rate policy. The blue lines represent learning rate values changing between bounds. The input parameter stepsize is the number of iterations in half a cycle. An intuitive understanding of why CLR methods work comes from considering the loss function topology. Dauphin et al. [4] argue that the difficulty in minimizing the loss arises from saddle points rather than poor local minima. Saddle points have small gradients that slow the learning process. However, increasing the learning rate allows more rapid traversal of saddle point plateaus. A more practical reason as to why CLR works is that, by following the methods in Section 3.3, it is likely the optimum learning rate will be between the bounds and near optimal learning rates will be used throughout training. The red curve in Figure 1 shows the result of the triangular policy on CIFAR-10. The settings used to create the red curve were a minimum learning rate of 0.001 (as in the original parameter file) and a maximum of 0.006. Also, the cycle length (i.e., the number of iterations until the learning rate returns to the initial value) is set to 4,000 iterations (i.e., stepsize = 2000) and Figure 1 shows that the accuracy peaks at the end of each cycle.
1506.01186#8
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
9
Implementation of the code for a new learning rate policy is straightforward. An example of the code added to Torch 7 in the experiments shown in Section 4.1.2 is the following few lines:
local cycle = math.floor(1 + epochCounter / (2 * stepsize))
local x = math.abs(epochCounter / stepsize - 2 * cycle + 1)
local lr = opt.LR + (maxLR - opt.LR) * math.max(0, (1 - x))
where opt.LR is the specified lower (i.e., base) learning rate, epochCounter is the number of epochs of training, and lr is the computed learning rate. This policy is named triangular and is as described above, with two new input parameters defined: stepsize (half the period or cycle length) and max_lr (the maximum learning rate boundary). This code varies the learning rate linearly between the minimum (base_lr) and the maximum (max_lr).
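As a usage illustration, the fragment below wraps the three lines above in a function and evaluates it at a few iteration counts, using the CIFAR-10 settings quoted earlier (base_lr = 0.001, max_lr = 0.006, stepsize = 2000); the wrapper itself is our sketch, not code from the paper.
-- Hypothetical driver around the Torch snippet above; epochCounter is
-- treated as an iteration counter, as in the CIFAR-10 runs described here.
local opt = { LR = 0.001 }   -- base_lr
local maxLR = 0.006          -- max_lr
local stepsize = 2000

local function triangularLR(epochCounter)
  local cycle = math.floor(1 + epochCounter / (2 * stepsize))
  local x = math.abs(epochCounter / stepsize - 2 * cycle + 1)
  return opt.LR + (maxLR - opt.LR) * math.max(0, (1 - x))
end

-- lr climbs to max_lr at iteration 2000 and returns to base_lr at 4000
for _, it in ipairs({0, 1000, 2000, 3000, 4000}) do
  print(it, triangularLR(it))
end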
1506.01186#9
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
10
In addition to the triangular policy, the following CLR policies are discussed in this paper: 1. triangular2; the same as the triangular policy except the learning rate difference is cut in half at the end of each cycle. This means the learning rate difference drops after each cycle. 2. exp_range; the learning rate varies between the minimum and maximum boundaries and each boundary value declines by an exponential factor of gamma^iteration (both variants are sketched after this passage). # 3.2. How can one estimate a good value for the cycle length? The length of a cycle and the input parameter stepsize can be easily computed from the number of iterations in an epoch. An epoch is calculated by dividing the number of training images by the batchsize used. For example, CIFAR-10 has 50,000 training images and the batchsize is 100, so an epoch = 50,000/100 = 500 iterations. The final accuracy results are actually quite robust to cycle length, but experiments show that it often is good to set stepsize equal to 2–10 times the number of iterations in an epoch. For example, setting stepsize = 8 * epoch with the CIFAR-10 training run (as shown in Figure 1) only gives slightly better results than setting stepsize = 2 * epoch.
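A minimal sketch of the two variants just listed, reusing the triangular shape from the Torch snippet in the previous chunk; the helper function and its names are ours, and gamma is only needed for exp_range. The last lines also work through the epoch/stepsize arithmetic from Section 3.2.
-- Our sketch of triangular2 and exp_range: both keep the triangular shape
-- but scale the (max - base) range, by 1/2^(cycle-1) or by gamma^iteration.
local function clrLR(policy, it, baseLR, maxLR, stepsize, gamma)
  local cycle = math.floor(1 + it / (2 * stepsize))
  local x = math.abs(it / stepsize - 2 * cycle + 1)
  local scale = 1.0
  if policy == "triangular2" then
    scale = 1 / math.pow(2, cycle - 1)   -- halve the range after each cycle
  elseif policy == "exp_range" then
    scale = math.pow(gamma, it)          -- exponential decay of the range
  end
  return baseLR + (maxLR - baseLR) * math.max(0, 1 - x) * scale
end

-- peak of the second cycle is halved under triangular2:
print(clrLR("triangular2", 6000, 0.001, 0.006, 2000))
print(clrLR("exp_range", 6000, 0.001, 0.006, 2000, 0.99994))

-- stepsize from the epoch arithmetic in Section 3.2 (CIFAR-10 numbers):
local epoch = 50000 / 100    -- 500 iterations per epoch
local stepsize = 8 * epoch   -- anywhere in 2*epoch .. 10*epoch works
print(epoch, stepsize)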
1506.01186#10
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
11
Furthermore, there is a certain elegance to the rhythm of these cycles and it simplifies the decision of when to drop learning rates and when to stop the current training run. Experiments show that replacing each step of a constant learning rate with at least 3 cycles trains the network weights most of the way and running for 4 or more cycles will achieve even better performance. Also, it is best to stop training at the end of a cycle, which is when the learning rate is at the minimum value and the accuracy peaks. # 3.3. How can one estimate reasonable minimum and maximum boundary values? There is a simple way to estimate reasonable minimum and maximum boundary values with one training run of the network for a few epochs. It is an “LR range test”; run your model for several epochs while letting the learning rate increase linearly between low and high LR values. This test is enormously valuable whenever you are facing a new architecture or dataset. [Figure 3 (CIFAR-10). Classification accuracy as a function of increasing learning rate for 8 epochs (LR range test); y-axis: accuracy, x-axis: learning rate 0–0.02.]
1506.01186#11
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
12
Figure 3. Classification accuracy as a function of increasing learning rate for 8 epochs (LR range test). The triangular learning rate policy provides a simple mechanism to do this. For example, in Caffe, set base_lr to the minimum value and set max_lr to the maximum value. Set both the stepsize and max_iter to the same number of iterations. In this case, the learning rate will increase linearly from the minimum value to the maximum value during this short run. Next, plot the accuracy versus learning rate. Note the learning rate value when the accuracy starts to increase and when the accuracy slows, becomes ragged, or
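The recipe above translates directly into a short loop. The following Lua sketch is ours, not paper code; the train/eval hooks are placeholder stubs standing in for the user's own training loop.
-- LR range test sketch: stepsize = max_iter, so the LR only ramps up once.
local baseLR, maxLR = 0.0, 0.02   -- bounds like those on the Figure 3 axis
local maxIter = 4000              -- e.g. 8 epochs at 500 iterations/epoch

local function trainOneIter(lr) end            -- placeholder: one SGD step
local function evalAccuracy() return 0.0 end   -- placeholder: eval hook

local log = {}
for it = 0, maxIter - 1 do
  local lr = baseLR + (maxLR - baseLR) * it / maxIter  -- linear ramp
  trainOneIter(lr)
  if it % 100 == 0 then
    table.insert(log, { lr = lr, acc = evalAccuracy() })
  end
end
-- Plot acc vs. lr from `log`; choose base_lr where accuracy starts rising
-- and max_lr where it slows, becomes ragged, or falls.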
1506.01186#12
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
13
Dataset | LR policy | Iterations | Accuracy (%)
CIFAR-10 | fixed | 70,000 | 81.4
CIFAR-10 | triangular2 | 25,000 | 81.4
CIFAR-10 | decay | 25,000 | 78.5
CIFAR-10 | exp | 70,000 | 79.1
CIFAR-10 | exp_range | 42,000 | 82.2
AlexNet | fixed | 400,000 | 58.0
AlexNet | triangular2 | 400,000 | 58.4
AlexNet | exp | 300,000 | 56.0
AlexNet | exp | 460,000 | 56.5
AlexNet | exp_range | 300,000 | 56.5
GoogLeNet | fixed | 420,000 | 63.0
GoogLeNet | triangular2 | 420,000 | 64.4
GoogLeNet | exp | 240,000 | 58.2
GoogLeNet | exp_range | 240,000 | 60.2
Table 1. Comparison of accuracy results on test/validation data at the end of the training. starts to fall. These two learning rates are good choices for bounds; that is, set base_lr to the first value and set max_lr to the latter value. Alternatively, one can use the rule of thumb that the optimum learning rate is usually within a factor of two of the largest one that converges [2] and set base_lr to 1
1506.01186#13
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
14
Figure 3 shows an example of making this type of run with the CIFAR-10 dataset, using the architecture and hyper-parameters provided by Caffe. One can see from Figure 3 that the model starts converging right away, so it is reasonable to set base_lr = 0.001. Furthermore, above a learning rate of 0.006 the accuracy rise gets rough and eventually begins to drop, so it is reasonable to set max_lr = 0.006. Whenever one is starting with a new architecture or dataset, a single LR range test provides both a good LR value and a good range. Then one should compare runs with a fixed LR versus CLR with this range. Whichever wins can be used with confidence for the rest of one’s experiments. # 4. Experiments The purpose of this section is to demonstrate the effectiveness of the CLR methods on some standard datasets and with a range of architectures. In the subsections below, CLR policies are used for training with the CIFAR-10, CIFAR-100, and ImageNet datasets. These three datasets and a variety of architectures demonstrate the versatility of CLR. # 4.1. CIFAR-10 and CIFAR-100 # 4.1.1 Caffe’s CIFAR-10 architecture
1506.01186#14
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
15
# 4.1. CIFAR-10 and CIFAR-100 # 4.1.1 Caffe’s CIFAR-10 architecture The CIFAR-10 architecture and hyper-parameter settings on the Caffe website are fairly standard and were used here as a baseline. As discussed in Section 3.2, an epoch is equal to 500 iterations and a good setting for stepsize is 2,000. Section 3.3 discussed how to estimate reasonable minimum and maximum boundary values for the learning rate from Figure 3. All that is needed to optimally train the network is to set base_lr = 0.001 and max_lr = 0.006. For the triangular2 policy run shown in Figure 1, the stepsize and learning rate bounds are shown in Table 2. [Figure 4. Classification accuracy as a function of iteration for 70,000 iterations (CIFAR-10; exp policy vs. exp_range).] [Figure 5. Classification accuracy as a function of iteration for the CIFAR-10 dataset using adaptive learning methods (Nesterov + CLR, Adam, Adam + CLR). See text for explanation.]
1506.01186#15
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
16
base_lr | max_lr | stepsize | start | max_iter
0.001 | 0.005 | 2,000 | 0 | 16,000
0.0001 | 0.0005 | 1,000 | 16,000 | 22,000
0.00001 | 0.00005 | 500 | 22,000 | 25,000
Table 2. Hyper-parameter settings for CIFAR-10 example in Figure 1. Figure 1 shows the result of running with the triangular2 policy with the parameter setting in Table 2. As shown in Table 1, one obtains the same test classification accuracy of 81.4% after only 25,000 iterations with the triangular2 policy as obtained by running the standard hyper-parameter settings for 70,000 iterations. [Figure 6. Batch Normalization CIFAR-10 example (provided with the Caffe download); sigmoid + batch normalization, accuracy vs. iteration.]
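One way to read Table 2 is as three consecutive triangular stages, each with its own bounds and stepsize, active from `start` until `max_iter`. The following sketch is our interpretation of that schedule, not the paper's code.
-- Our sketch of the staged schedule in Table 2: each stage runs a
-- triangular cycle with its own bounds until its max_iter is reached.
local stages = {
  { base = 0.001,   max = 0.005,   stepsize = 2000, start = 0,     maxIter = 16000 },
  { base = 0.0001,  max = 0.0005,  stepsize = 1000, start = 16000, maxIter = 22000 },
  { base = 0.00001, max = 0.00005, stepsize = 500,  start = 22000, maxIter = 25000 },
}

local function stagedLR(it)
  for _, s in ipairs(stages) do
    if it >= s.start and it < s.maxIter then
      local t = it - s.start
      local cycle = math.floor(1 + t / (2 * s.stepsize))
      local x = math.abs(t / s.stepsize - 2 * cycle + 1)
      return s.base + (s.max - s.base) * math.max(0, 1 - x)
    end
  end
  return stages[#stages].base  -- past the last stage
end

print(stagedLR(2000), stagedLR(17000))  -- peak of stage 1, peak of stage 2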
1506.01186#16
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
17
Figure 6. Batch Normalization CIFAR-10 example (provided with the Caffe download). from the triangular policy derive from reducing the learning rate because this is when the accuracy climbs the most. As a test, a decay policy was implemented where the learning rate starts at the max_lr value and then is linearly reduced to the base_lr value for stepsize number of iterations. After that, the learning rate is fixed to base_lr. For the decay policy, max_lr = 0.007, base_lr = 0.001, and stepsize = 4000. Table 1 shows that the final accuracy is only 78.5%, providing evidence that both increasing and decreasing the learning rate are essential for the benefits of the CLR method. Figure 4 compares the exp learning rate policy in Caffe with the new exp_range policy using gamma = 0.99994 for both policies. The result is that when using the exp_range policy one can stop training at iteration 42,000 with a test accuracy of 82.2% (going to iteration 70,000 does not improve on this result). This is substantially better than the best test accuracy of 79.1% one obtains from using the exp learning rate policy.
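A quick sanity check (ours, not from the paper) of why exp_range can stop early: by iteration 42,000 the gamma^iteration factor has already shrunk the LR range to a small fraction of its initial size.
-- gamma^iteration at the early-stopping point quoted above
local gamma, it = 0.99994, 42000
print(math.pow(gamma, it))  -- ~0.08: the LR range is down to ~8% of its start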
1506.01186#17
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
18
The current Caffe download contains additional architectures and hyper-parameters for CIFAR-10 and in particular there is one with sigmoid non-linearities and batch normalization. Figure 6 compares the training accuracy using the downloaded hyper-parameters with a fixed learning rate (blue curve) to using a cyclical learning rate (red curve). As can be seen in this Figure, the final accuracy for the fixed learning rate (60.8%) is substantially lower than the cyclical learning rate final accuracy (72.2%). There is clear performance improvement when using CLR with this architecture containing sigmoids and batch normalization. Experiments were carried out with architectures featuring both adaptive learning rate methods and CLR. Table 3 lists the final accuracy values from various adaptive learning rate methods, run with and without CLR. All of the adaptive methods in Table 3 were run by invoking the respective option in Caffe. The learning rate boundaries are given in Table 3 (just below the method’s name), which were determined by using the technique described in Section 3.3. Just the lower bound was used for base_lr for the fixed policy.
1506.01186#18
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
19
LR type/bounds | LR policy | Iterations | Accuracy (%)
Nesterov [19], 0.001–0.006 | fixed | 70,000 | 82.1
Nesterov [19], 0.001–0.006 | triangular | 25,000 | 81.3
ADAM [16], 0.0005–0.002 | fixed | 70,000 | 81.4
ADAM [16], 0.0005–0.002 | triangular | 25,000 | 79.8
ADAM [16], 0.0005–0.002 | triangular | 70,000 | 81.1
RMSprop [27], 0.0001–0.0003 | fixed | 70,000 | 75.2
RMSprop [27], 0.0001–0.0003 | triangular | 25,000 | 72.8
RMSprop [27], 0.0001–0.0003 | triangular | 70,000 | 75.1
AdaGrad [5], 0.003–0.035 | fixed | 70,000 | 74.6
AdaGrad [5], 0.003–0.035 | triangular | 25,000 | 76.0
AdaDelta [29], 0.01–0.1 | fixed | 70,000 | 67.3
AdaDelta [29], 0.01–0.1 | triangular | 25,000 | 67.3
Table 3. Comparison of CLR with adaptive learning rate methods. The table shows accuracy results for the CIFAR-10 dataset on test data at the end of the training.
1506.01186#19
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
20
Table 3. Comparison of CLR with adaptive learning rate methods. The table shows accuracy results for the CIFAR-10 dataset on test data at the end of the training. Table 3 shows that for some adaptive learning rate methods combined with CLR, the final accuracy after only 25,000 iterations is equivalent to the accuracy obtained without CLR after 70,000 iterations. For others, it was necessary (even with CLR) to run until 70,000 iterations to obtain similar results. Figure 5 shows the curves from running the Nesterov method with CLR (reached 81.3% accuracy in only 25,000 iterations) and the Adam method both with and without CLR (both needed 70,000 iterations). When using adaptive learning rate methods, the benefits from CLR are sometimes reduced, but CLR can still be valuable as it sometimes provides benefit at essentially no cost. # 4.1.2 ResNets, Stochastic Depth, and DenseNets
1506.01186#20
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
21
# 4.1.2 ResNets, Stochastic Depth, and DenseNets Residual networks [10, 11], and the family of variations that have subsequently emerged, achieve state-of-the-art results on a variety of tasks. Here we provide comparison experiments between the original implementations and versions with CLR for three members of this residual network family: the original ResNet [10], Stochastic Depth networks [13], and the recent DenseNets [12]. Our experiments can be readily replicated because the authors of these papers make their Torch code available3. Since all three implementations are available using the Torch 7 framework, the experiments in this section were performed using Torch. In addition to the experiment in the previous Section, these networks also incorporate batch normalization [15] and demonstrate the value of CLR for architectures with batch normalization. Both CIFAR-10 and the CIFAR-100 datasets were used in these experiments. The CIFAR-100 dataset is similar to the CIFAR-10 data but it has 100 classes instead of 10 and each class has 600 labeled examples. 3 https://github.com/facebook/fb.resnet.torch, https://github.com/yueatsprograms/Stochastic_Depth, https://github.com/liuzhuang13/DenseNet
1506.01186#21
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
22
Architecture | CIFAR-10 (LR) | CIFAR-100 (LR)
ResNet | 92.8 (0.1) | 71.2 (0.1)
ResNet | 93.3 (0.2) | 71.6 (0.2)
ResNet | 91.8 (0.3) | 71.9 (0.3)
ResNet+CLR | 93.6 (0.1–0.3) | 72.5 (0.1–0.3)
SD | 94.6 (0.1) | 75.2 (0.1)
SD | 94.5 (0.2) | 75.2 (0.2)
SD | 94.2 (0.3) | 74.6 (0.3)
SD+CLR | 94.5 (0.1–0.3) | 75.4 (0.1–0.3)
DenseNet | 94.5 (0.1) | 75.2 (0.1)
DenseNet | 94.5 (0.2) | 75.3 (0.2)
DenseNet | 94.2 (0.3) | 74.5 (0.3)
DenseNet+CLR | 94.9 (0.1–0.2) | 75.9 (0.1–0.2)
1506.01186#22
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
23
Table 4. Comparison of CLR with ResNets [10, 11], Stochastic Depth (SD) [13], and DenseNets [12]. The table shows the average accuracy of 5 runs for the CIFAR-10 and CIFAR-100 datasets on test data at the end of training. The results for these two datasets on these three architectures are summarized in Table 4. The left column gives the architecture and whether CLR was used in the experiments. The other two columns give the average final accuracy from five runs and, in parentheses, the initial learning rate or range used, which is reduced (for both the fixed learning rate and the range) during training according to the same schedule used in the original implementation. For all three architectures, the original implementation uses an initial LR of 0.1, which we use as a baseline.
1506.01186#23
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
24
The accuracy results in Table 4 in the right two columns are the average final test accuracies of five runs. The Stochastic Depth implementation was slightly different from the ResNet and DenseNet implementations in that the authors split the 50,000 training images into 45,000 training images and 5,000 validation images. However, the results reported in Table 4 for the SD architecture are only the test accuracies for the five runs. The learning rate range used by CLR was determined by the LR range test method, and the cycle length was chosen as a tenth of the maximum number of epochs specified in the original implementation. In addition to the accuracy results shown in Table 4, similar results were obtained in Caffe for DenseNets [12] on CIFAR-10 using the prototxt files provided by the authors. The average accuracy of five runs with learning rates of 0.1, 0.2, and 0.3 was 91.67%, 92.17%, and 92.46%, respectively, but running with CLR within the range of 0.1 to 0.3, the average accuracy was 93.33%.
1506.01186#24
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
25
The results from all of these experiments show similar or better accuracy performance when using CLR versus using a fixed learning rate, even though the performance drops at some of the learning rate values within this range. These experiments confirm that it is beneficial to use CLR for a variety of residual architectures and for both CIFAR-10 and CIFAR-100.

Figure 7. AlexNet LR range test; validation classification accuracy as a function of increasing learning rate.

Figure 8. Validation data classification accuracy as a function of iteration for fixed versus triangular.

# 4.2. ImageNet

The ImageNet dataset [21] is often used in the deep learning literature as a standard for comparison. The ImageNet classification challenge provides about 1,000 training images for each of the 1,000 classes, giving a total of 1,281,167 labeled training images.
1506.01186#25
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
26
# 4.2.1 AlexNet

The Caffe website provides the architecture and hyper-parameter files for a slightly modified AlexNet [17]. These were downloaded from the website and used as a baseline. In the training results reported in this section, all weights were initialized the same so as to avoid differences due to different random initializations.

Figure 9. Validation data classification accuracy as a function of iteration for fixed versus triangular.

Since the batchsize in the architecture file is 256, an epoch is equal to 1,281,167/256 = 5,005 iterations. Hence, a reasonable setting for stepsize is 6 epochs, or 30,000 iterations.
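As a quick sanity check of the arithmetic above, the following minimal C++ sketch (our own illustration, not part of the paper's code; the constants are the ones quoted in the text) reproduces the iterations-per-epoch and stepsize calculation:

#include <cstdio>

int main() {
  // Constants quoted in the text: ImageNet training set size and the
  // batchsize from the Caffe AlexNet architecture file.
  const int num_train = 1281167;
  const int batch_size = 256;
  const int iters_per_epoch = num_train / batch_size;  // ~5,005 iterations
  const int stepsize = 6 * iters_per_epoch;            // ~30,000 iterations
  std::printf("iters/epoch = %d, stepsize = %d\n", iters_per_epoch, stepsize);
  return 0;
}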
1506.01186#26
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
27
Next, one can estimate reasonable minimum and maximum boundaries for the learning rate from Figure 7. It can be seen from this figure that the training doesn't start converging until at least 0.006, so setting base_lr = 0.006 is reasonable. However, for a fair comparison to the baseline where base_lr = 0.01, it is necessary to set base_lr to 0.01 for the triangular and triangular2 policies, or else the majority of the apparent improvement in accuracy will come from the smaller learning rate. As for the maximum boundary value, the training peaks and drops above a learning rate of 0.015, so max_lr = 0.015 is reasonable. For comparing the exp_range policy to the exp policy, setting base_lr = 0.006 and max_lr = 0.014 is reasonable, and in this case one expects the average accuracy of the exp_range policy to equal the accuracy of the exp policy.
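For concreteness, here is a minimal sketch of the LR range test schedule used to pick these bounds (our own illustration under stated assumptions, not the paper's implementation): the learning rate grows linearly over a short run, and one reads base_lr and max_lr off the resulting accuracy-versus-LR curve (Figure 7). The min_lr, max_lr, and run-length values below are assumed for this example.

#include <cstdio>

// Linearly increasing learning rate for the LR range test.
double range_test_lr(int itr, int total_iters, double min_lr, double max_lr) {
  return min_lr + (max_lr - min_lr) * static_cast<double>(itr) / total_iters;
}

int main() {
  const int total_iters = 20020;  // e.g. 4 epochs at ~5,005 iterations/epoch
  for (int itr = 0; itr <= total_iters; itr += 5005)
    std::printf("iter %6d  lr %.5f\n",
                itr, range_test_lr(itr, total_iters, 0.001, 0.045));
  return 0;
}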
1506.01186#27
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
28
Figure 9 compares the results of running with the fixed versus the triangular2 policy for the AlexNet architecture. Here, the peaks at iterations that are multiples of 60,000 should produce a classification accuracy that corresponds to the fixed policy. Indeed, the accuracy peaks at the end of a cycle for the triangular2 policy are similar to the accuracies from the standard fixed policy, which implies that the baseline learning rates are set quite well (this is also implied by Figure 7). As shown in Table 1, the final accuracies from the CLR training run are only 0.4% better than the accuracies from the fixed policy. Figure 10 compares the results of running with the exp versus the exp_range policy for the AlexNet architecture with gamma = 0.999995 for both policies. As expected,

Figure 10. Validation data classification accuracy as a function of iteration for exp versus exp_range.
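As a hedged sketch of the exp_range policy compared here (our own rendering of the paper's description, with our own variable names, not the paper's code): it is the triangular schedule with its amplitude scaled by gamma raised to the iteration count.

#include <algorithm>
#include <cmath>

// exp_range: a triangular cycle between base_lr and max_lr whose amplitude
// decays by gamma^iteration (gamma = 0.999995 in the AlexNet experiments).
double exp_range_lr(int itr, int stepsize, double base_lr, double max_lr,
                    double gamma) {
  int cycle = itr / (2 * stepsize);
  double x =
      std::fabs(static_cast<double>(itr - (2 * cycle + 1) * stepsize) / stepsize);
  return base_lr +
         (max_lr - base_lr) * std::max(0.0, 1.0 - x) * std::pow(gamma, itr);
}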
1506.01186#28
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
29
Figure 11. GoogLeNet LR range test; validation classification accuracy as a function of increasing learning rate.

Figure 10 shows that the accuracies from the exp_range policy do oscillate around the exp policy accuracies. The advantage of the exp_range policy is that an accuracy of 56.5% is already obtained at iteration 300,000, whereas the exp policy takes until iteration 460,000 to reach 56.5%. Finally, a comparison between the fixed and exp policies in Table 1 shows that the fixed and triangular2 policies produce accuracies that are almost 2% better than their exponentially decreasing counterparts, but this difference is probably due to not having tuned gamma.

# 4.2.2 GoogLeNet/Inception Architecture

The GoogLeNet architecture was a winning entry to the ImageNet 2014 image classification competition. Szegedy et al. [25] describe the architecture in detail but did not provide the architecture file. The architecture file publicly available from Princeton4 was used in the following experiments. The GoogLeNet paper does not state the learning rate values, and the hyper-parameter solver file is not available for a baseline.

4 vision.princeton.edu/pvt/GoogLeNet/
1506.01186#29
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
30
Figure 12. Validation data classification accuracy as a function of iteration for fixed versus triangular.

Not having these hyper-parameters is a typical situation when one is developing a new architecture or applying a network to a new dataset. This is a situation that CLR readily handles. Instead of running numerous experiments to find optimal learning rates, the base_lr was set to a best-guess value of 0.01.
1506.01186#30
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
31
The first step is to estimate the stepsize setting. Since the architecture uses a batchsize of 128, an epoch is equal to 1,281,167/128 = 10,009 iterations. Hence, good settings for stepsize would be 20,000, 30,000, or possibly 40,000. The results in this section are based on stepsize = 30,000. The next step is to estimate the bounds for the learning rate, which is done with the LR range test by making a run for 4 epochs where the learning rate linearly increases from 0.001 to 0.065 (Figure 11). This figure shows that one can use bounds between 0.01 and 0.04 and still have the model reach convergence. However, learning rates above 0.025 cause the training to converge erratically. For both the triangular2 and exp_range policies, base_lr was set to 0.01 and max_lr was set to 0.026. As above, the accuracy peaks for both these learning rate policies correspond to the same learning rate value as the fixed and exp policies. Hence, the comparisons below will focus on the peak accuracies from the CLR methods.
1506.01186#31
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
32
Figure 12 compares the results of running with the fixed versus the triangular2 policy for this architecture (due to time limitations, each training stage was not run until it fully plateaued). In this case, the peaks at the end of each cycle for the triangular2 policy produce better accuracies than the fixed policy. The final accuracy from the network trained with the triangular2 policy (Table 1) is 1.4% better than the accuracy from the fixed policy. This demonstrates that the triangular2 policy improves on a "best guess" for a fixed learning rate. Figure 13 compares the results of running with the exp versus the exp_range policy with gamma = 0.99998. Once again, the peaks at the end of each cycle for the exp_range policy produce better validation accuracies than the exp policy. The final accuracy from the exp_range policy (Table 1) is 2% better than from the exp policy.

Figure 13. Validation data classification accuracy as a function of iteration for exp versus exp_range.
1506.01186#32
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
33
# 5. Conclusions

The results presented in this paper demonstrate the benefits of the cyclical learning rate (CLR) methods. A short run of only a few epochs, where the learning rate linearly increases, is sufficient to estimate boundary learning rates for the CLR policies. Then a policy where the learning rate cyclically varies between these bounds is sufficient to obtain near-optimal classification results, often with fewer iterations. This policy is easy to implement and, unlike adaptive learning rate methods, incurs essentially no additional computational expense. This paper shows that use of cyclic functions as a learning rate policy provides substantial improvements in performance for a range of architectures. In addition, the cyclic nature of these methods provides guidance as to when to drop the learning rate values (after 3 - 5 cycles) and when to stop the training. All of these factors reduce the guesswork in setting the learning rates and make these methods practical tools for everyone who trains neural networks. This work has not explored the full range of applications for cyclic learning rate methods. We plan to determine if equivalent policies work for training different architectures, such as recurrent neural networks. Furthermore, we believe that a theoretical analysis would provide an improved understanding of these methods, which might lead to improvements in the algorithms.
1506.01186#33
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
34
# References

[1] K. Bache, D. DeCoste, and P. Smyth. Hot swapping for online adaptation of optimization hyperparameters. arXiv preprint arXiv:1412.6599, 2014.
[2] Y. Bengio. Neural Networks: Tricks of the Trade, chapter Practical recommendations for gradient-based training of deep architectures, pages 437–478. Springer Berlin Heidelberg, 2012.
[3] T. M. Breuel. The effects of hyperparameters on sgd training of neural networks. arXiv preprint arXiv:1508.02788, 2015.
[4] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. Rmsprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
1506.01186#34
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
35
[6] A. P. George and W. B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580–587. IEEE, 2014.
[8] A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1764–1772, 2014.
[9] C. Gulcehre and Y. Bengio. Adasecant: Robust adaptive secant method for stochastic gradient. arXiv preprint arXiv:1412.7419, 2014.
1506.01186#35
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
36
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2015.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[12] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
[13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
1506.01186#36
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
37
[14] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, R. Cheng-Yue, F. Mujica, A. Coates, et al. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716, 2015.
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[16] D. Kingma and J. Lei-Ba. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2015.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012.
[18] I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016.
1506.01186#37
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
38
[19] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
[20] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
[22] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. arXiv preprint arXiv:1206.1106, 2012.
[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
1506.01186#38
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
39
[24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[26] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1701–1708. IEEE, 2014.
[27] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
1506.01186#39
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
41
} else if (lr_policy == "triangular") {
  int itr = this->iter - this->param.start_lr_policy();
  if (itr > 0) {
    // Which cycle we are in, and the position x within it (x in [-1, 1]).
    int cycle = itr / (2 * this->param.stepsize());
    float x = (float)(itr - (2 * cycle + 1) * this->param.stepsize());
    x = x / this->param.stepsize();
    // Linearly interpolate between base_lr and max_lr.
    rate = this->param.base_lr() + (this->param.max_lr() - this->param.base_lr())
           * std::max(double(0), (1.0 - fabs(x)));
1506.01186#41
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
42
           * std::max(double(0), (1.0 - fabs(x)));
  } else {
    rate = this->param.base_lr();
  }
} else if (lr_policy == "triangular2") {
  int itr = this->iter - this->param.start_lr_policy();
  if (itr > 0) {
    // Same cycle/position computation as the triangular policy.
    int cycle = itr / (2 * this->param.stepsize());
    float x = (float)(itr - (2 * cycle + 1) * this->param.stepsize());
    x = x / this->param.stepsize();
1506.01186#42
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01186
43
    int cycle = itr / (2 * this->param.stepsize());
    float x = (float)(itr - (2 * cycle + 1) * this->param.stepsize());
    x = x / this->param.stepsize();
    // triangular2: the triangular schedule with the amplitude
    // (max_lr - base_lr) cut in half at the end of each cycle.
    rate = this->param.base_lr() + (this->param.max_lr() - this->param.base_lr())
           * std::max(double(0), (1.0 - std::min(double(1), fabs(x))))
           / pow(2.0, double(cycle));
  } else {
    rate = this->param.base_lr();
  }
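To make the reconstructed snippet above concrete, here is a small standalone driver (our own code, outside Caffe's Solver class, using the base_lr = 0.01, max_lr = 0.015, and stepsize = 30,000 values from the AlexNet experiments):

#include <algorithm>
#include <cmath>
#include <cstdio>

// triangular2 schedule: the peak learning rate halves at the end of each cycle.
double triangular2_lr(int itr, int stepsize, double base_lr, double max_lr) {
  int cycle = itr / (2 * stepsize);
  double x = static_cast<double>(itr - (2 * cycle + 1) * stepsize) / stepsize;
  return base_lr + (max_lr - base_lr) *
         std::max(0.0, 1.0 - std::fabs(x)) / std::pow(2.0, cycle);
}

int main() {
  // Peaks land mid-cycle (iterations 30k, 90k, 150k, ...) and halve each
  // cycle: 0.015, 0.0125, 0.01125, ... decaying toward base_lr = 0.01.
  for (int itr = 0; itr <= 180000; itr += 15000)
    std::printf("iter %6d  lr %.6f\n",
                itr, triangular2_lr(itr, 30000, 0.01, 0.015));
  return 0;
}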
1506.01186#43
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" -- linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
http://arxiv.org/pdf/1506.01186
Leslie N. Smith
cs.CV, cs.LG, cs.NE
Presented at WACV 2017; see https://github.com/bckenstler/CLR for instructions to implement CLR in Keras
null
cs.CV
20150603
20170404
[ { "id": "1504.01716" }, { "id": "1502.03167" }, { "id": "1600.04747" }, { "id": "1603.05027" }, { "id": "1508.02788" }, { "id": "1502.04390" }, { "id": "1608.03983" }, { "id": "1603.09382" }, { "id": "1608.06993" } ]
1506.01066
0
arXiv:1506.01066v2 [cs.CL] 8 Jan 2016

# Visualizing and Understanding Neural Models in NLP

Jiwei Li1, Xinlei Chen2, Eduard Hovy2 and Dan Jurafsky1
1Computer Science Department, Stanford University, Stanford, CA 94305, USA
2Language Technology Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
{jiweil,jurafsky}@stanford.edu {xinleic,ehovy}@andrew.cmu.edu

# Abstract
1506.01066#0
Visualizing and Understanding Neural Models in NLP
While neural networks have been successfully applied to many NLP tasks the resulting vector-based models are very difficult to interpret. For example it's not clear how they achieve {\em compositionality}, building sentence meaning from the meanings of words and phrases. In this paper we describe four strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allow us to see well-known markedness asymmetries in negation. We then introduce three simple and straightforward methods for visualizing a unit's {\em salience}, the amount it contributes to the final composed meaning: (1) gradient back-propagation, (2) the variance of a token from the average word node, (3) LSTM-style gates that measure information flow. We test our methods on sentiment using simple recurrent nets and LSTMs. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks , and also shed light on why LSTMs outperform simple recurrent nets,
http://arxiv.org/pdf/1506.01066
Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky
cs.CL
null
null
cs.CL
20150602
20160108
[ { "id": "1510.03055" }, { "id": "1506.02078" }, { "id": "1506.02004" }, { "id": "1506.05869" } ]
1506.01066
1
While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret. For example, it's not clear how they achieve compositionality, building sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce methods for visualizing a unit's salience, the amount that it contributes to the final composed meaning, from first-order derivatives. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks.
1506.01066#1
Visualizing and Understanding Neural Models in NLP
While neural networks have been successfully applied to many NLP tasks the resulting vector-based models are very difficult to interpret. For example it's not clear how they achieve {\em compositionality}, building sentence meaning from the meanings of words and phrases. In this paper we describe four strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allow us to see well-known markedness asymmetries in negation. We then introduce three simple and straightforward methods for visualizing a unit's {\em salience}, the amount it contributes to the final composed meaning: (1) gradient back-propagation, (2) the variance of a token from the average word node, (3) LSTM-style gates that measure information flow. We test our methods on sentiment using simple recurrent nets and LSTMs. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks , and also shed light on why LSTMs outperform simple recurrent nets,
http://arxiv.org/pdf/1506.01066
Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky
cs.CL
null
null
cs.CL
20150602
20160108
[ { "id": "1510.03055" }, { "id": "1506.02078" }, { "id": "1506.02004" }, { "id": "1506.05869" } ]
1506.01066
2
# Introduction

Neural models match or outperform the performance of other state-of-the-art systems on a variety of NLP tasks. Yet unlike traditional feature-based classifiers that assign and optimize weights for varieties of human-interpretable features (parts of speech, named entities, word shapes, syntactic parse features, etc.), the behavior of deep learning models is much less easily interpreted. Deep learning models mainly operate on word embeddings (low-dimensional, continuous, real-valued vectors) through multi-layer neural architectures, each layer of which is characterized as an array of hidden neuron units. It is unclear how deep learning models deal with composition: implementing functions like negation or intensification, combining meaning from different parts of the sentence, and filtering away the informational chaff from the wheat to build sentence meaning. In this paper, we explore multiple strategies to interpret meaning composition in neural models. We employ traditional methods like representation plotting, and introduce simple strategies for measuring how much a neural unit contributes to meaning composition, its 'salience' or importance, using first derivatives.
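As a concrete sketch of the first-derivative idea (our notation, in the style of the gradient-based saliency the paper draws on from vision; not the paper's exact equations): approximate the class score as locally linear in the embeddings and read each unit's salience off the gradient magnitude.

% Hedged sketch (notation ours): S_c(e) is the score for class c given
% the word-embedding input e; salience is the gradient magnitude.
\begin{align*}
  S_c(e) \;\approx\; w(e)^{\top} e + b,
  \qquad w(e) \;=\; \left.\frac{\partial S_c}{\partial e}\right|_{e}, \\
  \mathrm{salience}(e_{ij}) \;=\; \bigl|\, w(e)_{ij} \,\bigr|.
\end{align*}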
1506.01066#2
Visualizing and Understanding Neural Models in NLP
While neural networks have been successfully applied to many NLP tasks the resulting vector-based models are very difficult to interpret. For example it's not clear how they achieve {\em compositionality}, building sentence meaning from the meanings of words and phrases. In this paper we describe four strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allow us to see well-known markedness asymmetries in negation. We then introduce three simple and straightforward methods for visualizing a unit's {\em salience}, the amount it contributes to the final composed meaning: (1) gradient back-propagation, (2) the variance of a token from the average word node, (3) LSTM-style gates that measure information flow. We test our methods on sentiment using simple recurrent nets and LSTMs. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks , and also shed light on why LSTMs outperform simple recurrent nets,
http://arxiv.org/pdf/1506.01066
Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky
cs.CL
null
null
cs.CL
20150602
20160108
[ { "id": "1510.03055" }, { "id": "1506.02078" }, { "id": "1506.02004" }, { "id": "1506.05869" } ]
1506.01066
3
Visualization techniques and models presented in this work shed important light on how neural models work: for example, we illustrate that the LSTM's success is due to its ability to maintain a much sharper focus on the important key words than other models; composition in multiple clauses works competitively; and the models are able to capture negative asymmetry, an important property of semantic compositionality in natural language understanding. There is sharp dimensional locality, with certain dimensions marking negation and quantification in a manner that was surprisingly localist. Though our attempts touch only superficial points in neural models, and each method has its pros and cons, together they may offer some insights into the behaviors of neural models in language-based tasks, marking one initial step toward understanding how they achieve meaning composition in natural language processing. The next section describes some visualization models in vision and NLP that have inspired this work. We describe datasets and the adopted neural models in Section 3. Different visualization strategies and corresponding analytical results are presented separately in Sections 4, 5, and 6, followed by a brief conclusion.

# 2 A Brief Review of Neural Visualization

Similarity is commonly visualized graphically, generally by projecting the embedding space into two dimensions and observing that similar words tend to be clustered together (e.g., Elman (1989), Ji
1506.01066#3
1506.01066
4
and Eisenstein (2014), Faruqui and Dyer (2014)). Karpathy et al. (2015) attempt to interpret recurrent neural models from a statistical point of view but do not deeply touch on the compositionality of meanings. Other relevant attempts include Fyshe et al. (2015) and Faruqui et al. (2015). Methods for interpreting and visualizing neural models have been explored much more significantly in vision, especially for Convolutional Neural Networks (CNNs or ConvNets) (Krizhevsky et al., 2012): multi-layer neural networks in which the original matrix of image pixels is convolved and pooled as it is passed on to hidden layers. ConvNet visualization techniques consist mainly in mapping the different layers of the network (or other features like SIFT (Lowe, 2004) and HOG (Dalal and Triggs, 2005)) back to the initial image input, thus capturing the human-interpretable information they represent in the input and showing how units in these layers contribute to the final decisions (Simonyan et al., 2013; Mahendran and Vedaldi, 2014; Nguyen et al., 2014; Szegedy et al., 2013; Girshick et al., 2014; Zeiler and Fergus, 2014). Such methods include:
1506.01066#4
1506.01066
5
(1) Inversion: inverting the representations by training an additional model to project outputs from different neural levels back to the initial input images (Mahendran and Vedaldi, 2014; Vondrick et al., 2013; Weinzaepfel et al., 2011). The intuition behind reconstruction is that the pixels that are reconstructable from the current representation are the content of that representation. The inverting algorithms allow the current representation to align with corresponding parts of the original images. (2) Back-propagation (Erhan et al., 2009; Simonyan et al., 2013) and Deconvolutional Networks (Zeiler and Fergus, 2014): errors are back-propagated from the output layers to each intermediate layer and finally to the original image inputs. Deconvolutional Networks work in a similar way, projecting outputs back to the initial inputs layer by layer, with each layer associated with one supervised model for projecting upper layers to lower ones. These strategies make it possible to spot active regions, or the ones that contribute the most to the final classification decision.
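As a concrete example of the back-propagation family, here is a hedged sketch of a gradient-based saliency map in the spirit of Simonyan et al. (2013); the `model` and `image` inputs are placeholders, and this is a simple illustrative variant rather than the exact published procedure.

```python
import torch

def saliency_map(model, image, target_class):
    # Back-propagate the class score to the input pixels; large gradient
    # magnitudes mark regions that most affect the classification decision.
    image = image.clone().detach().requires_grad_(True)   # (3, H, W)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values             # (H, W) saliency
```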
1506.01066#5
1506.01066
7
in interpretation. While the above strategies inspire the work we present in this paper, there are fundamental differences between vision and NLP. In NLP, words function as basic units, and hence (word) vectors rather than single pixels are the basic units. Sequences of words (e.g., phrases and sentences) are also presented in a more structured way than arrangements of pixels. In parallel to our research, independent research (Karpathy et al., 2015) has been conducted to explore a similar direction from an error-analysis point of view, by analyzing the predictions and errors of recurrent neural models. Other distantly relevant work includes the following: Murphy et al. (2012) and Fyshe et al. (2015) used a manual task to quantify the interpretability of semantic dimensions by presenting human users with a list of words and asking them to choose the one that does not belong to the list. A similar strategy is adopted by Faruqui et al. (2015), extracting the top-ranked words in each vector dimension. # 3 Datasets and Neural Models We explored two datasets on which neural models are trained, one of relatively small scale and the other of large scale. # 3.1 Stanford Sentiment Treebank
1506.01066#7
1506.01066
8
The Stanford Sentiment Treebank is a benchmark dataset widely used for neural model evaluation. The dataset contains gold-standard sentiment labels for every parse-tree constituent, from sentences to phrases to individual words, for 215,154 phrases in 11,855 sentences. The task is to perform both fine-grained (very positive, positive, neutral, negative, and very negative) and coarse-grained (positive vs. negative) classification at both the phrase and sentence level. For more details about the dataset, please refer to Socher et al. (2013). While many studies on this dataset use recursive parse-tree models, in this work we employ only standard sequence models (RNNs and LSTMs), since these are the most widely used current neural models and sequential visualization is more straightforward. We therefore first transform each parse-tree node to a sequence of tokens. The sequence is then mapped to a phrase/sentence representation and fed into a softmax classifier. Phrase/sentence representations are built with the following three models: a standard recurrent sequence model with TANH activation functions, LSTMs, and
1506.01066#8
1506.01066
9
Bidirectional LSTMs. For details about the three models, please refer to the Appendix. Training: AdaGrad with mini-batches was used for training, with parameters (L2 penalty, learning rate, mini-batch size) tuned on the development set. The number of iterations is treated as a variable to tune, and parameters are harvested based on the best performance on the dev set. The dimensionality of the word embeddings and the hidden layer is set to 60, with a 0.1 dropout rate. The standard recurrent model achieves 0.429 (fine-grained) and 0.850 (coarse-grained) accuracy at the sentence level; the LSTM achieves 0.469 and 0.870, and the Bidirectional LSTM 0.488 and 0.878, respectively.
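The model and training setup just described can be sketched as follows (a minimal illustration, not the authors' code); the 60-d size, 0.1 dropout, AdaGrad, and L2 penalty follow the text, while the learning rate, weight-decay value, and vocabulary size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    # Embed tokens, run an LSTM, and feed the final hidden state to a
    # softmax classifier over the five sentiment labels.
    def __init__(self, vocab_size, dim=60, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.drop = nn.Dropout(0.1)               # reported dropout rate
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_classes)      # softmax lives in the loss

    def forward(self, token_ids):                 # (batch, seq_len) int64
        states, _ = self.lstm(self.drop(self.embed(token_ids)))
        return self.out(states[:, -1])            # logits over the classes

model = SentimentLSTM(vocab_size=20000)           # vocab size is assumed
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.05,
                                weight_decay=1e-4)  # L2 penalty
loss_fn = nn.CrossEntropyLoss()

def run_epoch(batches):
    # One pass over mini-batches of (token_ids, labels) tensors.
    for token_ids, labels in batches:
        optimizer.zero_grad()
        loss_fn(model(token_ids), labels).backward()
        optimizer.step()
```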
1506.01066#9
1506.01066
10
# 3.2 Sequence-to-Sequence Models SEQ2SEQ models are neural models that aim at generating a sequence of output texts given inputs. Theoretically, SEQ2SEQ models can be adapted to any NLP task that can be formalized as predicting outputs given inputs, and they serve different purposes given different inputs and outputs: e.g., machine translation, where inputs correspond to source sentences and outputs to target sentences (Sutskever et al., 2014; Luong et al., 2014), or conversational response generation, where inputs correspond to messages and outputs to responses (Vinyals and Le, 2015; Li et al., 2015). SEQ2SEQ models need to be trained on massive amounts of data for the implicit semantic and syntactic relations between pairs to be learned. SEQ2SEQ models map an input sequence to a vector representation using LSTM models and then sequentially predict tokens based on the pre-obtained representation. The model defines a distribution over outputs (Y) and sequentially predicts tokens given inputs (X) using a softmax function: $$P(Y \mid X) = \prod_{t=1}^{n_y} p(y_t \mid x_1, x_2, \ldots, x_{n_x}, y_1, y_2, \ldots, y_{t-1}) = \prod_{t=1}^{n_y} \frac{\exp(f(h_{t-1}, e_{y_t}))}{\sum_{y'} \exp(f(h_{t-1}, e_{y'}))}$$
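One step of this softmax can be sketched in a few lines; here we assume f is a dot product between the previous hidden state and the candidate output embedding, a common choice but an assumption on our part.

```python
import torch

def next_token_probs(h_prev, output_embeddings):
    # h_prev: (dim,) LSTM state h_{t-1}; output_embeddings: (vocab, dim).
    scores = output_embeddings @ h_prev   # f(h_{t-1}, e_y) for every y
    return torch.softmax(scores, dim=0)   # p(y_t | x, y_<t)
```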
1506.01066#10
1506.01066
11
where $f(h_{t-1}, e_{y_t})$ denotes the activation function between $h_{t-1}$ and $e_{y_t}$, and $h_{t-1}$ is the representation output from the LSTM at time $t-1$. At each time step of word prediction, SEQ2SEQ models combine the current token with previously built embeddings for next-step word prediction. For easy visualization, we turn to the most straightforward task, the autoencoder, where inputs and outputs are identical. The goal of an autoencoder is to reconstruct inputs from the pre-obtained representation. We would like to see how individual input tokens affect the overall sentence representation and each of the tokens to be predicted in the outputs. We trained the autoencoder on a subset of the WMT'14 corpus containing 4 million English sentences with an average length of 22.5 words. We followed the training protocols described in Sutskever et al. (2014). # 4 Representation Plotting We begin with simple plots of representations to shed light on local composition, using the Stanford Sentiment Treebank. Local Composition: Figure 1 shows a 60-d heatmap vector for the representation of selected words/phrases/sentences, with an emphasis on extent modifications (adverbial and adjectival) and negation. Embeddings for phrases or sentences are obtained by composing word representations from the pretrained model.
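Such heatmaps take only a few lines of matplotlib to produce; the sketch below is our own illustration, with `vectors` (an (n, 60) array of composed representations) and `labels` (the corresponding phrase strings) as hypothetical inputs.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_heatmap(vectors, labels):
    # One row per word/phrase, one column per hidden dimension; diverging
    # colors make sign flips (e.g., under negation) easy to spot.
    plt.imshow(np.asarray(vectors), cmap="coolwarm", aspect="auto")
    plt.yticks(range(len(labels)), labels)
    plt.xlabel("hidden dimension")
    plt.colorbar()
    plt.show()
```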
1506.01066#11
1506.01066
12
The intensification part of Figure 1 shows suggestive patterns where values for a few dimensions are strengthened by modifiers like "a lot" (the red bar in the first example), "so much" (the red bar in the second example), and "incredibly". Though the patterns for negations are not as clear, there is still a consistent reversal for some dimensions, visible as a shift between blue and red for the dimensions boxed on the left.
1506.01066#12
1506.01066
13
We then visualize words and phrases using t-SNE (Van der Maaten and Hinton, 2008) in Figure 2, deliberately adding in some random words for comparative purposes. As can be seen, the neural models nicely learn the properties of local compositionality, clustering negation+positive words ('not nice', 'not good') together with negative words. Note also the asymmetry of negation: "not bad" is clustered more with the negative than the positive words (as shown in both Figure 1 and Figure 2). This asymmetry has been widely discussed in linguistics, for example as arising from markedness, since 'good' is the unmarked direction of the scale (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008). This suggests that although the model does seem to focus on certain units for negation in Figure 1, the neural model is not just learning to apply a fixed transform for 'not' but is able to capture the subtle differences in the composition of different words.
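The projection itself is a standard t-SNE call; this is a minimal sketch using scikit-learn (rather than the original tooling), and the perplexity value is an illustrative assumption.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(embeddings, phrases):
    # embeddings: (n, 60) phrase vectors; project to 2-d and label points
    # so clusters such as negation+positive phrases become visible.
    coords = TSNE(n_components=2, perplexity=5).fit_transform(embeddings)
    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), text in zip(coords, phrases):
        plt.annotate(text, (x, y))
    plt.show()
```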
1506.01066#13