doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1506.02626 | 18 |
Table 4: For AlexNet, pruning reduces the number of weights by 9× and computation by 3×.

| Layer | Weights | FLOP | Act% | Weights% | FLOP% |
|-------|---------|------|------|----------|-------|
| conv1 | 35K | 211M | 88% | 84% | 84% |
| conv2 | 307K | 448M | 52% | 38% | 33% |
| conv3 | 885K | 299M | 37% | 35% | 18% |
| conv4 | 663K | 224M | 40% | 37% | 14% |
| conv5 | 442K | 150M | 34% | 37% | 14% |
| fc1 | 38M | 75M | 36% | 9% | 3% |
| fc2 | 17M | 34M | 40% | 9% | 3% |
| fc3 | 4M | 8M | 100% | 25% | 10% |
| Total | 61M | 1.5B | 54% | 11% | 30% |
Table 5: For VGG-16, pruning reduces the number of weights by 12× and computation by 5×.

[Per-layer values for Table 5 were lost in extraction; overall, 7.5% of the original weights remain.]
# 4.3 VGG-16 on ImageNet | 1506.02626#18 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 19 | where equality holds when λ = 1.
# 4 INTERPRETATION AS REWARD SHAPING
In this section, we discuss how one can interpret λ as an extra discount factor applied after performing a reward shaping transformation on the MDP. We also introduce the notion of a response function to help understand the bias introduced by γ and λ.
Reward shaping (Ng et al., 1999) refers to the following transformation of the reward function of an MDP: let Φ : S → ℝ be an arbitrary scalar-valued function on state space, and define the transformed reward function r̃ by

$$\tilde{r}(s, a, s') = r(s, a, s') + \gamma \Phi(s') - \Phi(s), \tag{20}$$
which in turn defines a transformed MDP. This transformation leaves the discounted advantage function $A^{\pi,\gamma}$ unchanged for any policy π. To see this, consider the discounted sum of rewards of a trajectory starting with state $s_t$:
$$\sum_{l=0}^{\infty} \gamma^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1}) = \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}, s_{t+l+1}) - \Phi(s_t). \tag{21}$$ | 1506.02438#19 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 19 | # 4.3 VGG-16 on ImageNet
With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 [27], on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively pruned both convolutional and fully-connected layers to realize a significant reduction in the number of weights, shown in Table 5. We used five iterations of pruning and retraining.
The VGG-16 results are, like those for AlexNet, very promising. The network as a whole has been reduced to 7.5% of its original size (13× smaller). In particular, note that the two largest fully-connected layers can each be pruned to less than 4% of their original size. This reduction is critical for real-time image processing, where there is little reuse of fully connected layers across images (unlike batch processing during training).
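To make the pruning step concrete, here is a minimal numpy sketch of magnitude-based pruning with per-layer thresholds. The std-based threshold rule and the `retrain` callback are illustrative assumptions; the paper specifies only that thresholds are chosen per layer based on its sensitivity analysis.

```python
import numpy as np

def prune_by_std(weights, quality=1.0):
    """Magnitude pruning sketch: zero out weights whose absolute value falls
    below a per-layer threshold, and return boolean masks so that retraining
    can keep pruned connections at zero. (The std-based threshold is an
    assumption for illustration.)"""
    masks = {}
    for name, W in weights.items():
        threshold = quality * W.std()
        masks[name] = np.abs(W) >= threshold
        W *= masks[name]                      # remove pruned connections
    return masks

# Iterative prune-and-retrain (five iterations, as in the text); `retrain`
# is a hypothetical training step that multiplies gradients by the masks:
#   for _ in range(5):
#       masks = prune_by_std(weights, quality=1.0)
#       retrain(weights, masks)
```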
# 5 Discussion | 1506.02626#19 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 20 | Letting $\tilde{Q}^{\pi,\gamma}$, $\tilde{V}^{\pi,\gamma}$, $\tilde{A}^{\pi,\gamma}$ be the value and advantage functions of the transformed MDP, one obtains from the definitions of these quantities that
$$\tilde{Q}^{\pi,\gamma}(s, a) = Q^{\pi,\gamma}(s, a) - \Phi(s), \tag{22}$$
$$\tilde{V}^{\pi,\gamma}(s) = V^{\pi,\gamma}(s) - \Phi(s), \tag{23}$$
$$\tilde{A}^{\pi,\gamma}(s, a) = \left(Q^{\pi,\gamma}(s, a) - \Phi(s)\right) - \left(V^{\pi,\gamma}(s) - \Phi(s)\right) = A^{\pi,\gamma}(s, a). \tag{24}$$

Note that if Φ happens to be the state-value function $V^{\pi,\gamma}$ from the original MDP, then the transformed MDP has the interesting property that $\tilde{V}^{\pi,\gamma}(s)$ is zero at every state.
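The invariance in Equation (24) is easy to verify numerically. Below is a small self-contained sketch on a random finite MDP (the tabular setting, sizes, and random potential Φ are illustrative assumptions; the claim holds for any Φ): it solves for V and Q exactly and checks that shaping the reward leaves the advantage unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9

P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)         # P[a, s, s'] = Pr(s' | s, a)
r = rng.random((nA, nS, nS))              # reward r(s, a, s'), indexed [a, s, s']
pi = rng.random((nS, nA))
pi /= pi.sum(axis=1, keepdims=True)       # pi[s, a] = pi(a | s)

def advantage(rew):
    r_sa = (P * rew).sum(axis=2).T                  # (nS, nA): E[rew | s, a]
    P_pi = np.einsum('sa,asn->sn', pi, P)           # state transitions under pi
    r_pi = (pi * r_sa).sum(axis=1)
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    Q = r_sa + gamma * (P @ V).T                    # E[rew + gamma V(s') | s, a]
    return Q - V[:, None]

Phi = rng.random(nS)                                # arbitrary potential
r_shaped = r + gamma * Phi[None, None, :] - Phi[None, :, None]
print(np.allclose(advantage(r), advantage(r_shaped)))  # True: A is unchanged
```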
Note that Ng et al. (1999) showed that the reward shaping transformation leaves the policy gradient and optimal policy unchanged when our objective is to maximize the discounted sum of rewards $\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t, s_{t+1})$. In contrast, this paper is concerned with maximizing the undiscounted sum of rewards, where the discount γ is used as a variance-reduction parameter. | 1506.02438#20 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 20 | # 5 Discussion
The trade-off curve between accuracy and number of parameters is shown in Figure 5. The more parameters are pruned away, the lower the accuracy. We experimented with L1 and L2 regularization, with and without retraining, together with iterative pruning, to give five trade-off lines. Comparing solid and dashed lines, the importance of retraining is clear: without retraining, accuracy begins dropping much sooner, with 1/3 of the original connections rather than with 1/10 of the original connections. It is interesting to see that we get a "free lunch" of reducing the connections by 2× without losing accuracy even without retraining, while with retraining we are able to reduce connections by 9×.
[Figure 5 plot: accuracy loss vs. fraction of parameters pruned away, for five variants: L1/L2 regularization with and without retraining, and L2 with iterative prune and retrain.] | 1506.02626#20 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 21 | Having reviewed the idea of reward shaping, let us consider how we could use it to get a policy gradient estimate. The most natural approach is to construct policy gradient estimators that use discounted sums of shaped rewards r̃. However, Equation (21) shows that we obtain the discounted sum of the original MDP's rewards r minus a baseline term. Next, let's consider using a "steeper" discount γλ, where 0 ≤ λ ≤ 1. It's easy to see that the shaped reward r̃ equals the Bellman residual term $\delta^V$, introduced in Section 3, where we set Φ = V. Letting Φ = V, we see that
$$\sum_{l=0}^{\infty} (\gamma\lambda)^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1}) = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta^V_{t+l} = \hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)}. \tag{25}$$
Hence, by considering the γλ-discounted sum of shaped rewards, we exactly obtain the generalized advantage estimators from Section 3. As shown previously, λ = 1 gives an unbiased estimate of $g^\gamma$, whereas λ < 1 gives a biased estimate.
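Equation (25) suggests the standard way to compute the estimator in practice: a single reverse pass over the TD residuals. A minimal numpy sketch, assuming a length-T trajectory with a bootstrapped value for the final state:

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Reverse-scan computation of the GAE(gamma, lambda) estimator:
    delta_t = r_t + gamma*V(s_{t+1}) - V(s_t), accumulated with factor
    gamma*lam. `values` has length T+1 (bootstrap value for the last state)."""
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.empty(T)
    running = 0.0
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

# toy usage on a length-5 trajectory
rew = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
vals = np.array([0.5, 0.4, 0.3, 0.6, 0.2, 0.1])
print(gae_advantages(rew, vals))
```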
To further analyze the effect of this shaping transformation and parameters γ and λ, it will be useful to introduce the notion of a response function χ, which we define as follows: | 1506.02438#21 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 21 | Figure 5: Trade-off curve for parameter reduction and loss in top-5 accuracy. L1 regularization performs better than L2 at learning the connections without retraining, while L2 regularization performs better than L1 at retraining. Iterative pruning gives the best result.
[Figure 6 plots: accuracy loss vs. fraction of parameters pruned per layer; left panel: CONV layers (conv1–conv5), right panel: FC layers.]
Figure 6: Pruning sensitivity for CONV layer (left) and FC layer (right) of AlexNet. | 1506.02626#21 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 22 | XE Se, ¢) = E [regi | se, a] â E [rei | se] - (26) Note that Aâ¢*7(s,a) = 7729 7'x(J; s,a), hence the response function decomposes the advantage function across timesteps. The response function lets us quantify the temporal credit assignment problem: long range dependencies between actions and rewards correspond to nonzero values of the response function for / >> 0. Next, let us revisit the discount factor y and the approximation we are making by using Aâ? rather than Aâ¢!. The discounted policy gradient estimator from Equation (6) has a sum of terms of the form
Vo log mo(a; | 8:)A⢠(St, 44) = Vo log mo(a 51) So y'x(l; St, 41). (27) 1=0 | 1506.02438#22 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 22 | Figure 6: Pruning sensitivity for CONV layer (left) and FC layer (right) of AlexNet.
L1 regularization gives better accuracy than L2 directly after pruning (dotted blue and purple lines) since it pushes more parameters closer to zero. However, comparing the yellow and green lines shows that L2 outperforms L1 after retraining, since there is no benefit to further pushing values towards zero. One extension is to use L1 regularization for pruning and then L2 for retraining, but this did not beat simply using L2 for both phases. Parameters from one mode do not adapt well to the other.
The biggest gain comes from iterative pruning (solid red line with solid circles). Here we take the pruned and retrained network (solid green line with circles) and prune and retrain it again. The leftmost dot on this curve corresponds to the point on the green line at 80% (5× pruning) pruned to 8×. There is no accuracy loss at 9×. Not until 10× does the accuracy begin to drop sharply.
Two green points achieve slightly better accuracy than the original model. We believe this accuracy improvement is due to pruning finding the right capacity of the network and hence reducing overfitting. | 1506.02626#22 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 23 | Using a discount γ < 1 corresponds to dropping the terms with l ≫ 1/(1 − γ). Thus, the error introduced by this approximation will be small if χ rapidly decays as l increases, i.e., if the effect of an action on rewards is "forgotten" after ≈ 1/(1 − γ) timesteps. If the reward function r̃ were obtained using Φ = $V^{\pi,\gamma}$, we would have E[r̃_{t+l} | s_t, a_t] = E[r̃_{t+l} | s_t] = 0 for l > 0, i.e., the response function would be nonzero only at l = 0. Therefore, this shaping transformation would turn a temporally extended response into an immediate response. Given that $V^{\pi,\gamma}$ completely reduces the temporal spread of the response function, we can hope that a good approximation V ≈ $V^{\pi,\gamma}$ partially reduces it. This observation suggests an interpretation of Equation (16): reshape the rewards using V to shrink the temporal extent of the response function, and then introduce a "steeper" discount γλ to cut off the noise arising from long delays, i.e., ignore terms $\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \delta^V_{t+l}$ where l ≫ 1/(1 − γλ).
# 5 VALUE FUNCTION ESTIMATION | 1506.02438#23 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 23 | Both CONV and FC layers can be pruned, but with different sensitivity. Figure 6 shows the sensitivity of each layer to network pruning. The figure shows how accuracy drops as parameters are pruned on a layer-by-layer basis. The CONV layers (on the left) are more sensitive to pruning than the fully connected layers (on the right). The first convolutional layer, which interacts with the input image directly, is most sensitive to pruning. We suspect this sensitivity is due to the input layer having only 3 channels and thus less redundancy than the other convolutional layers. We used the sensitivity results to find each layer's threshold: for example, the smallest threshold was applied to the most sensitive layer, which is the first convolutional layer.
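A per-layer sensitivity scan of this kind can be sketched as follows. `evaluate` is a hypothetical callback returning validation accuracy; the quantile-based prune ratios are an illustrative choice, not the paper's exact procedure.

```python
import numpy as np

def sensitivity_scan(weights, evaluate, ratios=(0.25, 0.5, 0.75, 0.9)):
    """Prune one layer at a time to each ratio, record accuracy, then
    restore the layer before moving on. `weights` maps layer name ->
    ndarray; `evaluate` is a hypothetical accuracy callback."""
    curves = {}
    for name, W in weights.items():
        original = W.copy()
        curves[name] = []
        for rho in ratios:
            threshold = np.quantile(np.abs(original), rho)
            W[np.abs(W) < threshold] = 0.0    # prune only this layer
            curves[name].append((rho, evaluate()))
            W[...] = original                 # restore before the next trial
    return curves
```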
Storing the pruned layers as sparse matrices has a storage overhead of only 15.6%. Storing relative rather than absolute indices reduces the space taken by the FC layer indices to 5 bits. Similarly, CONV layer indices can be represented with only 8 bits.
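One plausible encoding consistent with this description stores, for each nonzero weight, its gap from the previous nonzero entry, inserting a filler zero whenever a gap overflows the index width (the filler scheme is an assumption; the text states only the 5-bit/8-bit index budgets).

```python
import numpy as np

def relative_indices(flat_weights, bits=5):
    """Store each nonzero weight's index as a gap from the previous nonzero
    entry; when a gap exceeds 2**bits - 1, emit a filler zero weight so
    every stored gap fits in `bits` bits."""
    max_gap = (1 << bits) - 1
    gaps, vals = [], []
    prev = -1
    for idx in np.flatnonzero(flat_weights):
        gap = idx - prev
        while gap > max_gap:                  # bridge long runs of zeros
            gaps.append(max_gap)
            vals.append(0.0)
            gap -= max_gap
        gaps.append(gap)
        vals.append(float(flat_weights[idx]))
        prev = idx
    return np.array(gaps), np.array(vals)

# round-trip check: decoded positions match the original nonzero indices
w = np.zeros(100)
w[[3, 40, 99]] = [0.5, -0.2, 0.7]
gaps, vals = relative_indices(w)
assert (np.cumsum(gaps) - 1)[vals != 0.0].tolist() == [3, 40, 99]
```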
| 1506.02626#23 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 24 |
# 5 VALUE FUNCTION ESTIMATION
A variety of different methods can be used to estimate the value function (see, e.g., Bertsekas (2012)). When using a nonlinear function approximator to represent the value function, the simplest approach is to solve a nonlinear regression problem:
$$\operatorname*{minimize}_{\phi} \; \sum_{n=1}^{N} \lVert V_\phi(s_n) - \hat{V}_n \rVert^2, \tag{28}$$

where $\hat{V}_t = \sum_{l=0}^{\infty} \gamma^l r_{t+l}$ is the discounted sum of rewards, and n indexes over all timesteps in a batch of trajectories. This is sometimes called the Monte Carlo or TD(1) approach for estimating the value function (Sutton & Barto, 1998).
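The targets $\hat{V}_t$ are computed with a reverse scan over each trajectory; a minimal sketch:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo (TD(1)) regression targets: V_hat_t = sum_l gamma^l r_{t+l}."""
    targets = np.empty(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        targets[t] = running
    return targets
```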
For the experiments in this work, we used a trust region method to optimize the value function in each iteration of a batch optimization procedure. The trust region helps us to avoid overfitting to the most recent batch of data. To formulate the trust region problem, we first compute $\sigma^2 = \frac{1}{N} \sum_{n=1}^{N} \lVert V_{\phi_{\text{old}}}(s_n) - \hat{V}_n \rVert^2$, where $\phi_{\text{old}}$ is the parameter vector before optimization. Then we solve the following constrained optimization problem:
$$\operatorname*{minimize}_{\phi} \; \sum_{n=1}^{N} \lVert V_\phi(s_n) - \hat{V}_n \rVert^2 \quad \text{subject to} \quad \frac{1}{N} \sum_{n=1}^{N} \frac{\lVert V_\phi(s_n) - V_{\phi_{\text{old}}}(s_n) \rVert^2}{2\sigma^2} \le \epsilon. \tag{29}$$ | 1506.02438#24 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 24 |
Table 6: Comparison with other model reduction methods on AlexNet. Data-free pruning [28] saved only 1.5× parameters, with much loss of accuracy. Deep Fried Convnets [29] worked on fully connected layers only and reduced the parameters by less than 4×. [30] reduced the parameters by 4× with inferior accuracy. Naively cutting the layer size saves parameters but suffers a 4% loss of accuracy. [12] exploited the linear structure of convnets and compressed each layer individually, where model compression on a single layer incurred a 0.9% accuracy penalty with biclustering + SVD.
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 25 | This constraint is equivalent to constraining the average KL divergence between the previous value function and the new value function to be smaller than ε, where the value function is taken to parameterize a conditional Gaussian distribution with mean $V_\phi(s)$ and variance $\sigma^2$.
We compute an approximate solution to the trust region problem using the conjugate gradient algorithm (Wright & Nocedal, 1999). Specifically, we are solving the quadratic program
$$\operatorname*{minimize}_{\phi} \; g^T(\phi - \phi_{\text{old}}) \quad \text{subject to} \quad \frac{1}{N} \sum_{n=1}^{N} (\phi - \phi_{\text{old}})^T H (\phi - \phi_{\text{old}}) \le \epsilon. \tag{30}$$ | 1506.02438#25 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 25 |
| Network | Top-1 Error | Top-5 Error | Parameters | Compression Rate |
|---------|-------------|-------------|------------|------------------|
| Baseline Caffemodel [26] | 42.78% | 19.73% | 61.0M | 1× |
| Data-free pruning [28] | 44.40% | – | 39.6M | 1.5× |
| Fastfood-32-AD [29] | 41.93% | – | 32.8M | 2× |
| Fastfood-16-AD [29] | 42.90% | – | 16.4M | 3.7× |
| Collins & Kohli [30] | 44.40% | – | 15.2M | 4× |
| Naive Cut | 47.18% | 23.23% | 13.8M | 4.4× |
| SVD [12] | 44.02% | 20.56% | 11.9M | 5× |
| Network Pruning | 42.77% | 19.67% | 6.7M | 9× |
Figure 7: Weight distribution before and after parameter pruning. The right figure has a 10× smaller scale. | 1506.02626#25 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 26 | $$\operatorname*{minimize}_{\phi} \; g^T(\phi - \phi_{\text{old}}) \quad \text{subject to} \quad \frac{1}{N} \sum_{n=1}^{N} (\phi - \phi_{\text{old}})^T H (\phi - \phi_{\text{old}}) \le \epsilon. \tag{30}$$
where g is the gradient of the objective, and $H = \frac{1}{N} \sum_n j_n j_n^T$, where $j_n = \nabla_\phi V_\phi(s_n)$. Note that H is the "Gauss-Newton" approximation of the Hessian of the objective, and it is (up to a $\sigma^2$ factor) the Fisher information matrix when interpreting the value function as a conditional probability distribution. Using matrix-vector products $v \mapsto Hv$ to implement the conjugate gradient algorithm, we compute a step direction $s \approx -H^{-1}g$. Then we rescale $s \to \alpha s$ such that $\frac{1}{2}(\alpha s)^T H (\alpha s) = \epsilon$ and take $\phi = \phi_{\text{old}} + \alpha s$. This procedure is analogous to the procedure we use for updating the policy, which is described further in Section 6 and based on Schulman et al. (2015).
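A compact sketch of this procedure, with the Gauss-Newton product formed from the Jacobian of value predictions so that H is never materialized (shapes and the CG iteration count are illustrative assumptions):

```python
import numpy as np

def gauss_newton_step(J, g, eps, cg_iters=10):
    """Run conjugate gradient on H s = -g using only products with
    H = (1/N) J^T J, then rescale so that (1/2) s^T H s = eps."""
    N = J.shape[0]
    Hv = lambda v: J.T @ (J @ v) / N       # Gauss-Newton matrix-vector product
    s = np.zeros_like(g)
    res = -g.copy()                        # residual of H s = -g at s = 0
    p = res.copy()
    rr = res @ res
    for _ in range(cg_iters):
        Hp = Hv(p)
        alpha = rr / (p @ Hp)
        s += alpha * p
        res -= alpha * Hp
        rr_new = res @ res
        p = res + (rr_new / rr) * p
        rr = rr_new
    scale = np.sqrt(2.0 * eps / (s @ Hv(s)))
    return scale * s                       # take phi = phi_old + returned step

# toy usage with a random Jacobian of 64 value predictions over 8 parameters
rng = np.random.default_rng(0)
step = gauss_newton_step(rng.normal(size=(64, 8)), rng.normal(size=8), eps=0.01)
```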
# 6 EXPERIMENTS
We designed a set of experiments to investigate the following questions:
1. What is the empirical effect of varying λ ∈ [0, 1] and γ ∈ [0, 1] when optimizing episodic total reward using generalized advantage estimation? | 1506.02438#26 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 26 | Figure 7: Weight distribution before and after parameter pruning. The right figure has a 10× smaller scale.
After pruning, the storage requirements of AlexNet and VGGNet are small enough that all weights can be stored on chip, instead of in off-chip DRAM, which takes orders of magnitude more energy to access (Table 1). We are targeting our pruning method at fixed-function hardware specialized for sparse DNNs, given the limitations of general-purpose hardware on sparse computation. Figure 7 shows histograms of the weight distribution before (left) and after (right) pruning. The weights are from the first fully connected layer of AlexNet. The two panels have different y-axis scales. The original distribution of weights is centered on zero with tails dropping off quickly. Almost all parameters are between [−0.015, 0.015]. After pruning, the large center region is removed. The network parameters adjust themselves during the retraining phase. The result is that the parameters form a bimodal distribution and become more spread across the x-axis, between [−0.025, 0.025].
# 6 Conclusion | 1506.02626#26 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 27 | 1. What is the empirical effect of varying λ ∈ [0, 1] and γ ∈ [0, 1] when optimizing episodic total reward using generalized advantage estimation?
2. Can generalized advantage estimation, along with trust region algorithms for policy and value function optimization, be used to optimize large neural network policies for challenging control problems?
²Another natural choice is to compute target values with an estimator based on the TD(λ) backup (Bertsekas, 2012), mirroring the advantage estimator: $\hat{V}_t = V_{\phi_{\text{old}}}(s_n) + \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$. While we experimented with this choice, we did not notice a difference in performance from the λ = 1 (Monte Carlo) target of Equation (28).
# 6.1 POLICY OPTIMIZATION ALGORITHM
While generalized advantage estimation can be used along with a variety of different policy gradient methods, for these experiments we performed the policy updates using trust region policy optimization (TRPO) (Schulman et al., 2015). TRPO updates the policy by approximately solving the following constrained optimization problem each iteration: | 1506.02438#27 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 27 | # 6 Conclusion
We have presented a method to improve the energy efficiency and storage of neural networks without affecting accuracy by finding the right connections. Our method, motivated in part by how learning works in the mammalian brain, operates by learning which connections are important, pruning the unimportant connections, and then retraining the remaining sparse network. We highlight our experiments on AlexNet and VGGNet on ImageNet, showing that both fully connected and convolutional layers can be pruned, reducing the number of connections by 9× to 13× without loss of accuracy. This leads to smaller memory capacity and bandwidth requirements for real-time image processing, making such networks easier to deploy on mobile systems.
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[2] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610, 2005. | 1506.02626#27 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 28 | $$\operatorname*{minimize}_{\theta} \; L_{\theta_{\text{old}}}(\theta) \quad \text{subject to} \quad \bar{D}_{\text{KL}}^{\theta_{\text{old}}}(\pi_{\theta_{\text{old}}}, \pi_\theta) \le \epsilon,$$
$$\text{where} \quad L_{\theta_{\text{old}}}(\theta) = \frac{1}{N} \sum_{n=1}^{N} \frac{\pi_\theta(a_n \mid s_n)}{\pi_{\theta_{\text{old}}}(a_n \mid s_n)} \hat{A}_n, \qquad \bar{D}_{\text{KL}}^{\theta_{\text{old}}}(\pi_{\theta_{\text{old}}}, \pi_\theta) = \frac{1}{N} \sum_{n=1}^{N} D_{\text{KL}}\!\left(\pi_{\theta_{\text{old}}}(\cdot \mid s_n) \,\|\, \pi_\theta(\cdot \mid s_n)\right). \tag{31}$$
As described in Schulman et al. (2015), we approximately solve this problem by linearizing the objective and quadraticizing the constraint, which yields a step in the direction $\theta - \theta_{\text{old}} \propto F^{-1} g$, where F is the average Fisher information matrix and g is a policy gradient estimate. This policy update yields the same step direction as the natural policy gradient (Kakade, 2001a) and natural actor-critic (Peters & Schaal, 2008); however, it uses a different stepsize determination scheme and numerical procedure for computing the step.
Since prior work (Schulman et al., 2015) compared TRPO to a variety of different policy optimization algorithms, we will not repeat these comparisons; rather, we will focus on varying the γ, λ parameters of the policy gradient estimator while keeping the underlying algorithm fixed.
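For a batch of sampled actions, the two quantities in Equation (31) can be estimated as below (the KL term here is the common sampled approximation over visited states and actions, not the exact per-state KL):

```python
import numpy as np

def surrogate_and_kl(logp_old, logp_new, adv):
    """Sampled estimates of the TRPO surrogate L and the average KL
    divergence, computed from log-probabilities of the sampled actions."""
    ratio = np.exp(logp_new - logp_old)
    L = np.mean(ratio * adv)
    kl = np.mean(logp_old - logp_new)  # E_{a ~ pi_old}[log pi_old - log pi_new]
    return L, kl
```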
For completeness, the whole algorithm for iteratively updating policy and value function is given below: | 1506.02438#28 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 28 | [3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
[4] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[5] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, pages 1701–1708. IEEE, 2014.
[6] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In 30th ICML, pages 1337–1345, 2013.
[7] Mark Horowitz. Energy table for 45nm process, Stanford VLSI wiki. [8] JP Rauschecker. Neuronal mechanisms of developmental plasticity in the cat's visual system. Human
neurobiology, 3(2):109–114, 1983. | 1506.02626#28 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 29 | For completeness, the whole algorithm for iteratively updating policy and value function is given below:
Initialize policy parameter θ₀ and value function parameter φ₀.
for i = 0, 1, 2, . . . do
    Simulate current policy π_{θ_i} until N timesteps are obtained.
    Compute $\delta^V_t$ at all timesteps t ∈ {1, 2, . . . , N}, using V = V_{φ_i}.
    Compute $\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta^V_{t+l}$ at all timesteps.
    Compute θ_{i+1} with TRPO update, Equation (31).
    Compute φ_{i+1} with Equation (30).
end for
Note that the policy update θ_i → θ_{i+1} is performed using the value function V_{φ_i} for advantage estimation, not V_{φ_{i+1}}. Additional bias would have been introduced if we updated the value function first. To see this, consider the extreme case where we overfit the value function, and the Bellman residual r_t + γV(s_{t+1}) − V(s_t) becomes zero at all timesteps; the policy gradient estimate would then be zero.
6.2 EXPERIMENTAL SETUP | 1506.02438#29 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 29 | neurobiology, 3(2):109–114, 1983. [9] Christopher A Walsh. Peter Huttenlocher (1931–2013). Nature, 502(7470):172–172, 2013. [10] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning.
[9] Christopher A Walsh. Peter huttenlocher (1931-2013). Nature, 502(7470):172â172, 2013. [10] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning.
In Advances in Neural Information Processing Systems, pages 2148â2156, 2013.
[11] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
[12] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efï¬cient evaluation. In NIPS, pages 1269â1277, 2014.
[13] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014. | 1506.02626#29 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
We evaluated our approach on the classic cart-pole balancing problem, as well as several challenging 3D locomotion tasks: (1) bipedal locomotion; (2) quadrupedal locomotion; (3) dynamically standing up, for the biped, which starts off lying on its back. The models are shown in Figure 1.
6.2.1 ARCHITECTURE
We used the same neural network architecture for all of the 3D robot tasks: a feedforward network with three hidden layers of 100, 50, and 25 tanh units, respectively, and a final output layer with linear activation. The policy and the value function used this same architecture, except that the value function estimator had only a single scalar output. For the simpler cart-pole task, we used a linear policy, and a neural network with one 20-unit hidden layer as the value function.
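These networks are small enough to write down directly. A modern PyTorch sketch is given below; the paper predates PyTorch, so the framework choice is ours, and the input/output dimensions shown are the biped's (taken from Section 6.2.2).

```python
import torch.nn as nn

def make_net(in_dim: int, out_dim: int) -> nn.Sequential:
    """Three tanh hidden layers (100, 50, 25) with a linear output layer."""
    return nn.Sequential(
        nn.Linear(in_dim, 100), nn.Tanh(),
        nn.Linear(100, 50), nn.Tanh(),
        nn.Linear(50, 25), nn.Tanh(),
        nn.Linear(25, out_dim),  # linear activation on the output
    )

policy_net = make_net(in_dim=33, out_dim=10)  # biped: 33 state dims -> 10 torques
value_net = make_net(in_dim=33, out_dim=1)    # same architecture, scalar output
```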
Figure 1: Top figures: robot models used for 3D locomotion. Bottom figures: a sequence of frames from the learned gaits. Videos are available at https://sites.google.com/site/gaepapersupp.
6.2.2 TASK DETAILS
For the cart-pole balancing task, we collected 20 trajectories per batch, with a maximum length of 1000 timesteps, using the physical parameters from Barto et al. (1983).
The simulated robot tasks were simulated using the MuJoCo physics engine (Todorov et al., 2012). The humanoid model has 33 state dimensions and 10 actuated degrees of freedom, while the quadruped model has 29 state dimensions and 8 actuated degrees of freedom. The initial state for these tasks consisted of a uniform distribution centered on a reference configuration. We used 50000 timesteps per batch for bipedal locomotion, and 200000 timesteps per batch for quadrupedal locomotion and bipedal standing. Each episode was terminated after 2000 timesteps if the robot had not reached a terminal state beforehand. The timestep was 0.01 seconds.
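For reference, these settings can be collected into a single configuration; the dictionary layout below is purely illustrative.

```python
# Settings transcribed from the text above; the structure is illustrative.
TASKS = {
    "3d_biped_locomotion": dict(state_dim=33, action_dim=10, timesteps_per_batch=50_000),
    "quadruped_locomotion": dict(state_dim=29, action_dim=8, timesteps_per_batch=200_000),
    "biped_standup": dict(state_dim=33, action_dim=10, timesteps_per_batch=200_000),
}
DT = 0.01                  # seconds of simulated time per timestep
MAX_EPISODE_LENGTH = 2000  # timesteps before forced termination
```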
The reward functions are provided in the table below.
Task | Reward
3D biped locomotion | v_fwd − 10^−5 ‖u‖^2 − 10^−5 ‖f_impact‖^2 + 0.2
Quadruped locomotion | v_fwd − 10^−6 ‖u‖^2 − 10^−3 ‖f_impact‖^2 + 0.05
Biped getting up | −(h_head − 1.5)^2 − 10^−5 ‖u‖^2
Here, v_fwd := forward velocity, u := vector of joint torques, f_impact := impact forces, h_head := height of the head.
In the locomotion tasks, the episode is terminated if the center of mass of the actor falls below a predefined height: .8 m for the biped, and .2 m for the quadruped. The constant offset in the reward function encourages longer episodes; otherwise the quadratic reward terms might lead to a policy that ends the episodes as quickly as possible.
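Transcribed into code, the rewards and termination rule read as follows. The table above was reconstructed from a garbled source, so treat the exponents as assumptions; the argument conventions are illustrative.

```python
import numpy as np

def reward_biped_locomotion(v_fwd, u, f_impact):
    return v_fwd - 1e-5 * np.sum(u**2) - 1e-5 * np.sum(f_impact**2) + 0.2

def reward_quadruped_locomotion(v_fwd, u, f_impact):
    return v_fwd - 1e-6 * np.sum(u**2) - 1e-3 * np.sum(f_impact**2) + 0.05

def reward_biped_standup(h_head, u):
    return -(h_head - 1.5)**2 - 1e-5 * np.sum(u**2)

def locomotion_terminated(com_height, biped=True):
    """Episode ends when the center of mass falls below the threshold."""
    return com_height < (0.8 if biped else 0.2)
```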
6.3 EXPERIMENTAL RESULTS
All results are presented in terms of the cost, which is defined as negative reward and is minimized. Videos of the learned policies are available at https://sites.google.com/site/gaepapersupp. In plots, "No VF" means that we used a time-dependent baseline that did not depend on the state, rather than an estimate of the state value function. The time-dependent baseline was computed by averaging the return at each timestep over the trajectories in the batch.
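A sketch of this "No VF" baseline is given below; how variable-length trajectories are handled is an assumption on our part.

```python
import numpy as np

def time_dependent_baseline(returns_per_traj):
    """Average the return at each timestep across a batch of trajectories.

    returns_per_traj: list of 1-D arrays, one array of returns per trajectory.
    """
    horizon = max(len(r) for r in returns_per_traj)
    baseline = np.zeros(horizon)
    for t in range(horizon):
        baseline[t] = np.mean([r[t] for r in returns_per_traj if len(r) > t])
    return baseline
```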
6.3.1 CART-POLE
The results are averaged across 21 experiments with different random seeds. Results are shown in Figure 2, and indicate that the best results are obtained at intermediate values of the parameters: γ ∈ [0.96, 0.99] and λ ∈ [0.92, 0.99].
Figure 2: Left: learning curves for the cart-pole task, using generalized advantage estimation with varying values of λ at γ = 0.99. The fastest policy improvement is obtained by intermediate values of λ in the range [0.92, 0.98]. Right: performance after 20 iterations of policy optimization, as γ and λ are varied. White means higher reward. The best results are obtained at intermediate values of both.
Figure 3: Left: learning curves for 3D bipedal locomotion, averaged across nine runs of the algorithm. Right: learning curves for 3D quadrupedal locomotion, averaged across five runs.
6.3.2 3D BIPEDAL LOCOMOTION
Each trial took about 2 hours to run on a 16-core machine, where the simulation rollouts were parallelized, as were the function, gradient, and matrix-vector-product evaluations used when optimizing the policy and value function. Here, the results are averaged across 9 trials with different random seeds. The best performance is again obtained using intermediate values of γ ∈ [0.99, 0.995], λ ∈ [0.96, 0.99]. The result after 1000 iterations is a fast, smooth gait that is effectively completely stable. We can compute how much "real time" was used for this learning process: 0.01 seconds/timestep × 50000 timesteps/batch × 1000 batches / (3600 · 24 seconds/day) = 5.8 days. Hence, it is plausible that this algorithm could be run on a real robot, or multiple real robots learning in parallel, if there were a way to reset the state of the robot and ensure that it doesn't damage itself.
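The arithmetic, spelled out:

```python
dt, steps_per_batch, batches = 0.01, 50_000, 1_000
days = dt * steps_per_batch * batches / (3600 * 24)
print(f"{days:.1f} days of simulated experience")  # -> 5.8 days
```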
6.3.3 OTHER 3D ROBOT TASKS
The other two motor behaviors considered are quadrupedal locomotion and getting up off the ground for the 3D biped. Again, we performed 5 trials per experimental condition, with different random seeds (and initializations). The experiments took about 4 hours per trial on a 32-core machine. We performed a more limited comparison on these domains (due to the substantial computational resources required to run these experiments), fixing γ = 0.995 but varying λ ∈ {0, 0.96}, as well as an experimental condition with no value function. For quadrupedal locomotion, the best results are obtained using a value function with λ = 0.96, as in Section 6.3.2. For 3D standing, the value function always helped, but the results are roughly the same for λ = 0.96 and λ = 1.
Figure 4: (a) Learning curve for quadrupedal walking, (b) learning curve for 3D standing up, (c) clips from 3D standing up.
7 DISCUSSION
Policy gradient methods provide a way to reduce reinforcement learning to stochastic gradient descent, by providing unbiased gradient estimates. However, so far their success at solving difficult control problems has been limited, largely due to their high sample complexity. We have argued that the key to variance reduction is to obtain good estimates of the advantage function.
We have provided an intuitive but informal analysis of the problem of advantage function estimation, and justified the generalized advantage estimator, which has two parameters, γ and λ, that adjust the bias-variance tradeoff. We described how to combine this idea with trust region policy optimization and a trust region algorithm that optimizes a value function, both represented by neural networks. Combining these techniques, we are able to learn to solve difficult control tasks that have previously been out of reach for generic reinforcement learning methods.
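For concreteness, a minimal sketch of the estimator at the center of this paper is given below: the exponentially-weighted sum of TD residuals, computed by the standard backward recursion over a single trajectory. Array conventions and the handling of the final state are assumptions.

```python
import numpy as np

def gae_advantages(rewards, values, gamma, lam):
    """rewards: shape (T,); values: shape (T+1,), with V of the final state
    appended (use 0 for terminal states)."""
    T = len(rewards)
    advantages = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        running = delta + gamma * lam * running                 # (gamma*lam)-weighted sum
        advantages[t] = running
    return advantages
```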
Our main experimental validation of generalized advantage estimation is in the domain of simulated robotic locomotion. As shown in our experiments, choosing an appropriate intermediate value of λ in the range [0.9, 0.99] usually results in the best performance. A possible topic for future work is how to adjust the estimator parameters γ, λ in an adaptive or automatic way.
One question that merits future investigation is the relationship between value function estimation error and policy gradient estimation error. If this relationship were known, we could choose an error metric for value function fitting that is well-matched to the quantity of interest, which is typically the accuracy of the policy gradient estimation. Some candidates for such an error metric might include the Bellman error or projected Bellman error, as described in Bhatnagar et al. (2009).
Another enticing possibility is to use a shared function approximation architecture for the policy and the value function, while optimizing the policy using generalized advantage estimation. While formulating this problem in a way that is suitable for numerical optimization and provides convergence guarantees remains an open question, such an approach could allow the value function and policy representations to share useful features of the input, resulting in even faster learning.
In concurrent work, researchers have been developing policy gradient methods that involve differentiation with respect to the continuous-valued action (Lillicrap et al., 2015; Heess et al., 2015). While we found empirically that the one-step return (λ = 0) leads to excessive bias and poor performance, these papers show that such methods can work when tuned appropriately. However, note that those papers consider control problems with substantially lower-dimensional state and action spaces than the ones considered here. A comparison between both classes of approach would be useful for future work.
ACKNOWLEDGEMENTS
We thank Emo Todorov for providing the simulator as well as insightful discussions, and we thank Greg Wayne, Yuval Tassa, Dave Silver, Carlos Florensa Campo, and Greg Brockman for insightful discussions. This research was funded in part by the Office of Naval Research through a Young Investigator Award and under grant number N00014-11-1-0688, by DARPA through a Young Faculty Award, and by the Army Research Office through the MAST program.
A FREQUENTLY ASKED QUESTIONS
A.1 WHAT'S THE RELATIONSHIP WITH COMPATIBLE FEATURES?
Compatible features are often mentioned in relation to policy gradient algorithms that make use of a value function, and the idea was proposed in the paper On Actor-Critic Algorithms by Konda & Tsitsiklis (2003). These authors pointed out that due to the limited representation power of the policy, the policy gradient only depends on a certain subspace of the space of advantage functions. This subspace is spanned by the compatible features ∇θi log πθ(at | st), where i ∈ {1, 2, . . . , dim θ}. This theory of compatible features provides no guidance on how to exploit the temporal structure of the problem to obtain better estimates of the advantage function, making it mostly orthogonal to the ideas in this paper.
The idea of compatible features motivates an elegant method for computing the natural policy gradient (Kakade, 2001a; Peters & Schaal, 2008). Given an empirical estimate of the advantage function Ât at each timestep, we can project it onto the subspace of compatible features by solving the following least squares problem:
minimize_r Σ_t ‖r · ∇θ log πθ(at | st) − Ât‖².   (32)
If Â is γ-just, the least squares solution is the natural policy gradient (Kakade, 2001a). Note that any estimator of the advantage function can be substituted into this formula, including the ones we derive in this paper. For our experiments, we also compute natural policy gradient steps, but we use the more computationally efficient numerical procedure from Schulman et al. (2015), as discussed in Section 6.
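A sketch of this projection with an off-the-shelf least-squares solver; the matrix conventions are assumptions.

```python
import numpy as np

def natural_gradient_direction(G, A_hat):
    """Solve Eq. (32): regress advantage estimates onto compatible features.

    G: (T, dim_theta) matrix whose row t is grad_theta log pi(a_t | s_t).
    A_hat: (T,) advantage estimates.
    """
    r, *_ = np.linalg.lstsq(G, A_hat, rcond=None)
    return r  # the natural policy gradient, if A_hat is gamma-just
```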
A.2 WHY DON'T YOU JUST USE A Q-FUNCTION?
Previous actor critic methods, e.g. in Konda & Tsitsiklis (2003), use a Q-function to obtain potentially low-variance policy gradient estimates. Recent papers, including Heess et al. (2015); Lillicrap et al. (2015), have shown that a neural network Q-function approximator can be used effectively in a policy gradient method. However, there are several advantages to using a state-value function in the manner of this paper. First, the state-value function has a lower-dimensional input and is thus easier to learn than a state-action value function. Second, the method of this paper allows us to smoothly interpolate between the high-bias estimator (λ = 0) and the low-bias estimator (λ = 1). On the other hand, using a parameterized Q-function only allows us to use a high-bias estimator. We have found that the bias is prohibitively large when using a one-step estimate of the returns, i.e., the λ = 0 estimator, Ât = δV_t = rt + γV(st+1) − V(st). We expect that similar difficulty would be encountered when using an advantage estimator involving a parameterized Q-function.
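For reference, the two endpoints of this interpolation, the one-step TD residual at λ = 0 and the empirical returns minus the value baseline at λ = 1, can be written as follows (notation as used throughout the paper):

```latex
\hat{A}_t^{\mathrm{GAE}(\gamma,0)} = \delta_t^V = r_t + \gamma V(s_{t+1}) - V(s_t),
\qquad
\hat{A}_t^{\mathrm{GAE}(\gamma,1)} = \sum_{l=0}^{\infty} \gamma^l \delta_{t+l}^V
  = \sum_{l=0}^{\infty} \gamma^l r_{t+l} - V(s_t).
```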
B PROOFS
Proof of Proposition 1: First we can split the expectation into terms involving Q and b,
E_{s0:∞, a0:∞}[∇θ log πθ(at | st) (Qt(s0:∞, a0:∞) − bt(s0:t, a0:t−1))]
= E_{s0:∞, a0:∞}[∇θ log πθ(at | st) Qt(s0:∞, a0:∞)]
− E_{s0:∞, a0:∞}[∇θ log πθ(at | st) bt(s0:t, a0:t−1)]   (33)
We'll consider the terms with Q and b in turn.
First, for the Q term, conditioning on (s0:t, a0:t) and marginalizing over the rest of the trajectory shows that

E_{s0:∞, a0:∞}[∇θ log πθ(at | st) Qt(s0:∞, a0:∞)] = E_{s0:t, a0:t}[∇θ log πθ(at | st) E_{st+1:∞, at+1:∞}[Qt(s0:∞, a0:∞)]],

which is the desired gradient term. Next,
E_{s0:∞, a0:∞}[∇θ log πθ(at | st) bt(s0:t, a0:t−1)]
= E_{s0:t, a0:t−1}[E_{st+1:∞, at:∞}[∇θ log πθ(at | st) bt(s0:t, a0:t−1)]]
= E_{s0:t, a0:t−1}[E_{st+1:∞, at:∞}[∇θ log πθ(at | st)] bt(s0:t, a0:t−1)]
= E_{s0:t, a0:t−1}[0 · bt(s0:t, a0:t−1)]
= 0.
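The key step, E_a[∇θ log πθ(a | s)] = 0 for any policy, is easy to check numerically; the sketch below does so for an arbitrary softmax policy over four actions (an illustrative choice, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=4)                  # logits of a softmax policy
pi = np.exp(theta) / np.exp(theta).sum()    # action probabilities

# For a softmax policy, grad_theta log pi(a) = e_a - pi; stack one row per action.
scores = np.eye(4) - pi
expected_score = pi @ scores                # sum_a pi(a) * grad_theta log pi(a)
print(np.allclose(expected_score, 0.0))     # True
```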
REFERENCES
Barto, Andrew G, Sutton, Richard S, and Anderson, Charles W. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834-846, 1983.
Baxter, Jonathan and Bartlett, Peter L. Reinforcement learning in POMDPs via direct gradient ascent. In ICML, pp. 41-48, 2000.
Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 2. Athena Scientific, 2012.
Bhatnagar, Shalabh, Precup, Doina, Silver, David, Sutton, Richard S, Maei, Hamid R, and Szepesvári, Csaba. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pp. 1204-1212, 2009.
Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471-1530, 2004.
Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine Learning, 84(1-2):137-169, 2011.
Heess, Nicolas, Wayne, Greg, Silver, David, Lillicrap, Timothy, Tassa, Yuval, and Erez, Tom. Learning continuous control policies by stochastic value gradients. arXiv preprint arXiv:1510.09142, 2015.
Hull, Clark. Principles of behavior. 1943.
Kakade, Sham. A natural policy gradient. In NIPS, volume 14, pp. 1531-1538, 2001a.
Kakade, Sham. Optimizing average reward using discounted rewards. In Computational Learning Theory, pp. 605-615. Springer, 2001b.
Kimura, Hajime and Kobayashi, Shigenobu. An analysis of actor/critic algorithms using eligibility traces: Reinforcement learning with imperfect value function. In ICML, pp. 278-286, 1998.
Konda, Vijay R and Tsitsiklis, John N. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166, 2003.
Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Marbach, Peter and Tsitsiklis, John N. Approximate gradient methods in policy-space optimization of Markov reward processes. Discrete Event Dynamic Systems, 13(1-2):111-148, 2003.
Minsky, Marvin. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8-30, 1961.
Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278-287, 1999.
Peters, Jan and Schaal, Stefan. Natural actor-critic. Neurocomputing, 71(7):1180-1190, 2008.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015.
Sutton, Richard S and Barto, Andrew G. Introduction to reinforcement learning. MIT Press, 1998.
Sutton, Richard S, McAllester, David A, Singh, Satinder P, and Mansour, Yishay. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057-1063. Citeseer, 1999.
Thomas, Philip. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441-448, 2014.
Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026-5033. IEEE, 2012.
Wawrzy´nski, PaweÅ. Real-time reinforcement learning by sequential actorâcritics and experience replay. Neural Networks, 22(10):1484â1497, 2009.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
Wright, Stephen J and Nocedal, Jorge. Numerical optimization. Springer New York, 1999.
14 | 1506.02438#50 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
# Cyclical Learning Rates for Training Neural Networks

# Abstract

It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds" – linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
Figure 1. Classification accuracy while training CIFAR-10. The red curve shows the result of training with one of the new learning rate policies.
# 1. Introduction

Deep neural networks are the basis of state-of-the-art results for image recognition [17, 23, 25], object detection [7], face recognition [26], speech recognition [8], machine translation [24], image caption generation [28], and driverless car technology [14]. However, training a deep neural network is a difficult global optimization problem.
A deep neural network is typically updated by stochastic gradient descent and the parameters $\theta$ (weights) are updated by $\theta^t = \theta^{t-1} - \epsilon_t \frac{\partial L}{\partial \theta}$, where $L$ is a loss function and $\epsilon_t$ is the learning rate. It is well known that too small a learning rate will make a training algorithm converge slowly while too large a learning rate will make the training algorithm diverge [2]. Hence, one must experiment with a variety of learning rates and schedules. Conventional wisdom dictates that the learning rate should be a single value that monotonically decreases during training. This paper demonstrates the surprising phenomenon that a varying learning rate during training is beneficial overall and thus proposes to let the global learning rate vary cyclically within a band of values instead of setting it to a fixed value. In addition, this cyclical learning rate (CLR) method practically eliminates the need to tune the learning rate yet achieves near optimal classification accuracy. Furthermore, unlike adaptive learning rates, the CLR methods require essentially no additional computation.

The potential benefits of CLR can be seen in Figure 1, which shows the test data classification accuracy of the CIFAR-10 dataset during training.¹ The baseline (blue curve) reaches a final accuracy of 81.4% after 70,000 iterations. In contrast, it is possible to fully train the network using the CLR method instead of tuning (red curve) within 25,000 iterations and attain the same accuracy.

The contributions of this paper are:

1. A methodology for setting the global learning rates for training neural networks that eliminates the need to perform numerous experiments to find the best values and schedule with essentially no additional computation.

2. A surprising phenomenon is demonstrated: allowing the learning rate to rise and fall is beneficial overall even though it might temporarily harm the network's performance.
3. Cyclical learning rates are demonstrated with ResNets, Stochastic Depth networks, and DenseNets on the CIFAR-10 and CIFAR-100 datasets, and on ImageNet with two well-known architectures: AlexNet [17] and GoogLeNet [25].

¹Hyper-parameters and architecture were obtained in April 2015 from caffe.berkeleyvision.org/gathered/examples/cifar10.html
# 2. Related work
The book "Neural Networks: Tricks of the Trade" is a terrific source of practical advice. In particular, Yoshua Bengio [2] discusses reasonable ranges for learning rates and stresses the importance of tuning the learning rate. A technical report by Breuel [3] provides guidance on a variety of hyper-parameters. There are also numerous websites giving practical suggestions for setting the learning rates.
Adaptive learning rates: Adaptive learning rates can be considered a competitor to cyclical learning rates because one can rely on local adaptive learning rates in place of global learning rate experimentation, but there is a significant computational cost in doing so. CLR does not incur this computational cost, so it can be used freely.
A review of the early work on adaptive learning rates can be found in George and Powell [6]. Duchi, et al. [5] proposed AdaGrad, which is one of the early adaptive methods that estimates the learning rates from the gradients.

RMSProp is discussed in the slides by Geoffrey Hinton² [27]. RMSProp is described there as "Divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight." RMSProp is a fundamental adaptive learning rate method that others have built on.

Schaul et al. [22] discuss an adaptive learning rate based on a diagonal estimation of the Hessian of the gradients. One of the features of their method is that they allow their automatic method to decrease or increase the learning rate. However, their paper seems to limit the idea of increasing the learning rate to non-stationary problems. On the other hand, this paper demonstrates that a schedule of increasing the learning rate is more universally valuable.

Zeiler [29] describes his AdaDelta method, which improves on AdaGrad based on two ideas: limiting the sum of squared gradients over all time to a limited window, and making the parameter update rule consistent with a units evaluation on the relationship between the update and the Hessian.
More recently, several papers have appeared on adaptive learning rates. Gulcehre and Bengio [9] propose an adaptive learning rate algorithm, called AdaSecant, that utilizes the root mean square statistics and variance of the gradients. Dauphin et al. [4] show that RMSProp provides a biased estimate and go on to describe another estimator, named ESGD, that is unbiased. Kingma and Lei-Ba [16] introduce Adam that is designed to combine the advantages from AdaGrad and RMSProp. Bache, et al. [1] propose exploiting solutions to a multi-armed bandit problem for learning rate selection. A summary and tutorial of adaptive learning rates can be found in a recent paper by Ruder [20].

²www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
Adaptive learning rates are fundamentally different from CLR policies, and CLR can be combined with adaptive learning rates, as shown in Section 4.1. In addition, CLR policies are computationally simpler than adaptive learning rates. CLR is likely most similar to the SGDR method [18] that appeared recently.
# 3. Optimal Learning Rates
# 3.1. Cyclical Learning Rates
The essence of this learning rate policy comes from the observation that increasing the learning rate might have a short term negative effect and yet achieve a longer term beneficial effect. This observation leads to the idea of letting the learning rate vary within a range of values rather than adopting a stepwise fixed or exponentially decreasing value. That is, one sets minimum and maximum boundaries and the learning rate cyclically varies between these bounds. Experiments with numerous functional forms, such as a triangular window (linear), a Welch window (parabolic) and a Hann window (sinusoidal), all produced equivalent results. This led to adopting a triangular window (linearly increasing then linearly decreasing), which is illustrated in Figure 2, because it is the simplest function that incorporates this idea. The rest of this paper refers to this as the triangular learning rate policy.
Figure 2. Triangular learning rate policy. The blue lines represent learning rate values changing between bounds. The input parameter stepsize is the number of iterations in half a cycle.
An intuitive understanding of why CLR methods work comes from considering the loss function topology. Dauphin et al. [4] argue that the difficulty in minimizing the loss arises from saddle points rather than poor local minima. Saddle points have small gradients that slow the learning process. However, increasing the learning rate allows more rapid traversal of saddle point plateaus. A more practical reason as to why CLR works is that, by following the methods in Section 3.3, it is likely the optimum learning rate will be between the bounds and near optimal learning rates will be used throughout training.
The red curve in Figure 1 shows the result of the triangular policy on CIFAR-10. The settings used to create the red curve were a minimum learning rate of 0.001 (as in the original parameter file) and a maximum of 0.006. Also, the cycle length (i.e., the number of iterations until the learning rate returns to the initial value) is set to 4,000 iterations (i.e., stepsize = 2000), and Figure 1 shows that the accuracy peaks at the end of each cycle.
Implementation of the code for a new learning rate policy is straightforward. An example of the code added to Torch 7 in the experiments shown in Section 4.1.2 is the following few lines:
```lua
local cycle = math.floor(1 + epochCounter / (2 * stepsize))
local x = math.abs(epochCounter / stepsize - 2 * cycle + 1)
local lr = opt.LR + (maxLR - opt.LR) * math.max(0, (1 - x))
```
where opt.LR is the specified lower (i.e., base) learning rate, epochCounter is the number of epochs of training, and lr is the computed learning rate. This policy is named triangular and is as described above, with two new input parameters defined: stepsize (half the period or cycle length) and max_lr (the maximum learning rate boundary). This code varies the learning rate linearly between the minimum (base_lr) and the maximum (max_lr).
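As a quick sanity check, the formula can be evaluated by hand; the loop below assumes the bounds used for Figure 1 (opt.LR = 0.001, maxLR = 0.006, stepsize = 2000) and is only illustrative:

```lua
-- Evaluate the triangular formula at a few points in the first cycle.
for _, epochCounter in ipairs({0, 1000, 2000, 4000}) do
  local stepsize = 2000
  local cycle = math.floor(1 + epochCounter / (2 * stepsize))
  local x = math.abs(epochCounter / stepsize - 2 * cycle + 1)
  print(epochCounter, 0.001 + (0.006 - 0.001) * math.max(0, (1 - x)))
end
-- 0 -> 0.001 (lower bound), 1000 -> 0.0035 (halfway up),
-- 2000 -> 0.006 (upper bound), 4000 -> 0.001 (end of the cycle)
```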
In addition to the triangular policy, the following CLR policies are discussed in this paper (both are sketched in code below the list):
1. triangular2: the same as the triangular policy except the learning rate difference is cut in half at the end of each cycle. This means the learning rate difference drops after each cycle.
2. exp_range: the learning rate varies between the minimum and maximum boundaries and each boundary value declines by an exponential factor of gamma^iteration.
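A minimal sketch of both variants, reusing the cycle and x computations from the Torch snippet above (the function names are illustrative; the scaling follows the descriptions of the two policies):

```lua
-- triangular2: halve the amplitude (max_lr - base_lr) at the end of each cycle.
local function triangular2_lr(iteration, base_lr, max_lr, stepsize)
  local cycle = math.floor(1 + iteration / (2 * stepsize))
  local x = math.abs(iteration / stepsize - 2 * cycle + 1)
  return base_lr + (max_lr - base_lr) * math.max(0, 1 - x) / 2^(cycle - 1)
end

-- exp_range: decay the amplitude by a factor of gamma^iteration.
local function exp_range_lr(iteration, base_lr, max_lr, stepsize, gamma)
  local cycle = math.floor(1 + iteration / (2 * stepsize))
  local x = math.abs(iteration / stepsize - 2 * cycle + 1)
  return base_lr + (max_lr - base_lr) * math.max(0, 1 - x) * gamma^iteration
end
```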
# 3.2. How can one estimate a good value for the cycle length?
The length of a cycle and the input parameter stepsize can be easily computed from the number of iterations in an epoch. An epoch is calculated by dividing the number of training images by the batchsize used. For example, CIFAR-10 has 50,000 training images and the batchsize is 100, so an epoch = 50,000/100 = 500 iterations. The final accuracy results are actually quite robust to cycle length, but experiments show that it often is good to set stepsize equal to 2 to 10 times the number of iterations in an epoch. For example, setting stepsize = 8 × epoch with the CIFAR-10 training run (as shown in Figure 1) only gives slightly better results than setting stepsize = 2 × epoch.
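The same arithmetic in a few lines of Lua (the variable names are illustrative):

```lua
-- stepsize is derived from the epoch length, which depends only on the
-- dataset size and the batch size (CIFAR-10 numbers from the text).
local train_images = 50000
local batchsize = 100
local epoch_iters = train_images / batchsize  -- 500 iterations per epoch
local stepsize = 8 * epoch_iters              -- within the suggested 2-10x range
print(epoch_iters, stepsize)                  -- 500    4000
```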
Furthermore, there is a certain elegance to the rhythm of these cycles and it simplifies the decision of when to drop learning rates and when to stop the current training run. Experiments show that replacing each step of a constant learning rate with at least 3 cycles trains the network weights most of the way and running for 4 or more cycles will achieve even better performance. Also, it is best to stop training at the end of a cycle, which is when the learning rate is at the minimum value and the accuracy peaks.
# 3.3. How can one estimate reasonable minimum and maximum boundary values?
There is a simple way to estimate reasonable minimum and maximum boundary values with one training run of the network for a few epochs. It is a "LR range test": run your model for several epochs while letting the learning rate increase linearly between low and high LR values. This test is enormously valuable whenever you are facing a new architecture or dataset.
Figure 3. Classification accuracy as a function of increasing learning rate for 8 epochs (LR range test).
The triangular learning rate policy provides a simple mechanism to do this. For example, in Caffe, set base_lr to the minimum value and set max_lr to the maximum value. Set both the stepsize and max_iter to the same number of iterations. In this case, the learning rate will increase linearly from the minimum value to the maximum value during this short run. Next, plot the accuracy versus learning rate. Note the learning rate value when the accuracy starts to increase and when the accuracy slows, becomes ragged, or starts to fall.
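A sketch of the schedule this produces (plain Lua; the helper is illustrative and simply mirrors the triangular policy with stepsize = max_iter):

```lua
-- LR range test: one short run in which lr climbs linearly from base_lr to
-- max_lr; accuracy is then plotted against lr to choose the CLR bounds.
local function range_test_lr(iteration, base_lr, max_lr, max_iter)
  return base_lr + (max_lr - base_lr) * (iteration / max_iter)
end

-- e.g. an 8-epoch test on CIFAR-10 (500 iterations per epoch):
for _, it in ipairs({0, 2000, 4000}) do
  print(it, range_test_lr(it, 0.001, 0.02, 4000))  -- 0.001, 0.0105, 0.02
end
```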
| Dataset | LR policy | Iterations | Accuracy (%) |
|---|---|---|---|
| CIFAR-10 | fixed | 70,000 | 81.4 |
| CIFAR-10 | triangular2 | 25,000 | 81.4 |
| CIFAR-10 | decay | 25,000 | 78.5 |
| CIFAR-10 | exp | 70,000 | 79.1 |
| CIFAR-10 | exp_range | 42,000 | 82.2 |
| AlexNet | fixed | 400,000 | 58.0 |
| AlexNet | triangular2 | 400,000 | 58.4 |
| AlexNet | exp | 300,000 | 56.0 |
| AlexNet | exp | 460,000 | 56.5 |
| AlexNet | exp_range | 300,000 | 56.5 |
| GoogLeNet | fixed | 420,000 | 63.0 |
| GoogLeNet | triangular2 | 420,000 | 64.4 |
| GoogLeNet | exp | 240,000 | 58.2 |
| GoogLeNet | exp_range | 240,000 | 60.2 |
Table 1. Comparison of accuracy results on test/validation data at the end of the training.
These two learning rates are good choices for bounds; that is, set base_lr to the first value and set max_lr to the latter value. Alternatively, one can use the rule of thumb that the optimum learning rate is usually within a factor of two of the largest one that converges [2] and set base_lr to 1/3 or 1/4 of max_lr.
Figure 3 shows an example of making this type of run with the CIFAR-10 dataset, using the architecture and hyper-parameters provided by Caffe. One can see from Figure 3 that the model starts converging right away, so it is reasonable to set base_lr = 0.001. Furthermore, above a learning rate of 0.006 the accuracy rise gets rough and eventually begins to drop, so it is reasonable to set max_lr = 0.006.

Whenever one is starting with a new architecture or dataset, a single LR range test provides both a good LR value and a good range. Then one should compare runs with a fixed LR versus CLR with this range. Whichever wins can be used with confidence for the rest of one's experiments.
# 4. Experiments
The purpose of this section is to demonstrate the effectiveness of the CLR methods on some standard datasets and with a range of architectures. In the subsections below, CLR policies are used for training with the CIFAR-10, CIFAR-100, and ImageNet datasets. These three datasets and a variety of architectures demonstrate the versatility of CLR.
# 4.1. CIFAR-10 and CIFAR-100
# 4.1.1 Caffe's CIFAR-10 architecture
The CIFAR-10 architecture and hyper-parameter settings on the Caffe website are fairly standard and were used here as a baseline. As discussed in Section 3.2, an epoch is equal to 500 iterations and a good setting for stepsize is 2,000.
Figure 4. Classification accuracy as a function of iteration for 70,000 iterations.
Figure 5. Classification accuracy as a function of iteration for the CIFAR-10 dataset using adaptive learning methods. See text for explanation.
Section 3.3 discussed how to estimate reasonable minimum and maximum boundary values for the learning rate from Figure 3. Setting base_lr = 0.001 and max_lr = 0.006 is all that is needed to optimally train the network. For the triangular2 policy run shown in Figure 1, the stepsize and learning rate bounds are shown in Table 2.
| base_lr | max_lr | stepsize | start | max_iter |
|---|---|---|---|---|
| 0.001 | 0.005 | 2,000 | 0 | 16,000 |
| 0.0001 | 0.0005 | 1,000 | 16,000 | 22,000 |
| 0.00001 | 0.00005 | 500 | 22,000 | 25,000 |
Table 2. Hyper-parameter settings for CIFAR-10 example in Figure 1.
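The same staged schedule expressed as data (a sketch; the table-lookup helper is illustrative):

```lua
-- Each stage runs a triangular cycle between its own bounds until max_iter,
-- mirroring Table 2: the bounds shrink 10x and the stepsize halves per stage.
local stages = {
  {start = 0,     max_iter = 16000, base_lr = 0.001,   max_lr = 0.005,   stepsize = 2000},
  {start = 16000, max_iter = 22000, base_lr = 0.0001,  max_lr = 0.0005,  stepsize = 1000},
  {start = 22000, max_iter = 25000, base_lr = 0.00001, max_lr = 0.00005, stepsize = 500},
}

-- Pick the stage whose range contains the current iteration.
local function stage_for(iteration)
  for _, s in ipairs(stages) do
    if iteration < s.max_iter then return s end
  end
end
```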
Table 1 shows the result of running the triangular2 policy with the parameter settings in Table 2: one obtains the same test classification accuracy of 81.4% after only 25,000 iterations with the triangular2 policy as obtained by running the standard hyper-parameter settings for 70,000 iterations.
Figure 6. Batch Normalization CIFAR-10 example (provided with the Caffe download).
One might suspect that the benefits from the triangular policy derive from reducing the learning rate, because this is when the accuracy climbs the most. As a test, a decay policy was implemented where the learning rate starts at the max_lr value and then is linearly reduced to the base_lr value for stepsize number of iterations. After that, the learning rate is fixed to base_lr. For the decay policy, max_lr = 0.007, base_lr = 0.001, and stepsize = 4000. Table 1 shows that the final accuracy is only 78.5%, providing evidence that both increasing and decreasing the learning rate are essential for the benefits of the CLR method.
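A sketch of this one-sided decay control (the helper name is illustrative):

```lua
-- decay policy: ramp down once from max_lr to base_lr over `stepsize`
-- iterations, then hold at base_lr (no increasing phase, unlike CLR).
local function decay_lr(iteration, base_lr, max_lr, stepsize)
  if iteration >= stepsize then
    return base_lr
  end
  return max_lr - (max_lr - base_lr) * (iteration / stepsize)
end
```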
Figure 4 compares the exp learning rate policy in Caffe with the new exp_range policy using gamma = 0.99994 for both policies. The result is that when using the exp_range policy one can stop training at iteration 42,000 with a test accuracy of 82.2% (going to iteration 70,000 does not improve on this result). This is substantially better than the best test accuracy of 79.1% one obtains from using the exp learning rate policy.
The current Caffe download contains additional architectures and hyper-parameters for CIFAR-10, and in particular there is one with sigmoid non-linearities and batch normalization. Figure 6 compares the training accuracy using the downloaded hyper-parameters with a fixed learning rate (blue curve) to using a cyclical learning rate (red curve). As can be seen in this figure, the final accuracy for the fixed learning rate (60.8%) is substantially lower than the cyclical learning rate final accuracy (72.2%). There is clear performance improvement when using CLR with this architecture containing sigmoids and batch normalization.
Experiments were carried out with architectures featuring both adaptive learning rate methods and CLR. Table 3 lists the final accuracy values from various adaptive learning rate methods, run with and without CLR. All of the adaptive methods in Table 3 were run by invoking the respective option in Caffe. The learning rate boundaries are given in Table 3 (just below the method's name), which were determined by using the technique described in Section 3.3. Just the lower bound was used for base_lr for the fixed policy.
| LR type/bounds | LR policy | Iterations | Accuracy (%) |
|---|---|---|---|
| Nesterov [19], 0.001 - 0.006 | fixed | 70,000 | 82.1 |
| Nesterov [19], 0.001 - 0.006 | triangular | 25,000 | 81.3 |
| ADAM [16], 0.0005 - 0.002 | fixed | 70,000 | 81.4 |
| ADAM [16], 0.0005 - 0.002 | triangular | 25,000 | 79.8 |
| ADAM [16], 0.0005 - 0.002 | triangular | 70,000 | 81.1 |
| RMSprop [27], 0.0001 - 0.0003 | fixed | 70,000 | 75.2 |
| RMSprop [27], 0.0001 - 0.0003 | triangular | 25,000 | 72.8 |
| RMSprop [27], 0.0001 - 0.0003 | triangular | 70,000 | 75.1 |
| AdaGrad [5], 0.003 - 0.035 | fixed | 70,000 | 74.6 |
| AdaGrad [5], 0.003 - 0.035 | triangular | 25,000 | 76.0 |
| AdaDelta [29], 0.01 - 0.1 | fixed | 70,000 | 67.3 |
| AdaDelta [29], 0.01 - 0.1 | triangular | 25,000 | 67.3 |
Table 3. Comparison of CLR with adaptive learning rate methods. The table shows accuracy results for the CIFAR-10 dataset on test data at the end of the training.
Table 3 shows that for some adaptive learning rate methods combined with CLR, the final accuracy after only 25,000 iterations is equivalent to the accuracy obtained without CLR after 70,000 iterations. For others, it was necessary (even with CLR) to run until 70,000 iterations to obtain similar results. Figure 5 shows the curves from running the Nesterov method with CLR (reached 81.3% accuracy in only 25,000 iterations) and the Adam method both with and without CLR (both needed 70,000 iterations). When using adaptive learning rate methods, the benefits from CLR are sometimes reduced, but CLR can still be valuable as it sometimes provides benefit at essentially no cost.
# 4.1.2 ResNets, Stochastic Depth, and DenseNets
Residual networks [10, 11], and the family of variations that have subsequently emerged, achieve state-of-the-art results on a variety of tasks. Here we provide comparison experiments between the original implementations and versions with CLR for three members of this residual network family: the original ResNet [10], Stochastic Depth networks [13], and the recent DenseNets [12]. Our experiments can be readily replicated because the authors of these papers make their Torch code available.³ Since all three implementations are available using the Torch 7 framework, the experiments in this section were performed using Torch. In addition to the experiment in the previous section, these networks also incorporate batch normalization [15] and demonstrate the value of CLR for architectures with batch normalization.

Both CIFAR-10 and the CIFAR-100 datasets were used in these experiments. The CIFAR-100 dataset is similar to the CIFAR-10 data but it has 100 classes instead of 10 and each class has 600 labeled examples.

³https://github.com/facebook/fb.resnet.torch, https://github.com/yueatsprograms/Stochastic_Depth, https://github.com/liuzhuang13/DenseNet
1506.01186 | 22 | Architecture      CIFAR-10 (LR)    CIFAR-100 (LR)
ResNet            92.8 (0.1)       71.2 (0.1)
ResNet            93.3 (0.2)       71.6 (0.2)
ResNet            91.8 (0.3)       71.9 (0.3)
ResNet+CLR        93.6 (0.1-0.3)   72.5 (0.1-0.3)
SD                94.6 (0.1)       75.2 (0.1)
SD                94.5 (0.2)       75.2 (0.2)
SD                94.2 (0.3)       74.6 (0.3)
SD+CLR            94.5 (0.1-0.3)   75.4 (0.1-0.3)
DenseNet          94.5 (0.1)       75.2 (0.1)
DenseNet          94.5 (0.2)       75.3 (0.2)
DenseNet          94.2 (0.3)       74.5 (0.3)
DenseNet+CLR      94.9 (0.1-0.2)   75.9 (0.1-0.2) | 1506.01186#22 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 23 | Table 4. Comparison of CLR with ResNets [10, 11], Stochastic Depth (SD) [13], and DenseNets [12]. The table shows the average accuracy of 5 runs for the CIFAR-10 and CIFAR-100 datasets on test data at the end of the training.
The results for these two datasets on these three architectures are summarized in Table 4. The left column gives the architecture and whether CLR was used in the experiments. The other two columns give the average final accuracy from five runs, with the initial learning rate or range in parentheses; both the fixed learning rate and the range are reduced during training according to the same schedule used in the original implementation. For all three architectures, the original implementation uses an initial LR of 0.1, which we use as a baseline. | 1506.01186#23 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 24 | The accuracy results in Table 4 in the right two columns are the average final test accuracies of five runs. The Stochastic Depth implementation was slightly different from the ResNet and DenseNet implementations in that the authors split the 50,000 training images into 45,000 training images and 5,000 validation images. However, the reported results in Table 4 for the SD architecture are only test accuracies for the five runs. The learning rate range used by CLR was determined by the LR range test method, and the cycle length was chosen as a tenth of the maximum number of epochs specified in the original implementation.
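A sketch of that bookkeeping (the numbers below are hypothetical, chosen only to show the arithmetic): the cycle length in epochs is the training budget divided by ten, and the stepsize is half a cycle converted to iterations.

```cpp
#include <cstdio>

int main() {
    // Hypothetical budget: 300 epochs over 45,000 training images at batch size 128.
    const int max_epochs = 300;
    const int iters_per_epoch = 45000 / 128;                  // ~351 iterations
    const int cycle_epochs = max_epochs / 10;                 // cycle = budget / 10
    const int stepsize = cycle_epochs * iters_per_epoch / 2;  // half a cycle
    std::printf("cycle = %d epochs, stepsize = %d iterations\n", cycle_epochs, stepsize);
}
```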
In addition to the accuracy results shown in Table 4, similar results were obtained in Caffe for DenseNets [12] on CIFAR-10 using the prototxt files provided by the authors. The average accuracy of five runs with learning rates of 0.1, 0.2, 0.3 was 91.67%, 92.17%, 92.46%, respectively, but running with CLR within the range of 0.1 to 0.3, the average accuracy was 93.33%.
The results from all of these experiments show similar or better accuracy performance when using CLR versus using a fixed learning rate, even though the performance drops at | 1506.01186#24 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 25 | The results from all of these experiments show similar or better accuracy performance when using CLR versus using a fixed learning rate, even though the performance drops at
Figure 7. AlexNet LR range test; validation classification accuracy as a function of increasing learning rate.
Figure 8. Validation data classification accuracy as a function of iteration for fixed versus triangular.
some of the learning rate values within this range. These experiments confirm that it is beneficial to use CLR for a variety of residual architectures and for both CIFAR-10 and CIFAR-100.
# 4.2. ImageNet
The ImageNet dataset [21] is often used in deep learning literature as a standard for comparison. The ImageNet classification challenge provides about 1,000 training images for each of the 1,000 classes, giving a total of 1,281,167 labeled training images.
# 4.2.1 AlexNet | 1506.01186#25 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 26 | # 4.2.1 AlexNet
The Caffe website provides the architecture and hyper-parameter files for a slightly modified AlexNet [17]. These were downloaded from the website and used as a baseline. In the training results reported in this section, all weights
Figure 9. Validation data classification accuracy as a function of iteration for fixed versus triangular.
were initialized the same so as to avoid differences due to different random initializations.
Since the batchsize in the architecture file is 256, an epoch is equal to 1,281,167/256 = 5,005 iterations. Hence, a reasonable setting for stepsize is 6 epochs or 30,000 iterations. | 1506.01186#26 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
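As a quick check of the epoch arithmetic in the passage above (a throwaway sketch; the constants are the ones stated in the text):

```cpp
#include <cstdio>

int main() {
    const int images = 1281167;   // ImageNet training set size
    const int batch = 256;        // batch size in the AlexNet architecture file
    const int iters_per_epoch = (images + batch - 1) / batch;   // = 5,005
    std::printf("iters/epoch = %d, stepsize (6 epochs) = %d\n",
                iters_per_epoch, 6 * iters_per_epoch);           // ~30,000
}
```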
1506.01186 | 27 | Next, one can estimate reasonable minimum and maximum boundaries for the learning rate from Figure 7. It can be seen from this figure that the training doesn't start converging until at least 0.006, so setting base_lr = 0.006 is reasonable. However, for a fair comparison to the baseline where base_lr = 0.01, it is necessary to set the base_lr to 0.01 for the triangular and triangular2 policies, or else the majority of the apparent improvement in the accuracy will come from the smaller learning rate. As for the maximum boundary value, the training peaks and drops above a learning rate of 0.015, so max_lr = 0.015 is reasonable. For comparing the exp_range policy to the exp policy, setting base_lr = 0.006 and max_lr = 0.014 is reasonable, and in this case one expects the average accuracy of the exp_range policy to equal the accuracy from the exp policy. | 1506.01186#27 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
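For reference, here is a sketch of the exp_range policy consistent with the description in this section (a triangular cycle whose amplitude decays by gamma per iteration; the function is illustrative, not the original solver code), using the AlexNet settings just discussed:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// exp_range sketch: triangular cycle with amplitude scaled by gamma^iteration.
double exp_range_lr(int it, double base_lr, double max_lr, int stepsize, double gamma) {
    int cycle = it / (2 * stepsize);
    double x = double(it - (2 * cycle + 1) * stepsize) / stepsize;
    double amp = (max_lr - base_lr) * std::pow(gamma, double(it));
    return base_lr + amp * std::max(0.0, 1.0 - std::fabs(x));
}

int main() {
    // Settings from this section: base_lr = 0.006, max_lr = 0.014,
    // stepsize = 30,000, gamma = 0.999995.
    for (int it = 0; it <= 300000; it += 30000)
        std::printf("iter %6d  lr = %.5f\n",
                    it, exp_range_lr(it, 0.006, 0.014, 30000, 0.999995));
}
```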
1506.01186 | 28 | Figure 9 compares the results of running with the fixed versus the triangular2 policy for the AlexNet architecture. Here, the peaks at iterations that are multiples of 60,000 should produce a classification accuracy that corresponds to the fixed policy. Indeed, the accuracy peaks at the end of a cycle for the triangular2 policy are similar to the accuracies from the standard fixed policy, which implies that the baseline learning rates are set quite well (this is also implied by Figure 7). As shown in Table 1, the final accuracies from the CLR training run are only 0.4% better than the accuracies from the fixed policy.
Figure 10 compares the results of running with the exp versus the exp_range policy for the AlexNet architecture with gamma = 0.999995 for both policies. As expected,
Figure 10. Validation data classification accuracy as a function of iteration for exp versus exp_range.
[Figure 11 plot: validation accuracy vs. learning rate for GoogLeNet on ImageNet] | 1506.01186#28 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 29 | Figure 11. GoogLeNet LR range test; validation classification accuracy as a function of increasing learning rate.
Figure 10 shows that the accuracies from the exp_range policy do oscillate around the exp policy accuracies. The advantage of the exp_range policy is that the accuracy of 56.5% is already obtained at iteration 300,000, whereas the exp policy takes until iteration 460,000 to reach 56.5%.
Finally, a comparison between the fixed and exp policies in Table 1 shows the fixed and triangular2 policies produce accuracies that are almost 2% better than their exponentially decreasing counterparts, but this difference is probably due to not having tuned gamma.
# 4.2.2 GoogLeNet/Inception Architecture
The GoogLeNet architecture was a winning entry to the ImageNet 2014 image classification competition. Szegedy et al. [25] describe the architecture in detail but did not provide the architecture file. The architecture file publicly available from Princeton4 was used in the following experiments. The GoogLeNet paper does not state the learning rate values and the hyper-parameter solver file is not avail-

4 vision.princeton.edu/pvt/GoogLeNet/ | 1506.01186#29 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 30 |
Figure 12. Validation data classification accuracy as a function of iteration for fixed versus triangular.
able for a baseline, but not having these hyper-parameters is a typical situation when one is developing a new architecture or applying a network to a new dataset. This is a situation that CLR readily handles. Instead of running numerous experiments to find optimal learning rates, the base_lr was set to a best-guess value of 0.01. | 1506.01186#30 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 31 | The first step is to estimate the stepsize setting. Since the architecture uses a batchsize of 128, an epoch is equal to 1,281,167/128 = 10,009 iterations. Hence, good settings for stepsize would be 20,000, 30,000, or possibly 40,000. The results in this section are based on stepsize = 30,000. The next step is to estimate the bounds for the learning rate, which is found with the LR range test by making a run for 4 epochs where the learning rate linearly increases from 0.001 to 0.065 (Figure 11). This figure shows that one can use bounds between 0.01 and 0.04 and still have the model reach convergence. However, learning rates above 0.025 cause the training to converge erratically. For both the triangular2 and exp_range policies, the base_lr was set to 0.01 and max_lr was set to 0.026. As above, the accuracy peaks for both these learning rate policies correspond to the same learning rate value as the fixed and exp policies. Hence, the comparisons below will focus on the peak accuracies from the CLR methods. | 1506.01186#31 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
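A sketch of the LR range test loop described above; the train/eval calls are toy stand-ins for whatever framework is in use, and the sweep values are the GoogLeNet settings from this section:

```cpp
#include <cstdio>

// Toy stand-ins: in practice these run one mini-batch of SGD and score a
// held-out set.
static double fake_acc = 0.1;
void train_batch(double lr) { fake_acc += lr * (0.5 - fake_acc) * 0.01; }
double eval_accuracy() { return fake_acc; }

// LR range test: train while the learning rate grows linearly from lr_min to
// lr_max, logging accuracy; base_lr is picked where accuracy starts rising
// and max_lr just before the accuracy curve peaks and becomes erratic.
void lr_range_test(double lr_min, double lr_max, int total_iters, int eval_every) {
    for (int it = 0; it < total_iters; ++it) {
        double lr = lr_min + (lr_max - lr_min) * it / double(total_iters);
        train_batch(lr);
        if (it % eval_every == 0)
            std::printf("lr = %.4f  acc = %.3f\n", lr, eval_accuracy());
    }
}

int main() {
    // ~4 epochs at batch size 128 (about 40,000 iterations), LR 0.001 to 0.065.
    lr_range_test(0.001, 0.065, 40000, 4000);
}
```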
1506.01186 | 32 | Figure 12 compares the results of running with the fixed versus the triangular2 policy for this architecture (due to time limitations, each training stage was not run until it fully plateaued). In this case, the peaks at the end of each cycle for the triangular2 policy produce better accuracies than the fixed policy. The final accuracy from the network trained with the triangular2 policy (Table 1) is 1.4% better than the accuracy from the fixed policy. This demonstrates that the triangular2 policy improves on a "best guess" for a fixed learning rate.
Figure 13 compares the results of running with the exp versus the exp_range policy with gamma = 0.99998. Once again, the peaks at the end of each cycle for the
Figure 13. Validation data classification accuracy as a function of iteration for exp versus exp_range.
exp_range policy produce better validation accuracies than the exp policy. The final accuracy from the exp_range policy (Table 1) is 2% better than from the exp policy.
# 5. Conclusions | 1506.01186#32 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 33 | # 5. Conclusions
The results presented in this paper demonstrate the benefits of the cyclic learning rate (CLR) methods. A short run of only a few epochs where the learning rate linearly increases is sufficient to estimate boundary learning rates for the CLR policies. Then a policy where the learning rate cyclically varies between these bounds is sufficient to obtain near optimal classification results, often with fewer iterations. This policy is easy to implement and, unlike adaptive learning rate methods, incurs essentially no additional computational expense.
This paper shows that use of cyclic functions as a learning rate policy provides substantial improvements in performance for a range of architectures. In addition, the cyclic nature of these methods provides guidance as to when to drop the learning rate values (after 3-5 cycles) and when to stop the training. All of these factors reduce the guesswork in setting the learning rates and make these methods practical tools for everyone who trains neural networks.
This work has not explored the full range of applications for cyclic learning rate methods. We plan to determine if equivalent policies work for training different architectures, such as recurrent neural networks. Furthermore, we believe that a theoretical analysis would provide an improved understanding of these methods, which might lead to improvements in the algorithms. | 1506.01186#33 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 34 | # References
[1] K. Bache, D. DeCoste, and P. Smyth. Hot swapping for online adaptation of optimization hyperparameters. arXiv preprint arXiv:1412.6599, 2014. 2
[2] Y. Bengio. Neural Networks: Tricks of the Trade, chapter Practical recommendations for gradient-based training of deep architectures, pages 437-478. Springer Berlin Heidelberg, 2012. 1, 2, 4
[3] T. M. Breuel. The effects of hyperparameters on SGD training of neural networks. arXiv preprint arXiv:1508.02788, 2015. 2
[4] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSProp and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015. 2
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011. 2, 5 | 1506.01186#34 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 35 | [6] A. P. George and W. B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167-198, 2006. 2
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580-587. IEEE, 2014. 1
[8] A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1764-1772, 2014. 1
[9] C. Gulcehre and Y. Bengio. AdaSecant: Robust adaptive secant method for stochastic gradient. arXiv preprint arXiv:1412.7419, 2014. 2 | 1506.01186#35 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 36 | [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2015. 5, 6
[11] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. 5, 6
[12] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. 5, 6
[13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016. 5, 6
[14] B. Huval, T. Wang, S. Tandon, | 1506.01186#36 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 37 | Deep networks with stochastic depth. arXiv:1603.09382, 2016. 5, 6 [14] B. Huval, T. Wang, S. Tandon,
J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, R. Cheng-Yue, F. Mujica, A. Coates, et al. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716, 2015. 1 [15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 5
[16] D. Kingma and J. L. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2015. 2, 5
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012. 1, 2, 6
[18] I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016. 2 | 1506.01186#37 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 38 | [19] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pages 372-376, 1983. 5
[20] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1600.04747, 2016. 2
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. 6
[22] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. arXiv preprint arXiv:1206.1106, 2012. 2
[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1 | 1506.01186#38 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 39 | [24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014. 1
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 1, 2, 7
[26] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1701-1708. IEEE, 2014. 1
[27] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. 2, 5 | 1506.01186#39 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 41 | } else if (lr_policy == "triangular") {
  int itr = this->iter_ - this->param_.start_lr_policy();
  if (itr > 0) {
    int cycle = itr / (2 * this->param_.stepsize());
    float x = (float)(itr - (2 * cycle + 1) * this->param_.stepsize());
    x = x / this->param_.stepsize();
    rate = this->param_.base_lr() + (this->param_.max_lr() - this->param_.base_lr())
        * std::max(double(0), | 1506.01186#41 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 42 |         * std::max(double(0), (1.0 - fabs(x)));
  } else {
    rate = this->param_.base_lr();
  }
} else if (lr_policy == "triangular2") {
  int itr = this->iter_ - this->param_.start_lr_policy();
  if (itr > 0) {
    int cycle = itr / (2 * this->param_.stepsize());
    float x = (float)(itr - (2 * cycle + 1) * this->param_.stepsize());
    x = x / this->param_.stepsize();
    rate = this->param_.base_lr() + (this->param_.max_lr() - this->param_.base_lr()) | 1506.01186#42 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01186 | 43 |         * std::min(double(1), std::max(double(0),
              (1.0 - fabs(x)) / pow(2.0, double(cycle))));
  } else {
    rate = this->param_.base_lr();
  }
} | 1506.01186#43 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404 | [
{
"id": "1504.01716"
},
{
"id": "1502.03167"
},
{
"id": "1600.04747"
},
{
"id": "1603.05027"
},
{
"id": "1508.02788"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
}
] |
1506.01066 | 0 | arXiv:1506.01066v2 [cs.CL] 8 Jan 2016
# Visualizing and Understanding Neural Models in NLP
Jiwei Li1, Xinlei Chen2, Eduard Hovy2 and Dan Jurafsky1
1Computer Science Department, Stanford University, Stanford, CA 94305, USA
2Language Technology Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
{jiweil,jurafsky}@stanford.edu {xinleic,ehovy}@andrew.cmu.edu
# Abstract | 1506.01066#0 | Visualizing and Understanding Neural Models in NLP | While neural networks have been successfully applied to many NLP tasks the
resulting vector-based models are very difficult to interpret. For example it's
not clear how they achieve {\em compositionality}, building sentence meaning
from the meanings of words and phrases. In this paper we describe four
strategies for visualizing compositionality in neural models for NLP, inspired
by similar work in computer vision. We first plot unit values to visualize
compositionality of negation, intensification, and concessive clauses, allow us
to see well-known markedness asymmetries in negation. We then introduce three
simple and straightforward methods for visualizing a unit's {\em salience}, the
amount it contributes to the final composed meaning: (1) gradient
back-propagation, (2) the variance of a token from the average word node, (3)
LSTM-style gates that measure information flow. We test our methods on
sentiment using simple recurrent nets and LSTMs. Our general-purpose methods
may have wide applications for understanding compositionality and other
semantic properties of deep networks , and also shed light on why LSTMs
outperform simple recurrent nets, | http://arxiv.org/pdf/1506.01066 | Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky | cs.CL | null | null | cs.CL | 20150602 | 20160108 | [
{
"id": "1510.03055"
},
{
"id": "1506.02078"
},
{
"id": "1506.02004"
},
{
"id": "1506.05869"
}
] |
1506.01066 | 1 | {jiweil,jurafsky}@stanford.edu {xinleic,ehovy}@andrew.cmu.edu
# Abstract
While neural networks have been successfully applied to many NLP tasks the resulting vector-based models are very difficult to interpret. For example it's not clear how they achieve compositionality, building sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce methods for visualizing a unit's salience, the amount that it contributes to the final composed meaning from first-order derivatives. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks.
# Introduction | 1506.01066#1 | Visualizing and Understanding Neural Models in NLP | While neural networks have been successfully applied to many NLP tasks the
resulting vector-based models are very difficult to interpret. For example it's
not clear how they achieve {\em compositionality}, building sentence meaning
from the meanings of words and phrases. In this paper we describe four
strategies for visualizing compositionality in neural models for NLP, inspired
by similar work in computer vision. We first plot unit values to visualize
compositionality of negation, intensification, and concessive clauses, allow us
to see well-known markedness asymmetries in negation. We then introduce three
simple and straightforward methods for visualizing a unit's {\em salience}, the
amount it contributes to the final composed meaning: (1) gradient
back-propagation, (2) the variance of a token from the average word node, (3)
LSTM-style gates that measure information flow. We test our methods on
sentiment using simple recurrent nets and LSTMs. Our general-purpose methods
may have wide applications for understanding compositionality and other
semantic properties of deep networks , and also shed light on why LSTMs
outperform simple recurrent nets, | http://arxiv.org/pdf/1506.01066 | Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky | cs.CL | null | null | cs.CL | 20150602 | 20160108 | [
{
"id": "1510.03055"
},
{
"id": "1506.02078"
},
{
"id": "1506.02004"
},
{
"id": "1506.05869"
}
] |
1506.01066 | 2 | # Introduction
Neural models match or outperform the performance of other state-of-the-art systems on a variety of NLP tasks. Yet unlike traditional feature-based classifiers that assign and optimize weights to varieties of human interpretable features (parts-of-speech, named entities, word shapes, syntactic parse features etc) the behavior of deep learning models is much less easily interpreted. Deep learning models mainly operate on word embeddings (low-dimensional, continuous, real-valued vectors) through multi-layer neural architectures, each layer of which is characterized as an array of hidden neuron units. It is unclear how deep learning models deal with composition, implementing functions like negation or intensification, or combining meaning from different parts of the sentence, filtering away
the informational chaff from the wheat, to build sentence meaning.
In this paper, we explore multiple strategies for interpreting meaning composition in neural models. We employ traditional methods like representation plotting, and introduce simple strategies for measuring how much a neural unit contributes to meaning composition, its "salience" or importance, using first derivatives.
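As a minimal sketch of the first-derivative idea, one can score each word by the magnitude of the gradient of the predicted class score with respect to that word's embedding. Everything below is a hypothetical stand-in for the trained model (a toy embedding table, a tanh recurrence, and a linear scorer), not the paper's exact implementation:

```python
import torch

torch.manual_seed(0)
embed = torch.nn.Embedding(100, 60)   # stand-in vocabulary, 60-d word vectors
W = torch.nn.Linear(60, 60)           # stand-in recurrent weights
scorer = torch.nn.Linear(60, 5)       # stand-in 5-class sentiment classifier

tokens = torch.tensor([7, 42, 3])     # a hypothetical three-word input
vecs = embed(tokens).detach().requires_grad_(True)

h = torch.zeros(60)                   # compose left to right with a tanh recurrence
for v in vecs:
    h = torch.tanh(W(h) + v)

scores = scorer(h)
scores[scores.argmax()].backward()    # back-propagate the predicted class score

# Salience of word k = |d(score) / d(e_k)|, summed over embedding dimensions.
salience = vecs.grad.abs().sum(dim=1)
print(salience)                       # one first-derivative value per input word
```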
1506.01066 | 3 | The visualization techniques presented in this work shed important light on how neural models work. For example, we illustrate that LSTM's success is due to its ability to maintain a much sharper focus on the important key words than other models; that composition across multiple clauses works competitively; that the models are able to capture negative asymmetry, an important property of semantic compositionality in natural language understanding; and that there is sharp dimensional locality, with certain dimensions marking negation and quantification in a surprisingly localist manner. Though our attempts touch only surface points of neural models, and each method has its pros and cons, together they may offer some insight into the behavior of neural models on language tasks, marking one initial step toward understanding how they achieve meaning composition in natural language processing.
The next section describes some visualization models in vision and NLP that have inspired this work. We describe the datasets and the adopted neural models in Section 3. The different visualization strategies and corresponding analytical results are presented separately in Sections 4, 5, and 6, followed by a brief conclusion.
# 2 A Brief Review of Neural Visualization
Similarity is commonly visualized graphically, generally by projecting the embedding space into two dimensions and observing that similar words tend to be clustered together (e.g., Elman (1989), Ji
1506.01066 | 4 | and Eisenstein (2014), Faruqui and Dyer (2014)). Karpathy et al. (2015) attempt to interpret recurrent neural models from a statistical point of view, but do not deeply touch on the compositionality of meanings. Other relevant attempts include Fyshe et al. (2015) and Faruqui et al. (2015).
Methods for interpreting and visualizing neural models have been much more significantly explored in vision, especially for Convolutional Neural Networks (CNNs or ConvNets) (Krizhevsky et al., 2012), multi-layer neural networks in which the original matrix of image pixels is convolved and pooled as it is passed on to hidden layers. ConvNet visualization techniques consist mainly of mapping the different layers of the network (or other features like SIFT (Lowe, 2004) and HOG (Dalal and Triggs, 2005)) back to the initial image input, thus capturing the human-interpretable information they represent in the input, and how units in these layers contribute to any final decisions (Simonyan et al., 2013; Mahendran and Vedaldi, 2014; Nguyen et al., 2014; Szegedy et al., 2013; Girshick et al., 2014; Zeiler and Fergus, 2014). Such methods include:
1506.01066 | 5 | (1) Inversion: Inverting the representations by training an additional model to project outputs from different neural levels back to the initial input images (Mahendran and Vedaldi, 2014; Vondrick et al., 2013; Weinzaepfel et al., 2011). The intuition behind reconstruction is that the pixels that are reconstructable from the current representations are the content of the representation. The inverting algorithms allow the current representation to align with corresponding parts of the original images.
(2) Back-propagation (Erhan et al., 2009; Simonyan et al., 2013) and Deconvolutional Networks (Zeiler and Fergus, 2014): Errors are back-propagated from output layers to each intermediate layer and finally to the original image inputs. Deconvolutional Networks work in a similar way, projecting outputs back to initial inputs layer by layer, with each layer associated with one supervised model for projecting upper layers to lower ones. These strategies make it possible to spot active regions, or the ones that contribute the most to the final classification decision.
1506.01066 | 7 | in interpretation.
While the above strategies inspire the work we present in this paper, there are fundamental differences between vision and NLP. In NLP words function as basic units, and hence (word) vectors rather than single pixels are the basic units. Sequences of words (e.g., phrases and sentences) are also presented in a more structured way than arrangements of pixels. In parallel to our research, independent work (Karpathy et al., 2015) has been conducted to explore a similar direction from an error-analysis point of view, by analyzing predictions and errors from recurrent neural models. Other distantly relevant works include Murphy et al. (2012) and Fyshe et al. (2015), who used a manual task to quantify the interpretability of semantic dimensions by presenting human users with a list of words and asking them to choose the one that does not belong to the list. A similar strategy is adopted by Faruqui et al. (2015), extracting the top-ranked words in each vector dimension.
# 3 Datasets and Neural Models
1506.01066 | 8 | We explore two datasets on which neural models are trained: one of relatively small scale, and one of large scale.
# 3.1 Stanford Sentiment Treebank
Stanford Sentiment Treebank is a benchmark dataset widely used for neural model evaluations. The dataset contains gold-standard sentiment labels for every parse-tree constituent, from sentences to phrases to individual words, for 215,154 phrases in 11,855 sentences. The task is to perform both fine-grained (very positive, positive, neutral, negative, and very negative) and coarse-grained (positive vs. negative) classification at both the phrase and sentence level. For more details about the dataset, please refer to Socher et al. (2013).
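For concreteness, the snippet below shows a hypothetical preprocessing step consistent with this task description, collapsing the five fine-grained labels into the binary task (with neutral items dropped); the label encoding is our assumption:

```python
# Assumed encoding: 0=very negative, 1=negative, 2=neutral, 3=positive, 4=very positive
def to_coarse(fine_label: int):
    """Map a fine-grained label to the binary task; neutral returns None."""
    if fine_label in (0, 1):
        return 0                      # negative
    if fine_label in (3, 4):
        return 1                      # positive
    return None                       # neutral: excluded from the binary setting

examples = [("painfully dull", 0), ("not bad", 2), ("riveting", 4)]
binary = [(t, to_coarse(y)) for t, y in examples if to_coarse(y) is not None]
print(binary)                         # [('painfully dull', 0), ('riveting', 1)]
```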
While many studies on this dataset use recursive parse-tree models, in this work we employ only standard sequence models (RNNs and LSTMs), since these are the most widely used current neural models and sequential visualization is more straightforward. We therefore first transform each parse-tree node to a sequence of tokens. The sequence is first mapped to a phrase/sentence representation and fed into a softmax classifier. Phrase/sentence representations are built with the following three models: a standard recurrent sequence model with tanh activation functions, LSTMs, and
1506.01066 | 9 | Bidirectional LSTMs. For details about the three models, please refer to the Appendix.
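Since the model details are deferred to the appendix, the following is only a sketch of one standard form of the recurrent composition named above, h_t = tanh(W h_{t-1} + V x_t), with the final hidden state fed to a softmax classifier; the 60-dimensional sizes follow the setup described below, and everything else is our own simplification:

```python
import torch
import torch.nn as nn

class RecurrentClassifier(nn.Module):
    """Sketch: h_t = tanh(W h_{t-1} + V x_t); final state -> softmax classifier."""
    def __init__(self, vocab=10000, dim=60, classes=5):
        super().__init__()
        self.dim = dim
        self.embed = nn.Embedding(vocab, dim)
        self.W = nn.Linear(dim, dim, bias=False)  # recurrent weights
        self.V = nn.Linear(dim, dim)              # input weights
        self.out = nn.Linear(dim, classes)        # classifier over sentiment labels

    def forward(self, tokens):                    # tokens: LongTensor (seq_len,)
        h = torch.zeros(self.dim)
        for x in self.embed(tokens):              # left-to-right composition
            h = torch.tanh(self.W(h) + self.V(x))
        return self.out(h)                        # unnormalized class scores

logits = RecurrentClassifier()(torch.tensor([5, 17, 9]))
print(logits.shape)                               # torch.Size([5])
```

An LSTM or bidirectional LSTM variant would replace the tanh cell with `nn.LSTM`, leaving the classifier unchanged.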
Training AdaGrad with mini-batches was used for training, with parameters (L2 penalty, learning rate, mini-batch size) tuned on the development set. The number of iterations is treated as a variable to tune, and parameters are harvested based on the best performance on the dev set. The number of dimensions for the word embeddings and hidden layers is set to 60, with a dropout rate of 0.1. The standard recurrent model achieves 0.429 (fine-grained) and 0.850 (coarse-grained) accuracy at the sentence level; the LSTM achieves 0.469 and 0.870, and the Bidirectional LSTM 0.488 and 0.878, respectively.
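A corresponding training-step sketch with the optimizer named above; the learning rate and L2 penalty values below are placeholders (the paper tunes them on the dev set), and dropout is omitted for brevity:

```python
import torch

model = RecurrentClassifier()                 # from the sketch above
opt = torch.optim.Adagrad(model.parameters(),
                          lr=0.05,            # placeholder: tuned on the dev set
                          weight_decay=1e-4)  # placeholder L2 penalty
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(batch):
    """One mini-batch update; batch is a list of (token_tensor, label) pairs."""
    opt.zero_grad()
    loss = sum(loss_fn(model(toks).unsqueeze(0), torch.tensor([y]))
               for toks, y in batch) / len(batch)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step([(torch.tensor([5, 17, 9]), 3)]))
```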
1506.01066 | 10 | # 3.2 Sequence-to-Sequence Models
SEQ2SEQ models are neural models that aim to generate a sequence of output text given an input. In principle, SEQ2SEQ models can be adapted to any NLP task that can be formalized as predicting outputs given inputs, serving different purposes depending on what the inputs and outputs are: machine translation, where inputs correspond to source sentences and outputs to target sentences (Sutskever et al., 2014; Luong et al., 2014), or conversational response generation, where inputs correspond to messages and outputs to responses (Vinyals and Le, 2015; Li et al., 2015). SEQ2SEQ models need to be trained on massive amounts of data for the implicit semantic and syntactic relations between pairs to be learned. SEQ2SEQ models map an input sequence to a vector representation using LSTM models and then sequentially predict tokens based on the pre-obtained representation. The model defines a distribution over outputs Y and sequentially predicts tokens given inputs X using a softmax function:
$$P(Y|X) = \prod_{t=1}^{n_y} p(y_t \mid x_1, x_2, \ldots, x_{n_x}, y_1, y_2, \ldots, y_{t-1}) = \prod_{t=1}^{n_y} \frac{\exp\big(f(h_{t-1}, e_{y_t})\big)}{\sum_{y'} \exp\big(f(h_{t-1}, e_{y'})\big)}$$
1506.01066 | 11 | where $f(h_{t-1}, e_{y_t})$ denotes the activation function between $h_{t-1}$ and $e_{y_t}$, and $h_{t-1}$ is the representation output from the LSTM at time $t-1$. At each time step of word prediction, SEQ2SEQ models combine the current token with previously built embeddings for next-step word prediction.
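As a worked numeric example of the factorization above; the toy dimensions and the dot-product choice of f are our own assumptions, not a claim about the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 8, 4                       # toy vocabulary and hidden sizes
E = rng.normal(size=(V, d))       # output word embeddings e_y
h_prev = rng.normal(size=d)       # h_{t-1}: LSTM output at the previous step

# p(y_t = y) = exp(f(h_{t-1}, e_y)) / sum_{y'} exp(f(h_{t-1}, e_{y'}))
logits = E @ h_prev               # here f is taken to be a dot product
p = np.exp(logits - logits.max()) # max-shift for numerical stability
p /= p.sum()
print(p.round(3), p.sum())        # a distribution over the next token
```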
For ease of visualization, we turn to the most straightforward task, the autoencoder, where inputs and outputs are identical. The goal of an autoencoder is to reconstruct inputs from the pre-obtained representation. We would like to see how individual input tokens affect the overall sentence representation and each of the tokens predicted in the output. We trained the autoencoder on a subset of the WMT'14 corpus containing 4 million English sentences with an average length of 22.5 words, following the training protocols described in (Sutskever et al., 2014).
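A minimal sketch of such a sequence autoencoder, assuming a single-layer LSTM encoder/decoder with teacher forcing; this is an illustrative reduction, not the system trained on WMT'14:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encode a sentence to a vector, then reconstruct it token by token."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True)
        self.dec = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        x = self.embed(tokens)
        _, state = self.enc(x)                 # the sentence representation
        dec_in = torch.zeros_like(x)
        dec_in[:, 1:] = x[:, :-1]              # teacher forcing: shift right
        y, _ = self.dec(dec_in, state)
        return self.out(y)                     # logits: (batch, seq_len, vocab)

model = LSTMAutoencoder()
tokens = torch.randint(0, 1000, (2, 5))        # two toy "sentences"
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tokens.reshape(-1))
print(loss.item())                             # reconstruction loss on the inputs
```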
# 4 Representation Plotting
We begin with simple plots of representations to shed light on local composition, using the Stanford Sentiment Treebank.
Local Composition Figure 1 shows 60-dimensional heat-map vectors for the representations of selected words/phrases/sentences, with an emphasis on extent modification (adverbial and adjectival) and negation. Embeddings for phrases or sentences are obtained by composing word representations from the pretrained model.
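A sketch of how such heat maps can be produced; `sentence_vec` is a hypothetical stand-in for the trained model's 60-dimensional composed representation:

```python
import numpy as np
import matplotlib.pyplot as plt

def sentence_vec(phrase):
    """Hypothetical stand-in for the trained model's composed representation."""
    rng = np.random.default_rng(abs(hash(phrase)) % (2**32))
    return rng.normal(size=60)

phrases = ["good", "not good", "incredibly good"]
grid = np.stack([sentence_vec(p) for p in phrases])

plt.imshow(grid, cmap="coolwarm", aspect="auto")   # red/blue shifts as in Figure 1
plt.yticks(range(len(phrases)), phrases)
plt.xlabel("hidden dimension")
plt.colorbar(label="unit value")
plt.savefig("heatmap.png")
```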
1506.01066 | 12 | The intensification part of Figure 1 shows suggestive patterns where the values of a few dimensions are strengthened by modifiers like "a lot" (the red bar in the first example), "so much" (the red bar in the second example), and "incredibly". Though the patterns for negation are not as clear, there is still a consistent reversal for some dimensions, visible as a shift between blue and red for the dimensions boxed on the left.
1506.01066 | 13 | We then visualize words and phrases using t-SNE (Van der Maaten and Hinton, 2008) in Figure 2, deliberately adding in some random words for comparative purposes. As can be seen, neural models nicely learn the properties of local compositionality, clustering negation+positive words ("not nice", "not good") together with negative words. Note also the asymmetry of negation: "not bad" is clustered more with the negative than with the positive words (as shown in both Figures 1 and 2). This asymmetry has been widely discussed in linguistics, for example as arising from markedness, since "good" is the unmarked direction of the scale (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008). This suggests that although the model does seem to focus on certain units for negation in Figure 1, the neural model is not just learning to apply a fixed transform for "not" but is able to capture subtle differences in the composition of different words.
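The projection itself can be reproduced with scikit-learn; the sketch below reuses the hypothetical `sentence_vec` stand-in from the earlier heat-map sketch:

```python
import numpy as np
from sklearn.manifold import TSNE

def sentence_vec(phrase):  # same hypothetical stand-in as in the heat-map sketch
    rng = np.random.default_rng(abs(hash(phrase)) % (2**32))
    return rng.normal(size=60)

phrases = ["good", "bad", "nice", "not good", "not bad", "not nice", "dull"]
X = np.stack([sentence_vec(p) for p in phrases])

# perplexity must be smaller than the number of samples; tiny toy set here
proj = TSNE(n_components=2, perplexity=3.0, random_state=0).fit_transform(X)
for (x, y), p in zip(proj, phrases):
    print(f"{p:10s} -> ({x:7.2f}, {y:7.2f})")
```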