{
"ID": "1KtU2ya2zh5",
"Title": "META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions",
"Keywords": "Nonconvex Optimization, Stochastic Optimization, Adaptive Algorithms, Variance Reduction",
"URL": "https://openreview.net/forum?id=1KtU2ya2zh5",
"paper_draft_url": "/references/pdf?id=0sJkk00mSG",
"Conferece": "ICLR_2023",
"track": "Optimization (eg, convex and non-convex optimization)",
"acceptance": "Reject",
"review_scores": "[['3', '6', '3'], ['3', '5', '3'], ['3', '6', '4'], ['3', '5', '3']]",
"input": {
"source": "CRF",
"title": "META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions",
"authors": [],
"emails": [],
"sections": [
{
"heading": null,
"text": "1 INTRODUCTION\nIn this paper, we consider the stochastic optimization problem in the form\nmin x\u2208Rd\nF (x) := E\u03be\u223cD [f(x, \u03be)] , (1)\nwhere F : Rd \u2192 R is possibly non-convex. We assume only access to a first-order stochastic oracle via sample functions f(x, \u03be), where \u03be comes from a distribution D representing the randomness in the sampling process. Optimization problems of this form are ubiquitous in machine learning and deep learning. Empirical risk minimization (ERM) is one instance, where F (x) is the loss function that can be evaluated by a sample or a minibatch represented by \u03be.\nAn important advance in solving Problem (1) is the recent development of variance reduction (VR) techniques that improve the convergence rate to critical points of vanilla SGD from O(1/T 1/4) to O(1/T 1/3) (Fang et al., 2018; Li et al., 2021) for the class of mean-squared smooth functions (Arjevani et al., 2019). In contrast to earlier VR algorithms which often require the computation of the gradients over large batches, recent methods such as Cutkosky & Orabona (2019); Levy et al. (2021); Huang et al. (2021) avoid this drawback by using a weighted average of past gradients, often known as momentum. When the weights are selected appropriately, momentum reduces the error in the gradient estimates which improves the convergence rate.\nA different line of work on adaptive methods (Duchi et al., 2011; Kingma & Ba, 2014), some of which incorporate momentum techniques, have shown tremendous success in practice. These adap-\ntive methods remove the burden of obtaining certain problem-specific parameters, such as smoothness, in order to set the right step size to guarantee convergence. STORM+ (Levy et al., 2021) is the first algorithm to bridge the gap between fully-adaptive algorithms and VR methods, achieving the variance-reduced convergence rate of O(1/T 1/3) while not requiring knowledge of any problemspecific parameter. This is also the first work to demonstrate the interplay between adaptive momentum and step sizes to adapt to the problem\u2019s structure, while still achieving the VR rate. However, STORM+ relies on a strong assumption that the function values are bounded, which generally does not hold in practice. Moreover, the convergence rate of STORM+ has high polynomial dependencies on the problem parameters, compared to what can be achieved by appropriately configuring the step sizes and momentum parameters given knowledge of the problem parameters (see Section 3.1).\nOur contributions: In this work, we propose META-STORM-SG and META-STORM, two flexible algorithmic frameworks that attain the optimal variance-reduced convergence rate for general nonconvex objectives. Both of them generalize STORM+ by allowing a wider range of parameter selection and removing the restrictive bounded function value assumption while maintaining its desirable fully-adaptive property \u2013 eliminating the need to obtain any problem-specific parameter. These have been enabled via our novel analysis framework that also establishes a convergence rate with much better dependency on the problem parameters. We present a comparison of META-STORM and its sibling META-STORM-SG against recent VR methods in Table 1. 
In the appendix, we propose another algorithm, META-STORM-NA, with even less restrictive assumptions, at the cost of losing adaptivity to the variance parameter.

We complement our theoretical results with experiments across three common tasks: image classification, masked language modeling, and sentiment analysis. Our algorithms improve upon the previous work, STORM+. Furthermore, the addition of heuristics such as exponential moving averages and per-coordinate updates improves our algorithms' generalization performance. These versions of our algorithms are shown to be competitive with widely used algorithms such as Adam and AdamW.

1.1 RELATED WORK

Variance reduction methods for stochastic non-convex optimization: Variance reduction was introduced for non-convex optimization by Allen-Zhu & Hazan (2016); Reddi et al. (2016) in the context of finite-sum optimization, achieving faster convergence than full gradient descent. These methods were first improved by Lei et al. (2017) and later by Fang et al. (2018); Li et al. (2021), both of which achieve an $O(1/T^{1/3})$ convergence rate, matching the lower bounds in Arjevani et al. (2019). However, these earlier methods periodically need to compute the full gradient (in the finite-sum case) or a giant batch at a checkpoint, which can be quite costly. Shortly after, Cutkosky & Orabona (2019) and Tran-Dinh et al. (2019) introduced a different approach that utilizes stochastic gradients from previous time steps instead of computing the full gradient at a checkpoint. These methods are framed as momentum-based methods, as they are similar to using a weighted average of the gradient estimates to achieve variance reduction. Recently, SUPER-ADAM (Huang et al., 2021) integrated STORM into a larger framework of adaptive algorithms, but loses adaptivity to the variance parameter $\sigma$. At the same time, STORM+ (Levy et al., 2021) proposed a fully adaptive version of STORM, which our work builds upon.

Adaptive methods for stochastic non-convex optimization: Classical methods, like SGD (Ghadimi & Lan, 2013), typically require knowledge of problem parameters, such as the smoothness and the variance of the stochastic gradients, to set the step sizes. In contrast, adaptive methods (Duchi et al., 2011; Tieleman et al., 2012; Kingma & Ba, 2014) forgo this requirement: their step sizes rely only on the stochastic gradients obtained by the algorithm. Although these adaptive methods were originally designed for convex optimization, they enjoy great success and popularity in highly non-convex practical applications such as training deep neural networks, often making them the method of choice in practice. As a result, the theoretical understanding of adaptive methods for non-convex problems has received significant attention in recent years. The works of Ward et al. (2019); Kavis et al. (2021) propose convergence analyses of AdaGrad under various assumptions. Among VR methods, STORM+ is the only fully adaptive algorithm that does not require knowledge of any problem parameter. Our work builds on and generalizes STORM+, removing the bounded function value assumption while obtaining much better dependencies on the problem parameters.

1.2 PROBLEM DEFINITION AND ASSUMPTIONS

We study stochastic non-convex optimization problems in which the objective function $F : \mathbb{R}^d \to \mathbb{R}$ has the form $F(x) := \mathbb{E}_{\xi \sim \mathcal{D}}[f(x, \xi)]$, where $f(\cdot, \xi)$ is a sampling function depending on a random variable $\xi$ drawn from a distribution $\mathcal{D}$.
In the remainder of the paper, we omit $\mathcal{D}$ from $\mathbb{E}_{\xi \sim \mathcal{D}}[f(x, \xi)]$ for simplicity. $\|\cdot\|$ denotes $\|\cdot\|_2$ for brevity, and $[T] := \{1, 2, \cdots, T\}$. The analysis of our algorithms relies on the following assumptions 1–5:

1. Lower bounded function value: $F^* := \inf_{x \in \mathbb{R}^d} F(x) > -\infty$.
2. Unbiased estimator with bounded variance: we assume access to $\nabla f(x, \xi)$ satisfying $\mathbb{E}_\xi[\nabla f(x, \xi)] = \nabla F(x)$ and $\mathbb{E}_\xi[\|\nabla f(x, \xi) - \nabla F(x)\|^2] \le \sigma^2$ for some $\sigma \ge 0$.
3. Averaged $\beta$-smoothness: $\mathbb{E}_\xi[\|\nabla f(x, \xi) - \nabla f(y, \xi)\|^2] \le \beta^2 \|x - y\|^2$ for all $x, y \in \mathbb{R}^d$.
4. Bounded stochastic gradients: $\|\nabla f(x, \xi)\| \le \hat{G}$ for all $x \in \mathbb{R}^d$ and $\xi \in \mathrm{support}(\mathcal{D})$, for some $\hat{G} \ge 0$.
5. Bounded stochastic gradient differences: $\|\nabla f(x, \xi) - \nabla f(x, \xi')\| \le 2\hat{\sigma}$ for all $x \in \mathbb{R}^d$ and $\xi, \xi' \in \mathrm{support}(\mathcal{D})$, for some $\hat{\sigma} \ge 0$.

Assumptions 1, 2 and 3 are standard in the VR setting (Arjevani et al., 2019). Assumption 5 is weaker than the assumptions made in the prior works based on the STORM framework (Cutkosky & Orabona, 2019; Levy et al., 2021), which assume that the stochastic gradients are bounded, i.e., Assumption 4. We note that Assumption 4 implies Assumption 5 with $\hat{\sigma}$ replaced by $\hat{G}$; thus we only have to consider $\hat{\sigma} = O(\hat{G})$. To better understand Assumption 5, fix $\xi \in \mathrm{support}(\mathcal{D})$ and consider another $\xi' \sim \mathcal{D}$; then, by the convexity of $\|\cdot\|$, $\|\nabla f(x, \xi) - \nabla F(x)\| = \|\nabla f(x, \xi) - \mathbb{E}_{\xi'}[\nabla f(x, \xi')]\| \le \mathbb{E}_{\xi'}[\|\nabla f(x, \xi) - \nabla f(x, \xi')\|] \le 2\hat{\sigma}$. Hence Assumption 5 implies a stronger version of Assumption 2, and for this reason we can consider $\sigma = O(\hat{\sigma})$.

¹This bound holds when $\sigma^2 > 0$ and $T$ is large enough.

Algorithm 1 META-STORM-SG
Input: initial point $x_1 \in \mathbb{R}^d$. Parameters: $a_0, b_0, \eta$, $p \in [\frac14, \frac12]$, $p + 2q = 1$.
Sample $\xi_1 \sim \mathcal{D}$, $d_1 = \nabla f(x_1, \xi_1)$
for $t = 1, \cdots, T$ do:
  $a_{t+1} = \left(1 + \sum_{i=1}^t \|\nabla f(x_i, \xi_i)\|^2 / a_0^2\right)^{-2/3}$
  $b_t = \left(b_0^{1/p} + \sum_{i=1}^t \|d_i\|^2\right)^p / a_{t+1}^q$
  $x_{t+1} = x_t - \frac{\eta}{b_t} d_t$
  Sample $\xi_{t+1} \sim \mathcal{D}$
  $d_{t+1} = \nabla f(x_{t+1}, \xi_{t+1}) + (1 - a_{t+1})(d_t - \nabla f(x_t, \xi_{t+1}))$
end for
Output $x_{out} = x_t$ where $t \sim \mathrm{Uniform}([T])$.

Algorithm 2 META-STORM
Input: initial point $x_1 \in \mathbb{R}^d$. Parameters: $a_0, b_0, \eta$, $p \in [\frac{3-\sqrt{7}}{2}, \frac12]$, $p + 2q = 1$.
Sample $\xi_1 \sim \mathcal{D}$, $d_1 = \nabla f(x_1, \xi_1)$, $a_1 = 1$
for $t = 1, \cdots, T$ do:
  $b_t = \left(b_0^{1/p} + \sum_{i=1}^t \|d_i\|^2\right)^p / a_t^q$
  $x_{t+1} = x_t - \frac{\eta}{b_t} d_t$
  Sample $\xi_{t+1} \sim \mathcal{D}$
  $a_{t+1} = \left(1 + \sum_{i=1}^t \|\nabla f(x_i, \xi_i) - \nabla f(x_i, \xi_{i+1})\|^2 / a_0^2\right)^{-2/3}$
  $d_{t+1} = \nabla f(x_{t+1}, \xi_{t+1}) + (1 - a_{t+1})(d_t - \nabla f(x_t, \xi_{t+1}))$
end for
Output $x_{out} = x_t$ where $t \sim \mathrm{Uniform}([T])$.

Additional assumptions made in the prior works (Cutkosky & Orabona, 2019; Levy et al., 2021; Huang et al., 2021) include the following:
3'. Almost surely $\beta$-smooth: $\|\nabla f(x, \xi) - \nabla f(y, \xi)\| \le \beta\|x - y\|$ for all $x, y \in \mathbb{R}^d$ and $\xi \in \mathrm{support}(\mathcal{D})$.
6. Bounded function values: there exists $B \ge 0$ such that $|F(x) - F(y)| \le B$ for all $x, y \in \mathbb{R}^d$.

We remark that 3' is strictly stronger than 3 and is not a standard assumption in Arjevani et al. (2019). Moreover, assumption 6, which plays a critical role in the analysis of Levy et al. (2021), is relatively strong and is not always satisfied in non-convex optimization. Our work removes these two restrictive assumptions and also improves the dependency on the problem parameters.

2 OUR ALGORITHMS

In this section, we introduce our two main algorithms, META-STORM-SG and META-STORM, shown in Algorithm 1 and Algorithm 2 respectively. Our algorithms follow the generic framework of momentum-based variance-reduced SGD put forward by STORM (Cutkosky & Orabona, 2019). The STORM template incorporates momentum and variance reduction as follows:

$$d_t = \underbrace{a_t \nabla f(x_t, \xi_t) + (1-a_t) d_{t-1}}_{\text{momentum}} + \underbrace{(1-a_t)\left(\nabla f(x_t, \xi_t) - \nabla f(x_{t-1}, \xi_t)\right)}_{\text{variance reduction}} \qquad (2)$$
$$x_{t+1} = x_t - \frac{\eta}{b_t} d_t. \qquad (3)$$

The first variant, META-STORM-SG, similarly to prior works, uses the gradient norms when setting $a_t$ and, likewise, requires the strong assumption that the stochastic gradients are bounded. The major difference lies in the structure of the momentum parameters and the step sizes and their relationship, which is further developed in the second algorithm, META-STORM, so that Assumption 4 can be relaxed to Assumption 5. We now highlight our key algorithmic contributions and how they depart from prior works.

A first point of departure is our use of stochastic gradient differences when setting the momentum parameter $a_t$ in META-STORM: prior works set $a_t$ based on the stochastic gradients, while META-STORM sets $a_t$ based on the difference of two gradient estimates computed with samples $\xi_{t-1}$ and $\xi_t$ from two different time steps at the same point $x_{t-1}$. The gradient difference can be viewed as a proxy for the variance $\sigma^2$, which allows us to require only the mild Assumption 5 in the analysis. With this choice, our algorithm obtains the best dependency on the problem parameters. On the other hand, the coefficient $1 - a_{t+1}$ in the update for $d_{t+1}$ now depends on $\xi_{t+1}$, and addressing this correlation requires a more careful analysis. The second point of departure is the setting of the step sizes $b_t$ and their relationship to the momentum parameters $a_t$ in both META-STORM-SG and META-STORM. We propose a general update rule $b_t = (b_0^{1/p} + \sum_{i=1}^t \|d_i\|^2)^p / a_t^q$ that allows a broad range of choices for $p$ and $q$ subsuming prior works. In practice, different problem domains may benefit from different choices of $p$ and $q$. Our framework captures prior works such as the STORM+ update $b_t = (\sum_{i=1}^t \|d_i\|^2 / a_{i+1})^{1/3}$ using a different but related choice of momentum parameters and a simpler update that uses only the current momentum value $a_t$ instead of all the previous momentum values $a_{i+1}$ with $i \le t$. We further motivate and provide intuition for our algorithmic choices in Section 3.
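To make the template (2)–(3) and Algorithm 2 concrete, here is a minimal NumPy sketch of one run of META-STORM, written against the abstract oracle `grad_f` and sampler `sample_xi` from the earlier sketch; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def meta_storm(x1, grad_f, sample_xi, T, eta=1.0, a0=1.0, b0=1e-8, p=0.5):
    """Sketch of META-STORM (Algorithm 2); p + 2q = 1 fixes q."""
    q = (1.0 - p) / 2.0
    x, xi = x1.copy(), sample_xi()
    g_curr = grad_f(x, xi)            # grad f(x_1, xi_1)
    d, a = g_curr.copy(), 1.0         # d_1, a_1 = 1
    sum_d, sum_diff = 0.0, 0.0        # running sums for b_t and a_{t+1}
    iterates = []
    for t in range(T):
        sum_d += d @ d
        b = (b0 ** (1.0 / p) + sum_d) ** p / a ** q      # b_t uses a_t
        iterates.append(x.copy())
        x_next = x - (eta / b) * d
        xi_next = sample_xi()
        g_cross = grad_f(x, xi_next)                     # grad f(x_t, xi_{t+1})
        sum_diff += (g_curr - g_cross) @ (g_curr - g_cross)
        a = (1.0 + sum_diff / a0 ** 2) ** (-2.0 / 3.0)   # a_{t+1}
        g_curr = grad_f(x_next, xi_next)                 # grad f(x_{t+1}, xi_{t+1})
        d = g_curr + (1.0 - a) * (d - g_cross)           # d_{t+1}
        x, xi = x_next, xi_next
    return iterates[np.random.default_rng().integers(T)]  # x_out ~ Uniform([T])
```

Note the two oracle calls per iteration (at $x_t$ and $x_{t+1}$ with the same sample $\xi_{t+1}$), which is the extra price VR methods pay relative to SGD; META-STORM-SG differs only in accumulating $\|\nabla f(x_i, \xi_i)\|^2$ for $a_{t+1}$ and in using $a_{t+1}^q$ in $b_t$.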
We note that our algorithms use only the stochastic gradient information they receive and do not require any knowledge of the problem parameters. We provide an overview and intuition for our algorithms in Section 3 and give the complete analysis in the appendix. Our analysis departs significantly from prior works such as STORM+; it allows us to forgo the bounded function value assumption and to improve the convergence rate's dependency on the problem parameters. It remains an interesting open question to determine the best convergence rate achievable when the function values are bounded.

We can further relax Assumption 5 in another new algorithm, META-STORM-NA (Algorithm 4), provided in Section H of the appendix. To the best of our knowledge, META-STORM-NA is the only adaptive algorithm that enjoys the convergence rate $\tilde{O}(1/T^{1/3})$ under only the weakest assumptions 1–3. It also allows a wide range of choices for $p \in (0, \frac12]$. The tradeoff is that the algorithm does not adapt to the variance parameter $\sigma$. For the detailed analysis, we refer readers to Section H.

Finally, we state the convergence rates obtained by Algorithms 1 and 2 in the following theorems. The convergence rates for general $p$ are given in the appendix.

Theorem 2.1. Under assumptions 1–4 in Section 1.2, with the choice $p = \frac12$ and setting $a_0 = b_0 = \eta = 1$ to simplify the final bound, META-STORM-SG ensures that
$$\mathbb{E}\left[\|\nabla F(x_{out})\|^{2/3}\right] = O\left(\frac{W_1 \mathbb{1}\left[(\sigma^2 T)^{1/3} \le W_1\right]}{T^{1/3}} + \left(W_2 + W_3 \log^{2/3}\left(1 + \sigma^2 T\right)\right)\left(\frac{1}{T^{1/3}} + \frac{\sigma^{2/9}}{T^{2/9}}\right)\right)$$
where $W_1 = O\big(F(x_1) - F^* + \sigma^2 + \hat{G}^2 + \beta(1 + \hat{G}^2)\log(\beta + \hat{G}^2\beta)\big)$, $W_2 = O\big((F(x_1) - F^*)^{2/3} + \sigma^{4/3} + \hat{G}^{4/3} + (1 + \hat{G}^{4/3})\beta^{2/3}\log^{2/3}(\beta + \hat{G}^2\beta)\big)$ and $W_3 = O\big((1 + \hat{G}^{4/3})\beta^{2/3}\big)$.

We note that when $\sigma^2 > 0$ and $T$ is large enough, the effect of $W_1$ is eliminated. Combining Theorem 2.1 and Markov's inequality, we immediately have the following corollary.

Corollary 2.2. Under the same setting as Theorem 2.1, assuming additionally that $\sigma^2 > 0$ and $T$ is large enough, for any $0 < \delta < 1$, with probability $1 - \delta$,
$$\|\nabla F(x_{out})\| \le O\left(\frac{\kappa_1 + \kappa_2 \log(1 + \sigma^2 T)}{\delta^{3/2}}\left(\frac{1}{T^{1/2}} + \frac{\sigma^{1/3}}{T^{1/3}}\right)\right)$$
where $\kappa_1 = O\big(F(x_1) - F^* + \sigma^2 + \hat{G}^2 + \kappa_2 \log \kappa_2\big)$ and $\kappa_2 = O\big((1 + \hat{G}^2)\beta\big)$.

Theorem 2.3. Under assumptions 1–3 and 5 in Section 1.2, with the choice $p = \frac12$ and setting $a_0 = b_0 = \eta = 1$ to simplify the final bound, META-STORM ensures that
$$\mathbb{E}\left[\|\nabla F(x_{out})\|^{6/7}\right] = O\left(\left(Q_1 + Q_2 \log^{6/7}\left(1 + \sigma^2 T\right)\right)\left(\frac{1}{T^{3/7}} + \frac{\sigma^{2/7}}{T^{2/7}}\right)\right)$$
where $Q_1 = O\big((F(x_1) - F^*)^{6/7} + (\hat{\sigma}\sigma)^{6/7} + \sigma^{12/7} + \hat{\sigma}^{18/7} + (1 + \hat{\sigma}^{18/7})\beta^{6/7}\log^{6/7}(\beta + \hat{\sigma}^3\beta)\big)$ and $Q_2 = O\big((1 + \hat{\sigma}^{18/7})\beta^{6/7}\big)$.

Combining Theorem 2.3 and Markov's inequality, we also have the following corollary.

Corollary 2.4.
Under the same setting as Theorem 2.3, for any $0 < \delta < 1$, with probability $1 - \delta$,
$$\|\nabla F(x_{out})\| \le O\left(\frac{\kappa_1 + \kappa_2 \log(1 + \sigma^2 T)}{\delta^{7/6}}\left(\frac{1}{T^{1/2}} + \frac{\sigma^{1/3}}{T^{1/3}}\right)\right)$$
where $\kappa_1 = O\big(F(x_1) - F^* + \hat{\sigma}\sigma + \sigma^2 + \hat{\sigma}^3 + \kappa_2\log\kappa_2\big)$ and $\kappa_2 = O\big((1 + \hat{\sigma}^3)\beta\big)$.

We emphasize that the aim of our analysis is to provide convergence in expectation or with constant probability. In particular, we state Corollaries 2.2 and 2.4 only to give a more intuitive view of the dependency on the problem parameters. To boost the success probability and achieve a $\log\frac1\delta$ dependency on the probability margin, a common approach is to perform $\log\frac1\delta$ independent repetitions of the algorithms.

We briefly discuss the difference between the convergence rates of the two algorithms. These two rates cannot be compared directly, since Assumption 4 is stronger than Assumption 5. Additionally, as pointed out in Section 1.2, we have $\hat{\sigma} = O(\hat{G})$, and thus the term $O(\hat{\sigma}^3)$ in Corollary 2.4 is $O(\hat{G}^3)$, whereas Corollary 2.2 has an $O(\hat{G}^2)$ term. To give an intuition for why an extra higher-order term $W_1$ appears in Theorem 2.1 when $\sigma = 0$, compared with Theorem 2.3, note that when $\sigma = 0$, $d_t$ in both algorithms degenerates to $\nabla F(x_t)$. However, the coefficient $a_{t+1}$ becomes $1$ in META-STORM but not in META-STORM-SG. This discrepancy leads to $b_t$ being larger in META-STORM-SG than in META-STORM; moreover, the META-STORM $b_t$ becomes exactly the step size used in AdaGrad. Due to the larger $b_t$ when $\sigma = 0$, it is reasonable to expect a slower convergence rate for META-STORM-SG, and the appearance of the term $W_1$ reflects that.

3 OVERVIEW OF MAIN IDEAS AND ANALYSIS

In this section, we give an overview of our novel analysis framework. We first present a basic non-adaptive algorithm and its analysis to motivate the algorithmic choices made by our adaptive algorithms. We then discuss how to turn the non-adaptive algorithm into an adaptive one. Section D in the appendix gives a proof sketch of Theorem 2.3 for the special case $p = \frac12$ that illustrates the main ideas used in the analyses of all of our algorithms. The complete analyses are in the appendix.

3.1 NON-ADAPTIVE ALGORITHM

As a warm-up towards our fully adaptive algorithms and their analysis, we start with a basic non-adaptive algorithm and analysis that will guide our algorithmic choices and provide intuition for our analysis. The algorithm instantiates the STORM template using fixed choices $a_t = a$ and $b_t/\eta = b$ for the momentum and step size. In the following, we outline an analysis of the algorithm and derive appropriate choices for the values $a$ and $b$.

Algorithm: As noted above, the algorithm performs the following updates:
$$x_{t+1} = x_t - \frac1b d_t; \qquad d_{t+1} = \nabla f(x_{t+1}, \xi_{t+1}) + (1 - a)(d_t - \nabla f(x_t, \xi_{t+1})).$$
For simplicity, we assume $d_1 = \nabla F(x_1)$. Alternatively, one can use a standard minibatch to set $d_1 = \frac1m \sum_{i=1}^m \nabla f(x_1; \xi_i)$ with a proper $m$ leading to small variance, as in previous non-adaptive analyses (Fang et al., 2018; Zhou et al., 2018; Tran-Dinh et al., 2019).

Key idea: We start by introducing some convenient notation.
Let $\epsilon_t = d_t - \nabla F(x_t)$ be the stochastic error (in particular, $\epsilon_1 = 0$) and
$$H_t := \sum_{i=1}^t \|\nabla F(x_i)\|^2, \qquad D_t := \sum_{i=1}^t \|d_i\|^2, \qquad E_t := \sum_{i=1}^t \|\epsilon_i\|^2.$$

First, to bound $\mathbb{E}[\|\nabla F(x_{out})\|]$, where $x_{out}$ is an iterate chosen uniformly at random, it suffices to upper bound $\mathbb{E}[H_T]$; we can then translate this term into a convergence guarantee for $\mathbb{E}[\|\nabla F(x_{out})\|]$. An important intuition from STORM/STORM+ is that incorporating VR as in (2) leads to a decrease over time of the error term $\epsilon_t$. Thus, we can view $d_t$ as a proxy for $\nabla F(x_t)$. It is then natural to decompose $H_T$ in terms of $D_T$ and $E_T$: by the definition of $\epsilon_t$, we can write $H_T \le 2D_T + 2E_T$. Therefore, to upper bound $\mathbb{E}[H_T]$, it suffices to upper bound $\mathbb{E}[D_T]$ and $\mathbb{E}[E_T]$, which will be the essential steps of the analysis framework. A key insight is that $\mathbb{E}[D_T]$ and $\mathbb{E}[E_T]$ can be upper bounded in terms of each other, as we now show.

Bounding $D_T$: Starting from the function value analysis, using smoothness, the update rule $x_{t+1} = x_t - \frac1b d_t$, the definition of $\epsilon_t = d_t - \nabla F(x_t)$, and Cauchy–Schwarz, we obtain
$$F(x_{t+1}) - F(x_t) \le \langle \nabla F(x_t), x_{t+1} - x_t\rangle + \frac{\beta}{2}\|x_{t+1} - x_t\|^2 = -\frac1b \langle\nabla F(x_t), d_t\rangle + \frac{\beta}{2b^2}\|d_t\|^2$$
$$= -\frac1b\|d_t\|^2 + \frac1b\langle \epsilon_t, d_t\rangle + \frac{\beta}{2b^2}\|d_t\|^2 \le -\frac{1}{2b}\|d_t\|^2 + \frac{1}{2b}\|\epsilon_t\|^2 + \frac{\beta}{2b^2}\|d_t\|^2.$$
Suppose we choose $b \ge 2\beta$, which ensures $\frac{\beta}{2b^2} \le \frac{1}{4b}$. Rearranging, summing over all iterations, and taking expectations, we obtain
$$\mathbb{E}[D_T] \le 4b\,\mathbb{E}[F(x_1) - F(x_{T+1})] + 2\mathbb{E}[E_T] \le 4b(F(x_1) - F^*) + 2\mathbb{E}[E_T]. \qquad (4)$$

Bounding $E_T$: By the standard calculation for the stochastic error $\epsilon_t$ used in STORM, we have
$$\mathbb{E}\left[\|\epsilon_{t+1}\|^2\right] \le (1-a)^2\mathbb{E}\left[\|\epsilon_t\|^2\right] + 2(1-a)^2\frac{\beta^2}{b^2}\mathbb{E}\left[\|d_t\|^2\right] + 2a^2\sigma^2.$$
Summing over all iterations, rearranging, and using $a \in [0,1]$ and $\epsilon_1 = 0$, we obtain
$$\mathbb{E}[E_T] \le \frac{1}{1 - (1-a)^2}\left(2(1-a)^2\frac{\beta^2}{b^2}\mathbb{E}[D_T] + 2a^2\sigma^2 T\right) \le \frac{2\beta^2}{ab^2}\mathbb{E}[D_T] + 2a\sigma^2 T. \qquad (5)$$
Combining inequalities (4) and (5), we obtain
$$\mathbb{E}[D_T] \le 4b(F(x_1) - F^*) + \frac{4\beta^2}{ab^2}\mathbb{E}[D_T] + 4a\sigma^2 T; \qquad (6)$$
$$\mathbb{E}[E_T] \le \frac{8\beta^2}{ab}(F(x_1) - F^*) + \frac{4\beta^2}{ab^2}\mathbb{E}[E_T] + 2a\sigma^2 T. \qquad (7)$$

Ideal non-adaptive choices for $a, b$: Here, we set $a$ and $b$ to optimize the overall bound, obtaining choices that depend on the problem parameters. In the next section, we build upon these choices to obtain adaptive algorithms that use only the stochastic gradient information received by the algorithm. We observe that (6) and (7) bound $\mathbb{E}[D_T]$ and $\mathbb{E}[E_T]$ in terms of themselves, and the coefficient on the right-hand side is $\frac{4\beta^2}{ab^2}$. Suppose we set $a$ so that this coefficient is $\frac12$, i.e., $a = \frac{8\beta^2}{b^2}$ (note that this requires $b \ge 2\sqrt2\beta$, so that $a \le 1$). Plugging this choice into (6) and (7), we obtain
$$\mathbb{E}[D_T], \mathbb{E}[E_T] \le O\left(b(F(x_1) - F^*) + \frac{\beta^2\sigma^2 T}{b^2}\right).$$
The best choice for $b$ is the one that balances the two terms above: $b = \Theta\left(\frac{\beta^2\sigma^2 T}{F(x_1) - F^*}\right)^{1/3}$. Since we also need $b \ge \Omega(\beta)$, we can set $b$ to the sum of the two.
Hence, we obtain
$$a = \Theta\left(\frac{\beta^2}{b^2}\right) = \Theta\left(\frac{1}{1 + (\beta(F(x_1) - F^*))^{-2/3}(\sigma^2 T)^{2/3}}\right); \qquad (8)$$
$$b = \Theta\left(\beta + \beta^{2/3}\left(F(x_1) - F^*\right)^{-1/3}\left(\sigma^2 T\right)^{1/3}\right); \qquad (9)$$
$$\mathbb{E}[D_T], \mathbb{E}[E_T], \mathbb{E}[H_T] \le O\left(\beta\left(F(x_1) - F^*\right) + \left(\beta\left(F(x_1) - F^*\right)\right)^{2/3}\left(\sigma^2 T\right)^{1/3}\right). \qquad (10)$$

3.2 ADAPTIVE ALGORITHM

In this section, we build on the non-adaptive algorithm and its analysis from the previous section. We first motivate the algorithmic choices made by our algorithm via a thought experiment in which we pretend that $H_T, D_T, E_T$ are deterministic quantities.

Towards adaptive algorithms: To develop an adaptive algorithm, we would like to pick $a, b$ without an explicit dependence on the problem parameters, using quantities that the algorithm can track. We break this down by first considering choices that do not depend on $\beta$ but do depend on $\sigma$, and then removing the dependency on $\sigma$. As a thought experiment, let us pretend that $H_T, D_T, E_T$ are deterministic quantities. A natural choice for $a$ that mirrors the non-adaptive choice (8) is $a = (1 + \sigma^2 T)^{-2/3}$. Since we are pretending that $D_T$ is deterministic, we can set $b$ by inspecting (5):
$$E_T \overset{(5)}{\le} \frac{2\beta^2}{ab^2}D_T + 2a\sigma^2 T.$$
If we set $b = D_T^{1/2}/a^{1/4}$, we ensure that $D_T$ cancels and we obtain the desired upper bound on $E_T$. More precisely, plugging $a = (1 + \sigma^2 T)^{-2/3}$ and $b = D_T^{1/2}/a^{1/4}$ into (5), we obtain
$$E_T \overset{(5)}{\le} \frac{2\beta^2}{a^{1/2}D_T}D_T + 2a\sigma^2 T \le O\left(\beta^2\left(1 + \sigma^2 T\right)^{1/3} + \left(1 + \sigma^2 T\right)^{1/3}\right).$$
We now consider two cases for $D_T$. If $D_T \le 16\beta^2(1 + \sigma^2 T)^{1/3}$, the above inequality together with $H_T \le 2D_T + 2E_T$ implies $H_T \le O((1 + \beta^2)(1 + \sigma^2 T)^{1/3})$. Otherwise, $D_T \ge 16\beta^2(1 + \sigma^2 T)^{1/3}$ and thus $ab^2 \ge 16\beta^2$. Plugging into (6), we obtain
$$D_T \overset{(6)}{\le} O\left(\sqrt{D_T}\left(1 + \sigma^2 T\right)^{1/6}(F(x_1) - F^*) + \left(1 + \sigma^2 T\right)^{1/3}\right),$$
which solves to $D_T \le O((1 + \sigma^2 T)^{1/3}(F(x_1) - F^*)^2)$. We can again bound $H_T$ via $H_T \le 2D_T + 2E_T$. In both cases, we have the bound
$$H_T \le O\left(\left(1 + \beta^2 + (F(x_1) - F^*)^2\right)\left(1 + \sigma^2 T\right)^{1/3}\right).$$
We now turn to removing the dependency on $\sigma^2 T$ in $a$. The algorithm can also track $\tilde{H}_T := \sum_{t=1}^T \|\nabla f(x_t; \xi_t) - \nabla f(x_t; \xi_{t+1})\|^2$, which can be viewed as a proxy for $\sigma^2 T$. Replacing $\sigma^2 T$ by this proxy and making $a$ and $b$ time-dependent gives the update rules employed by our algorithm in the special case $p = \frac12$. Our update rule for general $p$ follows from a similar thought experiment.

Analysis: Using a similar approach as in the non-adaptive analysis, we can turn the above argument into a rigorous analysis.
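As a concrete companion to Section 3.1, here is a minimal sketch of the non-adaptive baseline with the ideal choices (8)–(9); the $\Theta(\cdot)$ constants are set to 1 for illustration, and $\beta$, $\sigma$, and $F(x_1) - F^*$ are assumed known, which is precisely the knowledge the adaptive algorithms dispense with.

```python
import numpy as np

def storm_nonadaptive(x1, grad_f, sample_xi, T, beta, sigma, F_gap):
    """Sketch of the non-adaptive STORM instantiation of Section 3.1.
    F_gap stands for F(x_1) - F^* (assumed > 0); Theta-constants are 1."""
    # Ideal non-adaptive choices (8)-(9), up to constants:
    b = beta + beta ** (2.0 / 3.0) * (sigma ** 2 * T) ** (1.0 / 3.0) / F_gap ** (1.0 / 3.0)
    a = min(1.0, beta ** 2 / b ** 2)   # a = Theta(beta^2 / b^2), kept in [0, 1]
    x, xi = x1.copy(), sample_xi()
    d = grad_f(x, xi)                  # stands in for d_1 = grad F(x_1) (a minibatch in practice)
    iterates = []
    for _ in range(T):
        iterates.append(x.copy())
        x_next = x - d / b
        xi_next = sample_xi()
        d = grad_f(x_next, xi_next) + (1.0 - a) * (d - grad_f(x, xi_next))
        x, xi = x_next, xi_next
    return iterates[np.random.default_rng().integers(T)]
```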
In the appendix, we give the complete analysis as well as a proof sketch in Section D that gives an overview of our main analysis techniques.

4 EXPERIMENTS

We examine the empirical performance of our methods against the previous work STORM+ (Levy et al., 2021) and popular algorithms (Adam, AdamW, AdaGrad, and SGD) on three tasks: (1) image classification on the CIFAR10 dataset (Krizhevsky et al., 2009) using ResNet18 (Ren et al., 2016) models; (2) masked language modeling via the BERT pretraining loss (Devlin et al., 2018) on the IMDB dataset (Maas et al., 2011) using DistilBERT models (Sanh et al., 2019), where we employ the standard cross-entropy loss for MLM fine-tuning (with whole-word masking and fixed test masks) and maximum sequence length 128; and (3) sentiment analysis on the SST2 dataset (Socher et al., 2013) via fine-tuning BERT models (Devlin et al., 2018). We use the standard train/validation split and run all algorithms for 4 epochs.

We use the default implementations of AdaGrad, Adam, AdamW, and SGD from PyTorch. For STORM+, we follow the authors' original implementation.² We give the complete implementation details and tables of hyperparameters for all algorithms in Section B.1 of the appendix.

Heuristics. For our algorithms, we further examine whether heuristics such as an exponential moving average (EMA) of the gradient sums (often called online moment estimation) and per-coordinate updates are beneficial. The versions with these heuristics are denoted (H) in our results below and are discussed in full detail in Section B.1.1 of the appendix.

Results. We perform our experiments on the standard train/test splits of each dataset. We tune for the best learning rate across a fixed grid for all algorithms and repeat each run 5 times. For readability, we omit error bars in the plots. Full plots with error bars, tabular results with standard deviations, and further discussion are presented in Section B.2 of the appendix.³

1. CIFAR10 (Figure 1). Overall, META-STORM-SG achieves the lowest training loss, with META-STORM and STORM+ coming in close. META-STORM with heuristics attains the best test accuracy, with Adam coming in close.
2. IMDB (Figure 2). AdamW attains the best training loss. However, META-STORM with heuristics achieves the best test loss (with AdamW coming in close). META-STORM-SG and the heuristic algorithms outperform STORM+ in minimizing both training and test loss.
3. SST2 (Figure 3). META-STORM with heuristics attains the best training loss and accuracy, above Adam and AdamW. It also achieves the best validation accuracy of all the algorithms. Furthermore, non-heuristic META-STORM and META-STORM-SG outperform STORM+. We remark that STORM+ appears to be rather unstable for this task, as some of the random runs do not converge to good stationary points.

²Link to the code of STORM+: https://github.com/LIONS-EPFL/storm-plus-code.
³The reader should keep in mind that variance-reduced algorithms like META-STORM require twice the number of gradient queries, so the improvement in performance that our algorithms exhibit does not come without cost. Additional plots and further discussion are available in Section B.

5 CONCLUSION

In this paper, we propose META-STORM-SG and META-STORM, two fully-adaptive momentum-based variance-reduced SGD frameworks that generalize STORM+ and remove its restrictive bounded function value assumption.
META-STORM and its sibling META-STORM-SG attain the optimal convergence rate with better dependency on the problem parameters than previous methods and allow a wider range of configurations. Experiments demonstrate our algorithms' effectiveness across common deep learning tasks against the previous work STORM+; when heuristics are further added, they achieve competitive performance against state-of-the-art algorithms.

Reproducibility Statement. We include the full proofs of all theorems in the appendix. For our experiments, full implementation details, including hyperparameter selection and algorithm development, are included in Section B of the appendix. We also make our source code available.

REFERENCES

Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In International Conference on Machine Learning, pp. 699–707. PMLR, 2016.

Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization. arXiv preprint arXiv:1912.02365, 2019.

Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex SGD. Advances in Neural Information Processing Systems, 32, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7), 2011.

Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. SPIDER: Near-optimal non-convex optimization via stochastic path integrated differential estimator. arXiv preprint arXiv:1807.01695, 2018.

Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.

Feihu Huang, Junyi Li, and Heng Huang. SUPER-ADAM: Faster and universal framework of adaptive gradients. arXiv preprint arXiv:2106.08208, 2021.

Ali Kavis, Kfir Yehuda Levy, and Volkan Cevher. High probability bounds for a class of nonconvex algorithms with AdaGrad stepsize. In International Conference on Learning Representations, 2021.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Non-convex finite-sum optimization via SCSG methods. Advances in Neural Information Processing Systems, 30, 2017.

Kfir Levy, Ali Kavis, and Volkan Cevher. STORM+: Fully adaptive SGD with recursive momentum for nonconvex optimization. Advances in Neural Information Processing Systems, 34, 2021.

Zhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richtárik. PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In International Conference on Machine Learning, pp. 6286–6295. PMLR, 2021.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
URL http://www.aclweb.org/anthology/P11-1015.

Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In International Conference on Machine Learning, pp. 314–323. PMLR, 2016.

Shaoqing Ren, Jian Sun, Kaiming He, and Xiangyu Zhang. Deep residual learning for image recognition. In CVPR, 2016.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL https://aclanthology.org/D13-1170.

Tijmen Tieleman, Geoffrey Hinton, et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26–31, 2012.

Quoc Tran-Dinh, Nhan H Pham, Dzung T Phan, and Lam M Nguyen. Hybrid stochastic gradient descent algorithms for stochastic nonconvex optimization. arXiv preprint arXiv:1905.05920, 2019.

Rachel Ward, Xiaoxia Wu, and Leon Bottou. AdaGrad stepsizes: Sharp convergence over nonconvex landscapes. In International Conference on Machine Learning, pp. 6677–6686. PMLR, 2019.

Dongruo Zhou, Pan Xu, and Quanquan Gu. Stochastic nested variance reduction for nonconvex optimization. Advances in Neural Information Processing Systems, 31, 2018.

A APPENDIX OUTLINE

The appendix is organized as follows.

• Section B presents the full implementation details for our algorithms and the hyperparameters used. This section also includes additional ablation studies and experiments.
• Section C introduces the notation used in the analysis of our algorithms.
• Section D presents the proof sketch of Theorem 2.3.
• Section E establishes some basic results that are used in our full analysis.
• Section F gives the analysis of META-STORM for general $p$.
• Section G gives the analysis of META-STORM-SG for general $p$.
• Section H introduces META-STORM-NA and gives its analysis for general $p$.
• Section I lists several basic inequalities that are used in our analysis.

B EXPERIMENTAL DETAILS AND ADDITIONAL EXPERIMENTS

In this section, we present the complete implementation details along with the full experimental setup. All of our experiments were conducted on two NVIDIA RTX 3090 GPUs.

B.1 IMPLEMENTATION DETAILS AND HYPERPARAMETER TUNING

In this section, we present the full implementation details of the heuristics versions, parameter selection, and hyperparameter tuning for all 3 datasets.

B.1.1 HEURISTICS VERSIONS OF META-STORM AND META-STORM-SG

Algorithm 3 Heuristic update of META-STORM and META-STORM-SG.
$$b_t = \begin{cases} \left(b_0^{1/p} + D_t\right)^p / a_t^q & \text{for META-STORM (H)} \\ \left(b_0^{1/p} + D_t\right)^p / a_{t+1}^q & \text{for META-STORM-SG (H)} \end{cases}$$
$$a_{t+1} = \left(1 + G_t/a_0^2\right)^{-2/3}, \quad \text{where} \quad D_t = \alpha D_{t-1} + (1-\alpha) d_t^2$$
$$G_t = \begin{cases} \alpha G_{t-1} + (1-\alpha)\left(\nabla f(x_t, \xi_t) - \nabla f(x_t, \xi_{t+1})\right)^2 & \text{for META-STORM (H)} \\ \alpha G_{t-1} + (1-\alpha)\left(\nabla f(x_t, \xi_t)\right)^2 & \text{for META-STORM-SG (H)} \end{cases}$$

For our algorithms, we employ the common heuristic of using an exponential moving average (EMA) scheme in the momentum and the step size.
We also perform a per-coordinate update instead of simply using the norm. With this, the update rule $x_{t+1} = x_t - \eta d_t / b_t$ becomes a coordinate-wise division, with the update rules as in Algorithm 3, where all operations between vectors are coordinate-wise multiplication, exponentiation, and division (a minimal sketch of one such update step is given at the end of this subsection, after the hyperparameter discussion). In our experiments, we set $\alpha = 0.99$, $a_0 = 1$, $b_0 = 10^{-8}$, as selected by the criteria detailed next.

B.1.2 ALGORITHM DEVELOPMENT AND DEFAULT PARAMETER SELECTION

We develop our algorithms on MNIST and tune $p$, $a_0$, and $b_0$. For $a_0$, we tune on MNIST across a range of values from $1$ to $10^8$ and find that larger values of $a_0$ are helpful. For $b_0$, we simply need a small number for numerical stability, so we pick $10^{-8}$. For the heuristic versions of our algorithms, $a_0 = 1$ gives the best results. This might be due to the per-coordinate operations removing the need to scale down the gradient-accumulated step size.

Effects of varying $p$. In Figures 4 and 5, we show the training loss and test accuracy for different values of $p$ of our algorithms on MNIST (with $a_0 = 10^8$ and $b_0 = 10^{-8}$). For each configuration, we tune the base learning rate $\eta$ across $\{10^{-3}, 10^{-2}, 10^{-1}, 1, 10\}$. The results suggest that lower values of $p$ tend to perform better. While $p = 1/3$ has performance comparable to the lowest setting of $p$, this choice is somewhat analogous to STORM+. Hence, we select the lowest possible value of $p$ for our algorithms in the subsequent experiments ($p = 0.20$ for META-STORM and $p = 0.25$ for META-STORM-SG).

For the heuristics versions of our algorithms, we perform the same experiments and show the results in Figures 6 and 7. Since $p = 0.50$ attains the lowest training loss for both heuristics versions, we select this value for all our experiments.

Default parameters. The discussion above leads to the default choice of $a_0 = 10^8$ and $b_0 = 10^{-8}$ for our algorithms, with $p = 0.20$ for META-STORM and $p = 0.25$ for META-STORM-SG, on the benchmarks in this section. For the heuristic versions, we use $p = 0.50$, $a_0 = 1$, and $b_0 = 10^{-8}$; these versions are denoted (H) in our results below. For STORM+, we follow the original authors' implementation of setting $a_0$ to the number of parameters of the model (roughly $10^8$ for ResNet18, for example). For the other baseline algorithms, we use the default parameters from the PyTorch implementations.

Hyperparameter tuning. For all algorithms, we tune only the learning rate and use the default values for the other parameters. For STORM+, the default $a_0$ is equal to the number of parameters of the model and $b_0 = 1$. For learning rate tuning, we perform a grid search across $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$ for CIFAR10 and IMDB, and across $\{10^{-5}, 2\times10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$ for SST2 ($2\times10^{-5}$ is the default learning rate for AdamW on SST2, and the extra grid point is practical since SST2 is a smaller dataset). For Adam on IMDB, no learning rate in our grid is small enough to converge, requiring additional tuning to decrease the training loss. Table 3 lists the selected learning rate for each algorithm across the datasets.
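For concreteness, the following is a minimal sketch of a single Algorithm 3 update step, with the EMA state threaded explicitly; the function signature, the state-passing convention, and the default `eta` are ours, while the other defaults mirror the values reported above.

```python
import numpy as np

def heuristic_step(x, d, D, G, a, g_t, g_tp1, eta=1e-3, alpha=0.99,
                   a0=1.0, b0=1e-8, p=0.5, variant="meta-storm"):
    """One step of the Algorithm 3 heuristic (EMA + per-coordinate update).
    All quantities are vectors and all operations are coordinate-wise.
    g_t, g_tp1 are grad f(x_t, xi_t) and grad f(x_t, xi_{t+1}); a is a_t."""
    q = (1.0 - p) / 2.0
    D = alpha * D + (1.0 - alpha) * d ** 2                    # EMA of d_t^2
    if variant == "meta-storm":
        G = alpha * G + (1.0 - alpha) * (g_t - g_tp1) ** 2    # gradient differences
    else:  # META-STORM-SG (H)
        G = alpha * G + (1.0 - alpha) * g_t ** 2              # raw gradients
    a_next = (1.0 + G / a0 ** 2) ** (-2.0 / 3.0)              # a_{t+1}, per coordinate
    a_used = a if variant == "meta-storm" else a_next         # the (H) variants differ here
    b = (b0 ** (1.0 / p) + D) ** p / a_used ** q
    x_next = x - eta * d / b                                  # coordinate-wise division
    return x_next, D, G, a_next
```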
After obtaining the best learning rate, we additionally run each algorithm across 5 different seeds to obtain error bars. In this section, we show complete plots and tabular results, along with more detailed discussion, for our experiments. The reader should note that STORM-based methods require twice the number of oracle accesses of the baselines. The plots show averages across 5 seeds along with min/max bars. The tables show the averages across 5 seeds at a range of selected epochs, and one standard deviation is included at the last epoch. In the plots and tables below, (H) denotes the version of the algorithm with the heuristics (EMA and per-coordinate updates) employed.

B.2.1 CIFAR10: RESULTS AND DISCUSSIONS

Figure 8 shows all 4 plots of the main experiments in Section 4.

Tables. Tables 4 and 5 show the training loss and accuracy for CIFAR10. Tables 6 and 7 show the test loss and accuracy for CIFAR10.

Discussion. META-STORM-SG achieves the lowest training loss and best training accuracy (with META-STORM and STORM+ coming in close), and maintains the best training loss and accuracy for the longest before the final epoch. For test loss and test accuracy, META-STORM (H) attains the best test accuracy (with Adam coming in close), while Adam attains the best test loss. While META-STORM-SG and META-STORM achieve low training loss, their generalization performance appears worse than that of their heuristic counterparts.

To further study this generalization gap, Table 8 shows the generalization gap of the different algorithms. META-STORM with heuristics and Adam achieve the smallest gaps among all the algorithms. For our algorithms, the versions with heuristics exhibit a smaller generalization gap than the versions without, with STORM+ lying in between. Interestingly, AdaGrad and SGD exhibit larger generalization gaps.

B.2.2 IMDB: FULL RESULTS AND DISCUSSIONS

Tables. Tables 9 and 10 show the train and test loss for our experiments.

Discussion. Here, AdamW achieves the best training loss, with the heuristic algorithms coming in close. For the test loss, these algorithms also perform similarly. All META-STORM algorithms (with and without heuristics) perform better than STORM+ in minimizing training loss. For test loss, META-STORM-SG performs better than STORM+ but META-STORM does not. Both heuristic versions of META-STORM and META-STORM-SG outperform STORM+.

B.2.3 SST2: FULL RESULTS AND DISCUSSIONS

Figure 10 shows all 4 plots of the main experiments for SST2.

Tables. Tables 11 and 12 present the training loss and accuracy for the SST2 experiments. Tables 13 and 14 show the validation loss and accuracy.

Discussions. Similarly to CIFAR10, we examine the generalization gap of the different algorithms in Table 15. Here, META-STORM-SG attains the lowest generalization gap between training accuracy and test accuracy, while Adam suffers the largest generalization gap among the algorithms compared in our experiments.

C.1 ASSUMPTIONS

1. Lower bounded function value: $F^* := \inf_{x \in \mathbb{R}^d} F(x) > -\infty$.
2. Unbiased estimator with bounded variance: we assume access to $\nabla f(x, \xi)$ satisfying $\mathbb{E}_\xi[\nabla f(x, \xi)] = \nabla F(x)$ and $\mathbb{E}_\xi[\|\nabla f(x, \xi) - \nabla F(x)\|^2] \le \sigma^2$ for some $\sigma \ge 0$.
3. Averaged $\beta$-smoothness: $\mathbb{E}_\xi[\|\nabla f(x, \xi) - \nabla f(y, \xi)\|^2] \le \beta^2\|x - y\|^2$ for all $x, y \in \mathbb{R}^d$.
4.
Bounded stochastic gradients: $\|\nabla f(x, \xi)\| \le \hat{G}$ for all $x \in \mathbb{R}^d$ and $\xi \in \mathrm{support}(\mathcal{D})$, for some $\hat{G} \ge 0$.
5. Bounded stochastic gradient differences: $\|\nabla f(x, \xi) - \nabla f(x, \xi')\| \le 2\hat{\sigma}$ for all $x \in \mathbb{R}^d$ and $\xi, \xi' \in \mathrm{support}(\mathcal{D})$, for some $\hat{\sigma} \ge 0$.

We remind the reader that $\sigma = O(\hat{\sigma})$ and $\hat{\sigma} = O(\hat{G})$.

C.2 NOTATIONS

In the analysis below, we employ the following notations:
$$\beta_{\max} := \max\{\beta, 1\}; \quad D_t := \sum_{i=1}^t \|d_i\|^2; \quad E_{t,s} := \sum_{i=1}^t a_{i+1}^s \|\epsilon_i\|^2;$$
$$H_t := \sum_{i=1}^t \|\nabla F(x_i)\|^2; \quad \hat{H}_t := \sum_{i=1}^t \|\nabla f(x_i, \xi_i)\|^2; \quad \tilde{H}_t := \sum_{i=1}^t \|\nabla f(x_i, \xi_i) - \nabla f(x_i, \xi_{i+1})\|^2.$$
We also write $E_t := E_{t,0} = \sum_{i=1}^t \|\epsilon_i\|^2$. We denote by $\mathcal{F}_t = \sigma(\xi_i, 1 \le i \le t)$ the sigma algebra generated by the first $t$ samples. Besides, we define $0^0 := 1$. In Section I, we list and prove all inequalities used in the subsequent proofs.

D PROOF SKETCH FOR THEOREM 2.3

In this section, to give an overview of the proof techniques, we present the proof sketch of Theorem 2.3 for the special case $p = \frac12$. We assume $\beta \ge 1$ to simplify the notation. The analysis of the fully adaptive algorithms follows an approach similar to the non-adaptive analysis given in Section 3. As before, towards our final goal of bounding $\|\nabla F(x_{out})\|$, we translate to $H_T$ and upper bound it via $D_T$ and $E_T$.

Bounding $E_T$: As in existing VR algorithms, we need to calculate how the stochastic error $\epsilon_t$ changes with each iteration. By a standard calculation, we obtain
$$a_{t+1}\|\epsilon_t\|^2 \le \|\epsilon_t\|^2 - \|\epsilon_{t+1}\|^2 + 2\|Z_{t+1}\|^2 + 2a_{t+1}^2\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\|^2 + M_{t+1} \qquad (11)$$
where
$$Z_{t+1} = \nabla f(x_{t+1}, \xi_{t+1}) - \nabla f(x_t, \xi_{t+1}) - \nabla F(x_{t+1}) + \nabla F(x_t);$$
$$M_{t+1} = 2(1-a_{t+1})^2\langle\epsilon_t, Z_{t+1}\rangle + 2(1-a_{t+1})a_{t+1}\langle\epsilon_t, \nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\rangle.$$
We note that, in META-STORM, $a_{t+1} \in \mathcal{F}_{t+1}$, which implies $\mathbb{E}[M_{t+1} \mid \mathcal{F}_t] \ne 0$. This extra term $M_{t+1}$ makes our analysis more challenging compared with previous works. We now highlight some challenges and point out how to solve them.

CHALLENGE 1. How do we obtain a term as close to $E_T$ as possible with a proper upper bound? On the left-hand side of (11), an extra coefficient $a_{t+1}$ appears in front of $\|\epsilon_t\|^2$. A straightforward option is to divide both sides by $a_{t+1}$ and then sum up to get $E_T$. However, doing so creates the following problem. Focus on the term $\|Z_{t+1}\|^2/a_{t+1}$. The averaged $\beta$-smoothness assumption gives
$$\mathbb{E}\left[\|Z_{t+1}\|^2 \mid \mathcal{F}_t\right] \le \eta^2\beta^2\frac{\|d_t\|^2}{b_t^2}.$$
However, we cannot apply this result to $\|Z_{t+1}\|^2/a_{t+1}$, since $a_{t+1} \in \mathcal{F}_{t+1}$ as noted above. If we temporarily assume $a_{t+1}^{-1} \le c\,a_t^{-1}$ for some constant $c$ (we can expect this because the change from $a_t$ to $a_{t+1}$ is not too large, by the bounded differences assumption), we get $\mathbb{E}[a_{t+1}^{-1}\|Z_{t+1}\|^2 \mid \mathcal{F}_t] \le \mathbb{E}[c\,a_t^{-1}\|Z_{t+1}\|^2 \mid \mathcal{F}_t] \le c\,\eta^2\beta^2\frac{\|d_t\|^2}{a_t b_t^2}$. Plugging in the update rule $b_t = (b_0^2 + D_t)^{1/2}/a_t^{1/4}$, we obtain $\mathbb{E}[a_{t+1}^{-1}\|Z_{t+1}\|^2 \mid \mathcal{F}_t] \le c\,\eta^2\beta^2 a_t^{-1/2}\frac{\|d_t\|^2}{b_0^2 + D_t}$.
It can be shown that $\sum_{t=1}^T \frac{\|d_t\|^2}{b_0^2 + D_t}$ can be upper bounded by $\log\frac{D_T}{b_0^2}$ up to constants, but we are still left with the extra $a_t^{-1/2}$ coefficient. To remove it, it is reasonable to divide both sides of (11) by $a_{t+1}^{1/2}$ rather than $a_{t+1}$.

CHALLENGE 2. How do we get rid of the term involving $M_{t+1}$? As discussed in Challenge 1, we want to divide both sides by $a_{t+1}^{1/2}$. Now focus on the term $a_{t+1}^{-1/2}M_{t+1}$. Again, since $a_{t+1} \in \mathcal{F}_{t+1}$, $\mathbb{E}[a_{t+1}^{-1/2}M_{t+1} \mid \mathcal{F}_t] \ne 0$. An important observation is that, if we replace $a_{t+1}$ by $a_t$ in $M_{t+1}$, we obtain a martingale difference sequence. Formally, we define
$$N_{t+1} = 2(1-a_t)^2\langle\epsilon_t, Z_{t+1}\rangle + 2(1-a_t)a_t\langle\epsilon_t, \nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\rangle.$$
Then $\mathbb{E}[N_{t+1} \mid \mathcal{F}_t]$ and $\mathbb{E}[a_t^{-1/2}N_{t+1} \mid \mathcal{F}_t]$ are both $0$. This observation tells us that, in order to bound $\mathbb{E}[\sum_{t=1}^T a_{t+1}^{-1/2}M_{t+1}]$, it suffices to bound $\mathbb{E}[\sum_{t=1}^T a_{t+1}^{-1/2}M_{t+1} - a_t^{-1/2}N_{t+1}]$. Using the Cauchy–Schwarz inequality, we show that the term $\sum_{t=1}^T a_{t+1}^{-1/2}M_{t+1} - a_t^{-1/2}N_{t+1}$ can be bounded by terms related to $\sum_{t=1}^T (a_{t+1}^{-1/2} - a_t^{-1/2})\|\epsilon_t\|^2$, $\sum_{t=1}^T a_{t+1}^{-1/2}\|Z_{t+1}\|^2$ and $\sum_{t=1}^T a_t^{3/2}\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\|^2$. We then bound these latter terms in turn and eliminate the term involving $M_{t+1}$.

After overcoming the two challenges above, we can finally show the following inequality, where $K_1, K_2, K_4$ are constants that depend only on $\sigma, \hat{\sigma}, \beta, a_0, b_0, \eta$ and are independent of $T$:
$$\mathbb{E}\left[a_{T+1}^{1/2}E_T\right] \le \mathbb{E}\left[E_{T,1/2}\right] \le K_1 + K_2\mathbb{E}\left[\log\left(1 + \tilde{H}_T/a_0^2\right)\right] + K_4\mathbb{E}\left[\log\left(1 + D_T/b_0^2\right)\right]. \qquad (12)$$

Bounding $D_T$: Following the standard non-adaptive analysis via smoothness, we obtain
$$F(x_{t+1}) \le F(x_t) - \frac{\eta}{b_t}\langle\nabla F(x_t), d_t\rangle + \frac{\eta^2\beta}{2b_t^2}\|d_t\|^2. \qquad (13)$$
Here we proceed similarly to the non-adaptive analysis from Section 3.1, but start to diverge from the analysis approach used in STORM+. The STORM+ analysis proceeds by splitting $-\langle\nabla F(x_t), d_t\rangle = -\|\nabla F(x_t)\|^2 - \langle\nabla F(x_t), \epsilon_t\rangle \le -\frac12\|\nabla F(x_t)\|^2 + \frac12\|\epsilon_t\|^2$, multiplying both sides of (13) by $b_t/\eta$, and summing over all iterations. This gives the following upper bound on $H_T$:
$$H_T = \sum_{t=1}^T \|\nabla F(x_t)\|^2 \le \sum_{t=1}^T \frac{2}{\eta}(F(x_t) - F(x_{t+1}))b_t + \sum_{t=1}^T \|\epsilon_t\|^2 + \eta\beta\sum_{t=1}^T \frac{\|d_t\|^2}{b_t}.$$
This analysis requires $F(x)$ to be bounded so that the sum $\sum_{t=1}^T \frac{2}{\eta}(F(x_t) - F(x_{t+1}))b_t$ can telescope. To remove this assumption, we go back to (13), split $-\langle\nabla F(x_t), d_t\rangle = -\|d_t\|^2 + \langle\epsilon_t, d_t\rangle$, and upper bound the inner product via the Cauchy–Schwarz inequality and the inequality $ab \le \frac{\gamma}{2}a^2 + \frac{1}{2\gamma}b^2$, which holds for any $\gamma > 0$:
$$\langle\epsilon_t, d_t\rangle \le \|\epsilon_t\|\|d_t\| \le \frac{\lambda a_{t+1}^{1/2}b_t}{2\eta\beta}\|\epsilon_t\|^2 + \frac{\eta\beta}{2\lambda a_{t+1}^{1/2}b_t}\|d_t\|^2,$$
where $\lambda > 0$ is a constant (setting $\lambda$ based on $\hat{\sigma}$ yields the best dependence on $\hat{\sigma}$). We note that this choice requires a bound on $\mathbb{E}[\sum_{t=1}^T a_{t+1}^{1/2}\|\epsilon_t\|^2]$, and $1/2$ turns out to be the smallest choice of $c$ that makes $\mathbb{E}[\sum_{t=1}^T a_{t+1}^c\|\epsilon_t\|^2]$ have constant order.
The intuition for setting $\gamma = \frac{\lambda a_{t+1}^{1/2}b_t}{\eta\beta}$ is that this coefficient ensures a constant split if $a_t$ and $b_t$ correspond to the non-adaptive choices derived in Section 3.1, which were set so that $a^{1/2}b = \Theta(\beta)$. We obtain
$$\mathbb{E}\left[\sum_{t=1}^T \frac{\|d_t\|^2}{b_t}\right] \le \frac{2}{\eta}(F(x_1) - F^*) + \underbrace{\mathbb{E}\left[\sum_{t=1}^T\left(\eta\beta + \frac{\eta\beta}{a_{t+1}^{1/2}\lambda} - b_t\right)\frac{\|d_t\|^2}{b_t^2}\right]}_{(\star)} + \underbrace{\frac{\lambda}{\eta\beta}\mathbb{E}\left[E_{T,1/2}\right]}_{(\star\star)}. \qquad (14)$$
The term $(\star)$ can be bounded using standard techniques from the analyses of adaptive algorithms. The term $(\star\star)$ has already been bounded in the previous analysis. It remains to simplify the term on the left-hand side to $D_T$. Due to the randomness of $b_t$, this is not achievable directly. However, as with the first inequality in (12), we can bridge this gap by aiming for a slightly weaker inequality that bounds $D_T^{1/2}$ instead of $D_T$. More precisely, we connect the left-hand side of (14) to $D_T^{1/2}$ as follows:
$$\sum_{t=1}^T \frac{\|d_t\|^2}{b_t} \ge \sum_{t=1}^T \frac{a_{T+1}^{1/4}\|d_t\|^2}{\left(b_0^2 + \sum_{i=1}^T\|d_i\|^2\right)^{1/2}} \ge a_{T+1}^{1/4}D_T^{1/2} - b_0. \qquad (15)$$
Plugging (15) into (14) and setting $\lambda$ appropriately, we finally obtain the following upper bound:
$$\mathbb{E}\left[a_{T+1}^{1/4}D_T^{1/2}\right] \le K_5 + K_6\mathbb{E}\left[\log\left(1 + \tilde{H}_T/a_0^2\right)\right] + K_7\mathbb{E}\left[\log\frac{K_8 + K_9\left(1 + \tilde{H}_T/a_0^2\right)^{1/3}}{b_0}\right], \qquad (16)$$
where $K_5, K_6, K_7, K_8, K_9$ depend only on $\sigma, \hat{\sigma}, \beta, a_0, b_0, \eta$ and are independent of $T$.

Combining the bounds: The final part of the analysis combines (12) and (16). In contrast to the simpler non-adaptive analysis, these inequalities bound $a_{T+1}^{1/2}E_T$ and $a_{T+1}^{1/4}D_T^{1/2}$ instead of $E_T$ and $D_T$. In order to obtain an upper bound on $H_T$ via the inequality $H_T \le 2D_T + 2E_T$, we need to connect $a_{T+1}^{1/2}E_T$ and $a_{T+1}^{1/4}D_T^{1/2}$ with $D_T$ and $E_T$. The bounded variance assumption on the stochastic gradients gives a bound $\mathbb{E}[a_{T+1}^{-3/2}] = \mathbb{E}[1 + \tilde{H}_T/a_0^2] = O(1 + \sigma^2 T)$ (note that $-3/2$ is the smallest exponent $c$ for which we can upper bound $\mathbb{E}[a_{T+1}^c]$). Combining this result and Hölder's inequality gives the bounds
$$\mathbb{E}\left[D_T^{3/7}\right] \le \mathbb{E}^{6/7}\left[a_{T+1}^{1/4}D_T^{1/2}\right]\mathbb{E}^{1/7}\left[a_{T+1}^{-3/2}\right];$$
$$\mathbb{E}\left[E_T^{3/7}\right] \le \mathbb{E}^{3/7}\left[a_{T+1}^{1/2}E_T\right]\mathbb{E}^{4/7}\left[a_{T+1}^{-3/8}\right] \le \mathbb{E}^{3/7}\left[a_{T+1}^{1/2}E_T\right]\mathbb{E}^{1/7}\left[a_{T+1}^{-3/2}\right];$$
where $3/7$ is chosen to ensure that we can use the bound on $\mathbb{E}[a_{T+1}^{-3/2}]$. Thus we obtain an upper bound on $\mathbb{E}[H_T^{3/7}]$. Finally, applying the concavity of $x^{3/7}$ to $\mathbb{E}[H_T^{3/7}]$ gives Theorem 2.3.

E BASIC ANALYSIS

As discussed in Section 3, we aim to use $E_T$ and $D_T$ to bound $H_T$. Here, we apply this framework to give some basic results that are used frequently in the full analysis of every algorithm. We first state the following decomposition in our analysis framework. The reason we use $\hat{p} \le 1$ here is that we cannot always bound $H_T$ directly, because of the randomness of $a_t$ and $b_t$ in our algorithms.

Lemma E.1. Given $\hat{p} \le 1$, we have
$$\mathbb{E}\left[H_T^{\hat{p}}\right] \le 2^{\hat{p}+1}\max\left\{\mathbb{E}\left[E_T^{\hat{p}}\right], \mathbb{E}\left[D_T^{\hat{p}}\right]\right\} \le 4\max\left\{\mathbb{E}\left[E_T^{\hat{p}}\right], \mathbb{E}\left[D_T^{\hat{p}}\right]\right\}.$$

Proof. By the definition of $H_T$, $E_T$ and $D_T$, we have $H_T \le 2E_T + 2D_T$.
Hence
$$H_T^{\hat{p}} \le (2E_T + 2D_T)^{\hat{p}} \overset{(a)}{\le} 2^{\hat{p}}E_T^{\hat{p}} + 2^{\hat{p}}D_T^{\hat{p}}$$
$$\Rightarrow \mathbb{E}\left[H_T^{\hat{p}}\right] \le 2^{\hat{p}}\left(\mathbb{E}\left[E_T^{\hat{p}}\right] + \mathbb{E}\left[D_T^{\hat{p}}\right]\right) \le 2^{\hat{p}+1}\max\left\{\mathbb{E}\left[E_T^{\hat{p}}\right], \mathbb{E}\left[D_T^{\hat{p}}\right]\right\} \overset{(b)}{\le} 4\max\left\{\mathbb{E}\left[E_T^{\hat{p}}\right], \mathbb{E}\left[D_T^{\hat{p}}\right]\right\},$$
where (a) and (b) are both due to $\hat{p} \le 1$.

E.1 VARIANCE REDUCTION ANALYSIS FOR $E_T$

As in all existing momentum-based VR methods, we need to analyze how the error term $\epsilon_t$ changes in the algorithm. In our notation, we have the following two standard lemmas.

Lemma E.2. For all $t \ge 1$, we have
$$a_{t+1}\|\epsilon_t\|^2 \le \|\epsilon_t\|^2 - \|\epsilon_{t+1}\|^2 + 2\|Z_{t+1}\|^2 + 2a_{t+1}^2\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\|^2 + M_{t+1},$$
where
$$Z_{t+1} := \nabla f(x_{t+1}, \xi_{t+1}) - \nabla f(x_t, \xi_{t+1}) - \nabla F(x_{t+1}) + \nabla F(x_t),$$
$$M_{t+1} := 2(1-a_{t+1})^2\langle\epsilon_t, Z_{t+1}\rangle + 2(1-a_{t+1})a_{t+1}\langle\epsilon_t, \nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\rangle.$$

Proof. Starting from the definition of $\epsilon_{t+1}$, we have
$$\|\epsilon_{t+1}\|^2 = \|d_{t+1} - \nabla F(x_{t+1})\|^2 = \|\nabla f(x_{t+1}, \xi_{t+1}) + (1-a_{t+1})(d_t - \nabla f(x_t, \xi_{t+1})) - \nabla F(x_{t+1})\|^2$$
$$= \|(1-a_{t+1})\epsilon_t + (1-a_{t+1})Z_{t+1} + a_{t+1}(\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1}))\|^2$$
$$= (1-a_{t+1})^2\|\epsilon_t\|^2 + \|(1-a_{t+1})Z_{t+1} + a_{t+1}(\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1}))\|^2 + M_{t+1}$$
$$\overset{(a)}{\le} (1-a_{t+1})^2\|\epsilon_t\|^2 + 2(1-a_{t+1})^2\|Z_{t+1}\|^2 + 2a_{t+1}^2\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\|^2 + M_{t+1}$$
$$\overset{(b)}{\le} (1-a_{t+1})\|\epsilon_t\|^2 + 2\|Z_{t+1}\|^2 + 2a_{t+1}^2\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla F(x_{t+1})\|^2 + M_{t+1},$$
where (a) is by $\|x+y\|^2 \le 2\|x\|^2 + 2\|y\|^2$ and (b) is by $0 \le 1 - a_{t+1} \le 1$. Adding $a_{t+1}\|\epsilon_t\|^2 - \|\epsilon_{t+1}\|^2$ to both sides, we get the desired result.

Lemma E.3. For all $t \ge 1$, we have
$$\mathbb{E}\left[\|Z_{t+1}\|^2 \mid \mathcal{F}_t\right] \le \eta^2\beta^2\frac{\|d_t\|^2}{b_t^2}.$$

Proof. From the definition of $Z_{t+1}$, we have
$$\mathbb{E}\left[\|Z_{t+1}\|^2 \mid \mathcal{F}_t\right] = \mathbb{E}\left[\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla f(x_t, \xi_{t+1}) - \nabla F(x_{t+1}) + \nabla F(x_t)\|^2 \mid \mathcal{F}_t\right]$$
$$\overset{(a)}{\le} \mathbb{E}\left[\|\nabla f(x_{t+1}, \xi_{t+1}) - \nabla f(x_t, \xi_{t+1})\|^2 \mid \mathcal{F}_t\right] \overset{(b)}{\le} \beta^2\|x_{t+1} - x_t\|^2 \overset{(c)}{=} \eta^2\beta^2\frac{\|d_t\|^2}{b_t^2},$$
where (a) is by $\mathbb{E}[\|X - \mathbb{E}[X]\|^2] \le \mathbb{E}[\|X\|^2]$, (b) is by the averaged $\beta$-smoothness assumption, and (c) is by the fact that $x_{t+1} - x_t = -\frac{\eta}{b_t}d_t$.

E.2 ON THE WAY TO BOUND $D_T$

We choose to bound the term $D_T$ instead of starting from $H_T$ as done in AdaGradNorm or STORM+; the latter approach also requires the bounded function value assumption in the analysis.

Lemma E.4. For any of META-STORM-SG, META-STORM or META-STORM-NA, we have, for any $\lambda > 0$,
$$\mathbb{E}\left[a_{T+1}^q D_T^{1-p}\right] \le b_0^{\frac1p - 1} + \frac{2}{\eta}(F(x_1) - F^*) + \mathbb{E}\left[\sum_{t=1}^T\left(\eta\beta_{\max} + \frac{\eta\beta_{\max}}{a_{t+1}^{1/2}\lambda} - b_t\right)\frac{\|d_t\|^2}{b_t^2}\right] + \frac{\lambda\,\mathbb{E}\left[E_{T,1/2}\right]}{\eta\beta_{\max}}.$$

Proof.
Using smoothness, the update rule xt+1 = xt\u2212 \u03b7bt dt and the definition of t = dt\u2212\u2207F (xt), we obtain\nF (xt+1) \u2264 F (xt) + \u3008\u2207F (xt), xt+1 \u2212 xt\u3009+ \u03b2\n2 \u2016xt+1 \u2212 xt\u20162\n= F (xt)\u2212 \u03b7\u3008\u2207F (xt), dt\u3009 bt + \u03b72\u03b2 2b2t \u2016dt\u20162 = F (xt)\u2212 \u03b7\u2016dt\u20162\nbt + \u03b7\u3008 t, dt\u3009 bt + \u03b72\u03b2 2b2t \u2016dt\u20162.\nFirst we use Cauchy-Schwarz to separate the stochastic gradient and the stochastic error terms\nF (xt+1) \u2264 F (xt)\u2212 \u03b7\u2016dt\u20162 bt + \u03bbt\u03b7\u2016 t\u20162 2bt + \u03b7\u2016dt\u20162 2\u03bbtbt + \u03b72\u03b2 2b2t \u2016dt\u20162.\nTaking\n\u03bbt = \u03bba\n1/2 t+1bt\n\u03b7\u03b2max for some \u03bb > 0. We have\n\u03b7\u2016dt\u20162\n2bt \u2264 F (xt)\u2212 F (xt+1) +\n( \u03b72\u03b2\n2b2t +\n\u03b7 2\u03bbtbt \u2212 \u03b7 2bt\n) \u2016dt\u20162 + \u03bbt\u03b7\u2016 t\u20162\n2bt\n= F (xt)\u2212 F (xt+1) +\n( \u03b72\u03b2\n2b2t +\n\u03b72\u03b2max\n2b2ta 1/2 t+1\u03bb\n\u2212 \u03b7 2bt\n) \u2016dt\u20162 + \u03bba 1/2 t+1\u2016 t\u20162\n2\u03b2max\n= F (xt)\u2212 F (xt+1) +\n( \u03b72\u03b2\n2 + \u03b72\u03b2max\n2a 1/2 t+1\u03bb\n\u2212 \u03b7bt 2\n) \u2016dt\u20162\nb2t + \u03bba\n1/2 t+1\u2016 t\u20162\n2\u03b2max\n\u2264 F (xt)\u2212 F (xt+1) +\n( \u03b72\u03b2max\n2 + \u03b72\u03b2max\n2a 1/2 t+1\u03bb\n\u2212 \u03b7bt 2\n) \u2016dt\u20162\nb2t + \u03bba\n1/2 t+1\u2016 t\u20162\n2\u03b2max\n\u21d2 E [ T\u2211 t=1 \u2016dt\u20162 bt ] \u2264 2 \u03b7 (F (x1)\u2212 F \u2217) + E [ T\u2211 t=1 ( \u03b7\u03b2max + \u03b7\u03b2max a 1/2 t+1\u03bb \u2212 bt ) \u2016dt\u20162 b2t ] + \u03bbE [ ET,1/2 ] \u03b7\u03b2max .\nThe final step is to relate the L.H.S. to DT . Recall for META-STORM-SG and META-STORM-NA, we have\nbt = (b 1/p 0 + t\u2211 i=1 \u2016di\u20162)p/aqt+1.\nHence T\u2211 t=1 \u2016dt\u20162 bt = T\u2211 t=1\naqt+1\u2016dt\u20162\n(b 1/p 0 + \u2211t i=1 \u2016di\u20162)p \u2265 T\u2211 t=1\naqT+1\u2016dt\u20162\n(b 1/p 0 + \u2211T i=1 \u2016di\u20162)p\n= aqT+1(b 1/p 0 + T\u2211 i=1 \u2016di\u20162)1\u2212p \u2212 aqT+1 b 1/p 0 (b 1/p 0 + \u2211T i=1 \u2016di\u20162)p\n\u2265 aqT+1(b 1/p 0 + T\u2211 i=1 \u2016di\u20162)1\u2212p \u2212 b1/p\u221210\n\u2265 aqT+1D 1\u2212p T \u2212 b 1/p\u22121 0 .\nThe same result holds for META-STORM by a similar proof. By using this bound, the proof is finished.\nTo finish section, we prove a technical result, Lemma E.5, which will be very useful in the proof of every algorithm. The motivation to prove it is because we want to bound the term inside the expectation part in Lemma E.4.\nLemma E.5. Given A,B \u2265 0. We have\n\u2022 for META-STORM-SG and META-STORM-NA T\u2211 t=1 ( A+ B a 1/2 t+1 \u2212 bt ) \u2016dt\u20162 b2t \u2264 (A+B) 1 p\u22121 1\u2212 p log A+ a \u22121/2 T+1 B b0 .\n\u2022 for META-STORM T\u2211 t=1 ( A+ B a 1/2 t \u2212 bt ) \u2016dt\u20162 b2t \u2264 (A+B) 1 p\u22121 1\u2212 p log A+ a \u22121/2 T+1 B b0 .\nProof. In META-STORM-SG and META-STORM-NA, we have\nbt = (b 1/p 0 + t\u2211 i=1 \u2016di\u20162)p/aqt+1\nwhere p+ 2q = 1. Define the set\nS = { t \u2208 [T ] : bt \u2264 A+ B\na 1/2 t+1 } and let s = maxS. 
We know\nT\u2211 t=1\n( A+ B\na 1/2 t+1\n\u2212 bt\n) \u2016dt\u20162\nb2t \u2264 \u2211 t\u2208S\n( A+ B\na 1/2 t+1\n\u2212 bt\n) \u2016dt\u20162\nb2t\n= \u2211 t\u2208S\n( A+ B\na 1/2 t+1\n\u2212 bt\n) a q/p t+1b 1/p t \u2212 a q/p t b 1/p t\u22121\nb2t\n(a) \u2264 \u2211 t\u2208S\n( A+ B\na 1/2 t+1\n\u2212 bt ) a q/p t+1 b 1/p t \u2212 b 1/p t\u22121\nb2t\n= \u2211 t\u2208S ( a 1/2 t+1A+B \u2212 a 1/2 t+1bt ) a q p\u2212 1 2 t+1 b 1 p\u22122 t b 1/p t \u2212 b 1/p t\u22121 b 1/p t\nwhere (a) is by at \u2265 at+1. Note that( a 1/2 t+1A+B \u2212 a 1/2 t+1bt ) a q p\u2212 1 2 t+1 b 1 p\u22122 t (b) \u2264 ( A+B \u2212 a1/2t+1bt ) a q p\u2212 1 2 t+1 b 1 p\u22122 t\n(c) = ( A+B \u2212 a1/2t+1bt ) a 1 2p\u22121 t+1 b 1 p\u22122 t\n= ( A+B \u2212 a1/2t+1bt )( a 1/2 t+1bt ) 1 p\u22122\n(d) \u2264 ( A+B 1 p \u2212 1 ) 1 p\u22121( 1 p \u2212 2 ) 1 p\u22122 \u2264 p 1\u2212 p (A+B) 1 p\u22121\nwhere (b) holds by at+1 \u2264 1, (c) is due to qp \u2212 1 2 = 2q\u2212p 2p = 1\u22122p 2p = 1 2p \u2212 1 by p+ 2q = 1 and (d) is by applying Lemma I.8. Thus we know T\u2211 t=1 ( A+ B a 1/2 t+1 \u2212 bt ) \u2016dt\u20162 b2t \u2264 p 1\u2212 p (A+B) 1 p\u22121 \u2211 t\u2208S b 1/p t \u2212 b 1/p t\u22121 b 1/p t\n(e) \u2264 (A+B) 1 p\u22121 1\u2212 p \u2211 t\u2208S log bt bt\u22121\n(f) \u2264 (A+B) 1 p\u22121\n1\u2212 p\ns\u2211 t=1 log bt bt\u22121\n= (A+B)\n1 p\u22121\n1\u2212 p log bs b0\n(g) \u2264 (A+B) 1 p\u22121\n1\u2212 p log\nA+ a \u22121/2 T+1 B\nb0\nwhere (e) is by taking x = (bt/bt\u22121) 1/p in 1\u2212 1x \u2264 log x, (f) is because bt is increasing. The reason (g) is true is that bs \u2264 A+a\u22121/2s+1 B \u2264 A+a \u22121/2 T+1 B where the first inequality is due to s \u2208 S and the second one holds by that a\u22121/2t is increasing. Now we finish the proof for META-STORM-SG and META-STORM-NA. The proof for META-STORM is essentially the same hence omitted here.\nF ANALYSIS OF META-STORM FOR GENERAL p\nIn this section, we give a general analysis for our Algorithm META-STORM. We will see that p = 12 is a special corner case. First we recall the choices of at and bt\nat+1 = (1 + t\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162 /a20)\u22122/3,\nbt = (b 1/p 0 + t\u2211 i=1 \u2016di\u20162)p/aqt\nwhere p, q satisfy p+ 2q = 1, p \u2208 [ 3\u2212 \u221a 7\n2 , 1 2 ] . a0 > 0 and b0 > 0 are absolute constants. Naturally,\nwe have a1 = 1. We will finally prove the following theorem. Theorem F.1. Under the assumptions 1-3 and 5, by defining p\u0302 = 3(1\u2212p)4\u2212p \u2208 [ 3 7 , \u221a 7\u2212 2 ] , we have\nE [ Hp\u0302T ]\n\u22644 ( 2K1 K4 ) p\u0302 1\u22122p + (( 2K2 K4 ) p\u0302 1\u22122p + (2K4) p\u0302 2p )( 1 + 2\u03c3 2T a20 ) p\u0302 3 p 6= 12( 2K1 + 2 ( K2 + K4 3 ) log ( 1 + 2\u03c3 2T a20 ) + 2K4p\u0302 log 4K4 b2p\u03020 )p\u0302 ( 1 + 2\u03c3 2T a20 ) p\u0302 3 + b2p\u03020 p = 1 2\n+ 4 ( K5 + ( K6 +\nK7 3\n) log ( 1 + 2\u03c32T\na20\n) +K7 log\nK8 +K9 b0\n) p\u0302 1\u2212p ( 1 + 2\u03c32T\na20\n) p\u0302 3\n,\nwhere Ki, i \u2208 [9] are some constants only depending on a0, b0, \u03b7, \u03c3, \u03c3\u0302, \u03b2, p, q, F (x1) \u2212 F \u2217. To simplify our final bound, we only indicate the dependency on \u03b2 and F (x1)\u2212 F \u2217.\nE [ Hp\u0302T ] = O (( (F (x1)\u2212 F \u2217) p\u0302 1\u2212p + \u03b2 p\u0302 p log p\u0302 1\u2212p \u03b2 + \u03b2 p\u0302 p log p\u0302 1\u2212p ( 1 + \u03c32T )) (1 + \u03c32T ) p\u0302 3 ) .\nRemark F.2. 
For all i \u2208 [9], the constant Ki will be defined in the proof that follows.\nBy using the concavity of xp\u0302, we state the following convergence theorem without proof. Theorem F.3. Under the assumptions 1-3 and 5, by defining p\u0302 = 3(1\u2212p)4\u2212p \u2208 [ 3 7 , \u221a 7\u2212 2 ] , we have\nE [ \u2016\u2207F (xout)\u20162p\u0302 ] = O ( (F (x1)\u2212 F \u2217) p\u0302 1\u2212p + \u03b2 p\u0302 p log p\u0302 1\u2212p \u03b2 + \u03b2 p\u0302 p log p\u0302 1\u2212p ( 1 + \u03c32T ))( 1 T p\u0302 + \u03c32p\u0302/3 T 2p\u0302/3 ) .\nHere, we give a more explicit convergence dependency for p = 12 used in Theorem 2.3\nTheorem F.4. Under the assumptions 1-3 and 5, when p = 12 , by setting \u03bb = min { 1, (a0/\u03c3\u0302) 7/3 }\n(which is used in K5to K9) we get the best dependency on \u03c3\u0302. For simplicity, under the setting a0 = b0 = \u03b7 = 1, we have\nE [ \u2016\u2207F (xout)\u20166/7 ] = O (( Q1 +Q2 log 6/7 ( 1 + \u03c32T ))( 1 T 3/7 + \u03c32/7 T 2/7 )) whereQ1 = O ( (F (x1)\u2212 F \u2217)6/7 + \u03c312/7 + (\u03c3\u0302\u03c3)6/7 + \u03c3\u030218/7 + ( 1 + \u03c3\u030218/7 ) \u03b26/7 log6/7 ( \u03b2 + \u03c3\u03023\u03b2\n)) and Q2 = O (( 1 + \u03c3\u030218/7 ) \u03b26/7 ) .\nTo start with, we first state the following useful bound for at: Lemma F.5. \u2200\u03b1 \u2208 (0, 3/2] and \u2200t \u2265 1, there is(\nat at+1\n)\u03b1 \u2264 1 + ( 4\u03c3\u03022\na20\n) 2\u03b1 3\na\u03b1t .\nEspecially, taking \u03b1 \u2208 {1/2, 1, 3/2}, we have( at at+1 )1/2 \u2264 1 + 4 1/3\u03c3\u03022/3\na 2/3 0\na 1/2 t ;\nat at+1 \u2264 1 + 4 2/3\u03c3\u03024/3\na 2/3 0 at;( at at+1 )3/2 \u2264 1 + 4\u03c3\u0302 2 a20 a 3/2 t .\nProof. Note that( at at+1 )\u03b1 = a\u03b1t ( 1\na 3/2 t\n+ \u2016\u2207f(xt, \u03bet)\u2212\u2207f(xt, \u03bet+1)\u20162\na20\n)2\u03b1/3\n= ( 1 + \u2016\u2207f(xt, \u03bet)\u2212\u2207f(xt, \u03bet+1)\u20162\na20 a 3/2 t )2\u03b1/3 \u2264 ( 1 + 4\u03c3\u03022\na20 a 3/2 t\n)2\u03b1/3 \u2264 1 + ( 4\u03c3\u03022\na20\n)2\u03b1/3 a\u03b1t\nwhere the last inequality is because 2\u03b1/3 \u2264 1.\nLemma F.5 allows us to obtain some other properties of at. Lemma F.6. For t \u2265 1 (\n(1\u2212 at+1)2 \u2212 (1\u2212 at)2 )2\nat+1 \u2264 4\n2/3\u03c3\u03024/3\na 4/3 0\n((1\u2212 at+1)at+1 \u2212 (1\u2212 at)at)2\nat+1 \u2264 4\n2/3\u03c3\u03024/3\na 4/3 0\na2t .\nProof. Let at+1 = x, at = y and note that x \u2264 y \u2264 1. For the first inequality,( (1\u2212 at+1)2 \u2212 (1\u2212 at)2 )2 at+1 \u2264 (1\u2212 x) 2 \u2212 (1\u2212 y)2 x\n= (y \u2212 x)(2\u2212 x\u2212 y) x \u2264 (y x \u2212 1)(2\u2212 y)\n\u2264 4 2/3\u03c3\u03024/3\na 2/3 0\nat \u00d7 (2\u2212 at) (Lemma F.5)\n\u2264 4 2/3\u03c3\u03024/3\na 4/3 0\n.\nFor the second inequality, we have\n((1\u2212 at+1)at+1 \u2212 (1\u2212 at)at)2\nat+1 =\n((1\u2212 x)x\u2212 (1\u2212 y)y)2\nx =\n(y \u2212 x)2(1\u2212 x\u2212 y)2\nx\n\u2264 (y \u2212 x) 2 x \u2264 (y x \u2212 1 ) y\n\u2264 4 2/3\u03c3\u03024/3\na 2/3 0\nat \u00d7 at (Lemma F.5)\n= 42/3\u03c3\u03024/3\na 4/3 0\na2t .\nF.1 ANALYSIS OF ET\nFollowing a similar approach, we first define a random time \u03c4 satisfying\n\u03c4 = max {[T ] , at \u2265 K\u22121} ,\nwhere K\u22121 := min { 1, a40/(144\u03c3\u0302 4) } .\nOne thing we need to emphasize here is that, in our current choice, at \u2208 Ft, which implies {\u03c4 + 1 = t} = {\u03c4 = t\u2212 1} = {at\u22121 \u2265 K\u22121, at < K\u22121} \u2208 Ft. This means \u03c4 + 1 is a stopping time instead of \u03c4 itself. We now prove a useful proposition for \u03c4 :\nLemma F.7. 
We have\nat+1 \u2265 K0,\u2200t \u2264 \u03c4, a\u22121t+1 \u2212 a \u22121 t \u2264 2/9,\u2200t \u2265 \u03c4 + 1.\nwhere\nK0 := (K \u22123/2 \u22121 + 4\u03c3\u0302 2/a20) \u22122/3 = (max { 1, 1728\u03c3\u03026/a60 } + 4\u03c3\u03022/a20) \u22122/3.\nProof. First, by the definition of \u03c4 , we know at \u2265 K\u22121 \u2265 K0,\u2200t \u2264 \u03c4 . For time \u03c4 , we have\na \u22123/2 \u03c4+1 \u2212 a\u22123/2\u03c4 = \u2016\u2207f(x\u03c4 , \u03be\u03c4 )\u2212\u2207f(x\u03c4 , \u03be\u03c4+1)\u20162/a20 \u2264 4\u03c3\u03022/a20\n\u21d2 a\u22121\u03c4+1 \u2264 (a\u22123/2\u03c4 + 4\u03c3\u03022/a20)2/3 \u2264 (K \u22123/2 \u22121 + 4\u03c3\u0302 2/a20) 2/3 = K\u221210 ,\nwhich implies a\u03c4+1 \u2265 K0.\nFor the second proposition, let h(y) = y2/3. Due to the concavity of h, we know h(y1)\u2212 h(y2) \u2264 h\u2032(y2)(y1 \u2212 y2) = 2(y1\u2212y2)\n3y 1/3 2\n. Now we have\na\u22121t+1 \u2212 a \u22121 t = (a \u22123/2 t + \u2016\u2207f(xt, \u03bet)\u2212\u2207f(xt, \u03bet+1)\u20162/a20)2/3 \u2212 (a \u22123/2 t ) 2/3\n\u2264 2a 1/2 t \u2016\u2207f(xt, \u03bet)\u2212\u2207f(xt, \u03bet+1)\u20162\n3a20 \u2264 8a\n1/2 t \u03c3\u0302 2\n3a20 \u2264 2 9\nwhere the last step is by at \u2264 a\u03c4+1 < K\u22121 \u2264 a40/(144\u03c3\u03024). F.1.1 BOUND ON E [ E\u03c4,3/2\u22122` ] FOR ` \u2208 [ 1 4 , 1 2 ] Unlike STORM+ in which they bound E [E\u03c4 ], we choose to bound E [ E\u03c4,3/2\u22122` ] . We first prove\nthe following bound on E [ E\u03c4,3/2\u22122` ] :\nLemma F.8. For any ` \u2208 [ 1 4 , 1 2 ] , we have\nE [ E\u03c4,3/2\u22122` ] \u2264 2\u03c32 + 16\n( 1 + 6\u03c3\u0302 4/3\na 4/3 0\n)( 3a20 + 5\u03c3\u0302 2 )\nK 2`\u22121/2 0\n+\n4 ( 1 + 6\u03c3\u0302 4/3\na 4/3 0\n) \u03b72\u03b22\nK 2`\u22121/2 0\nE [ T\u2211 t=1 \u2016dt\u20162 b2t ] .\nProof. We start from Lemma E.2\nat+1\u2016 t\u20162 \u2264 \u2016 t\u20162 \u2212 \u2016 t+1\u20162 + 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1. Summing up from 1 to \u03c4 and taking expectations on both sides, we will have\nE [E\u03c4,1]\n\u2264E [ \u03c4\u2211 t=1 \u2016 t\u20162 \u2212 \u2016 t+1\u20162 + 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1 ]\n\u2264\u03c32 + E [ \u03c4\u2211 t=1 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1 ]\n\u2264\u03c32 + E [ T\u2211 t=1 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + \u03c4\u2211 t=1 Mt+1 ] . (17)\nFirst we bound E [ \u2211\u03c4 t=1Mt+1]. From the definition of Mt+1, we have\nE [Mt+1] = E [ 2(1\u2212 at+1)2\u3008 t, Zt+1\u3009+ 2(1\u2212 at+1)at+1\u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009 ] .\nNow for t \u2265 1, we define Nt+1 := 2(1\u2212 at)2\u3008 t, Zt+1\u3009+ 2(1\u2212 at)at\u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009 \u2208 Ft+1\nwith N1 := 0. A key observation is that\nE [ \u03c4\u2211 t=1 Nt+1 ] = 0.\nThis is because Nt := \u2211t i=1Nt is a martingale and \u03c4 + 1 is a bounded stopping time. Then by optional sampling theorem, we have\nE [ \u03c4\u2211 t=1 Nt+1 ] = E [ \u03c4+1\u2211 t=1 Nt ] = E [N\u03c4+1] = 0.\nBy subtracting E [ \u2211\u03c4 t=1Mt+1] by E [ \u2211\u03c4 t=1Nt+1], we obtain\nE [ \u03c4\u2211 t=1 Mt+1 ] = E [ \u03c4\u2211 t=1 2 ( (1\u2212 at+1)2 \u2212 (1\u2212 at)2 ) \u3008 t, Zt+1\u3009\n+ 2 ((1\u2212 at+1)at+1 \u2212 (1\u2212 at)at) \u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009] . 
(18) Using Cauchy-Schwarz inequality for each term, we have\n2 ( (1\u2212 at+1)2 \u2212 (1\u2212 at)2 ) \u3008 t, Zt+1\u3009\n\u22642 \u2223\u2223(1\u2212 at+1)2 \u2212 (1\u2212 at)2\u2223\u2223 \u2016 t\u2016\u2016Zt+1\u2016\n\u2264at+1 4 \u2016 t\u20162 +\n4 ( (1\u2212 at+1)2 \u2212 (1\u2212 at)2 )2 at+1 \u2016Zt+1\u20162,\n2 ((1\u2212 at+1)at+1 \u2212 (1\u2212 at)at) \u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009 \u22642 |(1\u2212 at+1)at+1 \u2212 (1\u2212 at)at| \u2016 t\u2016\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u2016\n\u2264at+1 4 \u2016 t\u20162 +\n4 ((1\u2212 at+1)at+1 \u2212 (1\u2212 at)at)2\nat+1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162.\nPlugging the above bounds into (18), we obtain\nE [ \u03c4\u2211 t=1 Mt+1 ]\n\u2264E \u03c4\u2211 t=1 at+1 2 \u2016 t\u20162 + ( (1\u2212 at+1)2 \u2212 (1\u2212 at)2 )2 at+1\ufe38 \ufe37\ufe37 \ufe38 (i) 4\u2016Zt+1\u20162\n+ \u03c4\u2211 t=1 ((1\u2212 at+1)at+1 \u2212 (1\u2212 at)at)2\nat+1\ufe38 \ufe37\ufe37 \ufe38 (ii)\n4\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 . (19) Plugging the bounds for (i) and (ii) from Lemma F.6 into (19), the following bound on E [ \u2211\u03c4 t=1Mt+1] comes up\nE [ \u03c4\u2211 t=1 Mt+1 ]\n\u2264E [ \u03c4\u2211 t=1 at+1 2 \u2016 t\u20162 + 45/3\u03c3\u03024/3 a 4/3 0 \u2016Zt+1\u20162 + 45/3\u03c3\u03024/3 a 4/3 0 a2t\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 ]\n\u2264E\n[ 1\n2 E\u03c4,1 +\n12\u03c3\u03024/3\na 4/3 0\n\u2016Zt+1\u20162 + 12\u03c3\u03024/3\na 4/3 0\na2t\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 ] .\nThen from (17), we have E [E\u03c4,1] \u2264 \u03c32 + E [ 1\n2 E\u03c4,1\n] + E [ T\u2211 t=1 ( 2 + 12\u03c3\u03024/3 a 4/3 0 ) \u2016Zt+1\u20162 ]\n+ E [ T\u2211 t=1 ( 2a2t+1 + 12\u03c3\u03024/3 a 4/3 0 a2t ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 ] ,\nwhich will give us\nE [E\u03c4,1] \u2264 2\u03c32 + 4 ( 1 + 6\u03c3\u03024/3\na 4/3 0\n) E T\u2211 t=1 \u2016Zt+1\u20162\ufe38 \ufe37\ufe37 \ufe38 (iii) \n+ E T\u2211 t=1 4 ( a2t+1 + 6\u03c3\u03024/3 a 4/3 0 a2t ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162\ufe38 \ufe37\ufe37 \ufe38\n(iv)\n . (20)\nFor term (iii), Lemma E.3 tells us\nE [ \u2016Zt+1\u20162 | Ft ] \u2264\u03b72\u03b22 \u2016dt\u2016 2\nb2t . (21)\nFor term (iv), we know\nE [(iv)] = E [ T\u2211 t=1 4 ( a2t+1 + 6\u03c3\u03024/3 a 4/3 0 a2t ) \u2016\u2207f(xt+1, \u03bet+1)\u2212 E [\u2207f(xt+1, \u03bet+2)|Ft+1] \u20162 ]\n\u2264 E [ T\u2211 t=1 4 ( a2t+1 + 6\u03c3\u03024/3 a 4/3 0 a2t ) E [ \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162|Ft+1 ]]\n= E [ T\u2211 t=1 4 ( a2t+1 + 6\u03c3\u03024/3 a 4/3 0 a2t ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162 ] . 
(22)\nNote that\na2t =\n( 1 +\nt\u22121\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162/a20\n)4/3 ,\nthen we have T\u2211 t=1 4 ( a2t+1 + 6\u03c3\u03024/3 a 4/3 0 a2t ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162\n=4a20 T\u2211 t=1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162/a20( 1 + \u2211t i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162/a20\n)4/3 + 24\u03c3\u03024/3a\n2/3 0 T\u2211 t=1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162/a20( 1 + \u2211t\u22121 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162/a20\n)4/3 \u22644a20 ( 12 + 8\u03c3\u03022\na20\n) + 24\u03c3\u03024/3a\n2/3 0\n( 12 + 20\u03c3\u03022\na20 ) =16 ( 3a20 + 2\u03c3\u0302 2 ) + 96 \u03c3\u03024/3\na 4/3 0\n( 3a20 + 5\u03c3\u0302 2 )\n\u226416 ( 1 + 6\u03c3\u03024/3\na 4/3 0\n)( 3a20 + 5\u03c3\u0302 2 ) , (23)\nwhere, for the first inequality, we use Lemma I.4 and Lemma I.5. Plugging (21) and (23) into (20), we obtain\nE [E\u03c4,1] \u2264 2\u03c32 + 16 ( 1 + 6\u03c3\u03024/3\na 4/3 0\n)( 3a20 + 5\u03c3\u0302 2 ) + 4 ( 1 + 6\u03c3\u03024/3\na 4/3 0\n) \u03b72\u03b22E [ T\u2211 t=1 \u2016dt\u20162 b2t ] .\nNote that by Lemma F.7, we have for t \u2264 \u03c4 ,at+1 \u2265 K0. By using this property and noticing 2`\u2212 1/2 \u2265 0 , we can obtain\nE [ K\n2`\u22121/2 0 E\u03c4,3/2\u22122` ] =E [ K\n2`\u22121/2 0 \u03c4\u2211 t=1 a 3/2\u22122` t+1 \u2016 t\u20162\n] \u2264 E [ \u03c4\u2211 t=1 at+1\u2016 t\u20162 ]\n\u22642\u03c32 + 16 ( 1 + 6\u03c3\u03024/3\na 4/3 0\n)( 3a20 + 5\u03c3\u0302 2 ) + 4 ( 1 + 6\u03c3\u03024/3\na 4/3 0\n) \u03b72\u03b22E [ T\u2211 t=1 \u2016dt\u20162 b2t ] ,\nwhich will give the desired bound immediately. F.1.2 BOUND ON E [ET,1\u22122`] FOR ` \u2208 [ 1 4 , 1 2 ] With the previous result on E [ E\u03c4,3/2\u22122` ] , we can bound E [ET,1\u22122`].\nLemma F.9. For any ` \u2208 [ 1 4 , 1 2 ] , we have\nE [ET,1\u22122`] \u2264 K1(`) +K2(`) E [( H\u0303T /a 2 0 ) 4`\u22121 3 ] ` > 14 E [ log (\n1 + H\u0303T /a 2 0 )] ` = 14\n+ E [ T\u2211 t=1 ( K3(`)a 2` t + 3 ( 1 + 2`2 ) `2 ) \u03b72\u03b22 \u2016dt\u20162 a2`t b 2 t ] ,\nwhere\nK1(`) := 3\n( \u03c32 + 24 ( 1 + `2 ) \u03c3\u03022\n`2\n) +\n72\u03c3\u03022 ( \u03c32 + 8 ( 1 + 6\u03c3\u0302 4/3\na 4/3 0\n)( 3a20 + 5\u03c3\u0302 2 ))\na20K 2`\u22121/2 0\nK2(`) := 9(1+2`2)a20 `2(4`\u22121) ` 6= 1 4\n3(1+2`2)a20 `2 ` = 1 4\nK3(`) := 144\u03c3\u03022\nK 2`\u22121/2 0 a 2 0\n( 1 + 6\u03c3\u03024/3\na 4/3 0\n) + 3 ( 1 + 2`2 ) `2 ( 4\u03c3\u03022\na20\n) 4` 3\nProof. We use a similar strategy as in the previous proof in which we bound E [ E\u03c4,3/2\u22122` ] . Starting from Lemma E.2\nat+1\u2016 t\u20162 \u2264 \u2016 t\u20162 \u2212 \u2016 t+1\u20162 + 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1.\nDividing both sides by a2`t+1, taking the expectations on both sides and summing up from 1 to T to get\nE [ET,1\u22122`] \u2264 E [ T\u2211 t=1 \u2016 t\u20162 a2`t+1 \u2212 \u2016 t+1\u2016 2 a2`t+1\n+ 2\na2`t+1 \u2016Zt+1\u20162 + 2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + Mt+1 a2`t+1\n]\n\u2264 \u03c32 + E [ T\u2211 t=1 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162\n+ T\u2211 t=1 2 a2`t+1 \u2016Zt+1\u20162 + 2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + Mt+1 a2`t+1\n] . (24)\nAs before, we bound E [ Mt+1 a2`t+1 ] first. 
From the definition of Mt+1, we have\nE [ Mt+1 a2`t+1 ] = E [ 2(1\u2212 at+1)2 a2`t+1 \u3008 t, Zt+1\u3009+ 2(1\u2212 at+1)a1\u22122`t+1 \u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009 ] .\nA similar key observation is that, if we replace at+1 by at, we can find E [ 2(1\u2212 at)2\na2`t \u3008 t, Zt+1\u3009+ 2(1\u2212 at)a1\u22122`t \u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009\n] = 0.\nBy subtracting E [ Mt+1 a2`t+1 ] by 0, we know\nE [ Mt+1 a2`t+1 ] = E [ 2 ( (1\u2212 at+1)2 a2`t+1 \u2212 (1\u2212 at) 2 a2`t ) \u3008 t, Zt+1\u3009\n+2 ( (1\u2212 at+1)a1\u22122`t+1 \u2212 (1\u2212 at)a 1\u22122` t ) \u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009 ] . (25)\nUsing Cauchy-Schwarz for each term\n2\n( (1\u2212 at+1)2\na2`t+1 \u2212 (1\u2212 at) 2 a2`t\n) \u3008 t, Zt+1\u3009\n\u22642 \u2223\u2223\u2223\u2223 (1\u2212 at+1)2a2`t+1 \u2212 (1\u2212 at) 2 a2`t \u2223\u2223\u2223\u2223 \u2016 t\u2016\u2016Zt+1\u2016 \u2264 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 + ( (1\u2212at+1)2 a2`t+1 \u2212 (1\u2212at) 2 a2`t )2 a\u22122`t+1 \u2212 a \u22122` t \u2016Zt+1\u20162,\n2 ( (1\u2212 at+1)a1\u22122`t+1 \u2212 (1\u2212 at)a 1\u22122` t ) \u3008 t,\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u3009\n\u22642 \u2223\u2223(1\u2212 at+1)a1\u22122`t+1 \u2212 (1\u2212 at)a1\u22122`t \u2223\u2223 \u2016 t\u2016\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u2016 \u2264 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 + ( (1\u2212 at+1)a1\u22122`t+1 \u2212 (1\u2212 at)a 1\u22122` t )2 a\u22122`t+1 \u2212 a \u22122` t \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162,\nPlugging these two bounds into (25), we obtain\nE [ Mt+1 a2`t+1 ] \u2264 E [ 2 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 ] + E ( (1\u2212at+1)2 a2`t+1 \u2212 (1\u2212at) 2 a2`t )2 a\u22122`t+1 \u2212 a\n\u22122` t\ufe38 \ufe37\ufe37 \ufe38\n(i)\n\u2016Zt+1\u20162 \n+ E ( (1\u2212 at+1)a1\u22122`t+1 \u2212 (1\u2212 at)a 1\u22122` t )2 a\u22122`t+1 \u2212 a\n\u22122` t\ufe38 \ufe37\ufe37 \ufe38\n(ii)\n\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 . (26)\nTo bound (i) and (ii), let a`t+1 = x, a ` t = y and note that 0 \u2264 x \u2264 y \u2264 1. By Lemma I.6, we have for (i) ( (1\u2212at+1)2 a2`t+1 \u2212 (1\u2212at) 2 a2`t\n)2 a\u22122`t+1 \u2212 a \u22122` t = ( (1\u2212x1/`)2 x2 \u2212 (1\u2212y1/`)2 y2 )2 x2y2 y2 \u2212 x2\n\u2264 1 `2x2 = 1 `2a2`t+1 . (27)\nFor (ii), by Lemma I.7,( (1\u2212 at+1)a1\u22122`t+1 \u2212 (1\u2212 at)a 1\u22122` t )2 a\u22122`t+1 \u2212 a \u22122` t = ( (1\u2212 x1/`)x1/`\u22122 \u2212 (1\u2212 y1/`)y1/`\u22122 )2 x2y2 y2 \u2212 x2\n\u2264 y 2/`\u22122 `2 = a2\u22122`t `2 . (28)\nPlugging (27) and (28) into (26), we will have E [ Mt+1 a2`t+1 ] \u2264 E [ 2 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162\n+ 1\n`2a2`t+1 \u2016Zt+1\u20162 + a2\u22122`t `2 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162\n] .\nNow combining this with (24), we obtain\nE [ET,1\u22122`] \u2264 \u03c32 + E T\u2211 t=1 3 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162\ufe38 \ufe37\ufe37 \ufe38\n(iii)\n+ T\u2211 t=1 1 + 2`2 `2a2`t+1 \u2016Zt+1\u20162\ufe38 \ufe37\ufe37 \ufe38\n(iv)\n+ T\u2211 t=1 ( a2\u22122`t `2 + 2a2\u22122`t+1 ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162\ufe38 \ufe37\ufe37 \ufe38\n(v)\n . 
(29)\nFor (iii), we split the sum according to \u03c4 then use Lemma F.7 and Lemma F.5,\nT\u2211 t=1 3 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 = \u03c4\u2211 t=1 3 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 + T\u2211 t=\u03c4+1 3 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162\nNote that 3/2\u2212 2` \u2208 [ 1 2 , 1 ] , we have\na\u22122`t+1 \u2212 a \u22122` t =\n( 1\na 3/2 t+1 \u2212 1 a2`t a 3/2\u22122` t+1\n) a 3/2\u22122` t+1 \u2264 ( a \u22123/2 t+1 \u2212 a \u22123/2 t ) a 3/2\u22122` t+1\n\u2264 4\u03c3\u0302 2\na20 a 3/2\u22122` t+1 , (Lemma F.5)\nand we can use Lemma F.7 to bound for t \u2265 \u03c4 + 1\na\u22122`t+1 \u2212 a \u22122` t =\n( 1\nat+1 \u2212 1 a2`t a 1\u22122` t+1\n) a1\u22122`t+1 \u2264 ( a\u22121t+1 \u2212 a \u22121 t ) a1\u22122`t+1\n\u2264 2 9 a1\u22122`t+1 .\nThus T\u2211 t=1 3 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 \u2264 \u03c4\u2211 t=1 12\u03c3\u03022 a20 a 3/2\u22122` t+1 \u2016 t\u20162 + T\u2211 t=\u03c4+1 2 3 a1\u22122`t+1 \u2016 t\u20162\n\u2264 12\u03c3\u0302 2\na20 \u03c4\u2211 t=1 a 3/2\u22122` t+1 \u2016 t\u20162 + T\u2211 t=1 2 3 a1\u22122`t+1 \u2016 t\u20162\n= 12\u03c3\u03022\na20 E\u03c4,3/2\u22122` +\n2 3 ET,1\u22122`.\nFor (iv), note that E [ 1 + 2`2\n`2a2`t+1 \u2016Zt+1\u20162\n] = 1 + 2`2 `2 E [ a2`t a2`t+1 \u2016Zt+1\u20162 a2`t ]\n\u2264 1 + 2` 2\n`2 E\n[( 1 + ( 4\u03c3\u03022\na20\n) 4` 3\na2`t\n) \u2016Zt+1\u20162\na2`t\n] (Lemma F.5)\n\u2264 1 + 2` 2\n`2 E\n[( 1 + ( 4\u03c3\u03022\na20\n) 4` 3\na2`t\n) E [ \u2016Zt+1\u20162 | Ft ] a2`t ]\n\u2264 1 + 2` 2\n`2 E\n[( 1 + ( 4\u03c3\u03022\na20\n) 4` 3\na2`t\n) \u03b72\u03b22 \u2016dt\u20162\na2`t b 2 t\n] ,\nwhere the last step is by Lemma E.3. Hence we obtain\nE [ T\u2211 t=1 1 + 2`2 `2a2`t+1 \u2016Zt+1\u20162 ] \u2264 E [ T\u2211 t=1 1 + 2`2 `2 ( 1 + ( 4\u03c3\u03022 a20 ) 4` 3 a2`t ) \u03b72\u03b22 \u2016dt\u20162 a2`t b 2 t ] .\nFor (v), by the same argument when bounding (22), we know\nE [(v)] \u2264 E [ T\u2211 t=1 ( a2\u22122`t `2 + 2a2\u22122`t+1 ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)|\u20162 ] .\nNow we use Lemma I.2 and Lemma I.3 to get T\u2211 t=1 ( a2\u22122`t `2 + 2a2\u22122`t+1 ) \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162\n= a20 `2 T\u2211 t=1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162/a20( 1 + \u2211t\u22121 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162/a20\n)4(1\u2212`)/3 + 2a20\nT\u2211 t=1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207f(xt+1, \u03bet+2)\u20162/a20( 1 + \u2211t i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162/a20\n)4(1\u2212`)/3 \u2264a 2 0\n`2 \u00d7 24\u03c3\u0302\n2\na20 + 2a20 \u00d7\n12\u03c3\u03022\na20\n+\n( 1 + 2`2 ) a20\n`2\n 3 4`\u22121 ( H\u0303T /a 2 0 ) 4`\u22121 3 ` 6= 14 log (\n1 + H\u0303T /a 2 0 ) ` = 14\n= 24 ( 1 + `2 ) \u03c3\u03022\n`2 +\n( 1 + 2`2 ) a20\n`2\n 3 4`\u22121 ( H\u0303T /a 2 0 ) 4`\u22121 3 ` 6= 14 log (\n1 + H\u0303T /a 2 0 ) ` = 14 .\nPlugging the bounds on (iii), (iv) and (v) into (29), we get\nE [ET,1\u22122`] \u2264 \u03c32 + 24 ( 1 + `2 ) \u03c3\u03022\n`2 + E\n[ 12\u03c3\u03022\na20 E\u03c4,3/2\u22122` +\n2 3 ET,1\u22122` ] + E\n[ T\u2211 t=1 1 + 2`2 `2 ( 1 + ( 4\u03c3\u03022 a20 ) 4` 3 a2`t ) \u03b72\u03b22 \u2016dt\u20162 a2`t b 2 t ]\n+\n( 1 + 2`2 ) a20\n`2\n 3 4`\u22121E [( H\u0303T /a 2 0 ) 4`\u22121 3 ] ` 6= 14 E [ log (\n1 + H\u0303T /a 2 0 )] ` = 14 ,\nwhich gives us\nE [ET,1\u22122`] \u2264 3 ( \u03c32 + 24 ( 1 + `2 ) \u03c3\u03022\n`2\n) + 
36\u03c3\u03022\na20 E [ E\u03c4,3/2\u22122` ] + E\n[ T\u2211 t=1 3 ( 1 + 2`2 ) `2 ( 1 + ( 4\u03c3\u03022 a20 ) 4` 3 a2`t ) \u03b72\u03b22 \u2016dt\u20162 a2`t b 2 t ]\n+ 3 ( 1 + 2`2 ) a20\n`2\n 3 4`\u22121E [( H\u0303T /a 2 0 ) 4`\u22121 3 ] ` 6= 14 E [ log (\n1 + H\u0303T /a 2 0 )] ` = 14 .\nNow we plug in the bound on E [ E\u03c4,3/2\u22122` ] in Lemma F.8 to get the final result\nE [ET,1\u22122`]\n\u2264 3 ( \u03c32 + 24 ( 1 + `2 ) \u03c3\u03022\n`2\n) +\n72\u03c3\u03022 ( \u03c32 + 8 ( 1 + 6\u03c3\u0302 4/3\na 4/3 0\n)( 3a20 + 5\u03c3\u0302 2 ))\na20K 2`\u22121/2 0\ufe38 \ufe37\ufe37 \ufe38\nK1(`)\n+K2(`) E [( H\u0303T /a 2 0 ) 4`\u22121 3 ] ` 6= 14 E [ log (\n1 + H\u0303T /a 2 0 )] ` = 14\n+ E T\u2211 t=1 ( 144\u03c3\u03022 K 2`\u22121/2 0 a 2 0 ( 1 + 6\u03c3\u03024/3 a 4/3 0 ) + 3 ( 1 + 2`2 ) `2 ( 4\u03c3\u03022 a20 ) 4` 3 ) \ufe38 \ufe37\ufe37 \ufe38\nK3(`)\na2`t + 3 ( 1 + 2`2 ) `2 \u03b72\u03b22 \u2016dt\u2016 2 a2`t b 2 t , where\nK2(`) := 9(1+2`2)a20 (4`\u22121)`2 ` 6= 1 4\n3(1+2`2)a20 `2 ` = 1 4\n.\nF.1.3 BOUND ON E [ ET,1/2 ] The following bound on E [ ET,1/2 ] will be useful when we bound DT .\nCorollary F.10. We have E [ ET,1/2 ] \u2264 K1(1/4)+K2(1/4)E [ log ( 1 + H\u0303T /a 2 0 )] +E [ T\u2211 t=1 ( K3(1/4)a 1/2 t + 54 ) \u03b72\u03b22 \u2016dt\u20162 a 1/2 t b 2 t ] .\nProof. Take ` = 14 in Lemma F.9. F.1.4 BOUND ON E [ a1\u22122qT+1 ET ] With the previous result on E [ET,1\u22122`], we can bound E [ a1\u22122qT+1 ET ] immediately.\nLemma F.11. Given p+ 2q = 1,p \u2208 [ 3\u2212 \u221a 7\n2 , 1 2\n] , we have\nE [ a1\u22122qT+1 ET ] \u2264 K1 +K2E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] +K4E [ D1\u22122pT ] q > 14 K1 +K2E [ log ( 1 + H\u0303T /a 2 0 )] +K4E [ log (\n1 + DT b20 )] q = 14\nwhere\nK1 := K1(q)\nK2 := K2(q)\nK4 := ( K3(q) + 3(1+2q2) q2 ) \u03b72\u03b22 4q\u22121 q > 1 4( K3(q) + 3(1+2q2)\nq2 ) \u03b72\u03b22 q = 14 .\nProof. When q > 14 \u21d4 p < 1 2 , by Lemma F.9, taking ` = q, we know\nE [ET,1\u22122q] \u2264 K1(q) +K2(q)E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] + E [ T\u2211 t=1 ( K3(q)a 2q t + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22 \u2016dt\u20162 a2qt b 2 t ]\n\u2264 K1(q) +K2(q)E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] + E [ T\u2211 t=1 ( K3(q) + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22 \u2016dt\u20162 a2qt b 2 t ]\n= K1(q) +K2(q)E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] + ( K3(q) + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22E [ T\u2211 t=1 \u2016dt\u20162 a2qt b 2 t ] (a) = K1(q) +K2(q)E [( H\u0303T /a 2 0 ) 4q\u22121 3 ]\n+ ( K3(q) + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22E T\u2211 t=1 \u2016dt\u20162( b 1/p 0 + \u2211t i=1 \u2016di\u20162 )2p \n(b) \u2264 K1(q) +K2(q)E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] + ( K3(q) + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22E [ D1\u22122pT 1\u2212 2p ] (c) = K1(q) +K2(q)E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] + ( K3(q) + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22 4q \u2212 1 E [ D1\u22122pT ] ,\nwhere (a) is by\na2qt b 2 t = a 2q t\n( b 1/p 0 + \u2211t i=1 \u2016di\u20162 )2p a2qt = ( b 1/p 0 + t\u2211 i=1 \u2016di\u20162 )2p ,\n(b) is by Lemma I.1, (c) is by 1\u2212 2p = 4q \u2212 1. 
When q = 14 , by a similar argument, we have\nE [ET,1\u22122q] \u2264 K1(q) +K2(q)E [ log ( 1 + H\u0303T /a 2 0 )] + ( K3(q) + 3 ( 1 + 2q2 ) q2 ) \u03b72\u03b22E [ log ( 1 + DT b20 )] .\nNow we can define\nK4 := ( K3(q) + 3(1+2q2) q2 ) \u03b72\u03b22 4q\u22121 q > 1 4( K3(q) + 3(1+2q2)\nq2 ) \u03b72\u03b22 q = 14 .\nThe final step is by noticing for 1\u2212 2q = p > 0,\nET,1\u22122q = T\u2211 t=1 a1\u22122qt+1 \u2016 t\u20162 \u2265 a 1\u22122q T+1 T\u2211 t=1 \u2016 t\u20162 = a1\u22122qT+1 ET .\nF.2 ANALYSIS OF DT\nWe will prove the following bound Lemma F.12. Given p+ 2q = 1,p \u2208 [ 3\u2212 \u221a 7\n2 , 1 2\n] , we have\nE [ aqT+1D 1\u2212p T ] \u2264 K5 +K6E [ log\na20 + H\u0303T a20\n] +K7E log K8 +K9 ( 1 + H\u0303T /a 2 0 )1/3 b0 where\nK5 := b 1 p\u22121 0 + 2\n\u03b7 (F (x1)\u2212 F \u2217) +\n\u03bbK1(1/4)\n\u03b7\u03b2max ,K6 :=\n\u03bbK2(1/4)\n\u03b7\u03b2max ,\nK7 := (K8 +K9)\n1 p\u22121\n1\u2212 p ,K8 := (1 + \u03bbK3(1/4)) \u03b7\u03b2max,K9 :=\n( 1\n\u03bb +\n2\u03c3\u03022/3\na 2/3 0 \u03bb\n+ 54\u03bb ) \u03b7\u03b2max,\n\u03bb > 0 can be any number.\nProof. We start from Lemma E.4 E [ aqT+1D 1\u2212p T ] \u2264 b 1 p\u22121 0 + 2\n\u03b7 (F (x1)\u2212 F \u2217)\n+ E [ T\u2211 t=1 ( \u03b7\u03b2max + \u03b7\u03b2max a 1/2 t+1\u03bb \u2212 bt ) \u2016dt\u20162 b2t ] + \u03bbE [ ET,1/2 ] \u03b7\u03b2max\nwhere \u03bb > 0 is used to reduce the order of \u03c3\u0302 in the final bound. In the proof of the general case, we don\u2019t choose \u03bb explicitly anymore. Plugging in the bound on E [ ET,1/2 ] in Corollary F.10, we have\nE [ aqT+1D 1\u2212p T ] \u2264b 1 p\u22121 0 + 2\n\u03b7 (F (x1)\u2212 F \u2217) +\n\u03bbK1(1/4) \u03b7\u03b2max + \u03bbK2(1/4) \u03b7\u03b2max E\n[ log\na20 + H\u0303T a20\n]\n+ E [ T\u2211 t=1 ( \u03b7\u03b2max + \u03b7\u03b2max a 1/2 t+1\u03bb + K3(1/4)\u03bb\u03b7 2\u03b22 \u03b7\u03b2max + 54\u03bb\u03b72\u03b22 a 1/2 t \u03b7\u03b2max \u2212 bt ) \u2016dt\u20162 b2t ]\n\u2264K5 +K6E [ log\na20 + H\u0303T a20\n]\n+ E [ T\u2211 t=1 ( (1 + \u03bbK3(1/4)) \u03b7\u03b2max + ( a 1/2 t \u03bba 1/2 t+1 + 54\u03bb ) \u03b7\u03b2max a 1/2 t \u2212 bt ) \u2016dt\u20162 b2t ]\n\u2264K5 +K6E [ log\na20 + H\u0303T a20\n]\n+ E T\u2211 t=1 ( (1 + \u03bbK3(1/4)) \u03b7\u03b2max + ( 1 \u03bb + 2\u03c3\u03022/3 a 2/3 0 \u03bb + 54\u03bb ) \u03b7\u03b2max a 1/2 t \u2212 bt ) \u2016dt\u20162\nb2t\ufe38 \ufe37\ufe37 \ufe38 (i) (30) where, in the last step, we use Lemma F.5. Next, we apply Lemma E.5 to (i) to get\n(i) \u2264\n(( 1 + \u03bbK3(1/4) + 1 \u03bb + 2\u03c3\u03022/3\na 2/3 0 \u03bb\n+ 54\u03bb ) \u03b7\u03b2max ) 1 p\u22121\n1\u2212 p\n\u00d7 log (1 + \u03bbK3(1/4)) \u03b7\u03b2max +\n( 1 \u03bb + 2\u03c3\u03022/3\na 2/3 0 \u03bb\n+ 54\u03bb ) \u03b7\u03b2max ( 1 + H\u0303T /a 2 0 )1/3 b0\n= K7 log K8 +K9\n( 1 + H\u0303T /a 2 0 )1/3 b0\nBy plugging the above bound into (30), we get the desired result.\nF.3 COMBINE THE BOUNDS AND THE FINAL PROOF\nFrom Lemma F.11, we have\nE [ a1\u22122qT+1 ET ] \u2264 K1 +K2E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] +K4E [ D1\u22122pT ] q > 14 K1 +K2E [ log ( 1 + H\u0303T /a 2 0 )] +K4E [ log (\n1 + DT b20 )] q = 14\nFrom Lemma F.12, we have\nE [ aqT+1D 1\u2212p T ] \u2264 K5 +K6E [ log\na20 + H\u0303T a20\n] +K7E log K8 +K9 ( 1 + H\u0303T /a 2 0 )1/3 b0 .\nNow let\np\u0302 = 3(1\u2212 p) 4\u2212 p \u2208 [ 3 7 , \u221a 7\u2212 2 ] .\nApply Lemma E.1, we can obtain E [ Hp\u0302T ] \u2264 4 max { E [ Ep\u0302T ] ,E [ Dp\u0302T ]} . (31)\nNow we can give the final proof of Theorem F.1.\nProof. 
First, we have\nE [ H\u0303T ] = E [ T\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207f(xi, \u03bei+1)\u20162 ]\n= 2 T\u2211 i=1 Var [\u2207f(xi, \u03bei)] \u2264 2\u03c32T,\nwhere the second equation is by the independency of \u03bei and \u03bei+1. Now we consider following two cases:\nCase 1: E [ Dp\u0302T ] \u2264 E [ Ep\u0302T ] . In this case, we will finally prove\nE [ Ep\u0302T ] \u2264 ( 2K1 K4 ) p\u0302 1\u22122p + (( 2K2 K4 ) p\u0302 1\u22122p + (2K4) p\u0302 2p )( 1 + 2\u03c3 2T a20 ) p\u0302 3 q 6= 14( 2K1 + 2 ( K2 + K4 3 ) log ( 1 + 2\u03c3 2T a20 ) + 2K4p\u0302 log 4K4 b2p\u03020 )p\u0302 ( 1 + 2\u03c3 2T a20 ) p\u0302 3 + b2p\u03020 q = 1 4 .\nNote that by Holder inequality E [ Ep\u0302T ] = E [ a (1\u22122q)p\u0302 T+1 E p\u0302 T \u00d7 a \u2212(1\u22122q)p\u0302 T+1 ] \u2264 Ep\u0302 [ a1\u22122qT+1 ET ] E1\u2212p\u0302 [ a \u2212 (1\u22122q)p\u03021\u2212p\u0302 T+1\n] = Ep\u0302 [ a1\u22122qT+1 ET ] E1\u2212p\u0302 [ (1 + H\u0303T /a 2 0) 2(1\u22122q)p\u0302 3(1\u2212p\u0302)\n] (a) = Ep\u0302 [ a1\u22122qT+1 ET ] E1\u2212p\u0302 [ (1 + H\u0303T /a 2 0) 2pp\u0302 3(1\u2212p\u0302)\n] (b)\n\u2264 Ep\u0302 [ a1\u22122qT+1 ET ] E 2pp\u0302 3 [ 1 + H\u0303T /a 2 0 ] ,\nwhere (a) is by 1\u2212 2q = p, (b) is due to 2pp\u03023(1\u2212p\u0302) = 2p(1\u2212p) 2p+1 < 1.\nFirst, if q 6= 14 , we have\nE [ a1\u22122qT+1 ET ] \u2264 K1 +K2E [( H\u0303T /a 2 0 ) 4q\u22121 3 ] +K4E [ D1\u22122pT ] (c)\n\u2264 K1 +K2E 1\u22122p 3 [ H\u0303T /a 2 0 ] +K4E 1\u22122p p\u0302 [ Dp\u0302T ] (d)\n\u2264 K1 +K2 ( 2\u03c32T/a20 ) 1\u22122p 3 +K4E 1\u22122p p\u0302 [ Ep\u0302T ] where (c) is by 4q\u221213 = 1\u22122p 3 \u2264 1 and p \u2265 3\u2212 \u221a 7 2 \u21d2 1 \u2212 2p \u2264 3(1\u2212p) 4\u2212p = p\u0302, (d) is by E [ H\u0303T ] \u2264\n2\u03c32T and E [ Dp\u0302T ] \u2264 E [ Ep\u0302T ] . 
Then we know\nE [ Ep\u0302T ] \u2264 Ep\u0302 [ a1\u22122qT+1 ET ] E 2pp\u0302 3 [ 1 + H\u0303T /a 2 0 ] \u2264 ( K1 +K2 ( 2\u03c32T/a20 ) 1\u22122p 3 +K4E 1\u22122p p\u0302 [ Ep\u0302T ])p\u0302 ( 1 + 2\u03c32T/a20 ) 2pp\u0302 3 .\nIf K4E 1\u22122p p\u0302 [ Ep\u0302T ] \u2264 K1 +K2 ( 2\u03c32T/a20 ) 1\u22122p 3 , we know\nE 1\u22122p p\u0302 [ Ep\u0302T ] \u2264 K1 K4 + K2 K4 ( 2\u03c32T a20 ) 1\u22122p 3\n\u21d2 E [ Ep\u0302T ] \u2264 ( K1 K4 + K2 K4 ( 2\u03c32T a20 ) 1\u22122p 3 ) p\u0302 1\u22122p\n\u2264 (\n2K1 K4\n) p\u0302 1\u22122p\n+ ( 2K2 K4 ) p\u0302 1\u22122p ( 2\u03c32T a20 ) p\u0302 3 .\nIf K4E 1\u22122p p\u0302 [ Ep\u0302T ] \u2265 K1 +K2 ( 2\u03c32T/a20 ) 1\u22122p 3 , then we know\nE [ Ep\u0302T ] \u2264 (2K4)p\u0302 E1\u22122p [ Ep\u0302T ]( 1 + 2\u03c32T\na20\n) 2pp\u0302 3\n\u21d2 E [ Ep\u0302T ] \u2264 (2K4) p\u0302 2p ( 1 + 2\u03c32T\na20\n) p\u0302 3\n.\nCombining two results, we know when q 6= 14\nE [ Ep\u0302T ] \u2264 (\n2K1 K4\n) p\u0302 1\u22122p\n+ ( 2K2 K4 ) p\u0302 1\u22122p ( 2\u03c32T a20 ) p\u0302 3 + (2K4) p\u0302 2p ( 1 + 2\u03c32T a20 ) p\u0302 3\n\u2264 (\n2K1 K4\n) p\u0302 1\u22122p\n+ (( 2K2 K4 ) p\u0302 1\u22122p + (2K4) p\u0302 2p )( 1 + 2\u03c32T a20 ) p\u0302 3 .\nFollowing a similar approach, we can prove for q = 14 ,there is\nE [ Ep\u0302T ] \u2264 K1 +K2 log(1 + 2\u03c32T a20 ) + K4 p\u0302 log 1 + E [ Ep\u0302T ] b2p\u03020 p\u0302(1 + 2\u03c32T a20 ) p\u0302 3\nNow we use Lemma I.9 to get\nE [ Ep\u0302T ] \u2264 2K1 + 2K2 log(1 + 2\u03c32Ta20 ) + 2K4 p\u0302 log 4K4 ( 1 + 2\u03c3 2T a20 ) p\u0302 3 b2p\u03020 p\u0302( 1 + 2\u03c32T a20 ) p\u0302 3 + b2p\u03020\n= ( 2K1 + 2 ( K2 +\nK4 3\n) log ( 1 + 2\u03c32T\na20\n) +\n2K4 p\u0302 log 4K4\nb2p\u03020\n)p\u0302( 1 + 2\u03c32T\na20\n) p\u0302 3\n+ b2p\u03020 .\nFinally, we have\nE [ Ep\u0302T ] \u2264 ( 2K1 K4 ) p\u0302 1\u22122p + (( 2K2 K4 ) p\u0302 1\u22122p + (2K4) p\u0302 2p )( 1 + 2\u03c3 2T a20 ) p\u0302 3 q 6= 14( 2K1 + 2 ( K2 + K4 3 ) log ( 1 + 2\u03c3 2T a20 ) + 2K4p\u0302 log 4K4 b2p\u03020 )p\u0302 ( 1 + 2\u03c3 2T a20 ) p\u0302 3 + b2p\u03020 q = 1 4 .\nCase 2: E [ Dp\u0302T ] \u2265 E [ Ep\u0302T ] . In this case, we will finally prove\nE [ Dp\u0302T ] \u2264 ( K5 + ( K6 +\nK7 3\n) log ( 1 + 2\u03c32T\na20\n) +K7 log\nK8 +K9 b0\n) p\u0302 1\u2212p ( 1 + 2\u03c32T\na20\n) p\u0302 3\nNote that by Holder inequality\nE [ Dp\u0302T ] = E [ a p\u0302q 1\u2212p T+1D p\u0302 T \u00d7 a \u2212 p\u0302q1\u2212p T+1 ] \u2264 E p\u0302 1\u2212p [ aqT+1D 1\u2212p T ] E 1\u2212p\u2212p\u0302 1\u2212p [ a \u2212 p\u0302q1\u2212p\u2212p\u0302 T+1\n] = E p\u0302 1\u2212p [ aqT+1D 1\u2212p T ] E 1\u2212p\u2212p\u0302 1\u2212p [( 1 + H\u0303T /a 2 0 ) 2p\u0302q 3(1\u2212p\u2212p\u0302)\n] = E p\u0302 1\u2212p [ aqT+1D 1\u2212p T ] E p\u0302 3 [ 1 + H\u0303T /a 2 0 ] ,\nwhere the last step is by 2p\u0302q3(1\u2212p\u2212p\u0302) = (1\u2212p)p\u0302 3(1\u2212p\u2212p\u0302) = 1. We know\nE [ aqT+1D 1\u2212p T ] \u2264 K5 +K6E [ log\na20 + H\u0303T a20\n] +K7E log K8 +K9 ( 1 + H\u0303T /a 2 0 )1/3 b0 (e)\n\u2264 K5 +K6 log a20 + E\n[ H\u0303T ] a20 +K7 log K8 +K9E [( 1 + H\u0303T /a 2 0 )1/3] b0\n(f) \u2264 K5 +K6 log a20 + E\n[ H\u0303T ] a20 +K7 log K8 +K9 ( 1 + E [ H\u0303T ] /a20 )1/3 b0\n(g) \u2264 K5 +K6 log a20 + 2\u03c3 2T\na20 +K7 log\nK8 +K9 ( 1 + 2\u03c32T/a20 )1/3 b0 ,\nwhere (e) is by the concavity of log function, (f) holds due to E [ X1/3 ] \u2264 E1/3 [X] for X \u2265 0, (g)\nis by E [ H\u0303T ] \u2264 2\u03c32T . 
Then we have E [ Dp\u0302T ] \u2264 E p\u0302 1\u2212p [ aqT+1D 1\u2212p T ] E p\u0302 3 [ 1 + H\u0303T /a 2 0\n] \u2264 ( K5 +K6 log a20 + 2\u03c3 2T\na20 +K7 log\nK8 +K9 ( 1 + 2\u03c32T/a20 )1/3 b0 ) p\u0302 1\u2212p ( 1 + 2\u03c32T a20 ) p\u0302 3\n\u2264 ( K5 + ( K6 +\nK7 3\n) log ( 1 + 2\u03c32T\na20\n) +K7 log\nK8 +K9 b0\n) p\u0302 1\u2212p ( 1 + 2\u03c32T\na20\n) p\u0302 3\n.\nFinally, combining Case 1 and Case 2 and using (31), we get the desired result and finish the proof E [ Hp\u0302T ] \u22644 max { E [ Ep\u0302T ] ,E [ Dp\u0302T ]}\n\u22644 ( 2K1 K4 ) p\u0302 1\u22122p + (( 2K2 K4 ) p\u0302 1\u22122p + (2K4) p\u0302 2p )( 1 + 2\u03c3 2T a20 ) p\u0302 3 q 6= 14( 2K1 + 2 ( K2 + K4 3 ) log ( 1 + 2\u03c3 2T a20 ) + 2K4p\u0302 log 4K4 b2p\u03020 )p\u0302 ( 1 + 2\u03c3 2T a20 ) p\u0302 3 + b2p\u03020 q = 1 4\n+ 4 ( K5 + ( K6 +\nK7 3\n) log ( 1 + 2\u03c32T\na20\n) +K7 log\nK8 +K9 b0\n) p\u0302 1\u2212p ( 1 + 2\u03c32T\na20\n) p\u0302 3\n.\nG ANALYSIS OF META-STORM-SG FOR GENERAL p\nIn this section, we give a general analysis for our Algorithm META-STORM-SG. Readers will see p = 12 is a very special corner case. First we recall the choices of at and bt:\nat+1 = (1 + t\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u2016/a20)\u22122/3,\nbt = (b 1/p 0 + t\u2211 i=1 \u2016di\u20162)p/aqt+1\nwhere p, q satisfy p+ 2q = 1, p \u2208 [ 1 4 , 1 2 ] . a0 > 0 and b0 > 0 are absolute constants. Naturally, we have a1 = 1. We will finally prove the following theorem.\nTheorem G.1. Under the assumptions 1-4, by defining p\u0302 = 2(1\u2212p)3 \u2208 [ 1 3 , 1 2 ] , we have\nE [ Hp\u0302T ] \u22644C91 [( 2\u03c32T )p\u0302 \u2264 4C9]+ 4C101 [(2\u03c32T )p\u0302 \u2264 4C10]\n+ 4 ( 2C1 C3 ) p\u0302 1\u22122p + (( 2C2 C3 ) p\u0302 1\u22122p + (2C3) p\u0302 2p )( 1 + 2(2\u03c32T) p\u0302 a2p\u03020 ) 1 3 p 6= 12( C1 + ( C2 p\u0302 + C3 p\u0302 ) log ( 1 + (2\u03c32T) p\u0302\nmin{a2p\u03020 /2,4b2p\u03020 }\n))p\u0302( 1 + 2(2\u03c32T) p\u0302\na2p\u03020\n) 1 3\np = 12\n.\n+ 4 ( C4 + (3C5 + C6) log a 2/3 0 + 2 ( 2\u03c32T )1/3 a 2/3 0 + C6 log 2C7 + 2C8 b0 ) p\u0302 1\u2212p\n\u00d7 ( 1 + 2 ( 2\u03c32T )p\u0302 a2p\u03020 )1/3 where Ci, i \u2208 [10] are some constants only depending on a0, b0, \u03c3, G\u0302, \u03b2, p, q, F (x1) \u2212 F \u2217. To simplify our final bound, we only indicate the dependency on \u03b2 and F (x1)\u2212F \u2217 when \u03c3 6= 0 and T is big enough to eliminate C9 and C10\nE [ Hp\u0302T ] = O (( (F (x1)\u2212 F \u2217) p\u0302 1\u2212p + \u03b2 p\u0302 p log p\u0302 1\u2212p \u03b2 + \u03b2 p\u0302 p log p\u0302 1\u2212p ( 1 + \u03c32T )) (1 + \u03c32T ) p\u0302 3 ) .\nRemark G.2. For all i \u2208 [10], the constant Ci will be defined in the proof that follows.\nAgain, by the concavity of xp\u0302, we have the following convergence theorem, of which the proof is omitted.\nTheorem G.3. Under the assumptions 1-4 by defining p\u0302 = 2(1\u2212p)3 \u2208 [ 1 3 , 1 2 ] , when \u03c3 6= 0 and T is big enough, we have E [ \u2016\u2207F (xout)\u20162p\u0302 ] = O ( (F (x1)\u2212 F \u2217) p\u0302 1\u2212p + \u03b2 p\u0302 p log p\u0302 1\u2212p \u03b2 + \u03b2 p\u0302 p log p\u0302 1\u2212p ( 1 + \u03c32T ))( 1 T p\u0302 + \u03c32p\u0302/3 T 2p\u0302/3 ) .\nHere, we give a more explicit convergence dependency for p = 12 used in Theorem 2.1. Theorem G.4. Under the assumptions 1-4, when p = 12 , by setting \u03bb = min { 1, (a0/G\u0302) 2 } (which\nis used in C4 to C8 and C10) we get the best dependency on G\u0302. 
For simplicity, under the setting a0 = b0 = \u03b7 = 1, we have E [ \u2016\u2207F (xout)\u20162/3 ] = O W11 [( \u03c32T )1/3 \u2264W1] T 1/3 + ( W2 +W3 log 2/3 ( 1 + \u03c32T ))( 1 T 1/3 + \u03c32/9 T 2/9 )\nwhere W1 = O ( F (x1)\u2212 F \u2217 + \u03c32 + G\u03022 + \u03b2 ( 1 + G\u03022 ) log ( \u03b2 + G\u03022\u03b2 )) , W2 =\nO ( (F (x1)\u2212 F \u2217)2/3 + \u03c34/3 + G\u03024/3 + (1 + G\u03024/3)\u03b22/3 log2/3 ( \u03b2 + G\u03022\u03b2 )) and W3 =\nO ( (1 + G\u03024/3)\u03b22/3 ) .\nTo start with, we first state the following useful bound for at:\nLemma G.5. \u2200t \u2265 1, there is\na \u22123/2 t+1 \u2212 a \u22123/2 t \u2264 (G\u0302/a0)2.\nProof.\na \u22123/2 t+1 \u2212 a \u22123/2 t = \u2016\u2207f(xt, \u03bet)\u20162/a20 \u2264 (G\u0302/a0)2.\nG.1 ANALYSIS OF ET\nFollowing a similar approach, we define a random time \u03c4 satisfying\n\u03c4 = max {[T ] , at \u2265 C0} where\nC0 := min { 1, (a0/G\u0302) 4 } .\nNote that {\u03c4 = t} = {at \u2265 C0, at+1 < C0} \u2208 Ft, this means \u03c4 is a stopping time. We now prove a useful proposition of \u03c4 :\nLemma G.6. \u2200t \u2265 \u03c4 + 1, we have\na\u22121t+1 \u2212 a \u22121 t \u22642/3.\nProof. Let h(y) = y2/3. Due to the concavity, we know h(y1) \u2212 h(y2) \u2264 h\u2032(y2)(y1 \u2212 y2) = 2(y1\u2212y2) 3y\n1/3 2\n. Now we have\na\u22121t+1 \u2212 a \u22121 t = (a \u22123/2 t + \u2016\u2207f(xt, \u03bet)\u20162/a20)2/3 \u2212 (a \u22123/2 t ) 2/3\n\u2264 2a 1/2 t \u2016\u2207f(xt, \u03bet)\u20162\n3a20 \u2264 2a\n1/2 t G\u0302 2\n3a20 \u2264 2 3\nwhere the last step is by at \u2264 a\u03c4+1 < C0 \u2264 (a0/G\u0302)4. G.1.1 BOUND ON E [ E\u03c4,3/2\u22122` ] FOR ` \u2208 [ 1 4 , 1 2 ] Similar to the analysis of META-STORM, we choose to bound E [ E\u03c4,3/2\u22122` ] . We first prove the\nfollowing bound on E [ E\u03c4,3/2\u22122` ] :\nLemma G.7. For any ` \u2208 [ 1 4 , 1 2 ] , we have\nE [ E\u03c4,3/2\u22122` ] \u2264 \u03c3 2 + 24a20 + 4G\u0302 2\nC 2`\u22121/2 0\n+ 2\u03b72\u03b22\nC 2`\u22121/2 0\nE [ T\u2211 t=1 \u2016dt\u20162 b2t ] .\nProof. 
We start from Lemma E.2,\nat+1\u2016 t\u20162 \u2264 \u2016 t\u20162 \u2212 \u2016 t+1\u20162 + 2\u2016Zt+1\u20162\n+ 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1.\nSumming up from 1 to \u03c4 \u2212 1 and taking the expectations on both sides, we obtain\nE [E\u03c4\u22121,1] \u2264 E [ \u03c4\u22121\u2211 t=1 \u2016 t\u20162 \u2212 \u2016 t+1\u20162 + 2\u2016Zt+1\u20162\n+ 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1 ]\n= E [ \u2016 1\u20162 \u2212 \u2016 \u03c4\u20162 + \u03c4\u22121\u2211 t=1 2\u2016Zt+1\u20162\n+ 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1 ]\n\u2264 E [ \u2016 1\u20162 \u2212 \u2016 \u03c4\u20162 + T\u2211 t=1 2\u2016Zt+1\u20162\n+ 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + \u03c4\u22121\u2211 t=1 Mt+1\n]\n\u21d2 E [ E\u03c4\u22121,1 + \u2016 \u03c4\u20162 ] \u2264 \u03c32 + E [ T\u2211 t=1 2\u2016Zt+1\u20162\n+ 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + \u03c4\u22121\u2211 t=1 Mt+1 ] Because C0 \u2264 1, a\u03c4+1 \u2264 1, 2`\u2212 1/2 \u2265 0 and 3/2\u2212 2` \u2265 0, so we have\nC 2`\u22121/2 0 a 3/2\u22122` \u03c4+1 \u2264 1.\nBesides, for t \u2264 \u03c4 \u2212 1, by the definition of \u03c4 , we have C0 \u2264 at+1, then we know\nC 2`\u22121/2 0 a 3/2\u22122` t+1 \u2264 a 2`\u22121/2 t+1 a 3/2\u22122` t+1 = at+1.\nThese two results give us\nC 2`\u22121/2 0 E\u03c4,3/2\u22122` = C 2`\u22121/2 0 \u03c4\u2211 t=1 a 3/2\u22122` t+1 \u2016 t\u20162 \u2264 \u03c4\u22121\u2211 t=1 at+1\u2016 t\u20162 + \u2016 \u03c4\u20162\n= E\u03c4\u22121,1 + \u2016 \u03c4\u20162,\nwhich implies\nE [ C\n2`\u22121/2 0 E\u03c4,3/2\u22122`\n] \u2264 \u03c32 + E [ T\u2211 t=1 2\u2016Zt+1\u20162\n+ 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + \u03c4\u22121\u2211 t=1 Mt+1 ] LetMt := \u2211t i=1Mi \u2208 Ft with M1 = 0. For s \u2264 t, we know E [Mt|Fs] = 0, hence Mt is a martingale. Note that \u03c4 is a bounded stopping time, hence by optional sampling theorem\nE [ \u03c4\u22121\u2211 t=1 Mt+1 ] = E [M\u03c4 ] = 0.\nNow we have\nE [ C\n2`\u22121/2 0 E\u03c4,3/2\u22122`\n] \u2264 \u03c32 + E [ T\u2211 t=1 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 ] .\nBy Lemma E.3\nE [ \u2016Zt+1\u20162 | Ft ] \u2264 \u03b72\u03b22 \u2016dt\u2016 2\nb2t .\nBesides, under our current choice, at+1 \u2208 Ft, E [ a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162|Ft ] =a2t+1E [ \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162|Ft\n] \u2264a2t+1E [ \u2016\u2207f(xt+1, \u03bet+1)\u20162|Ft ] .\nUsing these two bounds, we have\nE [ C\n2`\u22121/2 0 E\u03c4,3/2\u22122`\n] \u2264 \u03c32 + E [ T\u2211 t=1 2\u03b72\u03b22 \u2016dt\u20162 b2t + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u20162 ]\n= \u03c32 + E [ T\u2211 t=1 2\u03b72\u03b22 \u2016dt\u20162 b2t + 2a20 \u00d7 \u2016\u2207f(xt+1, \u03bet+1)\u20162/a20 (1 + \u2211t i=1 \u2016\u2207f(xi, \u03bei)\u20162/a20)4/3 ]\n\u2264 \u03c32 + 24a20 + 4G\u03022 + 2\u03b72\u03b22E [ T\u2211 t=1 \u2016dt\u20162 b2t ] ,\nwhere the last inequality holds by Lemma I.4. Dividing both sides by C2`\u22121/20 , we get the desired bound immediately\nE [ E\u03c4,3/2\u22122` ] \u2264 \u03c3 2 + 24a20 + 4G\u0302 2\nC 2`\u22121/2 0\n+ 2\u03b72\u03b22\nC 2`\u22121/2 0\nE [ T\u2211 t=1 \u2016dt\u20162 b2t ] .\nG.1.2 BOUND ON E [ET,1\u22122`] FOR ` \u2208 [ 1 4 , 1 2 ] With the previous result on E [ E\u03c4,3/2\u22122` ] , we can bound E [ET,1\u22122`].\nLemma G.8. 
For any ` \u2208 [ 1 4 , 1 2 ] , we have\nE [ET,1\u22122`] \u2264 C1(`) + C2(`) E [( H\u0302T /a 2 0 ) 4`\u22121 3 ] ` > 14 E [ log (\n1 + H\u0302T /a 2 0 )] ` = 14\n+ E [ T\u2211 t=1 ( G\u03022 a20C 2`\u22121/2 0 a2`t+1 + 1 ) 6\u03b72\u03b22 \u2016dt\u20162 a2`t+1b 2 t ] ,\nwhere\nC1(`) := 3\n\u03c32 + 6G\u03022 + G\u03022 ( \u03c32 + 24a20 + 4G\u0302 2 )\na20C 2`\u22121/2 0 C2(`) := { 18a20 4`\u22121 ` > 1 4\n6a20 ` = 1 4\n.\nProof. Starting from Lemma E.2 as well\nat+1\u2016 t\u20162 \u2264 \u2016 t\u20162 \u2212 \u2016 t+1\u20162 + 2\u2016Zt+1\u20162 + 2a2t+1\u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 +Mt+1.\nDividing both sides by a2`t+1 and taking expectations, we have\nE [ a1\u22122`t+1 \u2016 t\u20162 ] \u2264 E\n[ \u2016 t\u20162\na2`t+1 \u2212 \u2016 t+1\u2016 2 a2`t+1 + 2 a2`t+1 \u2016Zt+1\u20162\n+ 2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 + Mt+1 a2`t+1\n] . (32)\nNote that under our current choice, at+1 \u2208 Ft, hence we have E [ Mt+1 a2`t+1 ] = E [ E [Mt+1|Ft] a2`t+1 ] = 0;\nE [ \u2016Zt+1\u20162\na2`t+1\n] = E [ E [ \u2016Zt+1\u20162|Ft ] a2`t+1 ] \u2264 E [ \u03b72\u03b22 \u2016dt\u20162 a2`t+1b 2 t ] ;\nE [ a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162 ] = E [ a2\u22122`t+1 E [ \u2016\u2207f(xt+1, \u03bet+1)\u2212\u2207F (xt+1)\u20162|Ft ]] \u2264 E [ a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u20162 ] ,\nwhere the second bound holds by Lemma E.3. Plugging these three bounds into (32), we know\nE [ a1\u22122`t+1 \u2016 t\u20162 ] \u2264 E\n[ \u2016 t\u20162\na2`t+1 \u2212 \u2016 t+1\u2016 2 a2`t+1 + 2\u03b72\u03b22 \u2016dt\u20162 a2`t+1b 2 t\n+ 2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u20162 ] .\nNow sum up from 1 to T to get\nE [ET,1\u22122`]\n\u2264E [ T\u2211 t=1 \u2016 t\u20162 a2`t+1 \u2212 \u2016 t+1\u2016 2 a2`t+1 + 2\u03b72\u03b22 \u2016dt\u20162 a2`t+1b 2 t + 2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u20162 ]\n\u2264\u03c32 + E T\u2211 t=1 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162\ufe38 \ufe37\ufe37 \ufe38\n(i)\n+2\u03b72\u03b22 T\u2211 t=1 \u2016dt\u20162 a2`t+1b 2 t + T\u2211 t=1\n2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u20162\ufe38 \ufe37\ufe37 \ufe38 (ii) . 
(33)\nFor (i), we split the time by \u03c4 T\u2211 t=1 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 = \u03c4\u2211 t=1 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162 + T\u2211 t=\u03c4+1 ( a\u22122`t+1 \u2212 a \u22122` t ) \u2016 t\u20162\n\u2264 \u03c4\u2211 t=1 ( a \u22123/2 t+1 \u2212 a \u22123/2 t ) a 3/2\u22122` t+1 \u2016 t\u20162 + T\u2211 t=\u03c4+1 ( a\u22121t+1 \u2212 a \u22121 t ) a1\u22122`t+1 \u2016 t\u20162\n\u2264 G\u0302 2\na20 \u03c4\u2211 t=1 a 3/2\u22122` t+1 \u2016 t\u20162 + T\u2211 t=\u03c4+1 2 3 a1\u22122`t+1 \u2016 t\u20162\n\u2264 G\u0302 2\na20 \u03c4\u2211 t=1 a 3/2\u22122` t+1 \u2016 t\u20162 + T\u2211 t=1 2 3 a1\u22122`t+1 \u2016 t\u20162\n= G\u03022\na20 E\u03c4,3/2\u22122` +\n2 3 ET,1\u22122`,\nwhere the second inequality is by Lemma G.5 and Lemma G.6.\nNext, for (ii), we use Lemma I.2 to get\nT\u2211 t=1 2a2\u22122`t+1 \u2016\u2207f(xt+1, \u03bet+1)\u20162\n=2a20 T\u2211 t=1 \u2016\u2207f(xt+1, \u03bet+1)\u20162/a20( 1 + \u2211t i=1 \u2016\u2207f(xi, \u03bei)\u20162/a20 )4(1\u2212`)/3\n\u22642a20 \u00d7 3G\u03022 a20 + 11\u22124(1\u2212`)/3 (\u2211T i=1 \u2016\u2207f(xi,\u03bei)\u2016 2 a20 )1\u22124(1\u2212`)/3 4(1\u2212 `)/3 < 1 log ( 1 + \u2211T i=1 \u2016\u2207f(xi,\u03bei)\u2016 2\na20\n) 4(1\u2212 `)/3 = 1\n\n=6G\u03022 + 6a20 4`\u22121 ( H\u0302T /a 2 0 ) 4`\u22121 3\n` > 14 2a20 log ( 1 + H\u0302T /a 2 0 ) ` = 14 .\nPlugging these two bounds into (33), we have\nE [ET,1\u22122`] \u2264 \u03c32 + 6G\u03022 + E\n[ G\u03022\na20 E\u03c4,3/2\u22122` +\n2 3 ET,1\u22122` + 2\u03b7 2\u03b22 T\u2211 t=1 \u2016dt\u20162 a2`t+1b 2 t\n]\n+ 6a20 4`\u22121 ( H\u0302T /a 2 0 ) 4`\u22121 3\n` > 14 2a20 log ( 1 + H\u0302T /a 2 0 ) ` = 14 .\nThus\nE [ET,1\u22122`] \u2264 3 ( \u03c32 + 6G\u03022 ) + 3G\u03022 a20 E [ E\u03c4,3/2\u22122` ] + 18a20 4`\u22121E [( H\u0302T /a 2 0 ) 4`\u22121 3 ] ` > 14 6a20E [ log ( 1 + H\u0302T /a 2 0 )] ` = 14\n+ 6\u03b72\u03b22E [ T\u2211 t=1 \u2016dt\u20162 a2`t+1b 2 t ]\nPlugging the bound on E [ E\u03c4,3/2\u22122` ] in Lemma G.7, we finally get\nE [ET,1\u22122`] \u2264 3 \u03c32 + 6G\u03022 + G\u03022 ( \u03c32 + 24a20 + 4G\u0302 2 )\na20C 2`\u22121/2 0 \ufe38 \ufe37\ufe37 \ufe38\nC1(`)\n+C2(`) E [( H\u0302T /a 2 0 ) 4`\u22121 3 ] ` > 14 E [ log (\n1 + H\u0302T /a 2 0 )] ` = 14\n+ E [ T\u2211 t=1 ( G\u03022 a20C 2`\u22121/2 0 a2`t+1 + 1 ) 6\u03b72\u03b22 \u2016dt\u20162 a2`t+1b 2 t ] ,\nwhere\nC2(`) :=\n{ 18a20 4`\u22121 ` > 1 4\n6a20 ` = 1 4\n.\nG.1.3 BOUND ON E [ ET,1/2 ] The following bound on E [ ET,1/2 ] will be useful when we bound DT .\nCorollary G.9. We have\nE [ ET,1/2 ] \u2264 C1 (1/4) + C2 (1/4)E [ log ( 1 + H\u0302T /a 2 0 )] + E\n[ T\u2211 t=1 ( G\u03022 a20 a 1/2 t+1 + 1 ) 6\u03b72\u03b22 \u2016dt\u20162 a 1/2 t+1b 2 t ] .\nProof. Take ` = 14 in Lemma G.8.\nG.1.4 BOUND ON E [ a1\u22122qT+1 ET ] Lemma G.10. Given p+ 2q = 1,p \u2208 [ 1 4 , 1 2 ] , we have\nE [ a1\u22122qT+1 ET ] \u2264 C1 + C2E [( H\u0302T /a 2 0 ) 4q\u22121 3 ] + C3E [ D1\u22122pT ] q > 14 C1 + C2E [ log ( 1 + H\u0302T /a 2 0 )] + C3E [ log (\n1 + DT b20 )] q = 14 ,\nwhere C1 := C1(q)\nC2 := C2(q)\nC3 := ( G\u03022 a20C 2q\u22121/2 0 + 1 ) 6\u03b72\u03b22 4q\u22121 q > 1 4( G\u03022\na20 + 1 ) 6\u03b72\u03b22 q = 14 .\nProof. 
When p 6= 12 \u21d4 q > 1 4 , by Lemma G.8, taking ` = q, we know E [ET,1\u22122q] \u2264 C1(q) + C2(q)E [( H\u0302T /a 2 0 ) 4q\u22121 3 ] + E [ T\u2211 t=1 ( G\u03022 a20C 2q\u22121/2 0 a2qt+1 + 1 ) 6\u03b72\u03b22 \u2016dt\u20162 a2qt+1b 2 t ]\n\u2264 C1(q) + C2(q)E [( H\u0302T /a 2 0 ) 4q\u22121 3 ] + ( G\u03022\na20C 2q\u22121/2 0\n+ 1 ) 6\u03b72\u03b22E [ T\u2211 t=1 \u2016dt\u20162 a2qt+1b 2 t ] (a) = C1(q) + C2(q)E [( H\u0302T /a 2 0 ) 4q\u22121 3 ]\n+\n( G\u03022\na20C 2q\u22121/2 0\n+ 1 ) 6\u03b72\u03b22 \u00d7 E T\u2211 t=1 \u2016dt\u20162( b 1/p 0 + \u2211t i=1 \u2016di\u20162 )2p \n(b) \u2264 C1(q) + C2(q)E [( H\u0302T /a 2 0 ) 4q\u22121 3 ] + ( G\u03022\na20C 2q\u22121/2 0\n+ 1 ) 6\u03b72\u03b22E [ D1\u22122pT 1\u2212 2p ] (c) = C1(q) + C2(q)E [( H\u0302T /a 2 0 ) 4q\u22121 3 ] + ( G\u03022\na20C 2q\u22121/2 0\n+ 1\n) 6\u03b72\u03b22\n4q \u2212 1 E [ D1\u22122pT ] ,\nwhere (a) is by\na2qt+1b 2 t = a 2q t+1\n( b 1/p 0 + \u2211t i=1 \u2016di\u20162 )2p a2qt+1 = ( b 1/p 0 + t\u2211 i=1 \u2016di\u20162 )2p ,\n(b) is by Lemma I.1, (c) is by 1\u2212 2p = 4q \u2212 1. When p = 12 \u21d4 q = 1 4 , by a similar argument, we have\nE [ET,1\u22122q] \u2264 C1(q) + C2(q)E [ log ( 1 + H\u0302T /a 2 0 )] +\n( G\u03022\na20 + 1\n) 6\u03b72\u03b22E [ log ( 1 +\nDT b20\n)] .\nNow we can define\nC3 := ( G\u03022 a20C 2q\u22121/2 0 + 1 ) 6\u03b72\u03b22 4q\u22121 q > 1 4( G\u03022\na20 + 1 ) 6\u03b72\u03b22 q = 14 .\nThe final step is by noticing for 1\u2212 2q = p > 0\nET,1\u22122q = T\u2211 t=1 a1\u22122qt+1 \u2016 t\u20162 \u2265 a 1\u22122q T+1 T\u2211 t=1 \u2016 t\u20162 = a1\u22122qT+1 ET .\nG.2 ANALYSIS OF DT\nWe will prove the following bound Lemma G.11. Given p+ 2q = 1,p \u2208 [ 1 4 , 1 2 ] , we have\nE [ aqT+1D 1\u2212p T ] \u2264 C4 + C5E [ log\na20 + H\u0302T a20\n] + C6E log C7 + C8 ( 1 + H\u0302T /a 2 0 )1/3 b0 where\nC4 := b 1 p\u22121 0 + 2\n\u03b7 (F (x1)\u2212 F \u2217) +\n\u03bbC1 (1/4)\n\u03b7\u03b2max , C5 :=\n\u03bbC2 (1/4)\n\u03b7\u03b2max ,\nC6 := (C7 + C8)\n1 p\u22121\n1\u2212 p , C7 :=\n( 1 + 6\u03bbG\u03022\na20\n) \u03b7\u03b2max, C8 := ( 1\n\u03bb + 6\u03bb\n) \u03b7\u03b2max,\n\u03bb > 0 can be any number.\nProof. The same as before, we start from Lemma E.4 E [ aqT+1D 1\u2212p T ] \u2264 b 1 p\u22121 0 + 2\n\u03b7 (F (x1)\u2212 F \u2217)+E [ T\u2211 t=1 ( \u03b7\u03b2max + \u03b7\u03b2max a 1/2 t+1\u03bb \u2212 bt ) \u2016dt\u20162 b2t ] + \u03bbE [ ET,1/2 ] \u03b7\u03b2max\nwhere \u03bb > 0 is used to reduce the order of G\u0302 in the final bound. In the proof of the general case , we don\u2019t choose \u03bb explicitly anymore. Plugging in the bound on E [ ET,1/2 ] in Corollary G.9, we know\nE [ aqT+1D 1\u2212p T ] \u2264 b 1 p\u22121 0 + 2\n\u03b7 (F (x1)\u2212 F \u2217) +\n\u03bbC1 (1/4) \u03b7\u03b2max + \u03bbC2 (1/4) \u03b7\u03b2max E\n[ log\na20 + H\u0302T a20\n]\n+ E [ T\u2211 t=1 (( 1 + 6\u03bbG\u03022 a20 ) \u03b7\u03b2max + ( 1 \u03bb + 6\u03bb ) \u03b7\u03b2max a 1/2 t+1 \u2212 bt ) \u2016dt\u20162 b2t ]\n= C4 + C5E [ log\na20 + H\u0302T a20\n]\n+ E T\u2211 t=1 (( 1 + 6\u03bbG\u03022 a20 ) \u03b7\u03b2max + ( 1 \u03bb + 6\u03bb ) \u03b7\u03b2max a 1/2 t+1 \u2212 bt ) \u2016dt\u20162\nb2t\ufe38 \ufe37\ufe37 \ufe38 (i)\n . 
(34)\nApplying Lemma E.5 to (i), we get\n(i) \u2264\n(( 1 + 6\u03bbG\u0302 2\na20 + 1\u03bb + 6\u03bb\n) \u03b7\u03b2max ) 1 p\u22121\n1\u2212 p\n\u00d7 log\n( 1 + 6\u03bbG\u0302 2\na20\n) \u03b7\u03b2max + ( 1 \u03bb + 6\u03bb ) \u03b7\u03b2max ( 1 + H\u0302T /a 2 0 )1/3 b0\n= C6 log C7 + C8\n( 1 + H\u0302T /a 2 0 )1/3 b0\nBy using this bound to (34), the proof is completed.\nG.3 COMBINE THE BOUNDS AND THE FINAL PROOF.\nFrom Lemma G.10, we have\nE [ a1\u22122qT+1 ET ] \u2264 C1 + C2E [( H\u0302T /a 2 0 ) 4q\u22121 3 ] + C3E [ D1\u22122pT ] q > 14 C1 + C2E [ log ( 1 + H\u0302T /a 2 0 )] + C3E [ log (\n1 + DT b20 )] q = 14\nFrom Lemma G.11, we have\nE [ aqT+1D 1\u2212p T ] \u2264 C4 + C5E [ log\na20 + H\u0302T a20\n] + C6E log C7 + C8 ( 1 + H\u0302T /a 2 0 )1/3 b0 Now let\np\u0302 = 2(1\u2212 p) 3 \u2208 [ 1 3 , 1 2 ] .\nApply Lemma E.1, we have\nE [ Hp\u0302T ] \u2264 2p\u0302+1 max { E [ Ep\u0302T ] ,E [ Dp\u0302T ]} \u2264 4 max { E [ Ep\u0302T ] ,E [ Dp\u0302T ]} , (35)\nNow we can give the final proof of Theorem G.1.\nProof. First, we have\nE [ H\u0302p\u0302T ] = E ( T\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u20162 )p\u0302\n\u2264 E ( T\u2211 i=1 2\u2016\u2207F (xi)\u20162 + 2\u2016\u2207f(xi, \u03bei)\u2212\u2207F (xi)\u20162 )p\u0302\n= E (2HT + 2 T\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207F (xi)\u20162 )p\u0302\n\u2264 E 2p\u0302Hp\u0302T + ( 2\nT\u2211 i=1\n\u2016\u2207f(xi, \u03bei)\u2212\u2207F (xi)\u20162 )p\u0302\n= 2p\u0302E [ Hp\u0302T ] + E (2 T\u2211 i=1 \u2016\u2207f(xi, \u03bei)\u2212\u2207F (xi)\u20162 )p\u0302 \u2264 2p\u0302E [ Hp\u0302T ] + Ep\u0302 [( 2\nT\u2211 i=1\n\u2016\u2207f(xi, \u03bei)\u2212\u2207F (xi)\u20162 )]\n\u2264 2p\u0302E [ Hp\u0302T ] + ( 2\u03c32T )p\u0302 \u2264 22p\u0302+1 max{E [Ep\u0302T ] ,E [Dp\u0302T ]}+ (2\u03c32T )p\u0302 \u2264 4 max { E [ Ep\u0302T ] ,E [ Dp\u0302T ]} + ( 2\u03c32T )p\u0302 . (36)\nNow we consider following two cases:\nCase 1: E [ Ep\u0302T ] \u2265 E [ Dp\u0302T ] . In this case, we will finally prove\nE [ Ep\u0302T ] \u2264 ( 2C1 C3 ) p\u0302 1\u22122p + (( 2C2 C3 ) p\u0302 1\u22122p + (2C3) p\u0302 2p )( 1 + 2(2\u03c32T) p\u0302 a2p\u03020 ) 1 3 +C91 [( 2\u03c32T )p\u0302 \u2264 4C9] q 6= 14( C1 + ( C2 p\u0302 + C3 p\u0302 ) log ( 1 + (2\u03c32T) p\u0302 min{a2p\u03020 /2,4b2p\u03020 } ))p\u0302( 1 + 2(2\u03c32T) p\u0302 a2p\u03020 ) 1 3 +C91 [( 2\u03c32T )p\u0302 \u2264 4C9] q = 14 .\nwhere C9 is a constant. 
Note that by Hölder's inequality,
\[
\mathbb{E}\big[E_T^{\hat p}\big] = \mathbb{E}\Big[a_{T+1}^{(1-2q)\hat p}E_T^{\hat p}\cdot a_{T+1}^{-(1-2q)\hat p}\Big] \le \mathbb{E}^{\hat p}\big[a_{T+1}^{1-2q}E_T\big]\,\mathbb{E}^{1-\hat p}\Big[a_{T+1}^{-\frac{(1-2q)\hat p}{1-\hat p}}\Big] = \mathbb{E}^{\hat p}\big[a_{T+1}^{1-2q}E_T\big]\,\mathbb{E}^{1-\hat p}\Big[\big(1+\hat H_T/a_0^2\big)^{\frac{2(1-2q)\hat p}{3(1-\hat p)}}\Big]
\]
\[
\overset{(a)}{=} \mathbb{E}^{\hat p}\big[a_{T+1}^{1-2q}E_T\big]\,\mathbb{E}^{1-\hat p}\Big[\big(1+\hat H_T/a_0^2\big)^{\frac{2p\hat p}{3(1-\hat p)}}\Big]
\overset{(b)}{\le} \mathbb{E}^{\hat p}\big[a_{T+1}^{1-2q}E_T\big]\,\mathbb{E}^{\frac{2p}{3}}\Big[\big(1+\hat H_T/a_0^2\big)^{\hat p}\Big]
\le \mathbb{E}^{\hat p}\big[a_{T+1}^{1-2q}E_T\big]\,\mathbb{E}^{\frac{2p}{3}}\Big[1+\big(\hat H_T/a_0^2\big)^{\hat p}\Big],
\]
where (a) is by $1-2q = p$ and (b) is due to $\frac{2p}{3(1-\hat p)} = \frac{2p}{1+2p} < 1$.

First, if $q \neq \frac14$, we have
\[
\mathbb{E}\big[a_{T+1}^{1-2q}E_T\big] \le C_1 + C_2\,\mathbb{E}\big[(\hat H_T/a_0^2)^{\frac{4q-1}{3}}\big] + C_3\,\mathbb{E}\big[D_T^{1-2p}\big]
\overset{(c)}{\le} C_1 + C_2\,\mathbb{E}^{\frac{1-2p}{3\hat p}}\big[(\hat H_T/a_0^2)^{\hat p}\big] + C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[D_T^{\hat p}\big]
\overset{(d)}{\le} C_1 + C_2\Bigg(\frac{4\mathbb{E}[E_T^{\hat p}] + (2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Bigg)^{\frac{1-2p}{3\hat p}} + C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big],
\]
where (c) is by $\frac{4q-1}{3} = \frac{1-2p}{3} \le \frac{2-2p}{3} = \hat p$ and $p \ge \frac14 \Rightarrow 1-2p \le \frac{2-2p}{3} = \hat p$, and (d) is by (36) and $\mathbb{E}[D_T^{\hat p}] \le \mathbb{E}[E_T^{\hat p}]$. Then we know
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \mathbb{E}^{\hat p}\big[a_{T+1}^{1-2q}E_T\big]\,\mathbb{E}^{\frac{2p}{3}}\Big[1+(\hat H_T/a_0^2)^{\hat p}\Big]
\le \Bigg(C_1 + C_2\Big(\frac{4\mathbb{E}[E_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{1-2p}{3\hat p}} + C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big]\Bigg)^{\hat p}\Bigg(1 + \frac{4\mathbb{E}[E_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Bigg)^{\frac{2p}{3}}.
\]
If $4\,\mathbb{E}[E_T^{\hat p}] \le (2\sigma^2T)^{\hat p}$, we will get
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(C_1 + C_2\Big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{1-2p}{3\hat p}} + C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big]\Bigg)^{\hat p}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{2p}{3}}.
\]
If $C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}[E_T^{\hat p}] \le C_1 + C_2\big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)^{\frac{1-2p}{3\hat p}}$, we have
\[
\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big] \le \frac{C_1}{C_3} + \frac{C_2}{C_3}\Big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{1-2p}{3\hat p}}
\;\Rightarrow\; \mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(\frac{C_1}{C_3} + \frac{C_2}{C_3}\Big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{1-2p}{3\hat p}}\Bigg)^{\frac{\hat p}{1-2p}} \le \Big(\frac{2C_1}{C_3}\Big)^{\frac{\hat p}{1-2p}} + \Big(\frac{2C_2}{C_3}\Big)^{\frac{\hat p}{1-2p}}\Big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13}.
\]
If $C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}[E_T^{\hat p}] \ge C_1 + C_2\big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)^{\frac{1-2p}{3\hat p}}$, we have
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Big(2C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big]\Big)^{\hat p}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{2p}{3}}
= (2C_3)^{\hat p}\,\mathbb{E}^{1-2p}\big[E_T^{\hat p}\big]\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac{2p}{3}}
\;\Rightarrow\; \mathbb{E}\big[E_T^{\hat p}\big] \le (2C_3)^{\frac{\hat p}{2p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13}.
\]
Combining the two sub-cases, we know that under $4\,\mathbb{E}[E_T^{\hat p}] \le (2\sigma^2T)^{\hat p}$,
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Big(\frac{2C_1}{C_3}\Big)^{\frac{\hat p}{1-2p}} + \Big(\frac{2C_2}{C_3}\Big)^{\frac{\hat p}{1-2p}}\Big(\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13} + (2C_3)^{\frac{\hat p}{2p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13}
\le \Big(\frac{2C_1}{C_3}\Big)^{\frac{\hat p}{1-2p}} + \Bigg(\Big(\frac{2C_2}{C_3}\Big)^{\frac{\hat p}{1-2p}} + (2C_3)^{\frac{\hat p}{2p}}\Bigg)\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13}.
\]
Now if $4\,\mathbb{E}[E_T^{\hat p}] \ge (2\sigma^2T)^{\hat p}$, then we have
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(C_1 + C_2\Big(\frac{8\mathbb{E}[E_T^{\hat p}]}{a_0^{2\hat p}}\Big)^{\frac{1-2p}{3\hat p}} + C_3\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big]\Bigg)^{\hat p}\Bigg(1+\frac{8\mathbb{E}[E_T^{\hat p}]}{a_0^{2\hat p}}\Bigg)^{\frac{2p}{3}}
\le \Bigg(C_1^{\hat p} + C_2^{\hat p}\Big(\frac{8\mathbb{E}[E_T^{\hat p}]}{a_0^{2\hat p}}\Big)^{\frac{1-2p}{3}} + C_3^{\hat p}\,\mathbb{E}^{1-2p}\big[E_T^{\hat p}\big]\Bigg)\Bigg(1+\frac{8\mathbb{E}[E_T^{\hat p}]}{a_0^{2\hat p}}\Bigg)^{\frac{2p}{3}}. \tag{37}
\]
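The existence of the constants $C_9$ and $C_{10}$ asserted below rests on a standard self-bounding argument; since the paper states their orders without proof, we sketch the elementary fact behind it (added for completeness, not part of the original text): if $x \ge 0$ satisfies $x \le A + Bx^r$ with $A, B \ge 0$ and $0 \le r < 1$, then
\[
x \le \max\big\{2A,\,(2B)^{\frac{1}{1-r}}\big\} \le 2A + (2B)^{\frac{1}{1-r}},
\]
since either $Bx^r \le A$, giving $x \le 2A$, or $Bx^r \ge A$, giving $x \le 2Bx^r$ and hence $x^{1-r} \le 2B$. In (37) the right-hand side is a polynomial in $\mathbb{E}[E_T^{\hat p}]$ of total degree $1-2p+\frac{2p}{3} = 1-\frac{4p}{3} < 1$, so this argument applies.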
We claim there is a constant $C_9$ such that $\mathbb{E}[E_T^{\hat p}] \le C_9$, because the highest order of $\mathbb{E}[E_T^{\hat p}]$ on the right-hand side of (37) is only $1-2p+\frac{2p}{3} = 1-\frac{4p}{3} < 1$. Here we give the order of $C_9$ directly without proof:
\[
C_9 = O\Bigg(a_0^{2\hat p} + \Big(\frac{C_1}{C_3}\Big)^{\frac{\hat p}{1-2p}} + \Big(C_2^{\frac{3\hat p}{2}} + C_3^{\frac{3\hat p}{4p}}\Big)\frac{1}{a_0^{\hat p}}\Bigg).
\]
Hence, when $q \neq \frac14$, we finally have
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Big(\frac{2C_1}{C_3}\Big)^{\frac{\hat p}{1-2p}} + \Bigg(\Big(\frac{2C_2}{C_3}\Big)^{\frac{\hat p}{1-2p}} + (2C_3)^{\frac{\hat p}{2p}}\Bigg)\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13} + C_9\,\mathbf{1}\big[(2\sigma^2T)^{\hat p} \le 4C_9\big].
\]
Following a similar approach, we can prove that for $q = \frac14$ there is
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(C_1 + \frac{C_2+C_3}{\hat p}\,\log\Bigg(1+\frac{(2\sigma^2T)^{\hat p}}{\min\{a_0^{2\hat p}/2,\,4b_0^{2\hat p}\}}\Bigg)\Bigg)^{\hat p}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{\frac13} + C_9,
\]
where
\[
C_9 = O\Bigg(C_1^{1/2} + \big(C_2^{1/2}+C_3^{1/2}\big)\log^{1/2}\frac{C_2+C_3}{a_0^{2\hat p}b_0^{\hat p}} + a_0^{2\hat p} + a_0^{3\hat p} + a_0^{\hat p}b_0^{2\hat p}\Bigg).
\]
Finally, we have
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \begin{cases} \big(\frac{2C_1}{C_3}\big)^{\frac{\hat p}{1-2p}} + \Big(\big(\frac{2C_2}{C_3}\big)^{\frac{\hat p}{1-2p}} + (2C_3)^{\frac{\hat p}{2p}}\Big)\big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)^{\frac13} + C_9\,\mathbf{1}\big[(2\sigma^2T)^{\hat p} \le 4C_9\big] & q \neq \frac14 \\[1mm] \Big(C_1 + \frac{C_2+C_3}{\hat p}\log\big(1+\frac{(2\sigma^2T)^{\hat p}}{\min\{a_0^{2\hat p}/2,\,4b_0^{2\hat p}\}}\big)\Big)^{\hat p}\big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)^{\frac13} + C_9\,\mathbf{1}\big[(2\sigma^2T)^{\hat p} \le 4C_9\big] & q = \frac14. \end{cases}
\]

Case 2: $\mathbb{E}[E_T^{\hat p}] \le \mathbb{E}[D_T^{\hat p}]$. In this case, we will finally prove
\[
\mathbb{E}\big[D_T^{\hat p}\big] \le \Bigg(C_4 + (3C_5+C_6)\log\frac{a_0^{2/3}+2(2\sigma^2T)^{1/3}}{a_0^{2/3}} + C_6\log\frac{2C_7+2C_8}{b_0}\Bigg)^{\frac{\hat p}{1-p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{1/3} + C_{10},
\]
where $C_{10}$ is a constant. Note that by Hölder's inequality,
\[
\mathbb{E}\big[D_T^{\hat p}\big] = \mathbb{E}\Big[a_{T+1}^{\frac{q\hat p}{1-p}}D_T^{\hat p}\cdot a_{T+1}^{-\frac{q\hat p}{1-p}}\Big]
\le \mathbb{E}^{\frac{\hat p}{1-p}}\big[a_{T+1}^q D_T^{1-p}\big]\,\mathbb{E}^{\frac{1-p-\hat p}{1-p}}\Big[a_{T+1}^{-\frac{q\hat p}{1-p-\hat p}}\Big]
= \mathbb{E}^{\frac{\hat p}{1-p}}\big[a_{T+1}^q D_T^{1-p}\big]\,\mathbb{E}^{\frac{1-p-\hat p}{1-p}}\Big[\big(1+\hat H_T/a_0^2\big)^{\frac{2q\hat p}{3(1-p-\hat p)}}\Big]
\]
\[
\overset{(e)}{\le} \mathbb{E}^{\frac{\hat p}{1-p}}\big[a_{T+1}^q D_T^{1-p}\big]\,\mathbb{E}^{\frac13}\Big[\big(1+\hat H_T/a_0^2\big)^{\hat p}\Big]
\le \mathbb{E}^{\frac{\hat p}{1-p}}\big[a_{T+1}^q D_T^{1-p}\big]\,\mathbb{E}^{\frac13}\Big[1+\big(\hat H_T/a_0^2\big)^{\hat p}\Big],
\]
where (e) is by $\frac{2q}{3(1-p-\hat p)} = \frac{1-p}{3(1-p-\hat p)} = 1$. We know
\[
\mathbb{E}\big[a_{T+1}^q D_T^{1-p}\big] \le C_4 + C_5\,\mathbb{E}\Big[\log\frac{a_0^2+\hat H_T}{a_0^2}\Big] + C_6\,\mathbb{E}\Bigg[\log\frac{C_7+C_8(1+\hat H_T/a_0^2)^{1/3}}{b_0}\Bigg]
= C_4 + \frac{C_5}{\hat p}\,\mathbb{E}\Bigg[\log\Big(\frac{a_0^2+\hat H_T}{a_0^2}\Big)^{\hat p}\Bigg] + \frac{C_6}{3\hat p}\,\mathbb{E}\Bigg[\log\Bigg(\frac{C_7+C_8(1+\hat H_T/a_0^2)^{1/3}}{b_0}\Bigg)^{3\hat p}\Bigg]
\]
\[
\overset{(f)}{\le} C_4 + \frac{C_5}{\hat p}\,\mathbb{E}\Bigg[\log\frac{a_0^{2\hat p}+\hat H_T^{\hat p}}{a_0^{2\hat p}}\Bigg] + \frac{C_6}{3\hat p}\,\mathbb{E}\Bigg[\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}\big(1+(\hat H_T/a_0^2)^{\hat p}\big)}{b_0^{3\hat p}}\Bigg]
\overset{(g)}{\le} C_4 + \frac{C_5}{\hat p}\,\log\frac{a_0^{2\hat p}+\mathbb{E}[\hat H_T^{\hat p}]}{a_0^{2\hat p}} + \frac{C_6}{3\hat p}\,\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}\big(1+\mathbb{E}[\hat H_T^{\hat p}]/a_0^{2\hat p}\big)}{b_0^{3\hat p}}
\]
\[
\overset{(h)}{\le} C_4 + \frac{C_5}{\hat p}\,\log\frac{a_0^{2\hat p}+4\mathbb{E}[D_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}} + \frac{C_6}{3\hat p}\,\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}\Big(1+\frac{4\mathbb{E}[D_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)}{b_0^{3\hat p}},
\]
where (f) is by $(x+y)^p \le x^p+y^p$ and $(x+y)^q \le (2x)^q+(2y)^q$ for $x,y \ge 0$, $0 \le p \le 1$, $q \ge 0$; (g) holds by the concavity of the $\log$ function; and (h) is due to (36) and $\mathbb{E}[E_T^{\hat p}] \le \mathbb{E}[D_T^{\hat p}]$.
Then we know
\[
\mathbb{E}\big[D_T^{\hat p}\big] \le \mathbb{E}^{\frac{\hat p}{1-p}}\big[a_{T+1}^q D_T^{1-p}\big]\,\mathbb{E}^{\frac13}\Big[1+(\hat H_T/a_0^2)^{\hat p}\Big]
\le \Bigg(C_4 + \frac{C_5}{\hat p}\log\frac{a_0^{2\hat p}+4\mathbb{E}[D_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}} + \frac{C_6}{3\hat p}\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}\big(1+\frac{4\mathbb{E}[D_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)}{b_0^{3\hat p}}\Bigg)^{\frac{\hat p}{1-p}}\Bigg(1+\frac{4\mathbb{E}[D_T^{\hat p}]+(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Bigg)^{1/3}.
\]
If $4\,\mathbb{E}[D_T^{\hat p}] \le (2\sigma^2T)^{\hat p}$, we will get
\[
\mathbb{E}\big[D_T^{\hat p}\big] \le \Bigg(C_4 + \frac{C_5}{\hat p}\log\frac{a_0^{2\hat p}+2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}} + \frac{C_6}{3\hat p}\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}\big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)}{b_0^{3\hat p}}\Bigg)^{\frac{\hat p}{1-p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{1/3}
\]
\[
\le \Bigg(C_4 + \Big(\frac{C_5}{\hat p}+\frac{C_6}{3\hat p}\Big)\log\frac{a_0^{2\hat p}+2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}} + \frac{C_6}{3\hat p}\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}}{b_0^{3\hat p}}\Bigg)^{\frac{\hat p}{1-p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{1/3}
\le \Bigg(C_4 + (3C_5+C_6)\log\frac{a_0^{2/3}+2(2\sigma^2T)^{1/3}}{a_0^{2/3}} + C_6\log\frac{2C_7+2C_8}{b_0}\Bigg)^{\frac{\hat p}{1-p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{1/3}.
\]
If $4\,\mathbb{E}[D_T^{\hat p}] \ge (2\sigma^2T)^{\hat p}$, we have
\[
\mathbb{E}\big[D_T^{\hat p}\big] \le \Bigg(C_4 + \frac{C_5}{\hat p}\log\frac{a_0^{2\hat p}+8\mathbb{E}[D_T^{\hat p}]}{a_0^{2\hat p}} + \frac{C_6}{3\hat p}\log\frac{(2C_7)^{3\hat p}+(2C_8)^{3\hat p}\big(1+\frac{8\mathbb{E}[D_T^{\hat p}]}{a_0^{2\hat p}}\big)}{b_0^{3\hat p}}\Bigg)^{\frac{\hat p}{1-p}}\Bigg(1+\frac{8\mathbb{E}[D_T^{\hat p}]}{a_0^{2\hat p}}\Bigg)^{1/3}, \tag{38}
\]
which implies there is a constant $C_{10}$ such that $\mathbb{E}[D_T^{\hat p}] \le C_{10}$. Here we give the order of $C_{10}$ directly without proof:
\[
C_{10} = O\Bigg(a_0^{2\hat p} + a_0^{3\hat p} + C_4 + C_6\log\frac{C_7+C_8}{b_0} + (C_5+C_6)\log\frac{C_5+C_6}{a_0^{3\hat p}}\Bigg).
\]
Combining these two results, we know
\[
\mathbb{E}\big[D_T^{\hat p}\big] \le \Bigg(C_4 + (3C_5+C_6)\log\frac{a_0^{2/3}+2(2\sigma^2T)^{1/3}}{a_0^{2/3}} + C_6\log\frac{2C_7+2C_8}{b_0}\Bigg)^{\frac{\hat p}{1-p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{1/3} + C_{10}\,\mathbf{1}\big[(2\sigma^2T)^{\hat p} \le 4C_{10}\big].
\]
Finally, combining Case 1 and Case 2 and using (35), we get the desired result and finish the proof:
\[
\mathbb{E}\big[H_T^{\hat p}\big] \le 4\max\big\{\mathbb{E}[E_T^{\hat p}],\,\mathbb{E}[D_T^{\hat p}]\big\}
\le 4C_9\,\mathbf{1}\big[(2\sigma^2T)^{\hat p} \le 4C_9\big] + 4C_{10}\,\mathbf{1}\big[(2\sigma^2T)^{\hat p} \le 4C_{10}\big]
\]
\[
+ 4\begin{cases} \big(\frac{2C_1}{C_3}\big)^{\frac{\hat p}{1-2p}} + \Big(\big(\frac{2C_2}{C_3}\big)^{\frac{\hat p}{1-2p}} + (2C_3)^{\frac{\hat p}{2p}}\Big)\big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)^{\frac13} & q \neq \frac14 \\[1mm] \Big(C_1 + \frac{C_2+C_3}{\hat p}\log\big(1+\frac{(2\sigma^2T)^{\hat p}}{\min\{a_0^{2\hat p}/2,\,4b_0^{2\hat p}\}}\big)\Big)^{\hat p}\big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\big)^{\frac13} & q = \frac14 \end{cases}
\]
\[
+ 4\Bigg(C_4 + (3C_5+C_6)\log\frac{a_0^{2/3}+2(2\sigma^2T)^{1/3}}{a_0^{2/3}} + C_6\log\frac{2C_7+2C_8}{b_0}\Bigg)^{\frac{\hat p}{1-p}}\Big(1+\frac{2(2\sigma^2T)^{\hat p}}{a_0^{2\hat p}}\Big)^{1/3}.
\]

H ALGORITHM META-STORM-NA AND ITS ANALYSIS FOR GENERAL p

Algorithm META-STORM-NA is shown in Algorithm 4. To highlight the differences with META-STORM-SG and META-STORM, we set $a_t$ based only on the time round $t$, without using the stochastic gradients. This is why the convergence of this algorithm does not depend on the bounded stochastic gradients or bounded stochastic gradient differences assumptions.
Moreover, the requirement $p \in (0, \frac12]$ is also more relaxed compared with our previous algorithms.

Algorithm 4 META-STORM-NA
Input: initial point $x_1 \in \mathbb{R}^d$
Parameters: $a_0 > \sqrt{2/3}$, $b_0$, $\eta$, $p \in (0, \frac12]$, $p + 2q = 1$
Sample $\xi_1 \sim \mathcal{D}$, $d_1 = \nabla f(x_1, \xi_1)$
for $t = 1, \cdots, T$ do:
    $a_{t+1} = \big(1 + t/a_0^2\big)^{-2/3}$
    $b_t = \big(b_0^{1/p} + \sum_{i=1}^t \|d_i\|^2\big)^p / a_{t+1}^q$
    $x_{t+1} = x_t - \frac{\eta}{b_t}\,d_t$
    Sample $\xi_{t+1} \sim \mathcal{D}$
    $d_{t+1} = \nabla f(x_{t+1}, \xi_{t+1}) + (1 - a_{t+1})\big(d_t - \nabla f(x_t, \xi_{t+1})\big)$
end for
Output $x_{\text{out}} = x_t$ where $t \sim \text{Uniform}([T])$.
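To make the update order concrete, here is a minimal Python sketch of Algorithm 4. It is an illustration under stated assumptions, not the authors' code: stoch_grad and sample_xi are caller-supplied stand-ins for the stochastic oracle $\nabla f(\cdot,\xi)$ and the sampler $\xi \sim \mathcal{D}$, and the default hyperparameters are arbitrary.

import numpy as np

def meta_storm_na(stoch_grad, sample_xi, x1, T, a0=1.0, b0=1.0, eta=0.01, p=0.5, seed=0):
    # stoch_grad(x, xi): stochastic gradient of f(., xi) at x (caller-supplied).
    # sample_xi(): draws a fresh sample xi ~ D (caller-supplied).
    assert a0 > (2.0 / 3.0) ** 0.5 and 0.0 < p <= 0.5
    q = (1.0 - p) / 2.0                      # enforce the relation p + 2q = 1
    rng = np.random.default_rng(seed)
    x = np.asarray(x1, dtype=float)
    d = stoch_grad(x, sample_xi())           # d_1
    sum_d_sq = 0.0                           # running sum of ||d_i||^2
    iterates = []
    for t in range(1, T + 1):
        a_next = (1.0 + t / a0 ** 2) ** (-2.0 / 3.0)   # a_{t+1}: depends on t only
        sum_d_sq += float(d @ d)
        b_t = (b0 ** (1.0 / p) + sum_d_sq) ** p / a_next ** q
        iterates.append(x)
        x = x - (eta / b_t) * d                         # x_{t+1}
        xi = sample_xi()
        # recursive momentum: the variance-reduced estimator d_{t+1}
        d = stoch_grad(x, xi) + (1.0 - a_next) * (d - stoch_grad(iterates[-1], xi))
    return iterates[rng.integers(T)]          # x_out drawn uniformly from x_1..x_T

The only difference from META-STORM and META-STORM-SG is visible on the a_next line: the momentum weight depends on the round counter alone, which is exactly why no bound on the stochastic gradients is ever needed.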
Now we give the main convergence result of META-STORM-NA, Theorem H.1. As discussed before, it achieves the rate $\tilde O(1/T^{1/3})$ under the weakest Assumptions 1-3, at the price of losing adaptivity to the variance parameter $\sigma$.

Theorem H.1. Under Assumptions 1-3, defining $\hat p = 1-p \in [\frac12, 1)$, we have (omitting the dependency on $\eta$, $a_0$ and $b_0$)
\[
\mathbb{E}\big[H_T^{\hat p}\big] = O\Big(\big(F(x_1)-F^* + \beta^{\hat p/p}\log(\beta T) + \sigma^2\log T + \sigma^{2\hat p}\big)\,T^{\hat p/3}\Big).
\]
Combining the above theorem with the concavity of $x^{\hat p}$, we obtain the following convergence guarantee, whose proof we omit:

Theorem H.2. There is
\[
\mathbb{E}\big[\|\nabla F(x_{\text{out}})\|^{2\hat p}\big] = O\Bigg(\frac{F(x_1)-F^* + \beta^{\hat p/p}\log(\beta T) + \sigma^2\log T + \sigma^{2\hat p}}{T^{2\hat p/3}}\Bigg).
\]
Note that $2\hat p \ge 1$; hence the criterion $\mathbb{E}[\|\nabla F(x_{\text{out}})\|^{2\hat p}]$ used in Theorem H.2 is strictly stronger than $\mathbb{E}[\|\nabla F(x_{\text{out}})\|]$. In the following sections, we give a proof of Theorem H.1.

H.1 BOUND ON $\mathbb{E}[E_{T,1/2}]$

Lemma H.3. Given $p+2q=1$, $p\in(0,\frac12]$, we have
\[
\mathbb{E}\big[E_{T,1/2}\big] \le \frac{\sigma^2\big(1+2a_0^2\log(1+T/a_0^2)\big) + 2\eta^2\beta^2\,\mathbb{E}\Big[\sum_{t=1}^T \frac{\|d_t\|^2}{a_{t+1}^{1/2}b_t^2}\Big]}{1-2/(3a_0^2)}.
\]
Proof. We start from Lemma E.2,
\[
a_{t+1}\|\epsilon_t\|^2 \le \|\epsilon_t\|^2 - \|\epsilon_{t+1}\|^2 + 2\|Z_{t+1}\|^2 + 2a_{t+1}^2\|\nabla f(x_{t+1},\xi_{t+1})-\nabla F(x_{t+1})\|^2 + M_{t+1}.
\]
Dividing both sides by $a_{t+1}^{1/2}$, summing from $1$ to $T$ and taking expectations on both sides, we obtain
\[
\mathbb{E}\big[E_{T,1/2}\big] \le \mathbb{E}\Bigg[\sum_{t=1}^T \frac{\|\epsilon_t\|^2 - \|\epsilon_{t+1}\|^2 + 2\|Z_{t+1}\|^2 + 2a_{t+1}^2\|\nabla f(x_{t+1},\xi_{t+1})-\nabla F(x_{t+1})\|^2 + M_{t+1}}{a_{t+1}^{1/2}}\Bigg]
\]
\[
\le \sigma^2 + \mathbb{E}\Bigg[\sum_{t=1}^T \big(a_{t+1}^{-1}-a_t^{-1}\big)a_{t+1}^{1/2}\|\epsilon_t\|^2 + \frac{2}{a_{t+1}^{1/2}}\|Z_{t+1}\|^2 + 2a_{t+1}^{3/2}\|\nabla f(x_{t+1},\xi_{t+1})-\nabla F(x_{t+1})\|^2 + \frac{M_{t+1}}{a_{t+1}^{1/2}}\Bigg].
\]
Because $a_{t+1}$ is not random, we know
\[
\mathbb{E}\Big[\frac{2}{a_{t+1}^{1/2}}\|Z_{t+1}\|^2\Big] \le \mathbb{E}\Big[\frac{2\eta^2\beta^2\|d_t\|^2}{a_{t+1}^{1/2}b_t^2}\Big], \qquad \mathbb{E}\big[2a_{t+1}^{3/2}\|\nabla f(x_{t+1},\xi_{t+1})-\nabla F(x_{t+1})\|^2\big] \le 2a_{t+1}^{3/2}\sigma^2, \qquad \mathbb{E}\Big[\frac{M_{t+1}}{a_{t+1}^{1/2}}\Big] = 0,
\]
where the first inequality is by Lemma E.3. Besides, by the concavity of $x^{2/3}$ and $a_0 > \sqrt{2/3}$, we know
\[
a_{t+1}^{-1}-a_t^{-1} = \big(1+t/a_0^2\big)^{2/3} - \big(1+(t-1)/a_0^2\big)^{2/3} \le \frac{2}{3a_0^2\big(1+(t-1)/a_0^2\big)^{1/3}} \le \frac{2}{3a_0^2} < 1.
\]
Then we have
\[
\mathbb{E}\big[E_{T,1/2}\big] \le \sigma^2 + \mathbb{E}\Bigg[\frac{2}{3a_0^2}E_{T,1/2} + \sum_{t=1}^T \frac{2\eta^2\beta^2\|d_t\|^2}{a_{t+1}^{1/2}b_t^2} + 2a_{t+1}^{3/2}\sigma^2\Bigg]
\;\Rightarrow\; \mathbb{E}\big[E_{T,1/2}\big] \le \frac{\sigma^2\big(1+2\sum_{t=1}^T a_{t+1}^{3/2}\big) + 2\eta^2\beta^2\,\mathbb{E}\Big[\sum_{t=1}^T \frac{\|d_t\|^2}{a_{t+1}^{1/2}b_t^2}\Big]}{1-2/(3a_0^2)}.
\]
Note that
\[
\sum_{t=1}^T a_{t+1}^{3/2} = \sum_{t=1}^T \frac{1}{1+t/a_0^2} \le a_0^2\log\big(1+T/a_0^2\big),
\]
which gives the claimed bound.

H.2 BOUND ON $\mathbb{E}[E_T]$

Lemma H.4. Given $p+2q=1$, $p\in(0,\frac12]$, we have
\[
\mathbb{E}\big[E_T\big] \le \frac{6a_0^2\sigma^2\big(1+T/a_0^2\big)^{1/3}}{1-2/(3a_0^2)} + \frac{2\eta^2\beta^2\big(1+T/a_0^2\big)^{\frac{2p}{3}}}{1-2/(3a_0^2)}\cdot\begin{cases} \frac{\mathbb{E}[D_T^{1-2p}]}{1-2p} & p \neq \frac12 \\[1mm] \mathbb{E}\big[\log\big(1+\frac{D_T}{b_0^2}\big)\big] & p = \frac12. \end{cases}
\]
Proof. We start from Lemma E.2 again. Dividing both sides by $a_{t+1}$, summing from $1$ to $T$ and taking expectations on both sides, we obtain
\[
\mathbb{E}\big[E_T\big] \le \sigma^2 + \mathbb{E}\Bigg[\sum_{t=1}^T \underbrace{\big(a_{t+1}^{-1}-a_t^{-1}\big)}_{\le 2/(3a_0^2)}\|\epsilon_t\|^2 + \frac{2}{a_{t+1}}\|Z_{t+1}\|^2 + 2a_{t+1}\|\nabla f(x_{t+1},\xi_{t+1})-\nabla F(x_{t+1})\|^2 + \frac{M_{t+1}}{a_{t+1}}\Bigg].
\]
Because $a_{t+1}$ is not random, we know, again with the first inequality by Lemma E.3,
\[
\mathbb{E}\Big[\frac{2}{a_{t+1}}\|Z_{t+1}\|^2\Big] \le \mathbb{E}\Big[\frac{2\eta^2\beta^2\|d_t\|^2}{a_{t+1}b_t^2}\Big], \qquad \mathbb{E}\big[2a_{t+1}\|\nabla f(x_{t+1},\xi_{t+1})-\nabla F(x_{t+1})\|^2\big] \le 2a_{t+1}\sigma^2, \qquad \mathbb{E}\Big[\frac{M_{t+1}}{a_{t+1}}\Big] = 0.
\]
Then we know
\[
\mathbb{E}\big[E_T\big] \le \sigma^2 + \mathbb{E}\Bigg[\frac{2}{3a_0^2}E_T + \sum_{t=1}^T \frac{2\eta^2\beta^2\|d_t\|^2}{a_{t+1}b_t^2} + 2a_{t+1}\sigma^2\Bigg]
\;\Rightarrow\; \mathbb{E}\big[E_T\big] \le \frac{\sigma^2\big(1+2\sum_{t=1}^T a_{t+1}\big) + 2\eta^2\beta^2\,\mathbb{E}\Big[\sum_{t=1}^T \frac{\|d_t\|^2}{a_{t+1}b_t^2}\Big]}{1-2/(3a_0^2)}.
\]
Note that there is
\[
\sum_{t=1}^T \frac{\|d_t\|^2}{a_{t+1}b_t^2} = \sum_{t=1}^T \frac{\|d_t\|^2}{a_{t+1}^{1-2q}\big(b_0^{1/p}+D_t\big)^{2p}} \overset{(a)}{=} \sum_{t=1}^T \frac{\|d_t\|^2}{a_{t+1}^{p}\big(b_0^{1/p}+D_t\big)^{2p}} \le \big(1+T/a_0^2\big)^{\frac{2p}{3}}\sum_{t=1}^T \frac{\|d_t\|^2}{\big(b_0^{1/p}+D_t\big)^{2p}} \overset{(b)}{\le} \big(1+T/a_0^2\big)^{\frac{2p}{3}}\begin{cases} \frac{D_T^{1-2p}}{1-2p} & p\neq\frac12 \\[1mm] \log\big(1+\frac{D_T}{b_0^2}\big) & p=\frac12, \end{cases}
\]
where (a) is by $1-2q=p$ and (b) is by Lemma I.1. Besides,
\[
\sum_{t=1}^T a_{t+1} = \sum_{t=1}^T \frac{1}{\big(1+t/a_0^2\big)^{2/3}} \le 3a_0^2\big(1+T/a_0^2\big)^{1/3} - 3a_0^2 < 3a_0^2\big(1+T/a_0^2\big)^{1/3} - 2,
\]
which gives the claimed bound.
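Both series estimates invoked in the last two proofs follow from the same integral comparison; for completeness (an elementary check, added for readability): since the summands are decreasing in $t$,
\[
\sum_{t=1}^T \frac{1}{1+t/a_0^2} \le \int_0^T \frac{ds}{1+s/a_0^2} = a_0^2\log\big(1+T/a_0^2\big), \qquad
\sum_{t=1}^T \frac{1}{\big(1+t/a_0^2\big)^{2/3}} \le \int_0^T \frac{ds}{\big(1+s/a_0^2\big)^{2/3}} = 3a_0^2\Big(\big(1+T/a_0^2\big)^{1/3}-1\Big).
\]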
H.3 BOUND ON $\mathbb{E}[D_T^{1-p}]$

Lemma H.5. Given $p+2q=1$, $p\in(0,\frac12]$, we have
\[
\mathbb{E}\big[D_T^{1-p}\big] \le \big(1+T/a_0^2\big)^{\frac{1-p}{3}}\Bigg(b_0^{\frac1p-1} + \frac2\eta\big(F(x_1)-F^*\big) + \frac{\sigma^2\big(1+2a_0^2\log(1+T/a_0^2)\big)}{\eta\beta_{\max}\big(1-2/(3a_0^2)\big)}\Bigg)
+ \frac{\big(1+T/a_0^2\big)^{\frac{1-p}{3}}}{1-p}\Big(\frac{3a_0^2-1}{3a_0^2-2}\,4\eta\beta_{\max}\Big)^{\frac1p-1}\log\frac{\Big(1+\frac{9a_0^2-2}{3a_0^2-2}\big(1+T/a_0^2\big)^{1/3}\Big)\eta\beta_{\max}}{b_0}.
\]
Proof. As before, we start from Lemma E.4:
\[
\mathbb{E}\big[a_{T+1}^q D_T^{1-p}\big] \le b_0^{\frac1p-1} + \frac2\eta\big(F(x_1)-F^*\big) + \mathbb{E}\Bigg[\sum_{t=1}^T \Big(\eta\beta_{\max} + \frac{\eta\beta_{\max}}{a_{t+1}^{1/2}\lambda} - b_t\Big)\frac{\|d_t\|^2}{b_t^2}\Bigg] + \frac{\lambda\,\mathbb{E}\big[E_{T,1/2}\big]}{\eta\beta_{\max}}.
\]
Now we simply take $\lambda = 1$ and use Lemma H.3 to get
\[
\mathbb{E}\big[a_{T+1}^q D_T^{1-p}\big] \le b_0^{\frac1p-1} + \frac2\eta\big(F(x_1)-F^*\big) + \frac{\sigma^2\big(1+2a_0^2\log(1+T/a_0^2)\big)}{\eta\beta_{\max}\big(1-2/(3a_0^2)\big)} + \underbrace{\mathbb{E}\Bigg[\sum_{t=1}^T \Bigg(\Big(1+\frac{9a_0^2-2}{(3a_0^2-2)\,a_{t+1}^{1/2}}\Big)\eta\beta_{\max} - b_t\Bigg)\frac{\|d_t\|^2}{b_t^2}\Bigg]}_{(i)}. \tag{39}
\]
Applying Lemma E.5 to (i), we get
\[
(i) \le \frac{\Big(\frac{3a_0^2-1}{3a_0^2-2}\,4\eta\beta_{\max}\Big)^{\frac1p-1}}{1-p}\,\log\frac{\Big(1+\frac{9a_0^2-2}{3a_0^2-2}\big(1+T/a_0^2\big)^{1/3}\Big)\eta\beta_{\max}}{b_0}.
\]
Note that $a_{T+1}^q = a_{T+1}^{\frac{1-p}{2}} = \big(1+T/a_0^2\big)^{-\frac{1-p}{3}}$ is deterministic; multiplying both sides of (39) by $\big(1+T/a_0^2\big)^{\frac{1-p}{3}}$ gives the desired result.

H.4 COMBINE THE BOUNDS AND THE FINAL PROOF

We will combine the bounds of Lemma H.4 and Lemma H.5 displayed above. Now let
\[
\hat p = 1-p \in \Big[\frac12, 1\Big).
\]
Applying Lemma E.1, we know
\[
\mathbb{E}\big[H_T^{\hat p}\big] \le 4\max\big\{\mathbb{E}[E_T^{\hat p}],\,\mathbb{E}[D_T^{\hat p}]\big\}. \tag{40}
\]
Now we can give the final proof of Theorem H.1.
Proof. We consider the following two cases.

Case 1: $p \neq \frac12$. Note that by Jensen's inequality (the maps $x \mapsto x^{\hat p}$ and $x \mapsto x^{\frac{1-2p}{\hat p}}$ are concave, since $\hat p \le 1$ and $1-2p \le \hat p$),
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \mathbb{E}^{\hat p}\big[E_T\big], \qquad \mathbb{E}\big[D_T^{1-2p}\big] \le \mathbb{E}^{\frac{1-2p}{\hat p}}\big[D_T^{\hat p}\big].
\]
So we know
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(\frac{6a_0^2\sigma^2(1+T/a_0^2)^{1/3}}{1-2/(3a_0^2)} + \frac{2\eta^2\beta^2(1+T/a_0^2)^{\frac{2p}{3}}}{(1-2/(3a_0^2))(1-2p)}\,\mathbb{E}\big[D_T^{1-2p}\big]\Bigg)^{\hat p}
\le \Bigg(\frac{6a_0^2\sigma^2(1+T/a_0^2)^{1/3}}{1-2/(3a_0^2)} + \frac{2\eta^2\beta^2(1+T/a_0^2)^{\frac{2p}{3}}}{(1-2/(3a_0^2))(1-2p)}\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[D_T^{\hat p}\big]\Bigg)^{\hat p}.
\]
Now if $\mathbb{E}[E_T^{\hat p}] \ge \mathbb{E}[D_T^{\hat p}]$, we know
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(\frac{6a_0^2\sigma^2(1+T/a_0^2)^{1/3}}{1-2/(3a_0^2)} + \frac{2\eta^2\beta^2(1+T/a_0^2)^{\frac{2p}{3}}}{(1-2/(3a_0^2))(1-2p)}\,\mathbb{E}^{\frac{1-2p}{\hat p}}\big[E_T^{\hat p}\big]\Bigg)^{\hat p}
\le \Bigg(\frac{6a_0^2\sigma^2(1+T/a_0^2)^{1/3}}{1-2/(3a_0^2)}\Bigg)^{\hat p} + \Bigg(\frac{2\eta^2\beta^2(1+T/a_0^2)^{\frac{2p}{3}}}{(1-2/(3a_0^2))(1-2p)}\Bigg)^{\hat p}\mathbb{E}^{1-2p}\big[E_T^{\hat p}\big].
\]
Then if $\Big(\frac{2\eta^2\beta^2(1+T/a_0^2)^{2p/3}}{(1-2/(3a_0^2))(1-2p)}\Big)^{\hat p}\mathbb{E}^{1-2p}[E_T^{\hat p}] \le \Big(\frac{6a_0^2\sigma^2(1+T/a_0^2)^{1/3}}{1-2/(3a_0^2)}\Big)^{\hat p}$, we know
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le 2\Bigg(\frac{6a_0^2\sigma^2(1+T/a_0^2)^{1/3}}{1-2/(3a_0^2)}\Bigg)^{\hat p} = \Bigg(\frac{2^{1/\hat p}\,6a_0^2\sigma^2}{1-2/(3a_0^2)}\Bigg)^{\hat p}\big(1+T/a_0^2\big)^{\frac{\hat p}{3}}.
\]
In the opposite case, we know
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le 2\Bigg(\frac{2\eta^2\beta^2(1+T/a_0^2)^{\frac{2p}{3}}}{(1-2/(3a_0^2))(1-2p)}\Bigg)^{\hat p}\mathbb{E}^{1-2p}\big[E_T^{\hat p}\big]
\;\Rightarrow\; \mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(\frac{2^{1/\hat p}\,2\eta^2\beta^2}{(1-2/(3a_0^2))(1-2p)}\Bigg)^{\frac{\hat p}{2p}}\big(1+T/a_0^2\big)^{\frac{\hat p}{3}}.
\]
Hence, under $\mathbb{E}[E_T^{\hat p}] \ge \mathbb{E}[D_T^{\hat p}]$, we get
\[
\mathbb{E}\big[E_T^{\hat p}\big] \le \Bigg(\Bigg(\frac{2^{1/\hat p}\,2\eta^2\beta^2}{(1-2/(3a_0^2))(1-2p)}\Bigg)^{\frac{\hat p}{2p}} + \Bigg(\frac{2^{1/\hat p}\,6a_0^2\sigma^2}{1-2/(3a_0^2)}\Bigg)^{\hat p}\Bigg)\big(1+T/a_0^2\big)^{\frac{\hat p}{3}}.
\]
Then by using (40), we know
\[
\mathbb{E}\big[H_T^{\hat p}\big] \le 4\max\big\{\mathbb{E}[E_T^{\hat p}],\,\mathbb{E}[D_T^{\hat p}]\big\}
\le 4\big(1+T/a_0^2\big)^{\frac{\hat p}{3}}\Bigg[\Bigg(\frac{2^{1/\hat p}2\eta^2\beta^2}{(1-2/(3a_0^2))(1-2p)}\Bigg)^{\frac{\hat p}{2p}} + \Bigg(\frac{2^{1/\hat p}6a_0^2\sigma^2}{1-2/(3a_0^2)}\Bigg)^{\hat p} + b_0^{\frac1p-1} + \frac2\eta\big(F(x_1)-F^*\big)
\]
\[
+ \frac{\sigma^2\big(1+2a_0^2\log(1+T/a_0^2)\big)}{\eta\beta_{\max}\big(1-2/(3a_0^2)\big)} + \frac{\big(\frac{3a_0^2-1}{3a_0^2-2}4\eta\beta_{\max}\big)^{\frac1p-1}}{1-p}\,\log\frac{\big(1+\frac{9a_0^2-2}{3a_0^2-2}(1+T/a_0^2)^{1/3}\big)\eta\beta_{\max}}{b_0}\Bigg]
= O\Big(\big(F(x_1)-F^* + \beta^{\hat p/p}\log(\beta T) + \sigma^2\log T + \sigma^{2\hat p}\big)T^{\hat p/3}\Big).
\]
Case 2: $p = \frac12$. By a similar proof, we still have
\[
\mathbb{E}\big[H_T^{\hat p}\big] \le O\Big(\big(F(x_1)-F^* + \beta^{\hat p/p}\log(\beta T) + \sigma^2\log T + \sigma^{2\hat p}\big)T^{\hat p/3}\Big).
\]

I BASIC INEQUALITIES

In this section, we prove some technical lemmas used in our proofs.

Lemma I.1. For $c_0 > 0$, $c_{i\ge1} \ge 0$, $p \in (0,1]$, we have
\[
\sum_{t=1}^T \frac{c_t}{\big(c_0+\sum_{i=1}^t c_i\big)^p} \le \begin{cases} \frac{1}{1-p}\big(\sum_{i=1}^T c_i\big)^{1-p} & p \neq 1 \\[1mm] \log\big(1+\frac{\sum_{i=1}^T c_i}{c_0}\big) & p = 1. \end{cases}
\]
Proof. We first prove the case $p \neq 1$. From Lemma 3 in Levy et al. (2021), for $b_1 > 0$, $b_{i\ge2} \ge 0$, $p \in (0,1)$, we have
\[
\sum_{t=1}^T \frac{b_t}{\big(\sum_{i=1}^t b_i\big)^p} \le \frac{1}{1-p}\Big(\sum_{i=1}^T b_i\Big)^{1-p}.
\]
Now we define $T_0 = \min\{t \in [T] : c_t > 0\}$. By the definition of $T_0$, we know $c_t = 0$ for any $1 \le t \le T_0-1$. Then we have
\[
\sum_{t=1}^T \frac{c_t}{\big(c_0+\sum_{i=1}^t c_i\big)^p} = \sum_{t=1}^{T_0-1} \frac{c_t}{\big(c_0+\sum_{i=1}^t c_i\big)^p} + \sum_{t=T_0}^T \frac{c_t}{\big(c_0+\sum_{i=1}^{T_0-1}c_i+\sum_{i=T_0}^t c_i\big)^p}
= \sum_{t=T_0}^T \frac{c_t}{\big(c_0+\sum_{i=T_0}^t c_i\big)^p} \le \sum_{t=T_0}^T \frac{c_t}{\big(\sum_{i=T_0}^t c_i\big)^p} \le \frac{1}{1-p}\Big(\sum_{i=T_0}^T c_i\Big)^{1-p} = \frac{1}{1-p}\Big(\sum_{i=1}^T c_i\Big)^{1-p}.
\]
For $p = 1$, we know
\[
\sum_{t=1}^T \frac{c_t}{c_0+\sum_{i=1}^t c_i} = \sum_{t=1}^T \Bigg(1 - \frac{c_0+\sum_{i=1}^{t-1}c_i}{c_0+\sum_{i=1}^t c_i}\Bigg) \le \sum_{t=1}^T \log\frac{c_0+\sum_{i=1}^t c_i}{c_0+\sum_{i=1}^{t-1}c_i} = \log\Big(1+\frac{\sum_{i=1}^T c_i}{c_0}\Big),
\]
where the inequality holds by $1-\frac1x \le \log x$.
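Lemma I.1 is the workhorse behind every $D_T^{1-2p}$ and $\log(1+D_T/b_0^2)$ term above, so a quick numerical spot-check is cheap insurance. The following standalone Python snippet (ours, for illustration only; the constants and sequence are arbitrary) verifies both branches on random nonnegative inputs:

import numpy as np

rng = np.random.default_rng(0)
c0 = 0.5
c = rng.uniform(0.0, 2.0, size=1000)       # arbitrary c_i >= 0
csum = np.cumsum(c)                        # csum[t-1] = sum of c_1..c_t
for p in (0.25, 0.5, 0.9, 1.0):
    lhs = np.sum(c / (c0 + csum) ** p)     # left-hand side of Lemma I.1
    if p != 1.0:
        rhs = csum[-1] ** (1.0 - p) / (1.0 - p)
    else:
        rhs = np.log(1.0 + csum[-1] / c0)
    assert lhs <= rhs + 1e-9, (p, lhs, rhs)
    print(f'p={p}: {lhs:.4f} <= {rhs:.4f}')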
Lemma I.2. For $c_0 > 0$, $c_{i\ge1} \in (0,c]$, $p \in (0,1]$, we have
\[
\sum_{t=1}^T \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^t c_i\big)^p} \le \frac{3c}{c_0^p} + \begin{cases} \frac{1}{1-p}\big(\sum_{i=1}^T c_i\big)^{1-p} & p\neq1 \\[1mm] \log\big(1+\frac{\sum_{i=1}^T c_i}{c_0}\big) & p=1. \end{cases}
\]
Proof. Define $T_0 = \min\big\{t \in [T] : \sum_{i=1}^t c_i \ge c\big\}$. Then we know
\[
\sum_{t=1}^T \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^t c_i\big)^p} \le \sum_{t=1}^{T-1} \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^t c_i\big)^p} + \frac{c}{c_0^p}
= \frac{c}{c_0^p} + \sum_{t=1}^{T_0-1} \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^t c_i\big)^p} + \sum_{t=T_0}^{T-1} \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^{T_0}c_i+\sum_{i=T_0+1}^t c_i\big)^p}
\]
\[
\le \frac{c}{c_0^p} + \sum_{t=1}^{T_0-1} \frac{c_{t+1}}{c_0^p} + \sum_{t=T_0}^{T-1} \frac{c_{t+1}}{\big(c_0+c+\sum_{i=T_0+1}^t c_i\big)^p}
\le \frac{3c}{c_0^p} + \sum_{t=T_0}^{T-1} \frac{c_{t+1}}{\big(c_0+\sum_{i=T_0+1}^{t+1}c_i\big)^p}
\overset{(a)}{\le} \frac{3c}{c_0^p} + \begin{cases} \frac{1}{1-p}\big(\sum_{i=T_0+1}^T c_i\big)^{1-p} & p\neq1 \\[1mm] \log\big(1+\frac{\sum_{i=T_0+1}^T c_i}{c_0}\big) & p=1 \end{cases}
\le \frac{3c}{c_0^p} + \begin{cases} \frac{1}{1-p}\big(\sum_{i=1}^T c_i\big)^{1-p} & p\neq1 \\[1mm] \log\big(1+\frac{\sum_{i=1}^T c_i}{c_0}\big) & p=1, \end{cases}
\]
where (a) is by Lemma I.1.

Lemma I.3. For $c_0 > 0$, $c_{i\ge1} \in (0,c]$, $p \in (0,1]$, we have
\[
\sum_{t=1}^T \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^{t-1}c_i\big)^p} \le \frac{6c}{c_0^p} + \begin{cases} \frac{1}{1-p}\big(\sum_{i=1}^T c_i\big)^{1-p} & p\neq1 \\[1mm] \log\big(1+\frac{\sum_{i=1}^T c_i}{c_0}\big) & p=1. \end{cases}
\]
Proof. Define $T_0 = \min\big\{t \in [T] : \sum_{i=1}^{t-1}c_i \ge c\big\}$. Then we know
\[
\sum_{t=1}^T \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^{t-1}c_i\big)^p} = \sum_{t=1}^{T_0-1} \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^{t-1}c_i\big)^p} + \sum_{t=T_0}^T \frac{c_{t+1}}{\big(c_0+\sum_{i=1}^{T_0-1}c_i+\sum_{i=T_0}^{t-1}c_i\big)^p}
\le \sum_{t=1}^{T_0-1} \frac{c_{t+1}}{c_0^p} + \sum_{t=T_0}^T \frac{c_{t+1}}{\big(c_0+c+\sum_{i=T_0}^{t-1}c_i\big)^p}
\]
\[
\le \frac{3c}{c_0^p} + \sum_{t=T_0}^T \frac{c_{t+1}}{\big(c_0+\sum_{i=T_0}^t c_i\big)^p}
\overset{(a)}{\le} \frac{6c}{c_0^p} + \begin{cases} \frac{1}{1-p}\big(\sum_{i=T_0}^T c_i\big)^{1-p} & p\neq1 \\[1mm] \log\big(1+\frac{\sum_{i=T_0}^T c_i}{c_0}\big) & p=1 \end{cases}
\le \frac{6c}{c_0^p} + \begin{cases} \frac{1}{1-p}\big(\sum_{i=1}^T c_i\big)^{1-p} & p\neq1 \\[1mm] \log\big(1+\frac{\sum_{i=1}^T c_i}{c_0}\big) & p=1, \end{cases}
\]
where (a) is by Lemma I.2.

Lemma I.4 (Lemma 6 in Levy et al. (2021)). For $c_{i\ge1} \in (0,c]$, we have
\[
\sum_{t=1}^T \frac{c_t}{\big(1+\sum_{i=1}^{t-1}c_i\big)^{4/3}} \le 12 + 2c.
\]
Lemma I.5. For $c_{i\ge1} \in (0,c]$, we have
\[
\sum_{t=1}^T \frac{c_{t+1}}{\big(1+\sum_{i=1}^{t-1}c_i\big)^{4/3}} \le 12 + 5c.
\]
Proof. Define $T_0 = \min\big\{t \in [T] : \sum_{i=1}^{t-1}c_i \ge c\big\}$. Then we know
\[
\sum_{t=1}^T \frac{c_{t+1}}{\big(1+\sum_{i=1}^{t-1}c_i\big)^{4/3}} = \sum_{t=1}^{T_0-1} \frac{c_{t+1}}{\big(1+\sum_{i=1}^{t-1}c_i\big)^{4/3}} + \sum_{t=T_0}^T \frac{c_{t+1}}{\big(1+\sum_{i=1}^{T_0-1}c_i+\sum_{i=T_0}^{t-1}c_i\big)^{4/3}}
\le 3c + \sum_{t=T_0}^T \frac{c_{t+1}}{\big(1+c+\sum_{i=T_0}^{t-1}c_i\big)^{4/3}}
\le 3c + \sum_{t=T_0}^T \frac{c_{t+1}}{\big(1+\sum_{i=T_0}^t c_i\big)^{4/3}}
\le 12 + 5c,
\]
where the last inequality is by Lemma I.4.
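As with Lemma I.1, the shifted-index bound of Lemma I.5 is easy to sanity-check numerically. A small Python sketch (ours, for illustration; the sequence length and cap are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
c_cap = 3.0
c = rng.uniform(1e-9, c_cap, size=2001)        # c_1, ..., c_{T+1} in (0, c_cap]
T = len(c) - 1
S = np.concatenate(([0.0], np.cumsum(c)))      # S[k] = sum of c_1..c_k
# Lemma I.5: sum over t of c_{t+1} / (1 + sum_{i<t} c_i)^{4/3} <= 12 + 5c
lhs = sum(c[t] / (1.0 + S[t - 1]) ** (4.0 / 3.0) for t in range(1, T + 1))
assert lhs <= 12.0 + 5.0 * c_cap
print(f'{lhs:.3f} <= {12.0 + 5.0 * c_cap}')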
Lemma I.6. Given $0 \le x \le y \le 1$ and $0 < \ell \le 1$, we have
\[
\Bigg(\frac{(1-x^{1/\ell})^2}{x^2} - \frac{(1-y^{1/\ell})^2}{y^2}\Bigg)^2 \le \frac{y^2-x^2}{\ell^2x^4y^2}.
\]
Proof. Note that
\[
\Bigg(\frac{(1-x^{1/\ell})^2}{x^2} - \frac{(1-y^{1/\ell})^2}{y^2}\Bigg)^2 = \Bigg(\frac{1-x^{1/\ell}}{x} + \frac{1-y^{1/\ell}}{y}\Bigg)^2\Bigg(\frac{1-x^{1/\ell}}{x} - \frac{1-y^{1/\ell}}{y}\Bigg)^2 \le \Big(\frac1x+\frac1y\Big)^2\Bigg(\frac{1-x^{1/\ell}}{x} - \frac{1-y^{1/\ell}}{y}\Bigg)^2.
\]
Now let $h(x) = \frac{1-x^{1/\ell}}{x}$; we can find $h'(x) = -\frac{(1-\ell)x^{1/\ell}+\ell}{\ell x^2} \le 0$. Hence
\[
\frac{1-x^{1/\ell}}{x} - \frac{1-y^{1/\ell}}{y} = h(x)-h(y) \ge 0.
\]
Besides, let $g(x) = h(x) - \frac{1}{\ell x}$; we can find that
\[
g'(x) = \frac{(1-\ell)\big(1-x^{1/\ell}\big)}{\ell x^2} \ge 0.
\]
This means
\[
h(x) - \frac{1}{\ell x} - h(y) + \frac{1}{\ell y} = g(x)-g(y) \le 0,
\]
which implies $0 \le h(x)-h(y) \le \frac{1}{\ell x} - \frac{1}{\ell y}$. Thus we finally have
\[
\Bigg(\frac{(1-x^{1/\ell})^2}{x^2} - \frac{(1-y^{1/\ell})^2}{y^2}\Bigg)^2 \le \Big(\frac1x+\frac1y\Big)^2\big(h(x)-h(y)\big)^2 \le \Big(\frac1x+\frac1y\Big)^2\Big(\frac{1}{\ell x}-\frac{1}{\ell y}\Big)^2 = \frac{\big(y^2-x^2\big)^2}{\ell^2x^4y^4} \le \frac{y^2-x^2}{\ell^2x^4y^2}.
\]

Lemma I.7. Given $0 \le x \le y \le 1$ and $0 < \ell \le \frac12$, we have
\[
\Big((1-x^{1/\ell})x^{1/\ell-2} - (1-y^{1/\ell})y^{1/\ell-2}\Big)^2 \le \frac{y^2-x^2}{\ell^2x^2}\,y^{2/\ell-4}.
\]
Proof. If $\ell = \frac12$, then we know
\[
\Big((1-x^{1/\ell})x^{1/\ell-2} - (1-y^{1/\ell})y^{1/\ell-2}\Big)^2 = \big(y^2-x^2\big)^2 \le \big(y^2-x^2\big)y^2 \le \frac{4\big(y^2-x^2\big)}{x^2} = \frac{y^2-x^2}{\ell^2x^2}\,y^{2/\ell-4}.
\]
If $\ell \neq \frac12$, let $h(x)$ denote $(1-x^{1/\ell})x^{1/\ell-2}$; then we know $h'(x) = x^{1/\ell-3}\,\frac{2(\ell-1)x^{1/\ell}-2\ell+1}{\ell}$. By the mean value theorem, there exists $x \le z \le y$ such that
\[
h(x)-h(y) = h'(z)(x-y) = z^{1/\ell-3}\,\frac{2(\ell-1)z^{1/\ell}-2\ell+1}{\ell}\,(x-y).
\]
This gives us
\[
\Big((1-x^{1/\ell})x^{1/\ell-2} - (1-y^{1/\ell})y^{1/\ell-2}\Big)^2 = \big(h(x)-h(y)\big)^2 = z^{2/\ell-6}\cdot\frac{\big(2(\ell-1)z^{1/\ell}-2\ell+1\big)^2}{\ell^2}\cdot(y-x)^2 \le \frac{y^{2/\ell-4}}{x^2}\cdot\frac{1}{\ell^2}\cdot\big(y^2-x^2\big) = \frac{y^2-x^2}{\ell^2x^2}\,y^{2/\ell-4}.
\]

Lemma I.8. Given $m, n \ge 0$ and $0 \le x \le m$, we have
\[
(m-x)x^n \le \Big(\frac{m}{n+1}\Big)^{n+1}n^n.
\]
Proof. Note that
\[
\log\big((m-x)x^n\big) = \log(m-x) + n\log x = \log(m-x) + n\log\frac{x}{n} + n\log n \overset{(a)}{\le} (n+1)\log\Big(\frac{m-x}{n+1} + \frac{n}{n+1}\cdot\frac{x}{n}\Big) + n\log n = (n+1)\log\frac{m}{n+1} + n\log n = \log\Bigg(\Big(\frac{m}{n+1}\Big)^{n+1}n^n\Bigg),
\]
where (a) is by the concavity of the $\log$ function. Then we know $(m-x)x^n \le \big(\frac{m}{n+1}\big)^{n+1}n^n$.

Lemma I.9. Given $X, A, B \ge 0$, $C > 0$, $D \ge 0$ and $0 \le u \le 1$, if we have
\[
X \le \Big(A + B\log\Big(1+\frac{X}{C}\Big)\Big)^u D,
\]
then there is
\[
X \le \Bigg(2A + 2B\log\frac{4uBD}{C} + \Big(\frac{C}{D}\Big)^{1/u}\Bigg)^u D.
\]
Especially, when $D \ge 1$, we know $X \le \big(2A + 2B\log\frac{4uBD}{C} + C^{1/u}\big)^u D$.
Proof. Let $Y = (X/D)^{1/u}$. Then we know
\[
Y \le A + B\log\Big(1+\frac{DY^u}{C}\Big) = A + uB\log\Big(1+\frac{DY^u}{C}\Big)^{1/u} \overset{(a)}{\le} A + uB\log\Bigg(2^{1/u} + \Big(\frac{2D}{C}\Big)^{1/u}Y\Bigg) = A + B\log 2 + uB\log\Bigg(1+\Big(\frac{D}{C}\Big)^{1/u}Y\Bigg)
\]
\[
= A + B\log 2 + uB\log\frac{1+\big(\frac{D}{C}\big)^{1/u}Y}{2uB\big(\frac{D}{C}\big)^{1/u}} + uB\log\Bigg(2uB\Big(\frac{D}{C}\Big)^{1/u}\Bigg) \overset{(b)}{\le} A + B\log 2 + \frac{(C/D)^{1/u}}{2} + \frac{Y}{2} + uB\log(2uB) + B\log\frac{D}{C} \le \frac{Y}{2} + A + B\log\frac{4uBD}{C} + \frac{(C/D)^{1/u}}{2},
\]
where (a) is by $(x+y)^p \le (2x)^p + (2y)^p$ for $x, y \ge 0$, $p \ge 1$, and (b) is by $\log x \le x-1 \le x$. Then we know
\[
Y \le 2A + 2B\log\frac{4uBD}{C} + \Big(\frac{C}{D}\Big)^{1/u} \;\Rightarrow\; X \le \Bigg(2A + 2B\log\frac{4uBD}{C} + \Big(\frac{C}{D}\Big)^{1/u}\Bigg)^u D."
}
],
"year": 2022,
"abstractText": "We study the application of variance reduction (VR) techniques to general nonconvex stochastic optimization problems. In this setting, the recent work STORM (Cutkosky & Orabona, 2019) overcomes the drawback of having to compute gradients of \u201cmega-batches\u201d that earlier VR methods rely on. There, STORM utilizes recursive momentum to achieve the VR effect and is then later made fully adaptive in STORM+ (Levy et al., 2021), where full-adaptivity removes the requirement for obtaining certain problem-specific parameters such as the smoothness of the objective and bounds on the variance and norm of the stochastic gradients in order to set the step size. However, STORM+ crucially relies on the assumption that the function values are bounded, excluding a large class of useful functions. In this work, we propose META-STORM, a generalized framework of STORM+ that removes this bounded function values assumption while still attaining the optimal convergence rate for non-convex optimization. META-STORM not only maintains full-adaptivity, removing the need to obtain problem specific parameters, but also improves the convergence rate\u2019s dependency on the problem parameters. Furthermore, META-STORM can utilize a large range of parameter settings that subsumes previous methods allowing for more flexibility in a wider range of settings. Finally, we demonstrate the effectiveness of META-STORM through experiments across common deep learning tasks. Our algorithm improves upon the previous work STORM+ and is competitive with widely used algorithms after the addition of per-coordinate update and exponential moving average heuristics.",
"creator": "LaTeX with hyperref"
},
"output": [
[
"1. The choice of parameter p seems to have a large effect on the algorithm\u2019s convergence."
],
[
"1. The proposed META-STORM introduces an additional learning rate that needs to be tuned. (Note STORM and STORM+ do not require this additional hyperparameter.) Thus, META-STORM is not a fully adaptive algorithm. The authors need to explain this.",
"2. How will relaxing the assumption of bounded function value STORM+ affect the optimization process in practice? Can the authors provide some examples with unbound function values that STORM+ would fail? Or this assumption is only required for the analysis of STORM+? The authors may need to provide more explanations here."
],
[
"1. **\"The numerical experiments are relatively weak compared to its theoretical part.\"**",
"2. **\"(W1) Numerical experiments on Cifar10 is not optimal, for example I assume that there is no learning rate decay by examining the loss curve of SGD. And the final test accuracy seems not matching the best that one can achieve (93 to 94%).\"**",
"3. **\"(W2) The paper is somehow contradicted: it motivates fully adaptively to parameters when choosing learning rates, yet has to tune through grid search for best numerical performance in numerical tests.\"**",
"4. **\"(W3) It is not clear what are the specific heuristics used for in META-STORM (H). Moreover, can the authors explain the reasons for coordinate-wise learning rate in META-STORM(H)? It looks to me that the b_t update is a normalized adaptive learning rate, which is a simplified version of coordinate-wise learning rate. It is hence less intuitive for me to make adaptivity adaptive.\"**",
"5. **\"(W4) How does an epoch counts in the experiments? Since the proposed algorithms compute two gradients while SGD and Adam only needs one.\"**"
],
[
"1. \"I think the technical section 3.1 and 3.2 could be written slightly more clearly to highlight which parts of the analysis are inherited from prior work, and which are new contributions. The current way written without reading the previous papers it is not immediate to me which part helps remove the additional / stronger assumptions in prior work. Also is there any difference in the adaptivity design part compared with prior work?\"",
"2. \"The final complexity result stated in theorems doesn't agree on units (i.e. see definition of kappa, Q_1 in the paper etc.).\"",
"3. \"When scaling the function by an additional factor L, the convergence result doesn't seem scale-invariant, which is a bit counter-intuitive to me. Would there be a proper analysis that could make this scale-invariant?\"",
"4. \"In the experiments, the figure could be improved a little bit to make each line of comparison look clearer (i.e. the color of STORM+ and another META-STORM-SG(H) looks quite confusing).\"",
"5. \"Some parts of the comparison doesn't feel very fair since you are comparing META-STORM with heuristic to STORM+ without. It may be beneficial to use same heuristic on both methods and then compare.\""
]
],
"review_num": 4,
"item_num": [
1,
2,
5,
5
]
}