diff --git "a/39FST4oBgHgl3EQfZTjC/content/tmp_files/2301.13791v1.pdf.txt" "b/39FST4oBgHgl3EQfZTjC/content/tmp_files/2301.13791v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/39FST4oBgHgl3EQfZTjC/content/tmp_files/2301.13791v1.pdf.txt" @@ -0,0 +1,8492 @@ +Improved Algorithms for Multi-period Multi-class Packing Problems +with Bandit Feedback +Wonyoung Kim 1 Garud Iyengar 1 Assaf Zeevi 1 +Abstract +We consider the linear contextual multi-class +multi-period packing problem (LMMP) where +the goal is to pack items such that the total vector +of consumption is below a given budget vector +and the total value is as large as possible. We +consider the setting where the reward and the con- +sumption vector associated with each action is +a class-dependent linear function of the context, +and the decision-maker receives bandit feedback. +LMMP includes linear contextual bandits with +knapsacks and online revenue management as spe- +cial cases. We establish a new more efficient esti- +mator which guarantees a faster convergence rate, +and consequently, a lower regret in such problems. +We propose a bandit policy that is a closed-form +function of said estimated parameters. When the +contexts are non-degenerate, the regret of the pro- +posed policy is sublinear in the context dimen- +sion, the number of classes, and the time hori- +zon T when the budget grows at least as +√ +T. We +also resolve an open problem posed in Agrawal & +Devanur (2016), and extend the result to a multi- +class setting. Our numerical experiments clearly +demonstrate that the performance of our policy is +superior to other benchmarks in the literature. +1. Introduction +In the multi-period packing problem (MPP) the decision- +maker “packs” the arrivals so that the total consumption +across a set a resources is below a given budget vector +and the reward is maximized. A variant of the packing +problem, where items consume multiple resources and the +decisions must be made sequentially with bandit feedback +for a fixed time horizon, is known as bandits with knapsacks +(Agrawal & Devanur, 2014a; Badanidiyuru et al., 2018; +Immorlica et al., 2019). MPPs also arise in online revenue +1Columbia University, New York, NY, USA. Correspondence +to: <>. +Preliminary work. Under review by the International Conference +on Machine Learning (ICML). Do not distribute. +management (Besbes & Zeevi, 2012; Ferreira et al., 2018). +MPPs in the literature assume that all arrivals belong to a +single class. However, in several application domains (e.g., +operations, healthcare, and e-commerce), the arrivals are +heterogeneous, and personalizing decisions to each distinct +population or class is of paramount importance. In this +paper we consider a class of linear multi-class multi-period +packing problems (LMMP). At each round, there is a single +arrival that belongs to one of J classes, and the decision- +maker observes the d-dimensional context and the cost for +K different available actions. The outcome of selecting an +action is a random sample of the reward and a consumption +vector for m resources with an expected value that is a class- +dependent linear function of the d-dimensional contexts. +The goal of the problem is to minimize the cumulative regret +over a time horizon T while ensuring that the total resource +consumed is at most B. +The LMMP problem is a generalization of several prob- +lems including linear contextual bandits with knapsacks +(LinCBwK) introduced by Agrawal & Devanur (2016). 
+They proposed an online mirror descent-based algorithm +that achieves ˜O(OPT/B · d +√ +T) regret when the budget +B for each of the m resources is Ω( +√ +dT 3/4), where OPT +is the reward obtained by the oracle policy. Although the +regret bound is meaningful for B ≥ Ω(d +√ +T), establishing +the regret bound for smaller budget values was left as an +open problem. Chu et al. (2011) established a regret bound +sublinear in d for the linear contextual bandit setting, which +is a special case of LinCBwK with no budget constraints. +Thus, the following question remained open: “Is there an +algorithm for LinCBwK that achieves sublinear dependence +on d with budget B = Ω( +√ +T)?” +We propose a novel algorithm and an improved estimation +strategy that settles this open problem and generalizes the +result to the more general class of LMMP. The proposed +algorithm achieves ˜O(OPT/B +√ +JdT) regret with budget +B = Ω( +√ +JdT) under non-degenerate contexts. While re- +gret of the existing algorithms grows linearly in the number +of classes J, our estimator is able to pool data from differ- +ent classes and avoids linear dependence on J. To reiterate, +the improved regret bound results from the novel estimator +which yields faster convergence rates. +arXiv:2301.13791v1 [stat.ML] 31 Jan 2023 + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Our main contributions are summarized as follows: +• We propose a new problem class – linear multi-class +multi-period packing problems (LMMP). This problem +generalizes a variety of problems including LinCBwK +and online revenue management problems to the multi- +class setting. +• We propose a novel estimator that uses contexts for +all actions (including the contexts in skipped rounds) +and yields O( +� +Jd/n) convergence rate for J classes, +context dimension d, and n admitted arrivals (Theorem +4.2). +• We propose a novel AMF (Allocate to the Maximum +First) algorithm which achieves ˜O(OPT/B +√ +JdT) +regret with budget B = Ω( +√ +JdT) where OPT is +the reward obtained by oracle policy (Theorem 5.1). +For the single class setting with J = 1, we improve +the existing bound by +√ +d and show that the bound +is valid when B = Ω( +√ +dT), and thus resolving an +open problem posed in Agrawal & Devanur (2016) +regarding LinCBwK. +• We evaluate our proposed algorithm on a suite of syn- +thetic experiments and demonstrate its superior perfor- +mance. +All proofs omitted from the front matter can be found in the +Appendix. +2. Related Works +There are two streams of work that are relevant for +LMMP. In online revenue management literature, Gallego & +Van Ryzin (1994) introduced the dynamic pricing problem +where the demand is a known function of price (action). Bes- +bes & Zeevi (2009) and Besbes & Zeevi (2012) extended the +problem under unknown demands with multiple resource +constraints. Ferreira et al. (2018) proposed a Thompson +sampling-based algorithm and extended it to contextual ban- +dits with knapsacks. When the expected demand is a linear +function of the price vector, the dynamic pricing problem +is a special case of linear contextual bandits with knap- +sack (LinCBwK) proposed by Agrawal & Devanur (2016). +The LinCBwk is a common generalization of bandits with +knapsacks (Badanidiyuru et al., 2018; Immorlica et al., 2019; +Li et al., 2021) and online stochastic packing problems (Feld- +man et al., 2010; Agrawal & Devanur, 2014b; Devanur et al., +2011). 
Recently, Sankararaman & Slivkins (2021) proved a +logarithmic regret bound for LinCBwK when there exists a +problem-dependent gap between the reward of the optimal +action and the other actions. Instead of the gap assump- +tion, we require non-degeneracy of the stochastic contexts +(see Assumption 3 for a precise definition) to obtain a re- +gret bound sublinear in d and extends to the case when the +contexts are generated from J different class. +Amani et al. (2019) proposed a variant of LinCBwK where +the selected action must satisfy a single constraint with high +probability in all rounds, i.e., LinCBwK with anytime con- +straints. Moradipari et al. (2021) and Pacchiano et al. (2021) +proposed a Thompson sampling-based algorithm and an +upper confidence bound-based algorithm, respectively, for +LinCBwK with a single anytime constraint. Liu et al. (2021) +highlighted the difference between global and anytime con- +straints, and proposed an pessimistic-optimistic algorithm +for the anytime constraints. We focus on the global con- +straints; however, we note that the extension to the anytime +constraints is straightforward with minor modifications. +2.1. Notation +Let R+ denote the set of positive real numbers. For two +real numbers a, b ∈ R, we write a ∧ b := min{a, b} and +a ∨ b := max{a, b}. For a natural number N, let [N] = +{1, . . . , N}. +3. Linear Multi-period Packing Problem +Let [J] denote the set of classes with arrival probabilities +p = {pj}j∈[J], where pmin := minj∈[J] pj > 0. In each +round t ∈ [T], the covariates {x(j) +k,t ∈ [0, 1]d : k ∈ [K]} +and costs {c(j) +k,t ∈ [0, 1] : k ∈ [K]} are drawn from a class- +specific distribution Fj. We assume that the class arrival +probabilities p are known to the decision-maker; however, +the distributions {Fj}j∈[J] are not known. +At time t ∈ [T], the decision-maker observes an arrival of +the form (jt, {x(jt) +k,t , c(jt) +k,t : k ∈ [K]}), where jt ∈ [J] is +the arrived class. Upon observing the arrival, the decision- +maker can either take one of K different actions or skip the +arrival. When the arrival is skipped, the decision-maker does +not obtain any rewards or consume any resources. When +the decision-maker chooses an action at ∈ [K], the reward +and consumption of the resource are given by +E +� +r(jt) +at,t +��� Ht +� += +� +θ(jt) +⋆ +�⊤ +x(jt) +at,t ∈ [−1, 1], +E +� +b(jt) +at,t +��� Ht +� += +� +W (jt) +⋆ +�⊤ +x(jt) +at,t ∈ [0, 1]m, +for some unknown class-specific parameters θ(j) +⋆ +∈ [0, 1]d +and W (j) +⋆ +∈ [0, 1]d×m. The sigma algebra Ht is generated +by the class-specific variables {js, x(js) +k,s , c(js) +k,s , : s ∈ [t], k ∈ +[K]}, actions {as : s ∈ At}, consumption vectors {b(js) +as,s : +s ∈ At−1} and rewards {r(js) +as,t : s ∈ At−1}, where At +is the rounds admitted by the decision-maker until round +t. The process terminates at the horizon T or runs out of + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +budget B ∈ Rm ++ for some resources r ∈ [m]. The problem +reduces to LinCBwK when the number of class is J = 1 +and the costs are c(j) +k,t = 0 +Let ρ = B/T ∈ Rm ++ denote per-period budget for m re- +sources. Without loss of generality, one can assume that +ρ(r) = ρ for all r ∈ [m], By rescaling W (j) +⋆ . We assume +that ρ is known to the decision-maker and B is possibly un- +known at first, but known at the end of the round. This case +happens when the total budget B is difficult to count in the +early rounds. 
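To fix ideas, the following is a minimal simulation sketch of the interaction protocol described above. The uniform context and cost distributions, the noise scale, and the 1/d rescaling of the parameters (used only to keep the expected rewards and consumptions inside the stated ranges) are illustrative assumptions rather than part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not taken from the paper's experiments).
J, K, d, m, T = 3, 5, 4, 2, 1000
p = np.full(J, 1.0 / J)                    # known class-arrival probabilities
B = np.full(m, np.sqrt(J * d * T))         # total budget of order sqrt(JdT)
rho = B / T                                # per-period budget rho = B / T

# Unknown class-specific parameters, rescaled by d so that expected rewards and
# consumptions stay in [-1, 1] and [0, 1]^m respectively.
theta_star = rng.uniform(0, 1, size=(J, d)) / d
W_star = rng.uniform(0, 1, size=(J, d, m)) / d

def arrival():
    """One arrival: its class j, and contexts and costs for all K actions."""
    j = rng.choice(J, p=p)
    x = rng.uniform(0, 1, size=(K, d))     # contexts x_{k,t}^{(j)} in [0,1]^d
    c = rng.uniform(0, 1, size=K)          # costs c_{k,t}^{(j)} in [0,1]
    return j, x, c

def feedback(j, x, a):
    """Bandit feedback for the chosen action a: noisy reward and consumption vector."""
    r = x[a] @ theta_star[j] + 0.05 * rng.standard_normal()
    b = x[a] @ W_star[j] + 0.05 * rng.standard_normal(m)
    return r, b
```

A policy interacts with `arrival` and `feedback` once per round, may skip an arrival, and stops at the horizon T or as soon as the cumulative consumption exceeds B in some resource.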
When ρ is not available, the decision-maker +requires B and T to compute ρ. However, this assumption +is more practical than in Agrawal & Devanur (2016) where +B and OPT must be known to the decision-maker. When +OPT is unknown, they estimate OPT with +√ +T number +of rounds, which requires the knowledge of T and budget +B = Ω( +√ +dT +3 +4 ). Instead of estimating OPT, we use ρ to +avoid the required budget B = Ω( +√ +dT +3 +4 ). +We benchmark the performance of the decision-maker’s pol- +icy relative to that of an oracle who knows the distributions +{Fj : j ∈ [J]} and the parameters {θ(j) +⋆ , W (j) +⋆ +: j ∈ [J]}, +but does not know the arrivals {(jt, x(j) +k,t, c(j) +k,t) : t ∈ [T]} +a-priori. In this case, the optimal static policy for the oracle +{π⋆(j) +k +: j ∈ [J], k ∈ [K]} is the solution to the following +optimization problem: +max +π(j) +k +J +� +j=1 +K +� +k=1 +pjπ(j) +k E(xk,ck)∼Fj +�� +θ(j) +⋆ +�⊤ +xk − ck +� +s.t. +j +� +j=1 +K +� +k=1 +pjπ(j) +k Exk∼Fj +�� +W (j) +⋆ +�⊤ +xk +� +≤ ρ, +K +� +k=1 +π(j) +k +≤ 1, ∀j ∈ [J], +π(j) +k +≥ 0, ∀j ∈ [J], ∀k ∈ [K], +(1) +Let π⋆ denote the optimal oracle policy. Then the expected +reward obtained by the oracle is +OPT := +T +J +� +j=1 +K +� +k=1 +pjπ⋆(j) +k +E(xk,ck)∼Fj +�� +θ(j) +⋆ +�⊤ +xk − ck +� +. +Let π := {π(j) +k,t : j ∈ [J], k ∈ [K], t ∈ [T]} denote the +adapted (randomized) control policy of the decision-maker, +i.e. she chooses action k ∈ [K] when the arrival at time +t ∈ [T] belongs to class j ∈ [J]. Note that �K +k=1 π(j) +k,t ≤ 1 +in order to allow the decision-maker to skip an arrival and +save the inventory for later use. Our goal is to compute a +policy that minimizes the cumulative regret Rπ +T defined as +Rπ +T := OPT − E +� T +� +t=1 +Rπ +t +� +, +where Rπ +t := �K +k=1 π(jt) +k,t E +�� +θ(jt) +⋆ +�⊤ +x(jt) +k,t − c(jt) +k,t +� +is the +expected reward obtained by policy π at time t. +For the LMMP problem, we assume the following regularity +conditions on the stochastic processes. +Assumption 1. (Sub-Gaussian and bounded errors) For +each t ∈ [T], the error of the reward ηk,t = r(jt) +k,t − +� +θ(jt) +⋆ +�⊤ +x(jt) +k,t is conditionally zero-mean σr-sub-Gaussian +for a fixed constant σr ≥ 0, i.e. E [exp (vηk,t)| Ht] ≤ +exp +� +v2σ2 +r +2 +� +for all v ∈ R. For the consumption vectors, +E +� +v⊤{b(jt) +k,t − (W (jt) +⋆ +)⊤x(jt) +k,t } +��� Ht +� +≤ exp( ∥v∥2 +2σ2 +b +2 +) for +all v ∈ Rm. +Assumption 2. (Independently distributed contexts and +costs) The set of contexts {x(j) +k,t : k ∈ [K]} and {c(j) +k,t : +k ∈ [K]} are generated independently over t ∈ [T]. The +contexts and cost in the same round and class can be corre- +lated with each other. +Assumption 3. (Positive definiteness of average covari- +ances) For each t ∈ [T] and j ∈ [J], there exists α > 0, +such that +λmin +� +E +� +1 +K +K +� +k=1 +x(j) +k,t +� +x(j) +k,t +�⊤ +�� +≥ α. +Assumptions 1 and 2 are standard in stochastic contex- +tual bandits with knapsacks literature (Agrawal & Devanur, +2016; Sankararaman & Slivkins, 2021; Sivakumar et al., +2022). In the multi-class case, Assumption 2 implies that all +the contexts are drawn independently over time steps, but +their distribution may vary depending on the class. Assump- +tion 3 implies that the density of the covariate distribution is +non-degenerate. Recent contextual bandit literature (without +constraints) exploits Assumption 3 to improve the depen- +dency of d on the regret bound (Bastani & Bayati, 2020; +Kim et al., 2021; Bastani et al., 2021; Oh et al., 2021). 
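Assumption 3 is straightforward to check numerically for a candidate context distribution. The Monte Carlo sketch below does so for a hypothetical Uniform[0,1]^d distribution; the distribution, sample size, and problem sizes are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d, n_samples = 5, 4, 20000

# Hypothetical context distribution for one class: i.i.d. Uniform[0,1]^d per action.
Sigma = np.zeros((d, d))
for _ in range(n_samples):
    X = rng.uniform(0, 1, size=(K, d))     # one draw of {x_k : k in [K]}
    Sigma += (X.T @ X) / K                 # (1/K) * sum_k x_k x_k^T
Sigma /= n_samples

alpha_hat = np.linalg.eigvalsh(Sigma).min()
print(f"estimated minimum eigenvalue in Assumption 3: {alpha_hat:.3f}")
```

For this particular distribution the averaged covariance is (1/12)I_d + (1/4)1_d 1_d^T, whose smallest eigenvalue is 1/12, so the printed estimate concentrates near 0.083 and Assumption 3 holds.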
The +contexts with independent Gaussian perturbation used in +Kannan et al. (2018); Sivakumar et al. (2020; 2022) satisfy +the Assumption 3. +4. Proposed Method +In this section, we present our proposed estimator for the +parameters {θ(j) +⋆ , W (j) +⋆ +: j ∈ [J]} and the proposed bandit +policy. +4.1. Proposed Estimator +In sequential decision-making problems with contexts, the +decision-maker observes the contexts for all actions, but +the reward for only selected actions, i.e. the rewards for +unselected actions remain missing. Kim & Paik (2019); + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Dimakopoulou et al. (2019); Kim et al. (2021) use doubly +robust (DR) method to handle the missing rewards for the +linear contextual bandits. However, extensions to LinCBwK +or LMMP problem have not explored yet. +We adapt the DR method to the LMMP problem. For each +n ∈ N, let τ(n) be the round when the n-th admission +happens (recall the bandit policy allows for skipping some +arrivals). Clearly, n ≤ τ(n) < τ(n + 1) holds. Let +Θ⋆ := +� +� +� +� +θ(1) +⋆ +... +θ(J) +⋆ +� +� +� +� , W⋆ := +� +� +� +� +W (1) +⋆ +... +W (J) +⋆ +� +� +� +� , ˜Xk,n := +� +� +� +� +� +� +0d +... +x +(jτ(n)) +k,τ(n) +0d +� +� +� +� +� +� +denote the stacked parameter vectors, and zero padded con- +texts where x +(jτ(n)) +k,τ(n) is located after the j − 1 of 0d vectors. +Then the score for the ridge estimator for Θ⋆ at round τ(n) +is: +n +� +ν=1 +� +r +(jτ(ν)) +aτ(ν),τ(ν) − Θ⊤ ˜Xaτ(ν),ν +� +˜Xaτ(ν),ν += +n +� +ν=1 +K +� +k=1 +I +� +aτ(ν) = k +� � +r +(jτ(ν)) +k,τ(ν) − Θ⊤ ˜Xk,ν +� +˜Xk,ν, +where Θ ∈ RJ·d. Dividing the score by the probability +π +(jτ(ν)) +k,τ(ν) gives the inverse probability weighted (IPW) score, +n +� +ν=1 +K +� +k=1 +I +� +aτ(ν) = k +� +π +(jτ(ν)) +k,τ(ν) +� +r +(jτ(ν)) +k,τ(ν) − Θ⊤ ˜Xk,ν +� +˜Xk,ν. +To obtain the DR score, Bang & Robins (2005); Kim et al. +(2021) proposed to subtract the nuisance tangent space gen- +erated by an imputed estimator ˇΘ: +n +� +ν=1 +K +� +k=1 +I +� +aτ(ν) = k +� +π +(jτ(ν)) +k,τ(ν) +� +˜X⊤ +k,ν ˇΘ − ˜X⊤ +k,νΘ +� +˜Xk,ν, +from the IPW score. Then the following DR score +n +� +ν=1 +K +� +k=1 +� +r +DR(jτ(ν)) +k,τ(ν) +− ˜X⊤ +k,νΘ +� +˜Xk,ν, +(2) +is obtained where +rDR( ˇΘ) +k,ν +:=I +� +aτ(ν)=k +� +π +(jτ(ν)) +k,τ(ν) +r +(jτ(ν)) +k,τ(ν) + +� +� +�1−I +� +aτ(ν)=k +� +π +(jτ(ν)) +k,τ(ν) +� +� +� +˜X⊤ +k,ν ˇΘ. +(3) +The score (2) has a similar form with the score equation +for the ridge estimator. The difference with the ridge es- +timator is that it uses contexts for all actions k ∈ [K] +with the pseudo-reward rDR( ˇΘ) +k,ν +which is unbiased, i.e., +E[rDR( ˇΘ) +k,ν +] = E[r +(jτ(ν)) +k,τ(ν) ], for any given ˇΘ ∈ RJ·d. Adding +the ℓ2 regularization norm and solving (2) leads to the DR +estimator: +� n +� +ν=1 +K +� +k=1 +˜Xk,ν ˜X⊤ +k,ν+IJ·d +�−1� n +� +ν=1 +K +� +k=1 +˜Xk,νrDR( ˇΘ) +k,τ(ν) +� +. +The main advantage of the DR estimator is that it uses +contexts from all K actions. However, in our policy, some +π +(jτ(ν)) +k,τ(ν) can be zero, and therefore, the pseudo-reward (3) is +not defined. To handle this problem, we propose to introduce +a random variable. 
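Before describing that random variable, the sketch below illustrates the standard DR construction (2)-(3) for a single class (J = 1), under the extra assumption that every action probability is strictly positive; the logged data, noise level, and the imputation estimate are placeholders. The resampling device introduced next removes exactly this positivity requirement.

```python
import numpy as np

rng = np.random.default_rng(2)
K, d, n = 5, 4, 200

# Placeholder logged data for one class: contexts of all K actions per round,
# strictly positive action probabilities, the chosen action, and its reward.
X = rng.uniform(0, 1, size=(n, K, d))                 # x_{k,nu} for every action
probs = np.full((n, K), 1.0 / K)                      # pi_{k,nu} > 0 (assumption)
actions = np.array([rng.choice(K, p=probs[i]) for i in range(n)])
theta_true = rng.uniform(0, 1, size=d) / d
rewards = (np.einsum('nd,d->n', X[np.arange(n), actions], theta_true)
           + 0.05 * rng.standard_normal(n))

theta_check = np.zeros(d)                             # imputation estimator (placeholder)

# DR pseudo-rewards (3): inverse-probability-weighted observed reward for the
# chosen action, imputed prediction x^T theta_check for the other actions.
chosen = np.zeros((n, K), dtype=bool)
chosen[np.arange(n), actions] = True
imputed = np.einsum('nkd,d->nk', X, theta_check)
r_dr = imputed + chosen * (rewards[:, None] - imputed) / probs

# Ridge-type DR estimate: the Gram matrix uses the contexts of *all* K actions,
# not only the selected ones.
V = np.eye(d) + np.einsum('nkd,nke->de', X, X)
theta_dr = np.linalg.solve(V, np.einsum('nkd,nk->d', X, r_dr))
```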
After taking an action at round τ(ν) +and observing the selected action aτ(ν), the decision-maker +samples hν from the distribution: +φk,ν:=P +� +hν = k| Hτ(n) +� += +� +� +� +1− +16(K−1) log( dJ +δ ) +λmin(Fν) +k=aτ(ν) +16 log( dJ +δ ) +λmin(Fν) +k̸=aτ(ν) +(4) +where Fν := �ν,K +i,k=1 ˜Xk,i ˜X⊤ +k,i+16d(K −1) log +� dJ +δ +� +IJ·d +is the Gram matrix of contexts from ν admitted rounds +and δ ∈ (0, 1) is the confidence level. We would like to +emphasize that hν is sampled after observing the actions +aτ(ν) and does not affect the policies until round τ(ν). +Sampling the random variables hν after choosing actions +is motivated by bootstrap methods (Efron & Tibshirani, +1994) and resampling methods (Good, 2006). To obtain the +unbiased pseudo-rewards similar to (3), we resample the +action with another non-zero probabilities. The probabil- +ities {φk,ν : k ∈ [K]} is designed to control the level of +exploration and exploitation for future rounds based on the +ratio of confidence level to the number of admitted rounds. +When the minimum eigenvalue of Fν is small compared +to log(1/δ), the distribution of hν is less concentrated on +aτ(ν) and tends to explore other actions. As ν increases, the +probabilities {φk,ν : k ∈ [K]} concentrates on aτ(ν), and +the decision-maker tends to exploit. +Since we obtain non-zero probabilities {φk,ν : k ∈ [K], ν ∈ +[n]}, we define novel unbiased pseudo-rewards: +˜rk,ν :=I (hν =k) +φk,ν +r +(jτ(ν)) +k,τ(ν) + +� +1− I (hν =k) +φk,ν +� +˜X⊤ +k,νˇΘn, (5) +where the imputation estimator ˇΘn is an IPW estimator with + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +new probabilities: +ˇΘt :=A−1 +n +� � +ν∈Ψn +K +� +k=1 +I (hν =k) +φk,ν +˜Xk,νr +(jτ(ν)) +k,τ(ν) ++ +� +ν /∈Ψn +˜Xaτ(ν),νr +(jτ(ν)) +aτ(ν),τ(ν) +� +, +An := +� +ν∈Ψn +K +� +k=1 +I (hν =k) +φk,ν +˜Xk,ν ˜X⊤ +k,ν ++ +� +ν /∈Ψn +˜Xaτ(ν),ν ˜X⊤ +aτ(ν),ν + IJ·d, +Ψn := +� +ν ∈ [n] : hν = aτ(ν) +� +. +The set Ψn is introduced because we cannot observe +I(hν=k) +φk,ν +r +(jτ(ν)) +k,τ(ν) in case of hν ̸= aτ(ν). In other words, we +use the pseudo-rewards in (5) only at the rounds that satisfy +hν = aτ(ν). Then our estimator with n admitted samples is +defined as +�Θn :=V −1 +n +� +� +� +� +ν∈Ψn +K +� +k=1 +˜Xk,ν˜rk,ν + +� +ν /∈Ψn +˜Xaτ(ν),νr +(jτ(ν)) +aτ(ν),τ(ν) +� +� +� +Vn := +� +ν∈Ψn +K +� +k=1 +˜Xk,ν ˜X⊤ +k,ν + +� +ν /∈Ψn +˜Xaτ(ν),ν ˜X⊤ +aτ(ν),ν +IJ·d. +(6) +Analogous to the construction of (6), we can also define the +estimator for the resource consumption parameters {W (j) +⋆ +: +j ∈ [J]}, +� +Wn :=V −1 +n +� � +ν∈Ψn +K +� +k=1 +˜Xk,ν ˜b⊤ +k,ν+ +� +ν /∈Ψn +˜Xaτ(ν),νb +(jτ(ν))⊤ +aτ(ν)τ(ν) +� +, +(7) +where the pseudo-consumption vectors and the imputation +estimator are +˜bk,ν := I (hν =k) +φk,ν +b +(jτ(ν)) +aτ(ν),τ(ν)+ +� +1− I (hν =k) +φk,ν +� +ˇ +W⊤ +n ˜Xk,ν, +ˇ +Wn := A−1 +n +� � +ν∈Ψn +K +� +k=1 +I (hν =k) +φk,ν +˜Xk,ν +� +b +(jτ(ν)) +k,ν +�⊤ ++ +� +ν /∈Ψn +˜Xaτ(ν),ν +� +b +(jτ(ν)) +aτ(ν),τ(ν) +�⊤ +� +. +The two estimators use the novel Gram matrix Vn defined +in (6) consist of contexts from all K actions. Now, we +present estimation error bounds normalized by the novel +Gram matrix Vn. +Theorem 4.1. (Self-normalized bound for the estimator) +Suppose Assumptions 1 and 2 hold. For each t ∈ [T], +denote nt the number of admitted arrivals until round t and +Ψnt := {ν ∈ [nt] : hν = aτ(ν)}, where hν is defined in +(4). 
Suppose Fnt := �nt +ν=1 +�K +k=1 ˜Xk,ν ˜X⊤ +k,ν + 16d(K − +1) log Jd +δ IJ·d satisfies +λmin(Fnt)≥12Kd +� nt +� +ν=1 +48(K−1) log +� Jd +δ +� +λmin(Fν) ++2 log Jd +δ +� +, +(8) +for δ ∈ (0, 1). For each r ∈ [m] , let � +Wnt,r and W⋆,r +be the r-th column of � +Wnt and W⋆, respectively. Denote +βσ(δ) := 8 +√ +Jd + 96σ +� +Jd log 4 +δ. Then with probability +at least 1 − 4(m + 1)δ, +����Θnt − Θ∗��� +Vnt +≤βσr(δ), +max +r∈[m] +���� +Wnt,r − W⋆,r +��� +Vnt +≤βσb(δ). +(9) +The widely used self-normalized bound in Abbasi-Yadkori +et al. (2011) uses the Gram matrix consisting of selected +contexts only, while our bounds are normalized by Vnt +This change in the Gram matrix enables us to develop a +fast convergence rate. The condition (8) is required for +the eigenvalues of the Gram matrix Fnt to be large so that +the probability φaτ(ν),ν is large and the estimators use the +pseudo rewards and pseudo consumption vectors for most +of the rounds. We show in Lemma 5.3 that the condition (8) +requires at most rounds logarithmic in T, and does not affect +the main order of the regret bound. +Using the novel estimators, we define the estimates for +utility and resource consumption. Denote C(j) +t +:= {s ∈ [t] : +js = j} and +�u(j) +k,t := +���C(j) +t +��� +−1 � +s∈C(j) +t +�� +�θ(j) +t−1 +�⊤ +x(j) +k,s − c(j) +k,s +� +, +�b(j) +k,t := +���C(j) +t +��� +−1 � +s∈C(j) +t +� +� +W (j) +t−1 +�⊤ +x(j) +k,s. +(10) +The estimates (10) use the average of contexts in the same +class to estimate the expected value over the context dis- +tribution. In this way, the decision-maker effectively uses +previous contexts in all rounds including the skipped rounds. +Next, we establish the convergence rate for the estimators +�u(j) +k,t and �b(j) +k,t. +Theorem 4.2. (Convergence rate for the estimates) Suppose +Assumptions 1-3 hold. Denote the expected utility u⋆(j) +k +:= +E(xk,ck)∼Fj +�� +θ(j) +⋆ +�⊤ +xk − ck +� +and consumption b⋆(j) +k +:= +Exk∼Fj +�� +W (j) +⋆ +�⊤ +xk +� +. Set γt,σ(δ) := +16√ +J log(JKT ) +√ +t ++ +4 +√ +2βσ(δ) +√nt +, where nt is the number of admitted arrivals until + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +round t and βσ(δ) is defined in Theorem 4.1. Suppose +t ≥ 8dα−1p−1 +min log JT, δ ∈ (0, T −1) and Fnt satisfies (8). +Then with probability at least 1 − 4(m + 1)δ − 7T −1, +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���u⋆(j) +k +− �u(j) +k,t+1 +��� +2 +≤ γt,σr(δ), +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���b⋆(j) +k +− �b(j) +k,t+1 +��� +2 +∞ ≤ γt,σb(δ). +(11) +The convergence rate of the estimates is O( +√ +Jdn−1/2 +t +). In +deriving the fast rate, the novel Gram matrix Vnt plays a +significant role. To prove Theorem 4.2, we bound the sum +of squared maximum prediction error as follows: +1 +nt +� +s∈Ψnt +max +k∈[K] +�� +θ(j) +⋆ −�θ(j) +t +�⊤ +x(j) +k,s +�2 += 1 +nt +� +s∈Ψnt +max +k∈[K] +� +θ(j) +⋆ −�θ(j) +t +� � +x(j) +k,sx(j) +k,s +�⊤� +θ(j) +⋆ −�θ(j) +t +� +≤ 1 +nt +� +s∈Ψnt +� +θ(j) +⋆ −�θ(j) +t +� � K +� +k=1 +x(j) +k,sx(j) +k,s +�⊤� +θ(j) +⋆ −�θ(j) +t +� +≤ 1 +nt +���θ(j) +⋆ −�θ(j) +t +��� +2 +Vnt +. +Such a bound is not available if the Gram matrix is con- +structed using only contexts corresponding to selected ac- +tions. In this way, we obtain faster convergence rate for the +estimates for utility and consumption vectors. +4.2. Proposed Algorithm +Let (K + 1)-th action denote skipping the arrival and +π(j) +K+1,t := P (Skip the round t| Ht) denote the probabil- +ity of skipping the arrival. 
Since the decision-maker must +choose an action or skip the round, we have �K+1 +k=1 π(j) +k,t = 1. +When the decision-maker skips round t, we set x(j) +K+1,t := 0, +c(j) +K+1,t := 0, and b(j) +K+1,t := 0. In round t, the randomized +bandit policy is given by the optimal solution of the follow- +ing optimization problem: +max +π(jt) +k,t +K+1 +� +k=1 +π(jt) +k,t +� +�u(jt) +k,t + γt−1,σr(δ) +√pjt +I (k ∈ [K]) +� +, +s.t. +K+1 +� +k=1 +π(jt) +k,t +� +�b(jt) +k,t − γt−1,σb(δ) +√pjt +1m +� +≤ ρt ∨ 0, +K+1 +� +k=1 +π(jt) +k,t = 1, +π(jt) +k,t ≥ 0, +∀k ∈ [K + 1], +(12) +Algorithm 1 Allocate to the Maximum First algorithm +(AMF) +INPUT: confidence lengths γθ, γb > 0, confidence level +δ ∈ (0, 1). +Initialize F0 := 16d(K − 1) log Jd +δ IJ·d, ρ1 := ρ, �Θ0 := +0J·d, � +W0 := 0J·d×m +for t = 1 to T do +Observe arrival (jt, {x(jt) +k,t , c(jt) +k,t }k∈[K]). +if Ft−1 does not satisfy (8) then +Take action at = arg maxk∈[K] ρ∥�b(jt) +k,t ∥−1 +∞ . +else +Compute �u(jt) +k,t and �b(jt) +k,t with �θ(jt) +t−1 and � +W(jt) +t−1. +Compute ˜u(jt) +k,t := �u(jt) +k,t + +γθ +√nt and ˜b(jt) +k,t := �b(jt) +k,t − +γb +√nt 1m. +Take action at with the policy �π(jt) +1,t , . . . , �π(jt) +K+1,t de- +fined in (13). +end if +if at ∈ [K] then +Observe r(jt) +at,t and b(jt) +at,t, then estimate �Θt and � +Wt +as in (6) and (7), respectively. +Update Ft = Ft−1 + �K +k=1 ˜Xk,t ˜X⊤ +k,t. +end if +Update available resource ρt+1 = ρt + ρ − b(jt) +at,t. +if �t +s=1 b(js) +as,s ≥ Tρ then +Exit +end if +end for +where ρt := tρ − �t−1 +s=1 b(js) +as,s is the difference between +the used resources and planned budget until round t. The +algorithm is optimistic in that it uses upper confidence bound +(UCB) in rewards and lower confidence bound (LCB) in +consumption while it regulates the consumption to be less +than tρ with ρt. In this way, the problem (12) balances +between admitting the arrivals and saving the resources for +later use. Next, we show that the optimal solution (12) is +available in a closed-form. +Lemma 4.3. (Optimal policy for bandit) Let ˜u(jt) +k,t +:= +�u(jt) +k,t ++ p−1/2 +jt +γt−1,σr(δ)I (k ∈ [K]) and ˜b(jt) +k,t (r) +:= +�b(jt) +k,t (r) − p−1/2 +jt +γt−1,σb(δ), for r ∈ [m]. For i ∈ [K + 1], +let ˜u(jt) +k⟨i⟩,t be an sequence of ordered variables of ˜u(jt) +k,t in +decreasing order, i.e. ˜u(jt) +k⟨1⟩,t ≥ ˜u(jt) +k⟨2⟩,t ≥ · · · ≥ ˜u(jt) +k⟨K+1⟩t. +When there is a tie between ˜u(jt) +k⟨i⟩,t and ˜u(jt) +k⟨i+1⟩,t, the index +k⟨i⟩ with the higher value for +� +� min +r∈[m] +ρt(r) ∨ 0 − �i−1 +h=1 �π(jt) +k⟨h⟩,t˜b(jt) +k⟨h⟩,t(r) +˜b(jt) +k⟨h⟩,t(r) +� +� + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +goes first. Then the policy defined as, +�π(jt) +k⟨1⟩,t = +� +� min +r∈[m] +ρt(r)∨0 +˜b(jt) +k⟨1⟩,t(r) +� +� ∧ 1, +�π(jt) +k⟨i⟩,t = +� +�min +r∈[m] +ρt(r)∨0−�i−1 +h=1�π(jt) +k⟨h⟩,t˜b(jt) +k⟨h⟩,t(r) +˜b(jt) +k⟨i⟩,t(r) +� +� +∧ +� +1 − +i−1 +� +h=i +�π(jt) +k⟨h⟩,t +� +, ∀i ∈ [2, K + 1], +(13) +is the optimal solution to (12). +Since the objective function of (12) is linear, we can obtain +the maximum value by permuting the objective coefficients +in decreasing order and allocating the greatest possible prob- +ability value in decreasing order of the objective coefficients. +Note that �π(jt) +k,t is automatically set to zero when the utility +is negative. This is because of the probability of skipping +the arrival, �π(jt) +K+1,t = 1−�l−1 +h=1 �π(jt) +k⟨h⟩,t, when ˜u(jt) +K+1,t is the +l-th largest weighted utility function and all the remaining +probability is allocated to �π(jt) +K+1,t. 
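The allocation (13) is simple to implement. A sketch for a single round is given below, where the optimistic utilities, the pessimistic consumption estimates, and the remaining budget are assumed to be given, and the tie-breaking rule of Lemma 4.3 is omitted.

```python
import numpy as np

def amf_allocation(u_tilde, b_tilde, rho_t):
    """Greedy allocation of (13).

    u_tilde : (K+1,) optimistic utilities; the last entry is the skip action with utility 0
    b_tilde : (K+1, m) pessimistic consumption estimates; the skip row is all zeros
    rho_t   : (m,) remaining per-period budget (entries may be negative)
    """
    budget = np.maximum(rho_t, 0.0)           # rho_t v 0 in every resource
    pi = np.zeros(len(u_tilde))
    remaining = 1.0                           # probability mass not yet assigned
    for i in np.argsort(-u_tilde):            # visit actions in decreasing utility
        # Largest probability for action i keeping expected consumption below the budget.
        ratios = [budget[r] / b_tilde[i, r] for r in range(len(budget)) if b_tilde[i, r] > 0]
        cap = min(ratios) if ratios else np.inf
        pi[i] = min(cap, remaining)
        budget = budget - pi[i] * b_tilde[i]
        remaining -= pi[i]
        if remaining <= 0:
            break
    return pi

# Example with two resources: the top action is capped by the first resource, and the
# next-best action consumes only the second resource and takes part of the remaining mass.
u = np.array([0.8, 0.3, 0.0])                               # last entry: skip
b = np.array([[0.5, 0.0], [0.0, 0.4], [0.0, 0.0]])
print(amf_allocation(u, b, rho_t=np.array([0.3, 0.2])))     # approx. [0.6, 0.4, 0.0]
```

In this sketch the minimum is taken only over resources with positive estimated consumption, which also covers the skip action, whose consumption is identically zero.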
Therefor, the probabili- +ties for actions k with ˜u(jt) +k,t < ˜u(jt) +K+1,t := 0 are all zero. +Our proposed algorithm, Allocate to the Maximum First +(AMF) is presented in Algorithm 1. The algorithm first ex- +plores with the least consumption action until the eigenvalue +condition for the estimator (8) holds. In each round of ex- +ploration, the Gram matrix of all actions is added to Fnt, +and any choice of action increases the eigenvalue of Fnt. +Once the condition (8) holds, the algorithm solves the prob- +lem (12) by computing the closed-form policy (13). The +computational complexity of our algorithm is discussed in +Appendix A.4. +5. Regret Analysis +In this section, we present our regret bound and regret anal- +ysis for the AMF algorithm. +Theorem 5.1. (Regret bound of AMF) Suppose Assumptions +1-3 hold. Let Mα,p,T := 1152α−2p−2 +min log T + 96α−1p−1 +min +and Cσ(δ) := 8 +√ +2(8 + 96σ +� +log 4 +δ ). Suppose T and +ρ satisfies T ≥ 8dα−1p−1 +min log JdT, and ρ ≥ +� +Jd/T. +Setting γθ = 16√J log JKT + 4 +√ +2βσr(δ) and γb = +16√J log JKT + 4 +√ +2βσb(δ), the regret bound of AMF is +R�π +T ≤ +� +2+ OPT +ρT +�� +4d log JdT +αpmin ++2dMα,p,T log Jd +δ +15 ++ +� +96 +� +log JKT +3Cσr∨σb(δ) +�� +JdT log T +10mT 3δ +� +. +For δ ∈ (0, m−1T −3), the regret bound is +R�π +T = O +�OPT +Tρ +� +JdT log mJKT log T +� +. +(14) +The regret bound (14) holds when the hyperparameter δ = +m−1T −3, which requires the knowledge of T. However, +in practice, selecting another value of δ does not affect the +performance of the algorithm. We provide the discussion on +the sensitivity to the hyperparameter choice in Section 6.3. +Setting B = Tρ, the main term of the regret bound is +˜O(OPT/B +√ +JdT) for B = Ω( +√ +JdT). The sublinear +dependence of the regret bound on J, d, and T is a direct +consequence of the improved ˜O( +� +Jd/nt) convergence rate +for the parameter estimates. Agrawal & Devanur (2016) +establish a regret bound Rπ +T = ˜O(OPT/B · d +√ +T) for the +LinCBwK when B = Ω( +√ +dT 3/4). Our bound for LMMP +(which subsumes LinCBwK as a special case) is improved +by a +√ +d factor, and is valid under budget constraints that +relaxed from Ω( +√ +dT +3 +4 ) to Ω( +√ +dT +1 +2 ). +For the proof of the regret bound, we first present the lower +bound of the reward obtained by our algorithm. +Lemma 5.2. Let ˜u(j) +k,t and �b(j) +k,t be the estimates defined in +(10). Denote �π the policy of AMF. Define the good events, +Et := +� +˜u(j) +k,t and �b(j) +k,t satisfies (11). +� +, +Mt := {Fnt satisfies (8).} , +(15) +and Gt := Et ∩ Mt−1. Let τ be the stopping time for the +algorithm and ξ := inft∈[T ] {Mt−1 ∩ {ρt > 0}} be the +starting time after the exploration for condition (8). Then, +the total reward +E +� T +� +t=1 +R�π +t +� +≥ OPT +T +E [τ − ξ]− +� +2+ OPT +ρT +� T +� +t=1 +P(Gc +t ) +−2 +� +1+ OPT +ρT +�� +� +� +�TE +� T +� +t=1 +γt−1,σr∨σb(δ)2I (at ∈ [K]) +� +. +The lower bound consists of three main terms. The first +term OP T +T +E [τ − ξ] relates to the time span for which the +algorithm uses the optimal policy (13). The second term +(2+ OP T +ρT ) �T +t=1P (Gc +t ) is the sum of the probability of bad +events Mc +t−1 over which the minimum eigenvalue of the +Gram matrix Fnt is not large enough for the fast conver- +gence rate, and the event Ec +t over which the estimator goes +out of the confidence interval. And, the third term con- +sists of the sum of confidence lengths for the reward and +consumption. +The following result bounds τ, ξ and the sum of bad events +{Mc +t : t ∈ [T]}. 
+ +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Lemma 5.3. Suppose Assumptions 1-3 holds and ρ > +� +Jd/T Let Mα,p,T and γt,σ(δ) denote the variables +defined in Theorem 5.1 and Theorem 4.2, respectively. +Then, for any δ ∈ (0, 1/T 2), the starting time ξ := +inft∈[T ]{Mt−1 ∩ {ρt > 0}} and the stopping time τ of +the AMF algorithm is bounded as +E [ξ] ≤ 1+dMα,p,T log +� Jd +δ +� ++T 2δ +ρ ++ 1, +E[T −τ]≤ 4(m + 1)Tδ + 7 + 2γ1,σb(δ) +ρ +, +and for Mt defined as in (15), +T +� +t=1 +P +� +Mc +t−1 +� +≤ T 2δ + dMα,p,T log +�Jd +δ +� +. +The regret bound follows from bounding the probability of +Ec +t with Theorem 4.2 and showing that the sum of square +of γt,σ(δ) is O(Jd log T). The bound holds because the +summation of γt,σ(δ)2 = ˜O( Jd +nt ) over the rounds that at ∈ +[K] happens is �nT +n=1 O(Jd/n) = O(Jd log T). +6. Numerical results +We report the cumulative regrets for given budgets. For the +computation of the regret, we use the following settings. For +each round t ∈ [T] there exists the optimal action whose +reward is 1 with consumption ρ, while the reward of other +actions is less than 1 and the consumption is possibly greater +than ρ. In this case, we can compute the instantaneous regret +by subtracting the reward of a selected action from 1. Detail +settings of the parameter and contexts are in Appendix A.1. +6.1. Regret R�π +T as a function of d +Figure 1 plots log(R�π +T ) vs. log(d) for a single-class (J = +1) LMMP for T = {5000, 20000} and the budget B = +√ +dT, where our ˜O( OP T +B +√ +JdT) regret bound implies that +log(R�π +T ) is constant over d. The regression line on the +plot is nearly flat and the slope of the best fit line is 0.136 +(resp. 0.008) for T = 5000 (resp. T = 20000). The weak +increase in T = 5000 is captured by the O(d log JdmT) +term in our bound, which diminishes for large T. +6.2. Comparison of AMF with OCO +In order to compare AMF with OCO (Agrawal & Devanur, +2016), we set the costs c(1) +k,t = 0 and J = 1. The hyperpa- +rameters for AMF were set to γθ = 1, γb = 1 and δ = 0.01. +Figure 2(a) (resp. (b)) plots the cumulative regret of the +two algorithms with budget B = +√ +dT +3 +4 (resp. B = +√ +dT). +Note that OCO requires a minimum budget B = +√ +dT +3 +4 +whereas AMF requires a lower minimum budget of B = +Figure 1. Logarithm of cumulative regret of the proposed AMF +algorithm on various dimension d when the per-period budget is +ρ = +� +d/T. The gray (resp. black) line is the best fit line on the +points when T = 5000 (resp. T = 20000). +(a) Regret comparison with budget B = +√ +dT 3/4 +(b) Regret comparison with budget B = +√ +dT +Figure 2. Regret of AMF and OCO algorithms for K = 20 and +m = 20. The line and shade represent the average and standard +deviation based on 20 independent experiments. Additional results +on different K and m are in Section A.2. +√ +dT. The regret lines cross because AMF is allowed to skip +arrivals whereas OCO does not skip arrivals. The sudden +bend points at the end of the round in OCO show that it +runs out of budget and has regret = 1. In all cases, our +algorithm performs better and the performance gap increases +as d increases. Note that regret plot for OCO never flattens +out for most cases, where the regret of AMF flattens as t +increases. This is because our new estimator, that uses +contexts from all actions with unbiased pseudo-rewards (5) +for unselected actions, has significantly faster convergence +rate as compared with the estimator used in OCO. +6.3. 
Sensitivity Analysis +Our proposed AMF algorithm has three hyperparameters: γθ, +γb and δ. The choice of hyperparameters is not sensitive +because the effect of γθ and γb diminishes fast by n−1/2 +t +term and our policy finds the order of the utilities rather than +their absolute values. For δ, which controls the sampling +probabilities (4) in estimators and the exploration rounds +in (8), it also has small effect. This is because the minimum +eigenvalue of Fnt increases in Ω(nt)-rate and reduces the +effect of log 1 +δ terms in (4) and (8). Therefore, our algorithm +guarantees the similar performance for other hyperparame- +ters than specified in Theorem 5.1. For details including the +numerical results and specific recommendations for choos- +ing the hyperparameters, see Appendix A.3. + +7.5 +7.0 +6.5 +. +6.0 +5.5 - +O +5.0 +1.0 +1.5 +2.0 +2.5 +3.0 +log d2000 +OCO +1500 +AMF +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points3500 +OCo +3000 +AMF +2500 +2000 +1500 - +1000- +500 +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision pointsImproved Algorithms for Multi-period Packing Problems with Bandit Feedback +References +Abbasi-Yadkori, Y., P´al, D., and Szepesv´ari, C. Improved +algorithms for linear stochastic bandits. In Advances in +Neural Information Processing Systems, pp. 2312–2320, +2011. +Agrawal, S. and Devanur, N. Linear contextual bandits with +knapsacks. Advances in Neural Information Processing +Systems, 29, 2016. +Agrawal, S. and Devanur, N. R. +Bandits with concave +rewards and convex knapsacks. In Proceedings of the +fifteenth ACM conference on Economics and computation, +pp. 989–1006, 2014a. +Agrawal, S. and Devanur, N. R. Fast algorithms for online +stochastic convex programming. In Proceedings of the +twenty-sixth annual ACM-SIAM symposium on Discrete +algorithms, pp. 1405–1424. SIAM, 2014b. +Amani, S., Alizadeh, M., and Thrampoulidis, C. Linear +stochastic bandits under safety constraints. In Advances in +Neural Information Processing Systems, pp. 9252–9262, +2019. +Azuma, K. Weighted sums of certain dependent random +variables. Tohoku Mathematical Journal, Second Series, +19(3):357–367, 1967. +Badanidiyuru, A., Kleinberg, R., and Slivkins, A. Bandits +with knapsacks. Journal of the ACM (JACM), 65(3):1–55, +2018. +Bang, H. and Robins, J. M. Doubly robust estimation in +missing data and causal inference models. Biometrics, 61 +(4):962–973, 2005. +Bastani, H. and Bayati, M. Online decision making with +high-dimensional covariates. Operations Research, 68 +(1):276–294, 2020. +Bastani, H., Bayati, M., and Khosravi, K. +Mostly +exploration-free algorithms for contextual bandits. Man- +agement Science, 67(3):1329–1349, 2021. +Besbes, O. and Zeevi, A. Dynamic pricing without knowing +the demand function: Risk bounds and near-optimal algo- +rithms. Operations Research, 57(6):1407–1420, 2009. +Besbes, O. and Zeevi, A. Blind network revenue manage- +ment. Operations research, 60(6):1537–1550, 2012. +Boyd, S., Boyd, S. P., and Vandenberghe, L. Convex opti- +mization. Cambridge university press, 2004. +Chu, W., Li, L., Reyzin, L., and Schapire, R. Contextual +bandits with linear payoff functions. +In Proceedings +of the Fourteenth International Conference on Artificial +Intelligence and Statistics, pp. 208–214, 2011. +Devanur, N. R., Jain, K., Sivan, B., and Wilkens, C. A. Near +optimal online algorithms and fast approximation algo- +rithms for resource allocation problems. In Proceedings +of the 12th ACM conference on Electronic commerce, pp. +29–38, 2011. 
Dimakopoulou, M., Zhou, Z., Athey, S., and Imbens, G. Balanced linear contextual bandits. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 3445–3453, 2019.
Efron, B. and Tibshirani, R. J. An introduction to the bootstrap. CRC Press, 1994.
Feldman, J., Henzinger, M., Korula, N., Mirrokni, V. S., and Stein, C. Online stochastic packing applied to display ad allocation. In European Symposium on Algorithms, pp. 182–194. Springer, 2010.
Ferreira, K. J., Simchi-Levi, D., and Wang, H. Online network revenue management using Thompson sampling. Operations Research, 66(6):1586–1602, 2018.
Gallego, G. and Van Ryzin, G. Optimal dynamic pricing of inventories with stochastic demand over finite horizons. Management Science, 40(8):999–1020, 1994.
Good, P. I. Resampling methods. Springer, 2006.
Immorlica, N., Sankararaman, K. A., Schapire, R., and Slivkins, A. Adversarial bandits with knapsacks. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pp. 202–219. IEEE, 2019.
Kannan, S., Morgenstern, J. H., Roth, A., Waggoner, B., and Wu, Z. S. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. Advances in Neural Information Processing Systems, 31, 2018.
Kim, G. and Paik, M. C. Doubly-robust lasso bandit. In Advances in Neural Information Processing Systems, pp. 5869–5879, 2019.
Kim, W., Kim, G.-S., and Paik, M. C. Doubly robust Thompson sampling with linear payoffs. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, 2021.
Lattimore, T. and Szepesvári, C. Bandit algorithms. Cambridge University Press, 2020.
Lee, J. R., Peres, Y., and Smart, C. K. A Gaussian upper bound for martingale small-ball probabilities. Annals of Probability, 44(6):4184–4197, 2016. doi: 10.1214/15-AOP1073.
Li, X., Sun, C., and Ye, Y. The symmetry between arms and knapsacks: A primal-dual approach for bandits with knapsacks. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 6483–6492. PMLR, 2021.
Liu, X., Li, B., Shi, P., and Ying, L. An efficient pessimistic-optimistic algorithm for stochastic linear bandits with general constraints. Advances in Neural Information Processing Systems, 34:24075–24086, 2021.
Moradipari, A., Amani, S., Alizadeh, M., and Thrampoulidis, C. Safe linear Thompson sampling with side information. IEEE Transactions on Signal Processing, 69:3755–3767, 2021. doi: 10.1109/TSP.2021.3089822.
Oh, M.-h., Iyengar, G., and Zeevi, A. Sparsity-agnostic lasso bandit. In International Conference on Machine Learning, pp. 8271–8280. PMLR, 2021.
Pacchiano, A., Ghavamzadeh, M., Bartlett, P., and Jiang, H. Stochastic bandits with linear constraints. In Banerjee, A. and Fukumizu, K. (eds.), Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pp. 2827–2835. PMLR, 2021.
Sankararaman, K. A. and Slivkins, A. Bandits with knapsacks beyond the worst case. Advances in Neural Information Processing Systems, 34:23191–23204, 2021.
Sivakumar, V., Wu, S., and Banerjee, A. Structured linear contextual bandits: A sharp and geometric smoothed analysis.
In International Conference on Machine Learning, +pp. 9026–9035. PMLR, 2020. +Sivakumar, V., Zuo, S., and Banerjee, A. Smoothed adver- +sarial linear contextual bandits with knapsacks. In Inter- +national Conference on Machine Learning, pp. 20253– +20277. PMLR, 2022. +Tropp, J. A. User-friendly tail bounds for sums of random +matrices. Foundations of computational mathematics, 12 +(4):389–434, 2012. +Tropp, J. A. An introduction to matrix concentration inequal- +ities. Foundations and Trends® in Machine Learning, 8 +(1-2):1–230, 2015. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +(a) Regret comparison under K = 10 and m = 10 +(b) Regret comparison under K = 20 and m = 10 +(c) Regret comparison under K = 10 and m = 20 +(d) Regret comparison under K = 20 and m = 20 +Figure 3. Regret comparison of AMF and OCO algorithms under B = dT 3/4. The line and shade represent the average and standard +deviation based on 20 repeated experiments. +A. Supplementary for Experiments +A.1. Settings of Parameters and Contexts for Regret Computation +For numerical experiments, we devise a setting where explicit regret computation is available. We set J = 1 and c(1) +k,t = 0 for +OCO to be compatible with the setting. For x ∈ R+, let ⌈x⌉ be the smallest integer greater than equal to x. For parameters, +we set θ⋆ = (−1, · · · , −1, ⌈d/2⌉−1, · · · , ⌈d/2⌉−1) and +W⋆ = +� +� +� +� +� +� +� +� +� +� +ρ⌈d/2⌉−1 +· · · +ρ⌈d/2⌉−1 +... +· · · +... +ρ⌈d/2⌉−1 +· · · +ρ⌈d/2⌉−1 +ρ +· · · +ρ +... +... +... +ρ +· · · +ρ +� +� +� +� +� +� +� +� +� +� +, +where the ⌈d/2⌉−1 and ρ⌈d/2⌉−1 terms are in the first ⌈d/2⌉ entries. +For contexts, we set the optimal action as +(0, · · · , 0, 1, · · · , 1), and for other actions, we set (U0,0.05, · · · , U0,0.05, U0,−0.05, · · · , U0,−0.05),where Ua,b the uniform +random variable supports on [a, b]. Then we have the optimal arm with reward 1 and consumption ρ, while other arms have +reward less than 1 and consumption more than ρ. +A.2. Additional Results on Regret Comparison. +Figure 3 (a)-(d) show the regret comparison of AMF and OCO on different terms of K = 10, 20, m = 10, 20, and B = dT 3/4. +Similar to the results in Figure 2(a), our algorithm has less regret than OCO in all cases, especially at the end of the rounds. +The crossing line occurs when our algorithm skips in the middle round when ρt < 0 while OCO does not skip until the +inventory runs out. +Figure 4 (a)-(d) show the regret of AMF and OCO algorithm on various K = 10, 20 and m = 10, 20 with budget B = +√ +dT. +Even in the smaller budget, our algorithm AMF does not run out the inventory and gains more reward than OCO. The gap of +the performance tends to be larger than B = +√ +dT +3 +4 case. 
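For completeness, a compact sketch of the synthetic construction in Appendix A.1 follows. The extracted description leaves the ordering of the two coordinate blocks ambiguous, so the sketch fixes one self-consistent alignment in which the 1-block of the optimal context matches the ⌈d/2⌉⁻¹ entries of θ⋆ and the ρ⌈d/2⌉⁻¹ rows of W⋆; the essential property is that the optimal action has expected reward 1 and expected consumption ρ, so the instantaneous regret is 1 minus the expected reward of the selected action.

```python
import numpy as np

rng = np.random.default_rng(3)
d, K, m, T = 10, 20, 20, 10000
rho = np.sqrt(d / T)                       # per-period budget used in these experiments
half = int(np.ceil(d / 2))                 # size of the "informative" coordinate block

# One self-consistent reading of A.1 (the block ordering is an assumption):
theta_star = np.concatenate([np.full(half, 1.0 / half), np.full(d - half, -1.0)])
W_star = np.vstack([np.full((half, m), rho / half), np.full((d - half, m), rho)])

def contexts():
    """Optimal action: indicator of the informative block (reward 1, consumption rho).
    Other actions: small uniform perturbations around zero (reward below 1)."""
    X = np.zeros((K, d))
    X[0, :half] = 1.0
    X[1:, :half] = rng.uniform(0.0, 0.05, size=(K - 1, half))
    X[1:, half:] = rng.uniform(-0.05, 0.0, size=(K - 1, d - half))
    return X

X = contexts()
print(X @ theta_star)      # first entry is 1 (up to rounding), the rest are below 1
print((X @ W_star)[0])     # the optimal action consumes rho in every resource
```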
+ +1000 +d=10, K=20,m=20 +OCO +AMF +800 +600 +400 +200 : +0 · +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=20, K=20, m=20 +OCO +AMF +800 +600 +400 +200 : +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=10,K=10, m=10 +OCO +AMF +800 +600 +400 +200 +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=20,K=10,m=10 +OCO +AMF +800 +600 - +400 - +200 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=10,K=20, m=10 +OCO +AMF +800 +600 +400 +200 : +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=20,K=20,m=10 +OCO +AMF +800 +600 +400 +200 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=10,K=10, m=20 +OCO +AMF +800 +600 +400 +200 : +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points1000 +d=20,K=10, m=20 +OCO +AMF +800 +600 +400 +200 : +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision pointsImproved Algorithms for Multi-period Packing Problems with Bandit Feedback +(a) Regret comparison under K = 10 and m = 10 +(b) Regret comparison under K = 20 and m = 10 +(c) Regret comparison under K = 10 and m = 20 +(d) Regret comparison under K = 20 and m = 20 +Figure 4. Regret comparison of AMF and OCO algorithms under B = +√ +dT. The line and shade represent the average and standard +deviation based on 20 repeated experiments. +(a) On various γθ +(b) On various γb +(c) On various δ +Figure 5. The reward and inventory of AMF on various hyperparameters γθ, γb and δ. The solid (resp. dashed) line represents the reward +(resp. inventory). The line and shade represent the average and standard deviation based on 10 repeated experiments, respectively. +A.3. Sensitivity Analysis +In this experiment, we present the sensitivity of our algorithm to various hyperparameters. The number of classes is J = 3 +with a uniform prior p = (1/3, 1/3, 1/3)⊤ and every d = 5 elements of K = 10 contexts are generated from the uniform +distribution on [ kj +KJ − 1, kj +KJ + 1] for k ∈ [K] and j ∈ [J]. The costs are generated from the uniform distribution on +[ k(J−j+1)−1 +KJ +, k(J−j+1)+1 +KJ +] for k ∈ [K] and j ∈ [J]. Each element of θ(j) +⋆ +and W (j) +⋆ +is generated from U0,1 and fixed +throughout the experiment. The generated rewards and consumption vectors are not truncated to one to impose more +variability, because our algorithm does not show apparent sensitivity on bounded rewards and consumption vectors. The +budget is ρ = dT −1/2 with a time horizon of T = 2000. +In our algorithm, there are three hyperparameters: (i) a confidence bound for the reward γθ, (ii) a confidence bound for +the consumption γb and (iii) confidence level δ which affects the minimum eigenvalue condition (8). Figure 5(a) and 5(b) +show the reward and inventory of our algorithm on various γθ ∈ {0.01, 0.1, 1} and γb ∈ {0.01, 0.1, 1}. Outside of the +hyperparameter regions, the variability of the reward and the inventory of the algorithm are hardly visible. The algorithm +consumes the budget earlier than previous experiments because the consumption vector is not bounded to 1. As γθ and +γb increase the algorithm is more optimistic and admits the arrival more often, which leads to faster consumption of the +resource. Increasing γθ has small effect on the inventory because the algorithm automatically skips when ρt < 0, i.e., when +the consumption is too fast. 
However, when γb increases, the LCB of consumption is small and the algorithm uses more + +2500 +d=10,K=10,m=10 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=20,K=10,m=10 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=10,K=20,m=10 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=20,K=20,m=10 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=10,K=10,m=20 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=20, K=10,m=20 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=10,K=20,m=20 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points2500 +d=20, K=20, m=20 +OCO +AMF +2000 +1500 - +1000 - +500 - +0 +0 +2000 +4000 +6000 +8000 +10000 +Decision points700 +600 +500 +400 +300 +200 +Ye=0.01 +100 +Ye=0.10 +0 +Ye=1.00 +0 +250 +500 +750 +1000 +1250 +1500 +1750 +2000 +Decision points700 +600 +500 +400 +300 +200 +Yb=0.01 +100 +Yb=0.10 +0 +Yb=1.00 +0 +250 +500 +750 +1000 +1250 +1500 +1750 +2000 +Decision points700 +600 +500 +400 +300 +200 +6=1e-01 +§=1e-04 +100 +6=1e-07 +0 +0 +250 +500 +750 +1000 +1250 +1500 +1750 +2000 +Decision pointsImproved Algorithms for Multi-period Packing Problems with Bandit Feedback +resource than tρ. For the specific value of the hyperparameters we recommend to use grid search on γθ × γb ∈ [0, 1]2 to +maximize the reward. +Figure 5(c) shows the reward and inventory of AMF on various δ ∈ {10−1, 10−4, 10−7}. When δ ≥ 10−1 (resp. δ ≤ 10−7) +the reward and inventories are same with δ = 10−1 (resp. δ = 10−7). As δ decreases, the threshold for condition (8) +increases and the algorithm explores more with minimum possible consumption. This results in the slower consumption +of the resource. However, we recommend using δ = 0.1, which is greater than the specified value in Theorem 5.1 for the +algorithm to start using its policy in earlier rounds. +A.4. Computational Complexity of AMF +The computational complexity of our algorithm is ˜O(d3mKT + Jd3T) where the main order occurs from updating the +estimators and computing the eigenvalues of J symmetric positive-definite matrix Fnt. Note that Computing estimators +does not depend on J because the algorithm updates only jt-th variables for each t ∈ [T]. +B. Missing Proofs +B.1. Proof of Theorem 4.1 +Proof. Because the construction of �Θt and � +Wt is the same, the bound for the � +Wt follows immediately from the bound for +�Θt by replacing {r +(jτ(ν)) +aτ(ν),τ(ν) : ν ∈ [nt]} with m entries of {b +(jτ(ν)) +aτ(ν),τ(ν) : ν ∈ [nt]}. Thus, it is sufficient to prove the bound +for �Θt. +Step 1. Estimation error decomposition: +Let us fix t ∈ [T] throughout the proof. For each ν ∈ [nt] and k ∈ [K], denote +Xk,ν := ˜Xk,ν ˜X⊤ +k,ν. Then we can write +Vnt := +� +ν∈Ψnt +K +� +k=1 +Xk,ν + +� +ν /∈Ψnt +Xaτ(ν),ν + IJ·d, +Ant := +� +ν∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν + +� +ν /∈Ψnt +Xk,ν + IJ·d. +Denote the errors ˜ηk,ν := ˜rk,ν − ˜X⊤ +k,νΘ⋆ and ηk,ν := r +(jτ(ν)) +k,τ(ν) − ˜X⊤ +k,νΘ⋆. 
By the definition of the estimator �Θnt, +����Θnt − Θ∗��� +Vnt += +������ +V −1/2 +nt +� +� +�−Θ∗ + +� +ν∈Ψnt +K +� +k=1 +˜ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηk,ν ˜Xaτ(ν),ν +� +� +� +������ +2 +≤ λmax +� +V −1/2 +nt +� +∥Θ∗∥2 + +������ +V −1/2 +nt +� +� +� +� +ν∈Ψnt +K +� +k=1 +˜ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηk,ν ˜Xaτ(ν),ν +� +� +� +������ +2 +≤ +√ +Jd + +������ +V −1/2 +nt +� +� +� +� +ν∈Ψnt +K +� +k=1 +˜ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηk,ν ˜Xaτ(ν),ν +� +� +� +������ +2 +, +(16) +where and the last inequality holds because +���θ(j) +⋆ +��� +2 ≤ +√ +d. Plugging in ˜rk,ν defined in (5), +˜ηk,ν ˜Xk,ν = +� +1 − I (hν = k) +φk,ν +� +˜Xk,ν ˜X⊤ +k,ν +�ˇΘt − Θ∗� ++ I (hν = k) +φk,ν +ηk,ν ˜Xk,ν, + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +and the term � +ν∈Ψnt +�K +k=1 ˜ηk,ν ˜Xk,ν is decomposed as, +� +ν∈Ψnt +K +� +k=1 +˜ηk,ν ˜Xk,ν = +� +ν∈Ψnt +K +� +k=1 +�� +1 − I (hν = k) +φk,ν +� +Xk,ν +�ˇΘt − Θ∗� ++ I (hν = k) +φk,ν +ηk,ν ˜Xk,ν +� +. +(17) +By definition of the IPW estimator ˇΘt, +� +ν∈Ψnt +K +� +k=1 +� +1 − I (hν = k) +φk,ν +� +Xk,ν +�ˇΘt − Θ∗� += +� +� +� +� +ν∈Ψnt +K +� +k=1 +� +1 − I (hν = k) +φk,ν +� +Xk,ν +� +� +� A−1 +nt +� +�−Θ∗ + +� +ν∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηaτ(ν),ν ˜Xaτ(ν),ν +� +� += (Vnt − Ant) A−1 +nt +� +�−Θ∗ + +� +ν∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηaτ(ν),ν ˜Xaτ(ν),ν +� +� +:= (Vnt − Ant) A−1 +nt (−Θ∗ + Snt) , +(18) +where +Snt := +� +ν∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηaτ(ν),ν ˜Xaτ(ν),ν, +then, +����Θnt − Θ∗��� +Vnt +≤ +(16) +√ +Jd + +������ +V −1/2 +nt +� +� +� +� +ν∈Ψnt +K +� +k=1 +˜ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηk,ν ˜Xaτ(ν),ν +� +� +� +������ +2 += +(17),(18) +√ +Jd + +���V −1/2 +nt +� +(Vnt − Ant) A−1 +nt (−Θ∗ + Snt) + Snt +���� +2 . +By triangular inequality, +����Θt − Θ∗��� +Vnt +≤ +√ +Jd + +���V −1/2 +nt +� +(Vnt − Ant) A−1 +nt (−Θ∗ + Snt) + Snt +���� +2 +≤ +√ +Jd + +���V −1/2 +nt +(Vnt − Ant) A−1 +nt (−Θ∗ + Snt) +��� +2 + ∥Snt∥V −1 +nt +≤ +√ +Jd + +��� +� +V 1/2 +nt A−1 +nt V 1/2 +nt +− IJ·d +� � +−V −1/2 +nt +Θ∗ + V −1/2 +t +Snt +���� +2 + ∥Snt∥V −1 +nt +≤ +√ +Jd + +���V 1/2 +nt A−1 +nt V 1/2 +nt +− IJ·d +��� +2 +���−V −1/2 +nt +Θ∗ + V −1/2 +t +Snt +��� +2 + ∥Snt∥V −1 +nt +≤ +√ +Jd + +���V 1/2 +nt A−1 +nt V 1/2 +nt +− IJ·d +��� +2 +�√ +Jd + ∥Snt∥V −1 +nt +� ++ ∥Snt∥V −1 +nt += +����V 1/2 +nt A−1 +nt V 1/2 +nt +− IJ·d +��� +2 + 1 +� �√ +Jd + ∥Snt∥V −1 +nt +� +. +(19) +Step 2. Bounding the ∥ · ∥2 of the matrix in (19) +We claim that +V 1/2 +nt A−1 +nt V 1/2 +nt +⪰ 1 +8IJ·d +(20) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Define Fnt := �nt +ν=1 +�K +k=1 Xk,ν + 16Kd log( Jd +δ )IJ·d. Then we have Vnt ⪯ Fnt and V 1/2 +nt A−1 +nt V 1/2 +nt +⪰ F −1/2 +nt +AntF −1/2 +nt +. +Now we decompose the matrix Ant as +F −1/2 +nt +AntF −1/2 +nt +=F −1/2 +nt +� nt +� +ν=1 +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν + IJ·d +� +F −1/2 +nt ++ F −1/2 +nt +� +� � +ν /∈Ψnt +� +Xaτ(ν),ν − +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +�� +� F −1/2 +nt +. +(21) +For each ν ∈ [nt], the matrix �K +k=1 +I(hν=k) +φk,ν +F −1/2 +nt +Xk,νF −1/2 +nt +symmetric positive definite and +λmax +� +8 log Jd +δ +K +� +k=1 +I (hν = k) +φk,ν +F −1/2 +nt +Xk,νF −1/2 +nt +� +≤8 log Jd +δ +K +� +k=1 +I (hν = k) +φk,ν +λmax +� +F −1 +nt +� +≤8 log Jd +δ +λmin(Fν) +16 log +� Jd +δ +�λmax +� +F −1 +nt +� +≤1 +2 +λmin(Fν) +λmin(Fnt) +≤1 +2. 
+(22) +With the filtration F0 := Ht and Fn := F0 ∪ {hν : ν ∈ [n]}, we use Lemma C.3 to have with probability at least 1 − δ, +8 log Jd +δ F −1/2 +nt +� nt +� +ν=1 +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν + IJ·d +� +F −1/2 +nt +⪰ 8 log Jd +δ F −1/2 +nt +� nt +� +ν=1 +K +� +k=1 +Xk,ν + IJ·d +� +F −1/2 +nt += 4 log Jd +δ IJ·d − log Jd +δ IJ·d += 3 log Jd +δ IJ·d, +which implies +F −1/2 +nt +� nt +� +ν=1 +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν + IJ·d +� +F −1/2 +nt +⪰ 3 +8IJ·d, +(23) +and the first term in (21) is bounded as +F −1/2 +nt +AntF −1/2 +nt +⪰ 3 +8IJ·d + F −1/2 +nt +� +� � +ν /∈Ψnt +� +Xaτ(ν),ν − +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +�� +� F −1/2 +nt +. +(24) +To bound the other term, observe that for ν /∈ Ψnt, +E +� K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +����� Ht +� += +� +i̸=aτ(ν) +K +� +k=1 +φi,ν +I (i = k) +φk,ν +Xk,ν = +� +k̸=aτ(ν) +Xk,ν. +Because (22) holds for ν /∈ Ψnt, we can use Lemma C.3, to have with probability at least 1 − δ +8 log Jd +δ F −1/2 +nt +� +� � +ν /∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +� +� F −1/2 +nt +⪯ 12 log Jd +δ F −1/2 +nt +� +� � +ν /∈Ψnt +� +k̸=aτ(ν) +Xk,ν +� +� F −1/2 +nt ++ log Jd +δ IJ·d. +Rearranging the terms, +F −1/2 +nt +� +� � +ν /∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +� +� F −1/2 +nt +⪯ 3 +2F −1/2 +nt +� +� � +ν /∈Ψnt +� +k̸=aτ(ν) +Xk,ν +� +� F −1/2 +nt ++ 1 +8IJ·d. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Thus the second term in (24) is bounded as, +F −1/2 +nt +� +� � +ν /∈Ψnt +� +Xaτ(ν),ν − +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +�� +� F −1/2 +nt +⪰ F −1/2 +nt +� +� � +ν /∈Ψnt +� +� +�Xaτ(ν),ν − 3 +2 +� +ν /∈Ψt +� +k̸=aτ(ν) +Xk,ν +� +� +� +� +� F −1/2 +nt +− 1 +8IJ·d +⪰ −3 +2F −1/2 +nt +� +� � +ν /∈Ψnt +K +� +k=1 +Xk,ν +� +� F −1/2 +nt +− 1 +8IJ·d +⪰ − +�3dK +2 +��Ψc +nt +�� λmax +� +F −1 +nt +� ++ 1 +8 +� +IJ·d, +(25) +where the last inequality holds by λmax(Xk,ν) ≤ d. By Lemma C.3, with probability at least 1 − δ, +1 +2 +��Ψc +nt +�� = 1 +2 +nt +� +ν=1 +I +� +hν ̸= aτ(ν) +� +≤ 3 +2 +nt +� +ν=1 +� +k̸=aτ(ν) +φk,ν + log 1 +δ , +which implies +3dK +2 +��Ψc +nt +�� λmax +� +F −1 +nt +� +≤ +3Kd +2λmin(Fnt) +� +� +�3 +nt +� +ν=1 +� +k̸=aτ(ν) +φk,ν + 2 log 1 +δ +� +� +� += +3Kd +2λmin(Fnt) +� nt +� +ν=1 +48 (K − 1) log +� Jd +δ +� +λmin(Fν) ++ 2 log 1 +δ +� +Because the assumption (8), +λmin(Fnt) ≥ 12Kd +� nt +� +ν=1 +48 (K − 1) log +� Jd +δ +� +λmin(Fν) ++ 2 log Jd +δ +� +, +implies +3dK +2 +��Ψc +nt +�� λmax +� +F −1 +nt +� +≤ +3Kd +2λmin(Fnt) +� nt +� +ν=1 +48 (K − 1) log +� Jd +δ +� +λmin(Fν) ++ 2 log 1 +δ +� +≤ 1 +8, +(26) +plugging in (25), with probability at least 1 − 2δ, +F −1/2 +nt +� +� � +ν /∈Ψnt +� +Xaτ(ν),ν − +K +� +k=1 +I (hν = k) +φk,ν +Xk,ν +�� +� F −1/2 +nt +⪰ −1 +4IJ·d. +With (24), +F −1/2 +nt +AntF −1/2 +nt +⪰ 1 +8IJ·d, +which proves (20) and the claim implies +���V 1/2 +nt A−1 +nt V 1/2 +nt +− IJ·d +��� +2 ≤ 7. +Step 3. Bounding the self-normalized vector-valued martingale Snt +Let F0 be a sigma algebra generated by contexts +{x(js) +k,s : k ∈ [K], s ∈ [t]}, and Ψt. Define filtration as Fν := σ(F0 ∪ Hτ(ν+1)). 
Then Sν is a RJ·d-valued martingale + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +because +E [Sν − Sν−1| Fν−1] = E +� +I (ν ∈ Ψnt) +K +� +k=1 +I (hν = k) +φk,ν +ηk,ν ˜Xk,ν + I (ν /∈ Ψnt) ηaτ(ν),τ(ν) ˜Xk,ν +����� Fν−1 +� += E +� +I (ν ∈ Ψnt) +K +� +k=1 +I +� +aτ(ν) = k +� +φk,ν +ηk,ν ˜Xk,ν + I (ν /∈ Ψnt) ηaτ(ν),τ(ν) ˜Xk,ν +����� Fν−1 +� += E +�� +I (ν ∈ Ψnt) +φaτ(ν),ν ++ I (ν /∈ Ψnt) +� +ηaτ(ν),ν ˜Xk,ν +����� Fν−1 +� += E +�� +I (ν ∈ Ψnt) +φaτ(ν),ν ++ I (ν /∈ Ψnt) +� +ηaτ(ν),ν ˜Xk,ν +����� Hτ(ν) +� += 0, +where the second equality holds by definition of Ψnt and the fourth inequality holds because the distribution of {x(js) +k,s : k ∈ +[K], s ∈ (τ(ν), t]} is independent of Hτ(ν) by Assumption 2. By Assumption 1, for any λ ∈ R, +E +� +exp +� +λ +� +I (ν ∈ Ψnt) +φaτ(ν),ν ++I (ν /∈ Ψnt) +� +ηk,aτ(ν) +������ Fν−1 +� +≤ E +� +�exp +� +�λ2σ2 +2 +� +I (ν ∈ Ψnt) +φaτ(ν),ν ++I (ν /∈ Ψnt) +�2� +� +������ +Fν−1 +� +� +≤ exp +� +2λ2σ2 +r +� +, +Thus, +� +I(ν∈Ψnt) +φaτ(ν),ν + I (ν /∈ Ψnt) +� +ηk,aτ(ν) is 2σr-sub-Gaussian. Because +∥Snt∥V −1 +t += +������ +� +ν∈Ψnt +K +� +k=1 +I (hν = k) +φk,ν +ηk,ν ˜Xk,ν + +� +ν /∈Ψnt +ηaτ(ν),ν ˜Xaτ(ν),ν +������ +V −1 +nt += +����� +nt +� +ν=1 +� +I (ν ∈ Ψnt) +φaτ(ν),ν ++ I (ν /∈ Ψnt) +� +ηaτ(ν),ν ˜Xk,ν +����� +V −1 +nt += +����� +nt +� +ν=1 +� +I (ν ∈ Ψnt) +φaτ(ν),ν ++ I (ν /∈ Ψnt) +� +ηaτ(ν),νV −1/2 +nt +˜Xk,ν +����� +2 +, +by Lemma C.6, with probability at least 1 − δ, +∥St∥V −1 +t +≤ +����� +nt +� +ν=1 +� +I (ν ∈ Ψnt) +φaτ(ν),ν ++ I (ν /∈ Ψt) +� +ηaτ(ν),νV −1/2 +nt +˜Xk,ν +����� +2 +, +≤12σr +� +� +� +� +nt +� +ν=1 +���V −1/2 +nt +˜Xk,ν +��� +2 +2 log 4 +δ +≤12σr +� +Jd log 4 +δ , +where the last inequality holds because +nt +� +ν=1 +���V −1/2 +nt +˜Xk,ν +��� +2 +2 = +nt +� +ν=1 +˜X⊤ +k,νV −1 +nt +˜Xk,ν = Tr +� nt +� +ν=1 +˜X⊤ +k,νV −1 +nt +˜Xk,ν +� += Tr +� nt +� +ν=1 +˜Xk,ν ˜X⊤ +k,νV −1 +nt +� +≤ Tr +� +VntV −1 +nt +� += Jd. +With (19), the proof is completed + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +B.2. Proof of Theorem 4.2 +Proof. Similar to the proof of Theorem 4.1, the bound for consumption vector immediately follows from the bound for the +utilities. Therefore we provide the proof for the utility bound. +Step 1. Decomposition: +For each k ∈ [K] and j ∈ [J], +���u⋆(j) +k +− ˜u(j) +k,t+1 +��� ≤ +�����E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +x(j) +k,s +�⊤ �θ(j) +t +������ ++ +�����E +� +c(j) +k +� +− +� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) c(j) +k,s +������ +≤ +�����E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +x(j) +k,s +�⊤ +θ(j) +⋆ +������ ++ +�����E +� +c(j) +k +� +− +� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) c(j) +k,s +������ ++ +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +x(j) +k,s +�⊤ +θ(j) +⋆ +������ ++ +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +c(j) +k +� +− c(j) +k,s +������ ++ +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� . 
+Taking maximum over k ∈ [K] gives the decomposition, +max +k∈[K] +���u⋆(j) +k +− ˜u(j) +k,t+1 +��� ≤ max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +x(j) +k,s +�⊤ +θ(j) +⋆ +������ ++ max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +c(j) +k +� +− c(j) +k,s +������ ++ max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� . +(27) +Step 2. +Bounding the difference between expectation and empirical distribution: +The random variables +�� +x(j) +k,s +�⊤ +θ(j) +⋆ +: s ∈ [t] +� +and {c(j) +k,s : s ∈ [t]} are IID by Assumption 2. Using Lemma C.1, +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +x(j) +k,s +�⊤ +θ(j) +⋆ +������ += +1 +�t+1 +s=1 I (js = j) +����� +t+1 +� +s=1 +I (js = j) +� +E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +x(j) +k,s +�⊤ +θ(j) +⋆ +������ +≤ +4 +��t+1 +s=1 I (js = j) +� +log JKT, + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +and +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +c(j) +k +� +− c(j) +k,s +������ ≤ +4 +��t+1 +s=1 I (js = j) +� +log JKT +with probability at least 1 − 4(JKT)−1. By Lemma C.3, with probability at least 1 − (JT)−1, +t+1 +� +s=1 +I (js = j) ≥ 1 +2pj (t + 1) − 2 log JT ≥ 1 +4pj (t + 1) , +(28) +where the last inequality holds by the assumption t ≥ 8dα−1p−1 +min log JT. Summing up the probability bounds, with +probability at least 1 − 5T −1, +max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +x(j) +k +�⊤ +θ(j) +⋆ +− +� +x(j) +k,s +�⊤ +θ(j) +⋆ +������ ≤ +4 +��t+1 +s=1 I (js = j) +� +log JKT +≤ +8 +� +pj (t + 1) +� +log JKT, +max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +E +� +c(j) +k +� +− c(j) +k,s +������ ≤ +8 +� +pj (t + 1) +� +log JKT. +Plugging in the decomposition (27), for each j ∈ [J], +max +k∈[K] +���u⋆(j) +k +− ˜u(j) +k,t+1 +��� ≤ +16 +� +pj (t + 1) +� +log JKT ++ max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t−1 − θ(j) +⋆ +�⊤ +x(j) +k,s +����� . +Taking square and summing up over j ∈ [J] gives +J +� +j=1 +pj max +k∈[K] +���u⋆(j) +k +− ˜u(j) +k,t+1 +��� +2 +≤16J log JKT +t + 1 ++ +J +� +j=1 +pj max +k∈[K] +����� +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� +2 +, +(29) +Step 3. Bounding the prediction error: +By Cauchy-Schwartz inequality and (28), +J +� +j=1 +max +k∈[K] pj +1 +��t+1 +s=1 I (js = j) +�2 +����� +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� +2 +≤ +J +� +j=1 +max +k∈[K] pj +1 +�t+1 +s=1 I (js = j) +t+1 +� +s=1 +I (js = j) +�� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +�2 +≤ +J +� +j=1 +max +k∈[K] +4pj +pj (t + 1) +t+1 +� +s=1 +I (js = j) +�� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +�2 += +4 +(t + 1) +J +� +j=1 +max +k∈[K] +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +�t+1 +� +s=1 +I (js = j) x(j) +k,s +� +x(j) +k,s +�⊤ +� � +�θ(j) +t +− θ(j) +⋆ +� +≤ +4 +(t + 1) +J +� +j=1 +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +�t+1 +� +s=1 +K +� +k=1 +I (js = j) x(j) +k,s +� +x(j) +k,s +�⊤ +� � +�θ(j) +t +− θ(j) +⋆ +� += +4 +(t + 1) +� +�Θt − Θ⋆�⊤ +�t+1 +� +s=1 +K +� +k=1 +˜Xk,s ˜X⊤ +k,s +� � +�Θt − Θ⋆� +, +(30) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +where Θ⋆ := (θ(1) +⋆ , . . . , θ(J) +⋆ +)T ∈ RJ·d and +˜Xk,s := +� +� +� +� +� +0d +... +x(js) +k,s +0d +� +� +� +� +� ∈ RJ·d, +where the context x(js) +k,s is located after js − 1 of 0d vectors. 
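To make the stacking concrete, the following short numerical sketch (Python/NumPy; the sizes d, J, K, T and the context distribution are illustrative assumptions, not the paper's setting) constructs the class-blocked vectors ˜X_{k,s} just defined and accumulates the pooled Gram matrix Σ_s Σ_k ˜X_{k,s} ˜X_{k,s}^⊤ appearing in (30). Because each arrival's contexts occupy only the d coordinates of its own class, the pooled matrix is block diagonal, which is why one estimator over R^{J·d} can share observations across classes without mixing the class-specific parameters.

import numpy as np

rng = np.random.default_rng(0)
d, J, K, T = 3, 2, 4, 500                     # illustrative sizes, not from the paper

def stack_context(x, j, d, J):
    # Embed a d-dimensional context into R^{J*d}: the block of class j holds x, all other blocks are 0_d.
    X_tilde = np.zeros(J * d)
    X_tilde[j * d:(j + 1) * d] = x
    return X_tilde

V = np.zeros((J * d, J * d))                  # pooled Gram matrix over all arms and rounds
for s in range(T):
    j_s = rng.integers(J)                     # class of the arrival at round s
    for k in range(K):
        x = rng.uniform(0.0, 1.0, size=d) / np.sqrt(d)   # an assumed bounded context distribution
        X_tilde = stack_context(x, j_s, d, J)
        V += np.outer(X_tilde, X_tilde)

print(np.allclose(V[:d, d:], 0.0))            # True: cross-class blocks are exactly zero
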
We claim that +1 +t + 1 +t+1 +� +s=1 +K +� +k=1 +˜Xk,s ˜X⊤ +k,s ⪯ 2E +� +˜Xk,1 ˜X⊤ +k,1 +� +⪯ +7 +|Ψnt| +� +s∈Ψnt +K +� +k=1 +˜Xk,s ˜X⊤ +k,s, +(31) +with probability at least 1 − 2T −1. The matrix Xs := �K +k=1 ˜Xk,s ˜X⊤ +k,sis symmetric nonnegative definite which satisfies +λmax +� +1 +2dK Xs +� +≤ 1 +2. +By Lemma C.3, with probability at least 1 − T −1, +1 +2Kd +t+1 +� +s=1 +Xs ⪯ +3 +4Kd +t+1 +� +s=1 +E [Xs] + (log JdT) IJ·d, +which implies +1 +t + 1 +t+1 +� +s=1 +Xs ⪯ +3 +2 (t + 1) +t+1 +� +s=1 +E [Xs] + 2dK +t + 1 (log JdT) IJ·d. +(32) +By Assumption 3, for s ∈ [t + 1], +λmin(E [Xs]) =λmin +� +� +� +� +� +� +� +� +� +� +p1Exk∼F1 +��K +k=1 xkx⊤ +k +� +0 +0 +0 +... +0 +0 +0 +pJExk∼FJ +��K +k=1 xkx⊤ +k +� +� +� +� +� +� +� +� +� +� +� +≥λmin +� +� +� +� +� +� +p1KαId +0 +0 +0 +... +0 +0 +0 +pJKαId +� +� +� +� +� +� +≥Kpminα. +For t ≥ 8dα−1p−1 +min log JdT , +t+1 +� +s=1 +E [Xs] ⪰ +t+1 +� +s=1 +λmin(E [Xs]) IJ·d ⪰ (t + 1) KpminαIJ·d ⪰ 4dK +� +log Jd +δ +� +Plugging in (32) proves the first inequality of (31), +1 +t + 1 +t+1 +� +s=1 +Xs ⪯ +2 +(t + 1) +t+1 +� +s=1 +E [Xs] = 2E [X1] , +where the equality holds because EXs = EX1 for all s ∈ [T]. To prove the second inequality, +E [X1] = |Ψnt|−1 � +ν∈Ψnt +E +� +Xτ(ν) +� +, + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +and by Lemma C.3, with probability at least 1 − T −1, +1 +2Kd +� +ν∈Ψnt +Xτ(ν) ⪰ +1 +4Kd +� +ν∈Ψnt +E +� +Xτ(ν) +� +− (log JdT) IJ·d. +Rearranging the terms, +� +ν∈Ψnt +E +� +Xτ(ν) +� +⪯ 2 +� +ν∈Ψnt +Xτ(ν) + 4Kd (log JdT) IJ·d +(33) +By definition of Fnt, +� +ν∈Ψnt +Xτ(ν) =Fnt − +� +ν /∈Ψnt +Xτ(ν) − 16d(K − 1) log Jd +δ IJ·d +⪰Fnt − +� +Kd +��Ψc +nt +�� + 16d(K − 1) log Jd +δ +� +IJ·d. +(34) +Because the condition (8) holds, we can use (26) to have, +Kd +��Ψc +nt +�� ≤ +1 +12λmax +� +F −1 +nt +� = λmin(Fnt) +12 +, +(35) +and +Fnt − +� +Kd +��Ψc +nt +�� + 16d(K − 1) log Jd +δ +� +IJ·d ⪰Fnt − +� 1 +12λmin(Fnt) + 16d(K − 1) log Jd +δ +� +IJ·d +⪰ +�11 +12λmin(Fnt) − 16d(K − 1) log Jd +δ +� +IJ·d +⪰ +�11 +1224Kd log Jd +δ IJ·d − 16d(K − 1) log Jd +δ +� +IJ·d +⪰ +� +6dK log Jd +δ +� +IJ·d +⪰ {6dK log JdT} IJ·d, +(36) +where the third inequality holds by condition (8) and the last inequality holds by δ < T −1. Collecting the bounds (34) +and (36) +� +ν∈Ψnt +Xτ(ν) ⪰ Fnt − +� +Kd +��Ψc +nt +�� + 16d(K − 1) log Jd +δ +� +IJ·d ⪰ {6dK log JdT} IJ·d. +Plugging in (33), +� +ν∈Ψnt +E +� +Xτ(ν) +� +⪯ 2 +� +ν∈Ψnt +Xτ(ν) + 4Kd (log JdT) IJ·d ⪯ 7 +2 +� +ν∈Ψnt +Xτ(ν), +proves the second inequality in claim (20). From (30), +J +� +j=1 +max +k∈[K] pj +1 +��t+1 +s=1 I (js = j) +�2 +����� +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� +2 +≤ +4 +(t + 1) +� +�Θt − Θ⋆�⊤ +�t+1 +� +s=1 +K +� +k=1 +˜Xk,s ˜X⊤ +k,s +� � +�Θt − Θ⋆� +≤ +28 +|Ψnt| +� +�Θt − Θ⋆�⊤ +� +� +� +� +s∈Ψnt +K +� +k=1 +˜Xk,s ˜X⊤ +k,s +� +� +� +� +�Θt − Θ⋆� +≤ +28 +|Ψnt| +� +�Θt − Θ⋆�⊤ +{Vnt} +� +�Θt − Θ⋆� += +28 +|Ψnt| +����Θt − Θ⋆��� +2 +Vnt +(37) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +On bounding the normalizing matrix, the novel Gram matrix Vnt plays a crucial role. To obtain an upper bound for (37), we +need a matrix whose eigenvalue is greater than that of: +� +ν∈Ψnt +Xτ(ν) = +� +ν∈Ψnt +K +� +k=1 +˜Xk,τ(ν) ˜X⊤ +k,τ(ν), +(38) +However, with � +ν∈Ψnt ˜Xaτ(ν),τ(ν) ˜X⊤ +aτ(ν),τ(ν) , a Gram matrix consist of only selected contexts, we cannot bound the +matrix (38). 
Instead, by using a Gram matrix Vt, we can bound (38) as, +� +ν∈Ψnt +Xτ(ν) = +� +ν∈Ψnt +K +� +k=1 +˜Xk,τ(ν) ˜X⊤ +k,τ(ν) +⪯ +� +ν∈Ψnt +K +� +k=1 +˜Xk,τ(ν) ˜X⊤ +k,τ(ν) + +� +ν /∈Ψnt +˜Xaτ(ν),τ(ν) ˜X⊤ +aτ(ν),τ(ν) +⪯Vnt, +and prove the bound (37) to relate the prediction error to the self-normalized bound. From (37), by Theorem 4.1 +J +� +j=1 +max +k∈[K] pj +1 +��t+1 +s=1 I (js = j) +�2 +����� +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� +2 +≤ 28 +|Ψnt| +����Θt − Θ⋆��� +2 +Vnt +≤ 28 +|Ψnt|βσr(δ)2, +with probability at least 1 − 4(m + 1)δ. Because |Ψnt| + +��Ψc +nt +�� = nt and (35) holds, +|Ψnt| ≥ nt − +��Ψc +nt +�� ≥ nt − λmin(Fnt) +12Kd +≥ nt − ntKd +12Kd = 11 +12nt. +Thus, +J +� +j=1 +max +k∈[K] pj +1 +��t+1 +s=1 I (js = j) +�2 +����� +t+1 +� +s=1 +I (js = j) +� +�θ(j) +t +− θ(j) +⋆ +�⊤ +x(j) +k,s +����� +2 +≤ 28 +|Ψnt|βσr(δ)2 +≤12 +11 · 28 +nt +βσr(δ)2 +≤32 +nt +βσr(δ)2. +From (29), +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���u⋆(j) +k +− ˜u(j) +k,t+1 +��� +2 +≤ 16√J log JKT +√ +t ++ 4 +√ +2βσr(δ) +√nt +and the proof is completed. +B.3. Proof of Lemma 4.3 +Proof. Suppose a feasible policy ˜π(j) +k,t for the optimization problem (1) satisfies +K+1 +� +k=1 +˜π(jt) +k,t ˜u(jt) +k,t > +K+1 +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t , +which is equivalent to +K+1 +� +l=1 +˜π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t > +K+1 +� +l=1 +�π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t. +(39) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Without loss of generality we assume �u(jt) +k⟨l⟩,t ≥ 0 (Because �K+1 +l=1 ˜π(jt) +k⟨l⟩,t = �K+1 +l=1 �π(jt) +k⟨l⟩,t = 1, we can subtract �u(jt) +k⟨K+1⟩,t +on both side of (39)). By the constraints on the resources, +˜π(jt) +k⟨1⟩,t ≤ +� +� min +r∈[m] +ρt(r) +b(jt) +k⟨1⟩,t(r) +� +� ∧ 1 = �π(jt) +k⟨1⟩,t +Suppose ˜π(jt) +k⟨1⟩,t < �π(jt) +k⟨1⟩,t. Because �K+1 +l=1 ˜π(jt) +k⟨l⟩,t = �K+1 +l=1 �π(jt) +k⟨l⟩,t = 1, by Lemma C.2, +K+1 +� +l=1 +˜π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t ≤ +K+1 +� +l=1 +�π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t, +which contradicts with (39). Thus we have ˜π(jt) +k⟨1⟩,t = �π(jt) +k⟨1⟩,t and +K+1 +� +l=2 +˜π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t > +K+1 +� +l=2 +�π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t. +(40) +Again, by the constraints on the resources,˜π(jt) +k⟨2⟩,t ≤ �π(jt) +k⟨2⟩,t. Suppose ˜π(jt) +k⟨2⟩,t < �π(jt) +k⟨2⟩,t. Because �K+1 +l=2 ˜π(jt) +k⟨l⟩,t = +�K+1 +l=2 �π(jt) +k⟨l⟩,t, by Lemma C.2, +K+1 +� +l=2 +˜π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t ≤ +K+1 +� +l=2 +�π(jt) +k⟨l⟩,t˜u(jt) +k⟨l⟩,t. +which contradicts with (40). Thus we have ˜π(jt) +k⟨2⟩,t = �π(jt) +k⟨2⟩,t Recursively, we have ˜π(jt) +k⟨l⟩,t = �π(jt) +k⟨l⟩,t for all l ∈ [K + 1]. Thus +there exist no feasible solution ˜π(j) +k,t such that (39) holds and the proof is completed. +B.4. Proof of Lemma 5.2 +Proof. For each t ∈ [T], denote the good events Gt := Et ∩ Mt−1. +Step 1. Bounds for the estimates ˜u(jt) +k,t and ˜b(jt) +k,t : +For each t ∈ [T] and k ∈ [K], +˜u(jt) +k,t = ˜u(jt) +k,t − u⋆(jt) +k ++ u⋆(jt) +k +=γt−1,σr(δ) +√pjt ++ �u(jt) +k,t − u⋆(jt) +k ++ u⋆(jt) +k +≥ +γt−1,σr(δ) − √pjt maxk∈[K] +����u(jt) +k,t − u⋆(jt) +k +��� +√pjt ++ u⋆(jt) +k +. +Under the event Gt, +√pjt max +k∈[K] +���u⋆(jt) +k +− �u(jt) +k,t +��� = +� +pjt max +k∈[K] +���u⋆(jt) +k +− �u(jt) +k,t +��� +2 +≤ +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���u⋆(j) +k +− �u(j) +k,t +��� +2 +≤γt−1,σr(δ), +which implies +˜u(jt) +k,t ≥ u⋆(jt) +k +. +(41) +Similarly, +˜b(jt) +k,t ≤ b⋆(jt) +k +. 
+(42) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Another useful bound for ˜u(jt) +k,t is +E +�� +t∈U +K +� +k=1 +�π(jt) +k,t +���˜u(jt) +k,t − u⋆(jt) +k +��� I (Gt) +� +≤ 2γt−1,σr(δ) +� +E [I (at ∈ [K])]. +(43) +This bound is proved by the tower property of conditional expectation and Cauchy-Schwartz inequality, +E +� K +� +k=1 +�π(jt) +k,t +���˜u(jt) +k,t − u⋆(jt) +k +��� I (Gt) +� +=E +� +max +k∈[K] +���˜u(jt) +k,t − u⋆(jt) +k +��� +K +� +k=1 +�π(jt) +k,t I (Gt) +� +=E +� +max +k∈[K] +���˜u(jt) +k,t − u⋆(jt) +k +��� I (at ∈ [K]) I (Gt) +� +=E +� +� +J +� +j=1 +pj max +k∈[K] +���˜u(j) +k,t − u⋆(j) +k +��� I (at ∈ [K]) I (Gt) +� +� +≤E +� +� +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���˜u(j) +k,t − u⋆(j) +k +��� +2 +� +� +� +� +J +� +j=1 +pjI (at ∈ [K])I (Gt) +� +� +By definition of ˜u(j) +k,t and triangular inequality for ℓ2-norm, +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���˜u(j) +k,t − u⋆(j) +k +��� +2 +I (Gt) = +� +� +� +� +J +� +j=1 +pj max +k∈[K] +�����u(j) +k,t − u⋆(j) +k ++ γt−1,σr(δ) +√pj +���� +2 +I (Gt) +≤ +� +� +� +� +� +� +J +� +j=1 +pj max +k∈[K] +����u(j) +k,t − u⋆(j) +k +��� +2 ++ +� +� +� +� +J +� +j=1 +pj +�γt−1,σr(δ) +√pj +�2 +� +� I (Gt) +≤2γt−1,σr(δ)I (Gt) +≤2γt−1,σr(δ). +Then by Jensen’s inequality, +E +� K +� +k=1 +�π(jt) +k,t +���˜u(jt) +k,t − u⋆(jt) +k +��� I (Gt) +� +≤E +� +� +� +� +� +� +J +� +j=1 +pj max +k∈[K] +���˜u(j) +k,t − u⋆(j) +k +��� +2 +� +� +� +� +J +� +j=1 +pjI (at ∈ [K])I (Gt) +� +� +≤2γt−1,σr(δ)E +� +� +� +� +� +� +J +� +j=1 +pjI (at ∈ [K]) +� +� +≤2γt−1,σr(δ) +� +� +� +� +�E +� +� +J +� +j=1 +pjI (at ∈ [K]) +� +� +=2γt−1,σr(δ) +� +E [I (at ∈ [K])], +which proves (43). Similarly, +E +� K +� +k=1 +�π(jt) +k,t +���˜b(jt) +k,t − b⋆(jt) +k +��� +∞ I (Gt) +� +≤ 2γt−1,σb(δ) +� +E [I (at ∈ [K])] +(44) +Step 2. Reward decomposition: +Let τ be the stopping time of the algorithm and let U := {t ∈ [τ] : ρt > 0}. Then for +t /∈ U, the allocated resource is ρt ∨ 0 = 0 and the algorithm skips the round. Thus, +E +� T +� +t=1 +R�π +t +� +=E +�� +t∈U +R�π +t +� +. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Then, the reward is decomposed as +E +�� +t∈U +R�π +t +� +=E +�� +t∈U +R�π +t I (Gt) +� ++ E +�� +t∈U +R�π +t I (Gc +t ) +� +≥E +�� +t∈U +K +� +k=1 +�π(jt) +k,t u⋆(jt) +k +I (Gt) +� +− +T +� +t=1 +P (Gc +t ) +≥E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� +− E +�� +t∈U +K +� +k=1 +�π(jt) +k,t +���˜u(jt) +k,t − u⋆(jt) +k +��� I (Gt) +� +− +T +� +t=1 +P (Gc +t ) +≥E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� +− +T +� +t=1 +E +� K +� +k=1 +�π(jt) +k,t +���˜u(jt) +k,t − u⋆(jt) +k +��� I (Gt) +� +− +T +� +t=1 +P (Gc +t ) . +By the bound (43), +T +� +t=1 +E +� K +� +k=1 +�π(jt) +k,t +���˜u(jt) +k,t − u⋆(jt) +k +��� I (Gt) +� +≤2 +T +� +t=1 +γt−1,σr(δ) +� +E [I (at ∈ [K])] +≤2 +� +� +� +�T +T +� +t=1 +γt−1,σr(δ)2E [I (at ∈ [K])] +=2 +� +� +� +�TE +� T +� +t=1 +γt−1,σr(δ)2I (at ∈ [K]) +� +where the last ineqaulity holds by Cauchy-Schwartz inequality. Thus, the reward is decomposed as +E +� T +� +t=1 +R�π +t +� +=E +�� +t∈U +R�π +t +� +≥E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� +− 2 +� +� +� +�TE +� T +� +t=1 +γt−1,σr(δ)2I (at ∈ [K]) +� +− +T +� +t=1 +P (Gc +t ) +(45) +Step 3. A lower bound for ρt: +Denote u1 < u2 < . . . < u|U| the indexes in U. For s /∈ U, we have ρs = 0m and +b(js) +as,s = 0m. Thus for ν ∈ [|U| − 1], +ρuν+1 = uν+1ρ − +uν+1−1 +� +s=1 +b(js) +as,s = uν+1ρ − +uν +� +s=1 +b(js) +as,s. 
+(46) +By the resource constrain at round uν, +K +� +k=1 +�π(juν ) +k,uν ˜b(juν ) +k,uν ≤uνρ − +uν−1 +� +s=1 +b(js) +as,s +=uνρ + b(juν ) +auν ,uν − +uν +� +s=1 +b(js) +as,s. +Plugging in (46), +ρuν+1 ≥ (uν+1 − uν) ρ − b(juν ) +auν ,uν + +K +� +k=1 +�π(juν ) +k,uν ˜b(juν ) +k,uν +≥ (uν+1 − uν) ρ − b(juν ) +auν ,uν + +K +� +k=1 +�π(juν ) +k,uν b⋆(juν ) +k ++ +K +� +k=1 +�π(juν ) +k,uν +� +˜b(juν ) +k,uν − b⋆(juν ) +k +� +. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Taking conditional expectation on both sides gives +E +� +ρuν+1 +�� juν+1 +� +≥E +� +uν+1 − uν| juν+1 +� +ρ + E +� +−b(juν ) +auν ,uν + +K +� +k=1 +�π(juν ) +k,uν b⋆(juν ) +k +����� juν+1 +� ++ E +� K +� +k=1 +�π(juν ) +k,uν +� +˜b(juν ) +k,uν − b⋆(juν ) +k +������ juν+1 +� +=E +� +uν+1 − uν| juν+1 +� +ρ + E +� K +� +k=1 +�π(juν ) +k,uν +� +˜b(juν ) +k,uν − b⋆(juν ) +k +�� +≥E +� +uν+1 − uν| juν+1 +� +ρ + E +� K +� +k=1 +�π(juν ) +k,uν +� +˜b(juν ) +k,uν − b⋆(juν ) +k +� +I (Guν) +� +− P +� +Gc +uν +� +1m, +where the equality holds by Assumption 1 and +E +�� +−b(juν ) +auν ,uν + +K +� +k=1 +�π(juν ) +k,uν b⋆(juν ) +k +�� +=E +�� +−b⋆(juν ) +auν ++ +K +� +k=1 +�π(juν ) +k,uν b⋆(juν ) +k +�� +=E +�� +− +K +� +k=1 +�π(juν ) +k,uν b⋆(juν ) +k ++ +K +� +k=1 +�π(juν ) +k,uν b⋆(juν ) +k +�� +=0. +For the last term, by the bound (44), +E +� K +� +k=1 +�π(juν ) +k,uν +� +˜b(juν ) +k,uν − b⋆(juν ) +k +� +I (Guν) +� +≥ − E +� K +� +k=1 +�π(juν ) +k,uν +���˜b(juν ) +k,uν − b⋆(juν ) +k +��� +∞ I (Guν) +� +1m +≥ − E +� +2γuν−1,σb(δ) +� +E [I (auν ∈ [K])| uν] +� +1m. +Thus we obtain a lower bound, +E +� +ρuν+1 +�� juν+1 +� +≥E +� +uν+1 − uν| juν+1 +� +ρ − P +� +Gc +uν +� +1m +− 2E +� +γuν−1,σb(δ) +� +E [I (auν ∈ [K])| uν] +� +1m. +(47) +Step 4. An upper bound for OPT +In the optimization problem (1), all constraints are linear with respect to the variable +and there exist a feasible solution. Thus the problem satisfies the Slater’s condition and strong duality (Boyd et al., 2004). +Then, +OPT +T += max +π(j) +k +min +λ∈Rm ++ +min +µ(j)≥0 min +ν(j) +k +≥0 +L +� +π(j) +k , λ, µ(j), ν(j) +k +� +, +where L is the Lagrangian function: +L +� +π(j) +k , λ, µ(j), ν(j) +k +� +:= +J +� +j=1 +K +� +k=1 +pjπ(j) +k u⋆(j) +k ++ +� +�ρ − +J +� +j=1 +K +� +k=1 +pjπ(j) +k b⋆(j) +k +� +� +⊤ +λ ++ +J +� +j=1 +µ(j) +� +1 − +K +� +k=1 +π(j) +k,1 +� ++ +J +� +j=1 +K +� +k=1 +ν(j) +k π(j) +k,1. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Minimizing over µ(j) and ν(j) +k +gives +min +µ(j) +t +≥0 +min +ν(j) +k,t≥0 +L +� +π(j) +k , λ, µ(j), ν(j) +k +� += +� +� +� +�J +j=1 +�K +k=1 pjπ(j) +k u⋆(j) +k ++ +� +ρ − �J +j=1 +�K +k=1 pjπ(j) +k b⋆(j) +k +�⊤ +λ +�K +k=1 π(j) +k +≤ 1, π(j) +k +≥ 0 +−∞ +o.w. +, +which implies +OPT +T += max +π(j) +k +min +λ∈Rm ++ +min +µ(j) +t +≥0 +min +ν(j) +k,t≥0 +L +� +π(j) +k , λ, µ(j), ν(j) +k +� +≤ +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0 +min +λ∈Rm ++ +J +� +j=1 +K +� +k=1 +pjπ(j) +k u⋆(j) +k ++ +� +�ρ − +J +� +j=1 +K +� +k=1 +pjπ(j) +k b⋆(j) +k +� +� +⊤ +λ += +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0 +min +λ∈Rm ++ +J +� +j=1 +pj +� +� +� +K +� +k=1 +π(j) +k u⋆(j) +k ++ +� +ρ − +K +� +k=1 +π(j) +k b⋆(j) +k +�⊤ +λ +� +� +� +≤ min +λ∈Rm ++ +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0 +J +� +j=1 +pj +� +� +� +K +� +k=1 +π(j) +k u⋆(j) +k ++ +� +ρ − +K +� +k=1 +π(j) +k b⋆(j) +k +�⊤ +λ +� +� +� +≤ min +λ∈Rm ++ +J +� +j=1 +pj +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0 +� +� +� +K +� +k=1 +π(j) +k u⋆(j) +k ++ +� +ρ − +K +� +k=1 +π(j) +k b⋆(j) +k +�⊤ +λ +� +� +� . 
+Let {¯π(j) +k +: j ∈ [J], k ∈ [K]} be the maximizer. If ρ − �K +k=1 ¯π(j) +k b⋆(j) +k +is negative for some element and j ∈ [J], then the +optimal value becomes −∞. Thus +OPT +T +≤ min +λ∈Rm ++ +J +� +j=1 +pj +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0 +� +� +� +K +� +k=1 +π(j) +k u⋆(j) +k ++ +� +ρ − +K +� +k=1 +π(j) +k b⋆(j) +k +�⊤ +λ +� +� +� += min +λ∈Rm ++ +J +� +j=1 +pj +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0,ρ−�K +k=1 π(j) +k +b⋆(j) +k +≥0 +� +� +� +K +� +k=1 +π(j) +k u⋆(j) +k ++ +� +ρ − +K +� +k=1 +π(j) +k b⋆(j) +k +�⊤ +λ +� +� +� += +J +� +j=1 +pj +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0,ρ−�K +k=1 π(j) +k +b⋆(j) +k +≥0 +� K +� +k=1 +π(j) +k u⋆(j) +k +� +. +For each j ∈ [J] and v ∈ Rm ++, let ˜π(j) +k,v be the solution to the optimization problem: +max +π(j) +k,v +K +� +k=1 +π(j) +k,vu⋆(j) +k +s.t. +K +� +k=1 +π(j) +k,vb⋆(j) +k +≤ v. +(48) +Then, +OPT +T +≤ +J +� +j=1 +pj +max +�K +k=1 π(j) +k +≤1,π(j) +k +≥0,ρ−�K +k=1 π(j) +k +b⋆(j) +k +≥0 +� K +� +k=1 +π(j) +k u⋆(j) +k +� += +J +� +j=1 +pj +K +� +k=1 +˜π(j) +k,ρu⋆(j) +k +. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +For each ν ∈ [|U| − 1], +E +� +(uν+1 − uν) OPT +T +� +≤E +� +�(uν+1 − uν) +J +� +j=1 +pj +K +� +k=1 +˜π(j) +k,ρu⋆(j) +k +� +� +=E +� +(uν+1 − uν) +K +� +k=1 +˜π +(juν+1) +k,ρ +u +⋆(juν+1) +k +� +In (48), all constraints are linear with respect to the variable and there exist a feasible solution. Thus the problem satisfies +the Slater’s condition and strong duality (Boyd et al., 2004). The dual problem of (48) is +min +λ(j) +v ∈Rm ++ +v⊤λ(j) +v +s.t. +� +b⋆(j) +k +�⊤ +λ(j) +v +≥ u⋆(j) +k +, +∀k ∈ [K]. +(49) +Let ˜λ(j) +v +be the solution to (49). By strong duality, for each ν ∈ [|U| − 1], +E +� +(uν+1 − uν) +K +� +k=1 +˜π +(juν+1) +k,ρ +u +⋆(juν+1) +k +� += E +� +(uν+1 − uν) ρ⊤˜λ +(juν+1) +ρ +� += E +� +E +� +(uν+1 − uν) ρ| juν+1 +�⊤ ˜λ +(juν+1) +ρ +� += E +�� +P +� +Gc +uν +� ++ 2E +�� +E [I (auν ∈ [K])| uν]γuν−1,λ(σb) +�� +1⊤ +m˜λ +(juν+1) +ρ +� ++ E +�� +E +� +(uν+1 − uν) ρ| juν+1 +� +− P +� +Gc +uν +� +− 2E +�� +E [I (auν ∈ [K])| uν]γuν−1,σb(δ) +�� +1⊤ +m˜λ +(juν+1) +ρ +� +. +(50) +For the first term, we observe the dual problem of (1), +min +λ∈Rm ++ +ρ1⊤ +mλ +s.t.λ⊤b⋆(j) +k +≥ u⋆(j) +k +, ∀j ∈ [J], ∀k ∈ [K]. +(51) +Comparing to the dual problem (49), when v = ρ1m and j = juν+1, (51) has more constraints than (49) with same objective +function. Denote λ⋆ be the solution to (51). Then, +ρ1⊤ +m˜λ +(juν+1) +ρ1m +≤ ρ1⊤ +mλ⋆ = OPT +T +, +where the last equality holds by strong duality for the oracle problem (1). Thus the first term in (50) is bounded as +E +�� +P +� +Gc +uν +� ++ 2E +�� +E [I (auν ∈ [K])| uν]γuν−1,λ(σb) +�� +1⊤ +m˜λ +(juν+1) +ρ1m +� +≤ +� +P +� +Gc +uν +� ++ 2E +�� +E [I (auν ∈ [K])| uν]γuν−1,λ(σb) +�� OPT +ρT . +For the second term in (50), we observe that ˜λ(juν +1) +E[ ρuν+1|juν+1] is a feasible solution to (49) when v = ρ1m and j = juν+1. +Thus +E +�� +E +� +(uν+1 − uν) ρ| juν+1 +� +−2E +�� +E [I (auν ∈[K])| uν]γuν−1,λ(σb) +� +−P +� +Gc +uν +�� +1⊤ +m˜λ +(juν+1) +ρ1m +� +≤E +��� +E +� +(uν+1 − uν) ρ| juν+1 +� +−2E +�� +E [I (auν ∈[K])| uν]γuν−1,λ(σb) +� +−P +� +Gc +uν +�� +∨ 0 +� +1⊤ +m˜λ +(juν+1) +ρ1m +� +≤E +��� +E +� +(uν+1 − uν) ρ| juν+1 +� +−2E +�� +E [I (auν ∈[K])| uν]γuν−1,λ(σb) +� +−P +� +Gc +uν +�� +∨ 0 +� +1⊤ +m˜λ(juν +1) +E[ ρuν+1|juν+1] +� +≤E +�� +E +� +ρuν+1 +�� juν+1 +� +∨ 0m +�⊤ ˜λ(juν +1) +E[ ρuν+1|juν+1] +� +, + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +where the last inequality holds by (47). 
Because uν+1 ∈ U, we have ρuν+1 > 0 and +E +�� +E +� +ρuν+1 +�� juν+1 +� +∨ 0m +�⊤ ˜λ(juν +1) +E[ ρuν+1|juν+1] +� +≤E +� +E +� +ρuν+1 ∨ 0m +�� juν+1 +�⊤ ˜λ(juν +1) +E[ ρuν+1|juν+1] +� +=E +� +E +� +ρuν+1 +�� juν+1 +�⊤ ˜λ(juν +1) +E[ ρuν+1|juν+1] +� +. +Collecting the bounds, we have +E +� +(uν+1 − uν) OPT +T +� +≤E +� +E +� +ρuν+1 +�� juν+1 +�⊤ ˜λ(juν +1) +E[ ρuν+1|juν+1] +� ++ +� +2E +�� +E [I (auν ∈[K])| u��]γuν−1,σb(δ) +� ++ P +� +Gc +uν +�� OPT +ρT . +Similar to Step 4, by strong duality, +E +� +ρuν+1 +�� juν+1 +�⊤ ˜λ(juν +1) +E[ ρuν+1|juν+1] += +max +�K +k=1 π +(juν +1) +k +≤1,π +(juν +1) +k +≥0 +min +λ∈Rm ++ +K +� +k=1 +π(juν +1) +k +u⋆(juν +1) +k ++ +� +E +� +ρuν+1 +�� juν+1 +� +− +K +� +k=1 +π(juν +1) +k +b⋆(juν +1) +k +�⊤ +λ +≤ min +λ∈Rm ++ +max +�K +k=1 π +(juν +1) +k +≤1,π +(juν +1) +k +≥0 +K +� +k=1 +π(juν +1) +k +u⋆(juν +1) +k ++ +� +E +� +ρuν+1 +�� juν+1 +� +− +K +� +k=1 +π(juν +1) +k +b⋆(juν +1) +k +�⊤ +λ +≤ min +λ∈Rm ++ +E +� +� +max +�K +k=1 π +(juν +1) +k +≤1,π +(juν +1) +k +≥0 +K +� +k=1 +π(juν +1) +k +u⋆(juν +1) +k ++ +� +ρuν+1 − +K +� +k=1 +π(juν +1) +k +b⋆(juν +1) +k +�⊤ +λ +������ +juν+1 +� +� +≤ E +� +� +max +�K +k=1 π +(juν +1) +k +≤1,π +(juν +1) +k +≥0,ρuν+1−�K +k=1 π +(juν +1) +k +b +⋆(juν +1) +k +≥0 +K +� +k=1 +π(juν +1) +k +u⋆(juν +1) +k +������ +juν+1 +� +� += +K +� +k=1 +˜π +(juν+1) +k,ρuν+1 u⋆(juν +1) +k +. +Thus we have +E +� +(uν+1 − uν) OPT +T +� +≤E +� K +� +k=1 +˜π +(juν+1) +k,ρuν+1 u⋆(juν +1) +k +� ++ +� +2E +�� +E [I (auν ∈[K])| uν]γuν−1,σb(δ) +� ++ P +� +Gc +uν +�� OPT +ρT . +Under the event Guν+1, the policy ˜π +(juν+1) +k,ρuν+1 is a feasible solution to the bandit problem (12), +E +� K +� +k=1 +˜π +(juν+1) +k,ρuν+1 u⋆(juν +1) +k +� +≤E +� K +� +k=1 +˜π +(juν+1) +k,ρuν+1 u⋆(juν +1) +k +I +� +Guν+1 +� +� ++ P +� +Gc +uν+1 +� +≤E +� K +� +k=1 +˜π +(juν+1) +k,ρuν+1 ˜u +(juν+1) +k,uν+1 I +� +Guν+1 +� +� ++ P +� +Gc +uν+1 +� +≤E +� K +� +k=1 +�π +(juν+1) +k,uν+1 ˜u +(juν+1) +k,uν+1 I +� +Guν+1 +� +� ++ P +� +Gc +uν+1 +� +. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Thus, for each ν ∈ [|U| − 1], +E +� +(uν+1 − uν) OPT +T +� +≤E +� K +� +k=1 +�π +(juν+1) +k,uν+1 ˜u +(juν+1) +k,uν+1 I +� +Guν+1 +� +� ++ P +� +Gc +uν+1 +� ++ +� +2E +�� +E [I (auν ∈[K])| uν]γuν−1,σb(δ) +� ++ P +� +Gc +uν +�� OPT +ρT . 
+Summing up over ν, +E +� +� +|U|−1 +� +ν=1 +(uν+1 − uν) OPT +T +� +� ≤E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� ++ +� +1 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) ++ +� +� +|U|−1 +� +ν=1 +2E +�� +E [I (auν ∈[K])| uν]γuν−1,σb(δ) +� +� +� OPT +ρT +≤E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� ++ +� +1 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) ++ 2 +� T +� +t=1 +E +�� +E [I (at ∈[K])]γt−1,σb(δ) +�� +OPT +ρT +≤E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� ++ +� +1 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) ++ 2 +� +� +� +�TE +� T +� +t=1 +γt−1,σb(δ)2I (at ∈ [K]) +� +OPT +ρT , +where the last inequality holds by Cauchy-Schwartz inequality, By (45), +E +� +� +|U|−1 +� +ν=1 +(uν+1 − uν) OPT +T +� +� ≤E +�� +t∈U +K +� +k=1 +�π(jt) +k,t ˜u(jt) +k,t I (Gt) +� ++ +� +1 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) ++ 2 +� +� +� +�TE +� T +� +t=1 +γt−1,σb(δ)2I (at ∈ [K]) +� +OPT +ρT +≤E +� T +� +t=1 +R�π +t +� ++ +� +2 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) ++ 2 +� +� +� +�TE +� T +� +t=1 +γt−1,σb(δ)2I (at ∈ [K]) +� +OPT +ρT +2 +� +� +� +�TE +� T +� +t=1 +γt−1,σr(δ)2I (at ∈ [K]) +� +≤E +� T +� +t=1 +R�π +t +� ++ +� +2 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) +2 +� +1 + OPT +ρT +� � +� +� +�TE +� T +� +t=1 +γt−1,σb∨σr(δ)2I (at ∈ [K]) +� + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Because the last choice of the algorithm happens at round τ, we have ρτ > 0 and u|U| = τ. And by definition, u1 = ξ. Thus +E +� +� +|U|−1 +� +ν=1 +(uν+1 − uν) OPT +T +� +� = E +�� +u|U| − u1 +� OPT +T +� += OPT +T +E [τ − ξ] . +Rearranging the terms +E +� T +� +t=1 +R�π +t +� +≥ E +� +� +|U|−1 +� +ν=1 +(uν+1 − uν) OPT +T +� +� − +� +2 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) +− 2 +� +1 + OPT +ρT +� � +� +� +�TE +� T +� +t=1 +γt−1,σb∨σr(δ)2I (at ∈ [K]) +� +≥ OPT +T +E [τ − ξ] − +� +2 + OPT +ρT +� +T +� +t=1 +P (Gc +t ) +− 2 +� +1 + OPT +ρT +� � +� +� +�TE +� T +� +t=1 +γt−1,σb∨σr(δ)2I (at ∈ [K]) +� +, +completes the proof. +B.5. Proof of Lemma 5.3 +Proof. Let us fix δ ∈ (0, T −2) throughout the proof. +Step 1. Bounding the minimum eigenvalue of {Fν : ν ∈ [nT ]}: +By Lemma C.3, with probability at least 1 − Tδ, +1 +2KdFν = +1 +2Kd +ν +� +u=1 +˜Xk,τ(u) ˜X⊤ +k,τ(u) + 8K − 1 +K +log Jd +δ +⪰ +1 +4Kd +ν +� +u=1 +E +�� +˜Xk,τ(u) ˜X⊤ +k,τ(u) +��� Hτ(u)−1 +� ++ 8K − 1 +K +log Jd +δ − log Jd +δ +⪰ +1 +4Kd +ν +� +u=1 +E +� +˜Xk,τ(u) ˜X⊤ +k,τ(u) +��� Hτ(u)−1 +� +, +for all ν ∈ [nT ]. By Assumption 2 and 3, +λmin +� +E +� +˜Xk,τ(u) ˜X⊤ +k,τ(u) +��� Hτ(u)−1 +�� +=λmin +� +� +� +� +� +p1EXk∼F1 +��K +k=1 XkX⊤ +k +� +0 +0 +0 +... +0 +0 +0 +pJExk∼FJ +��K +k=1 XkX⊤ +k +� +� +� +� +� +� +≥pmin min +j∈[J] λmin +� +EXk∼Fj +� K +� +k=1 +XkX⊤ +k +�� +≥pminKα. +Thus, with probability at least 1 − Tδ, +λmin(Fν) ≥1 +2λmin +� ν +� +u=1 +E +� +˜Xk,u ˜X⊤ +k,u +��� Hu−1 +�� +≥1 +2 +ν +� +u=1 +λmin +� +E +� +˜Xk,u ˜X⊤ +k,u +��� Hu−1 +�� +≥pminKαν +2 +, + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +for all ν ∈ [nT ]. +Step 2. Bounding the probability of Mt: +Under the event proved in Step 1, the event Mt is implied by +pminKαnt +2 +≥ 12Kd +� nt +� +ν=1 +96 (K − 1) log +� Jd +δ +� +αKpminν ++ 2 log Jd +δ +� +, +(52) +for all t ∈ [T]. The left hand side is bounded as +12Kd +� nt +� +ν=1 +96 (K − 1) log +� Jd +δ +� +αKpminν ++ 2 log Jd +δ +� +≤12 · 96Kd log +� Jd +δ +� +log nt +αpmin ++ 24Kd log Jd +δ +≤12 · 96Kd log +� Jd +δ +� +log T +αpmin ++ 24Kd log Jd +δ . 
+Plugging in (52) and rearranging the terms, +nt ≥ 96d log +�Jd +δ +� �24 log T +α2p2 +min ++ +1 +αpmin +� +, +implies the event Mt for all t ∈ [T] with probability at least 1 − Tδ. In other words, +P (Mc +t) ≤ P +� +nt < dMα,p,T log +�Jd +δ +�� ++ Tδ, +for all t ∈ [T], where Mα,p,T := 96 +� +24 log T +α2p2 +min + +1 +αpmin +� +. +Step 3. Bounding ξ: +Let ˜t = inft∈[T ]{Mt happens} be the first round that Mt happens. After round ˜t, the algorithm +skips the rounds until ρt > 0 holds and then pulls an action according to the policy. Thus, for the round ξ − 1, +(ξ − 1) ρ − +ξ−2 +� +s=1 +b(js) +as,s = (ξ − 1) ρ − +˜t +� +s=1 +b(js) +as,s ≤ 0. +Rearraging the terms, and taking expectation, +E [ξ] ≤ 1 + ρ−1E +� +� +˜t +� +s=1 +b(js) +as,s +� +� ≤ 1 + ρ−1E +�˜t +� +. +(53) +Now we need an upper bound for ˜t. For t ∈ [˜t − 1], the event Mt does not happen and the algorithm admits the arrival for +t ∈ [˜t]. Thus, nt = t for all t ∈ [˜t]. For t = ˜t − 1, the event M˜t−1 does not happen and +λmin +� +Fn˜t−1 +� +≤ 12Kd +�n˜t−1 +� +ν=1 +48 (K − 1) log +� Jd +δ +� +λmin(Fν) ++ 2 log Jd +δ +� +. +By the fact proved in Step 1, with probability at least 1 − Tδ, +pminKαn˜t−1 +2 +≤ 12Kd +�n˜t−1 +� +ν=1 +96 (K − 1) log +� Jd +δ +� +pminKαν ++ 2 log Jd +δ +� +. +Plugging in n˜t−1 = ˜t − 1 and rearranging the terms, +˜t − 1 ≤ 24d +pminα +� +� +� +˜t−1 +� +ν=1 +96 (K − 1) log +� Jd +δ +� +pminKαν ++ 2 log Jd +δ +� +� +� +≤ 24d +pminα +�96 log T +pminα + 2 log Jd +δ +� +. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Then with probability at least 1 − Tδ, +˜t ≤1 + 96d log +�Jd +δ +� �24 log T +α2p2 +min ++ +1 +αpmin +� +:=1 + Mα,p,T d log +�Jd +δ +� +. +Thus, +E +�˜t +� +=E +� +˜tI +� +˜t < 1 + Mα,p,T d log +�Jd +δ +��� ++ E +� +˜tI +� +˜t ≥ 1 + Mα,p,T d log +�Jd +δ +��� +≤1 + dMα,p,T log +�Jd +δ +� ++ TP +� +˜t ≥ 1 + Mα,p,T d log +�Jd +δ +�� +≤1 + dMα,p,T log +�Jd +δ +� ++ T 2δ +Plugging in (53), +E [ξ] ≤1 + ρ−1E +�˜t +� +≤1 + 1 + dMα,p,T log +� Jd +δ +� ++ T 2δ +ρ +. +Step 4. Proving the lower bound for τ: +Let τ be the stopping time of the algorithm. Because the algorithm admits +arrival at round τ, we have ρτ > 0. From the resource constraint in the bandit problem (12), +K +� +k=1 +�π(jτ ) +k,τ +� +�b(jτ ) +k,τ − γτ−1,σb(δ) +√pjτ +1m +� +:= +K +� +k=1 +�π(jτ ) +k,τ ˜b(jτ ) +k,τ :≤ τρ − +τ−1 +� +s=1 +b(js) +as,s +Because algorithm stops at round τ, there exists an r ∈ [m] such that �τ +s=1 b(js) +as,s(r) ≥ Tρ(r). Rearranging the terms, +τρ ≥ +τ−1 +� +s=1 +b(js) +as,s(r) + +K +� +k=1 +�π(jτ ) +k,τ ˜b(jτ ) +k,τ (r) +≥Tρ − b(jτ ) +aτ ,τ(r) + +K +� +k=1 +�π(jτ ) +k,τ ˜b(jτ ) +k,τ (r) +=Tρ − b(jτ ) +aτ ,τ(r) + +K +� +k=1 +�π(jτ ) +k,τ b⋆(jτ ) +k,τ +(r) + +K +� +k=1 +�π(jτ ) +k,τ +� +˜b(jτ ) +k,τ (r) − b⋆(jτ ) +k,τ +(r) +� +≥Tρ − b(jτ ) +aτ ,τ(r) + +K +� +k=1 +�π(jτ ) +k,τ b⋆(jτ ) +k +(r) − +K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ . 
+Taking expectation on both side, +E [τρ] ≥Tρ + E +� +−b(jτ ) +aτ ,τ(r) + +K +� +k=1 +�π(jτ ) +k,τ b⋆(jτ ) +k,τ +(r) +� +− E +� K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ +� +=Tρ − E +� K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ +� +=Tρ − E +� K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ I (Eτ ∩ Mτ−1) +� +− E +� K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ I +� +Ec +τ ∪ Mc +τ−1 +� +� +≥Tρ − 2E +� +γτ−1,σb(δ) +� +E [I (at ∈ [K])| τ] +� +− E +� K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ I +� +Ec +τ ∪ Mc +τ−1 +� +� +, +(54) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +where the last inequality holds by (44). +Because ˜b(jτ ) +k,τ (r) ≤ Tρ almost surely, +E +� K +� +k=1 +�π(jτ ) +k,τ +���˜b(jτ ) +k,τ − b⋆(jτ ) +k +��� +∞ I +� +Ec +τ ∪ Mc +τ−1 +� +� +≤TρP +� +Ec +τ ∪ Mc +τ−1 +� +=TρP (Ec +τ) +≤Tρ +� +4(m + 1)δ + 7T −1� +=7ρ + 4(m + 1)Tδ, +where the equality holds because the algorithm takes action according to the policy at round τ and the last inequality holds +by Theorem 4.2. from (54), +E [τρ] ≥Tρ − 7ρ + 4(m + 1)Tρδ − 2E +� +γτ−1,σb(δ) +� +E [I (at ∈ [K])| τ] +� +≥Tρ − 7ρ + 4(m + 1)Tρδ − 2γ1,σb(δ) +Rearranging the terms, +E [T − τ] ≤4(m + 1)Tδ + 7 + 2γτ−1,σb(δ) +ρ +. +Step 5. Proving a bound for the sum of probabilities +Because the algorithm admits the arrival when Mt−1 does not +happen, +Mc +t−1 = Mc +t−1 ∩ {at ∈ [K]} . +Then +P +� +Mc +t−1 +� +=P +� +Mc +t−1 ∩ {at ∈ [K]} +� +=P +� +Mc +t−1 ∩ {at ∈ [K]} ∩ +� +nt−1 ≥ Mα,p,T d log +�Jd +δ +��� ++ P +� +Mc +t−1 ∩ {at ∈ [K]} ∩ +� +nt−1 < Mα,p,T d log +�Jd +δ +��� +≤P +� +Mc +t−1 ∩ +� +nt−1 ≥ Mα,p,T d log +�Jd +δ +��� ++ P +� +{at ∈ [K]} ∩ +� +nt−1 < Mα,p,T d log +�Jd +δ +��� +≤Tδ + P +� +{at ∈ [K]} ∩ +� +nt−1 < Mα,p,T d log +�Jd +δ +��� +, +where the last inequality holds by the fact proved in Step 2. Summing over t ∈ [T], +T +� +t=1 +P +� +Mc +t−1 +� +≤T 2δ + +T +� +t=1 +P +� +{at ∈ [K]} ∩ +� +nt−1 < Mα,p,T d log +�Jd +δ +��� +=T 2δ + E +� T +� +t=1 +I (at ∈ [K]) I +� +nt−1 < Mα,p,T d log +�Jd +δ +��� +. +Set µ := Mα,p,T d log +� Jd +δ +� +and suppose +T +� +t=1 +I (at ∈ [K]) I (nt−1 < µ) > µ. +(55) + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Let τ(1) < τ(2) < · · · < τ(|A|) be the ordered admitted round in A := {t ∈ [T] : at ∈ [K]}. By definition, nτ(ν) = ν +for ν ∈ [|A|]. By (55), the event{at ∈ [K]} happens at least µ + 1 times over the horizon [T] and |A| > µ. For any +ν ∈ (µ, |A|],the number of admitted round is nτ(ν) > µ and +T −1 +� +t=ε +I (nt−1 < µ) I (at ∈ [K]) = +|A| +� +ν=1 +I +� +nτ(ν)−1 < µ +� +I +� +aτ(ν) ∈ [K] +� +≤ +|A| +� +ν=1 +I +� +nτ(ν)−1 < µ +� +I +� +nτ(ν) = nτ(ν)−1 + 1 +� += +|A| +� +ν=1 +I +� +nτ(ν)−1 < µ +� +I +� +ν = nτ(ν)−1 + 1 +� +≤ +|A| +� +ν=1 +I (ν − 1 < µ) , += +|A| +� +ν=1 +I (ν < µ + 1) +=µ, +which contradicts with (55). Thus +E +� T +� +t=1 +I (at ∈ [K]) I +� +nt−1 < Mα,p,T d log +�Jd +δ +��� +≤ µ := Mα,p,T d log +�Jd +δ +� +, +which proves, +T +� +t=1 +P +� +Mc +t−1 +� +≤ T 2δ + Mα,p,T d log +�Jd +δ +� +. +B.6. Proof of Theorem 5.1 +Proof. From Lemma 5.2, rearranging the terms, +R�π +T :=OPT − E +� T +� +t=1 +R�π +t +� +≤OPT +T +{T − E [τ − ξ]} ++ +� +2 + OPT +ρT +� +T +� +t=1 +P +� +Mc +t−1 ∪ Ec +t +� ++ 2 +� +� +� +�TE +� T +� +t=1 +γt−1,σr(δ)2I (at ∈ [K]) +� ++ 2 +� +� +� +�TE +� T +� +t=1 +γt−1,σb(δ)2I (at ∈ [K]) +� +OPT +ρT . 
+ +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +By Lemma 5.3, +E [ξ] ≤ 1 + 1+dMα,p,T log +� Jd +δ +� ++T 2δ +ρ +, +E [T − τ] ≤ 4(m + 1)Tδ + 7 + 2γτ−1,σb(δ) +ρ +. +By definition of γt,σ(δ), +E [T − τ] ≤4(m + 1)Tδ + 7 + 32√J log JKT + 8 +√ +2βσb(δ) +ρ +=4(m + 1)Tδ + 7 + 32√J log JKT + Cσ(δ) +√ +Jd +ρ +. +This implies +OPT +T +{T − E [τ − ξ]} +≤ OPT +Tρ +� +ρ + 4(m + 1)Tδ + 8 + 32 +� +J log +�JK +δ +� ++Cσb(δ) +√ +Jd + dMα,p,T log +�Jd +δ +� ++T 2δ +� +≤ OPT +Tρ +� +ρ + 8 + +� +5mT + T 2� +δ + 32 +� +J log +�JK +δ +� ++Cσb(δ) +√ +Jd + dMα,p,T log +�Jd +δ +�� +. +[Step 3. Bounding the sum of probability] Because T ≥ 8dα−1p−1 +min log JdT, by Theorem 4.2 and Lemma 5.3, +T +� +t=1 +P +� +Mc +t−1 ∪ Ec +t +� += +T +� +t=1 +� +P +� +Mc +t−1 +� ++ P (Mt−1 ∩ Ec +t ) +� +≤T 3δ + dMα,p,T log +�Jd +δ +� ++ +T +� +t=1 +P (Mt−1 ∩ Ec +t ) +≤T 3δ + dMα,p,T log +�Jd +δ +� ++ 8dα−1p−1 +min log JdT ++ +T +� +t=8dα−1p−1 +min log JdT +P (Mt−1 ∩ Ec +t ) +≤T 3δ + dMα,p,T log +�Jd +δ +� ++ 8dα−1p−1 +min log JdT + 4(m + 1)Tδ + 7. +By definition of γt,σ(δ) and βσ(δ), +E +� T +� +t=1 +γt−1,σr(δ)2I (at ∈ [K]) +� +=E +� +� +T +� +t=1 +� +16√J log JKT +√t − 1 ++ 4 +√ +2βσr(δ) +√nt−1 +�2 +I (at ∈ [K]) +� +� +≤E +� T +� +t=1 +� +16√J log JKT + 4 +√ +2βσr(δ) +�2 +nt−1 +I (at ∈ [K]) +� +≤E +� T +� +t=1 +� +16√J log JKT + 4 +√ +2βσr(δ) +�2 +nt−1 +I (nt = nt−1 + 1) +� +≤ +� +16 +� +J log JKT + 4 +√ +2βσr(δ) +�2 +log T, + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +where the first inequality holds by nt ≤ t almost surely. Thus by definition of βσ(δ) := 8 +√ +Jd + 96σ +� +Jd log 4 +δ , +2 +� +� +� +�TE +� T +� +t=1 +γt−1,σr(δ)2I (at ∈ [K]) +� +≤ +� +32 +� +J log JKT + 4 +√ +6βσr(δ) +� � +T log T +≤ +� +32 +� +J log JKT + Cσr(δ) +√ +Jd +� � +T log T, +where Cσ(δ) := 8 +√ +2 · +� +8 + 96σ +� +log 4 +δ +� +. Similarly, +2 +� +� +� +�TE +� T +� +t=1 +γt−1,σb(δ)2I (at ∈ [K]) +� +OPT +ρT +≤ +� +32 +� +J log JKT + Cσb(δ) +√ +Jd +� � +T log T OPT +ρT +Collecting the bounds, +R�π +T ≤ OPT +Tρ +� +ρ + 8 + +� +5mT + T 2� +δ + 32 +� +J log JKT +Cσb(δ) +√ +Jd + dMα,p,T log +�Jd +δ +�� ++ +� +2 + OPT +ρT +� � +T 3δ + dMα,p,T log +�Jd +δ +� ++ 4dα−1p−1 +min log JdT + 4(m + 1)Tδ + 7 +� ++ 2 +� +1 + OPT +ρT +� � +32 +� +J log JKT + Cσb∨σr(δ) +√ +Jd +� � +T log T +≤ +� +2 + OPT +ρT +� � � +96 +� +J log JKT + 3Cσr∨σr(δ) +√ +Jd +� � +T log T + 2dMα,p,T log +�Jd +δ +� ++ 4dα−1p−1 +min log JdT + 15 + 10mT 3δ +� +, +Plugging in δ = m−1T −3 proves (14). +C. Technical lemmas +Lemma C.1. (Azuma-Hoeffding’s inequality) Azuma (1967) If a super-martingale (Yt; t ≥ 0) corresponding to filtration +Ft, satisfies |Yt − Yt−1| ≤ ct for some constant ct, for all t = 1, . . . , T, then for any a ≥ 0, +P (YT − Y0 ≥ a) ≤ e +− +a2 +2 �T +t=1 c2 +t . +Thus with probability at least 1 − δ, +YT − Y0 ≤ +� +� +� +�2 log 1 +δ +T +� +t=1 +c2 +t. +Lemma C.2. For a sequence u1 ≥ u2 ≥ · · · ≥ un ≥ 0 and nonnegative real sequences {pi}i∈[n] and {qi}i∈[n] such that +�n +i=1 pi = �n +i=1 qi, if p1 > q1 then +n +� +i=1 +piui ≥ +n +� +i=1 +qiui. +Proof. When n = 1, p1u1 ≥ q1u1, for any u1 ≥ 0. Suppose for any sequence u1 ≥ u2 ≥ · · · ≥ un−1 ≥ 0 and nonnegative +real sequences {pi}i∈[n−1] and {qi}i∈[n−1] such that �n−1 +i=1 pi = �n−1 +i=1 qi, +p1 > q1 =⇒ +n−1 +� +i=1 +piui ≥ +n−1 +� +i=1 +qiui. 
+ +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +For a sequence u1 ≥ u2 ≥ · · · ≥ un ≥ 0 and nonnegative real sequences {pi}i∈[n] and {qi}i∈[n] such that �n +i=1 pi = +�n +i=1 qi, and p1 > q1, there exist k ∈ [n]\{1} such that pk < qk. In case of k = n, define a sequence +˜qi = qi, +∀i ∈ [n − 2] +˜qn−1 = qn−1 − pn + qn ≥ 0. +Then �n−1 +i=1 ˜qi = �n−1 +i=1 pi and +n +� +i=1 +piui = +n−1 +� +i=1 +piui + pnun +≥ +n−1 +� +i=1 +˜qiui + pnun += +n−1 +� +i=1 +qiui + (−pn + qn) un−1 + pnun +≥ +n−1 +� +i=1 +qiui + (−pn + qn) un + pnun += +n +� +i=1 +qiui. +In case of k ̸= n, denote a sequence +˜qi = qi, +∀i ∈ [n − 1]\{k} +˜qk = qk − pk + qn. +Then �n−1 +i=1 ˜qi = � +j̸=k pi and +n +� +i=1 +piui = +� +i̸=k +piui + pkuk +≥ +n−1 +� +i=1 +˜qiui + pkuk +≥ +n−1 +� +i=1 +qiui − pkuk + qnuk + pkuk += +n−1 +� +i=1 +qkuk + qnuk +≥ +n +� +i=1 +qkuk. +By induction, the proof is complete. +Lemma C.3. Let {Xτ : τ ∈ [t]} be a Rd×d-valued stochastic process adapted to the filtration {Fτ : τ ∈ [t]}, i.e., Xτ +is Fτ-measurable for τ ∈ [t]. Suppose Xτ is a positive definite symmetric matrices such thatλmax(Xτ) ≤ 1 +2.Then with +probability at least 1 − δ, +t +� +τ=1 +Xτ ⪰ 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] − log d +δ Id. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +In addition, with probability at least 1 − δ, +t +� +τ=1 +Xτ ⪯ 3 +2 +t +� +τ=1 +E [Xτ| Fτ−1] + log d +δ Id. +Proof. This proof is an adapted version of Lemma 12.2 in Lattimore & Szepesv´ari (2020) for matrix stochastic process +using the argument of Tropp (2012). For the lower bound, It is sufficient to prove that +λmax +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +� +≤ log d +δ , +with probability at least 1 − δ. By the spectral mapping theorem, +exp +� +λmax +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +≤λmax +� +exp +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +≤Tr +� +exp +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +. +Taking expectation on both side gives, +E exp +� +λmax +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +≤ETr +� +exp +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +=ETr +� +E +� +exp +� +− +t−1 +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] + log exp (−Xt) +������ Ft−1 +�� +≤ETr +� +exp +� +− +t−1 +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] + log E [exp (−Xt)| Ft−1] +�� +. +The last inequality holds due to Lieb’s theorem Tropp (2015). Because ex ≤ 1+ 1 +2xfor all x ∈ [−1/2, 0], and the eigenvalue +of −Xt lies in [−1/2, 0], we have +E [exp (−Xt)| Ft−1] ⪯ I − 1 +2E [Xt| Ft−1] ⪯ exp +� +−1 +2E [Xt| Ft−1] +� +, +by the spectral mapping theorem. Thus we have +E exp +� +λmax +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +≤ ETr +� +exp +� +− +t−1 +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] + log exp +� +−1 +2E [Xt| Ft−1] +��� += ETr +� +exp +� +− +t−1 +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] − 1 +2E [Xt| Ft−1] +�� += ETr +� +exp +� +− +t−1 +� +τ=1 +Xτ + 1 +2 +t−1 +� +τ=1 +E [Xτ| Fτ−1] +�� +≤ ... +≤ ETr (exp (O)) = d + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Now my Markov’s inequality, +P +� +λmax +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +� +> log d +δ +� +≤ E exp +� +λmax +� +− +t +� +τ=1 +Xτ + 1 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +�� +δ +d +≤ δ. +For the upper bound, we prove +λmax +� +t +� +τ=1 +Xτ − 3 +2 +t +� +τ=1 +E [Xτ| Fτ−1] +� +≤ log d +δ , +in a similar way using the fact that ex ≤ 1 + (3/2)x on x ∈ [0, 1/2]. +Lemma C.4. 
Suppose a random variable X satisfies E[X] = 0, and let Y be an σ-sub-Gaussian random variable. If +|X| ≤ |Y | almost surely, then X is 6σ-sub-Gaussian. +Proof. Because |X| ≤ |Y | +E +� X2 +6σ2 +� +≤E +� Y 2 +6σ2 +� +. +=1 + E +�� ∞ +0 +I (|Y | ≥ x) x +3σ2 e +x2 +6σ2 dx +� +≤1 + +� ∞ +0 +P (|Y | ≥ x) x +3σ2 e +x2 +6σ2 dx. +Because +P (|Y | ≥ x) =P (Y ≥ x) + P (−Y ≤ x) +≤2e− x2 +2σ2 , +we have +E +� X2 +6σ2 +� +≤1 + +� ∞ +0 +2x +3σ2 e− x2 +3σ2 dx +≤2. +Now for any λ ∈ R, +E [exp (λX)] =E +� ∞ +� +n=0 +(λX)n +n! +� +=1 + E +� ∞ +� +n=2 +(λX)n +n! +� +≤1 + E +� +λ2X2 +2 +∞ +� +n=2 +|λX|n−2 +(n − 2)! +� +≤1 + λ2 +2 E +� +X2 exp (|λX|) +� +. + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +Because 6σ2λ2 + +X2 +12σ2 ≥ |λX| , +E [exp (λX)] ≤1 + λ2 +2 exp +� +6σ2λ2� +E +� +X2 exp +� X2 +12σ2 +�� +=1 + 6σ2λ2 exp +� +6σ2λ2� +E +� X2 +12σ2 exp +� X2 +12σ2 +�� +≤1 + 6σ2λ2 exp +� +6σ2λ2� +E +� +exp +� X2 +6σ2 +�� +≤1 + 12σ2λ2 exp +� +6σ2λ2� +≤ +� +1 + 12σ2λ2� +exp +� +6σ2λ2� +≤ exp +�36 +2 σ2λ2 +� +. +Thus X is 6σ-sub-Gaussian. +Lemma C.5. (Lee et al., 2016, Lemma 2.3) Let {Nt} be a martingale on a Hilbert space (H, ∥·∥H). Then there exists a +R2-valued martingale {Pt} such that for any time t ≥ 0, ∥Pt∥2 = ∥Nt∥H and ∥Pt+1 − Pt∥2 = ∥Nt+1 − Nt∥H. +Lemma C.6. (A dimension-free bound for vector-valued martingales.) Let {Fs}t +s=0 be a filtration and {ηs}t +s=1 be a +real-valued stochastic process such that ηs is Fτ-measurable. Let {Xs}t +s=1 be an Rd-valued stochastic process where Xs +is F0-measurable. Assume that {ηs}t +s=1 are σ-sub-Gaussian as in Assumption 1. Then with probability at least 1 − δ, +����� +t +� +s=1 +ηsXs +����� +2 +≤ 12σ +� +� +� +� +t +� +s=1 +∥Xs∥2 +2 +� +log 4t2 +δ . +(56) +Proof. Fix a t ≥ 1. For each s = 1, . . . , t, we have E [ηs| Fs−1] = 0 and Xs is F0-measurable. Thus the stochastic process, +� u +� +s=1 +ηsXs +�t +u=1 +(57) +is a (Rd, ∥·∥2)-martingale. Since (Rd, ∥·∥2) is a Hilbert space, by Lemma C.5, there exists an R2-martingale {Mu}t +u=1 +such that +����� +u +� +s=1 +ηsXs +����� +2 += ∥Mu∥2 , ∥ηuXu∥2 = ∥Mu − Mu−1∥2 , +(58) +and M0 = 0. Set Mu = (M1(u), M2(u))⊤. Then for each i = 1, 2, and u ≥ 2, +|Mi(u) − Mi(u − 1)| ≤ ∥Mu − Mu−1∥2 += ∥ηuXu∥2 += |ηu| ∥Xu∥2 , +almost surely. By Lemma C.4, Mi(u) − Mi(u − 1) is 6σ-sub-Gaussian. By Lemma C.1, for x > 0, +P (|Mi(t)| > x) =P +������ +t +� +u=1 +Mi(u) − Mi(u − 1) +����� > x +� +≤2 exp +� +− +x2 +72tσ2 �t +s=1 ∥Xs∥2 +2 +� +, +for each i = 1, 2. Thus, with probability 1 − δ/2, +Mi(t)2 ≤ 72 +� +t +� +s=1 +∥Xs∥2 +2 +� +σ2 log 4 +δ . + +Improved Algorithms for Multi-period Packing Problems with Bandit Feedback +In summary, with probability at least 1 − δ/2, +����� +t +� +τ=1 +ηsXs +����� +2 += +� +M1(t)2 + M2(t)2 ≤ 6σ +� +� +� +� +t +� +s=1 +∥Xs∥2 +2 +� +2 log 4t2 +δ . +
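As an aside, the dimension-free bound (56) can be checked numerically. The following Monte Carlo sketch (Python/NumPy; the values of d, t, σ, δ and the Gaussian noise model are assumptions made only for this simulation) fixes the directions X_s, draws σ-sub-Gaussian noise, and compares the empirical (1 − δ)-quantile of ∥Σ_s η_s X_s∥_2 with the right-hand side of (56). Since the constant 12 is not optimized, the simulated quantile typically lies well below the bound.

import numpy as np

rng = np.random.default_rng(1)
d, t, sigma, delta = 20, 200, 1.0, 0.05       # illustrative values, not from the paper
X = rng.normal(size=(t, d))                   # fixed (F0-measurable) directions X_1, ..., X_t
bound = 12 * sigma * np.sqrt(np.sum(np.linalg.norm(X, axis=1) ** 2) * np.log(4 * t ** 2 / delta))

trials = 2000
norms = np.empty(trials)
for i in range(trials):
    eta = sigma * rng.standard_normal(t)      # Gaussian noise is sigma-sub-Gaussian
    norms[i] = np.linalg.norm(X.T @ eta)      # || sum_s eta_s X_s ||_2

print(np.quantile(norms, 1 - delta), "<=", bound)   # empirical (1 - delta)-quantile vs. the bound (56)
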