{
  "ID": "15hYIH0TUi",
  "Title": "Neural Collaborative Filtering Bandits via Meta Learning",
  "Keywords": "Neural Contextual Bandit, Meta Learning",
  "URL": "https://openreview.net/forum?id=15hYIH0TUi",
  "paper_draft_url": "/references/pdf?id=7pe1kPKnlK",
  "Conferece": "ICLR_2023",
  "track": "Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)",
  "acceptance": "Reject",
  "review_scores": "[['2', '3', '4'], ['3', '5', '3'], ['4', '5', '4'], ['3', '8', '4']]",
  "input": {
    "source": "CRF",
    "title": "Neural Collaborative Filtering Bandits via Meta Learning",
    "authors": [],
    "emails": [],
    "sections": [
      {
        "heading": null,
        "text": "\u221a nT log T ), which is sharper than state-of-the-art related works. Finally, we conduct extensive experiments showing that Meta-Ban outperforms six strong baselines."
      },
      {
        "heading": "1 INTRODUCTION",
        "text": "The contextual multi-armed bandit has been extensively studied in machine learning to resolve the exploitation-exploration dilemma in sequential decision making, with wide applications in personalized recommendation (Li et al., 2010), online advertising (Wu et al., 2016), etc.\nRecommender systems play an indispensable role in many online businesses, such as e-commerce platforms and online streaming services. It is well known that user collaborative effects are strongly associated with user preferences. Thus, discovering and leveraging collaborative information in recommender systems has been studied for decades. In a relatively static environment, e.g., a movie recommendation platform where catalogs are known and accumulated ratings for items are available, classic collaborative filtering methods can be easily deployed (e.g., matrix/tensor factorization (Su and Khoshgoftaar, 2009)). However, such methods can hardly adapt to more dynamic settings, such as news or short-video recommendation, due to: (1) the lack of cumulative interactions for new users or items; (2) the difficulty of balancing the exploitation of current user-item preference knowledge and the exploration of new potential matches (e.g., presenting new items to the users).\nTo address this problem, a line of work, clustering of bandits (collaborative filtering bandits) (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021), has been proposed to incorporate collaborative effects among users, which are largely neglected by conventional bandit algorithms (Dani et al., 2008; Abbasi-Yadkori et al., 2011; Valko et al., 2013; Ban and He, 2020). These works use graph-based methods to adaptively cluster users and explicitly or implicitly utilize the collaborative effects on the user side while selecting an arm. 
However, this line of work has a significant limitation: it builds on the linear bandit framework (Abbasi-Yadkori et al., 2011), and user groups are represented by simple linear combinations of individual user parameters. The linear reward assumption and the linear representation of groups may not hold in real-world applications (Valko et al., 2013).\nTo learn non-linear reward functions, neural bandits (Collier and Llorens, 2018; Zhou et al., 2020; Zhang et al., 2021; Kassraie and Krause, 2022) have attracted much attention, where a neural network is assigned to learn the reward function along with an exploration strategy (e.g., Upper Confidence Bound (UCB) or Thompson Sampling (TS)). However, this class of works does not incorporate any collaborative effects among users, overlooking their crucial potential for improving recommendation.\nIn this paper, to overcome the above challenges, we introduce the problem, Neural Collaborative Filtering Bandits (NCFB), built on either linear or non-linear reward assumptions while introducing relative groups. Groups are formed by users sharing similar interests/preferences/behaviors. However, such groups are usually not static over specific contents (Li et al., 2016). For example, two users may both like \"country music\" but may have different opinions on \"rock music\". \"Relative groups\" are introduced in NCFB to formulate groups given a specific content, which is more practical in real applications.\nTo solve NCFB, we propose a meta-learning based bandit algorithm, Meta-Ban (Meta-Bandits), distinct from existing related works (i.e., graph-based clustering of linear bandits (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021)). Inspired by recent advances in meta-learning (Finn et al., 2017; Yao et al., 2019), in Meta-Ban, a meta-learner is assigned to represent and rapidly adapt to dynamic groups, which allows the non-linear representation of collaborative effects. 
A user-learner is assigned to each user to discover the underlying relative groups. Here, we use neural networks to formulate both the meta-learner and the user-learners, in order to learn linear or non-linear reward functions. To solve the exploitation-exploration dilemma in bandits, Meta-Ban has an informative UCB-type exploration. In the end, we provide a rigorous regret analysis and empirical evaluation for Meta-Ban. To the best of our knowledge, this is the first work incorporating collaborative effects in neural bandits. The contributions of this paper can be summarized as follows:\n(1) Problem. We introduce the problem, Neural Collaborative Filtering Bandits (NCFB), to incorporate collaborative effects among users with either linear or non-linear reward assumptions.\n(2) Algorithm. We propose a meta-learning based bandit algorithm working in NCFB, Meta-Ban, where the meta-learner is introduced to represent and rapidly adapt to dynamic groups, along with a new informative UCB-type exploration that utilizes both meta-side and user-side information. Meta-Ban allows the non-linear representation of relative groups based on user-learners.\n(3) Theoretical analysis. Under the standard assumptions of over-parameterized neural networks, we prove that Meta-Ban can achieve a regret upper bound of O(\u221a(nT log T)), where n is the number of users and T is the number of rounds. Our bound is sharper than those of existing related works. Moreover, we provide a correctness guarantee for the groups detected by Meta-Ban.\n(4) Empirical performance. We evaluate Meta-Ban on 10 real-world datasets and show that Meta-Ban significantly outperforms 6 strong baselines.\nNext, after introducing the problem definition in Section 2, we present the proposed Meta-Ban in Section 3 together with the theoretical analysis in Section 4. Finally, we present the experiments in Section 5 and conclude the paper in Section 6. More discussion regarding related work is provided in Appendix A.1."
      },
      {
        "heading": "2 NEURAL COLLABORATIVE FILTERING BANDITS",
        "text": "In this section, we introduce the problem of Neural Collaborative Filtering Bandits, motivated by generic recommendation scenarios.\nSuppose there are n users, N = {1, . . . , n}, to serve on a platform. In the tth round, the platform receives a user ut \u2208 N and prepares the corresponding k arms (items) Xt = {xt,1, xt,2, . . . , xt,k}, in which each arm is represented by its d-dimensional feature vector xt,i \u2208 Rd, \u2200i \u2208 {1, . . . , k}. Then, as in the conventional bandit problem, the platform selects an arm xt,i \u2208 Xt and recommends it to the user ut. In response to this action, ut produces a corresponding reward (feedback) rt,i. We use rt,i|ut to represent the reward produced by ut given xt,i, because different users may generate different rewards for the same arm.\nGroup behavior (collaborative effects) exists among users and has been exploited in recommender systems. In fact, group behavior is item-varying, i.e., users who have the same preference on a certain item may have different opinions on another item (Gentile et al., 2017; Li et al., 2016). Therefore, we define a relative group as a set of users with the same opinions on a certain item.\nDefinition 2.1 (Relative Group). In round t, given an arm xt,i \u2208 Xt, a relative group N(xt,i) \u2286 N with respect to xt,i satisfies\n(1) \u2200u, u\u2032 \u2208 N(xt,i), E[rt,i|u] = E[rt,i|u\u2032];\n(2) \u2204 N\u2032 \u2286 N s.t. N\u2032 satisfies (1) and N(xt,i) \u2282 N\u2032.\nSuch a flexible group definition allows users to agree on certain items while disagreeing on others, which is consistent with real-world scenarios.\nTherefore, given an arm xt,i, the user pool N can be divided into qt,i non-overlapping groups: N1(xt,i), N2(xt,i), . . . , Nqt,i(xt,i), where qt,i \u2264 n. Note that the group information is unknown to the platform. We expect users from different groups to have distinct behavior with respect to xt,i. 
Thus, we provide the following constraint among groups.\nDefinition 2.2 (\u03b3-gap). Given two different groups N(xt,i), N\u2032(xt,i), there exists a constant \u03b3 > 0 such that\n\u2200u \u2208 N(xt,i), u\u2032 \u2208 N\u2032(xt,i), |E[rt,i|u] \u2212 E[rt,i|u\u2032]| \u2265 \u03b3.\nFor any two groups in N, we assume that they satisfy the \u03b3-gap constraint. Note that such an assumption is standard in the literature of online clustering of bandits to differentiate groups (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021).\nReward function. The reward rt,i is assumed to be governed by an unknown function with respect to xt,i given ut: rt,i|ut = hut(xt,i) + \u03b6t,i, (1) where hut is a linear or non-linear, unknown reward function associated with ut, and \u03b6t,i is a noise term with zero expectation E[\u03b6t,i] = 0. We assume the reward rt,i \u2208 [0, 1] is bounded, as in many existing works (Gentile et al., 2014; 2017; Ban and He, 2021). Note that online clustering of bandits assumes hut is a linear function with respect to xt,i (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021).\nRegret analysis. In this problem, the goal is to minimize the pseudo-regret over T rounds:\nRT = \u2211^T_{t=1} E[r\u2217t \u2212 rt | ut], (2)\nwhere rt is the reward received in round t and E[r\u2217t |ut, Xt] = max_{xt,i\u2208Xt} hut(xt,i). The problem definition introduced above can naturally formulate many recommendation scenarios. For example, for a music streaming service provider, when recommending a song to a user, the platform can exploit the knowledge of other users who have the same opinions on this song, i.e., all \u2018like\u2019 or \u2018dislike\u2019 this song. Unfortunately, the potential group information is usually not available to the platform before the user\u2019s feedback. 
To solve this problem, we introduce in the next section an approach that can infer and exploit such group information to improve recommendation.\nNotation. Denote by [k] the sequential list {1, . . . , k}. Let xt be the arm selected in round t and rt be the reward received in round t. We use \u2225xt\u22252 and \u2225xt\u22251 to represent the Euclidean norm and the taxicab (L1) norm, respectively. For each user u \u2208 N, let \u00b5ut be the number of rounds that user u has been served up to round t, i.e., \u00b5ut = \u2211^t_{\u03c4=1} 1{u\u03c4 = u}, and T ut be all of u\u2019s historical data up to round t, i.e., T ut = {(x\u03c4, r\u03c4) : u\u03c4 = u \u2227 \u03c4 \u2208 [t]}. Given a group N, all its data up to t can be denoted by {T ut}u\u2208N = {T ut | u \u2208 N}. We use the standard O, \u0398, and \u2126 notation to hide constants."
      },
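The pseudo-regret in Eq. (2) of Section 2 can be illustrated with a minimal sketch. The toy reward tables and arm choices below are hypothetical, purely for illustration; they are not from the paper:

```python
def pseudo_regret(expected_rewards, chosen):
    """Cumulative pseudo-regret R_T = sum_t E[r*_t - r_t | u_t] (Eq. (2)).

    expected_rewards[t][i] is E[r_{t,i} | u_t], the expected reward of
    arm i for the user served in round t; chosen[t] is the arm played.
    """
    return sum(max(row) - row[i] for row, i in zip(expected_rewards, chosen))

# Toy run: 3 rounds, 2 arms each; only round 2 picks a suboptimal arm.
rounds = [[0.9, 0.4], [0.2, 0.7], [0.5, 0.5]]
picks = [0, 0, 1]
print(round(pseudo_regret(rounds, picks), 6))  # 0.5
```

Note that the benchmark uses the expected reward of the best arm per round, so a tie (round 3 above) contributes zero regret regardless of the choice.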
      {
        "heading": "3 PROPOSED ALGORITHM",
        "text": "In this section, we propose a meta-learning based bandit algorithm, Meta-Ban, to tackle the challenges in the NCFB problem as follows: (1) Challenge 1 (C1): Given an arm, how to infer a user\u2019s relative group, and whether the returned group is the true relative group? (2) Challenge 2 (C2): Given a relative group, how to represent the group\u2019s behavior in a parametric way? (3) Challenge 3 (C3): How to generate a model to efficiently adapt to the rapidly-changing relative groups? (4) Challenge 4 (C4): How to balance exploitation and exploration in bandits with relative groups?\nAlgorithm 1: Meta-Ban\n1 Input: T (number of rounds), \u03bd, \u03b3 (group exploration parameters), \u03b1 (exploration parameter), \u03bb (regularization parameter), \u03b4 (confidence level), J1 (number of iterations for user), J2 (number of iterations for meta), \u03b71 (user step size), \u03b72 (meta step size).\n2 Initialize \u03980; \u03b8^u_0 = \u03980, \u00b5^u_0 = 0, \u2200u \u2208 N\n3 Observe one data point for each u \u2208 N\n4 for t = 1, 2, . . . , T do\n5 Receive a user ut \u2208 N and observe k arms Xt = {xt,1, . . . , xt,k}\n6 for i \u2208 [k] do\n7 Determine ut\u2019s relative group: N\u0302ut(xt,i) = {u \u2208 N : |f(xt,i; \u03b8^u_{t\u22121}) \u2212 f(xt,i; \u03b8^{ut}_{t\u22121})| \u2264 ((\u03bd\u22121)/\u03bd) \u03b3}. 
8 \u0398t,i = GradientDecent_Meta(N\u0302ut(xt,i), \u0398t\u22121)\n9 Ut,i = f(xt,i; \u0398t,i) + \u03b1 \u00b7 ( \u2225g(xt,i; \u0398t,i) \u2212 g(xt,i; \u03b8^{ut}_0)\u22252/\u221at + (L+1)/\u221a(2\u00b5^{ut}_t) + \u221a(log(t/\u03b4)/\u00b5^{ut}_t) )\n10 i\u2032 = arg max_{i\u2208[k]} Ut,i\n11 Play xt,i\u2032 and observe reward rt,i\u2032\n12 xt = xt,i\u2032; rt = rt,i\u2032; \u0398t = \u0398t,i\u2032\n13 \u00b5^{ut}_t = \u00b5^{ut}_{t\u22121} + 1\n14 \u03b8^{ut}_t = GradientDecent_User(ut, \u0398t)\n15 for u \u2208 N and u \u0338= ut do\n16 \u03b8^u_t = \u03b8^u_{t\u22121}; \u00b5^u_t = \u00b5^u_{t\u22121}\nMeta-Ban has one meta-learner \u0398 to represent the group behavior and n user-learners, one per user, {\u03b8u}u\u2208N, sharing the same neural network f. Given an arm xt,i, we use g(xt,i; \u03b8) = \u25bd\u03b8 f(xt,i; \u03b8) to denote the gradient of f for brevity. The workflow of Meta-Ban is divided into three parts as follows.\nGroup inference (to C1). As defined in Section 2, each user u \u2208 N is governed by an unknown function hu. It is natural to use a universal approximator (Hornik et al., 1989), a neural network f (defined in Section 4), to learn hu. In round t \u2208 [T], let ut be the user to serve. Given ut\u2019s past data up to round t\u22121, T^{ut}_{t\u22121}, we can train the parameters \u03b8^{ut} by minimizing the following loss: L(T^{ut}_{t\u22121}; \u03b8^{ut}) = (1/2) \u2211_{(x,r)\u2208T^{ut}_{t\u22121}} (f(x; \u03b8^{ut}) \u2212 r)^2.\nLet \u03b8^{ut}_{t\u22121} represent \u03b8^{ut} trained with T^{ut}_{t\u22121} in round t\u22121. The training of \u03b8^{ut} can be conducted by (stochastic) gradient descent, e.g., as described in Algorithm 3.\nTherefore, for each u \u2208 N, we can obtain the trained parameters \u03b8^u_{t\u22121}. Then, given ut and an arm xt,i, we return ut\u2019s estimated group with respect to xt,i by\nN\u0302ut(xt,i) = {u \u2208 N : |f(xt,i; \u03b8^u_{t\u22121}) \u2212 f(xt,i; \u03b8^{ut}_{t\u22121})| \u2264 ((\u03bd\u22121)/\u03bd) \u03b3}. 
(3)\nwhere \u03b3 \u2208 (0, 1) represents the assumed \u03b3-gap and \u03bd > 1 is a tuning parameter that trades off between the exploration of group members and the cost of playing rounds.\nMeta learning (to C2 and C3). In this paper, we propose to use one meta-learner \u0398 to represent and adapt to the behavior of dynamic groups. In meta-learning, the meta-learner is trained on a number of different tasks and can quickly adapt to new tasks with a small amount of new data (Finn et al., 2017). Here, we consider each user u \u2208 N as a task and its collected data T^u_t as the task distribution. Therefore, Meta-Ban has two phases: user adaptation and meta adaptation.\nUser adaptation. In the tth round, given ut, after receiving the reward rt, we have the available data T^{ut}_t. Then, the user parameter \u03b8^{ut} is updated in round t based on the meta-learner \u0398, denoted by \u03b8^{ut}_t, as described in Algorithm 3.\nMeta adaptation. In the tth round, given a group N\u0302ut(xt,i), we have the available collected data {T^u_{t\u22121}}_{u\u2208N\u0302ut(xt,i)}. The goal of the meta-learner is to fast adapt to these users (tasks). Thus, given an arm xt,i, we update \u0398 in round t, denoted by \u0398t,i, by minimizing the following meta loss:\nAlgorithm 2: GradientDecent_Meta(N, \u0398t\u22121)\n1 \u0398(0) = \u0398t\u22121 (or \u03980)\n2 for j = 1, 2, . . . , J2 do\n3 for u \u2208 N do\n4 Collect T^u_{t\u22121}\n5 Randomly choose T\u0303^u \u2286 T^u_{t\u22121}\n6 L(\u03b8^u_{\u00b5^u_{t\u22121}}) = (1/2) \u2211_{(x,r)\u2208T\u0303^u} (f(x; \u03b8^u_{\u00b5^u_{t\u22121}}) \u2212 r)^2\n7 L_N = \u2211_{u\u2208N} L(\u03b8^u_{\u00b5^u_{t\u22121}}) + (\u03bb/\u221am) \u2211_{u\u2208N} \u2225\u03b8^u_{\u00b5^u_{t\u22121}}\u22251\n8 \u0398(j) = \u0398(j\u22121) \u2212 \u03b72 \u25bd_{\{\u03b8^u_{\u00b5^u_{t\u22121}}\}_{u\u2208N}} L_N\n9 Return: \u0398t = \u0398(J2)\nAlgorithm 3: GradientDecent_User(u, \u0398t)\n1 Collect T^u_t # Historical data of u up to round t\n2 \u03b8^u_(0) = \u0398t (or \u03980)\n3 for j = 1, 2, . . . , J1 do\n4 Randomly choose T\u0303^u \u2286 T^u_t\n5 L(T\u0303^u; \u03b8^u) = (1/2) \u2211_{(x,r)\u2208T\u0303^u} (f(x; \u03b8^u) \u2212 r)^2\n6 \u03b8^u_(j) = \u03b8^u_(j\u22121) \u2212 \u03b71 \u25bd_{\u03b8^u_(j\u22121)} L(T\u0303^u; \u03b8^u)\n7 Return: \u03b8^u_t = \u03b8^u_(J1)\nL_{N\u0302ut(xt,i)} = \u2211_{u\u2208N\u0302ut(xt,i)} L(\u03b8^u_{\u00b5^u_{t\u22121}}) + (\u03bb/\u221am) \u2211_{u\u2208N\u0302ut(xt,i)} \u2225\u03b8^u_{\u00b5^u_{t\u22121}}\u22251, where \u03b8^u_{\u00b5^u_{t\u22121}} are the stored user parameters from Algorithm 3 at round t\u22121. Here, we add L1-regularization on the meta-learner to prevent overfitting in practice and to neutralize vanishing gradients in the convergence analysis. Then, the meta-learner is updated by \u0398 = \u0398 \u2212 \u03b72 \u25bd_{\{\u03b8^u_{\u00b5^u_{t\u22121}}\}_{u\u2208N\u0302ut(xt,i)}} L_{N\u0302ut(xt,i)}, where \u03b72 is the meta learning rate. Algorithm 2 shows the meta update with stochastic gradient descent (SGD).\nNote that linear clustering of bandits (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021) represents the group behavior \u0398 by a linear combination of user-learners, e.g., \u0398 = (1/|N\u0302ut(xt,i)|) \u2211_{u\u2208N\u0302ut(xt,i)} \u03b8^u_{\u00b5^u_{t\u22121}}. This may not hold in the real world. Instead, we use the meta adaptation to update the meta-learner \u0398 according to N\u0302ut(xt,i), which can represent non-linear combinations of user-learners (Finn et al., 2017; Wang et al., 2020b).\nUCB Exploration (to C4). To balance the trade-off between the exploitation of the current group information and the exploration of new matches, we introduce the following UCB-based selection criterion. 
Based on Lemma C.2, with probability at least 1 \u2212 \u03b4, after T rounds, the cumulative error induced by the meta-learner is upper bounded by\n\u2211^T_{t=1} E_{rt|xt}[|f(xt; \u0398t) \u2212 rt| | ut] \u2264 \u2211^T_{t=1} O(\u2225g(xt; \u0398t) \u2212 g(xt; \u03b8^{ut}_0)\u22252)/\u221at [meta-side info] + \u2211_{u\u2208N} \u00b5^u_t [ O((L+1)/\u221a(2\u00b5^u_t)) + \u221a(2 log(t/\u03b4)/\u00b5^u_t) ] [user-side info],\nwhere g(xt; \u0398t) incorporates the discriminative information of the meta-learner acquired from the collaborative effects within the relative group N\u0302ut(xt), and O(1/\u221a\u00b5^{ut}_t) shows the shrinking confidence interval of the user-learner for a specific user ut. This bound provides the necessary information that we should include in the selection criterion (Ut,i in Algorithm 1), which paves the way for the regret analysis (Theorem 4.2). Therefore, we say that the bound Ut,i leverages both the collaborative effects existing in N\u0302ut(xt,i) and ut\u2019s personal behavior for exploitation and exploration. Then, we select an arm according to xt = arg max_{xt,i\u2208Xt} Ut,i.\nTo sum up, Algorithm 1 depicts the workflow of Meta-Ban. In each round, given a served user and a set of arms, we compute the meta-learner and its bound for each relative group (Lines 5-9). Then, we choose the arm according to the UCB-type strategy (Line 10). After receiving the reward, we update the meta-learner for the next round (Line 12) and update the user-learner \u03b8^{ut} (Lines 13-14), because only ut\u2019s collected data is updated. In the end, we update all the other parameters (Lines 15-16).\nRemark 3.1 (Time Complexity). Recall that n is the number of users. It takes O(n) to find the group. Given the detected group N\u0302u, let b be the batch size of SGD and J2 be the number of iterations for the updates of the meta-learner. Thus, it takes O(|N\u0302u|bJ2) to update the meta-learner. 
Based on the fast adaptation ability of the meta-learner, J2 is typically a small number. b is controlled by the practitioner, and |N\u0302u| is upper bounded by n. Therefore, the test-time complexity is O(n) + O(|N\u0302u|bJ2). In a large recommender system, despite the large number of users, given a serving user u, the computational cost of Meta-Ban is mostly related to the inferred relative group N\u0302u, i.e., O(|N\u0302u|bJ2). Inferring N\u0302u is efficient because it takes O(n) and only needs to compute the outputs of neural networks. Therefore, as long as we can control the size of N\u0302u, Meta-Ban can work properly. The first solution is to set the hyperparameter \u03b3 to a small value, so that |N\u0302u| is usually small. Second, we can confine the size of N\u0302u, e.g., always choosing the top-100 most similar users for u. With a small N\u0302u (|N\u0302u| << n), Meta-Ban can do fast meta adaptation to N\u0302u and make prompt decisions. Therefore, it is feasible for Meta-Ban to scale to large recommender systems with proper approximations."
      },
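The group-inference rule in Eq. (3) of Section 3 reduces to a threshold test on network outputs. A minimal sketch, where the scoring function and the toy 1-D "parameters" are illustrative stand-ins for the trained network f and the user-learners (none of these values are from the paper):

```python
def infer_relative_group(f, theta, serving_user, arm, nu, gamma):
    """Estimated relative group per Eq. (3): users whose learners predict
    a reward within ((nu - 1) / nu) * gamma of the serving user's learner.

    f: callable f(arm, params) -> predicted reward
    theta: dict mapping user id -> that user's learned parameters
    nu > 1: exploration/cost trade-off; gamma: assumed group gap
    """
    threshold = (nu - 1.0) / nu * gamma
    anchor = f(arm, theta[serving_user])
    return {u for u, p in theta.items() if abs(f(arm, p) - anchor) <= threshold}

# Illustration with a linear stand-in for f and scalar "parameters".
f = lambda x, w: w * x
theta = {"u1": 1.0, "u2": 1.05, "u3": 2.0}
group = infer_relative_group(f, theta, "u1", arm=1.0, nu=2.0, gamma=0.5)
print(sorted(group))  # ['u1', 'u2']
```

Note how nu controls the threshold: as nu grows toward infinity the acceptance band widens toward gamma (more exploration of group members), while nu near 1 shrinks it toward zero.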
      {
        "heading": "4 REGRET ANALYSIS",
        "text": "In this section, we provide the regret analysis of Meta-Ban and the comparison with closely related works. The analysis is built on the framework of meta-learning under the over-parameterized neural network regime (Jacot et al., 2018; Allen-Zhu et al., 2019; Zhou et al., 2020).\nGiven an arm xt,i \u2208 Rd with \u2225xt,i\u22252 = 1, t \u2208 [T], i \u2208 [k], without loss of generality, we define f as a fully-connected network with depth L \u2265 2 and width m:\nf(xt,i; \u03b8 or \u0398) = WL \u03c3(WL\u22121 \u03c3(WL\u22122 . . . \u03c3(W1 xt,i))) (4)\nwhere \u03c3 is the ReLU activation function, W1 \u2208 Rm\u00d7d, Wl \u2208 Rm\u00d7m for 2 \u2264 l \u2264 L\u22121, WL \u2208 R1\u00d7m, and \u03b8, \u0398 = [vec(W1)\u22ba, vec(W2)\u22ba, . . . , vec(WL)\u22ba]\u22ba \u2208 Rp. To conduct the analysis, we need the following initialization and mild assumptions.\nInitialization. For l \u2208 [L\u22121], each entry of Wl is drawn from the normal distribution N(0, 2/m); each entry of WL is drawn from the normal distribution N(0, 1/m).\nAssumption 4.1 (Arm Separability). For any pair xt,i, xt\u2032,i\u2032, t, t\u2032 \u2208 [T], i, i\u2032 \u2208 [k], (t, i) \u0338= (t\u2032, i\u2032), there exists a constant 0 < \u03c1 \u2264 O(1/L) such that \u2225xt,i \u2212 xt\u2032,i\u2032\u22252 \u2265 \u03c1.\nAssumption 4.1 is satisfied as long as no two arms are identical. Assumption 4.1 is the standard input assumption in over-parameterized neural networks (Allen-Zhu et al., 2019). Moreover, most of the existing neural bandit works (e.g., Assumption 4.2 in (Zhou et al., 2020), 3.4 in (Zhang et al., 2021), 4.1 in (Kassraie and Krause, 2022)) make comparable assumptions with equivalent constraints. They require that the smallest eigenvalue \u03bb0 of the neural tangent kernel (NTK) matrix formed by all arm contexts is positive, which implies that any two arms cannot be identical. 
As L can be set manually, the condition 0 < \u03c1 \u2264 O(1/L) can be easily satisfied (e.g., L = 2). Then, we provide the following regret upper bound for Meta-Ban with gradient descent.\nTheorem 4.2. Given the number of rounds T, assume that each user is served uniformly and set T\u0303^u = T^u_t, \u2200t \u2208 [T]. For any \u03b4 \u2208 (0, 1), 0 < \u03c1 \u2264 O(1/L), 0 < \u03f51 \u2264 \u03f52 \u2264 1, \u03bb > 0, suppose m, \u03b71, \u03b72, J1, J2 satisfy\nm \u2265 \u2126\u0303(max{poly(T, L, \u03c1\u22121), e^{\u221alog(O(Tk)/\u03b4)}}), \u03b71 = \u0398(\u03c1 / (poly(T, L) \u00b7 m)),\n\u03b72 = min{\u0398(\u221an \u03c1 / (T^4 L^2 m)), \u0398(\u221a(\u03c1\u03f52) / (T^2 L^2 \u03bb n^2))}, J1 = \u0398((poly(T, L)/\u03c1^2) log(1/\u03f51)),\nJ2 = max{\u0398(T^5 (O(T log^2 m) \u2212 \u03f52) L^2 m / (\u221an \u03f52 \u03c1)), \u0398(T^3 L^2 \u03bb n^2 (O(T log^2 m) \u2212 \u03f52) / (\u03c1\u03f52))}. (5)\nThen, with probability at least 1 \u2212 \u03b4 over the random initialization, Algorithms 1-3 have the following regret upper bound:\nRT \u2264 O(\u221an)(\u221aT + L\u221aT + \u221a(2T log(O(T)/\u03b4))) + O(1).\nComparison with clustering of bandits. The existing works on clustering of bandits (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021) are all based on the linear reward assumption and achieve the following regret bound: RT \u2264 O(d\u221aTn log T).\nComparison with neural bandits. The regret analysis of a single neural bandit (Zhou et al., 2020; Zhang et al., 2021) has been developed recently (n = 1 in this case), achieving\nRT \u2264 O(d\u0303\u221aT log T), d\u0303 = log det(I + H/\u03bb) / log(1 + Tn/\u03bb),\nwhere H is the neural tangent kernel (NTK) matrix (Zhou et al., 2020; Jacot et al., 2018) and \u03bb is a regularization parameter. d\u0303 is the effective dimension first introduced by Valko et al. 
(2013) to measure the underlying non-linear dimensionality of the NTK kernel space.\nRemark 4.3 (Improve O(\u221alog T)). It is easy to observe that Meta-Ban achieves O(\u221a(T log T)), improving by a multiplicative factor of O(\u221alog T) over the existing works above. Note that these works (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021; Zhou et al., 2020; Zhang et al., 2021) all explicitly apply the Confidence Ellipsoid Bound (Theorem 2 in (Abbasi-Yadkori et al., 2011)) to their analysis, which inevitably introduces the complexity term O(log T). In contrast, Meta-Ban builds a generalization bound for the user-learner (Lemma E.1), inspired by recent advances in over-parameterized networks (Cao and Gu, 2019), which only brings in the complexity term O(\u221alog T). Then, we show that the estimations of the meta-learner and the user-learner are close enough when \u03b8 and \u0398 are close enough, to bound the error incurred by the meta-learner (Lemma C.1). Thus, we develop a novel UCB-type analysis that differs from previous works. These different techniques lead to the non-trivial improvement of O(\u221alog T).\nRemark 4.4 (Remove Input Dimension). The regret bound of Meta-Ban does not contain d or d\u0303. When the input dimension is large (e.g., d \u2265 T), it may contribute a considerable amount of error to RT. The effective dimension d\u0303 may also suffer from this predicament when the determinant of H is very large. As (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017; Li et al., 2019; Ban and He, 2021) build the confidence ellipsoid for \u03b8\u2217 (the optimal parameters) based on the linear function E[rt,i | xt,i] = \u27e8xt,i, \u03b8\u2217\u27e9, their regret bounds contain d because xt,i \u2208 Rd. 
Similarly, (Zhou et al., 2020; Zhang et al., 2021) construct the confidence ellipsoid for \u03b8\u2217 according to the linear function E[rt,i | xt,i] = \u27e8g(xt,i; \u03b80), \u03b8\u2217 \u2212 \u03b80\u27e9, and thus their regret bounds are affected by d\u0303 due to g(xt,i; \u03b80) \u2208 Rp (d\u0303 reaches p in the worst case). In contrast, the generalization bound derived in our analysis comprises only the convergence error (Lemma D.1) and the concentration bound (Lemma E.3). Both terms are independent of d and d\u0303, which paves the way for Meta-Ban to remove the curse of d and d\u0303.\nRemark 4.5 (Remove i.i.d. Arm Assumption). We do not impose any assumption on the distribution of arms. However, the related clustering of bandit works (Gentile et al., 2014; Li et al., 2016; Gentile et al., 2017) assume that the arms are drawn i.i.d. from some distribution in each round, which may not be a mild assumption. In our proof, we build a martingale difference sequence depending only on the reward side (Lemma E.3), which is novel, to derive the generalization bound of the user-learner and remove the i.i.d. arm assumption.\nRelative group guarantee. Compared to the detected group N\u0302ut(xt,i) (Eq. (3)), we emphasize that Nut(xt,i) (ut \u2208 Nut(xt,i)) is the ground-truth relative group satisfying Definition 2.1. Suppose the \u03b3-gap holds among N; we prove that when t is larger than a constant, i.e., t \u2265 T\u0303 (as follows), with probability at least 1 \u2212 \u03b4, it is expected over all selected arms that N\u0302ut(xt) \u2286 Nut(xt), and N\u0302ut(xt) = Nut(xt) if \u03bd \u2265 2. Then, for \u03bd, we have: (1) when \u03bd \u2191, we have more chances to explore collaboration with other users at the cost of more rounds (T\u0303 \u2191); (2) when \u03bd \u2193, we limit the potential cooperation with other users while saving exploration rounds (T\u0303 \u2193). 
More details and the proof of Lemma 4.6 are in Appendix F.\nLemma 4.6 (Relative group guarantee). Assume the groups in N satisfy the \u03b3-gap (Definition 2.2) and the conditions of Theorem 4.2 hold. For any \u03bd > 1, with probability at least 1 \u2212 \u03b4 over the random initialization, there exist constants c1, c2 such that when\nt \u2265 [64n\u03bd^2(1 + \u03bet)^2 (log(32\u03bd^2(1 + \u03bet)^2/\u03b3^2) + (9L^2c1^2 + 4\u03f51 + 2\u03b6t^2)/(4(1 + \u03bet)^2) \u2212 log \u03b4)] / [\u03b3^2(1 + \u221a(3n log(n/\u03b4)))] = T\u0303,\ngiven a user u \u2208 N, it holds uniformly for Algorithms 1-3 that\nE_{x\u03c4 \u223c T^u_t|x}[N\u0302u(x\u03c4) \u2286 Nu(x\u03c4)] and E_{x\u03c4 \u223c T^u_t|x}[N\u0302u(x\u03c4) = Nu(x\u03c4)], if \u03bd \u2265 2,\nwhere x\u03c4 is uniformly drawn from T^u_t|x and T^u_t|x = {x\u03c4 : u\u03c4 = u \u2227 \u03c4 \u2208 [t]} is the set of all historical selected arms when serving u up to round t."
      },
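The fully-connected network f in Eq. (4) of Section 4 and its Gaussian initialization scheme can be sketched as follows. This is a NumPy illustration under the stated initialization; the width, depth, and input values are toy choices, not the paper's experimental settings:

```python
import numpy as np

def init_network(d, m, L, rng):
    """Initialize f per Section 4: W1 in R^{m x d}, Wl in R^{m x m} for
    2 <= l <= L-1, WL in R^{1 x m}; entries drawn from N(0, 2/m) for
    l in [L-1] and from N(0, 1/m) for the last layer."""
    Ws = [rng.normal(0.0, np.sqrt(2.0 / m), (m, d))]
    Ws += [rng.normal(0.0, np.sqrt(2.0 / m), (m, m)) for _ in range(L - 2)]
    Ws.append(rng.normal(0.0, np.sqrt(1.0 / m), (1, m)))
    return Ws

def f(x, Ws):
    """f(x; theta) = W_L sigma(W_{L-1} ... sigma(W_1 x)), sigma = ReLU (Eq. (4))."""
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0.0)  # ReLU after every layer but the last
    return float(Ws[-1] @ h)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
x /= np.linalg.norm(x)           # arms are assumed to satisfy ||x||_2 = 1
Ws = init_network(d=8, m=32, L=3, rng=rng)
print(f(x, Ws))                  # a scalar reward estimate
```

The parameter vector theta in Eq. (4) is simply the concatenation [vec(W1); ...; vec(WL)], so p = md + (L-2)m^2 + m here.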
      {
        "heading": "5 EXPERIMENTS",
        "text": "In this section, we evaluate Meta-Ban\u2019s empirical performance on 8 ML and 2 real-world recommendation datasets, compared to six strong state-of-the-art baselines. We first present the setup and then the experimental results. More details are in Appendix A.\nML datasets. We use 8 public classification datasets: Mnist (LeCun et al., 1998), Notmnist (Bulatov, 2011), Cifar10 (Krizhevsky et al., 2009), Emnist (Letter) (Cohen et al., 2017), Shuttle (Dua and Graff, 2017), Fashion (Xiao et al., 2017), Mushroom (Dua and Graff, 2017), and Magictelescope (Dua and Graff, 2017). Note that ML datasets are widely used for evaluating the performance of neural bandit algorithms (e.g., (Zhou et al., 2020; Zhang et al., 2021)), as they test an algorithm\u2019s ability to learn various non-linear functions between rewards and arm contexts. On ML datasets, Meta-Ban considers each class as a user and leverages the similarity among different classes to improve performance, because Meta-Ban can cluster similar classes. The detailed setup is in Appendix A.2.\nRecommendation datasets. We also use two recommendation datasets for evaluation: Movielens (Harper and Konstan, 2015) and Yelp1. The descriptions are in Appendix A.2.\nBaselines. We compare Meta-Ban to six state-of-the-art (SOTA) baselines as follows: (1) CLUB (Gentile et al., 2014); (2) COFIBA (Li et al., 2016); (3) SCLUB (Li et al., 2019); (4) LOCB (Ban and He, 2021); (5) NeuUCB-ONE (Zhou et al., 2020); (6) NeuUCB-IND (Zhou et al., 2020). See detailed descriptions in Appendix A.2. Since LinUCB (Li et al., 2010) and KernelUCB (Valko et al., 2013) are outperformed by the above baselines, we do not include them in the comparison.\nResults. Figures 1-2 show the regret comparison on the ML datasets, in which Meta-Ban outperforms all baselines. Each class can be thought of as a user in these datasets. 
As the rewards are non-linear to the arms on these datasets, conventional linear clustering-of-bandits algorithms (CLUB, COFIBA, SCLUB, LOCB) perform poorly. Thanks to the representation power of neural networks, NeuUCB-ONE obtains better performance. However, it treats all the users as one group, neglecting the disparity among groups. In contrast, NeuUCB-IND deals with each user individually, not taking collaborative knowledge among users into account. Meta-Ban significantly outperforms all the baselines, because Meta-Ban can exploit the common knowledge of the correct group of classes, where the samples from these classes have non-trivial correlations, and can train the parameters on the previous group to rapidly adapt to new tasks, an ability the existing works do not possess. Figure 3 reports the regret comparison on the recommendation datasets, where Meta-Ban still outperforms all baselines. Since these two datasets contain considerable inherent noise, all algorithms show linear growth of regret. As rewards are almost linear to the arms on these two datasets, conventional clustering-of-bandits algorithms (CLUB, COFIBA, SCLUB, LOCB) achieve comparable performance, but they are still outperformed by Meta-Ban, because a simple vector cannot accurately represent a user\u2019s behavior. Similarly, because Meta-Ban can discover and leverage the group information automatically, it obtains the best performance, surpassing NeuUCB-ONE and NeuUCB-IND.\nFurthermore, we conduct a hyper-parameter sensitivity study in Appendix A.3.\n1https://www.yelp.com/dataset"
      },
      {
        "heading": "6 CONCLUSION",
        "text": "In this paper, we introduce the problem, Neural Collaborative Filtering Bandits, to incorporate collaborative effects in bandits with generic reward assumptions. Then, we propose, Meta-Ban, to solve this problem, where a meta-learner is assigned to represent and rapidly adapt to dynamic groups, along with a new informative UCB-type exploration. Moreover, we provide the regret analysis of Meta-Ban and shows that Meta-Ban can achieve a sharper regret upper bound than the close related works. In the end, we conduct extensive experiments to evaluate its empirical performance compared to SOTA baselines."
      },
      {
        "heading": "A SUPPLEMENTARY",
        "text": "In this section, we first introduce the related works and present the experiments setup coming with extensive ablation studies."
      },
      {
        "heading": "A.1 RELATED WORK",
        "text": "In this section, we briefly review the related works, including clustering of bandits and neural bandits.\nClustering of bandits. CLUB Gentile et al. (2014) first studies exploring collaborative effects among users in contextual bandits where each user hosts an unknown vector to represent the behavior based on the linear reward function. CLUB formulates user similarity on an evolving graph and selects an arm leveraging the clustered groups. Then, Li et al. (2016); Gentile et al. (2017) propose to cluster users based on specific contents and select arms leveraging the aggregated information of conditioned groups. Li et al. (2019) improve the clustering procedure by allowing groups to split and merge. Ban and He (2021) use seed-based local clustering to find overlapping groups, different from globally clustering on graphs. Korda et al. (2016); Yang et al. (2020); Wu et al. (2021) also study clustering of bandits with various settings in recommendation system. However, all the series of works are based on the linear reward assumption, which may fail in many real-world applications.\nNeural bandits. Allesiardo et al. (2014) use a neural network to learn each action and then selects an arm by the committee of networks with \u03f5-greedy strategy. Lipton et al. (2018); Riquelme et al. (2018) adapt the Thompson Sampling to the last layer of deep neural networks to select an action. However, these approaches do not provide regret analysis. Zhou et al. (2020) and Zhang et al. (2021) first provide the regret analysis of UCB-based and TS-based neural bandits, where they apply ridge regression on the space of gradients. Ban et al. (2021a) study a combinatorial problem in multiple neural bandits with a UCB-based exploration. Jia et al. (2021) perturb the training samples for incorporating both exploitation and exploration portions. EE-Net(Ban et al., 2021b) proposes to use another neural network for exploration. Xu et al. 
(2020) combine the last-layer neural network embedding with linear UCB to improve computational efficiency. Unfortunately, all these methods neglect the collaborative effects among users in contextual bandits. Dutta et al. (2019) use an off-the-shelf meta-learning approach to solve the contextual bandit problem, in which the expected reward is formulated as a Q-function. Santana et al. (2020) propose a Hierarchical Reinforcement Learning framework for recommendation in a dynamic environment, where a meta-bandit is used to select an independent recommender system. Kassraie and Krause (2022) revisit the NeuralUCB-type algorithm and show an O\u0303(T ) regret bound without the restrictive assumptions on the context. Maillard and Mannor (2014); Hong et al. (2020) study the latent bandit problem, where the reward distributions of arms are conditioned on some unknown discrete latent state, and prove an O\u0303(T ) regret bound for their algorithms as well.\nKey Differences from Related Work. We emphasize the important improvements we have made in each aspect. (1) Compared to (Gentile et al., 2017), the only similarity is that we adopt the idea of leveraging relative groups. (2) Compared to NeuUCB (Zhou et al., 2020), in addition to the fact that they do not incorporate collaborative filtering effects, we have provided important technical improvements. The UCB in NeuUCB has to maintain a gradient outer-product matrix (Zt in NeuUCB), which occupies space Rp\u00d7p (\u03b8 \u2208 Rp), and only incorporates user-side information. The new UCB introduced in our paper does not need to keep this gradient matrix and contains both group-side and user-side information. (3) Compared to (Wang et al., 2020a), we achieve the convergence of the meta-learner in the online learning setting with bandit feedback, where we need to tackle the challenge that the training data of each round may come from different user distributions."
      },
      {
        "heading": "A.2 EXPERIMENTS SETUP AND ADDITIONAL RESULTS",
        "text": "ML Datasets. In all ML datasets, following the evaluation setting of existing works (Zhou et al., 2020; Valko et al., 2013; Deshmukh et al., 2017), we transform the classification problem into a bandit problem. Take Mnist as an example. Given an image x \u2208 Rd, it will be transformed into 10 arms, x1 = (x\u22a4, 0, . . . , 0)\u22a4,x2 = (0,x\u22a4, . . . , 0)\u22a4, . . . ,x10 = (0, 0, . . . ,x\u22a4)\u22a4, matching 10 class in sequence. The reward is defined as 1 if the index of selected arm equals x\u2019 ground-truth class; Otherwise, the reward is 0. In the experiments of Cifar10, Emnist, and Shuttle, we consider each class as a user and randomly draw a class first and then randomly draw a sample from the class. In\nthe experiments of Mnist and Notmnist (in Figure 1), we add these datasets together as these two both are 10-class classification datasets, to increase the difficulty of this problem. Thus, we consider these two datasets as two groups, where each class can be thought of as a user. In each round, we randomly select a group (i.e., Mnist or Notmnist), and then we randomly choose an image from a class (user). Note that we run all approaches on the Mnist as well (in Figure 2) only this time instead of on Mnist and Notmnist together (in Figure 1).\nMovielens (Harper and Konstan, 2015) and Yelp2 datasets. MovieLens is a recommendation dataset consisting of 25 million reviews between 1.6 \u00d7 105 users and 6 \u00d7 104 movies. Yelp is a dataset released in the Yelp dataset challenge, composed of 4.7 million review entries made by 1.18 million users to 1.57\u00d7 105 restaurants. For both these two datasets, we extract ratings in the reviews and build the rating matrix by selecting the top 2000 users and top 10000 restaurants(movies). Then, we use the singular-value decomposition (SVD) to extract a normalized 10-dimensional feature vector for each user and restaurant(movie). 
The goal in this problem is to select the restaurants (movies) with bad ratings (due to the imbalance of these two datasets, i.e., most of the entries have good ratings). Given an entry with a specific user, we generate the reward using the user\u2019s rating stars for the restaurant (movie). If the user\u2019s rating is less than 2 stars (out of 5 stars), the reward is 1; otherwise, the reward is 0. As a single user may not have enough entries to run the experiments, we use K-means to divide users into 50 clusters, where each cluster forms a new user. Therefore, the user pool consists of 50 users in total for these two datasets. Then, in each round, a user ut to serve is randomly drawn from the user pool. For the arm pool, we randomly choose one restaurant (movie) rated by ut with reward 1 and randomly pick 9 other restaurants (movies) rated by ut with reward 0. Therefore, there are 10 arms in total in each round. We conduct experiments on these two datasets, respectively.\nBaselines. We compare Meta-Ban to six State-Of-The-Art (SOTA) baselines as follows: (1) CLUB (Gentile et al., 2014) clusters users based on the connected components in the user graph and refines the groups incrementally. 
When selecting an arm, it uses the newly formed group parameter instead of the user parameter, with UCB-based exploration; (2) COFIBA (Li et al., 2016) clusters on both the user and arm sides based on evolving graphs, and chooses arms using a UCB-based exploration strategy; (3) SCLUB (Li et al., 2019) improves the algorithm CLUB by allowing groups to merge and split to enhance the group representation; (4) LOCB (Ban and He, 2021) uses seed-based clustering, allows groups to overlap, and chooses the best group candidates when selecting arms; (5) NeuUCB-ONE (Zhou et al., 2020) uses one neural network to formulate all users and selects arms via a UCB-based recommendation; (6) NeuUCB-IND (Zhou et al., 2020) uses one neural network to formulate each user separately (N networks in total) and applies the same strategy to choose arms. Since LinUCB (Li et al., 2010) and KernelUCB (Valko et al., 2013) are outperformed by the above baselines, we do not include them in the comparison.\nConfigurations. All methods have two parameters: \u03bb, which tunes the regularization at initialization, and \u03b1, which adjusts the UCB value. To find their best performance, we conduct a grid search for \u03bb and \u03b1 over (0.01, 0.1, 1) and (0.0001, 0.001, 0.01, 0.1), respectively. For LOCB, the number of random seeds is set to 20, following their default setting. For Meta-Ban, we set \u03bd to 5 and \u03b3 to 0.4 to tune the group set. For a fair comparison, for NeuUCB and Meta-Ban, we use the same simple neural network with 2 fully-connected layers, and the width m is set to 100. To save running time, we train the neural networks every 10 rounds in the first 1000 rounds and every 100 rounds afterwards. 
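The training schedule in the last sentence can be written down directly (a small sketch; `should_train` is a hypothetical helper name, not from the paper's code):

```python
def should_train(t):
    """Train every 10 rounds in the first 1000 rounds,
    and every 100 rounds afterwards."""
    return t % 10 == 0 if t <= 1000 else t % 100 == 0

# over 2000 rounds this trains 110 times: 100 in [1, 1000], 10 in (1000, 2000]
trained = [t for t in range(1, 2001) if should_train(t)]
print(len(trained))
```

Such a decaying schedule trades a small amount of model freshness for a large reduction in training cost once the networks have stabilized.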
In the end, we choose the best results for the comparison and report the mean and standard deviation (shadows in figures) of 10 runs for all methods.\nA.3 SENSITIVITY STUDY FOR \u03bd AND \u03b3\nIn this section, we conduct an ablation study for the group parameter \u03bd. Here, we fix \u03b3 at 0.4 and vary \u03bd to examine its effect on Meta-Ban\u2019s performance.\nFigure 4 shows how Meta-Ban\u2019s performance varies with respect to \u03bd. When setting \u03bd = 1.1, the exploration range of groups is very narrow. This means that, in each round, the inferred group size |N\u0302ut(xt,i)| tends to be small. Although the members in the inferred group N\u0302ut(xt,i) are more likely to be true members of ut\u2019s relative group, we may lose many other potential group members in the beginning phase. When setting \u03bd = 5, the exploration range of groups is wider. This indicates we have more chances to include more members in the inferred group, while this group may contain some false positives. With a larger group size, the meta-learner \u0398 can exploit more information. Therefore, Meta-Ban with \u03bd = 5 outperforms \u03bd = 1.1. However, continuing to increase \u03bd does not always improve performance, since the inferred group may contain some non-collaborative users, introducing noise. Therefore, in practice, we usually set \u03bd to a relatively large number. We can even set \u03bd as a monotonically decreasing function of t.\nA.4 SENSITIVITY STUDY FOR \u03b1\nFigure 5 depicts the sensitivity of Meta-Ban with regard to \u03b1. Meta-Ban shows robust performance as \u03b1 varies, which stems from the strong discriminability of the meta-learner and the derived upper bound. Even as the magnitude of \u03b1 changes, the order of arms ranked by Meta-Ban is only slightly affected. Thus, Meta-Ban obtains robust performance, alleviating hyperparameter tuning."
      },
      {
        "heading": "A.5 ABLATION STUDY FOR INPUT DIMENSION",
        "text": "We run the experiments on MovieLens dataset with different input dimensions and report the results as follows. Table 1 summarizes the final regret of each method with different input dimensions (5 runs). Meta-Ban keeps the similar regret with the small fluctuation. This fluctuation is acceptable given that different input dimensions may contain different amount of information. NeuUCB-ONE and NeuUCB-IND also use the neural network to learn the reward function, so they have the similar property. In contrast, the regret of linear bandits (CLUB, COFIBA, SCLUB, LOCB) is affected more drastically by the input dimensions, which complies with their regret analysis."
      },
      {
        "heading": "B PROOF OF THEOREM 4.2",
        "text": "Theorem B.1 (Theorem 4.2 restated). Given the number of rounds T , assume that each user is uniformly served and set T\u0303 u = T ut , \u2200t \u2208 [T ]. For any \u03b4 \u2208 (0, 1), 0 < \u03c1 \u2264 O( 1L ), 0 < \u03f51 \u2264 \u03f52 \u2264 1, \u03bb > 0, suppose m, \u03b71, \u03b72, J1, J2 satisfy\nm \u2265 \u2126\u0303 ( max { poly(T, L, \u03c1\u22121), e \u221a log(O(Tk)/\u03b4) }) , \u03b71 = \u0398 ( \u03c1\npoly(T, L) \u00b7m\n) ,\n\u03b72 = min\n{ \u0398 ( \u221a n\u03c1\nT 4L2m\n) ,\u0398 ( \u221a \u03c1\u03f52\nT 2L2\u03bbn2\n)} , J1 = \u0398 ( poly(T, L)\n\u03c12 log\n1\n\u03f51 ) J2 = max { \u0398 ( T 5(O(T log2 m)\u2212 \u03f52)L2m\u221a\nn\u03f52\u03c1\n) ,\u0398 ( T 3L2\u03bbn2(O(T log2 m\u2212 \u03f52))\n\u03c1\u03f52\n)} .\n(6)\nThen, with probability at least 1\u2212 \u03b4, Algorithms 1-3 has the following regret upper bound:\nRT \u2264 2 \u221a n ( \u221a \u03f51T +O ( L \u221a T ) + (1 + \u03be1) \u221a 2T log(T/\u03b4) ) +O ( T \u221a logm\u03b2 4/3 T L 4 ) + ZT\nwhere\n\u03beT =2 +O ( T 4nL logm\n\u03c1 \u221a m\n) +O ( T 5nL2 log11/6 m\n\u03c1m1/6\n) ,\n\u03b2T = O(n2T 3\n\u221a \u03f52 log\n2 m) +O(T 2 log2 m\u2212 t\u03f52)\u03c11/2\u03bbn O(\u03c1 \u221a m\u03f52) ,\nZT =O\n( T 5L2 log11/6 m\n\u03c1m1/6\n) + T (L+ 1)2 \u221a m logm\u03b2\n4/3 t +O\n( LT 4\n\u03c1 \u221a m logm ) +O ( L4T 5\n\u03c14/3m2/3 log4/3 m\n) .\nWith the proper choice of m, we have\nRT \u2264 O( \u221a n) (\u221a T + L \u221a T + \u221a 2T log(T/\u03b4) ) +O(1). (7)\nProof. Let x\u2217t = argmaxxt,i\u2208Xt hut(xt,i) given Xt, ut, and let \u0398 \u2217 t be corresponding parameters trained by Algorithm 2 based on N\u0302 utt (x\u2217t ). 
Then, for the regret of one round t \u2208 [T ], we have\nRt|ut = E\nrt,i|xt,i,i\u2208[k] [r\u2217t \u2212 rt | ut]\n= E rt,i|xt,i,i\u2208[k]\n[ r\u2217t \u2212 f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 ) + f(x \u2217 t ; \u03b8 ut,\u2217 t\u22121 )\u2212 rt ] = E\nrt,i|xt,i,i\u2208[k]\n[ r\u2217t \u2212 f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 ) + f(x \u2217 t ; \u03b8 ut,\u2217 t\u22121 )\u2212 f(x\u2217t ; \u0398\u2217t ) | N\u0302 ut t (x \u2217 t ) + f(x \u2217 t ; \u0398 \u2217 t ) | N\u0302 ut t (x \u2217 t )\u2212 rt ] \u2264 E\nr\u2217t |x\u2217t\n[ r\u2217t \u2212 f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 ) ] + |f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 )\u2212 f(x\u2217t ; \u0398\u2217t ) | N\u0302 ut t (x \u2217 t )|\n+ E rt|xt\n[ f(x\u2217t ; \u0398 \u2217 t ) | N\u0302 ut t (x \u2217 t )\u2212 rt ] where the expectation is taken over rt,i conditioned on xt,i for each i \u2208 [k], \u03b8ut,\u2217t\u22121 are intermediate user parameters introduced in Lemma E.4 trained on Bayes-optimal pairs by Algorithm 3, e.g., (x\u2217t\u22121, r \u2217 t\u22121), and \u0398 \u2217 t are meta parameters trained on the group N\u0302 ut t (x \u2217 t ) using Algorithm 2. 
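Cleanly typeset, the single-round decomposition above reads as follows (the underbrace labels are our gloss on the three terms, not notation from the paper):

```latex
R_t \,\big|\, u_t \;\le\;
\underbrace{\mathbb{E}_{r_t^* \mid x_t^*}\!\Big[r_t^* - f\big(x_t^*;\theta^{u_t,*}_{t-1}\big)\Big]}_{\text{user-learner estimation error}}
\;+\; \underbrace{\Big|f\big(x_t^*;\theta^{u_t,*}_{t-1}\big) - f\big(x_t^*;\Theta_t^*\big)\Big|}_{\text{user-learner vs.\ meta-learner gap}}
\;+\; \underbrace{\mathbb{E}_{r_t \mid x_t}\!\Big[f\big(x_t^*;\Theta_t^*\big) - r_t\Big]}_{\text{meta-learner estimation error}}
```

Summing this bound over $t \in [T]$ yields the cumulative-regret bound developed next.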
Then,\nthe cumulative regret of T rounds can be upper bounded by\nRT = T\u2211 t=1 Rt|ut\n\u2264 T\u2211\nt=1\nE r\u2217t |x\u2217t\n[ |r\u2217t \u2212 f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 )| ] + T\u2211 t=1 |f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 )\u2212 f(x\u2217t ; \u0398\u2217t )|+ T\u2211 t=1 E rt|xt [ f(x\u2217t ; \u0398 \u2217 t ) | N\u0302 ut t (x \u2217 t )\u2212 rt ] (a)\n\u2264 \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03be1) \u221a 2\u00b5ut log(T/\u03b4) ]\n+ T\u2211 t=1 [ \u03b2t \u00b7 \u2225g(x\u2217t ; \u0398\u2217t )\u2212 g(x\u2217t ; \u03b8 ut,\u2217 0 )\u22252 + Zt ] + T\u2211 t=1 E rt|xt [ f(x\u2217t ; \u0398 \u2217 t ) | N\u0302 ut t (x \u2217 t )\u2212 rt ] (b)\n\u2264 \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03be1) \u221a 2\u00b5ut log(T/\u03b4) ]\n+ T\u2211 t=1 [\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03b8ut0 )\u22252 + Zt] + T\u2211 t=1 E rt|xt [ f(xt; \u0398t) | N\u0302 utt (xt)\u2212 rt ] where (a) is the applications of Lemma E.2 and Lemma C.1, and (b) is due to the selection criterion of Algorithm 1 where \u03b8ut0 = \u03b8 ut,\u2217 0 according to our initialization. 
Thus, we have\nRT = T\u2211 t=1 Rt|ut\n\u2264 \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03be1) \u221a 2\u00b5ut log(T/\u03b4) ]\n+ T\u2211 t=1 [\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03980)\u22252 + Zt]\n+ T\u2211 t=1 E rt|xt [ f(xt; \u0398t) | N\u0302 utt (xt)\u2212 f(xt; \u03b8 ut t\u22121) + f(xt; \u03b8 ut t\u22121)\u2212 rt ] \u2264 \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03be1) \u221a 2\u00b5ut log(T/\u03b4) ]\n+ T\u2211 t=1 [\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03980)\u22252 + Zt]\n+ T\u2211 t=1 |f(xt; \u0398t) | N\u0302 utt (xt)\u2212 f(xt; \u03b8 ut t\u22121)|+ T\u2211 t=1 E rt|xt [ f(xt; \u03b8 ut t\u22121)\u2212 rt ] (c)\n\u22642 \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03be1) \u221a 2\u00b5ut log(T/\u03b4) ]\n+ 2 T\u2211 t=1 [\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03980)\u22252 + Zt]\n(d) \u22642 \u221a n \u221a\u03f51T +O (L\u221aT)+ (1 + \u03be1)\ufe38 \ufe37\ufe37 \ufe38 I3 \u221a 2T log(T/\u03b4)  + 2\nT\u2211 t=1\n\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03980)\u22252\ufe38 \ufe37\ufe37 \ufe38 I1 +2\nT\u2211 t=1\nZt\ufe38 \ufe37\ufe37 \ufe38 I2\nwhere (c) is an application of Lemma E.1 and Lemma C.1 and (d) is an application of Lemma E.1 with Hoeffding-Azuma inequality.\nFor I1, recall that \u03b2t = O(n2t3\n\u221a \u03f52 log2 m)+O(t2 log2 m\u2212t\u03f52)\u03c11/2\u03bbn\nO(\u03c1 \u221a m\u03f52) . 
Then, using Theorem 5 in (AllenZhu et al., 2019), we have\nI1 \u2264 T\u2211\nt=1\n\u03b2t \u00b7 O (\u221a logm\u03b2 1/3 t L 3\u2225g(xt; \u03980)\u22252 )\n\u2264\ufe38\ufe37\ufe37\ufe38 E2\nO ( T \u221a logm\u03b2\n4/3 T L\n4 )\n\u2264\ufe38\ufe37\ufe37\ufe38 E3 O(1) (8)\nwhere , E2 is as the Lemma E.10 and E3 is because of the choice of m (\u03b2t has the complexity of O\u0303 (\n1 m1/2\n) and m \u2265 \u2126\u0303(T 30)).\nFor I2, recall that Zt = O ( (t\u22121)4L2 log11/6 m \u03c1m1/6 ) +(L+1)2 \u221a m logm\u03b2 4/3 t +O ( L ( (t\u22121)3 \u03c1 \u221a m logm ))\n+O(L\u03b2t) +O ( L4 ( (t\u22121)3 \u03c1 \u221a m logm )4/3) . Then, we have\nI2 \u2264O\n( T 5L2 log11/6 m\n\u03c1m1/6\n) + T (L+ 1)2 \u221a m logm\u03b2\n4/3 t +O\n( LT 4\n\u03c1 \u221a m logm ) +O ( L4T 5\n\u03c14/3m2/3 log4/3 m ) =ZT .\n(9)\nI2 has the complexity of O\u0303 (\n1 m1/6\n) . Therefore, I2 \u2264 O(1) when m \u2265 \u2126\u0303(T 30).\nFor I3, as the choice of m, we have (1 + \u03be1) \u2264 O(1). The proof is complete."
      },
      {
        "heading": "C BRIDGE META-LEARNER AND USER-LEARNER",
        "text": "Lemma C.1. For any \u03b4 \u2208 (0, 1), \u03c1 \u2208 (0,O( 1L )], 0 < \u03f51 \u2264 \u03f52 \u2264 1, \u03bb > 0, suppose m, \u03b71, \u03b72, J1, J2 satisfy the conditions in Eq.(6). Then, with probability at least 1\u2212 \u03b4, for any t \u2208 [T ] and xt satisfying \u2225xt\u22252 = 1, given the serving user u \u2208 N and \u0398t returned by Algorithm 2 based on N\u0302 ut (xt), it holds uniformly for Algorithms 1-3 that\n|f(xt; \u03b8ut\u22121)\u2212 f(xt; \u0398t)| \u2264 \u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03b8u0 )\u22252 + Zt, (10)\nwhere\n\u03b2t = O(n2t3\n\u221a \u03f52 log\n2 m) +O(t2 log2 m\u2212 t\u03f52)\u03c11/2\u03bbn O(\u03c1 \u221a m\u03f52) ,\nZt =O\n( (t\u2212 1)4L2 log11/6 m\n\u03c1m1/6\n) + (L+ 1)2 \u221a m logm\u03b2\n4/3 t\n+O ( L ( (t\u2212 1)3\n\u03c1 \u221a m logm\n)) .\nProof. First, we have\n|f(xt; \u03b8ut\u22121)\u2212 f(xt; \u0398t)| \u2264 |fut(xt; \u03b8ut\u22121)\u2212 \u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9 \u2212 f(xt; \u03b8u0 )|\ufe38 \ufe37\ufe37 \ufe38 I1\n+ |\u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9+ f(xt; \u03b8u0 )\u2212 f(xt; \u0398t)|\ufe38 \ufe37\ufe37 \ufe38 I2\n(11)\nwhere the inequality is using Triangle inequality. 
For I1, based on Lemma E.9, we have\nI1 \u2264 O(w1/3L2 \u221a m log(m))\u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u22252 \u2264 O\n( t4L2 log11/6 m\n\u03c1m1/6\n) ,\nwhere the second equality is based on the Lemma E.8 (4): \u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut\u22121) 3 \u03c1 \u221a m logm ) \u2264\nO (\n(t\u22121)3 \u03c1 \u221a m\nlogm ) = w.\nFor I2, we have |\u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9+ f(xt; \u03b8u0 )\u2212 f(xt; \u0398t)|\n\u2264\ufe38\ufe37\ufe37\ufe38 E1 |\u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9 \u2212 \u27e8g(xt; \u0398t),\u0398t \u2212\u03980\u27e9| + |\u27e8g(xt; \u0398t),\u0398t \u2212\u03980\u27e9+ f(xt; \u03b8u0 )\u2212 f(xt; \u0398t)| \u2264\ufe38\ufe37\ufe37\ufe38 E2 |\u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9 \u2212 \u27e8g(xt; \u03b8u0 ),\u0398t \u2212\u03980\u27e9|\ufe38 \ufe37\ufe37 \ufe38 I3\n+ |\u27e8g(xt; \u03b8u0 ),\u0398t \u2212\u03980\u27e9 \u2212 \u27e8g(xt; \u0398t),\u0398t \u2212\u03980\u27e9|\ufe38 \ufe37\ufe37 \ufe38 I4 + |\u27e8g(xt; \u0398t),\u0398t \u2212\u03980\u27e9+ f(xt; \u03b8u0 )\u2212 f(xt; \u0398t)|\ufe38 \ufe37\ufe37 \ufe38 I5\n(12)\nwhere E1, E2 use Triangle inequality. 
For I3, we have\n|\u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9 \u2212 \u27e8g(xt; \u03b8u0 ),\u0398t \u2212\u03980\u27e9| \u2264|\u27e8g(xt; \u03b8ut\u22121), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9 \u2212 \u27e8g(xt; \u03b8u0 ), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9|+ |\u27e8g(xt; \u03b8u0 ), \u03b8ut\u22121 \u2212 \u03b8u0 \u27e9 \u2212 \u27e8g(xt; \u03b8u0 ),\u0398t \u2212\u03980\u27e9| \u2264 \u2225g(xt; \u03b8ut\u22121)\u2212 g(xt; \u03b8u0 )\u22252 \u00b7 \u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u22252\ufe38 \ufe37\ufe37 \ufe38\nM1\n+ \u2225g(xt; \u03b8u0 )\u22252 \u00b7 \u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u2212 (\u0398t \u2212\u03980)\u22252\ufe38 \ufe37\ufe37 \ufe38 M2\n(13) For M1, we have\nM1 \u2264\ufe38\ufe37\ufe37\ufe38 E3\nO ( (t\u2212 1)3\n\u03c1 \u221a m logm\n) \u00b7 \u2225g(xt; \u03b8ut\u22121)\u2212 g(xt; \u03b8u0 )\u22252\n\u2264\ufe38\ufe37\ufe37\ufe38 E4 O\n( L4 ( (t\u2212 1)3\n\u03c1 \u221a m logm )4/3) (14) where E3 is the application of Lemma E.8 and E4 utilizes Theorem 5 in Allen-Zhu et al. (2019) with Lemma E.8. For M2, we have\n\u2225g(xt; \u03980)\u22252 ( \u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u2212 (\u0398t \u2212\u03980)\u22252 ) \u2264\u2225g(xt; \u03980)\u22252 ( \u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u22252 + \u2225\u0398t \u2212\u03980\u22252\n) \u2264\ufe38\ufe37\ufe37\ufe38 E5 O(L) \u00b7 [ O ( (t\u2212 1)3 \u03c1 \u221a m logm ) + \u03b2t\n] (15) where E5 use Lemma E.10, E.8, and D.1. Combining Eq.(14) and Eq.(C), we have\nI3 \u2264 O ( L4 ( (t\u2212 1)3\n\u03c1 \u221a m logm\n)4/3) +O ( L ( (t\u2212 1)3\n\u03c1 \u221a m logm\n)) +O(L\u03b2t). (16)\n. 
For I4, we have I4 =|\u27e8g(xt; \u03980),\u0398t \u2212\u03980\u27e9 \u2212 \u27e8g(xt; \u0398t),\u0398t \u2212\u03980\u27e9|\n\u2264\u2225g(xt; \u0398t)\u2212 g(xt; \u03980)\u22252\u2225\u0398t \u2212\u03980\u22252 \u2264\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03980)\u22252\n(17)\nwhere the first inequality is because of Cauchy\u2013Schwarz inequality and the last inequality is by Lemma D.1. For I5, we have\nI5 = |\u27e8g(xt; \u0398t),\u0398t \u2212\u03980\u27e9+ f(xt; \u03980)\u2212 f(xt; \u0398t)| \u2264 (L+ 1)2 \u221a m logm\u03b2 4/3 t (18)\nwhere this inequality uses Lemma D.2 with Lemma D.1.\nCombing Eq.(11), (16), (17), and (18) completes the proof.\nLemma C.2. For any \u03b4 \u2208 (0, 1), \u03c1 \u2208 (0,O( 1L )], 0 < \u03f51 \u2264 \u03f52 \u2264 1, \u03bb > 0, suppose m, \u03b71, \u03b72, J1, J2 satisfy the conditions in Eq.(6). Then, with probability at least 1\u2212 \u03b4 over the random initialization, after t rounds, the error induced by meta-learner is upper bounded by:\nT\u2211 t=1 E rt|xt [|f(xt; \u0398t)\u2212 rt| | ut]\n\u2264 T\u2211\nt=1\nO (\u2225g(xt; \u0398t)\u2212 g(xt; \u03b8ut0 )\u22252)\u221a t + \u2211 u\u2208N \u00b5ut [ O ( L+ 1\u221a 2\u00b5ut ) + \u221a 2 log(t/\u03b4) \u00b5ut ] .\n(19)\nwhere the expectation is taken over rt conditioned on xt."
      },
      {
        "heading": "Proof.",
        "text": "T\u2211\nt=1\nE rt|xt [|f(xt; \u0398t)\u2212 rt||ut]\n= T\u2211 t=1 E rt|xt [|f(xt; \u0398t)\u2212 f(xt; \u03b8utt\u22121) + f(xt; \u03b8 ut t\u22121)\u2212 rt| | ut]\n\u2264 T\u2211\nt=1 |f(xt; \u0398t)\u2212 f(xt; \u03b8utt\u22121)|\ufe38 \ufe37\ufe37 \ufe38 I1 +\nT\u2211 t=1 E rt|xt\n[|f(xt; \u03b8utt\u22121)\u2212 rt| | ut]\ufe38 \ufe37\ufe37 \ufe38 I2\n.\n(20)\nFor I1, applying Lemma C.1, with probability at least 1\u2212 \u03b4, for any \u2225xt,j\u22252 = 1, we have\nI1 \u2264 T\u2211\nt=1\n(\u03b2t \u00b7 \u2225g(xt; \u0398t)\u2212 g(xt; \u03b8u0 )\u22252 + Zt) E1 \u2264 T\u2211 t=1 O (\u2225g(xt; \u0398t)\u2212 g(xt; \u03b8ut0 )\u22252)\u221a t\n(21)\nwhere E1 is the result of choice of m (m \u2265 \u2126\u0303(T 27)) for \u03b2t and Zt. For I2, based on the Lemma E.1, with probability at least 1\u2212 \u03b4, for any \u03f51 \u2208 (0, 1], we have\nI2 \u2264 \u2211 u\u2208N [\u221a \u03f51\u00b5ut +O ( L \u221a \u00b5ut ) + (1 + \u03bet) \u221a 2\u00b5ut log(t/\u03b4) ]\n\u2264 \u2211 u\u2208N \u00b5ut [ O ( L+ 1\u221a 2\u00b5ut ) + \u221a 2 log(t/\u03b4) \u00b5ut ] .\n(22)\nThe proof is complete."
      },
      {
        "heading": "D ANALYSIS FOR META-LEARNER",
        "text": "Lemma D.1. Given any \u03b4 \u2208 (0, 1), 0 < \u03f51 \u2264 \u03f52 \u2264 1, \u03bb > 0, \u03c1 \u2208 (0,O( 1L )], suppose m, \u03b71, \u03b72, J1, J2 satisfy the conditions in Eq.(6) and \u03980, \u03b8u0 are randomly initialized ,\u2200u \u2208 N . Then, with probability at least 1\u2212 \u03b4, these hold for Algorithms 1-3:\n1. Given any N \u2286 N , define LN (\u0398t,i) = 12 \u2211\nu\u2208N (x,r)\u2208T ut\u22121\n(f(x; \u0398t,i)\u2212 r)2, where \u0398t,i is re-\nturned by Algorithm 2 given N . Then, we have LN (\u0398t,i) \u2264 \u03f52 in J2 rounds.\n2. For any j \u2208 [J2], \u2225\u0398(j) \u2212\u0398(0)\u22252 \u2264 O(n2t3\n\u221a \u03f52 log2 m)+O(t2 log2 m\u2212t\u03f52)\u03c11/2\u03bbn\nO(\u03c1 \u221a m\u03f52)\n= \u03b21.\nProof. Define the sign matrix\nsign(\u03b8[i]) = { 1 if \u03b8[i] \u2265 0; \u22121 if \u03b8[i] < 0\n(23)\nwhere \u03b8[i] is the i-th element in \u03b8.\nFor the brevity, we use \u03b8\u0302ut to denote \u03b8\u0302 u \u00b5ut , For each u \u2208 N , we have T ut\u22121. 
Given a group N , then recall that\nLN = \u2211 u\u2208N L ( \u03b8\u0302ut ) + \u03bb\u221a m \u2211 u\u2208N \u2225\u03b8\u0302ut \u22251.\nThen, in round t+ 1, for any j \u2208 [J2] we have \u0398(j) \u2212\u0398(j\u22121) = \u03b72 \u00b7 \u25bd{\u03b8\u0302ut }u\u2208NLN\n= \u03b72 \u00b7 (\u2211 n\u2208N \u25bd\u03b8\u0302ut L+ \u03bb\u221a m \u2211 u\u2208N sign(\u03b8\u0302ut ) ) (24)\nAccording to Theorem 4 in (Allen-Zhu et al., 2019), given \u0398(j),\u0398(j\u22121), we have\nLN (\u0398(j)) \u2264LN (\u0398(j\u22121))\u2212 \u27e8\u25bd\u0398(j\u22121)LN ,\u0398(j) \u2212\u0398(j\u22121)\u27e9 + \u221a tLN (\u0398(j\u22121)) \u00b7 w1/3L2 \u221a m logm \u00b7 O(\u2225\u0398(j) \u2212\u0398(j\u22121)\u22252) +O(tL2m)\u2225\u0398(j) \u2212\u0398(j\u22121)\u222522\n\u2264\ufe38\ufe37\ufe37\ufe38 E1 LN (\u0398(j\u22121))\u2212 \u03b72\u2225 \u2211 n\u2208N \u25bd\u03b8\u0302ut L+ \u03bb\u221a m \u2211 u\u2208N sign(\u03b8\u0302ut )\u22252\u2225\u25bd\u0398(j\u22121)LN \u22252+\n+ \u03b72w 1/3L2 \u221a tm logm\u2225 \u2211 n\u2208N \u25bd\u03b8\u0302ut L+ \u03bb\u221a m \u2211 u\u2208N sign(\u03b8\u0302ut )\u22252 \u221a LN (\u0398(j\u22121))\n+ \u03b722O(tL2m)\u2225 \u2211 n\u2208N \u25bd\u03b8\u0302ut L+ \u03bb\u221a m \u2211 u\u2208N sign(\u03b8\u0302ut )\u222522\n(25)\n\u21d2 LN (\u0398(j)) \u2264 LN (\u0398(j\u22121))\u2212 \u03b72 \u221a n \u2211 u\u2208N \u2225\u25bd\u03b8\u0302ut L\u22252\u2225\u25bd\u0398(j\u22121)LN \u22252+\n+ \u03b72w 1/3L2 \u221a tnm logm \u2211 n\u2208N \u2225\u25bd\u03b8\u0302ut L\u22252 \u221a LN (\u0398(j\u22121)) + \u03b722O(tL2m)n \u2211 n\u2208N \u2225\u25bd\u03b8\u0302ut L\u2225 2 2\n\u2212 \u03b72\u03bb\u221a m \u2225\u25bd\u0398(j\u22121)LN \u22252 + \u03b72w\n1/3nL2 \u221a t logm\u03bb \u221a LN (\u0398(j\u22121)) +O(2\u03b722tL2)\u03bb2n2\n(26)\n\u21d2 LN (\u0398(j)) \u2264\ufe38\ufe37\ufe37\ufe38 E2 LN (\u0398(j\u22121))\u2212\u03b72 \u221a n \u2211 u\u2208N \u03c1m t\u00b5ut \u221a L(\u03b8\u0302ut )LN 
(\u0398(j\u22121))+\ufe38 \ufe37\ufe37 \ufe38 I1\n+\u03b72w 1/3L2m \u221a t\u03c1n logm \u2211 n\u2208N \u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121)) + \u03b722t2L2m2n \u2211 n\u2208N\nL(\u03b8\u0302ut )\ufe38 \ufe37\ufe37 \ufe38 I1\n\u2212 \u03b72\u03bb\n\u221a \u03c1\nt\n\u221a LN (\u0398(j\u22121)) + \u03b72w1/3nL2 \u221a t logm\u03bb \u221a LN (\u0398(j\u22121)) +O(2\u03b722tL2)\u03bb2n2\ufe38 \ufe37\ufe37 \ufe38\nI2\n(27) where E1 is because of Cauchy\u2013Schwarz inequality inequality, E2 is due to Theorem 3 in (Allen-Zhu et al., 2019), i.e., the gradient lower bound. Recall that\n\u03b72 = min\n{ \u0398 ( \u221a n\u03c1\nt4L2m\n) ,\u0398 ( \u221a \u03c1\u03f52\nt2L2\u03bbn2\n)} , LN (\u03980) \u2264 O(t log2 m)\nJ2 = max\n{ \u0398 ( t5(O(t log2 m)\u2212 \u03f52)L2m\u221a\nn\u03f52\u03c1\n) ,\u0398 ( t3L2\u03bbn2(O(t log2 m\u2212 \u03f52))\n\u03c1\u03f52\n)} .\n(28)\nBefore achieving LN (\u0398(j)) \u2264 \u03f52, we have, for each u \u2208 N , L(\u03b8\u0302ut ) \u2264 LN (\u0398(j\u22121)), for I1, we have\nI1 \u2264\u2212 \u03b72 \u221a n \u2211 u\u2208N \u03c1m t\u00b5ut \u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121))+\n+ \u03b72w 1/3L2m \u221a t\u03c1n logm \u2211 n\u2208N \u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121)) + \u03b722t2L2m2n \u2211 n\u2208N \u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121))\n\u2264\u2212 \u03b72n \u221a n\u03c1m\nt2\n\u2211 n\u2208N \u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121))\n+ ( \u03b72w 1/3L2m \u221a t\u03c1n logm+ \u03b722t 2L2m2n ) \u2211\nn\u2208N\n\u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121))\n\u2264\ufe38\ufe37\ufe37\ufe38 E3\n\u2212\u0398 ( \u03b72n \u221a n\u03c1m\nt2\n) \u2211 n\u2208N \u221a L(\u03b8\u0302ut )LN (\u0398(j\u22121))\n\u2264\ufe38\ufe37\ufe37\ufe38 E4\n\u2212\u0398 ( \u03b72n \u221a n\u03c1m\nt2\n) \u2211 n\u2208N L(\u03b8\u0302ut )\n(29) where E3 is because of the choice of \u03b72. As LN (\u03980) \u2264 O(t log2 m), we have LN (\u0398(j)) \u2264 \u03f52 in J\u0398 rounds. 
For I2, we have\nI2 \u2264\ufe38\ufe37\ufe37\ufe38 E5 \u2212 \u03b72\u03bb\n\u221a \u03c1\nt\n\u221a \u03f52 + \u03b72w 1/3nL2 \u221a t logm\u03bb \u221a LN (\u0398(0)) +O(2\u03b722tL2)\u03bb2n2\n\u2264\ufe38\ufe37\ufe37\ufe38 E6 \u2212 \u03b72\u03bb\n\u221a \u03c1\nt\n\u221a \u03f52 + \u03b72w 1/3nL2 \u221a t logm\u03bb \u221a O(t log2 m) +O(2\u03b722tL2)\u03bb2n2\n\u2264 ( \u2212 \u03b72 \u221a \u03c1\nt\n\u221a \u03f52 + \u03b72w 1/3nL2 \u221a t logm \u221a O(t log2 m) +O(2\u03b722tL2)\u03bbn2 ) \u03bb\n\u2264\ufe38\ufe37\ufe37\ufe38 E7\n\u2212\u0398( \u03b72 \u221a \u03c1\u03f52\nt )\u03bb\n(30)\nwhere E5 is by LN (\u0398(j\u22121)) \u2265 \u03f52 and LN (\u0398(j\u22121)) \u2264 LN (\u0398(0)), E6 is according to Eq.(28), and E7 is because of the choice of \u03b72.\nCombining above inequalities together, we have LN (\u0398(j)) \u2264LN (\u0398(j\u22121))\u2212\u0398 ( \u03b72n \u221a n\u03c1m\nt2\n) \u2211 n\u2208N L(\u03b8\u0302ut )\u2212\u0398( \u03b72 \u221a \u03c1\u03f52 t )\u03bb\n\u2264LN (\u0398(j\u22121))\u2212\u0398( \u03b72 \u221a \u03c1\u03f52\nt )\u03bb\n(31)\nThus, because of the choice of J2, \u03b72, we have\nLN (\u0398(J2)) \u2264 LN (\u0398(0))\u2212 J2 \u00b7\u0398( \u03b72 \u221a \u03c1\u03f52\nt )\u03bb\n\u2264 O(t log2 m)\u2212 J2 \u00b7\u0398( \u03b72 \u221a \u03c1\u03f52\nt ) \u2264 \u03f52.\n(32)\nThe proof of (1) is completed.\nAccording to Lemma E.8, For any j \u2208 [J1], L(\u03b8u(j)) \u2264 (1\u2212 \u2126( \u03b7\u03c1m d\u00b5ut 2 ))L(\u03b8u(j\u22121)). 
Therefore, for any u \u2208 [n], we have \u221a\nL(\u03b8\u0302ut ) \u2264 J1\u2211 j=0 \u221a L(\u03b8u(j)) \u2264 O ( (\u00b5ut ) 2 \u03b71\u03c1m ) \u00b7 \u221a L(\u03b8u(0))\n\u2264 O ( (\u00b5ut ) 2\n\u03b71\u03c1m\n) \u00b7 O( \u221a \u00b5ut log 2 m),\n(33)\nwhere the last inequality is because of Lemma E.8 (3).\nSecond, we have\n\u2225\u0398(J2) \u2212\u03980\u22252 \u2264 J2\u2211 j=1 \u2225\u0398(j) \u2212\u0398(j\u22121)\u22252\n\u2264 J2\u2211 j=1 \u03b72\u2225 \u2211 n\u2208N \u25bd\u03b8\u0302ut L+ \u03bb\u221a m \u2211 u\u2208N sign(\u03b8\u0302ut )\u22252 \u2264 J2\u2211 j=1 \u03b72\u2225 \u2211 u\u2208N\n\u25bd\u03b8\u0302ut L\u2225F\ufe38 \ufe37\ufe37 \ufe38 I3\n+ J2\u03b72\u03bbn\u221a\nm\n(34)\nFor I3, we have J2\u2211 j=1 \u03b72\u2225 \u2211 u\u2208N \u25bd\u03b8\u0302ut L\u22252 \u2264 J2\u2211 j=1 \u03b72 \u221a |N | \u2211 u\u2208N \u2225\u25bd\u03b8\u0302ut L\u22252\n\u2264\ufe38\ufe37\ufe37\ufe38 E8 J2\u2211 j=1 \u03b72 \u221a n \u2211 u\u2208N \u2225\u25bd\u03b8\u0302ut L\u22252\n\u2264\ufe38\ufe37\ufe37\ufe38 E9 O J2\u2211 j=1 (\u03b72) \u221a ntm \u2211 u\u2208N \u221a L(\u03b8\u0302ut )\n(35)\n\u21d2 J2\u2211 j=1 \u03b72\u2225 \u2211 u\u2208N \u25bd\u03b8\u0302ut L\u22252 \u2264\ufe38\ufe37\ufe37\ufe38 E10 O(\u03b72) \u221a ntm \u2211 u\u2208N J2\u2211 j=1 \u221a L(\u03b8\u0302ut )\n\u2264\ufe38\ufe37\ufe37\ufe38 E11 O(\u03b72) \u221a ntm \u00b7 n \u00b7 O\n( (\u00b5ut ) 2\n\u03b71\u03c1m\n) \u00b7 O( \u221a \u00b5ut log 2 m)\n\u2264 O\n( \u03b72n 3/2t5/2 \u221a t log2 m\n\u03b71\u03c1 \u221a m\n) (36)\nwhere E1 is because of |N | \u2264 n, E2 is due to Theorem 3 in (Allen-Zhu et al., 2019), and E3 is as the result of Eq.(33).\nCombining Eq.(34) and Eq.(36), we have\n\u2225\u0398(J2) \u2212\u03980\u22252 \u2264 O\n( \u03b72n 3/2t3 \u221a log2 m+ J2\u03b72\u03b71\u03c1\u03bbn\n\u03b71\u03c1 \u221a m\n)\n\u2264O\n( \u03b72n 3/2t3 \u221a log2 m+O(t2 log2 m\u2212 t\u03f52))\u03b71 \u221a \u03c1\u03bbn\n\u03b71\u03c1 \u221a 
m\u03f52\n)\n\u2264O(n 2t3 \u221a \u03f52 log\n2 m) +O(t2 log2 m\u2212 t\u03f52)\u03c11/2\u03bbn O(\u03c1 \u221a m\u03f52)\n=\u03b2t.\n(37)\nThe proof is completed."
      },
      {
        "heading": "D.1 ANCILLARY LEMMAS",
        "text": "Lemma D.2 ((Wang et al., 2020a)). Suppose m satisfies the conditions in Eq.(6). If\n\u03a9(m^{-3/2} L^{-3/2} [log(TkL^2/\u03b4)]^{3/2}) \u2264 \u03bd \u2264 O((L+1)^{-6} \u221am),\nthen with probability at least 1 \u2212 \u03b4, for all \u0398, \u0398\u2032 satisfying \u2225\u0398 \u2212 \u03980\u22252 \u2264 \u03bd and \u2225\u0398\u2032 \u2212 \u03980\u22252 \u2264 \u03bd, and any x \u2208 R^d with \u2225x\u22252 = 1, we have\n|f(x; \u0398) \u2212 f(x; \u0398\u2032) \u2212 \u27e8\u25bd_\u0398 f(x; \u0398), \u0398\u2032 \u2212 \u0398\u27e9| \u2264 O(\u03bd^{4/3} (L+1)^2 \u221a(m log m)).\nLemma D.3. With probability at least 1 \u2212 \u03b4, set \u03b72 = \u0398(\u03bd/\u221a(2tm)). Then, for any \u0398\u2032 \u2208 R^p satisfying \u2225\u0398\u2032 \u2212 \u03980\u22252 \u2264 \u03b21, it holds that\n\u2211_{\u03c4=1}^{t} |f(x_\u03c4; \u0398^{(j)}) \u2212 r_\u03c4| \u2264 \u2211_{\u03c4=1}^{t} |f(x_\u03c4; \u0398\u2032) \u2212 r_\u03c4| + O(3L\u221at/\u221a2).\nProof. The proof is a direct application of Lemma 4.3 in (Cao and Gu, 2019), setting the loss as L_\u03c4(\u0398\u0302_\u03c4) = |f(x_\u03c4; \u0398\u0302_\u03c4) \u2212 r_\u03c4|, R = \u03b21 \u221am, \u03f5 = LR/\u221a(2\u03bdt), and \u03bd = R^2."
      },
      {
        "heading": "E ANALYSIS FOR USER-LEARNER",
        "text": "Lemma E.1. For any \u03b4 \u2208 (0, 1), 0 < \u03c1 \u2264 O( 1L ), suppose 0 < \u03f51 \u2264 1 and m, \u03b71, J1 satisfy the conditions in Eq.(6). After T rounds, with probability 1 \u2212 \u03b4 over the random initialization, the cumulative error induced by the user-learners is upper bounded by\n1\nT T\u2211 t=1 E rt|xt [|f(xt; \u03b8utt\u22121)\u2212 rt| | T ut t\u22121, ut]\n\u2264 \u221a n [\u221a \u03f51 T +O ( LR\u221a T ) +O(1 + \u03bet) \u221a 2 log(T/\u03b4) T ] ,\nwhere the expectation is taken over rut conditioned on x u t and T ut is the historical data of u up to round t.\nProof. Applying Lemma E.3 over all users, we have\n1\nT T\u2211 t=1 E rt|xt [|f(xt; \u03b8utt\u22121)\u2212 rt| | T ut t\u22121, ut]\n= 1\nT \u2211 u\u2208N \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut E rt|xt [|f(x\u03c4 ; \u03b8ut\u22121)\u2212 r\u03c4 | | T ut\u22121, u]\n\u2264 1 T \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03bet) \u221a 2\u00b5ut log(T/\u03b4) ] (38)\nwhere we applied the union bound to \u03b4 over all n users and so we get log(T/\u03b4) because of\u2211 u\u2208N \u00b5 u T = T . Then, given a user u, then, \u00b5 u T = \u2211T t=1 1{ut = u} where 1{ut = u} is the\nindicator function. Then, applying Hoeffding-Azuma inequality on the sequence \u221a\n\u00b5uT ,\u2200u \u2208 N , we have \u2211\nu\u2208N\n\u221a \u00b5uT \u2264 \u2211 u\u2208N E[ \u221a \u00b5uT ] + \u221a 2n log(1/\u03b4)\n= \u221a nT + \u221a 2n log(1/\u03b4).\nThen, by simplification, we have\n1\nT T\u2211 t=1 E rt|xt [|f(xt; \u03b8utt\u22121)\u2212 rt| | T ut t\u22121, ut]\n\u2264 \u221a n [\u221a \u03f51 T +O ( L\u221a T ) +O(1 + \u03bet) \u221a 2 log(T/\u03b4) T ] .\n(39)\nThe proof is complete.\nLemma E.2. For any \u03b4 \u2208 (0, 1), 0 < \u03c1 \u2264 O( 1L ), suppose 0 < \u03f51 \u2264 1 and m, \u03b71, J1 satisfy the conditions in Eq.(6). 
In round t \u2208 [T ], given u \u2208 N , let\nx\u2217t = arg max xt,i,i\u2208[k] hu(xt,i)\nthe Bayes-optimal arm for u and r\u2217t is the corresponding reward. Then, with probability at least 1\u2212 \u03b4 over the random initialization, after T rounds, with probability 1\u2212 \u03b4 over the random initialization, the cumulative error induced by the user-learners is upper bounded by:\n1\nT T\u2211 t=1 E r\u2217t |x\u2217t [|f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 )\u2212 r\u2217t | | T ut,\u2217 t\u22121 , ut]\n\u2264 \u221a n [\u221a \u03f51 T +O ( L\u221a T ) +O(1 + \u03bet) \u221a 2 log(T/\u03b4) T ] .\nwhere the expectation is taken over r\u2217\u03c4 conditioned on x \u2217 \u03c4 , T u,\u2217 t = {(x\u2217\u03c4 , r\u2217\u03c4 ) : u\u03c4 = u, \u03c4 \u2208 [t]} are stored Bayes-optimal pairs up to round t for u, and \u03b8ut,\u2217t\u22121 are the parameters trained on T ut,\u2217 t\u22121 according to Algorithm 3 in round t\u2212 1.\nProof. Based on Lemma E.4, we have\n1\nT T\u2211 t=1 E r\u2217t |x\u2217t [|f(x\u2217t ; \u03b8 ut,\u2217 t\u22121 )\u2212 r\u2217t | | T ut,\u2217 t\u22121 , ut]\n= 1\nT \u2211 u\u2208N \u2211 (x\u2217\u03c4 ,r \u2217 \u03c4 )\u2208T u,\u2217 t E r\u2217t |x\u2217t [|f(x\u2217\u03c4 ; \u03b8 u,\u2217 t\u22121)\u2212 r\u2217\u03c4 | | T u,\u2217 t\u22121 , u]\n\u2264 1 T \u2211 u\u2208N [\u221a \u03f51\u00b5uT +O ( L \u221a \u00b5ut ) + (1 + \u03bet) \u221a 2\u00b5ut log(T/\u03b4) ] (40)\nwhere we applied the union bound to \u03b4 over all n users and so we get log(T/\u03b4) because of\u2211 u\u2208N \u00b5 u T = T . Then, given a user u, then, \u00b5 u T = \u2211T t=1 1{ut = u} where 1{ut = u} is the\nindicator function. Then, applying Hoeffding-Azuma inequality on the sequence \u221a\n\u00b5uT ,\u2200u \u2208 N , we have \u2211\nu\u2208N\n\u221a \u00b5uT \u2264 \u2211 u\u2208N E[ \u221a \u00b5uT ] + \u221a 2n log(1/\u03b4)\n= \u221a nT + \u221a 2n log(1/\u03b4).\nLet \u03b2NT = maxn\u2208N \u03f51. 
Then, we have\n1\nT T\u2211 t=1 E r\u2217t |x\u2217t [|f(xt; \u03b8ut,\u2217t\u22121 )\u2212 r\u2217t | | T ut,\u2217 t\u22121 , ut]\n\u2264 \u221a n [\u221a \u03f51 T +O ( L\u221a T ) +O(1 + \u03bet) \u221a 2 log(T/\u03b4) T ] .\n(41)\nThe proof is complete.\nLemma E.3. For any \u03b4 \u2208 (0, 1), 0 < \u03c1 \u2264 O( 1L ), suppose 0 < \u03f51 \u2264 1 and m, \u03b71, J1 satisfy the conditions in Eq.(6). In a round \u03c4 where u \u2208 N is serving user, let x\u03c4 be the arm selected by some fixed policy \u03c0\u03c4 and r\u03c4 is the corresponding received reward. Then, with probability at least 1\u2212 \u03b4 over the randomness of initialization, after t \u2208 [T ] rounds, the cumulative regret induced by u is upper bounded by:\n1\n\u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut E r\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 | | \u03c0\u03c4 , u]\n\u2264 \u221a\n\u03f51 \u00b5ut\n+O (\n3L\u221a 2\u00b5ut\n) + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4)\n\u00b5ut .\nwhere the expectation is taken over r\u03c4 conditioned on x\u03c4 and T ut = {(x\u03c4 , r\u03c4 ) : u\u03c4 = u, \u03c4 \u2208 [t]} is the historical data of u up to round t.\nProof. According to Lemma E.5, with probability at least 1\u2212 \u03b4, given any \u2225x\u22252 = 1, r \u2264 1, for any round \u03c4 in which u is the serving user, we have\n|f(x; \u03b8u\u03c4\u22121)\u2212 r| \u2264 \u03bet + 1.\nHere, we will apply the union bound of \u03b4 over all \u00b5uT rounds, to make this bound hold for every round of u. Then, in a round \u03c4 where u is the serving user, let x\u03c4 be the arm selected by some fixed policy \u03c0\u03c4 and r\u03c4 is the corresponding reward. Then, we define\nV\u03c4 = E r\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 |]\u2212 |f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 |, (42)\nwhere the expectation is taken over r\u03c4 conditioned on x\u03c4 . 
Then, we have\nE[V\u03c4 |Fu\u03c4 ] = E r\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 |]\u2212 E[|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 | | Fu\u03c4 ] = 0\nwhere Fu\u03c4 denotes the \u03c3-algebra generated by T u\u03c4\u22121. Thus, we have the following form:\n1\n\u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut V\u03c4 = 1 \u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut E r\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 |]\u2212 1 \u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut |f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 |. (43) Because V1, . . . , V\u00b5ut is the martingale difference sequence, applying Hoeffding-Azuma inequality over V1, . . . , V\u00b5ut , we have\n1\n\u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut E r\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 | | \u03c0\u03c4 , u]\n\u2264 1 \u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut\n|f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 |\ufe38 \ufe37\ufe37 \ufe38 I1 +(1 + \u03bet)\n\u221a 2 log(1/\u03b4)\n\u00b5ut .\n(44)\nFor I1, for any \u03b8\u0303 satisfying \u2225\u03b8\u0303 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) , we have\n1\n\u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut |f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 | (a) \u2264 1 \u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut |f(x\u03c4 ; \u03b8\u0303)\u2212 r\u03c4 |+O ( 3L\u221a 2\u00b5ut ) (b)\n\u2264 1 \u00b5ut\n\u221a \u00b5ut \u221a \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut (f(x\u03c4 ; \u03b8\u0303)\u2212 r\u03c4 )2 +O ( 3L 2 \u221a \u00b5ut )\n\u2264\ufe38\ufe37\ufe37\ufe38 I3\n\u221a 2\u03f51 \u00b5ut +O ( 3L\u221a \u00b5ut ) .\n(45)\nwhere I2 is because of Lemma E.6 and I3 is the direct application of Lemma E.8 (2): there exists \u03b8\u0303 satisfying \u2225\u03b8\u0303 \u2212 \u03b8u0 \u22252 \u2264 O\n( (\u00b5ut ) 3\n\u03c1 \u221a m\nlogm ) such that 12 \u2211\u00b5ut \u03c4=1(f(x\u03c4 ; \u03b8\u0303)\u2212 r\u03c4 )2 \u2264 
\u03f51.\nCombing Eq.(44) and Eq.(45), we have\n1\n\u00b5ut \u2211 (x\u03c4 ,r\u03c4 )\u2208T ut E r\u03c4 |x\u03c4 [|f(xt; \u03b8ut\u22121)\u2212 rt| | \u03c0\u03c4 , u] ] \u2264 \u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(1/\u03b4) \u00b5ut . (46) Then, applying the union bound over \u03b4, for any i \u2208 [k], \u03c4 \u2208 [\u00b5ut ].\nBased on Lemma E.8 (4), for any \u03b8\u0302u\u03c4 , \u03c4 \u2208 [t], we have \u2225\u03b8\u0302u\u03c4 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) . Thus, it\nholds that \u2225\u03b8u\u03c4 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) .\nThen, apply the union bound of \u03b4 over all \u00b5uT rounds. The proof is completed.\nLemma E.4. For any \u03b4 \u2208 (0, 1), 0 < \u03c1 \u2264 O( 1L ), suppose 0 < \u03f51 \u2264 1 and m, \u03b71, J1 satisfy the conditions in Eq.(6). In a round \u03c4 where u \u2208 N is the serving user, let x\u2217\u03c4 be the arm selected according to Bayes-optimal policy \u03c0\u2217:\nx\u2217\u03c4 = arg max x\u03c4,i,i\u2208[k] hu(x\u03c4,i),\nand r\u2217\u03c4 is the corresponding reward. 
Then, with probability at least 1 \u2212 \u03b4 over the randomness of initialization, after t \u2208 [T ] rounds, the cumulative regret induced by u with policy \u03c0\u2217 is upper bounded by:\n1\n\u00b5ut \u2211 (x\u2217\u03c4 ,r \u2217 \u03c4 )\u2208T u,\u2217 t E r\u2217\u03c4 |x\u2217\u03c4 [|f(x\u2217\u03c4 ; \u03b8 u,\u2217 \u03c4\u22121)\u2212 r\u2217\u03c4 | | \u03c0\u2217, u]\n\u2264 \u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut .\nwhere the expectation is taken over r\u2217\u03c4 conditioned on x \u2217 \u03c4 , T u,\u2217 t = {(x\u2217\u03c4 , r\u2217\u03c4 ) : u\u03c4 = u, \u03c4 \u2208 [t]} are stored Bayes-optimal pairs up to round t for u, and \u03b8u,\u2217\u03c4\u22121 are the parameters trained on T u,\u2217 \u03c4\u22121 according to Algorithm 3 in round \u03c4 \u2212 1.\nProof. This proof is analogous to Lemma E.3. In a round \u03c4 where u is the serving user, we define\nV\u03c4 = E r\u2217\u03c4 |x\u2217\u03c4\n[|f(x\u2217\u03c4 ; \u03b8\u0302 u,\u2217 \u03c4\u22121)\u2212 r\u2217\u03c4 |]\u2212 |f(x\u2217\u03c4 ; \u03b8\u0302 u,\u2217 \u03c4\u22121)\u2212 r\u2217\u03c4 |. (47)\nwhere the expectation is taken over r\u2217\u03c4 conditioned on x \u2217 \u03c4 . Then, we have\nE[V\u03c4 |F\u03c4 ] = E r\u2217\u03c4 |x\u2217\u03c4\n[|f(x\u2217\u03c4 ; \u03b8\u0302 u,\u2217 \u03c4\u22121)\u2212 r\u03c4,\u2217|]\u2212 E[|f(x\u2217\u03c4 ; \u03b8\u0302 u,\u2217 \u03c4\u22121)\u2212 r\u2217\u03c4 | | F\u03c4 ] = 0\nTherefore, V1, . . . , V\u00b5ut is the martingale difference sequence. 
Then, following the same procedure of Lemma E.3, we can derive\n1\n\u00b5ut \u2211 (x\u2217\u03c4 ,r \u2217 \u03c4 )\u2208T u,\u2217 t E r\u2217\u03c4 |x\u2217\u03c4 [|f(x\u2217\u03c4 ; \u03b8 u,\u2217 \u03c4\u22121)\u2212 r\u2217\u03c4 | | u]\n\u2264 \u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(1/\u03b4) \u00b5ut .\nBased on Lemma E.8 (4), for any \u03b8\u0302u,\u2217\u03c4 , \u03c4 \u2208 [t], we have \u2225\u03b8\u0302u,\u2217\u03c4 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) . Thus, it\nholds that \u2225\u03b8u,\u2217\u03c4 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) ."
      },
      {
        "heading": "E.1 ANCILLARY LEMMAS",
        "text": "Lemma E.5. Suppose m, \u03b71, \u03b71 satisfy the conditions in Eq. (6). With probability at least 1\u2212 \u03b4, for any x with \u2225x\u22252 = 1 and t \u2208 [T ], u \u2208 N , it holds that\n|f(x; \u03b8ut )| \u2264 2 +O ( t4nL logm\n\u03c1 \u221a m\n) +O ( t5nL2 log11/6 m\n\u03c1m1/6\n) = \u03bet.\nProof. This is an application of Lemma C.3 in (Ban et al., 2021b). Let \u03b80 be randomly initialized. Then applying Lemma E.9, for any \u2225x\u22252 = 1 and \u2225\u03b8\u0302ut \u2212 \u03b80\u2225 \u2264 w, we have\n|f(x; \u03b8\u0302ut )| \u2264 |f(x; \u03b80)|\ufe38 \ufe37\ufe37 \ufe38 I1\n+|\u27e8\u25bd\u03b80f(xi; \u03b80), \u03b8\u0302ut \u2212 \u03b80\u27e9|+O(L2 \u221a m log(m))\u2225\u03b8\u0302ut \u2212 \u03b80\u22252w1/3\n\u2264 2\u2225x\u22252\ufe38 \ufe37\ufe37 \ufe38 I1 + \u2225\u25bd\u03b80f(xi; \u03b80)\u22252\u2225\u03b8\u0302ut \u2212 \u03b80\u22252\ufe38 \ufe37\ufe37 \ufe38 I2\n+O(L2 \u221a\nm log(m)) \u2225\u03b8\u0302ut \u2212 \u03b80\u22252w1/3\ufe38 \ufe37\ufe37 \ufe38 I3\n\u2264 2 +O(L) \u00b7 O ( t3\n\u03c1 \u221a m logm ) \ufe38 \ufe37\ufe37 \ufe38\nI2\n+O ( L2 \u221a m log(m) ) \u00b7 O ( t3\n\u03c1 \u221a m logm )4/3 \ufe38 \ufe37\ufe37 \ufe38\nI3 = 2 +O ( t3L logm\n\u03c1 \u221a m\n) +O ( t4L2 log11/6 m\n\u03c1m1/6\n) (48)\nwhere I1 is an application of Lemma 7.3 in (Allen-Zhu et al., 2019), I2 is by Lemma E.10 (1) and Lemma E.8 (4), and I3 is due to Lemma E.8 (4).\nLemma E.6. For any \u03b4 \u2208 (0, 1), suppose m satisfy the conditions in Eq.(6) and \u03bd = \u0398((\u00b5ut )6/\u03c12). 
Then, with probability at least 1 \u2212 \u03b4, set \u03b71 = \u0398( \u03bd\u221a2\u00b5ut m ) for algorithm 1-3, for any \u03b8\u0303 satisfying\n\u2225\u03b8\u0303 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) such that\n\u00b5ut\u2211 \u03c4=1 |f(x\u03c4 ; \u03b8u\u03c4\u22121)\u2212 r\u03c4 | \u2264 \u00b5ut\u2211 \u03c4=1 |f(x\u03c4 ; \u03b8\u0303)\u2212 r\u03c4 |+O ( 3L \u221a \u00b5ut\u221a 2 ) (49)\nProof. This is a direct application of Lemma 4.3 in (Cao and Gu, 2019) by setting the loss as L\u03c4 (\u03b8 u \u03c4\u22121) = |f(x\u03c4 ; \u03b8u\u03c4\u22121) \u2212 r\u03c4 |, and R = (\u00b5ut ) 3\n\u03c1 logm, \u03f5 = LR\u221a 2\u03bd\u00b5ut , and \u03bd = \u03bd\u2032R2, where \u03bd\u2032 is some small enough absolute constant. Then, for any \u03b8\u0303 satisfying \u2225\u03b8\u0303 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm )\n, there exist a small enough absolute constant \u03bd\u2032, such that\nt\u2211 \u03c4=1 L\u03c4 (\u03b8\u0302 u \u03c4\u22121) \u2264 t\u2211 \u03c4=1 L\u03c4 (\u03b8\u0303) + 3\u00b5 u t \u03f5. (50)\nThen, replacing \u03f5 completes the proof.\nLemma E.7 (Lemma C.2 (Ban et al., 2021b)). For any \u03b4 \u2208 (0, 1), \u03c1 \u2208 (0,O( 1L )), suppose the conditions in Theorem 4.2 are satisfied. Then, with probability at least 1\u2212 \u03b4, in each round t \u2208 [T ], for any \u2225x\u22252 = 1, \u03b8u,\u2217t\u22121, \u03b8ut\u22121 satisfying \u2225\u03b8 u,\u2217 t\u22121 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3 \u03c1 \u221a m logm ) and \u2225\u03b8ut\u22121 \u2212 \u03b8u0 \u22252 \u2264\nO ( (\u00b5ut ) 3\n\u03c1 \u221a m\nlogm ) , we have\n(1) |f(x; \u03b8u,\u2217t\u22121)\u2212 f(x; \u03b8ut\u22121)|\n\u2264 ( 1 +O ( tL3 log5/6 m\n\u03c11/3m1/6\n)) O ( Lt3\n\u03c1 \u221a m logm\n) +O ( t4L2 log11/6 m\n\u03c14/3m1/6 ) =\u03b6t\n(51)\n(2)\u2225\u25bd\u03b8ut\u22121f1(x; \u03b8 u t\u22121)\u22252 \u2264\n( 1 +O ( tL3 log5/6 m\n\u03c11/3m1/6\n)) O(L) . 
(52)\nLemma E.8 (Theorem 1 in (Allen-Zhu et al., 2019)). For any 0 < \u03f51 \u2264 1, 0 < \u03c1 \u2264 O(1/L). Given a user u, the collected data {x\u03c4 , ru\u03c4 } \u00b5ut \u03c4=1, suppose m, \u03b71, J1 satisfy the conditions in Eq.(6). Define\nL (\u03b8u) = 12 \u2211 (x,r)\u2208T ut (f(x; \u03b8u)\u2212 r)2. Then with probability at least 1\u2212 \u03b4, these hold that:\n1. For any j \u2208 [J ], L(\u03b8u(j)) \u2264 (1\u2212 \u2126( \u03b71\u03c1m \u00b5ut 2 ))L(\u03b8u(j\u22121))\n2. L(\u03b8\u0302u\u00b5ut ) \u2264 \u03f51 in J1 = poly(\u00b5ut ,L) \u03c12 log(1/\u03f51) rounds.\n3. L(\u03b8u0 ) \u2264 O(\u00b5ut log 2 m).\n4. For any j \u2208 [J ], \u2225\u03b8u(j) \u2212 \u03b8 u (0)\u22252 \u2264 O\n( (\u00b5ut ) 3\n\u03c1 \u221a m\nlogm ) .\nLemma E.9 (Lemma 4.1, (Cao and Gu, 2019)). Suppose O(m\u22123/2L\u22123/2[log(TnL2/\u03b4)]3/2) \u2264 w \u2264 O(L\u22126[logm]\u22123/2). Then, with probability at least 1 \u2212 \u03b4 over randomness of \u03b80, for any t \u2208 [T ], \u2225x\u22252 = 1, and \u03b8, \u03b8\u2032 satisfying \u2225\u03b8 \u2212 \u03b80\u2225 \u2264 w and \u2225\u03b8\u2032 \u2212 \u03b80\u2225 \u2264 w , it holds uniformly that\n|f(x; \u03b8)\u2212 f(x; \u03b8\u2032)\u2212 \u27e8\u25bd\u03b8\u2032f(x; \u03b8\u2032), \u03b8 \u2212 \u03b8\u2032\u27e9| \u2264 O(w1/3L2 \u221a m log(m))\u2225\u03b8 \u2212 \u03b8\u2032\u22252.\nLemma E.10. For any \u03b4 \u2208 (0, 1), suppose m, \u03b71, J1 satisfy the conditions in Eq.(6) and \u03b80 are randomly initialized. Then, with probability at least 1\u2212 \u03b4, for any \u2225x\u22252 = 1, these hold that\n1. \u2225\u25bd\u03b80f(x; \u03b80)\u22252 \u2264 O(L),\n2. |f(x; \u03b80)| \u2264 2.\nProof. For (2), based on Lemma 7.1 in (Allen-Zhu et al., 2019), we have |f(x; \u03b80)| \u2264 2. Denote by D the ReLU function. 
For any l \u2208 [L],\n\u2225\u25bd_{W_l} f(x; \u03b80)\u2225_F \u2264 \u2225W_L D W_{L\u22121} \u00b7\u00b7\u00b7 D W_{l+1}\u2225_F \u00b7 \u2225D W_{l+1} \u00b7\u00b7\u00b7 x\u2225_F \u2264 O(\u221aL),\nwhere the inequality follows from Lemma 7.2 in (Allen-Zhu et al., 2019). Collecting the gradient blocks over all L layers then gives \u2225\u25bd_{\u03b80} f(x; \u03b80)\u22252 \u2264 \u221a(L \u00b7 O(L)) = O(L)."
      },
      {
        "heading": "F RELATIVE GROUP GUARANTEE",
        "text": "In this section, we provide a relative group guarantee with the expectation taken over all past selected arms. For u, u\u2032 \u2208 N \u2227 u \u0338= u\u2032, we define\nE x\u03c4\u223cT ut |x [N\u0302u(x\u03c4 )] = {u, u\u2032 : E x\u03c4\u223cT ut |x [|f(x\u03c4 ; \u03b8ut\u22121)]\u2212 E x\u03c4\u223cT ut |x [f(x\u03c4 ; \u03b8 u\u2032 t\u22121)|] \u2264 \u03bd \u2212 1 \u03bd \u03b3} (53)\nand E\nx\u03c4\u223cT ut |x [Nu(x\u03c4 )] = {u, u\u2032 : E x\u03c4\u223cT ut |x [E[r\u03c4 |u]] = E x\u03c4\u223cT ut |x [E[r\u03c4 |u\u2032]]} (54)\nwhere N\u0302u(x\u03c4 ) is the detected group and Nu(x\u03c4 ) is the ground-truth group. Then, we provide the following lemma. Lemma F.1 (Lemma 4.6 Restated). Assume the groups in N satisfy \u03b3-gap (Definition 2.2) and the conditions of Theorem 4.2 are satisfied. For any \u03b4 \u2208 (0, 1), \u03bd > 1, with probability at least 1 \u2212 \u03b4 over the random initialization, there exist constants c1, c2, such that when\nt \u2265 n64\u03bd2(1 + \u03bet)\n2 ( log 32\u03bd 2(1+\u03bet) 2\n\u03b32 + 9L2c21+4\u03f51+2\u03b6 2 t 4(1+\u03bet)2 \u2212 log \u03b4 ) \u03b32(1 + \u221a 3n log(n/\u03b4)) = T\u0303 ,\ngiven a user u \u2208 N , it holds uniformly for Algorithms 1-3 that E\nx\u03c4\u223cT ut |x [N\u0302u(x\u03c4 ) \u2286 Nu(x\u03c4 )]\nand E x\u03c4\u223cT ut |x [N\u0302u(x\u03c4 ) = Nu(x\u03c4 )], if \u03bd \u2265 2,\nwhere x\u03c4 is uniformly drawn from T ut |x and T ut |x = {x\u03c4 : ut = u \u2227 \u03c4 \u2208 [t]} is all the historical selected arms when serving u up to round t. Recall that\n\u03b6t =\n( 1 +O ( tL3 log5/6 m\n\u03c11/3m1/6\n)) O ( Lt3\n\u03c1 \u221a m logm\n) +O ( t4L2 log11/6 m\n\u03c14/3m1/6\n) ;\n\u03bet = 2 +O ( t4nL logm\n\u03c1 \u221a m\n) +O ( t5nL2 log11/6 m\n\u03c1m1/6\n) .\nProof. Given two user u, u\u2032 \u2208 N and an arm x\u03c4 , let r\u03c4 be the reward u generated on x\u03c4 and r\u2032\u03c4 be the reward u\u2032 generated on x\u03c4 . 
Then, in round t \u2208 [T ], we have\nE x\u03c4\u223cT ut |x\n[ E\nr\u03c4 ,r\u2032\u03c4 [|r\u03c4 \u2212 r\u2032\u03c4 | | x\u03c4 ] ] = 1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u03c4 ,r\u2032\u03c4 [|r\u03c4 \u2212 r\u2032\u03c4 | | x\u03c4 ]\n= 1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u03c4 ,r\u2032\u03c4 [|r\u03c4 \u2212 f(x\u03c4 ; \u03b8ut\u22121) + f(x\u03c4 ; \u03b8ut\u22121)\u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121) + f(x\u03c4 ; \u03b8 u\u2032 t\u22121)\u2212 r\u2032\u03c4 | | x\u03c4 ]\n\u2264 1 \u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u03c4 |x\u03c4 [|r\u03c4 \u2212 f(x\u03c4 ; \u03b8ut\u22121)| | u] + 1 \u00b5ut \u2211 x\u03c4\u2208T ut |x [|f(x\u03c4 ; \u03b8ut\u22121)\u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121)|]\n+ 1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u2032\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u \u2032 t\u22121)\u2212 r\u2032\u03c4 | | u\u2032],\n(55) where the expectation is taken over r\u03c4 , r\u2032\u03c4 conditioned on x\u03c4 . According to Lemma E.3 and Corollary F.2 respectively, for each u \u2208 N , we have\n1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u03c4 |x\u03c4 [|r\u03c4 \u2212 f(x\u03c4 ; \u03b8ut\u22121)| | u],\n\u2264 \u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut ;\n1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u2032\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u \u2032 t\u22121)\u2212 r\u2032\u03c4 | | u\u2032]\n\u2264 \u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut + \u03b6t.\n(56)\nDue to the setting of Algorithm 1, |f(x\u03c4 ; \u03b8ut\u22121) \u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121)| \u2264 \u03bd\u22121\u03bd \u03b3 for any u, u \u2032 \u2208 N\u0302ut(x\u03c4 ), given x\u03c4 \u2208 T ut |x. 
Therefore, we have\nE x\u03c4\u223cT ut |x\n[ E\nr\u03c4 ,r\u2032\u03c4 [|r\u03c4 \u2212 r\u2032\u03c4 | | x\u03c4 ]\n]\n\u2264\u03bd \u2212 1 \u03bd \u03b3 + 2 (\u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut + \u03b6t ) (57) Next, we need to lower bound t as the following:\u221a\n2\u03f51 t + 3Lc1\u221a 2\u00b5ut + (1 + \u03bet)\n\u221a 2 log(t/\u03b4)\n\u00b5ut + \u03b6t \u2264\n\u03b3\n2\u03bd(\u221a 2\u03f51 \u00b5ut + 3Lc1\u221a 2\u00b5ut + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut + \u03b6t )2 \u2264 \u03b3 2 4\u03bd2\n\u21d2 4 (\u221a2\u03f51 \u00b5ut )2 + ( 3Lc1\u221a 2\u00b5ut )2 + ( (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut )2 + (\u03b6t) 2  \u2264 \u03b32 4\u03bd2\n(58)\nBy simple calculations, we have\nlog\u00b5ut \u2264 \u03b32\u00b5ut 32\u03bd2(1 + \u03bet)2 \u2212 9L 2c21 + 4\u03f51 + 2\u03b6 2 t 4(1 + \u03bet)2 + log \u03b4 (59)\nThen, based on Lemme 8.1 in (Ban and He, 2021), we have\n\u00b5ut \u2265 64\u03bd2(1 + \u03bet) 2\n\u03b32\n( log 32\u03bd2(1 + \u03bet) 2\n\u03b32 +\n9L2c21 + 4\u03f51 + 2\u03b6 2 t\n4(1 + \u03bet)2 \u2212 log \u03b4\n) (60)\nGiven the binomially distributed random variables, x1, x2, . . . , xt, where for \u03c4 \u2208 [t], x\u03c4 = 1 with probability 1/n and x\u03c4 = 0 with probability 1\u2212 1/n. Then, we have\n\u00b5ut = t\u2211 \u03c4=1 x\u03c4 and E[\u00b5ut ] = t n . 
(61)\nThen, apply Chernoff Bounds on the \u00b5ut with probability at least 1\u2212 \u03b4, for each u \u2208 N , we have\n\u00b5ut \u2264\n( 1 + \u221a 3n log(n/\u03b4)\nt\n) t\nn \u21d2 t \u2265 n\u00b5\nu t 1 + \u221a 3n log(n/\u03b4) (62)\nCombining Eq.(60) and Eq.(62), we have: When\nt \u2265 n64\u03bd2(1 + \u03bet)\n2 ( log 32\u03bd 2(1+\u03bet) 2\n\u03b32 + 9L2c21+4\u03f51+2\u03b6 2 t 4(1+\u03bet)2 \u2212 log \u03b4 ) \u03b32(1 + \u221a 3n log(n/\u03b4)) = T\u0303\nit holds uniformly that:\n2 (\u221a 2\u03f51 \u00b5ut + 3L\u221a 2\u00b5ut + (1 + \u03bet) \u221a 2 log(t/\u03b4) t + \u03b62t ) \u2264 \u03b3 \u03bd .\nThis indicates for any u, u\u2032 \u2208 N and satisfying |f(x\u03c4 ; \u03b8ut\u22121) \u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121)| \u2264 \u03bd\u22121\u03bd \u03b3, i.e., u, u \u2032 \u2208 N\u0302u(x\u03c4 ), we have\nE x\u03c4\u223cT ut |x\n[ E\nr\u03c4 ,r\u2032\u03c4 [|r\u03c4 \u2212 r\u2032\u03c4 | | x\u03c4 ]\n] \u2264 \u03b3. (63)\nThis implies E x\u03c4\u223cT ut |x [N\u0302u(x\u03c4 ) \u2286 Nu(x\u03c4 )].\nFor any u, u\u2032 \u2208 Nu(x\u03c4 ), we have\nE x\u03c4\u223cT ut |x\n[ E\nr\u03c4 ,r\u2032\u03c4 [r\u03c4 \u2212 r\u2032\u03c4 | x\u03c4 ] ] = 1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u03c4 |x\u03c4 [r\u03c4 \u2212 f(x\u03c4 ; \u03b8ut\u22121) | u] + 1 \u00b5ut \u2211 x\u03c4\u2208T ut |x [f(x\u03c4 ; \u03b8 u t\u22121)\u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121)]\n+ 1\n\u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u2032\u03c4 |x\u03c4 [f(x\u03c4 ; \u03b8 u\u2032 t\u22121)\u2212 r\u2032\u03c4 | u\u2032]\n=0\nThus, when t \u2265 T\u0303 , we have\nE x\u03c4\u223cT ut |x\n[|f(x\u03c4 ; \u03b8ut\u22121)\u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121)|]\n\u2264 1 \u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u03c4 |x\u03c4 [|r\u03c4 \u2212 f(x\u03c4 ; \u03b8ut\u22121)| | u] + 1 \u00b5ut \u2211 x\u03c4\u2208T ut |x E r\u2032\u03c4 |x\u03c4 [|f(x\u03c4 ; \u03b8u \u2032 t\u22121)\u2212 r\u2032\u03c4 | | u\u2032] \u2264\u03b3 \u03bd\nBecause 
\u03b3\u03bd \u2264 \u03bd\u22121 \u03bd \u03b3 when \u03bd \u2265 2. Thus, we have Ex\u03c4\u223cT ut |x [|f(x\u03c4 ; \u03b8ut\u22121) \u2212 f(x\u03c4 ; \u03b8u \u2032 t\u22121)|] \u2264 \u03bd\u22121\u03bd \u03b3. Therefore, by induction, this is enough to show E x\u03c4\u223cT ut |x [N\u0302u(x\u03c4 ) = Nu(x\u03c4 )] when \u03bd \u2265 2 and t \u2265 T\u0303 . The proof is completed.\nCorollary F.2. For any \u03b4 \u2208 (0, 1), \u03c1 \u2208 (0,O( 1L )], suppose 0 < \u03f51 \u2264 1 and m, \u03b71, J1 satisfy the conditions in Eq.(6). In each round t \u2208 [T ], given u \u2208 N , let (xt,j , rt,j) be pair produced by some\npolicy \u03c0j . Then, with probability at least 1\u2212 \u03b4 over the random initialization, for the user-learner \u03b8ut\u22121, we have\n1\n\u00b5ut \u2211 (r\u03c4,j ,x\u03c4,j)\u2208T ut |\u03c0j E r\u03c4,j |x\u03c4,j [ |r\u03c4,j \u2212 f(x\u03c4,j ; \u03b8u\u03c4\u22121)| | \u03c0j , u ] \u2264\n\u221a 2\u03f51 \u00b5ut +O ( 3L\u221a 2\u00b5ut ) + (1 + \u03bet) \u221a 2 log(\u00b5ut /\u03b4) \u00b5ut + \u03b6t,\nwhere T ut |\u03c0j = {(r\u03c4,j ,x\u03c4,j) : ut = u, \u03c4 \u2208 [t]} is the historical data according to \u03c0j in the rounds where u is the serving user.\nProof. By the application of Lemma E.4, there exists \u03b8u,jt\u22121 satisfying \u2225\u03b8 u,j t\u22121 \u2212 \u03b8u0 \u22252 \u2264 O ( (\u00b5ut ) 3\n\u03c1 \u221a m\nlogm )\ntrained on T ut |\u03c0j following Algorithm 3. 
Then, similar to Lemma E.2, we have\n(1/\u00b5^u_t) \u2211_{(r_{\u03c4,j}, x_{\u03c4,j}) \u2208 T^u_t|\u03c0_j} E_{r_{\u03c4,j}|x_{\u03c4,j}} [ |r_{\u03c4,j} \u2212 f(x_{\u03c4,j}; \u03b8^u_{t\u22121})| ]\n\u2264 (1/\u00b5^u_t) \u2211_{(r_{\u03c4,j}, x_{\u03c4,j}) \u2208 T^u_t|\u03c0_j} E_{r_{\u03c4,j}|x_{\u03c4,j}} [ |r_{\u03c4,j} \u2212 f(x_{\u03c4,j}; \u03b8^{u,j}_{t\u22121})| ]  (I1)\n+ (1/\u00b5^u_t) \u2211_{(r_{\u03c4,j}, x_{\u03c4,j}) \u2208 T^u_t|\u03c0_j} |f(x_{\u03c4,j}; \u03b8^{u,j}_{t\u22121}) \u2212 f(x_{\u03c4,j}; \u03b8^u_{t\u22121})|  (I2)\n\u2264 \u221a(2\u03f51/\u00b5^u_t) + O(3L/\u221a(2\u00b5^u_t)) + (1 + \u03be_t) \u221a(2 log(\u00b5^u_t/\u03b4)/\u00b5^u_t) + \u03b6_t,\n(64)\nwhere I1 is an application of Lemma E.4 and I2 follows from Lemma E.7. The proof is complete."
      }
    ],
    "year": 2022,
    "abstractText": "Contextual multi-armed bandits provide powerful tools to solve the exploitation-exploration dilemma in decision making, with direct applications in personalized recommendation. In fact, collaborative effects among users carry significant potential to improve recommendation. In this paper, we introduce and study this problem by exploring \u2018Neural Collaborative Filtering Bandits\u2019, where the rewards can be non-linear functions and groups are formed dynamically given different specific contents. To solve this problem, we propose a meta-learning based bandit algorithm, Meta-Ban (meta-bandits), where a meta-learner is designed to represent and rapidly adapt to dynamic groups, along with an informative UCB-based exploration strategy. Furthermore, we show that Meta-Ban achieves a regret bound of O(\u221a(nT log T)), which is sharper than those of state-of-the-art related works. Finally, we conduct extensive experiments showing that Meta-Ban outperforms six strong baselines.",
    "creator": "LaTeX with hyperref"
  },
  "output": [
    [
      "1. \"The clarity and presentation in the paper can be significantly improved.\"",
      "2. \"The algorithm is not well motivated and described.\"",
      "3. \"The theoretical results are not well discussed and the proofs in the appendix could be presented better.\"",
      "4. \"I had to take multiple passes through the paper to clearly understand the notation, the main algorithm, and the theoretical results.\"",
      "5. \"The algorithm needs to be clearly explained. what is the motivation behind introducing a meta learner in Algorithm 2? It is never mentioned.\"",
      "6. \"What happens if we solve the objective in Algorithm 2 exactly (i.e., increase J2 and don't warm-start \u0398(0) with \u0398t\u22121)? Will the resulting algorithm have poor performance?\"",
      "7. \"Why is \u21131 norm used in Algorithm 2? Why not \u21132 norm?\"",
      "8. \"It is not clear how the proposed algorithm is different from the algorithms proposed in prior works such as Li et al. 2016 [1]. Is the extension to nonlinear reward functions straightforward? What are the key challenges and insights needed for this extension? These things were never discussed in the paper.\"",
      "9. \"A proof sketch in the main paper would have helped the reader understand the proof better. Currently, it is very hard to understand the key ideas in the proof.\"",
      "10. \"I tried looking at the appendix, it is poorly organized and without a high level overview of the proof, I found it hard to read through the appendix.\"",
      "11. \"Some key details in the experiment section are missing. For example, details on how the classification datasets are converted into collaborative bandit problem should be added to the main paper.\"",
      "12. \"There are several issues with the notation. Here are few examples: L in section 3 appeared before it was defined. Operator grad_{theta^u for u in N} in line 8 of algorithm 2 was never defined.\"",
      "13. \"The regret looks too good to be true. why is there no dependence on d? Even in the linear case (without and collaborative filtering stuff), the minimax regret has dependence on dimension. Are there any assumptions that are causing this to happen? In particular, is assumption 4.1 the main reason for these rates? A discussion on this would be appreciated.\"",
      "14. \"On a related note, why is there only a T1/2 term in the regret? The NTK kernel is a very complex kernel with slowly decaying eigen spectrum. It's worst case regret bound in standard contextual bandit setting scales as O(T1\u2212d\u22121). Can the authors explain what is causing this discrepancy?\"",
      "15. \"The cluster sizes and the number of clusters never came up in the analysis at all (qt,i). Shouldn't the regret depend on these quantities? I'd appreciate if the authors make this dependence more explicit.\"",
      "16. \"The analysis in the paper seems to be in the NTK regime. In fact, the paper relies on a number of results proved in the past works on NTK. There are some caveats with these results that the authors never brought up in the paper.\"",
      "17. \"Most of the past works on NTK assume certain kind of initialization for the NN. But the initialization scheme used in Algorithm 3 doesn't match with the past works. Given this, do the past results carry over directly to the setting in this paper? In particular, do Lemmas E.6 to E.10 follow directly from past works?\"",
      "18. \"The result in Theorem 4.2 doesn't say anything about how the context vectors are generated suggesting that the result holds even if the context vectors are generated adversarially. But I doubt if this is the case. The NTK results such as the ones in [2] require the context vectors for all the rounds to be generated by an oblivious adversary before the start of the algorithm. If the context vectors are generated adversarially, the NTK analysis in these works doesn't go through. Can the authors comment on this?\"",
      "19. \"I have a concern regarding how the ML classification datasets are converted to the collaborative bandits problem. Based on the description in Appendix A.2, it looks like the reward function for any given context vector is the same for all the users (i.e., h_u(x) is independent of u). This suggests that there is a single cluster for any given arm. If this is the case, the problem simply boils down to the neural contextual bandit problem studied in [2]. Then, why is there a huge difference in performance between the proposed technique and NeuUCB-ONE?\""
    ],
    [
      "1. In the first step of the proposed algos, it would like to infer a user's relative group. The question is what assumptions made on these relative group? are they independent? can a user belong to multiple groups? If they are dependent, how does it impact the algorithm in terms of regret analysis and performance?",
      "2. Also, what are constraints or structure properties do we want to place when designing relative group?",
      "3. In Challenge 1, the paper also asked whether the returned group is the true relative group, but was not clear on its importance and what's the consequence if it is not true relative group (sorry if overlook).",
      "4. In Page 4 when defining Group Inference, the paper mentioned that it is natural to use the universal approximator. Could they explain intuition on why and this is chosen?",
      "5. In challenge 3, the paper mentioned the rapidly-changing relative groups, and wonder if the authors could provide more detail information on what mathematical assumptions and properties on them, as they would likely impact how we think/propose solutions in terms of efficacy and efficiency."
    ],
    [
      "1. \"Although the experiments yield good results, I believe the authors should focus more on recommendation datasets/tasks, rather than ML datasets. Specifically, among all the datasets used in the paper, only two of them (i.e., MovieLens and Yelp) are common recommendation datasets.\"",
      "2. \"In addition, the preprocessing steps in Appendix A.2 seem to be not common in my opinion. For example, 'If the user\u2019s rating is less than 2 stars (5 stars totally), its reward is 1; Otherwise, its reward is 0' or 'we use K-means to divide users into 50 clusters'. Could you please give more explanations of the preprocessing steps? How did we decide the \u2018threshold\u2019 of 2 stars to set the reward of 1? How did we come up with 50 clusters? Can we have more/less clusters?\"",
      "3. \"Also from Configurations paragraph of Appendix A.2, the authors use only a simple neural network of 2 fully-connected layers, which is a bit surprising to me as this paper introduces neural collaborative filtering bandits but with only a simple neural network of 2 layers. Can we have a larger neural network with more layers? If not, why?\"",
      "4. \"In Section 5 Experiments, NeuUCB-ONE and NeuUCB-IND seem to perform quite well. Would it be possible to combine NeuUCB based models with \u2018relative groups' and compare them with Meta-Ban? Moreover, can we also compare Meta-Ban with NeuMF [1] beside NeuUCB-* so that we can compare our method with two groups of baselines: i) neural network and ii) neural network + bandits for personalized recommendation tasks?\"",
      "5. \"In the Appendix A.5, the authors mentioned that \u2018This fluctuation is acceptable given that different input dimensions may contain different amount of information\u2019. How can we determine if the fluctuation is \u2018acceptable\u2019?\""
    ],
    [
      "1. \"The authors assume that the reward is bounded, but the noise seems to be unbounded. So it seems that equation (1) is not accurate. Why do the author not assume a sub-gaussian noise as in (Zhou et al 2020)?\"",
      "2. \"But in comparison to the state of the art, the regime where the neural network is analyzed is even more unrealistic than that of the state of the art. Indeed at each time step, the needed number of steps of the gradient descent in the order of T^56, while it was in \\tilde O (T) in (Zhou et al 2020).\"",
      "3. \"In the experimental section, the value of J_2 is not given. I suggest to the authors to reassure the reader by giving the used value of J_2.\""
    ]
  ],
  "review_num": 4,
  "item_num": [
    19,
    5,
    5,
    3
  ]
}