{ "ID": "-59_mb1lOf4", "Title": "Communication-Efficient and Drift-Robust Federated Learning via Elastic Net", "Keywords": "Federated learning, Data heterogeneity, Optimization", "URL": "https://openreview.net/forum?id=-59_mb1lOf4", "paper_draft_url": "/references/pdf?id=RROBDnQ3k-", "Conferece": "ICLR_2023", "track": "General Machine Learning (ie none of the above)", "acceptance": "Reject", "review_scores": "[['2', '3', '4'], ['2', '3', '4'], ['2', '3', '3'], ['4', '3', '4']]", "input": { "source": "CRF", "title": "Communication-Efficient and Drift-Robust Federated Learning via Elastic Net", "authors": [], "emails": [], "sections": [ { "heading": "1 INTRODUCTION", "text": "Federated learning (FL) is a collaborative method that allows many clients to contribute individually to training a global model by sharing local models rather than private data. Each client has a local training dataset, which it does not want to share with the global server. Instead, each client computes an update to the current global model maintained by the server, and only this update is communicated. FL significantly reduces the risks of privacy and security (McMahan et al., 2017; Li et al., 2020a), but it faces crucial challenges that make the federated settings distinct from other classical problems (Li et al., 2020a) such as expensive communication costs and client drift problems due to heterogeneous local training datasets and heterogeneous systems (McMahan et al., 2017; Li et al., 2020a; Konec\u030cny\u0300 et al., 2016a;b).\nCommunicating models is a critical bottleneck in FL, in particular when the federated network comprises a massive number of devices (Bonawitz et al., 2019; Li et al., 2020a; Konec\u030cny\u0300 et al., 2016b). In such a scenario, communication in the federated network may take a longer time than that of local computation by many orders of magnitude because of limited communication bandwidth and device power (Li et al., 2020a). To reduce such communication cost, several strategies have been proposed (Konec\u030cny\u0300 et al., 2016b; Li et al., 2020a). In particular, Konec\u030cny\u0300 et al. (2016b) proposed several methods to form structured local updates and approximate them, e.g., subsampling and quantization. Reisizadeh et al. (2020); Xu et al. (2020) also proposed an efficient quantization method for FL to reduce the communication cost.\nAlso, in general, as the datasets that local clients own are heterogeneous, trained models on each local data are inconsistent with the global model that minimizes the global empirical loss (Karimireddy et al., 2020; Malinovskiy et al., 2020; Acar et al., 2021). This issue is referred to as the client drift problem. In order to resolve the client drift problem, FedProx (Li et al., 2020b) added a proximal term to a local objective function and regulated local model updates. Karimireddy et al. (2020) proposed SCAFFOLD algorithm that transfers both model updates and control variates to resolve the client drift problem. FedDyn (Acar et al., 2021) dynamically regularizes local objective functions to resolve the client drift problem.\nUnlike most prior works focusing on either the communication cost problem or the client drift problem, we propose a technique that effectively resolves the communication cost and client drift problems simultaneously.\nContributions In this paper, we propose FedElasticNet, a new framework for communicationefficient and drift-robust FL. 
It repurposes the ℓ1-norm and ℓ2-norm regularizers of the elastic net (Zou & Hastie, 2005): it improves (i) communication efficiency through the ℓ1-norm regularizer and (ii) robustness to heterogeneous local datasets through the ℓ2-norm regularizer.\nFedElasticNet is a general framework; hence, it can be integrated with prior FL algorithms such as FedAvg (McMahan et al., 2017), FedProx (Li et al., 2020b), SCAFFOLD (Karimireddy et al., 2020), and FedDyn (Acar et al., 2021) so as to resolve the client drift problem as well as the communication cost problem. Further, it incurs no additional costs in training. Empirically, we show that FedElasticNet enhances communication efficiency while maintaining classification accuracy even for heterogeneous datasets, i.e., the client drift problem is resolved. Theoretically, we characterize the impact of the regularizer terms. Table 1 compares the prior methods and the proposed FedElasticNet integrated with FedDyn (Algorithm 3)." }, { "heading": "2 RELATED WORK", "text": "Numerous approaches have been proposed to address the communication cost and client drift problems. Here, we describe the closely related works that we consider baseline methods; comprehensive reviews can be found in Kairouz et al. (2021); Li et al. (2020a).\nFedAvg (McMahan et al., 2017) is one of the most commonly used methods. FedAvg tackles the communication bottleneck by performing multiple local updates before communicating with the server. It works well for homogeneous datasets across clients (McMahan et al., 2017; Karimireddy et al., 2020), but FedAvg may diverge when local datasets are heterogeneous (Zhao et al., 2018; Li et al., 2020a).\nFedProx (Li et al., 2020b) addressed the data heterogeneity problem. FedProx introduces an ℓ2-norm regularizer to the local objective functions to penalize local updates that are far from the server's model and thus to limit the impact of variable local updates (Li et al., 2020b). Although FedProx is more robust to heterogeneous datasets than FedAvg, the regularizer does not align the global and local stationary points (Acar et al., 2021). Also, FedProx does not improve communication efficiency over FedAvg.\nSCAFFOLD (Karimireddy et al., 2020) defined client drift as the inconsistency between the model obtained by aggregating local models and the optimal global model, caused by heterogeneous local datasets. SCAFFOLD communicates both the trained local models and the clients' control variates to resolve the client drift problem. Hence, SCAFFOLD requires twice the communication cost of other FL algorithms.\nFedDyn (Acar et al., 2021) dynamically updates its local regularizers at each round to ensure that the local clients' optima are asymptotically consistent with stationary points of the global empirical loss. Unlike SCAFFOLD, FedDyn resolves the client drift problem without incurring additional communication costs. However, FedDyn's communication cost is not improved over FedAvg and FedProx.\nZou & Hastie (2005) proposed the elastic net to encourage the grouping effect, that is, to encourage strongly correlated covariates to enter or leave the model description together (Hu et al., 2018). The elastic net was originally proposed to overcome the limitations of Lasso (Tibshirani, 1996), which imposes an ℓ1-norm penalty on the model parameters. For a linear least-squares problem, for instance,
For instance of a linear least square problem,\nthe objective of Lasso is to solve\nmin \u03b8 \u2225y \u2212X\u03b8\u222522 + \u03bb1 \u2225\u03b8\u22251 , (1)\nwhere y is the outcome and X is the covariate matrix. Lasso performs both variable selection and regularization to enhance the prediction accuracy and interpretability of the resulting model. However, it has some limitations, especially for high-dimensional models. If a group of variables is highly correlated, then Lasso tends to select only one variable from the group and does not care which one is selected (Zou & Hastie, 2005). The elastic net overcomes these limitations by adding an \u21132-norm penalty. The objective of the elastic net is to solve\nmin \u03b8 \u2225y \u2212 X\u03b8\u222522 + \u03bb2 2 \u2225\u03b8\u222522 + \u03bb1 \u2225\u03b8\u22251 . (2)\nThe elastic net simultaneously enables automatic variable selection and continuous shrinkage by the \u21131-norm regularizer and enables to select groups of correlated variables by its \u21132-norm regularizer (Zou & Hastie, 2005). We will leverage the elastic net approach to resolve the critical problems of FL: expensive communication cost and client drift problems." }, { "heading": "3 PROPOSED METHOD: FEDELASTICNET", "text": "We assume that m local clients communicate with the global server. For the kth client (where k \u2208 [m]) participating in each training round, we assume that a training data feature x \u2208 X and its corresponding label y \u2208 Y are drawn IID from a device-indexed joint distribution, i.e., (x, y) \u223c Pk (Acar et al., 2021). The objective is to find\nargmin \u03b8\u2208Rd R (\u03b8) := 1 m \u2211 k\u2208[m] Lk (\u03b8) , (3) where Lk (\u03b8) = Ex\u223cPk [lk (\u03b8; (x, y))] is the local risk of the kth clients over possibly heterogeneous data distributions Pk. Also, \u03b8 represents the model parameters and lk(\u00b7) is a loss function such as cross entropy (Acar et al., 2021).\nFedElasticNet The proposed method (FedElasticNet) leverages the elastic net approach to resolve the communication cost and client drift problems. We introduce the \u21131-norm and \u21132-norm penalties on the local updates: In each round t \u2208 [T ], the kth local client attempts to find \u03b8tk by solving the following optimization problem:\n\u03b8tk = argmin \u03b8 Lk (\u03b8) + \u03bb2 2 \u2225\u2225\u03b8 \u2212 \u03b8t\u22121\u2225\u22252 2 + \u03bb1 \u2225\u2225\u03b8 \u2212 \u03b8t\u22121\u2225\u2225 1 , (4)\nwhere \u03b8t\u22121 denotes the global model received from the server. Then, it transmits the difference \u2206tk = \u03b8 t k \u2212 \u03b8t\u22121 to the server.\nInspired by the elastic net, we introduce two types of regularizers for local objective functions; however, each of them works in a different way so as to resolve each of the two FL problems: the communication cost and client drift problems. First, the \u21132-norm regularizer resolves the client drift problem by limiting the impact of variable local updates as in FedProx (Li et al., 2020b). FedDyn (Acar et al., 2021) also adopts the \u21132-norm regularizer to control the client drift.\nSecond, the \u21131-norm regularizer attempts to sparsify the local updates \u2206tk = \u03b8 t k \u2212 \u03b8t\u22121. We consider two ways of measuring communication cost: One is the number of nonzero elements in \u2206tk (Yoon et al., 2021; Jeong et al., 2021), which the \u21131-norm sparsifies. 
The other is the (Shannon) entropy, since it is the theoretical lower bound on data compression (Cover & Thomas, 2006). We demonstrate in Section 4 that the ℓ1-norm penalty on the local updates effectively reduces both the number of nonzero elements and the entropy. To further boost the sparseness of Δ^t_k = θ^t_k − θ^{t−1}, we set Δ^t_k(i) = 0 if |Δ^t_k(i)| ≤ ε before transmission, where Δ^t_k(i) denotes the ith element of Δ^t_k. The parameter ε is chosen in a range that does not affect classification accuracy.\nOur FedElasticNet approach can be integrated into existing FL algorithms such as FedAvg (McMahan et al., 2017), SCAFFOLD (Karimireddy et al., 2020), and FedDyn (Acar et al., 2021) without additional costs, as described in the following subsections.\nAlgorithm 1 FedElasticNet for FedAvg & FedProx\nInput: T, θ^0, λ1 > 0, λ2 > 0\n1: for each round t = 1, 2, ..., T do\n2: Sample devices P_t ⊆ [m] and transmit θ^{t−1} to each selected local client\n3: for each local client k ∈ P_t do in parallel\n4: Set θ^t_k = argmin_θ Lk(θ) + (λ2/2)‖θ − θ^{t−1}‖_2² + λ1‖θ − θ^{t−1}‖_1\n5: Transmit Δ^t_k = θ^t_k − θ^{t−1} to the global server\n6: end for\n7: Set θ^t = θ^{t−1} + ∑_{k∈P_t} (n_k/n) Δ^t_k\n8: end for\nAlgorithm 2 FedElasticNet for SCAFFOLD\nInput: T, θ^0, λ1 > 0, λ2 > 0, global step size η_g, and local step size η_l\n1: for each round t = 1, 2, ..., T do\n2: Sample devices P_t ⊆ [m] and transmit θ^{t−1} and c^{t−1} to each selected device\n3: for each device k ∈ P_t do in parallel\n4: Initialize the local model θ^t_k = θ^{t−1}\n5: for b = 1, ..., B do\n6: Compute the mini-batch gradient ∇Lk(θ^t_k)\n7: θ^t_k ← θ^t_k − η_l (∇Lk(θ^t_k) − c^{t−1}_k + c^{t−1} + λ2(θ^t_k − θ^{t−1}) + λ1 sign(θ^t_k − θ^{t−1}))\n8: end for\n9: Set c^t_k = c^{t−1}_k − c^{t−1} + (1/(B η_l))(θ^{t−1} − θ^t_k)\n10: Transmit Δ^t_k = θ^t_k − θ^{t−1} and Δc_k = c^t_k − c^{t−1}_k to the global server\n11: end for\n12: Set θ^t = θ^{t−1} + (η_g/|P_t|) ∑_{k∈P_t} Δ^t_k\n13: Set c^t = c^{t−1} + (1/m) ∑_{k∈P_t} Δc_k\n14: end for" }, { "heading": "3.1 FEDELASTICNET FOR FEDAVG & FEDPROX (FEDAVG & FEDPROX + ELASTIC NET)", "text": "Our FedElasticNet can be applied to FedAvg (McMahan et al., 2017) by adding the two regularizers on the local updates, which resolves the client drift problem and the communication cost problem. As shown in Algorithm 1, each local client minimizes the local objective function (4). In Step 7, n and n_k denote the total number of data points of all clients and the number of data points of the kth client, respectively.\nIt is worth mentioning that FedProx uses the ℓ2-norm regularizer to address the data and system heterogeneities (Li et al., 2020b). By adding the ℓ1-norm regularizer, we can sparsify the local updates of FedProx and thus effectively reduce the communication cost. Notice that Algorithm 1 can be viewed as the integration of FedProx and FedElasticNet.
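To make Step 4 and the ε-thresholding concrete, a minimal PyTorch-style sketch of one client's round in Algorithm 1 is given below. It is an illustration only: the loader, loss function, and hyperparameter values are placeholders, not the exact implementation used in our experiments, and the ℓ1 term is handled with a plain subgradient via autograd.
import copy
import torch

def local_update(global_model, loader, loss_fn, lam1=1e-4, lam2=1e-2, lr=0.1, epochs=5, eps=1e-5):
    # Start from the server model theta^{t-1} and minimize
    # L_k(theta) + (lam2/2)*||theta - theta^{t-1}||_2^2 + lam1*||theta - theta^{t-1}||_1.
    model = copy.deepcopy(global_model)
    anchors = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            for p, a in zip(model.parameters(), anchors):
                diff = p - a
                # elastic-net penalties act on the *update*, not on the weights themselves
                loss = loss + 0.5 * lam2 * diff.pow(2).sum() + lam1 * diff.abs().sum()
            loss.backward()
            opt.step()
    # Delta^t_k = theta^t_k - theta^{t-1}; zero out tiny entries before transmission
    deltas = []
    for p, a in zip(model.parameters(), anchors):
        d = p.detach() - a
        d[d.abs() <= eps] = 0.0
        deltas.append(d)
    return deltas
A proximal (soft-thresholding) step on θ − θ^{t−1} would be an equally reasonable way to handle the ℓ1 term. The server then forms θ^t = θ^{t−1} + ∑_{k∈P_t} (n_k/n) Δ^t_k from the received sparsified updates, as in Step 7 of Algorithm 1."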
}, { "heading": "3.2 FEDELASTICNET FOR SCAFFOLD (SCAFFOLD + ELASTIC NET)", "text": "In SCAFFOLD, each client updates its local model with the mini-batch gradient ∇Lk(θ^t_k) and maintains a control variate c^t_k (Karimireddy et al., 2020):\nθ^t_k ← θ^t_k − η_l (∇Lk(θ^t_k) − c^{t−1}_k + c^{t−1}), (5)\nc^t_k ← c^{t−1}_k − c^{t−1} + (1/(B η_l))(θ^{t−1} − θ^t_k), (6)\nwhere η_l is the local step size and B is the number of mini-batches at each round. The control variate makes the local parameters θ^t_k move in the direction of the global optimum rather than each local optimum, which effectively resolves the client drift problem. However, SCAFFOLD incurs twice as much communication cost since it must communicate both the local update Δ^t_k = θ^t_k − θ^{t−1} and the control variate update Δc_k = c^t_k − c^{t−1}_k, which are of the same dimension.\nIn order to reduce the communication cost of SCAFFOLD, we apply our FedElasticNet framework. In the proposed algorithm (see Algorithm 2), each local client performs the following mini-batch update instead of (5):\nθ^t_k ← θ^t_k − η_l (∇Lk(θ^t_k) − c^{t−1}_k + c^{t−1} + λ2(θ^t_k − θ^{t−1}) + λ1 sign(θ^t_k − θ^{t−1})), (7)\nwhere λ1 sign(θ^t_k − θ^{t−1}) corresponds to the (sub)gradient of the ℓ1-norm regularizer λ1‖θ^t_k − θ^{t−1}‖_1. This ℓ1-norm regularizer sparsifies the local update Δ^t_k = θ^t_k − θ^{t−1} and hence reduces the communication cost. Since the control variate already addresses the client drift problem, we can remove the ℓ2-norm regularizer or set λ2 to a small value." }, { "heading": "3.3 FEDELASTICNET FOR FEDDYN (FEDDYN + ELASTIC NET)", "text": "In FedDyn, each local client optimizes the following local objective, which is the sum of its empirical loss and a penalized risk function:\nθ^t_k = argmin_θ Lk(θ) − ⟨∇Lk(θ^{t−1}_k), θ⟩ + (λ2/2)‖θ − θ^{t−1}‖_2², (8)\nwhere the penalized risk is dynamically updated so as to satisfy the following first-order condition for local optima:\n∇Lk(θ^t_k) − ∇Lk(θ^{t−1}_k) + λ2(θ^t_k − θ^{t−1}) = 0. (9)\nThis first-order condition shows that the stationary points of the local objective function are consistent with the server model (Acar et al., 2021). That is, the client drift is resolved. However, FedDyn is no different from FedAvg and FedProx in terms of communication cost.\nBy integrating FedElasticNet and FedDyn, we can effectively reduce the communication cost of FedDyn as well. In the proposed method (i.e., FedElasticNet for FedDyn), each local client optimizes the following local empirical objective:\nθ^t_k = argmin_θ Lk(θ) − ⟨∇Lk(θ^{t−1}_k), θ⟩ + (λ2/2)‖θ − θ^{t−1}‖_2² + λ1‖θ − θ^{t−1}‖_1, (10)\nwhich is the sum of (8) and the additional ℓ1-norm penalty on the local updates. The corresponding first-order condition is\n∇Lk(θ^t_k) − ∇Lk(θ^{t−1}_k) + λ2(θ^t_k − θ^{t−1}) + λ1 sign(θ^t_k − θ^{t−1}) = 0. (11)\nNotice that the stationary points of the local objective function are consistent with the server model as in (9).
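For concreteness, a minimal sketch of the modified SCAFFOLD local step (7) from Section 3.2, together with the control-variate update (6), is given below using flattened parameter vectors; the function and variable names and default values are placeholders for illustration, not the exact implementation used in our experiments.
import torch

def scaffold_elastic_step(theta_k, theta_global, minibatch_grad, c_local, c_global,
                          eta_l=0.1, lam1=1e-4, lam2=0.0):
    # One local step of (7): subtract the corrected gradient plus the elastic-net terms.
    drift_correction = c_global - c_local                 # -c_k^{t-1} + c^{t-1}
    l2_pull = lam2 * (theta_k - theta_global)             # optional l2 pull toward the server model
    l1_sign = lam1 * torch.sign(theta_k - theta_global)   # subgradient of the l1 penalty
    return theta_k - eta_l * (minibatch_grad + drift_correction + l2_pull + l1_sign)

def update_control_variate(c_local, c_global, theta_global, theta_k, eta_l, num_batches):
    # (6): c_k^t = c_k^{t-1} - c^{t-1} + (1 / (B * eta_l)) * (theta^{t-1} - theta_k^t)
    return c_local - c_global + (theta_global - theta_k) / (num_batches * eta_l)
Setting lam2 = 0 recovers the pure ℓ1 variant, since the control variate already handles the drift.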
If θ^t_k ≠ θ^{t−1} (i.e., each element of sign(θ^t_k − θ^{t−1}) is ±1), then the first-order condition becomes\n∇Lk(θ^t_k) − ∇Lk(θ^{t−1}_k) + λ2(θ^t_k − θ^{t−1}) = ±λ1, (12)\nwhere ±λ1 denotes a vector whose entries are ±λ1. Our empirical results show that the tuned hyperparameter is λ1 = 10^{−4} or 10^{−6}, so the impact of ±λ1 in (12) is negligible. Hence, the proposed FedElasticNet for FedDyn resolves the client drift problem. Further, the local update Δ^t_k = θ^t_k − θ^{t−1} is sparse due to the ℓ1-norm regularizer, which effectively reduces the communication cost at the same time. The detailed algorithm is described in Algorithm 3.\nAlgorithm 3 FedElasticNet for FedDyn\nInput: T, θ^0, λ1 > 0, λ2 > 0, h^0 = 0, ∇Lk(θ^0_k) = 0\n1: for each round t = 1, 2, ..., T do\n2: Sample devices P_t ⊆ [m] and transmit θ^{t−1} to each selected device\n3: for each device k ∈ P_t do in parallel\n4: Set θ^t_k = argmin_θ Lk(θ) − ⟨∇Lk(θ^{t−1}_k), θ⟩ + (λ2/2)‖θ − θ^{t−1}‖_2² + λ1‖θ − θ^{t−1}‖_1\n5: Set ∇Lk(θ^t_k) = ∇Lk(θ^{t−1}_k) − λ2(θ^t_k − θ^{t−1}) − λ1 sign(θ^t_k − θ^{t−1})\n6: Transmit Δ^t_k = θ^t_k − θ^{t−1} to the global server\n7: end for\n8: for each device k ∉ P_t do in parallel\n9: Set θ^t_k = θ^{t−1}_k and ∇Lk(θ^t_k) = ∇Lk(θ^{t−1}_k)\n10: end for\n11: Set h^t = h^{t−1} − (λ2/m) ∑_{k∈P_t} (θ^t_k − θ^{t−1}) − (λ1/m) ∑_{k∈P_t} sign(θ^t_k − θ^{t−1})\n12: Set θ^t = (1/|P_t|) ∑_{k∈P_t} θ^t_k − (1/λ2) h^t\n13: end for\nConvergence Analysis We provide a convergence analysis of FedElasticNet for FedDyn (Algorithm 3).\nTheorem 3.1. Assume that the clients are uniformly randomly selected at each round and the local loss functions are convex and β-smooth. Then Algorithm 3 satisfies the following inequality:\nE[R((1/T) ∑_{t=0}^{T−1} γ^t) − R(θ*)] ≤ (1/T)(1/κ0)(E‖γ^0 − θ*‖_2² + κ C0) + (κ′/κ0) λ1² d − (1/T)(2λ1/λ2) ∑_{t=1}^{T} ⟨γ^{t−1} − θ*, (1/m) ∑_{k∈[m]} E[sign(θ̃^t_k − θ^{t−1})]⟩, (13)\nwhere θ* = argmin_θ R(θ), P = |P_t|, γ^t = (1/P) ∑_{k∈P_t} θ^t_k, d = dim(θ), κ = (10m/P)(1/λ2)(λ2 + β)/(λ2² − 25β²), κ0 = (2/λ2)(λ2² − 25λ2β − 50β²)/(λ2² − 25β²), κ′ = (5/λ2)(λ2 + β)/(λ2² − 25β²) = κ · P/(2m), C0 = (1/m) ∑_{k∈[m]} E‖∇Lk(θ^0_k) − ∇Lk(θ*)‖, and\nθ̃^t_k = argmin_θ Lk(θ) − ⟨∇Lk(θ^{t−1}_k), θ⟩ + (λ2/2)‖θ − θ^{t−1}‖_2² + λ1‖θ − θ^{t−1}‖_1 for all k ∈ [m].\nTheorem 3.1 provides a convergence rate for FedElasticNet for FedDyn. As T → ∞, the first term of (13) converges to 0 at a rate of O(1/T). The second and third terms of (13) are additional penalty terms caused by the ℓ1-norm regularizer.
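To gauge the size of the ℓ1-induced constant, one can plug representative values into the expressions for κ0 and κ′ given in Theorem 3.1; the numbers in the sketch below are illustrative only and are not taken from our experiments.
def l1_penalty_constant(lam1, lam2, beta, d):
    # Second term of (13): (kappa' / kappa0) * lam1^2 * d, with kappa0 and kappa' as in Theorem 3.1.
    kappa0 = (2.0 / lam2) * (lam2**2 - 25 * lam2 * beta - 50 * beta**2) / (lam2**2 - 25 * beta**2)
    kappa_prime = (5.0 / lam2) * (lam2 + beta) / (lam2**2 - 25 * beta**2)
    return (kappa_prime / kappa0) * lam1**2 * d

# e.g., beta = 1, lam2 = 30*beta (so that lam2 > 27*beta), and d = 1e6 parameters:
print(l1_penalty_constant(lam1=1e-4, lam2=30.0, beta=1.0, d=1e6))  # about 7.8e-3
print(l1_penalty_constant(lam1=1e-6, lam2=30.0, beta=1.0, d=1e6))  # about 7.8e-7
With λ1 = 10^{−4} or 10^{−6} and λ2 well above 27β, this constant is small.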
The second term is a negligible constant in the range of hyperparameters of our interest. Considering the last term, notice that the summand at each t includes the expected average of sign vectors where each element is \u00b11. If a coordinate of the sign vectors across clients is viewed as an IID realization of Bern( 12 ), it can be thought of as a small value with high probability by the concentration property (see Appendix B.3). In addition, \u03b3t\u22121\u2212 \u03b8\u2217 characterizes how much the average of local models deviates from the globally optimal model, which tends to be small as training proceeds. Therefore, the effect of both additional terms is negligible." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the proposed FedElasticNet on benchmark datasets for various FL scenarios. In particular, FedElasticNet is integrated with prior methods including FedProx (Li et al., 2020b), SCAFFOLD (Karimireddy et al., 2020), and FedDyn (Acar et al., 2021). The experimental results show that FedElasticNet effectively enhances communication efficiency while maintaining classification accuracy and resolving the client drift problem. We observe that the integration of FedElasticNet and FedDyn (Algorithm 3) achieves the best performance.\nExperimental Setup We use the same benchmark datasets as prior works. The evaluated datasets include MNIST (LeCun et al., 1998), a subset of EMNIST (Cohen et al., 2017, EMNIST-L), CIFAR10, CIFAR-100 (Krizhevsky & Hinton, 2009), and Shakespeare (Shakespeare, 1914). The IID split is generated by randomly assigning datapoint to the local clients. The Dirichlet distribution is used on the label ratios to ensure uneven label distributions among local clients for non-IID splits as in Zhao et al. (2018); Acar et al. (2021). For the uneven label distributions among 100 experimental devices, the experiments are performed by using the Dirichlet parameters of 0.3 and 0.6, and the number of data points is obtained by the lognormal distribution as in Acar et al. (2021). The data imbalance is controlled by varying the variance of the lognormal distribution (Acar et al., 2021).\nWe use the same neural network models of FedDyn experiments (Acar et al., 2021). For MNIST and EMNIST-L, fully connected neural network architectures with 2 hidden layers are used. The numbers of neurons in the layers are 200 and 100, respectively (Acar et al., 2021). Remark that the model used for MNIST dataset is the same as in Acar et al. (2021); McMahan et al. (2017). For CIFAR-10 and CIFAR-100 datasets, we use a CNN model consisting of 2 convolutional layers with 64 5\u00d7 5 filters followed by 2 fully connected layers with 394 and 192 neurons and a softmax layer. For the next character prediction task for Shakespeare, we use a stacked LSTM as in Acar et al. (2021).\nFor MNIST, EMNIST-L, CIFAR10, and CIFAR100 datasets, we evaluate three cases: IID, non-IID with Dirichlet (.6), and non-IID with Dirichlet (.3). Shakespeare datasets are evaluated for IID and non-IID cases as in Acar et al. (2021). We use the batch size of 10 for the MNIST dataset, 50 for CIFAR-10, CIFAR-100, and EMNIST-L datasets, and 20 for the Shakespeare dataset. We optimize the hyperparameters depending on the evaluated datasets: learning rates, \u03bb2, and \u03bb1.\nEvaluation of Methods We compare the baseline methods (FedProx, SCAFFOLD, and FedDyn) and the proposed FedElasticNet integrations (Algorithms 1, 2, and 3), respectively. 
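As a reference for the data partition described above, one common way to generate the Dirichlet-based non-IID split is sketched below; the function name, seed handling, and per-class splitting details are illustrative assumptions rather than the exact procedure of Acar et al. (2021).
import numpy as np

def dirichlet_split(labels, num_clients=100, alpha=0.3, seed=0):
    # Partition sample indices across clients with per-class ratios drawn from Dir(alpha);
    # smaller alpha gives more heterogeneous (non-IID) label distributions per client.
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
Smaller Dirichlet parameters (e.g., 0.3) yield more skewed per-client label distributions than larger ones (e.g., 0.6).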
We evaluate the communication cost and classification accuracy for non-IID settings of the prior methods and the proposed methods. The robustness of the client drift problem is measured by the classification accuracy of non-IID settings.\nWe report the communication costs in two ways: (i) the number of nonzero elements in transmitted values as in (Yoon et al., 2021; Jeong et al., 2021) and (ii) the Shannon entropy of transmitted bits. Note that the Shannon entropy is the theoretical limit of data compression (Cover & Thomas, 2006), which can be achieved by practical algorithms; for instance, Han et al. (2016) used Huffman coding for model compression. We calculate the entropy of discretized values with the bin size of 0.01. Note that the transmitted values are not discretized in FL, and only the discretization is considered to calculate the entropy. The lossy compression schemes (e.g., scalar quantization, vector quantization, etc.) have not been considered since they include several implementational issues which are beyond our research scope.\nTable 2 reports the number of non-zero elements of the baseline methods with/without FedElasticNet. Basically, the communication costs per round of FedProx and FedDyn are the same; SCAFFOLD suffers from the doubled communication cost because of the control variates. The proposed FedElasticNet integrations (Algorithms 1, 2, and 3) can effectively sparsify the transmitted local updates, which enhances communication efficiency.\nIn particular, the minimal communication cost is achieved when FedElasticNet is integrated with FedDyn (Algorithm 3). It is because the classification accuracy is not degraded even if the transmitted values are more aggressively sparsified in Algorithm 3. Fig. 2 shows the transmitted local updates \u2206tk of Algorithm 3 are sparser than FedDyn and Algorithm 2. Hence, Algorithm 3 (FedElasticNet for FedDyn) achieves the best communication efficiency.\nTables 3 and 4 report the Shannon entropy of transmitted bits for the baseline methods with/without FedElasticNet. The communication costs of baseline methods are effectively improved by the\nFedElasticNet approach. Algorithms 1, 2, and 3 reduce the entropy compared to the their baseline methods. We note that FedElasticNet integrated with FedDyn (Algorithm 3) achieves the minimum entropy, i.e., the minimum communication cost.\nFor FedDyn, we evaluate the Shannon entropy values for two cases: (i) transmit the updated local models \u03b8tk as in Acar et al. (2021) and (ii) transmit the local updates \u2206 t k = \u03b8 t k\u2212\u03b8t\u22121 as in Algorithm 3. We observe that transmitting the local updates \u2206tk instead of the local models \u03b8 t k can reduce the Shannon entropy significantly. Hence, it is beneficial to transmit the local updates \u2206tk even for FedDyn if it adopts an additional compression scheme. The numbers of nonzero elements for two cases (i.e., \u03b8tk and \u2206 t k) are the same for FedDyn.\nFig. 1 shows that the FedElasticNet maintains the classification accuracy or incurs marginal degradation. We observe a classification gap between FedProx and Algorithm 1 for CIFAR-10 and CIFAR-100. However, the classification accuracies of FedDyn and Algorithm 3 are almost identical in the converged regime.\nIn particular, Algorithm 3 significantly reduces the Shannon entropy, which can be explained by Fig 2. Fig 2 compares the distributions of the transmitted local updates \u2206tk for FedDyn, Algorithm 2, and Algorithm 3. 
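For reproducibility of the two measures reported above, a small sketch of computing the nonzero count and the binned Shannon entropy (bin size 0.01) from a transmitted update is given below; the function and variable names are illustrative.
import numpy as np

def communication_cost(delta, bin_size=0.01):
    # delta: flattened local update Delta^t_k as a 1-D numpy array
    nonzeros = int(np.count_nonzero(delta))
    # discretize values into bins of width bin_size, then compute the Shannon entropy in bits
    bins = np.round(delta / bin_size).astype(np.int64)
    _, counts = np.unique(bins, return_counts=True)
    probs = counts / counts.sum()
    entropy_bits_per_element = float(-(probs * np.log2(probs)).sum())
    return nonzeros, entropy_bits_per_element
Multiplying the per-element entropy by the number of transmitted elements gives an estimate of the minimal number of bits per round, in the spirit of the Huffman-coding remark above.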
Because of the \u21131-norm penalty on the local updates, Algorithm 3 makes sparser local updates than FedDyn. The local updates of FedDyn can be modeled by the Gaussian distribution, and the local updates of FedElasticNet can be modeled by the non-Gaussian distribution (similar to the Laplacian distribution). It is well-known that the Gaussian distribution maximizes the entropy for a given variance in information theory Cover & Thomas (2006). Hence, FedElasticNet can reduce the entropy by transforming the Gaussian distribution into the non-Gaussian one." }, { "heading": "5 CONCLUSION", "text": "We proposed FedElasticNet, a general framework to improve communication efficiency and resolve the client drift problem simultaneously. We introduce two types of penalty terms on the local model updates by repurposing the classical elastic net. The \u21131-norm regularizer sparsifies the local model updates, which reduces the communication cost. The \u21132-norm regularizer limits the impact of variable local updates to resolve the client drift problem. Importantly, our framework can be integrated with prior FL techniques so as to simultaneously resolve the communication cost problem and the client drift problem. By integrating FedElasticNet with FedDyn, we can achieve the best communication efficiency while maintaining classification accuracy for heterogeneous datasets." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 EXPERIMENT DETAILS", "text": "We provide the details of our experiments. We select the datasets for our experiments, including those used in prior work on federated learning (McMahan et al., 2017; Li et al., 2020b; Acar et al., 2021). To fairly compare the non-IID environments, the datasets and the experimental environments are the same as those of Acar et al. (2021).\nHyperparameters. We describe the hyperparameters used in our experiments in Section 4. We perform a grid search to find the best \u03bb1 and \u03f5 used in the proposed algorithms. Each hyperparameter was selected to double the value as the performance improved. We use the same \u03bb2 as in Acar et al. (2021). SCAFFOLD has the same local epoch and batch size as other algorithms, and SCAFFOLD is not included in Table 4 because other hyperparameters are not required. Table 5 shows the hyperparameters used in our experiments." }, { "heading": "A.2 REGULARIZER COEFFICIENTS", "text": "We selected \u03bb1 over {10\u22122, 10\u22124, 10\u22126, 10\u22128} to observe the impact of \u03bb1 on the classification accuracy. We prefer a larger \u03bb1 to enhance communication efficiency unless the \u21131-norm regularizer does not degrade the classification accuracy. Figures 3, 4, and 5 show the classification accuracy depending on \u03bb1 in the CIFAR-10 dataset with 10% participation rate and Dirichlet (.3). The unit of the cumulative number of elements is 107.\nIn Algorithm 1, we selected \u03bb1 = 10\u22126 to avoid a degradation of classification accuracy (see Fig. 3) and maximize the sparsity of local updates. In this way, we selected the coefficient values \u03bb1 (See Fig.4 for Algorithm 2 and 5 and Algorithm 3)." }, { "heading": "A.3 EMPIRICAL RESULTS OF CLASSIFICATION ACCURACY", "text": "" }, { "heading": "B PROOF", "text": "We utilize some techniques in FedDyn (Acar et al., 2021)." }, { "heading": "B.1 DEFINITION", "text": "We introduce a formal definition and properties that we will use. Definition B.0.1. 
A function Lk is \u03b2-smooth if it satisfies\n\u2225\u2207Lk(x)\u2212\u2207Lk(y)\u2225 \u2264 \u03b2\u2225x\u2212 y\u2225 \u2200x, y. (14)\nIf function Lk is convex and \u03b2-smooth, it satisfies\n\u2212 \u27e8\u2207Lk(x), z \u2212 y\u27e9 \u2264 \u2212Lk(z) + Lk(y) + \u03b2\n2 \u2225z \u2212 x\u22252 \u2200x, y, z. (15)\nAs a consequence of the convexity and smoothness, the following property holds (Nesterov, 2018, Theorem 2.1.5):\n1\n2\u03b2m \u2211 k\u2208[m] \u2225\u2207Lk(x)\u2212\u2207Lk(x\u2217)\u22252 \u2264 R(x)\u2212R(x\u2217) \u2200x (16)\nwhereR(x) = 1m \u2211m\nk=1 Lk(x) and \u2207R(x\u2217) = 0. We will also use the relaxed triangle inequality (Karimireddy et al., 2020, Lemma 3):\u2225\u2225\u2225\u2225\u2225\u2225 n\u2211 j=1 vj \u2225\u2225\u2225\u2225\u2225\u2225 2 \u2264 n n\u2211 j=1 \u2225vj\u22252. (17)" }, { "heading": "B.2 PROOF OF THEOREM 3.1", "text": "The theorem that we will prove is as follows. Theorem B.1 (Full statement of Theorem 3.1). Assume that the clients are uniformly randomly selected at each round and the individual loss functions {Lk}mk=1 are convex and \u03b2-smooth. Also assume that \u03bb2 > 27\u03b2. Then Algorithm 3 satisfies the following inequality: Letting R(\u03b8) = 1 m \u2211 k\u2208[m] Lk(\u03b8) and \u03b8\u2217 = argmin\n\u03b8 R(\u03b8),\nE [ R ( 1\nT T\u22121\u2211 t=0 \u03b3t\n) \u2212R(\u03b8\u2217) ] \u2264 1\nT\n1\n\u03ba0 (E\u2225\u03b30 \u2212 \u03b8\u2217\u22252 + \u03baC0) +\n\u03ba\u2032 \u03ba0 \u00b7 \u03bb21d\n\u2212 1 T 2\u03bb1 \u03bb2 T\u2211 t=1\n\u2329 (\u03b3t\u22121 \u2212 \u03b8\u2217), 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a ,\n(18)\nwhere\n\u03b3t = 1\nP \u2211 k\u2208Pt \u03b8tk = \u03b8 t + 1 \u03bb2 ht with P = |Pt|,\n\u03ba = 10m\nP\n1\n\u03bb2\n\u03bb2 + \u03b2\n\u03bb22 \u2212 25\u03b22 ,\n\u03ba0 = 2\n\u03bb2\n\u03bb22 \u2212 25\u03bb2\u03b2 \u2212 50\u03b22\n\u03bb22 \u2212 25\u03b22 ,\n\u03ba\u2032 = 5\n\u03bb2\n\u03bb2 + \u03b2 \u03bb22 \u2212 25\u03b22 = \u03ba \u00b7 P 2m ,\nC0 = 1\nm \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b80k)\u2212\u2207Lk(\u03b8\u2217)\u2225,\nd = dim(\u03b8).\nTo prove the theorem, define variables that will be used throughout the proof.\n\u03b8\u0303tk = argmin \u03b8\nLk (\u03b8)\u2212 \u2329 \u2207Lk(\u03b8t\u22121k ), \u03b8 \u232a +\n\u03bb2 2 \u2225\u2225\u03b8 \u2212 \u03b8t\u22121\u2225\u22252 2 + \u03bb1 \u2225\u2225\u03b8 \u2212 \u03b8t\u22121\u2225\u2225 1 \u2200k \u2208 [m] (19)\nCt = 1\nm \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8tk)\u2212\u2207Lk(\u03b8\u2217)\u22252, (20)\n\u03f5t = 1\nm \u2211 k\u2208[m] E\u2225\u03b8\u0303tk \u2212 \u03b3 t\u22121\u22252. (21)\nNote that \u03b8\u0303tk optimizes the kth loss function by assuming that the kth client (k \u2208 [m]) is selected at round t. It is obvious that \u03b8\u0303tk = \u03b8 t k if k \u2208 Pt. Ct refers to the average of the expected differences between gradients of each individual model and the globally optimal model. Lastly, \u03f5t refers to the deviation of each client model from the average of local models. Remark that Ct and \u03f5t approach zero if all clients\u2019 models converge to the globally optimal model, i.e., \u03b8tk \u2192 \u03b8\u2217. The following lemma expresses ht, how much the averaged active devices\u2019 model deviates from the global model.\nLemma B.2. Algorithm 3 satisfies\nht = 1\nm \u2211 k\u2208[m] \u2207Lk(\u03b8tk) (22)\nProof. 
Starting from the update of ht in Algorithm 3,\nht = ht\u22121 \u2212 \u03bb2 m \u2211 k\u2208[m] (\u03b8tk \u2212 \u03b8t\u22121)\u2212 \u03bb1 m \u2211 k\u2208[m] sign(\u03b8tk \u2212 \u03b8t\u22121)\n= ht\u22121 \u2212 1 m \u2211 k\u2208[m] (\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8 t k)\u2212 \u03bb1sign(\u03b8tk \u2212 \u03b8t\u22121))\u2212 \u03bb1 m \u2211 k\u2208[m] sign(\u03b8tk \u2212 \u03b8t\u22121)\n= ht\u22121 \u2212 1 m \u2211 k\u2208[m] (\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8 t k)),\nwhere the second equality follows from (11). By summing ht recursively, we have\nht = h0 + 1\nm \u2211 k\u2208[m] \u2207Lk(\u03b8tk)\u2212 1 m \u2211 k\u2208[m] \u2207Lk(\u03b80k) = 1 m \u2211 k\u2208[m] \u2207Lk(\u03b8tk).\nThe next lemma provides how much the average of local models changes by using only t round parameters.\nLemma B.3. Algorithm 3 satisfies\nE[\u03b3t \u2212 \u03b3t\u22121] = 1 \u03bb2m \u2211 k\u2208[m] E[\u2212\u2207Lk(\u03b8\u0303tk)]\u2212 \u03bb1 \u03bb2m \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8 t\u22121)].\nProof. Starting from the definition of \u03b3t,\nE [ \u03b3t \u2212 \u03b3t\u22121 ] = E\n[( 1\nP \u2211 k\u2208Pt \u03b8tk\n) \u2212 \u03b8t\u22121 \u2212 1\n\u03bb2 ht\u22121\n]\n= E\n[ 1\nP \u2211 k\u2208Pt (\u03b8tk \u2212 \u03b8t\u22121)\u2212 1 \u03bb2 ht\u22121\n]\n= E\n[ 1\n\u03bb2P \u2211 k\u2208Pt (\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8 t k)\u2212 \u03bb1sign(\u03b8tk \u2212 \u03b8t\u22121))\u2212 1 \u03bb2 ht\u22121\n] (23)\n= E\n[ 1\n\u03bb2P \u2211 k\u2208Pt (\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8\u0303tk)\u2212 \u03bb1sign(\u03b8\u0303tk \u2212 \u03b8 t\u22121))\u2212 1 \u03bb2 ht\u22121\n] (24)\n= E 1 \u03bb2m \u2211 k\u2208[m] (\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8\u0303tk)\u2212 \u03bb1sign(\u03b8\u0303tk \u2212 \u03b8 t\u22121))\u2212 1 \u03bb2 ht\u22121 (25)\n= 1\n\u03bb2m \u2211 k\u2208[m] E[\u2212\u2207Lk(\u03b8\u0303tk)]\u2212 \u03bb1 \u03bb2m \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8 t\u22121)], (26)\nwhere (23) follows from (11), (24) follows since \u03b8\u0303tk = \u03b8 t k if k \u2208 Pt, and (25) follows since clients are randomly chosen. The last equality is due to Lemma B.2.\nNext, note that Algorithm 3 is the same as that of FedDyn except for the \u21131-norm penalty. As this new penalty does not affect derivations of Ct, \u03f5t, and E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 in FedDyn (Acar et al., 2021), we can obtain the following bounds on them. Proofs are omitted for brevity.\nE\u2225ht\u22252 \u2264 Ct (27) Ct \u2264 ( 1\u2212 P\nm\n) Ct\u22121 + 2\u03b22P\nm \u03f5t +\n4\u03b2P\nm E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] (28)\nE\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 \u2264 1 m \u2211 k\u2208[m] E[\u2225\u03b8\u0303tk \u2212 \u03b3 t\u22121\u22252] = \u03f5t (29)\nLemma B.4. Given model parameters at the round (t\u2212 1), Algorithm 3 satisfies\nE\u2225\u03b3t \u2212 \u03b8\u2217\u22252 \u2264E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 \u2212 2\n\u03bb2 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] +\n\u03b2\n\u03bb2 \u03f5t + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 (30)\n\u2212 2\u03bb1 \u03bb2m (\u03b3t\u22121 \u2212 \u03b8\u2217) \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)], (31)\nwhere the expectations are taken assuming parameters at the round (t\u2212 1) are given." 
}, { "heading": "Proof.", "text": "E\u2225\u03b3t \u2212 \u03b8\u2217\u22252 = E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217 + \u03b3t \u2212 \u03b3t\u22121\u22252 = E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + 2E[ \u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, \u03b3t \u2212 \u03b3t\u22121 \u232a ] + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252\n= E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252\n+ 2\n\u03bb2m \u2211 k\u2208[m] E [\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217,\u2212\u2207Lk(\u03b8\u0303tk)\u2212 \u03bb1(sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)) \u232a]\n(32)\n\u2264 E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252\n+ 2\n\u03bb2m \u2211 k\u2208[m] E[Lk(\u03b8\u2217)\u2212 Lk(\u03b3t\u22121) + \u03b2 2 \u2225\u03b8\u0303tk \u2212 \u03b3t\u22121\u22252]\n+ 2\n\u03bb2m \u2211 k\u2208[m] E [\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217,\u2212\u03bb1sign(\u03b8\u0303tk \u2212 \u03b8t\u22121) \u232a]\n(33)\n= E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 \u2212 2\n\u03bb2 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] +\n\u03b2\n\u03bb2 \u03f5t\n\u2212 2\u03bb1 \u03bb2m \u2211 k\u2208[m] E [\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, sign(\u03b8\u0303tk \u2212 \u03b8t\u22121) \u232a]\n(34)\n= E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 \u2212 2\n\u03bb2 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] +\n\u03b2\n\u03bb2 \u03f5t\n\u2212 2\u03bb1 \u03bb2\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a (35)\nwhere (32) follows from Lemma B.3, (33) follows from (15), and (34) follows from the definitions of R(\u00b7) and \u03f5t.\nLemma B.5. Algorithm 3 satisfies\n(1\u2212 5\u03b2 2\n\u03bb22 )\u03f5t \u2264 10\n1\n\u03bb22 Ct\u22121 + 10\u03b2\n1\n\u03bb22 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] + 5\u03bb21 \u03bb22 d\nProof. 
Starting from the definitions of \u03f5t and \u03b3t,\n\u03f5t = 1\nm \u2211 k\u2208[m] E\u2225\u03b8\u0303tk \u2212 \u03b3 t\u22121\u22252\n= 1\nm \u2211 k\u2208[m] E\u2225\u03b8\u0303tk \u2212 \u03b8 t\u22121 \u2212 1 \u03bb2 ht\u22121\u22252\n= 1\n\u03bb22\n1\nm \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8\u0303 t k)\u2212 \u03bb1sign(\u03b8tk \u2212 \u03b8t\u22121)\u2212 ht\u22121\u22252 (36)\n= 1\n\u03bb22\n1\nm \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8\u2217) +\u2207Lk(\u03b8\u2217)\u2212\u2207Lk(\u03b3 t\u22121)\n+\u2207Lk(\u03b3t\u22121)\u2212\u2207Lk(\u03b8\u0303tk)\u2212 \u03bb1sign(\u03b8tk \u2212 \u03b8t\u22121)\u2212 ht\u22121\u22252\n\u2264 5 \u03bb22 1 m \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8\u2217)\u2225 2 + 5 \u03bb22 1 m \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b3t\u22121k )\u2212\u2207Lk(\u03b8\u2217)\u2225 2\n+ 5\n\u03bb22\n1\nm \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8\u0303tk)\u2212\u2207Lk(\u03b3t\u22121)\u22252 + 5 \u03bb22 E\u2225\u03bb1sign(\u03b8tk \u2212 \u03b8t\u22121)\u22252 + 5 \u03bb22 E\u2225ht\u22121\u22252\n(37)\n\u2264 5 \u03bb22 1 m \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8t\u22121k )\u2212\u2207Lk(\u03b8\u2217)\u2225 2 + 5 \u03bb22 1 m \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b3t\u22121k )\u2212\u2207Lk(\u03b8\u2217)\u2225 2\n+ 5\n\u03bb22\n1\nm \u2211 k\u2208[m] E\u2225\u2207Lk(\u03b8\u0303tk)\u2212\u2207Lk(\u03b3t\u22121)\u22252 + 5\u03bb21 \u03bb22 d+ 5 \u03bb22 Ct\u22121 (38)\n\u2264 5 \u03bb22 Ct\u22121 + 5 \u03bb22 2\u03b2 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] +\n5\u03b22\n\u03bb22\n1\nm \u2211 k\u2208[m] E\u2225\u03b8\u0303tk \u2212 \u03b3t\u22121\u22252 + 5\u03bb21 \u03bb22 d+ 5 \u03bb22 Ct\u22121\n(39)\n= 10\n\u03bb22 Ct\u22121 +\n10\u03b2\n\u03bb22 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] +\n5\u03b22\n\u03bb22 \u03f5t + 5\u03bb21 \u03bb22 d,\nwhere (36) follows from (11), (37) follows from the relaxed triangle inequality (17), (38) follows from (27), and (39) follows from the definition of Ct, the smoothness, and (16). The last equality follows from the definition of \u03f5t.\nAfter multiplying (28) by \u03ba(= 10mP 1 \u03bb2 \u03bb2+\u03b2 \u03bb22\u221225\u03b22 ), we obtain the following theorem by summing (B.4) and scaled version of (29).\nTheorem B.6. Given model parameters at the round (t\u2212 1), Algorithm 3 satisfies\n\u03ba0E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] \u2264 (E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + \u03baCt\u22121)\u2212 (E\u2225\u03b3t \u2212 \u03b8\u2217\u22252 + \u03baCt) + \u03ba P\n2m \u03bb21\n\u2212 2\u03bb1 \u03bb2\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a .\nwhere \u03ba = 10mP 1 \u03bb2 \u03bb2+\u03b2 \u03bb22\u221225\u03b22 , \u03ba0 = 2 \u03bb2\n\u03bb22\u221225\u03bb2\u03b2\u221250\u03b2 2\n\u03bb22\u221225\u03b22 . Note that the expectations taken above are\nconditional expectations given model parameters at time (t\u2212 1).\nProof. 
Summing Lemma B.4 and \u03ba-scaled version of (28), we have\nE\u2225\u03b3t \u2212 \u03b8\u2217\u22252 + \u03baCt\n\u2264 E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + \u03baCt\u22121 \u2212 \u03ba P\nm Ct\u22121 + \u03ba\n2\u03b22P\nm \u03f5t + \u03ba\n4\u03b2P\nm E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)]\n\u2212 2 \u03bb2 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] + \u03b2 \u03bb2 \u03f5t + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 \u2212 2\u03bb1 \u03bb2m (\u03b3t\u22121 \u2212 \u03b8\u2217) \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)].\n(40)\nAs E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 \u2264 \u03f5t by (29), we have\n\u03ba 2\u03b22P\nm \u03f5t +\n\u03b2\n\u03bb2 \u03f5t + E\u2225\u03b3t \u2212 \u03b3t\u22121\u22252 \u2264 \u03ba\n2\u03b22P\nm \u03f5t +\n\u03b2\n\u03bb2 \u03f5t + \u03f5t. (41)\nThis can be further bounded as follows.\n(41) = ( 10 m\nP\n1\n\u03bb2\n\u03bb2 + \u03b2 \u03bb22 \u2212 25\u03b22 \u00b7 2\u03b2\n2P\nm +\n\u03b2\n\u03bb2 + 1\n) \u03f5t\n= 1 \u03bb2(\u03bb22 \u2212 25\u03b22) ( 20(\u03bb2 + \u03b2)\u03b2 2 + \u03b2(\u03bb22 \u2212 25\u03b22) + \u03bb2(\u03bb22 \u2212 25\u03b22) ) \u03f5t\n= \u03bb2(\u03bb2 + \u03b2)\n\u03bb22 \u2212 25\u03b22\n( 1\u2212 5\u03b2 2\n\u03bb22\n) \u03f5t\n\u2264 \u03bb2(\u03bb2 + \u03b2) \u03bb22 \u2212 25\u03b22\n( 10\n\u03bb22 Ct\u22121 +\n10\u03b2\n\u03bb22 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] + 5\u03bb21 \u03bb22 d ) = \u03ba P\nm Ct\u22121 + \u03ba\n\u03b2P\nm E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] + \u03ba\nP\n2m \u03bb21d,\nwhere the inequality follows from Lemma B.5. Then, (40) term will be\nE\u2225\u03b3t \u2212 \u03b8\u2217\u22252 + \u03baCt \u2264 E\u2225\u03b3t\u22121 \u2212 \u03b8\u2217\u22252 + \u03baCt\u22121 \u2212 \u03ba0E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] + \u03ba P\n2m \u03bb21d\n\u2212 2\u03bb1 \u03bb2\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a .\nRearranging terms, we prove the claim.\nNow we are ready to prove the main claim by combining all lemmas. Let us take the sum on both sides of Lemma B.6 over t = 1, . . . , T . Then, telescoping gives us\n\u03ba0 T\u2211 t=1 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] \u2264 (E\u2225\u03b30 \u2212 \u03b8\u2217\u22252 + \u03baC0)\u2212 (E\u2225\u03b3T \u2212 \u03b8\u2217\u22252 + \u03baCT ) + T (\u03ba P 2m \u03bb21)\n\u2212 2\u03bb1 \u03bb2 T\u2211 t=1\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a .\nSince \u03ba is positive if \u03bb2 > 27\u03b2, we can eliminate the negative term in the middle. Then,\n\u03ba0 T\u2211 t=1 E[R(\u03b3t\u22121)\u2212R(\u03b8\u2217)] \u2264 E\u2225\u03b30 \u2212 \u03b8\u2217\u22252 + \u03baC0 + T (\u03ba P 2m \u03bb21d)\n\u2212 2\u03bb1 \u03bb2 T\u2211 t=1\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a .\nDividing by T and applying Jensen\u2019s inequality,\nE [ R( 1\nT T\u22121\u2211 t=0 \u03b3t)\u2212R(\u03b8\u2217)\n] \u2264 1\nT\n1\n\u03ba0 (E\u2225\u03b30 \u2212 \u03b8\u2217\u22252 + \u03baC0) +\n1\n\u03ba0 (\u03ba\nP\n2m \u03bb21d)\n\u2212 1 T 2\u03bb1 \u03bb2 T\u2211 t=1\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a ,\n(42)\nwhich completes the proof of Theorem B.1." }, { "heading": "B.3 DISCUSSION ON CONVERGENCE", "text": "In this section, we revisit the convergence stated in Theorem 3.1. 
Recall the bound\nE [ R ( 1\nT T\u22121\u2211 t=0 \u03b3t\n) \u2212R(\u03b8\u2217) ] \u2264 1\nT\n1\n\u03ba0 (E\u2225\u03b30 \u2212 \u03b8\u2217\u22252 + \u03baC0) +\n1\n\u03ba0 (\u03ba\nP\n2m \u03bb21d)\n\u2212 1 T 2\u03bb1 \u03bb2 T\u2211 t=1\n\u2329 \u03b3t\u22121 \u2212 \u03b8\u2217, 1\nm \u2211 k\u2208[m] E[sign(\u03b8\u0303tk \u2212 \u03b8t\u22121)]\n\u232a ,\nAs we discussed in the main body, the second term is a negligible constant in the range of our hyperparameters as \u03bb1 is of order of 10\u22124 or 10\u22126.\nConsider the last term where the summand is the inner product between two terms: 1) \u03b3t\u22121 \u2212 \u03b8\u2217, the deviation of the averaged local models from the globally optimal model and 2) the average of sign vectors across clients. The deviation term characterizes how much the averaged local models are different from the global model; thus, we can assume that as training proceeds it vanishes or at least is bounded by a constant vector. To argue the average of sign vectors, assume a special case where the sign vectors sign(\u03b8\u0303tk\u2212 \u03b8t\u22121) are IID across clients. To further simplify the argument, let us consider only a single coordinate of the sign vectors, say Xk = sign(\u03b8\u0303tk(i)\u2212 \u03b8t\u22121(i)), and suppose Xk = \u00b11 with probability 0.5 each. Then, the concentration inequality (Durrett, 2019) implies that for any \u03b4 > 0,\nP 1 m \u2211 k\u2208[m] sign(\u03b8\u0303tk)\u2212 \u03b8t\u22121 > \u03b4 = P 1 m \u2211 k\u2208[m] Xk > \u03b4 \u2264 e\u2212m\u03b422 holds, which vanishes exponentially fast with the number of clients m. Since m is large in many FL scenarios, the average of sign vectors is negligible with high probability, which in turn implies the last term is also negligible." } ], "year": 2022, "abstractText": "Federated learning (FL) is a distributed method to train a global model over a set of local clients while keeping data localized. It reduces the risks of privacy and security but faces important challenges including expensive communication costs and client drift issues. To address these issues, we propose FedElasticNet, a communicationefficient and drift-robust FL framework leveraging the elastic net. It repurposes two types of the elastic net regularizers (i.e., l1 and l2 penalties on the local model updates): (1) the l1-norm regularizer sparsifies the local updates to reduce the communication costs and (2) the l2-norm regularizer resolves the client drift problem by limiting the impact of drifting local updates due to data heterogeneity. FedElasticNet is a general framework for FL; hence, without additional costs, it can be integrated into prior FL techniques, e.g., FedAvg, FedProx, SCAFFOLD, and FedDyn. We show that our framework effectively resolves the communication cost and client drift problems simultaneously.", "creator": "LaTeX with hyperref" }, "output": [ [ "1. The novelty is not significant.", "2. Both l1 and l2 regularizations are mature techniques in statistics, and deep neural network training.", "3. I do not find this method very interesting.", "4. Using Lasso to get sparsity is doable, but is very tricky to tune and leads to biased gradient estimation which would hurt the model performance.", "5. The theory (Theorem 3.1) containing a nonvanishing constant is worse than the baseline FL rate.", "6. Empirical performance might also be worse in some cases as we see from the figures.", "7. The proposed method seems not very valuable both theoretically and empirically.", "8. 
Elastic net introduces two more hyperparameters \u03bb1 and \u03bb2 which make the method harder to tune.", "9. The related work section should be improved.", "10. The paper misses many recent papers on communication-efficient FL.", "11. The presentation is not satisfactory.", "12. In Algorithm 1, why is the local update presented in this way? How do you solve that optimization problem locally? In practice we may use SGD as in Algorithm 2? Why are they inconsistent? Are we using stochastic optimization?", "13. In the theory part, there is no formal statement of the assumptions.", "14. I do not know the setting of this analysis.", "15. Particularly, which part is related to the client drift?", "16. The assumptions should be stated very clearly." ], [ "1. **Table 1 - what do these symbols mean ? Please update the caption for readability.**", "2. **Theorem 3.1 The second term depends on d - will it be negligible when we are dealing with practical large models ? A supporting plot simulating the convergence rates would be helpful.**", "3. **One claim of the paper is that it can effectively deal with both client drift and communication compression. However, non-IID exp is extremely limited, only Shakesepeare and one non-iid setting; To support the drift robust claims - there needs to be more experiments on multiple dataset , models, and across different strength of heterogeneity comparing with multiple method that only deal with client drift / data heterogeneity.**", "4. **In the non-iid setting, SCAFFOLD which is a standard approach to deal with client drift seems to perform equally well or even better than the proposed solution; Can you add more discussion on this and what are the scenarios when SCAFFOLD is better and when the proposed algorithm is better.**", "5. **The theory is based off FedDyn. Now, FedDyn and SCAFFOLD already deal with the client drift / heterogeneity and thus it is hard to flesh out the roles of the regularization penalty introduces in this paper.**", "6. **Analysis on FedAvg + \u21131 + \u21132 i.e. Algorithm 1 needs to be done - to clearly show if there is any advantage from these penalty terms.**", "7. **For fair comparison, you need to compare with different drift robust algo + compression + EF with the proposed solution.**", "8. **Empirically, for different compression rate ( using Top-k, Sign, Quantization etc compressor + Error Feedback compression operation C ) a clean experiment is to compare : FedAvg + C + EF vs FedAvg + \u21131 + \u21132**", "9. **Also in theory, it is beneficial to discuss the convergence rate obtained (additional terms) with that in case of EF + Compression (Quantize, top k , q sparse etc several available approaches )**" ], [ "1. \"FedProx and FedDyn are also FL schemes that utilize regularization (\u21132-like), thus this paper adds the \u21131 term with a sign adjustment.\"", "2. \"The last sentence in page 6: 'We optimize the hyperparameters depending on the evaluated dataset: learning rates, \u03bb2, \u03bb1' is dubious as it relates to novelty since it can come down to the task of tuning these parameters to achieve the desired result, which is acceptable for application.\"", "3. \"Table 2 shows that the number of non-zero elements are reduced across the board, but it does not show 'when' the non-zero elements become zero. This can be shown by a plot of non-zero elements vs communication round.\"" ], [ "1. Theory makes very strong assumptions.", "2. 
Theory does not prove improvement over baseline method, nor does it help much with practical questions (selecting tuning parameters etc).", "3. Experiments show only marginal improvements over the baseline in terms of both compression and quality." ] ], "review_num": 4, "item_num": [ 16, 9, 3, 3 ] }